Sep 26 2019

Rich text editors are an integral part of content creation and content management workflows, but they can often present challenges for developers when it comes to robustness, extensibility, flexibility, and accessibility. What are some of the considerations you should keep in mind when evaluating rich text editors, especially for mission-critical systems like the application Tag1 is building for a top Fortune 50 company?

In this Tag1 Team Talk, we explore the new generation of rich text editors, which are based on a well-defined data structure rather than HTML, but can still export to Markdown or HTML. This allows us to tackle new requirements organizations have, including video embedding, cross-device support, and keyboard-navigable editors. After diving into some of the open-source solutions available in the market, such as Draft.js, CKEditor 5, Quill, Slate, and TipTap, join moderator Preston So (Contributing Editor) and guests Nik Graf (Senior Software Engineer), Kevin Jahns (Real-time Collaboration Systems Lead, Yjs creator), Fabian Franz (Senior Technical Architect and Performance Lead), and Michael Meyers (Managing Director) for an in-depth conversation about why ProseMirror is the best tool for our client’s project requirements.

Be sure to check out our related #TagTeamTalk, A Deep Dive Into Real Time Collaborative Editing solutions (e.g., Yjs, Collab, CKSource, etc.)


------------------------------------
Further reading
------------------------------------

ProseMirror Editor
------------------
https://prosemirror.net/
CZI ProseMirror: https://github.com/chanzuckerberg/czi-prosemirror/
Prosemirror Tables Demo: http://cdn.summitlearning.org/assets/czi_prosemirror_0_0_1_b_index.html
ProseMirror Atlaskit Yjs Demo: https://yjs-demos.now.sh/prosemirror-atlaskit/

Draft.js
------------------
https://draftjs.org/
https://github.com/nikgraf/awesome-draft-js#live-demos

CKEditor 5
------------------
https://ckeditor.com/ckeditor-5/
CK5 Demo: https://ckeditor.com/ckeditor-5/demo/

Quill.js
------------------
https://quilljs.com/
Quill.js demo: https://quilljs.com/standalone/full/
Quill delta format guide: https://quilljs.com/guides/designing-the-delta-format/

Slate.js
------------------
https://github.com/ianstormtaylor/slate/blob/master/Readme.md
Slate Demo: https://www.slatejs.org/#/rich-text

TipTap
------------------
https://tiptap.scrumpy.io/

Fidus Writer
------------------
https://github.com/fiduswriter/fiduswriter

CodeMirror
------------------
One of the most popular code editors for the web.
https://codemirror.net/

Text Transcript

Preston So: - Hello, and welcome to the second ever episode of the Tag Team Talks. Today we're gonna be talking about rich text editors and some of the solutions that are out there in this very exciting and growing space. First thing I want to do, though, is get a little bit of a look at our guests today. My name is Preston So. I am the moderator and contributing editor to Tag1 Consulting. And I'm joined today by several amazing folks from all around the world here to talk about rich text editing.

Michael Meyers: - Awesome. My name is Michael Meyers. I'm the managing director at Tag1. I handle business development, sales, partnerships, marketing, strategy, client relations, things of that nature.

Kevin Jahns: - Hi, I'm Kevin Jahns. I'm located in Berlin, and I'm an expert in shared editing and CRDTs. I currently work for Tag1 Consulting on a realtime system.

Nik Graf: - Hey, I'm Nik. I've done a lot of frontend development over the last couple of years, and I was also digging into Draft.js, and actually built a plugin system on top of Draft.js. And now I'm doing a lot of work on the same project as Kevin, the realtime collaboration stuff with ProseMirror.

Fabian Franz: - Hi, my name is Fabian. At Tag1 I'm currently a senior technical architect and performance lead. But on this project I'm especially excited about bridging the gap for the editors. I'm a Drupal enthusiast, a Drupal 7 core maintainer, but also a longtime Drupal 8 contributor, where we're also having this switchover from CKEditor 4 to maybe CKEditor 5, going to the next generation. So it's really exciting to be working on a project where we're exploring all of that.

Preston: - Thanks very much to all of our guests. It's a real pleasure to be here with all of you today. This is a very meaty topic. We're gonna be talking for quite some time about this, I'm sure. But first, I just want to say good morning, good afternoon, good evening, to wherever you are in the world. And if you haven't already checked it out there's actually a previous webinar that we've done related to this topic on collaborative editing. It's about how it relates to the ways in which people work today. And I want to make sure that we refer back to that webinar so please take a look at the link also available on this page. Alrighty, so let's go ahead and get a quick background on Tag1. Why are we interested in rich text editing, Mike?

Michael: - So, Tag1, we handle mission critical systems, emergency management. We've helped organizations like the American Civil Liberties Union go from raising $4 million a year in donations to over $120 million a year after President Trump in the U.S. came into power. So we do a lot of performance and scalability. We do high availability. We work with a lot of Fortune 500 companies like Symantec doing cybersecurity, infrastructure management. For this particular project that we're gonna be talking about today we're working with one of the top 10 Fortune 50 companies. They are rebuilding their intranet. It's a highly available, highly scalable, mission critical system used across 200 countries with over 20,000 active users in well over a dozen languages. Realtime collaboration is key to how the modern workforce operates. I spend a lot of my time in things like Google Docs collaborating with the team on all sorts of things. And while our goal with this intranet is to integrate a lot of different systems and not reinvent the wheel, so for example, you'll get a summary of what's going on in Slack on the intranet but all that information comes from Slack and the idea is just to link you off to Slack. These days, people use a lot of third-party tools for what they do best. The challenge with that is that they are disparate systems. And so if you have Box, and Slack, and Quip, and all these other things, it's hard to know what's where. So this system really organizes all of that with centralized authentication and user management so you can, say, create a space for a particular group and it will spin up all of the necessary artifacts we need, from say Slack to Quip, manage permissions. You can use any of these systems independently but everything is sort of synced, meta searched, and managed across this centralized system. And then a key component of this system itself is collaborative editing. 
And as you can imagine, with a global workforce of 150,000+ employees, they have a lot of people with different use cases and needs. And so, some people, let's say technical people, love Markdown and want to work in one type of editor. People in other groups and departments might prefer WYSIWYG. Some people want to be able to edit HTML directly. And so, the reason that we're looking at editors, on top of the ability to do realtime collaboration and work together on information in realtime, is that we need to accommodate a lot of features, plugins, enhancements, and different users in different spaces. And so we did an assessment of a wide range of editors in the marketplace, did an analysis based on our feature requirements, narrowed it down to a field that we're gonna talk about today, and ultimately selected an editor.

Preston: - I think this landscape is quite broad. There are so many options out there, and it's very difficult to choose which ones are appropriate, especially given that there are so many requirements that people have today. And being able to actually choose based on a variety of different features, which we'll talk about in just a little bit, is a huge prerogative. I mean, there's two areas that you just mentioned, Mike, that are very interesting. And the first is the realtime collaboration which has its own challenges and its own difficulties. Which was the subject, by the way, of our inaugural Tag Team Talk. And of course our second topic today, which is really what a rich text editor is. And combining those two really unleashes a lot of power for these content editors, but it is also very challenging from a technical standpoint. But let's go down to the very, very basics here, and the fundamentals. Sort of in its most basic sense, how would we, as a group, define a rich text editor?

Kevin: - I think it's really, really hard to give a general description of a rich text editor. I think most people think about Google Docs when they hear that. But I would say that a really basic rich text editor is something that supports bold, italic, and headlines. That's it for me. Because often you really only need that feature set. That's basically what you have in Markdown and what you want to have in all the other editors. Sometimes you write a blog post and you basically only need these features. For us developers it's really important to have code blocks too. I think that's a really important feature, but I don't think everyone agrees here. There are links and tables. Actually, a lot of people expect tables, but not all editors support tables. So for me, a rich text editor is something that supports all of this, in contrast to pure text editors that only support working on plain text, maybe only paragraphs, with no rich text formatting.

Preston: - Was there a certain minimum, like a threshold that you wanted to reach in terms of the number of features? I know that you all have done a really strong comparison of all of the features available. Was there a certain point where you said, okay, well, we can put a dividing line here where we can say, all right, everything above here we can definitely look for, but everything below this line perhaps maybe we should strike out of our requirements?

Kevin: - I think a baseline for this project, yeah, we had a baseline, a feature set that we want to implement. And for our use case it was really important that our editor is adaptable. And this is not a requirement for all the projects that are out there. Sometimes you really just want to have a plug-in editor that just works and does the basic stuff right. But for us, we wanted to do some custom stuff, and some editors support that, and some not as well.

Nik: - I could dive in here and give one example that Kevin mentioned, which is tables. I worked a lot with Draft.js in the past, and I know you can do tables, it's possible. But if you want to do more than just a simple text field, and have rich content again inside the table cells, that's really, really hard to do with Draft.js. So people came up with ideas like an editor per field in the table. And then this gets really, really heavy because it all has to run in the web browser. Other editors support this because their internal structure, how the data is managed, is completely different. Basically, depending on what your needs are, that completely rules out certain editors right away.

Fabian: - Yeah, that's also what I've found in my research of editors. Tables are really tricky, like an image in a table, where every normal person is like, hey, that's so easy, it should just work. I've also seen, for two other editors, Slate or Quill, that the table plugin was basically instantiating another complete editor within the cell and then doing some magic to hide one toolbar and show the other, so that it's still a seamless experience. Once you go away from those basic features like bold and italic, those they all can do; code blocks and quotations are maybe a little bit more complicated. But basically, what you're used to from all the old editors, most can do, that's not a problem. But once you get into the nitty gritty and really want some features like autocomplete, where you type something and you get a table or something like that, we don't have that yet, but it's so useful, and so practical, and so nice. In some editors it's just way harder to implement than in others.

Preston: - I think we can all agree that as it gets more and more complex you kind of question the usefulness of some of these, especially the inline tables or some of those formatting options.

Preston: - Well, I think we've talked a lot about formatting, and clearly, formatting is of very, very strong interest to a lot of the content editors that we work with on a daily basis. But Mike mentioned earlier something very interesting about document formats and the interchangeability between them. That's also a very important feature of rich text editors, because you can do whatever editing you want, but if you can't extract it and move it into a different format, or make it usable for other systems, it doesn't make any sense. And so I'm curious, when we talk about these document formats, do all of these editors support Markdown, HTML? Do they all support rich text? And I guess my even more pertinent question here is, how easy is it to switch between them in these editors, or is it possible at all?

Kevin: - I think it's important to distinguish between the underlying document model of the editor, which is often JSON-based, especially in rich text editors, and how you interact with the editor. Most editors somehow support positional parameters. So, insert something at position X, something like that. Because that's how most humans comprehend text editors. So we somehow try to port that to rich text editors. There are some editors like ProseMirror that are more structured, so you really need to say, okay, I want to insert something in this paragraph, inside this table, at this position. But this is also translated to index parameters, because even ProseMirror, which is structured internally, accepts something like index parameters. So, insert something at position one, for example. And I really like that. In comparison, Quill.js also has an internal model that is purely based on positional parameters. It accepts changes that are defined in the delta format. And I really love this data format because it's a really easy description of how to insert something into the editor. It's index-based. And it is also perfectly suited for collaboration. But something that is really hard when you only work with index parameters is designing tables. So, when you work with tables, I think something like ProseMirror, which is more structured, is really cool.
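The delta format Kevin describes can be sketched in a few lines. This is not Quill's implementation, just a plain-JavaScript approximation of how index-based operations (retain, insert, delete) describe a change; formatting attributes are ignored here.

```javascript
// Minimal sketch of applying a Quill-style delta to a plain string.
// Real Quill deltas also carry formatting attributes; this version
// only handles the positional ops: retain, insert, delete.
function applyDelta(text, delta) {
  let result = "";
  let pos = 0; // current position in the original text
  for (const op of delta.ops) {
    if (op.retain !== undefined) {
      result += text.slice(pos, pos + op.retain); // keep these characters
      pos += op.retain;
    } else if (op.insert !== undefined) {
      result += op.insert; // splice new text in at the current position
    } else if (op.delete !== undefined) {
      pos += op.delete; // skip deleted characters
    }
  }
  return result + text.slice(pos); // keep any untouched tail
}

// "Insert ' brave' at position 5" expressed as a delta:
applyDelta("Hello world", { ops: [{ retain: 5 }, { insert: " brave" }] });
// → "Hello brave world"
```

Because a delta only talks about indices, two collaborators' changes can be transformed against each other, which is why Kevin calls the format well suited for collaboration.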

Fabian: - What Kevin said is very important, but might have been a little too deep already for what our audience expects. So I would really like to step back and just show what this document model is. And we are at a very exciting point for me personally, because we are at a transition point. All of the old-generation editors, CKEditor 4 and whatever else, some nicer, some not so nice, have all been built on something called contenteditable. This contenteditable was basically supplied by the browser. It allowed basic formatting, and every one of the trillion browsers out there, even browsers with the same name, implemented it differently. It was a huge headache. So all the editors said, no, no, no more contenteditable. We really don't want that anymore. The huge advantage of this old generation of editors is that you threw them some HTML, maybe output from Word, and they could take it. It might not have looked nice, but they could take it. They could display it. You could even edit it. So you just threw some HTML at them and you got HTML out. So for something like Google, that's perfectly suited. You load the HTML from the database. The user edits the HTML, and it's saved again to the database. The new generation of editors, CKEditor 5, ProseMirror, Quill, they all have some internal document model. And we are seeing that a lot in other areas of the web as well, that we're using these extra technologies, these languages that allow us to express the same thing that was in the HTML, but differently. And because they all have these internal document models, what you can do is, for example, take...
In theory, at least, you can take the same document model and in one place display it as What You See Is What You Get, while in another you could display it as Markdown, as long as you don't have something in it that Markdown doesn't support, and you can basically transfer it back and forth. Because the only thing that changes is the transformation from the document model to what the user is editing, and how you work on the document model. And that makes for really cool demos. We'll put in a link to a ProseMirror demo where you have the technical person who knows Markdown, all the commands out of their head, and they're just typing in Markdown. And you have a non-technical person who can collaborate on the same document, because they can just click on bold and it's bold, and they see it as What You See Is What You Get. And that's so cool about the new generation of editors. Later we'll talk a little bit about the challenges, but I think that was a good introduction.
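The "one model, many renderings" idea Fabian describes can be sketched in plain JavaScript. The JSON shape below loosely mimics ProseMirror-style document JSON (a tree of typed nodes), but both serializers are illustrative, not any editor's actual export code.

```javascript
// One document model, two renderings (HTML and Markdown).
// The node shape is a simplified, hypothetical stand-in for a real
// editor's document tree; the serializers are illustrative only.
const doc = {
  type: "doc",
  content: [
    { type: "heading", level: 1, text: "Status" },
    { type: "paragraph", text: "Ship it", bold: true },
  ],
};

// Render the same model as HTML...
function toHTML(doc) {
  return doc.content.map((node) => {
    const text = node.bold ? `<strong>${node.text}</strong>` : node.text;
    return node.type === "heading"
      ? `<h${node.level}>${text}</h${node.level}>`
      : `<p>${text}</p>`;
  }).join("\n");
}

// ...or as Markdown. Only the serializer changes, never the model.
function toMarkdown(doc) {
  return doc.content.map((node) => {
    const text = node.bold ? `**${node.text}**` : node.text;
    return node.type === "heading" ? `${"#".repeat(node.level)} ${text}` : text;
  }).join("\n\n");
}
```

This is why the demo Fabian mentions works: a Markdown view and a WYSIWYG view are just two serializations of the same underlying tree.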

Preston: - And just to add a little bit even more context here, I think when you talk about, Fabian, the ways in which we've evolved over time, I mean, long gone are the days when we had those phpBB formatting toolbars which were limited to let's say three or four different buttons, and they never worked half the time. To nowadays, this very advanced and almost abstract way of working that is really kind of a layer above, where you're talking about working with JSON as opposed to working with direct HTML or direct text. We're actually talking about an object tree, which is really amazing, and I think is very compelling. So let's go ahead and move a little bit towards some of the more business requirements here. I do want to talk a little bit about this Fortune 50 client that you mentioned. We know that all of these editors and all of these powerful tools do have the functionality to do these formatting changes, have the abstraction layer as part of this new, this kind of new document model that we talked about. But I wanted to ask, there's kind of differences in how each of these editors like ProseMirror, like Draft.js, like Quill, how they manage all of the underlying data, and also how customizable they are. Can we talk about some of the key requirements here? What's maybe some of the major characteristics that you all wanted to see come out of this project?

Michael: - Before we jump into the technical stuff, I think one of the key things, well, first of all, it had to be collaboration ready because we're integrating this with a realtime collaboration system. But beyond the extensibility that Kevin talked about, which is critical because their needs are constantly changing, we need to integrate it with a lot of different third-party tools and systems. We want to add things like @mentions that tie into central authentication. I'll let these guys dig into that. There were a couple of business requirements. One of them was, you know, prove it. We looked at some really interesting editors that are still in the earlier stages of development, and we could swap them in in the future. That is another aspect of extensibility. We may choose to change editors in the future, or give different users different editors. But for launch we need something that's proven. Something that is really stable, that has a robust open source community behind it that is continuing to develop it with maintainers that are really responsive. We wanted to make sure that it was being used in enterprise production by large organizations. So, ProseMirror, for example, is used by The New York Times. And they've written some great posts about it. They were generous enough to get on the phone and talk to us a lot about their experience to sort of confirm some of our research and thinking in real world scenarios. That was really critical just from a, before we could even evaluate these editors and dig into the features, there was sort of a minimum bar.

Fabian: - Yeah, and what was also important from the proving standpoint: ProseMirror, for example, and we will come to that later, Confluence, which almost everyone knows and many work with, is built upon Atlaskit, and Atlaskit itself is built upon ProseMirror, so that was another plus point. CZI, the Chan Zuckerberg Initiative nonprofit, they are building a Google Docs-like clone based on ProseMirror. Also very interesting. So we had several things to just work with, and to see, and use. You use those demos, and they just work. Tables look great, things work, so that was a huge plus point for ProseMirror, being proven through use by other large organizations.

Nik: - Maybe I can add a word about Atlaskit. I mean, we'll dig in later. But as Fabian already mentioned, Confluence is built on Atlaskit, and not only Confluence. Basically, Atlaskit is this design system from Atlassian, and everything they're building at the moment, everything they're rebuilding and redesigning, is built on top of Atlaskit. So the Atlaskit editor core is built on top of ProseMirror in their design system. And this also gave us, in terms of, I don't know, kind of showing off to the client, a good head start in the beginning. I mean, they had a different design and they had different widgets, but you could take a lot of that stuff, put a different design on top of it, and get a lot of these tools out there. So, while it was not really a requirement, it was a really, really good way to impress early on. And because Atlassian has done a great job with Atlaskit, it could give us a good head start. And yeah, accessibility, multiplatform, all of that is built in.

Preston: - Let's dig into some of these. I think, Nik, you just mentioned multiplatform. I mean, this is a really interesting idea, that you should be able to use a rich text editor on whatever device. On a phone, on a tablet, in an Electron app. Can you talk a little bit about how you thought about multiplatform and why it was so important to these folks?

Nik: - I think in general, the world is becoming way more mobile. And while desktop is probably still the main use case for this intranet, people increasingly want to edit something on mobile. And while we're not tackling that yet, we wanted to pick a platform that we can later expand to it. I can tell that some editors have their fair share of troubles with mobile, simply because the environment is different, browsers behave differently, so the underlying document model sometimes already struggles. Mostly they work fine and it's a matter of your UX. But yeah, you basically want to pick something that definitely works on all platforms so you can expand in all directions.

Preston: - And one thing you just mentioned as well, Nik, that I wanted to call out is the notion of extensibility. You know, third-party integrations, being able to work with other tools. One thing Mike had just mentioned was the notion of being able to tie in @mentions and integrate that with the central authentication system. I also know that there are other third-party tools you want to integrate with, and that you see as being important for these rich text editors. Can you give some examples of those? Everyone in the group, feel free to jump in as well.

Nik: - Yeah, absolutely. Let's say you want to reference a Dropbox file, or a Box file, or you want to mention another user. These are then custom nodes. So if you have an editor that only supports standard HTML text and doesn't allow you to make your own nodes, then you can't do this. That's why this goes back to the document model. Basically, the document model of the editor has to be extensible so you can actually extend it with your own nodes, and then build a user interface to add these custom nodes. And then, however you want to implement them, you can just reference an ID to a GitHub issue, for example, and then you could load the data on demand or you could actually put the data in the document. This then ties into the authentication system, how you load the data, and so on. This is very dependent on security needs and customer requirements. But in the end, the gist of it is, you want to be able to create something where you can add a toolbar item to add GitHub issues, connect them, and have them in the rich text document. I mean, you could for example even have custom Markdown syntax for this. But this is where WYSIWYG usually outperforms Markdown and other systems, by far, because the experience is so much better.
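In ProseMirror terms, the custom node Nik describes is declared as a NodeSpec in the schema. The sketch below is a plain object showing roughly what an @mention node could look like; `inline`, `atom`, `attrs`, `toDOM`, and `parseDOM` are real NodeSpec options, while the attribute names `id` and `label` are hypothetical choices for this example.

```javascript
// Sketch of a ProseMirror-style NodeSpec for an @mention node.
// The attribute names (id, label) and CSS class are made up here.
const mentionSpec = {
  inline: true,
  group: "inline",
  atom: true, // treated as a single unit: the cursor never enters it
  attrs: { id: {}, label: {} },
  toDOM(node) {
    // Serialize as a span carrying the user id; the editor shows the label.
    return ["span", { class: "mention", "data-id": node.attrs.id }, "@" + node.attrs.label];
  },
  parseDOM: [
    {
      tag: "span.mention",
      getAttrs(dom) {
        // Recover the attrs when pasting or loading HTML back in.
        return { id: dom.getAttribute("data-id"), label: dom.textContent.slice(1) };
      },
    },
  ],
};
```

In a real setup this spec would be merged into the editor's Schema alongside the built-in nodes; here it is only a standalone object illustrating the shape.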

Fabian: - For example, you could hover over them and it would show you all the details of that GitHub issue, whether there's a pull request or not, et cetera. The possibilities are endless, obviously. And I think that's so very cool about that. What's also really great about this kind of editor integration is that there are so many possibilities in extending it. For example, one thing we didn't talk about much yet, correct me if I'm wrong, is that we're building everything on React components. You just have your standard React component, for example for an autocomplete, and then you can put it in the editor. Another nice thing about ProseMirror is that what's stored in the document and how it's displayed can be different, and that's also an important part for accessibility, which we probably also wanted to talk about.

Preston: - Absolutely, yeah. Accessibility is a topic that's very near and dear to my heart personally. I know that, Mike, you just mentioned earlier as well that when it comes to a large Fortune 50 company like this one, being able to work with this very large workforce that has a variety of different abilities and a variety of different needs is important. We alluded earlier to some of the challenges around accessibility with rich text editors. We talked about things like contenteditable, the contenteditable attribute on DOM elements, ARIA labels. I know that we've looked at some of the capabilities and we've talked about some of the nice things that these editors have. Are there any that I've missed besides the contenteditable and some of the ARIA features?

Nik: - Kevin, you want to take this, or should I?

Kevin: - You do, please.

Nik: - In general, a lot of accessibility you get out of the box if you have a sane document structure in HTML. So if you have headlines well structured and so on, it makes it easier for screen readers to parse the content and for people to jump around just by voice input. But if you actually make popups, dialogs, toggle buttons, then you get into the nitty gritty details; if you make your custom ones, you really have to take care of accessibility on your own. If you look at all the standard toolbars and buttons that a lot of these editors provide, or come with, they have accessibility built in. And that's really good, because it shows that this is already a standard, that it's kind of expected. But as soon as you start to build your own, like @mentions, or a GitHub plugin to reference pull requests, and you're doing your own popup and dialog, you really have to take care of it by yourself. That is still a lot of work. We were fortunate that Atlaskit did a lot of good stuff out of the box. We already got feedback that there are a couple of improvements we can make. But that's okay. The initial response was quite impressive. Maybe the gist of it is: even with these new editors, although they're using contenteditable, you can make them very accessible, but as soon as you do custom stuff you have to take care of it yourself.

Preston: - Yeah, and I think that, you know, you just mentioned this notion of all of the custom work you have might potentially challenge some of the accessibility and be a problem. This is where having that notion of great flexibility, and extensibility, and customizability, comes with great responsibility as well. I know that one of these you mentioned was the popups. For example, having that autocomplete widget show up with the @mention, that's very challenging. As somebody who likes to think through how would I build that accessibly, I actually don't know where I would start. That's a very challenging one.

Nik: - We very recently had a call with an accessibility expert to talk through that one. I've built a lot of React components in the past that were focused on accessibility, but even I learned a lot in this call about concepts like live regions. You can have a live region in your document and then you can announce, basically, state changes. So for example, one thing that we learned that we're currently not doing yet, but we definitely want to, is if you have some text and you toggle it to be bold, you should announce the state. Is it now bold, or is it not bold? Because by just hitting the toggle, if you listen to the voice, the screen reader will just tell you, you toggled the bold status. Like, uh, okay, but which one is it now? This is very, very interesting. What I learned basically is: turn on the screen reader and dim your screen so it's black, and try to do all the actions that you usually do in this text editor just by navigating around with your keyboard or with voice. If you can get through it, then you're in a pretty good state already. By doing this test and this call, and learning about all these things, we noticed a bunch of things that we're missing. But we're working on it. It's an interesting journey.
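The pattern Nik describes, announcing the resulting state rather than just the action, can be sketched like this. Only the `aria-live` attribute itself comes from the ARIA spec; the element id and helper function below are hypothetical.

```javascript
// Sketch: announce the formatting *state*, not just the action.
// In a browser you would keep a visually hidden live region, e.g.
//
//   <div id="a11y-status" aria-live="polite" class="visually-hidden"></div>
//
// and after each toggle write the message into it, so the screen
// reader speaks "Bold on" / "Bold off" instead of "toggled bold":
//
//   document.getElementById("a11y-status").textContent =
//     formatAnnouncement("Bold", editorStateIsBold);
//
// The message builder itself is plain JavaScript:
function formatAnnouncement(format, isOnNow) {
  return `${format} ${isOnNow ? "on" : "off"}`;
}
```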

Preston: - Well, I know what I'm gonna be doing this evening after I get off work. It sounds like actually a lot of fun. Like, playing the game Myst or something. I know that there are also some specific requirements that were more interesting. And I think that there are definitely some interesting implications that come about when you mix rich text editing with some of the other ways in which people like to work with components on the page. Like, maybe the most important component or most popular component right now, React components. How exactly have you thought about things like embedding React components or embedding videos? I know that you've thought about actually placing React views straight into these rich text editors. How has that worked out for you all?

Kevin: - I think that's definitely one of the more interesting things about ProseMirror, because a lot of people seem to do that. They plug in their rendering engine; like, there's Vue.js, and I know a project, TipTap, that uses Vue.js instead of React. Other projects like Atlaskit build on React to render custom node views. And you can basically render everything in the editor that you would render on a website. I saw a project where you render a PDF using a React view, because there's this great project, React-PDF, I think it's called. It's a really cool project. And you just plug it in, and you have a PDF inside your editor. That's really cool, right? There's a lot of other stuff that you can do just like that. And because ProseMirror is already built on this concept of immutable state, it's a really nice fit to just use React within ProseMirror. You can do everything without React, too; I would argue that in Quill.js it's really hard to use something like React inside the editor, but you can still do everything you want. You can build your custom tables and stuff like that. But React certainly makes a lot of stuff easier, because a lot of developers have a lot of knowledge in React. So, it really makes stuff easier.

Fabian: - Not only that. There's also the possibility to just reuse components and combine the best out of the React world. That is also important from the perspective of how to get developers for this project: we focused on collaboration developers as well as React developers to get the best of the best.

Kevin: - I think we can all agree that you shouldn't manipulate the DOM manually anymore. For the editor itself, we have ProseMirror to handle the DOM. And for custom views, any kind of custom stuff, for example how a table is built, there are a lot of divs in there, a lot of CSS. I wouldn't build that directly in the DOM and manipulate that information in the DOM directly. There are a lot of edge cases that you need to handle, and you can do a lot of stuff wrong. So, React really helps, I think.

Nik: - There's one more specific requirement that is probably worth mentioning: our comments. We have these annotations, or comments, and this is a very interesting aspect that we learned over time. There was this requirement that, for different permission levels, comments shouldn't be part of the document model, so we wanted to keep them out of the document model. But it's also really interesting that if you start an annotation while you share the document model collaboratively in realtime, you don't really want to... If you're making a draft of a comment, you don't want to share that in realtime. And the same goes for, let's say, @mentions. If you start typing, you don't want the person on the other end to see the autocomplete suggestions. This needs a little bit of rethinking, because you basically have parts: the document model is really the content that you want to share in realtime, but there are other parts, like user interface elements or annotations in draft mode, that you want to keep out of it. And then it's really, really useful to share the same component library, so you can actually stay in the same system and not build the editor with one UI library and then build these other user interface elements with another library. It's really handy to use the same thing. It keeps us sane and makes it easy to move forward.

- [Kevin] Well put.

Preston: - Let's jump into some of the actual tools that are out there. I think that we've heard a lot of names thrown around. There's been a lot of very interesting technologies mentioned, and a few that we haven't mentioned. We've talked about ProseMirror briefly. We talked about Draft.js. Very briefly talked about Quill and CKEditor 5. But there's also some others. There's Slate and TipTap. What are some of the top open source... When we look at these open source editors on the market, which ones were the ones that really were compelling for you all? And what were some of the strengths and weaknesses?

Kevin: - I think, for me, the biggest strength... Like, we can talk about each of the editors separately. Maybe we go from the most popular ones. Maybe Fabian can explain something about CK 5. He has the most experience with that.

Fabian: - Sure. CKEditor 5 is the successor of the very, very popular CKEditor 4. It also switched to a JSON-based model. However, they upcast and downcast everything, so what you basically still get in the end is HTML, or structured HTML. So, for example, you cannot have your own hello tag. The document model would just not know what the heck a hello tag, or a blink tag, is. It would just ignore it and everything that's in it. Because when it loads the HTML, it loads it into its own document model, which is also JSON-based, and then it puts it out again. Basically, CKEditor 5 is pretty strong. It has good accessibility, and it also has nice collaboration. The collaboration just had one big flaw: it was not open source. That was unfortunately a deal breaker, both in terms of extensibility and in terms of putting anything out for everyone. I mean, Drupal is open source. We work a lot with open source at Tag1. We love open source, and it's so cool that Kevin, as the developer of Yjs, is here. That's also kind of how we found Kevin and the other three: we talked directly with the people who are developing these editors and checked them out for our project when we were interested in some part of them. That was the team we then settled on. But CKEditor 5 is still a little bit young. It has just recently gotten out of beta, while ProseMirror has a longer history of being stable and being used for these things. That will not remain a concern, because some other big players are settling on CKEditor 5, but experience, how long something has been used in production, is not something to discount. And then there's a huge compatibility break with CKEditor 4. So what could have been a huge advantage of CKEditor 5, that all of our Drupal backends would directly work with it, is not there, because there's a real break between them. CKEditor 5 is a completely different product than CKEditor 4.
Which has its advantages, but as there is no backwards compatibility, and the collaboration module was not open source, we looked more at the other editors. Slate, for example, we've not talked much about. It's a great editor. It has, from what I've seen, the nicest plugin system. It's really cool, very nice, but it's in beta, and it has been in beta for a long time. And we wanted something proven, something stable. Something in beta, where there could be hard BC breaks, was just too much risk for us on this project. Nik can maybe talk more about Slate because he knows it well. Draft.js was more like the original Facebook thing, a monolith. It's a great editor, it has a nice plugin system, it's React-based. But it is harder to extend overall, and it has also aged a little; it's one of the older editors. Also, the community is not as active as, for example, ProseMirror's: it's mainly Facebook committing some patches here and there and maintaining it in a stable state. In the end, it didn't have the features we needed. Then there's TipTap with Vue. If anyone needs an editor for Vue.js, use TipTap, it's great. And it's basically ProseMirror, kind of a packaging of ProseMirror for Vue. That's cool. And then we ended up with ProseMirror and Quill, and that was the big race, ProseMirror versus Quill. Now, Yjs supported both, so that was not a deciding factor. But in the end, ProseMirror won, basically on experience. The tables plugin also looked much nicer, in how it behaved and how it looked and everything. Quill's Delta format is great, and it's also a collaboration-ready editor; it directly works. But then you need to use the data format it provides, and you need to use ShareDB, and that again put our flexibility a little bit to the test. It's also OT-based, which we talk a little bit about in the other talk.
If you're interested, check that out. And we really wanted something where, in the end, maybe we'll never get there, maybe we will, but where we could at least think about a future of offline editing. That's again something we talk about there. But Quill versus ProseMirror was a really close race: ProseMirror gives you more of a framework, where you start with nothing and build your own editor, and Quill is a ready-made editor. You plug it into your application and it just works. It's great in that way. But once we added Atlaskit, Quill was out of the race.

Preston: - Yeah, I understand that... Oh, sorry, go ahead.

Kevin: - I think this was one of the bigger selling points. We had Quill and ProseMirror listed at the end, and we compared them. Quill.js has ShareDB. It's a proven concept, operational transformation. It also works with Yjs; there are a lot of companies that already use Yjs with Quill.js. And then there's ProseMirror. ProseMirror has all these features and a great community. I think it has a really interesting concept, and most modern editors nowadays, all the new editors that pop up, for example Atlassian's, are built on ProseMirror. There's also the Collab module, which is kind of similar to OT. It's a reconciliation approach. It doesn't handle conflicts as well as operational transformation, but it clearly works; it's proven. And also, Yjs works with ProseMirror, so we were covered there: either way, we could choose either of the editors with Yjs. And this is what I really wanted to do; I explained that, and why we did it, in the last webinar. But I think the biggest selling points, I felt, were the community behind ProseMirror, and the moment we saw Atlaskit and realized we could just build on top of it. Because there we had an existing editor, a lot of features, nice tables, nice interactions. That is, I guess, a big selling point of ProseMirror: a lot of open source components that you can just plug into the editor, and it just works. So yeah.

Preston: - Absolutely. One of the things I know you mentioned about ProseMirror, Nik, was the fact that Atlaskit helped so much. Is there anything more that you wanted to mention about Atlaskit? I think it's a very compelling story.

Nik: - I think there's not much more to add, but I can quickly recap: there's so much there that you can simply start using Atlaskit and have a good head start. The biggest trouble you might have, and we went through this, is that it's a big mono-repository, so we had to take out the parts we needed, the core editor, and then basically continue to use the rest from Atlaskit, taking the bits and pieces and slowly replacing the ones that actually needed to change. From the experience in this project, this worked very well, because in a very short period of time, I think it was just a matter of two weeks or so, we had something ready to show that the client could try, use, and actually feel. And if you can then test with real users, or potentially your real users, you're making better decisions than just coming up with, hey, we might do this and that, and have a button here. Rather than slowly building it up over a long time, starting from something that is fully fleshed out and then replacing bits and pieces was, for product thinking and product development, a really compelling story. And Atlaskit made it possible.

Fabian: - Atlaskit was definitely great for us. And what Nik was describing was part of our strategy with this client: we show progress every two weeks in a big demo, not only to the client itself but to a large stakeholder team that can all watch the progress and how it's done. That was really great for making a good impression quickly. But not only a good impression quickly. What you shouldn't undersell, Nik, is how much you improved the build system. I think it took about three minutes, at the start, to just build everything, and now you've gotten it down to 30 seconds, and rebuilds are about 10 seconds. No longer 30 seconds of wait time: change some CSS, wait 30 seconds, drink a coffee.

Nik: - We should give props to Sebastian there, not to me. He was digging into the webpack configuration and got hot rebuilding working with good compile times. This was really helpful for faster development. One thing I could add there, though, about Atlaskit: Atlaskit is not built with realtime collaboration in mind. For certain features, they do things like changing the document model just to show different user interfaces. So for example, for the @mentions that we are building, and the annotation or commenting section, we cannot use what's there in Atlaskit as-is; we have to adapt and change it. Otherwise it would be synced across to other users, and we don't want that. So, while Atlaskit was a good start, we now have to change a lot of things, especially with this realtime collaboration in mind. But that's fine. I think this was the strategy and the approach, and it was a good one. Highly recommended.

Kevin: - I think it was built with realtime in mind, but they use a custom version of the Collab module, which is a different realtime approach. So we just plugged in the Yjs plugin, and all the data that you have is shared immediately. I'm sure they have some filtering happening on the backend to filter out the data that you don't want to share; I'm not exactly sure how that works. But also, the backend for the Atlaskit collaboration approach is, I think, proprietary. The source code is not available, I think. I'm not sure.

Fabian: - I haven't seen it. I've searched for everything on collaboration that's out there on the internet. There are even some prototypes from The New York Times that can still be found, built on a five-year-old ProseMirror version, if someone wants to dig into history.

Preston: - Absolutely. Well, we are starting to run out of time, but I do want to give some opportunity to talk about some of the more interesting aspects here. By the way, what sort of additions have you made to ProseMirror? Just very, very quickly. I know, Fabian, you've done some work on this.

Fabian: - One of the important things, and I've already talked a little bit about this when I explained the document models, is integrating ProseMirror with Drupal. Now someone says, well, Drupal supports many editors. Yes, but only those of the old generation. So what we are now talking about is: we have these JavaScript mega-beasts that are usually run with Node, and they are coming to the old giants of PHP. Old not in terms of being outdated, but Drupal has been around for almost 20 years and it's traditionally PHP-based; it's basically HTML that you store in a database. And then you have ProseMirror with its JSON model. How you usually would handle that is you would take this JSON and run it through Node: Node would launch a little instance of the editor, render it, and then the webpage would be delivered. We cannot do that, because we are bridging this old generation of editors with the new generation of editors. And that's very, very interesting, because when I was starting out with this, the React developers were like, why do you want to output HTML? Why do we need that? And the Drupal developers were like, JSON? Why would we put JSON in the database? We are storing HTML. The answer is: we are storing both in the database. We're storing the HTML only as a kind of cache for display purposes; that's what we will be displaying to the user. And the JSON is what we then feed back into ProseMirror, or Atlaskit, or our custom editor, to load up the same state as before. That's very important, because it means we don't need to convert to HTML, load the HTML again, store it again, and convert back and forth, where we could be losing data. Instead, we store the document model of ProseMirror directly in the database, and we also store the HTML so Drupal can display it. That was a little bit of a challenge.
A challenge that the whole of Drupal will, at some point or another, also face, because now we're moving, with Drupal itself, with Drupal core, toward this new generation of editors. So, there are a lot of challenges, and I hope we can speak about that in more detail at some other point. But it's really interesting. And then there's also loading this whole React stack into a frontend which is still jQuery-based and AJAX-based, the traditional way Drupal used to work. Now this new framework comes in, and you want your @mentions, but also working with traditional Drupal comments, and you have to combine those two worlds. And that's very, very interesting.
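
(To make the storage pattern Fabian describes concrete: the sketch below, in plain PHP, derives display HTML from a simplified ProseMirror-style JSON document. The `renderNode` function and the reduced node set are hypothetical illustrations, not the project's actual code; the real ProseMirror schema and Drupal integration are much richer. The principle is the same, though: the JSON is the source of truth, and the HTML is a derived display cache.)

```php
<?php
// Hypothetical, simplified ProseMirror-style renderer. A real document
// has many more node types and marks; the shape (typed nodes with
// optional child content) is what matters here.
function renderNode(array $node): string {
    switch ($node['type']) {
        case 'doc':
            return implode('', array_map('renderNode', $node['content'] ?? []));
        case 'paragraph':
            return '<p>' . implode('', array_map('renderNode', $node['content'] ?? [])) . '</p>';
        case 'text':
            return htmlspecialchars($node['text'] ?? '', ENT_QUOTES);
        default:
            // Unknown node types are dropped, mirroring how schema-based
            // editors ignore markup they do not understand.
            return '';
    }
}

// The JSON document model would be stored in the database as-is;
// the HTML below would be stored alongside it purely for display.
$doc = [
    'type' => 'doc',
    'content' => [
        ['type' => 'paragraph', 'content' => [['type' => 'text', 'text' => 'Hello, world']]],
    ],
];
$html = renderNode($doc); // "<p>Hello, world</p>"
```

On load, the stored JSON is fed straight back into the editor, so nothing is lost to an HTML round trip.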

Preston: - So, it seems that... Oh, sorry, go ahead.

Fabian: - There's one part that we did that was really exciting for me, besides all of those mentions, collapsible sections, and the collaboration and shared editing.

Preston: - Well, unfortunately we are out of time. I did want to get to talking about the integrations but clearly we will have to save that for another time. I just wanted to say thank you so much to all of you in the audience for watching or listening to the live Team Talk. For all of the things that you heard in this call, in this webinar, things like ProseMirror, things like Yjs, things like Draft.js, all of these things, we're gonna have links with all of these technologies that you can take a look at. By the way, please don't forget to check out our previous webinar, the inaugural Tag Team Talk about shared editing, collaborative editing. And by the way, if you're interested in learning about a particular area or a certain topic, please feel free to reach out to us and the team at [email protected]. I want to give a big thank you to our guests today. First and foremost, Nik Graf, our senior software engineer, based in Austria. Fabian Franz, senior technical architect and performance lead. And Kevin Jahns, realtime collaboration systems lead and creator of Yjs. And of course, the managing director of Tag1, Michael Meyers. This is Preston So. Thank you all so much. And until next time, take care.

Sep 26 2019

I am writing this quick tutorial in the hope that it helps someone else out there. There are a few guides out there covering similar tasks, but they are not quite what I wanted.

To give everyone an idea on the desired outcome, this is what I wanted to achieve:

Example user profile with 2 custom tabs in it.

Before I dive into this, I will mention that you can do this with views, if all that you want to produce is content supplied by views. Ivan wrote a nice article on this. In my situation, I wanted a completely custom route, controller and theme function. I wanted full control over the output.

Steps to add sub tabs

Step 1 - create a new module

If you don't already have a module to house this code, you will need one. These commands make use of Drupal console, so ensure you have this installed first.

drupal generate:module --module='Example module' --machine-name='example' --module-path='modules/custom' --description='My example module' --package='Custom' --core='8.x'

Step 2 - create a new controller

Now that you have a base module, you need a route and a controller:

drupal generate:controller --module='example' --class='ExampleController' --routes='"title":"Content", "name":"example.user.contentlist", "method":"contentListUser", "path":"/user/{user}/content"'
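
The generated class is mostly boilerplate. As a rough sketch (the class and method names come from the command above; the body, which just prints the user's display name, is illustrative and assumes the entity upcasting configured in the next step), the controller ends up looking something like this:

```php
<?php

namespace Drupal\example\Controller;

use Drupal\Core\Controller\ControllerBase;
use Drupal\user\UserInterface;

/**
 * Returns responses for the custom user profile tabs.
 */
class ExampleController extends ControllerBase {

  /**
   * Builds the "Content" tab for the given user.
   */
  public function contentListUser(UserInterface $user) {
    // Because the route declares the 'entity:user' parameter, $user
    // arrives here as a fully loaded user entity, not just an ID.
    // Return any render array you like; this is where you get full
    // control over the output.
    return [
      '#markup' => $this->t('Content belonging to @name.', [
        '@name' => $user->getDisplayName(),
      ]),
    ];
  }

}
```

Replace the `#markup` stub with your own query and theme function once the tab appears.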

Step 3 - alter your routes

In order to use automatic entity parameter upcasting, and also proper access control, you can alter your routes in example.routing.yml to look like this. This is covered in the official documentation.

# Content user tab.
example.user.contentlist:
  path: '/user/{user}/content'
  defaults:
    _controller: '\Drupal\example\Controller\ExampleController::contentListUser'
    _title: 'Content'
  requirements:
    _permission: 'access content'
    _entity_access: 'user.view'
    user: \d+
  options:
    parameters:
      user:
        type: entity:user

# Reports user tab.
example.user.reportList:
  path: '/user/{user}/reports'
  defaults:
    _controller: '\Drupal\example\Controller\ExampleController::reportListUser'
    _title: 'Reports'
  requirements:
    _permission: 'access content'
    _entity_access: 'user.view'
    user: \d+
  options:
    parameters:
      user:
        type: entity:user

Step 4 - create the local tasks

This is the code that actually creates the tabs in the user profile. It belongs in example.links.task.yml in your module. There is no Drupal console command for this step, unfortunately. The key part is defining base_route: entity.user.canonical.

example.user.zones_task:
  title: 'Content'
  route_name: example.user.contentlist
  base_route: entity.user.canonical
  weight: 1

example.user.reports_task:
  title: 'Reports'
  route_name: example.user.reportList
  base_route: entity.user.canonical
  weight: 2

Step 5 - enable the module

Don't forget to actually turn on your custom module; nothing will work until then. After enabling it (and after any later change to the routing or task YAML files), rebuild caches with `drush cr` so Drupal picks up the new routes and tabs.

drush en example

Example module

The best (and simplest) example module I could find that demonstrates this is the Tracker module in Drupal core. The Tracker module adds a tab to the user profile.

Sep 26 2019

Although web accessibility begins on a foundation built by content strategists, designers, and engineers, the buck does not stop there (or at site launch). Content marketers play a huge role in maintaining web accessibility standards as they publish new content over time.

“Web accessibility means that people with disabilities can perceive, understand, navigate, and interact with the Web, and that they can contribute to the Web.” - W3C

Why Accessibility Standards are Important to Marketers

Web accessibility standards are often thought to assist audiences who are affected by common disabilities like low vision/blindness, deafness, or limited dexterity. In addition to these audiences, web accessibility also benefits those with a temporary or situational disability. This could include someone who is nursing an injury, someone who is working from a coffee shop with slow wifi, or someone who is in a public space and doesn’t want to become a nuisance to others by playing audio out loud.

Accessibility relies on empathy and understanding of a wide range of user experiences. People perceive your content through different senses depending on their own needs and preferences. If someone can't physically see the blog post you wrote or can't hear the audio of the podcast you published, that doesn't mean you as a marketer don't care about providing that information to them; it just means you need to adapt the way you deliver that information to that audience.

10 Tips for Publishing Accessible Content

These tips have been curated and compiled from a handful of different resources including the WCAG standards set forth by W3C, and our team of accessibility gurus at Palantir. All of the informing resources are linked in a handy list at the end of this post. 

1. Consider the type of content and provide meaningful text alternatives.

Text alternatives should help your audience understand the content and context of each image, video, or audio file. It also makes that information accessible to technology that cannot see or hear your content, like search engines (which translates to better SEO).

Icons to show image, audio, video

Types of text alternatives you can provide:

  • Images - Provide alternative text.
  • Audio - Provide transcripts.
  • Video - Provide captions and descriptions of the action in the video.

This tip affects those situational use cases mentioned above as well. Think about the last time you sent out an email newsletter. If someone has images turned off on their email to preserve cellular data, you want to make sure your email still makes sense. Providing a text alternative means your reader still has all of the context they need to understand your email, even without that image.

2. Write proper alt text.

Alternative text or alt text is a brief text description that can be attributed to the HTML tag for an image on a web page. Alt text enables users who cannot see the images on a page to better understand your content. Screen readers and other assistive technology can’t interpret the meaning of an image without alt text.

With the addition of required alternative text, Drupal 8 has made it easier to build accessibility into your publishing workflow. However, content creators still need to be able to write effective alt text. Below I’ve listed a handful of things to consider when writing alt text for your content.

  • Be as descriptive and accurate as possible. Provide context. Especially if your image serves a specific function, people who don’t see the image should come away with the same understanding as if they had seen it.
  • If you’re sharing a chart or other data visualization, include that data in the alt text so people have all of the important information.
  • Avoid using “image of,” “picture of,” or something similar. It’s already assumed that the alt text is referencing an image, and you are losing precious character space (most screen readers cut off alt text at around 125 characters). The caveat to this is if you are describing a work of art, like a painting or illustration.
  • No spammy keyword stuffing. Alt text does help with SEO, but that’s not its primary purpose, so don’t abuse it. Find the happy medium between including all of the vital information and working in maybe one or two of the keywords you’re trying to target.
Illustration of red car with flames shooting out of the back, flying over a line of cars on a sunny roadway.

Example of good alt text: “Red car in the sky.”
Example of better alt text: “Illustration of red car with flames shooting out of the back, flying over a line of cars on a sunny roadway.”

3. Establish a hierarchy.

Upside down pyramid split into three sections labeled high importance, medium importance, low importance

Accessibility is more than just making everything on a page available as text. It also affects the way you structure your content, and how you guide your users through a page. When drafting content, put the most important information first. Group similar content, and clearly separate different topics with headings. You want to make sure your ideas are organized in a logical way to improve scannability and encourage better understanding amongst your readers.

4. Use headings, lists, sections, and other structural elements to support your content hierarchy.

Users should be able to quickly assess what information is on a page and how it is organized. Using headings, subheadings and other structural elements helps establish hierarchy and makes web pages easily understandable by both the human eye and a screen reader. Also, when possible, opt for using lists over tables. Tables are ultimately more difficult for screen reader users to navigate.

If you’re curious to see how structured your content is, scan the URL using WAVE, an accessibility tool that allows you to see an outline of the structural elements on any web page. Using WAVE can help you better visualize how someone who is using assistive technologies might be viewing your page.

5. Write a descriptive title for every page.

This one is pretty straightforward. Users should be able to quickly assess the purpose of each page. Screen readers announce the page title when they load a web page, so writing a descriptive title helps those users make more informed page selections.

Page titles impact:

  • Users with low vision who need to be able to easily distinguish between pages.
  • Users with cognitive disabilities, limited short-term memory, and reading disabilities.

6. Be intentional with your link text.

Write link text that makes each link’s purpose clear to the user. Links should provide info on where you will end up or what will happen if you click on that link. If someone is using a screen reader to tab through 3 links on a page that all read “click here,” that doesn’t really help them figure out what each link’s purpose is and ultimately decide which link they should click on.

Additional tips:

  • Any contextual information should directly precede links.
  • Don’t use URLs as link text; they aren’t informative.
  • Avoid writing long paragraphs with multiple links. If you have multiple links to share on one topic, it’s better to write a short piece of text followed by a bulleted list of links.

EX: Use "Learn more about our new Federated Search application" not "Learn more".

7. Avoid using images of text in place of actual text.

The exact guideline set forth by W3C here is “Make it easier for users to see and hear content including separating foreground from background.”

There are many reasons why this is a good practice, reaching beyond accessibility implications. Using actual text helps with SEO, allows users to search within the page, and makes it possible to highlight text for copying and pasting. There are some exceptions for images that are essential to include (like a logo), and providing alt text may also be a solution for certain use cases.

8. Avoid idioms, jargon, abbreviations, and other nonliteral words.

The guideline set forth by W3C is to “make text content readable and understandable.” Accessibility aside, this is important for us marketers in the Drupal world, because it’s really easy to include a plethora of jargon that your client audience might not be familiar with. So be accessible AND client-friendly: if you have to use jargon or abbreviations, provide a definition of the word, link to the definition, or explain any abbreviation on first reference.

Think about it this way: if you are writing in terms people aren’t familiar with, how will they know to search for them? Plain language = better SEO.

9. Create clear content for your audience’s reading level.

For most Americans, the average reading level is a lower secondary education level. Even if you are marketing to a group of savvy individuals who are capable of understanding pretty complicated material, the truth is, most people are pressed for time and might become stressed if they have to read super complicated marketing materials. This is also important to keep in mind for people with cognitive disabilities, or reading disabilities, like dyslexia.

I know what you’re thinking, “but I am selling a complicated service.” If you need to include technical or complicated material to get your point across, then provide supplemental content such as an infographic or illustration, or a bulleted list of key points.

There are a number of tools online that you can use to determine the readability of your content, and WebAIM has a really great resource for guidelines on writing clearly.

10. Clearly label form input elements.

If you are in content marketing, chances are you have built a form or two in your time. No matter whether you’re creating those in Drupal or an external tool like Hubspot, you want to make sure you are labeling form fields clearly so that the user can understand how to complete the form. For example, expected data formats (such as day, month, year) are helpful. Also, required fields should be clearly marked. This is important for accessibility, but also then you as a marketer end up with better data.

Helpful Resources

Here are a few guides I've found useful in the quest to publish accessible content:

Accessibility Tools

Sep 25 2019

Yesterday the digital experience world and the Drupal community received the long-awaited answer to the question “What’s going to happen with Acquia?” when it was announced, first on Bloomberg, that Vista Equity Partners would buy a majority stake in Acquia in a deal that values the company at $1B.

Many were caught off guard by the timing, but an event like this had been expected for a long time. After receiving nine rounds of venture funding totaling $173.5M, it was time. As the leader and largest company in the Drupal space, Acquia has a center of gravity that leaves many asking a new question: What Now for Drupal?

What Are the Angles?

Before I attempt to answer what I think this means for Drupal and the Drupal community, it is worthwhile to at least speculate on the strategy Acquia plans to pursue as part of Vista. Everyone I have heard from, both offline and online, since the announcement yesterday has been speculating on the Vista angle (i.e., why did they want Acquia?). As TechCrunch led with, “Vista Equity Partners...likes to purchase undervalued tech companies and turn them around for a hefty profit…” Well, that’s pretty much what a PE firm does, and to me it is less interesting than asking: what does Acquia want from Vista?

What I believe Acquia wanted to get out of this is a heavyweight partner with capital and connections that could help develop Acquia into a more formidable competitor to Adobe, Sitecore, and other digital experience platforms (“DXP”). It was just last week that Salesforce Ventures made a very sizeable $300M investment in Automattic, the parent company of WordPress. Things are heating up among the top digital experience platforms, and no one is going to survive, let alone stay at the front of the pack, without some serious capital behind them.

Who Wins?

I believe Acquia plans to use Vista’s investment and resources to continue making targeted acquisitions and investments to become a more robust and powerful digital experience platform. I would expect it to grow its suite of products, invest even more heavily in sales and marketing to increase revenue, and grow its installed base of customers.

Vista will then have a more valuable asset from which to pursue either an IPO or a strategic acquisition. It is possible this will follow the pattern of Marketo, which Vista bought and then sold to Adobe for a $3B profit, or Ping, which Vista recently took public in an IPO.

So there are mutual interests being met and a fair valuation that gets the necessary attention - so both parties win. I also think customers win from increased product development, competition, and a more robust ecosystem.

What Does This Mean For Drupal?

I think this is the best of all possible scenarios for both Drupal (the product) and the Drupal community. While many will bemoan the intrusion of a large private equity firm into the sacred space of an open source community, change was inevitable and it comes with predictable tradeoffs that have to be measured in the context of a new reality for the space. The community needs the indirect investment that this deal provides and it far outweighs the alternatives. If you assume that there were only a few possible scenarios for Acquia that were going to play out sooner or later, they would be:

  1. Organic growth / status quo - In my opinion, the worst scenario due to the dynamics of the market converging. Without a huge infusion of capital like the Vista deal into Acquia, Drupal simply wouldn’t be able to compete fast enough to stay in the top DXP category against Adobe, Sitecore, Salesforce and WordPress. 

  2. IPO - As a liquidation event for VC investors, this could be perhaps the most lucrative, but the public markets are fickle and I believe that would be very hard on a large open source community and product like Drupal due to the dynamics of control for a public company. This may yet come to pass as the end game for Vista, but I think it is good it was not the immediate play. 

  3. Strategic Acquisition - Salesforce, Amazon, Google, IBM and others of this size would be likely acquirers. Again, this may yet come to pass, but it would not have been an ideal short-term play for Drupal because of the weight of influence it would add to the community and open source dynamic.

  4. PE - Obviously, what did happen. This deal brings the financial strength and strategic opportunities without the messiness of the public markets or a new giant controlling the ecosystem. 

As for the direct benefits to the Drupal project, I take Dries at his word in the personal statement he made on his blog that this strategy will allow Acquia to provide even more for Drupal and the community, including:

  • Sponsor more Drupal and Mautic community events and meetups.

  • Increase the amount of Open Source code [sic] contributed.

  • Fund initiatives to improve diversity in Drupal and Mautic; to enable people from underrepresented groups to contribute, attend community events, and more.

Those are all things that directly benefit the community and make open source Drupal better in addition to the opportunities that the deal affords Acquia to better compete against its rivals. 

How Things Line Up From Here…

Consolidation and funding in the digital experience platform (“DXP”) space are going to make for a wild ride as the top players continue to unveil pieces of their strategy.  

  • Adobe - With Magento and Marketo neatly tucked in, Adobe remains the most competitive player both in terms of market share and the comprehensiveness of its offering, though cost and proprietary lock-in to a single homogeneous platform are continued weaknesses.

  • Acquia / Drupal - Recent acquisitions of platform components like Mautic and Cohesion are likely to continue or increase after the Vista deal in an effort to bring an open and more heterogeneous alternative to bear against the others. 

  • Sitecore - The recent acquisition of a top service provider, Hedgehog, followed by the announcement that Sitecore was laying off 7% of its workforce, can’t be interpreted as a strong sign of health, but the enterprise market is full of Microsoft ecosystems that will be partial to Sitecore’s underlying technology.

  • Automattic / WordPress - I have less insight into the WordPress space than I do Drupal, but the Salesforce Ventures investment doesn’t feel like an attempt to gain a CMS for its own offering (sidenote: Salesforce does have a “CMS,” and its Ventures arm has invested in other CMSs like Contentful). Founder Matt Mullenweg told TechCrunch that Automattic doesn’t want to change course, and that with the new influx of cash there won’t be any big departure from the current lineup of products and services: “The roadmap is the same. I just think we might be able to do it in five years instead of 10.” Automattic’s recent acquisition of Tumblr is part of a strategy I don’t fully understand, but it seems to be a continued volume market move into the larger media space and less about competing with the other platform providers. However, $300M could go a long way in tooling the platform for lots of purposes.

I also think there is a lot more to watch on the related martech front surrounding customer data. In April, Salesforce and Adobe announced (in the same week) that they were acquiring competing Customer Data Platform (CDP) products. So this is about the whole digital experience stack, and where we are likely to see more acquisitions and consolidation is beyond the CMS.

What Does This Mean For Our Clients?

Despite the race to create the killer platform, most of our clients have consciously, or organically, adopted heterogeneous digital experience platforms. This means they rely on many different components to “weave” together solutions that meet their unique requirements. As Forrester explains, DX is both a platform and a strategy, and despite the influence of these major software and cloud players, a “digital experience” needs to be created - that includes strategy, customer research, UX, design, content, brand, and the integration of custom and legacy software and data sources in addition to purchased software. Still, we believe our customers do need to be aware of the changing dynamics in the market, and in particular how consolidation will affect their platform investments.

What Does This Mean For Phase2?

At Phase2, this news comes with much interest. We were one of the very first Acquia partners named after the company was founded in 2008. Over the last 10+ years, we have shared, and continue to share, numerous clients. We are also prolific contributors and implementers in the Drupal space who have been a part of some of the biggest and most impactful Drupal moments over the last ten years. We ourselves once invested heavily in creating many products that extended and enhanced the capabilities of Drupal, because we believe it is a powerful platform for creating digital experiences.

Over time, as our agency grew and moved “up market,” we diversified our expertise: we have become Salesforce partners, developed commerce experience, and enhanced our design, UX, and creative capabilities. We also use WordPress, JavaScript frameworks for decoupled sites, and static site generators in conjunction with a wide variety of marketing technologies to create digital experience platforms that go beyond websites and CMSs.

We will continue to monitor the trends and prepare and enable ourselves to create digital experiences that advance our clients’ goals, and we fully expect Drupal will remain a key component of building those experiences well into the future.

Sep 25 2019

Securing your website is not a one-time goal but an ongoing process that needs a lot of your attention. Preventing a disaster is always better than recovering from one. With a Drupal 8 website, you can be assured that some of the top security risks are being taken care of by the Drupal security team.
Drupal has powered millions of websites, many of which handle extremely critical data. Unsurprisingly, Drupal has been the CMS of choice for websites that handle critical information, like government websites, banking and financial institutions, e-commerce stores, etc. Drupal's security features address all top 10 security risks of OWASP (the Open Web Application Security Project).
Drupal 8 is considered one of the most secure versions to date because of its forward-thinking and continuous-innovation approach. The Drupal security team also ran a security bug bounty program six months before the release of Drupal 8. Through this program, users were invited to test-drive Drupal 8 and find (and report) bugs. And they even got paid for it!

Drupal Security Vulnerabilities

It goes without saying that the Drupal community takes Drupal security issues very seriously and keeps releasing Drupal security updates and patches. The Drupal security team is proactive and aims to be ready with patches before a vulnerability goes public. For example, the team released the security update SA-CORE-2018-002 (Drupalgeddon2) days before the vulnerability was actually exploited, advising Drupal site admins to update their websites.
Quoting Dries from one of his blog posts on the vulnerability: “The Drupal Security Team follows a 'coordinated disclosure policy': issues remain private until there is a published fix. A public announcement is made when the threat has been addressed and a secure version of Drupal core is also available. Even when a bug fix is made available, the Drupal Security Team is very thoughtful with its communication.”
Some interesting insights on Drupal’s vulnerability statistics are available from CVE Details.


1. Keep Calm and Stay Updated – Drupal Security Updates    

The Drupal security team is always on its toes looking out for vulnerabilities. As soon as it finds one, a patch or security update is released. Also, since Drupal 8 and the adoption of continuous innovation, minor releases are more frequent, which makes it quick and easy to update to a better, more secure version.
Making sure your Drupal version and modules are up to date is really the least you can do to ensure the safety of your website. Drupal contributors stay on top of things and are always looking for any security threats that could spell disaster. A Drupal update doesn’t just come with new features; it also carries security patches and bug fixes. Drupal security updates and announcements are emailed to users, and site admins should keep their Drupal version updated to safeguard the website.

2. Administer your inputs 

Most interactive websites gather inputs from users. As a website admin, unless you manage and handle these inputs appropriately, you are at high risk. Hackers can inject SQL code that can cause great harm to your website’s data.
Stopping your users from entering SQL-specific words like “SELECT,” “DROP,” or “DELETE” would harm the user experience of your website. Instead, use the escaping and filtering functions available in Drupal’s database API so that user input can never be interpreted as SQL. Sanitizing your inputs is one of the most crucial steps toward a secure Drupal website.
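Drupal's database API handles this for you through query placeholders, but the underlying idea is language-agnostic. Here is a minimal sketch in Python using the standard-library sqlite3 driver (not Drupal code; the table and input are invented) contrasting naive string concatenation with a parameterized query:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users (name) VALUES ('alice')")

# Malicious input that breaks out of a naively concatenated query.
user_input = "x' OR '1'='1"

# Unsafe: the input is spliced into the SQL text, so the attacker's quote
# characters become part of the query and it matches every row.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: the driver passes the value separately from the SQL text, so the
# input is always treated as data, never as SQL. Drupal's database API
# placeholders (e.g. :name) work on the same principle.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # [('alice',)] -- injection succeeded
print(safe)    # []           -- input treated as a literal string
```

The parameterized version returns nothing because no user is literally named `x' OR '1'='1`, which is exactly the point.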

3. Drupal 8 Security

How is Drupal 8 helping in building a more robust and secure website? Here are a few Drupal 8 security features - 

  • Symfony – By adopting the Symfony framework, Drupal 8 opened its doors to many more developers rather than limiting contributions to core Drupal developers. Not only is Symfony a mature, security-conscious framework, it also brought in more developers with different insights to fix bugs and create security patches.
  • Twig templates – We just discussed sanitizing inputs; with Drupal 8, much of that has already been taken care of, thanks to Drupal 8’s adoption of Twig as its templating engine. Output printed through Twig is automatically escaped, so you do not need additional filtering and escaping there. Additionally, Twig’s enforced separation between logic and presentation makes it impossible to run SQL queries from, or otherwise misuse, the theme layer.
  • More secure WYSIWYG – The WYSIWYG editor in Drupal is a great editing tool for users, but it can also be misused to carry out attacks like XSS. Following Drupal security best practices, Drupal 8 allows only filtered HTML formats. Also, to prevent users from misusing images and to prevent CSRF (cross-site request forgery), Drupal 8’s core text filtering allows users to use only local images.
  • The Configuration Management Initiative (CMI) – This Drupal 8 initiative works out great for site administrators and owners, as it allows them to track configuration in code. Any site configuration changes will be tracked and audited, allowing strict control over website configuration.
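The escaping that Twig applies automatically is easy to illustrate outside of Drupal. A minimal, language-agnostic sketch in Python (the user comment string is invented) showing what that transformation does to a script-injection attempt:

```python
import html

# A hypothetical comment submitted by a user, containing a script injection.
user_comment = '<script>alert("XSS")</script>'

# Roughly what Twig's autoescaping does to any variable it prints: the HTML
# metacharacters become entities, so the browser renders them as text
# instead of executing them as markup.
escaped = html.escape(user_comment)

print(escaped)
# &lt;script&gt;alert(&quot;XSS&quot;)&lt;/script&gt;
```

Because the `<`, `>`, and `"` characters are converted to entities, the payload is displayed verbatim on the page rather than run as a script.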

4. Choose your Drupal modules wisely

Before you install a module, make sure you look at how active it is. Are the module developers active? Do they release updates often? Has it been widely downloaded, or are you the first scapegoat? You will find all of these details at the bottom of the module’s download page. Also, keep your modules updated and uninstall the ones that you no longer use.

5. Drupal Security Modules to the rescue

Just like layered clothing works better than one thick pullover to keep warm during winter, your website is best protected with a layered approach. Drupal security modules can give your website an extra layer of security. Here are some of the top Drupal 8 security modules you should consider for your website:

Drupal Login Security –

This module enables the site administrator to add various restrictions on user login. The Drupal login security module can restrict the number of invalid login attempts before blocking accounts. Access can be denied for IP addresses either temporarily or permanently. 

Two-factor Authentication –

With this Drupal security module, you can add an extra layer of authentication after your user logs in with a user ID and password, such as entering a code that’s been sent to their mobile phone.

Password Policy –

This is a great Drupal security module that lets you add another layer of security to your login forms, thus preventing bots and other security breaches. It enforces certain restrictions on user passwords – such as constraints on length, character types, case (uppercase/lowercase), punctuation, etc. It can also force users to change their passwords regularly (a password expiration feature).
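The module's constraints are configured through the Drupal admin UI, but the kind of check it performs can be sketched in a few lines. This is an illustrative Python sketch, not the module's code; the specific rules, lengths, and messages are invented:

```python
import re

def check_password(password, min_length=10):
    """Return a list of constraint violations; an empty list means the
    password passes. The rules here are hypothetical examples of the
    kinds of constraints Password Policy lets you configure."""
    violations = []
    if len(password) < min_length:
        violations.append(f"must be at least {min_length} characters")
    if not re.search(r"[a-z]", password):
        violations.append("must contain a lowercase letter")
    if not re.search(r"[A-Z]", password):
        violations.append("must contain an uppercase letter")
    if not re.search(r"[0-9]", password):
        violations.append("must contain a digit")
    if not re.search(r"[^\w\s]", password):
        violations.append("must contain punctuation")
    return violations

print(check_password("short"))          # four violations
print(check_password("Str0ng!Enough"))  # []
```

A password expiration rule would add a date comparison on top of this, rejecting logins whose password is older than the configured maximum age.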
 

Username Enumeration Prevention –

By default, Drupal lets you know whether the username you entered exists (when other credentials are wrong) or does not exist. This is a gift to a hacker who is entering random usernames just to find one that’s actually valid. This Drupal security module prevents such an attack by changing the standard error message.
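The principle behind the module is simply that the user-facing error never reveals which part of the credentials was wrong. A hypothetical sketch in Python (the user store, plaintext comparison, and message are invented for illustration; a real system would compare password hashes):

```python
# Invented in-memory user store; real systems store salted password hashes.
REGISTERED_USERS = {"alice": "correct-horse", "bob": "battery-staple"}

def login_error(username, password):
    """Return None on success, otherwise a single generic error message."""
    stored = REGISTERED_USERS.get(username)
    if stored is not None and stored == password:
        return None
    # Same message whether the username is unknown or the password is wrong,
    # so an attacker cannot probe which usernames are valid.
    return "Unrecognized username or password."

print(login_error("alice", "wrong"))    # Unrecognized username or password.
print(login_error("mallory", "guess"))  # Unrecognized username or password.
```

Note that response timing can also leak the same information, so careful implementations keep the two failure paths as uniform as possible.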

Content Access -

As the name suggests, this module lets you give more detailed access control to your content. Each content type can be specified with a custom view, edit or delete permissions. You can manage permissions for content types by role and author.

Coder -

Loopholes in your code can also make way for an attacker. The Coder module (a command line tool with IDE support) goes through your Drupal code and lets you know where you haven’t followed best coding practices.

Security Kit -

This Drupal security module offers many risk-handling features. Vulnerabilities like cross-site scripting (XSS), CSRF, clickjacking, eavesdropping attacks, and more can be easily handled and mitigated with this Drupal 8 security module.

Captcha -

As much as we hate having to prove our humanness, CAPTCHA is probably one of the best Drupal security modules out there to filter unwanted spambots. This Drupal module prevents automated script submissions from spambots and can be used in any web form of a Drupal website.

6. Check on your Permissions

Drupal allows you to have multiple roles and users, like administrators, authenticated users, anonymous users, editors, etc. To fine-tune your website security, each of these roles should be permitted to perform only a certain type of work. For example, an anonymous user should be given the fewest permissions, such as only viewing content. Once you install Drupal and/or add more modules, do not forget to review and grant access permissions for each role.

7. Get HTTPS

I bet you already knew that any traffic transmitted over plain HTTP can be snooped and recorded by almost anyone. Information like your login ID, password, and other session information can be grabbed and exploited by an attacker. If you run an e-commerce website, this gets even more critical, as it deals with payment and personal details. Installing an SSL certificate on your server will secure the connection between the user and the server by encrypting the data that’s transferred. An HTTPS website can also increase your SEO ranking - which makes it totally worth the investment.

Sep 25 2019

TEN7-Podcast-Ep-071-Kevin-Thull-Drupal-Archivist.mp3

Summary

If you've ever watched a DrupalCamp or DrupalCon session from the comfort of your home, you likely have our guest Kevin Thull to thank. Thull has recorded almost 1,700 Drupal sessions, and he keeps looking for more ways to contribute to the Drupal community. Bonus: if you're a gear nerd, you'll love hearing about the evolution of the recording system!

Guest

Kevin Thull, Drupal Developer and Archivist

Highlights

  • Life in Chicago (Cubs fan forever!)
  • Almost artificial limbs
  • Yet another person that joined Drupal because of the helpful, welcoming community
  • How Kevin got started recording sessions
  • Failing is important!
  • Going on the road to record Camps
  • Giant red button debut
  • Iterating to find the perfect sound and audio recording setup
  • Ad hoc recording becomes the Drupal Recording Initiative
  • Midwest Open Source Alliance (MOSA) and fiscal sponsorship and insurance
  • Aaron Winborn Award
  • Contributing to community without coding

Links

Transcript

IVAN STEGIC: Hey everyone! You’re listening to the TEN7 podcast, where we get together every fortnight, and sometimes more often, to talk about technology, business and the humans in it. I am your host Ivan Stegic. My guest today is Kevin Thull, a freelance frontend developer and President of the Midwest Open Source Alliance. You may know him as the guy whose session recording kits are omnipresent at Drupal events across the globe. He’s also the 2018 recipient of the Aaron Winborn Award, an award that is presented annually to an individual who demonstrates personal integrity, kindness and above and beyond commitment to the Drupal community. Hey, Kevin. Welcome to the podcast. It’s a great pleasure to have you on.

KEVIN THULL: Thank you. It’s great to be here.

IVAN: I have so many questions. [laughing] I feel like there’s so much to explore. So maybe you’ll consider coming back if we don’t get to it all?

KEVIN: Definitely.

IVAN: Awesome. Okay, I thought we’d start with some background. So right now, you live in Chicago, and you went to the University of Illinois at Chicago. Are you a lifelong Chicagoan?

KEVIN: I am. Born and raised.

IVAN: Born and raised. So, where did life in school start for you?

KEVIN: I’m on the northwest side of Chicago. So, I went to schools in that area. As far as UIC I went there for Bioengineering. My dream was to create artificial limbs, and then I learned I’d pretty much be in school for the rest of my life. I said nope! [laughing]

IVAN: [laughing] Wow. Bioengineering. That’s amazing. What was the motivation for artificial limbs?

KEVIN: It just seemed an interesting and really useful career. At the time, I graduated college in 1989/90, so there wasn’t a whole lot of advancement at that point. So, it just seemed like a really interesting and rewarding career to go into.

IVAN: Yeah. I’ve seen those artificial limbs that are 3D printed these days.

KEVIN: Yeah, it’s incredible.

IVAN: It really is. Do they use Raspberry Pis, I think, in some cases? Or, I don’t know what it is, but it looks like there’s a really inexpensive way to get things done these days.

KEVIN: Yeah. There was an event at my old job, a sponsorship conference, and one of the speakers was one of the inventors of that 3D printing, or one of the innovators of it, and it’s just an incredible story.

IVAN: It really is. It’s kind of what technology and the internet, the original idea behind it was trying to accomplish. Right? Something that can bring the masses. Something that’s cheap and life-changing as a technology, whether it’s hardware or software, it doesn’t matter.

KEVIN: Right. Yeah.

IVAN: So, I have to ask you, since you’re in Chicago. White Sox or Cubs?

KEVIN: I grew up on the northwest side, so Cubs fan forever.

IVAN: Cubs fan. Yes.

KEVIN: I don’t really follow sports at this point anymore.

IVAN: No? Well recent World Champions, I have to give it to you that it’s probably okay to stop following it. Right?

KEVIN: Yeah. And I actually lived near Wrigley Field when that happened.

IVAN: What a beautiful ballpark.

KEVIN: Yeah. It’s wonderful. I hope they don’t change it.

IVAN: Yeah. I’m just so excited about baseball these days, given that the Twins are now number one in the league, and have the best average. We really needed that. I feel like it’s a good omen that that’s what happened to the Cubs who went to the World Series, and now maybe the Twins can do it.

KEVIN: Yeah, that’d be amazing.

IVAN: Definitely would be amazing. [laughing] So, let’s talk a little bit about Drupal. You’ve been in the Drupal ecosystem for more than 10 years with many different areas of interest and expertise from being a site builder to developer, to being involved in the community. Do you remember your first experience with Drupal?

KEVIN: Yeah. Vividly. [laughing] Drupal 6 was just shiny and new, and I was using a product to essentially build a static site - an early static site generator, just this Perl script - that let me create both a car parts website and a couple different product websites, we’ll put it at that. Since it was UI-based, the site kept timing out during the rebuilds for the owners of the sites, and telling them, “Oh, you can log in through SSH and run it there” was not an option. So, I started evaluating other systems, and it really came down to Joomla versus Drupal.

IVAN: Ooh, Joomla.

KEVIN: Right. But feature-set-wise they were similar; they looked equally capable on paper. So, I looked through support forums, because I’m not a coder by trade, I guess you can say. I can't code my way out of a paper bag is how I’ll define my programming skills. I’m good with CSS and Sass, but in terms of the rest, even though I went to engineering school, you’d think I’d be better at it. I looked at the community forums for both, and Joomla’s answers were, “Sure, we’ll help you for a bounty,” and Drupal’s answers were, “Sure, we’ll help you. Can I move in with you to help you build this thing?” Sort of that feel. So, at that point I went with Drupal.

IVAN: Yeah, it was the community that got you hooked, it sounds like.

KEVIN: Absolutely. Then I struggled because there were no migration scripts at that point, so I had to find some custom PHP to brute force it into the database, which worked, data tables were a whole lot easier in Drupal 6.

IVAN: Of course.

KEVIN: Yeah. Then I didn’t quite understand the whole contrib cycle. I was like Drupal 5 versus Drupal 6. Well Drupal 6 is new I’ll use that, and then realized I was sort of stuck waiting for contrib to follow-up. I ended up doing an Ubercart site, struggled with a make/model/year selector, and my first community event, because I had learned a lot through videos. You found videos on archive.org from past events, and that got me a long way. But then I was stuck. 

I was very, very introverted, very shy at the time. I still am a little bit. So, I committed to going to an in-person meetup. I was living in the suburbs at the time, and there was a meetup posted and it’s like, “Well ask your questions and we’ll have Jeff Eaton there because he wrote the book on building Drupal," or he’s one of the writers. I’d been listening to Lullabot podcasts. I was having celebrity anxiety. So, he showed up, and I asked my question, “How could you do this?” He’s like, “Oh, I’m so sorry, 'cause basically there is no solution for that right now.” At least I felt good that it wasn’t me. And if he can’t figure it out, then...

IVAN: Is that code still around that you wrote?

KEVIN: No. The site owner ended up migrating out to BigCommerce at some point. He had several different sites. But we had it going for a while, doing lots of imports and CSV files. So, it was a pretty intense project.

IVAN: So that interaction with Jeff Eaton, was that your first in-person involvement in some sort of a community event?

KEVIN: Yeah. It was the very first Drupal Fox Valley meetup.

IVAN: Drupal Fox Valley meetup. Is that still around?

KEVIN: They are. Yeah. I’m sad I don’t live in the suburbs. That’s one of the reasons I’m sad I don’t live in the suburbs, because it’s a pretty far west suburb but it was a great, great group. I met a lot of wonderful people there, and I count that as one of the reasons that I am where I am today, being part of that group in that community.

IVAN: So that was your first exposure to the community. Is it also the first time you started participating and organizing events as well?

KEVIN: More or less. I did some light volunteering at the Chicago Drupal Camp when it was around, but we ended up as a suburban group. It’s a decent commute. There’s a good community in the suburbs so we decided to have our own DrupalCamp Fox Valley. That was October, 2013. That’s also when I decided I was going to record the sessions, because at the time where I worked, we hosted a marketing conference where I basically was involved in recording sessions. So I’m like, well, a) I learned when I started Drupal from session recordings; b) I do this for work, so it was a no brainer in my mind to do that for events that I’m organizing.

IVAN: So, 2013, there’s the first set of sessions that you decide to record and it’s at a meetup? Or it’s at the Fox Valley Camp?

KEVIN: Yeah. We had our Fox Valley DrupalCamp, or DrupalCamp Fox Valley.

IVAN: So, did you go into that camp thinking, “Okay, I’m going to record every single session?” Or did you say, “Let’s iterate. Let’s choose one room and see how it goes?” [laughing]

KEVIN: No. I figured, I do this for work, so, we’re going to get them all, and the method was have a camcorder in the back of the room just to see when slides change, get the slide presentation from the presenter, make stills of each slide, and then kind of rebuild what would be a screen share. Because that was the process that I did at work, but it was for marketing conference.

IVAN: I see.

KEVIN: So, there were maybe 30 slides or so. At the work event, it was a union hotel, so we brought in AV to do the keynote as a live video production, but in the breakout rooms, to cut costs we just got an audio file from them. So, I would get their deck and any videos that they were playing and kind of rebuild it based on the audio and just what I call the "reference record" to see where those slides change.

IVAN: So, you actually had to rebuild every session there was on any live capture?

KEVIN: Correct.

IVAN: Alright. So that’s kind of version one?

KEVIN: Yeah. That was terrible.

IVAN: Was it? [laughing]

KEVIN: Well, there was one talk, it was like a 45-minute talk and over 100 slides, so it took like three hours to rebuild that.

IVAN: Boy, that was really time intensive.

KEVIN: Yeah, and you know, demos were lost. It’s just a completely different medium. It’s funny because friends of mine at the time were like, “Why are you investing so much time in this, in the post-production? You know, nobody’s going to watch these.” I’m like, “It’s important.”

IVAN: Yeah, and it really is important. Thank you for investing the time. It’s such an asset to the community now, I can’t even imagine what it would be without it.

KEVIN: Yeah. I never once imagined it would be what it is today. [laughing]

IVAN: I would love to know about what the next iteration was after you decide, “I can’t handle doing four hours and 100 slides for a 45-minute talk.” What’s the next iteration?

KEVIN: So, shortly after that event was the first MidCamp. That was March of 2014. We were fortunate enough to get the Drupal Association recording kits. So, the same laptops and splitters that they use at DrupalCon. Because apparently if your event falls in the window of when they’re not needed to be shipped or in shipping or en route, you can just pay the FedEx cost to borrow the equipment. So, you get this giant Pelican case, you could fit a body in it because it’s stuffed with laptops and equipment. And that was also a pretty horrible experience, because it was just a lot of setup. They didn’t work terribly well. Every once in a while, in the recording there would be a dropped frame, so you just see a one-second blue frame. So, of course, I edited those out. It was pretty low res and at the end of the event we were exhausted because it was our first one, it’s like, Oh now we have to drag this giant case and find a FedEx to send it home.

IVAN: So, were the laptops themselves doing the recording,  and you had to have your presentation on the laptop?

KEVIN: No, there was a splitter. So, the presentation computer fed into a splitter that split to the projector and to the recording MacBook. So, the MacBook was basically running capture software. And to this day that’s the same type of equipment they use at DrupalCon. So, if you go into a session at DrupalCon you’ll see off to the side a table with a laptop on it that has a note saying, “Recording. Do not touch.”

IVAN: Really.

KEVIN: And it’s just always on.

IVAN: I always wondered about that. Okay, so that’s version 2. So that’s 2014. So, what happens after that? You’ve reduced the amount of time you spend on post-production but you’re still not happy. You still want something better.

KEVIN: Yeah. So, I think it was, DrupalCon Austin was that year, so March is when we did the laptops. Went to DrupalCon Austin. We actually met with the people who produce the videos and I was brainstorming with a fellow organizer, there’s got to be some sort of solution that’s lightweight, inexpensive, device agnostic and no drivers. We kind of came up with this base requirements list and started looking, and it was really difficult to find stuff, because it turns out this is a very lucrative industry, recording events. They don’t want to give away their methods. Even the prosumer-level equipment is really expensive. So, I found this device that the intended market is to record your console gameplay, so it’s HDMI in and out, it records to a thumb drive and it’s got an audio mixer so it can pick up the gameplay audio from your console and then also your commentary through a headset. And it has a standalone mode, because lot of those console gameplay systems require you to either hook to a console where there’s some sort of interface or attach to a PC so you can run it through software. But it’s the only one that had a standalone mode. So, I’m like, well let me try it. So, I bought one and it worked. So, the second DrupalCamp Fox Valley, which was then later in 2014, was where that kit first debuted.

IVAN: Wow. And what was the cost of the kit at the time? Do you remember?

KEVIN: At that time, they came with a lav mic. So, just the unit itself was, I want to say, $180.00 plus dongle, so maybe low $200 per kit, which is really cheap.

IVAN: That’s really reasonable, yeah. Plus, you have to supply the thumb drive, right?

KEVIN: Yeah. When I think of cost per kit, that’s equipment plus dongles plus recording gear. But we had issues where if you had multiple presenters, you’re handing around this lav mic, which is not a great way to deal with it, and every once in a while there was no audio on the recording. So, you’ve got the screen recording, no problems, but it’s silent. So those were lost. That’s when I decided to add in the Zoom voice recorder, which serves as the mic, but also records to an SD card.

IVAN: So that’s the omnidirectional mic that’s hooked up and right next to the console?

KEVIN: Yeah.

IVAN: Okay, so that’s the next version after the Fox Valley Camp?

KEVIN: Yeah. That was all exciting. It was promising. I got most of the recordings for Fox Valley. I was going to BADCamp that year, so that was September 2014, BADCamp was San Francisco, would’ve been late October. I just wanted to show off the kits and they’re like, “Well can you actually record some sessions?” I’m like, “Okay, sure.” I probably had two or three at the time. So, I brought them with me, they’re compact, and I recorded sessions and failed miserably.

IVAN: Really?

KEVIN: I think I caught maybe two out of 20 or 30 that I tried.

IVAN: What was the main issue?

KEVIN: So, this time they were bus powered. So, it would plug into the presenter's laptop.

IVAN: So, the assumption was there’s enough juice coming out of the laptop that’ll actually give you consistent power to power it.

KEVIN: Well there’s juice, but if that power gets interrupted before the file is written, then you get a zero K file.

IVAN: Oh, no!

KEVIN: Right. And generally, with equipment it loses the connection; it writes the file, powers down. Not so with this one. So, then that was wonderful and terrifying. It’s like, Okay, good, failing is important.

IVAN: Absolutely.

KEVIN: Right. Because if it’s working you don’t know how to break it, and therefore you don’t know how to fix it.

IVAN: Exactly.

KEVIN: So that was late 2014. March 2015 MidCamp No. 2 is coming up and I was a little scared, because it worked and then it failed, and here we go again. And, MidCamp was a success. So, it’s like, okay, great. What this tells me is I need to take these things on the road and just get more variables into the equation. So, shortly after MidCamp, I sent out a Tweet saying, “Hey Camps, if you’ll cover my airfare and hotel, I’ll record your Camp.” And St. Louis and Twin Cities took me up on it right away.

IVAN: Yeah, we did. It was a no-brainer for Twin Cities DrupalCamp. We were like, “Oh, Kevin’s going to record it?” I think I remember voting yes on that request. I’m like, “Yeah, absolutely. Bring him. We’ll do it.”

KEVIN: Yeah, so that was also terrifying, because like, Oh, now this is someone else’s money. But by and large it worked really well. In St. Louis, I had 100% capture. So, like, great, this is good. But over time just various variables helped me to iterate on the kit, or whether that’s documentation—because at BADCamp one year, there’s no time between sessions, and you’ve got six rooms over four buildings, and you’re the only one doing it. It’s like, “Okay, I guess I’m going to make instructions and put them at the podium because I’m not going to be there.” And it worked mostly.


IVAN: The giant red button, when did that make its debut?

KEVIN: The red button was part of what I call the beta kit, the 2014 Fox Valley version. So, it was the camcorder, there was the Drupal laptops, the DA laptops and then the Big Red Button. So that was early on the process and it’s just been a matter of smoothing out the whole bit.

IVAN: So, you took it on the road, you got different variables for the kit. Did the kit stay very similar after your beta process or did you change anything major after you were done with Twin Cities and St. Louis?

KEVIN: The bulk of the changes were adding in redundancies and taking out other variables. So, I added in the digital voice recorder, but then I got a remote for it. So, you hit the red button on the video, you’d hit the button on the audio record, and then when you’re done you stop both. Then so many times people forget to do the audio record, and then I realized, why don’t I just leave this thing to record all day long and take that out of the equation. One less thing for presenters to think about. And that worked.

There have been times, and sort of been the bane of my existence for a bit, because invariably someone would bump the power, or they’d turn off the power strip that everything’s attached to. Well, the video recorder powers on automatically. The audio recorder has to be turned on once it’s got power. So, I would lose it that way. Now there’s batteries in there, so there’s a failover. I now discovered there’s a hold switch so you can’t accidentally stop the recording, which has happened before. So, audio has become pretty solid in terms of capturing it.

Then just accidents. I had a four-port USB power, because one of the AV guys, when BADCamp failed miserably, he’s like, “Maybe there’s not enough USB power for some of this stuff.” For the laptops we’ll get a separate powered hub. So, I did but I thought I had to plug it into the laptop, and so, if the laptop went to sleep, it turned off power to the recorder. In one session I accidentally forgot to plug it in, and I think I know who the person is who, what I call happy accidents, her laptop went to sleep, and her recording continued. I'm like, “Oh, it’s because it’s not plugged in.” Because it’s not plugged into the laptop. So, it didn’t perceive that as a signal loss. Yeah, so, just lots of documentation and happy accidents throughout the years.

IVAN: And are you now at a final version of the rig, or do you have additional changes you want to make for the future?

KEVIN: The issues are currently, for whatever reason, if it’s not Mac OS, even though there’s a voice mixer, an audio mixer in the unit, there’s still no audio on the screen record. So, I don’t know if somehow rather than dubbing the non-audio from the presentation plus the spoken audio from the presenter, rather than mixing them, it’s completely wiped out. So, I had some time before someone’s session who historically had no audio, so I’m like, “Let’s look at your audio system and pick whatever is not chosen.” I assumed that it was choosing HDMI, and we would have to set it to headset. But it was set to headset. I’m like, “Let’s choose HDMI.” Then it worked. So, it’s like, “Oh, that’s cool.” But it’s still not 100%. Either you choose it, there will still be no audio, or it’ll be bad audio that has to be replaced. But it’s improved.

IVAN: I’m just amazed at the speed at which you get these sessions turned around and available online. What’s the secret to doing that?

KEVIN: The unit records to an .mp4 file on the thumb drive that’s attached to it. So, assuming you’ve got good audio, you already have a compressed file to upload to YouTube. So, as long as I go in, like any large break I’ll swap out media, see what I’ve got, fix anything that needs fixing and upload it, assuming then it has decent internet.

IVAN: So, really, you’re not doing any postproduction. That rig does it all for you.

KEVIN: Ideally, right. Yeah. When it works, it works really well. There are some small fixes. I’ve got enough experience where I’ve gotten quick at it. I think my most challenging was last year at DrupalCamp Montreal, it was a completely French spoken session that had no audio. So, trying to time that was tough.

IVAN: Oh, yeah. [laughing]

KEVIN: [laughing] I don’t speak French. So, eventually I figured it out.

IVAN: And what do you think the Achilles heel is with the whole system?

KEVIN: People. And that’s my next focus. When I’m doing it, I get close to 100% capture, pretty consistently. When others are doing it, it’s generally 80% or less, which I’ve learned is still okay. Because it means there are other people doing it, I’m not the blocker. But also, it’s just a matter of presence and making sure that everything’s being checked and rechecked. That a) you’re connecting the presenter laptops; and b) when sessions start, you’re verifying that the recording is recording. You still may lose the one or two that way, but it’s really just a matter of finding the people who care enough to make sure that it’s as successful as possible for any event that they’re managing equipment.

IVAN: How many sessions do you think you’ve recorded since you started in Fox Valley?

KEVIN: I do keep track.

IVAN: You do? So, this is not a guess. Okay. How many. What are we up to?

KEVIN: 1,646 total.

IVAN: Wow.

KEVIN: Although there’s more than that, because I don’t have numbers from Chattanooga. That includes sessions I’ve captured plus sessions—I call them proxy captures. So I now will send equipment to Camps through FedEx. And with instruction documentation. If needed I’ll do a video call with them to kind of go over how the kit works, troubleshooting stuff.

IVAN: And are the kits still around $250? Or has that changed?

KEVIN: So, by adding in the voice recorder that all totals about $450 per setup, which is still relatively affordable. I can get eight of them into a carryon-size Pelican case.

IVAN: That’s great.

KEVIN: They’re portable. They’re lightweight.

IVAN: Yeah. Wow. The quality of the recordings is nothing to be ashamed of. They’re all HD, the audio’s great. I don’t know how you get such great audio. You even get the questions from the auditorium as well.

KEVIN: That’s the audio recorder. I just have it set to multichannel, I think the auto gain and meeting is the setup. The preset. So, it does a good job of it.

IVAN: Yeah. I’m just so proud and amazed and you should be commended at every chance you can get, because this is such an amazing service and such high quality. It’s just amazing to see.

Am I right in saying that you started something called the Drupal Recording Initiative?

KEVIN: Yes.

IVAN: Tell me about that. What is that?

KEVIN: Yeah, this is a funny story. DrupalCorn Camp in Iowa last year, I was very happy to be able to record it. It was one of the first non-Chicago Camps I went to in 2014-2015, but they always had a way to record their sessions. So, I was never going to record theirs, even though I wanted to. This last year they reached out, I guess they didn’t have their typical contact and they wanted me to record it. So, I’m like, “Yes, absolutely.”

Then Matt Westgate of Lullabot, he gave a keynote and either right before or after that, he just nonchalantly asked me, “So, how’s the recording initiative going?” In my mind I’m like, “Oh wow, you just named this thing.” Because forever I was just like, “I’m just recording stuff.” So, it immediately got an upgrade. So, I had to kind of figure out what that was.

So I tried to do a year-end blog post to say how it’s gone for the year, do a little reporting, and the DA [Drupal Association] reached out to me after that, because this past year I realized that a) because this is bigger than just me, I need to start mentoring people, and so they offered to let me do a guest blog post on the DA’s blog so that it would amplify that. So, I’m like, great what am I going to write?

So, I came up with the initiative. Basically, broke it down into various buckets, like training and mentorship, expanded coverage, improved documentation, funding organization, content discoverability, and that was just basically December of last year. Now it’s just a matter of a three to five-year roadmap.

IVAN: So, this is quite recent. So, this is the end of last year you’re through about six months of it. How’s it going?

KEVIN: Surprisingly well. I think it just goes to show when you create a plan, you’ll start…

IVAN: …you’ll start working on the plan.

KEVIN: Yeah. If you don’t have a plan, you’re not going to achieve results. If you have a plan, you have a roadmap and things to shoot for.

IVAN: And how do we find out more about the Drupal Recording Initiative?

KEVIN: One of the items was open accounting, and in order to do that I put it on Open Collective, whether it links through show notes or something. But if you search "Drupal Recording Initiative," you’ll pretty much find it on Open Collective, and I’ve got the entire initiative spelled out there.

IVAN: Excellent. We’ll link to it in the transcript and the show notes of this podcast episode, so you can keep track of it there. But, it’s on opencollective.com, and as you said, if you do a search for "Drupal Recording Initiative" it should be one of the first results. And I think it was for me.

KEVIN: Excellent. So, it’s working. [laughing]

IVAN: [laughing] So, it’s working. This is actually a really good segue into a question I had about the Midwest Open Source Alliance. It did say on the Recording Initiative’s webpage that it’s hosted by the Midwest Open Source Alliance. What is MOSA? I’m sure that’s what you call it. Right?

KEVIN: It is what we call it.

IVAN: Okay. What is MOSA?

KEVIN: Yeah. MOSA was born out of the fact that the Drupal Association used to provide fiscal sponsorship for events, primarily in the U.S. They ended that program with the recommendation to transfer over to Open Collective, because they can be your fiscal sponsor. The DA took 10%, which went to the Drupal project. Great. Open Collective was going to take 10% and fund open source in general, also good, but they were going to take 10% on that initial deposit in addition.

So, as an event, we had already paid our 10% to the DA, so we were going to lose another 10% just to transfer. I wasn’t okay with that, especially because we didn’t know anything about Open Collective. So, that felt like a big jump to me. And there’s still issues like insurance, and getting sales tax exemption in Chicago is an issue.

So, some of the issues that we had when the DA was running this sponsorship program were not going to be fixed by moving to Open Collective. So, some of the MidCamp organizers got together, and we had been talking about this for a while, and that was the impetus to form our own nonprofit.

IVAN: And so, the Midwest Open Source Alliance is a federally recognized nonprofit, and you behave the same way that the Drupal Association did. You are fiscal sponsors for camps. I know that Twin Cities DrupalCamp uses you right now.

KEVIN: Yeah. It was primarily a solution for MidCamp, but we realized that if we could fix this for one, we could fix it for more. We tried to keep the scope smaller, geographically by Midwest, but also open the scope and just call it open source.

IVAN: And are you the fiscal sponsor and the insurance and everything else that a camp needs? Like the Open Collective and like the Drupal Association was to us?

KEVIN: That’s the intent. We’re still working on the insurance part. For any camp to be part of MOSA we have to designate an at-large board member. So, in this case that was Dan Moriarty. So, he then is a representative of MOSA, so he can sign insurance using MOSA’s name. So, it’s not his name or his company. I didn’t even know that was a problem until I heard about event organizers being sued, because of something on their website.

IVAN: That’s awful.

KEVIN: Yeah. So, this is important. Even with the DA, at one point they provided insurance, and then they realized they couldn’t because it really wasn't part of their structure.

IVAN: The liability.

KEVIN: Yeah. So then here I am buying event insurance under my own name.

IVAN: Ouch.

KEVIN: Which is terrifying.

IVAN: Yeah, that is terrifying.

KEVIN: You do what you can to get your Camp out.

IVAN: Right. And how is MOSA funded? Is it also through a percentage that the members paid?

KEVIN: Yeah. So, we’re taking 5% from events, and that’s been enough because it’s all volunteer run. We take 0% from initiatives, so donations. For the Recording Initiative I do pay a 5% platform fee to Open Collective, but no additional cost. Because Open Collective itself is not a fiscal sponsor. There are fiscal sponsors on Open Collective. MOSA is now one of those, and the fiscal sponsor decides what percent they’ll take. So, for camps we don’t organize that through Open Collective, so that way we can get 5% to help keep the lights on. But for initiatives, we don’t need to take anything.

IVAN: And you talked about a plan for the Drupal Recording Initiative. What kind of a plan is there for MOSA? What are you guys hoping to achieve in the next few years?

KEVIN: We’ve got a project board on GitHub mostly to sort of finish. We’re building the bike as we’re riding it. It’s like, oh we have to create a nonprofit and run an event. And, oh, Twin Cities is actually going to be our next event, so now we have to figure that out. So, we’re getting documentation and things of that nature, hashing out insurance. We want year-long insurance for Board members, but also how to cover all volunteers of an event during the phase of the event. So, in theory, events don’t need event insurance. MOSA’s insurance would cover it, in theory. There’s a lot of time to talk to a lot of people to get a lot of quotes.

IVAN: Well, you guys are doing a wonderful job, so I wish you all of the best of luck for MOSA. I know that it felt like TC Drupal was looking for something like MOSA, and I’m just glad that we’re in the Midwest, and we’re able to take advantage of the Open Source Alliance.

KEVIN: Yeah, I’m glad it worked out.

IVAN: So, I think the last thing I want to talk to you about is the Aaron Winborn Award. Last year in 2018 in recognition of this incredible service you’ve been providing to our community, you received the Aaron Winborn award. What an honor to receive that. How did that make you feel?

KEVIN: It was incredibly humbling. I’m definitely not here for anything but to give back. So, to have to stand up and thank people. I understand that people really appreciate what I’m doing, but I’m not here for that. I’m here to just make videos available. So, it’s hard to go up there. I’m a very much behind-the-cameras kind of guy. It was wonderful.

IVAN: It was wonderful to see you accept that.

KEVIN: Thanks. Yeah.

IVAN: Did it change your approach to how and what you’re doing? Did it make it more intense? Did it change anything about your approach?

KEVIN: I don’t think so. I think, if anything, more people know me. [laughing] So I’m now Drupal famous.

IVAN: [laughing]

KEVIN: But aside from that, I’d say no.

IVAN: Well, it’s just wonderful to see. You’re just such a great example of how you can contribute to the community without writing a single line of code. Right?

KEVIN: Well, that’s the whole point.

IVAN: You’re a front end developer. You’ve written code. You’ve got patches in there, but you get an award for not writing code. So, that’s just a testament. So, what do you think your advice would be to those who just joined the Drupal community, or even to any open source community who maybe are not developers or who are young developers, or who just started writing code, maybe they’re afraid to show what they’ve written? What would your advice be to them about wanting to contribute?

KEVIN: Just, if you’re passionate about giving back to a community that you’re getting benefit from, don’t let the fact that you’re not maybe working on core module development, don’t let that stop you. There are so many ways that are either technical-lite or non-technical to give back. Documentation would be a great example for Drupal, because it’s still a sticking point. Plenty of opportunity to contribute there. But, at events you always need "day of" volunteers. There are plenty of non-standard ways to get involved. And also especially to bring in any past experience you have. I did video work, that’s not at all Drupal related, but look how big of an impact it’s made.

IVAN: Kevin, thank you so much for spending your time with me today on the podcast. It’s been a pleasure talking to you.

KEVIN: Well, thank you for having me.

IVAN: Kevin Thull is a freelance frontend developer and President of the Midwest Open Source Alliance. You can find him on Twitter @kevinjthull and on Drupal.org @kthull. And we'll have those in the show notes and in the transcription on the website. You’ve been listening to the TEN7 Podcast. Find us online at ten7.com/podcast. And if you have a second, do send us a message. We love hearing from you. Our email address is [email protected]. And don’t forget, we’re also doing a survey of our listeners. So, if you’re able to, tell us about what you are and who you are, please take our survey as well at ten7.com/survey. Until next time, this is Ivan Stegic. Thank you for listening.

Sep 25 2019
Sep 25

One of our members recently asked this question in support:

Wonder if you have, or can suggest, a resource to learn how to access, authenticate (via OAuth preferably) and process JSON data from an external API?

In trying to answer the question I realized that I first needed to know more about what they are trying to accomplish. As with most things Drupal, there's more than one right way to accomplish a task. Choosing a solution requires understanding what options are available and the pros and cons of each. This got me thinking about the various ways one could consume data from an API and display it using Drupal 8.

The problem at a high level

You've got data in an external service, available via a REST API, that you need to display on one or more pages in a Drupal site. Perhaps accessing that data requires authentication via OAuth2 or an API token. There are numerous ways to go about it. Which one should you choose? And how should you get started?

Some questions to ask yourself before you start:

  • How much data are we talking about?
  • How frequently does the data you're consuming change, and how important is it that it's up-to-date? Are real-time updates required? Or is a short lag acceptable?
  • Does that data being consumed from the API need to be incorporated into the Drupal-generated pages' HTML output? How does it impact SEO?
  • How much control does a Drupal site administrator need to have over how the data is displayed?

While I'm certain this list is not exhaustive, here are some of the approaches I'm aware of:

  • Use the Migrate API
  • Create a Views Query Plugin
  • Write a custom service that uses Guzzle or similar PHP SDK via Composer
  • Use JavaScript

I'll explain each one a little more, and provide some ideas about what you'll need to learn in order to implement them.

Option 1: Use the Migrate API

Use the Migrate API combined with the HTTP Fetchers in the Migrate Plus module to ingest data from an API and turn it into Drupal nodes (or any entity type).

In this scenario you're dealing with a data set that doesn't change frequently (a few times per day, maybe), and/or it's okay for the data displayed on the site to lag a little behind what's in the external data service. This approach is somewhat analogous to using a static site generator like Gatsby or Sculpin, which requires a build to occur in order for the site to get updated.

In this case that build step is running your migration(s). The result is you'll end up with a Drupal entity for each record imported that would be no different than if a user had created a new node by filling out a form on your Drupal site. In addition, you get the complete extract, transform, load pipeline of the Migrate API to manipulate the ingested data as necessary.
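To make this concrete, here is a minimal sketch of what such a migration definition might look like, using the Migrate Plus url source plugin with its HTTP fetcher and JSON parser. The endpoint URL, the `data` item selector, and the field names are all hypothetical placeholders for whatever your API actually returns:

```yaml
# migrate_plus.migration.example_articles.yml (hypothetical example).
id: example_articles
label: 'Import articles from a remote API'
source:
  plugin: url
  data_fetcher_plugin: http
  data_parser_plugin: json
  urls: 'https://api.example.com/articles'
  # Path within the response JSON where the record list lives.
  item_selector: data
  fields:
    - name: id
      label: 'Record ID'
      selector: id
    - name: title
      label: 'Title'
      selector: title
  ids:
    id:
      type: string
process:
  title: title
destination:
  plugin: 'entity:node'
  default_bundle: article
```

Running `drush migrate:import example_articles` is then the "build step": each API record becomes (or updates) an article node. Migrate Plus also ships authentication plugins, configured under the source's `authentication` key, for cases like HTTP Basic Auth or OAuth 2.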

Pros:

  • If you've worked with Migrate API before, this path likely provides the least friction
  • Data is persisted into Drupal entities, which opens up the ability to use Views, Layout Builder, Field Formatters, and all the other powerful features of Drupal's Entity & Field APIs
  • You can use Migrate API process plugins to transform data before it's used by Drupal
  • Migrate Plus can handle common forms of authentication like OAuth 2 and HTTP Basic Auth

Cons:

  • Requires a build step to make new or updated data available
  • Data duplication; you've now got an entity in Drupal that is a clone of some other existing data
  • Probably not the best approach for really large data sets

Learn more about this approach:

Option 2: Create a Views Query Plugin

Write a Views Query Plugin that teaches Views how to access data from a remote API. Then use Views to create various displays of that data on your site.

The biggest advantage of this approach is that you get the power of Views for building displays, without the need to persist the data into Drupal as entities. This approach is also well suited for scenarios where there's an existing module that already integrates with the third-party API and provides a service you can use to communicate with it.
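As a rough sketch (not a complete implementation), a query plugin extends Views' QueryPluginBase and fills in the view's result set itself. The plugin ID, module name, and the `my_module.api_client` service used here are hypothetical:

```php
<?php

namespace Drupal\my_module\Plugin\views\query;

use Drupal\views\Plugin\views\query\QueryPluginBase;
use Drupal\views\ResultRow;
use Drupal\views\ViewExecutable;

/**
 * Views query plugin that reads rows from a remote API.
 *
 * @ViewsQuery(
 *   id = "remote_api",
 *   title = @Translation("Remote API"),
 *   help = @Translation("Query a remote REST API instead of the database.")
 * )
 */
class RemoteApi extends QueryPluginBase {

  /**
   * {@inheritdoc}
   */
  public function execute(ViewExecutable $view) {
    // A hypothetical service (see Option 3) wraps the HTTP calls.
    $records = \Drupal::service('my_module.api_client')->fetchAll();
    $index = 0;
    foreach ($records as $record) {
      $row = new ResultRow((array) $record);
      $row->index = $index++;
      $view->result[] = $row;
    }
  }

  /**
   * There is no SQL to assemble, so table/field handling can be a no-op.
   */
  public function ensureTable($table, $relationship = NULL) {
    return '';
  }

  public function addField($table, $field, $alias = '', $params = []) {
    return $field;
  }

}
```

You would pair this with a `hook_views_data()` implementation that declares a virtual "table" backed by this query plugin, plus field definitions for each API property you want to expose to the Views UI.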

Pros:

  • You, or perhaps more importantly your editorial team, can use Views to build a UI for displaying and filtering the data
  • Displays built with Views integrate well with Drupal's Layout Builder and Blocks systems
  • Data is not persisted in Drupal and is queried fresh for each page view
  • Can use Views caching to help improve performance and reduce the need to make API calls for every page load

Cons:

  • Requires a lot of custom code that is very specific to this one use-case
  • Requires in-depth understanding of the underpinnings of the Views API
  • Doesn't allow you to take advantage of other tools that interact with the Entity API

Learn more about this approach:

Option 3: Write a Service using Guzzle (or similar)

Write a Guzzle client, or use an existing PHP SDK to consume API data.

Guzzle is included in Drupal 8 as a dependency, which makes it an attractive and accessible utility for module developers. But you could also use another similar low-level PHP HTTP client library, and add it to your project as a dependency via Composer.

Guzzle is a PHP HTTP client that makes it easy to send HTTP requests and trivial to integrate with web services. --Guzzle Documentation

If you want the most control over how the data is consumed, and how it's displayed, you can use Guzzle to consume data from an API and then write one or more Controllers or Plugins for displaying that data in Drupal. Perhaps a page controller that provides a full page view of the data, and a block plugin that provides a summary view.

This approach could be combined with the Views Query Plugin approach above, especially if there's not an existing module that provides a means to communicate with the API. In this scenario, you could create a service that is a wrapper around Guzzle for accessing the API, then use that service to retrieve the data to expose to views.

If you need to do anything other than GET (POST, PUT, etc.) from the API in question, you'll almost certainly need to use this approach. The above two methods deal only with consuming data from an API.
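Here's a minimal sketch of such a wrapper service. The `my_module` namespace, endpoint URL, and the token passed in from configuration are assumptions; Drupal's `http_client` service (a Guzzle client) would be injected via the service container:

```php
<?php

namespace Drupal\my_module;

use GuzzleHttp\ClientInterface;
use GuzzleHttp\Exception\GuzzleException;

/**
 * Hypothetical wrapper around the http_client service for one remote API.
 */
class ApiClient {

  /**
   * @var \GuzzleHttp\ClientInterface
   */
  protected $httpClient;

  /**
   * @var string
   */
  protected $token;

  public function __construct(ClientInterface $http_client, $token) {
    $this->httpClient = $http_client;
    $this->token = $token;
  }

  /**
   * Fetches and decodes a list of records from the remote API.
   */
  public function fetchAll() {
    try {
      $response = $this->httpClient->request('GET', 'https://api.example.com/articles', [
        // Simple bearer-token auth; a full OAuth2 flow is typically
        // handled with Guzzle middleware instead.
        'headers' => ['Authorization' => 'Bearer ' . $this->token],
      ]);
      return json_decode((string) $response->getBody(), TRUE);
    }
    catch (GuzzleException $e) {
      // Fail soft rather than breaking page rendering.
      return [];
    }
  }

}
```

Registered in my_module.services.yml with '@http_client' as an argument, this service could then be injected into a page controller, a block plugin, or the Views query plugin from Option 2.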

Pros:

  • Able to leverage any existing PHP SDK available for the external API
  • Some of the custom code you write could be reused outside of Drupal
  • Greatest level of control over what is consumed, and how the consumed data is handled
  • Large ecosystem of Guzzle middleware for handling common tasks like OAuth authentication

Cons:

  • Little to no integration with Drupal's existing tools like Views and others that are tailored to work with Entities

Learn more about this approach:

Option 4: JavaScript

Use client-side JavaScript to query the API and display the returned data.

Another approach would be to write JavaScript that does the work of obtaining and displaying data from the API, then integrate that JavaScript into Drupal as an asset library. A common example of something like this is a weather widget that displays the current weather for a user, or a Twitter widget that displays a list of the most recent Tweets for a specific hashtag.

You could also create a corresponding Drupal module with an admin settings form that would allow a user the ability to configure various aspects of the JavaScript application. Then expose those configuration values using Drupal's JavaScript settings API.

While it's the least Drupal-y way of solving this problem, in many cases this might also be the easiest -- especially if the content you're consuming from the API is for display purposes only and there is no reason that Drupal needs to be aware of it.
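A small sketch of what that might look like, split into a pure rendering helper and a Drupal.behaviors wrapper so the widget initializes on page load and after AJAX updates. The `.api-widget` selector, the `drupalSettings.myModule.endpoint` setting, and the `title` field are all assumptions for illustration:

```javascript
// Hypothetical helper: turn an array of API records into list markup.
function renderItems(items) {
  return '<ul>' +
    items.map(function (item) { return '<li>' + item.title + '</li>'; }).join('') +
    '</ul>';
}

// Guarded so the helper also works outside a Drupal page.
if (typeof Drupal !== 'undefined') {
  Drupal.behaviors.apiWidget = {
    attach: function (context) {
      var el = context.querySelector('.api-widget');
      if (!el || el.dataset.apiWidgetProcessed) {
        return;
      }
      el.dataset.apiWidgetProcessed = 'true';
      // The endpoint is exposed through drupalSettings by a hypothetical
      // module's admin settings form.
      fetch(drupalSettings.myModule.endpoint)
        .then(function (response) { return response.json(); })
        .then(function (items) { el.innerHTML = renderItems(items); });
    }
  };
}
```

The placeholder element and the settings would come from the module's asset library and `#attached` settings in a render array.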

Pros:

  • Data is consumed and displayed entirely by the client, making it easier to keep up-to-date in real time.
  • Existing services often provide JavaScript widgets for displaying data from their system in real time that are virtually plug-and-play.
  • Code can be used independent of Drupal.

Cons:

  • No server-side rendering, so any part of the page populated with data from the external API will not be visible to clients that don't support JavaScript. This also has potential SEO ramifications.
  • Drupal has no knowledge of the data that's being consumed.
  • Drupal has little control over how the data is consumed, or how it's displayed.

Learn more about this approach:

Honorary mention: Feeds module

The Feeds module is another popular method for consuming data from an API that serves as an alternative to the Migrate API approach outlined above. I've not personally used it with Drupal 8 yet, and would likely use the Migrate API based on the fact that I have much more experience with it. Feeds is probably worth at least taking a look at though.

Conclusion

There are a lot of different ways to approach the problem of consuming data from an API with Drupal. Picking the right one requires first understanding your specific use case, your data, and the level of control site administrators are going to need over how it's consumed and displayed. Remember to keep in mind that turning the data into Drupal entities can open up a whole bunch of possibilities for integration with other aspects of the Drupal ecosystem.

What other ways can you think of that someone might go about solving the problem of consuming data from an API with Drupal?

Sep 25 2019
Sep 25
“Drupal is here to stay, it's only getting bigger with the scale of engagements we are in, our wish is for India to Choose to Lead.” - Drupal India Association

“What is the most resilient parasite? Bacteria? A virus? An intestinal worm? An idea. Resilient... highly contagious. Once an idea has taken hold of the brain it's almost impossible to eradicate. An idea that is fully formed - fully understood - that sticks; right in there somewhere.” This is a dialogue from Christopher Nolan’s Inception (2010) that is congruous with different scenarios of life where you are looking forward to new beginnings and working towards that. An idea can make you ponder over a plethora of options to make something great happen. Drupal India Association (DIA) is also a result of the work of brilliant people and their visionary ideas.



Like the Drupal Association, which helps the Drupal community across the globe build, secure and promote Drupal in addition to providing funding, online collaboration, infrastructure and education, there was definitely great value in forming a national-level association in India. Channelising funds for events, acting as a bank of thought leaders, and preventing scheduling conflicts would all require a central body. This is exactly what led to the formation of Drupal India Association.

Floating an idea: How DIA came to fruition


The discussions on forming DIA were happening as early as 2012. The idea was to have a central organisation that has an India-wide presence and recognition. The key areas that such a central body would address are:

  • Promotion: Whether you need to organise Drupal-related events (DrupalCamps, DrupalCon, Drupal Training etc.) in India or want to know where should you advertise the events, it can all be streamlined with the presence of a central organisation. You will have access to a wonderful group of thought leaders from the Drupal community of India who can answer your questions related to Drupal promotion. In short, this will be essential to engage the open-source community within India and help the Drupal community in India grow even bigger.
  • Funding: Such a central body can also help simplify the funding process that is imperative to organise large Drupal-related events.
  • Schedule: The window for different Drupal-related events to be scheduled can be easily decided. The question of two or more Drupal events happening concurrently is nullified.

It was only in 2018 when the resolve to plan for a regional chapter strengthened. This was the time when the Drupal community in India came together to chalk out the action plan.


The interest among the Drupal community members in India was palpable.


Efforts started bearing fruit in 2019 when everything fell into place. At DrupalCamp Delhi 2019, the Drupal India Association was formally announced as the newly formed organisation.


Synergy developed among thought leaders from various agencies, including Vidhatanand (Chief Engagement Officer at OpenSense Labs).

Representatives of different agencies meeting at DrupalCamp Pune 2019 to discuss DIA


There is hope that the Drupal India Association will inspire more local chapters, and the Drupal community is already looking forward to many more associations along similar lines.


The Vision

After all the brainstorming and insightful discussions, DIA is finally here, and it is here with a mission. Be it marketers, agencies, or developers, it has something to offer everyone.

Agencies

The primary vision of the Drupal India Association is to provide value for its member organisations and the Drupal community in India. DIA’s emphasis will be on boosting digital innovation using Drupal and enabling more agencies to innovate with Drupal. DIA will be steadfast in its goals of identifying tech events where it can participate and renting a large booth in which every member organisation can take part.

Marketers

Popularising Drupal in India and setting an example for the rest of the world is one of DIA’s objectives. With DIA’s help, marketers will be able to change the way people look at India when it comes to Drupal development. DIA will also pave the way for India to have a colossal influence over the Gulf and ASEAN (Association of Southeast Asian Nations) regions. Cities in India that were never on the radar of the Drupal community will now be holding DrupalCamps and meetups. DIA will be responsible for preparing a calendar of events with the aim of promoting Drupal across different cities in India.

Developers

The Drupal India Association’s objective is to grow the Drupal contributions coming from India, and it will keep working towards that goal to make a huge impact.

Conclusion

From an idea in its incipient stages to a central body, the Drupal India Association has come a long way, and it still has a lot to look forward to. A massive country like India shows a lot of promise when it comes to increasing Drupal adoption among agencies, making Drupal even stronger, and leading the way. The Drupal India Association is committed to making it all happen.

Ping us at [email protected] to know more about Drupal, its remarkable merits and how you can make your invaluable contributions to the growth of Drupal.

Sep 25 2019
Sep 25

For e-commerce sites offering training or events, an extremely useful feature is letting visitors subscribe to a given training or event so they are notified as soon as a new session or date becomes available. The benefit is twofold: the user receives a real-time notification as soon as a new session is available, and the e-commerce site learns how much interest its various events and training courses generate, which can encourage it to strengthen certain products over others; in other words, to respond to demand as it is expressed.

This is the main objective of the Commerce Product Reminder module, which we will discover here.

The configuration of the module is quite simple. Its main concept is to provide a subscription form on Drupal Commerce Product entities, allowing users to subscribe with an email address, and then to notify all subscribers as soon as a new variation of the product in question is published (or an existing unpublished variation is published again).

Module configuration

The module offers several general configuration options:

Commerce Product Reminder general settings

So we can:

  • Disable the sending of notification emails if necessary
  • Use background tasks to send notification mails (recommended option)
  • Log the sending of each notification
  • And finally, select the different product types of Drupal Commerce on which to activate the subscription form.

The module configuration options also allow you to customize the various text elements of the subscription form as well as those of the notification emails sent.

Commerce Product Reminder form settings

The various configurable text elements of the subscription form are:

  • An introductory text for the subscription form
  • The label of the form submission button
  • The subscription confirmation message
  • And finally an introductory text on the page allowing anonymous visitors to manage their different subscriptions (The link to this page is automatically available on all notification emails sent to subscribers)

Commerce Product Reminder mail settings

The configurable text elements for notification mails are:

  • The sender email (leave blank to default to the site's main email address)
  • The body of the mail (tokens associated with the Product and Product Variation entities are available)
  • The subject of the email

Once these different elements have been configured, all that remains is to activate the subscription form on the different product types, on the relevant view mode (generally the Full view mode).

Commerce Product Reminder Extra field

And now all anonymous visitors can subscribe to receive a notification as soon as a new variation is published related to the product in question.

Commerce Product Reminder subscription form

Extending the module's functional scope

The module's functional scope is, above all, to notify users of a new variation published on a Drupal Commerce product. Its main contribution is storing subscribers, giving them access to a page where they can manage their subscriptions as anonymous visitors, and handling the sending of notification emails.

The logic that determines when notification mails are sent ultimately represents a very small part of the module and can easily be modified by a Drupal 8 developer, for example if you want to trigger notifications based on any other property or field of a product variation.

It is enough to implement an EventSubscriber that reacts to the product variation events propagated by Drupal Commerce itself, or to override the EventSubscriber used by the module.

use Drupal\commerce_product\Entity\ProductVariationInterface;
use Drupal\commerce_product\Event\ProductEvents;
use Drupal\commerce_product\Event\ProductVariationEvent;
use Drupal\commerce_product_reminder\HelperServiceInterface;
use Drupal\Core\Queue\QueueFactory;
use Symfony\Component\EventDispatcher\EventSubscriberInterface;

/**
 * Reacts to product variation events to send or queue reminder mails.
 */
class ProductVariationSubscriber implements EventSubscriberInterface {

  /**
   * Drupal\commerce_product_reminder\HelperServiceInterface definition.
   *
   * @var \Drupal\commerce_product_reminder\HelperServiceInterface
   */
  protected $helper;

  /**
   * Queue factory.
   *
   * @var \Drupal\Core\Queue\QueueFactory
   */
  protected $queueFactory;

  /**
   * ProductVariationSubscriber constructor.
   *
   * @param \Drupal\commerce_product_reminder\HelperServiceInterface $helper
   * @param \Drupal\Core\Queue\QueueFactory $queue_factory
   */
  public function __construct(HelperServiceInterface $helper, QueueFactory $queue_factory) {
    $this->helper = $helper;
    $this->queueFactory = $queue_factory;
  }

  /**
   * {@inheritdoc}
   */
  public static function getSubscribedEvents() {
    $events[ProductEvents::PRODUCT_VARIATION_INSERT] = ['onProductVariationInsert'];
    $events[ProductEvents::PRODUCT_VARIATION_UPDATE] = ['onProductVariationUpdate'];
    return $events;
  }

  /**
   * This method is called when the product_variation_insert event is dispatched.
   *
   * @param \Drupal\commerce_product\Event\ProductVariationEvent $event
   *   The dispatched event.
   */
  public function onProductVariationInsert(ProductVariationEvent $event) {
    $product_variation = $event->getProductVariation();
    if ($product_variation->isPublished()) {
      $this->sendMailForReminderRelated($product_variation);
    }
  }

  /**
   * This method is called when the product_variation_update event is dispatched.
   *
   * @param \Drupal\commerce_product\Event\ProductVariationEvent $event
   *   The dispatched event.
   */
  public function onProductVariationUpdate(ProductVariationEvent $event) {
    $product_variation = $event->getProductVariation();
    $product_variation_original = $product_variation->original;
    if (!$product_variation_original instanceof ProductVariationInterface) {
      return;
    }
    if ($product_variation->isPublished() && !$product_variation_original->isPublished()) {
      $this->sendMailForReminderRelated($product_variation);
    }
  }

  /**
   * Send reminder mails or queue them.
   *
   * @param \Drupal\commerce_product\Entity\ProductVariationInterface $product_variation
   */
  protected function sendMailForReminderRelated(ProductVariationInterface $product_variation) {
    if (!$this->helper->isEnabled()) {
      return;
    }
    $data = [];
    $data['product_variation_id'] = $product_variation->id();
    $reminders = $this->helper->getRemindersFromVariation($product_variation);
    foreach ($reminders as $reminder) {
      $data['reminder_id'] = $reminder->id();
      if ($this->helper->useCron()) {
        $this->queueFactory->get('commerce_product_reminder_worker')->createItem($data);
      }
      else {
        $this->helper->sendMail($product_variation, $reminder);
      }
    }
  }
}

You can then modify the methods in charge of reacting to product variation insert or update events and introduce your own business logic; the module takes care of all subscription management and sends notification emails according to your needs.
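For Drupal's event dispatcher to pick up a custom subscriber like the one above, it also has to be registered as a tagged service in your module's services.yml file. A minimal sketch, where the module name (mymodule) and the helper service id (commerce_product_reminder.helper) are assumptions to adapt to your project:

```yaml
# mymodule.services.yml (hypothetical module name)
services:
  mymodule.product_variation_subscriber:
    class: Drupal\mymodule\EventSubscriber\ProductVariationSubscriber
    arguments:
      # The helper service id is an assumption; check the module's own services.yml.
      - '@commerce_product_reminder.helper'
      # 'queue' is Drupal core's QueueFactory service.
      - '@queue'
    tags:
      - { name: event_subscriber }
```

To override the module's own subscriber instead of adding a second one, a ServiceProvider class can alter the existing service definition to point at your class.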

A likely evolution of the module will be to allow easier modification of the logic that triggers notifications, whether via a plugin system or additional configuration options, or even to support other contributed modules, such as those related to inventory management.

Sep 25 2019
Sep 25

Our dedicated Global Maintenance Team works diligently with our clients to keep their sites updated, secure, and fresh. In this blog, we’ll outline some common maintenance practices we use to keep our clients happy and their sites running smoothly.

Quick Response Times

Regular maintenance can prevent many common issues, but even properly updated sites can have problems. And, when they inevitably do, clients need a quick response – that’s precisely where our team excels. Whether a site is down or clients need help editing content, we’re ready to help. 
 
We use the same channels of communication both internally and externally, and this is one of the reasons we have such quick response times. All of our client conversations about projects take place in Slack, so clients can raise an issue at any time and get a quick reply from anybody on the team. This means we can take action immediately, whether it's by troubleshooting over chat, creating a support ticket, or escalating the task for immediate action.

In all cases, we’re able to deliver swift and transparent solutions for our clients because we are able to communicate directly with them.

Backups

Accidents happen, and when they do we can help. In one recent situation, a client deleted a user and subsequently deleted all the content associated with that user as well. It wasn’t immediately clear what had happened, but the site’s performance was suffering. We jumped in to investigate and found the root cause. From there, it was only a matter of contacting amazee.io (our favourite hosting provider) to restore the old back-up on production. After that, everything came back to life and went back to normal. We were able to investigate and solve the client's issue in a transparent and timely manner.  

amazee.io Drupal Example

Audits

We have maintenance and support clients that come to us after building their sites with our Amazee Labs Development Team, as well as clients who hire us to take care of sites built elsewhere. Maintaining existing sites built by other agencies means the code may or may not be in great shape. Every time we onboard a new client, we audit their site. During this process, we check if the modules or core are hacked, patched or up to date. We also check the caching settings and any custom code. Once we’ve done this, and fixed potential issues, we fully onboard new clients into our systems and tools (Github, Lagoon, Jira, etc). 

Site Audit Example

Drupal updates (... and patch parties)

We help our clients understand the importance of frequently updating their sites. In most cases, updates are lightweight and come with instant benefits, like performance and security. Updating the core and modules that make up a site is a common task for our team. For clients that prefer to update their sites less frequently, these updates can be done periodically or in batches. But critical security updates are a different story. 

Security Advisories Example

Every now and then, critical security updates are released for Drupal core or specific modules. These updates need to be pushed immediately because neglecting them can make a site vulnerable to hacking, a loss of data, or both. For critical security updates to be implemented quickly, the maintenance team holds patch parties. 

During a patch party, we get team members from all around the world to focus on making sure all our clients’ sites are secure. For some sites, we have automation scripts, for others, we need to do things manually. Either way, we get all hands on deck to monitor everything and keep our clients updated.

During these concerted efforts, it’s great to have a globally distributed team so we can work continuously to make sure every site is updated, functional, and secure. 

Important events

One of the benefits of keeping our client communication in Slack is that it’s visible to everyone on the team. That way, someone is always available to help, and the client is able to monitor progress.

For important events on certain sites (newsletters, leads exports, etc) we use Slack integration to make sure everything runs smoothly and everyone knows what’s happening and when. You can read more on the subject in this blog post.

With the right tools and our dedicated team of experts, we make sure our clients' sites stay secure and up to date. Stay tuned for more blog posts in this series. 

If you’d like to learn more about the benefits of keeping your website well maintained and ahead of the competition, drop us a line. We’d love to hear from you. 
 

Sep 25 2019
Sep 25

Video of Drupal 8 Override Node Options Module - Daily Dose of Drupal Episode 235

The Drupal 8 Override Node Options module is a simple module that allows you to set who can edit specific node options when creating or editing nodes. This includes things such as the published checkbox, sticky checkbox, promoted to frontpage checkbox, revision information, and authoring information. This is a useful module for building out a more complex content workflow or perhaps just simplifying the content editing experience on your Drupal 8 site by hiding unneeded node options.

Download and install the Override Node Options module just like you would any other module.

composer require drupal/override_node_options
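Composer only downloads the code; the module still has to be enabled. A quick sketch using Drush (assuming Drush is available on the site):

```shell
# Enable the module (Drush 9+ syntax).
drush en override_node_options -y
# Rebuild caches so the new permissions show up on the permissions page.
drush cr
```

Alternatively, you can enable it from the Extend page (/admin/modules) in the admin UI.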

After installing the module, you can configure it to turn on global permissions across all node types as well as specific permissions for each node type. These checkboxes simply add additional permission options on the permissions page.

If you go to the permissions page and search for the Override Node Options section, you will see the available permission options. Here you can set the permission for who can view the various authoring fields that show up on the Node edit forms. You can easily use this to configure specific roles to be able to edit only the authoring information you want them to be able to access.

Sep 24 2019
Sep 24

Our lead community developer, Alona Oneill, has been sitting in on the latest Drupal Core Initiative meetings and putting together meeting recaps outlining key talking points from each discussion. This article breaks down highlights from meetings this past week.

You'll find that the meetings, while also providing updates of completed tasks, are also conversations looking for community member involvement. There are many moving pieces as things are getting ramped up for Drupal 9, so if you see something you think you can provide assistance on, we encourage you to get involved.

Drupal 9 Readiness Meeting

September 16, 2019

Meetings are for core and contributed project developers as well as people who have integrations and services related to core. Site developers who want to stay in the know to keep up-to-date for the easiest Drupal 9 upgrade of their sites are also welcome.

  • It usually happens every other Monday at 18:00 UTC.
  • It is done over chat.
  • Happens in threads, which you can follow to be notified of new replies even if you don’t comment in the thread. You may also join the meeting later and participate asynchronously!
  • Has a public Drupal 9 Readiness Agenda anyone can add to.
  • The transcript will be exported and posted to the agenda issue.

Guzzle, Diactoros, symfony/http-client, and PSR-7, PSR-17, and PSR-18

Drupal 9/8 PHP version requirements

MySQL 5.7 and MariaDB 10.1 support officially ends in Oct 2020

Chx suggested splitting the MySQL and MariaDB drivers eventually as they continue to diverge.

Stable upgrade status, but missing features

Gábor Hojtsy announced Upgrade Status went stable a few days ago. There are various missing features: 

Documenting deprecated code / triggering deprecation errors properly

Drupal 8.8 is the deadline for:

Remove core's own uses of deprecated APIs

Drupal core's own deprecation testing results are really close to done.

Drupal Module Upgrader 7 => 9 directly

There’s been lots of work recently by Amit Goyal, Rohit Joshi, and Pranit Jha to add Drush 9 support and make the transformations produce Drupal 9 compatible results. They also made the test suite green and are looking into the possibility of writing new transformations with Rector. Unfortunately, due to dependency conflicts, Rector cannot be added to a Drupal instance until "Support PHPUnit 7 optionally in Drupal 8, while keeping support for 6.5" is resolved.

Deprecations in contrib

Admin UI Meeting

September 18, 2019

  • Meetings are for core and contributed project developers as well as people who have integrations and services related to core. 
  • Usually happens every other Wednesday at 2:30pm UTC.
  • Is done over chat.
  • Happens in threads, which you can follow to be notified of new replies even if you don’t comment in the thread. You may also join the meeting later and participate asynchronously!
  • There are roughly 5-10 minutes between topics for those who are multitasking to follow along.
  • The Admin UI Meeting agenda is public and anyone can add new topics in the document.

Design Revision 1

  • The breakpoint for Cards was set to 85rem.
  • Vertical Tabs style update.

Design revision 2: heading spacings

We have several options for defining the default:

  • Equal top and bottom space: 1em or 0.75em (margin: 1em 0; or margin: 0.75em 0;).
  • Only top: 1em or 0.75em.
  • Only bottom: 1em or 0.75em.
  • Different spacings for top and bottom: margin: 1em 0 0.75em; (this option was chosen).

UX meeting

We did a Claro demo and found some bugs. The first one concerns messages rendered without an icon and title; a follow-up issue is in progress.

Drupal Core Cross-Initiative Meeting

September 19, 2019

UX Update

Cristina Chumillas talked about UX with the following update:

  • Claro is on track, design components need to be reviewed and blockers resolved.
    • Several issues are nearly complete.
    • Several that still need work from the blocker's list.
  • Issue submitted to add Claro to Drupal Core:
    • Close to getting a green light to add it as an alpha module.
  • Next steps:
    • Need additional accessibility maintainer reviews.
    • Need additional RM support to understand what the level of sign off should be from each of the maintainers.

Workspaces Update

Andrei Mateescu talked about Workspaces with the following update:

Status:

  • List of stable blockers obtained after meeting with maintainers/rm’s.
  • Before marking stable, we need a conversion of path aliases to entities (big patch), which adds some risk.
  • 2 other major asks:
    • Compatible w/ content moderation.
    • Ability to add subworkspaces.
  • On track right now for getting into core, pending final reviews of those changes.

Blockers: 

  • None right now.

Next steps:

  • Work on the 3 major things identified, get final reviews.

Multilingual Migration

Gábor Hojtsy, Alex Pott, and Michael Lutz talked about Multilingual Migration with the following update:

Status:

  • Most issues to get it committed progressing, Alex Pott is working with V Spagnolo on final ones.
  • Hard to get data migrated correctly and still grab old migrations to fit them into new formats.
    • For revisions + translations, one big overhaul of node revision table is the solution landed on to maintain backwards compatibility.
  • Solution is actually working (yay!) just need to do some cleanup, testing & validation.

Blockers:

  • Testing is required to validate the solution will work for people who are expecting granular controls.

Next steps:

  • Testing / Validating the solution to manage both revisions and translations.
  • Later reviews from entity subsystem maintainers, framework mgmt, PM, and RM will need to happen.
    • Meet with Lucas Hedding, who is working with V Spagnolo, to review the potential breakdown scenarios.

Drupal 9

Gábor Hojtsy provided a Drupal 9 update:

Status:

  • Deprecations => making a ton of progress, some hard ones left.
  • Symfony 4 => several mysterious issues found, working to resolve those issues.
  • Upgrade status => Stable release hurraayyy!! So far no issues found apart from one person who had no UI showing.
  • Drupal Module Upgrader => working w/ community to make it produce D9 compatible code. 1.5 tag released yesterday, no feedback yet.
  • Rector => converts your d8 code to d9, looking into merging this into drupal module upgrader.
  • Contrib deprecation errors => Ryan Aslett at the DA is helping to resolve the results of deprecation testing across contrib!
  • Core deprecation => requirements recently redefined, D9 branch should be opened with D8.9 branch in less than a month! Whoo-hoo!

Blockers:

Next steps:

  • Hey initiative owners: make your stuff Drupal 9 ready, please.
  • Resolve final deprecations.
  • Keep working on Symfony 4 issues identified.
  • Test modules / deprecated API’s and hold a D9 sprint in Amsterdam.
  • Keep looking into merging rector.

Demo

Keith Jay provided the following update on Demo: 

Status:

  • Progressing on working with Layout Builder for the recipe type in the basic format.
  • Working on expanding features.
  • Working on making a front-page based on layout builder.
  • Creating more tooltips.
  • Great new content coming from a UK-based chef.
  • Layout switcher also in progress.
  • 3081587 => may have a core-related issue, to be continued.

Blockers:

Next steps:

  • Keep working through the above issues, nothing needed.

Auto Updates

Lucas Hedding gave the following Auto Updates update:

Status:

  • 3 parts:
    1. PSA tells you the update is coming.
    2. Readiness checks are preflight checks to confirm your site can be updated.
    3. In-place updates do the fix.
  • The first two parts have been released in alpha.
  • Video podcast prepared, will be live first weekend of October.
  • DA blog post to promote will follow.
  • In-place updates are also progressing.

Blockers:

  • One issue needs further discussion to get to RTBC; core committer review would help clarify the blockers.

Next steps:

  • Work through final issues related to part 3 => in-place updates.
  • Testing and validation to get to RTBC => beta release with all features.
  • Work through the issues identified => stable contrib release.
  • Core release will happen later, not to be rushed at this point.

Composer

Ryan Aslett gave the following update on Composer:

Status:

  • Down to the last couple of items.
  • Made tons of progress.

Blockers:

Next steps:

  • Final reviews / Remediations from core committers and Alex Pott.
  • Write documentation / enablement supports / marketing & promotion of improvements
  • Commit to 8.8!
Sep 24 2019
Sep 24

"Disability is an avoidable condition caused by poor design.”

This is a sentence that I found in a Certified Professional in Accessibility Core Competencies (CPACC) study guide, and it has given me a new perspective on accessibility. 

For those of us who are steeped in the world of web accessibility, an openness to constantly broadening perspectives is at the heart of our effectiveness -- whether we’re thinking about a wide range of experiences when designing sites or paying close attention to the language surrounding disabilities. 

Words matter, and how we talk to and about people with disabilities factors into the bigger picture of our effectiveness as accessibility evangelists. 

Here are some of the essential lessons that I have learned recently, both as a web accessibility developer and as a person who is devoted to understanding a wide range of perspectives.

Put People First

People-first language puts the person before their disability -- sending a subtle, but powerful, signal that the person is not defined by their disability.

Some examples of person-first language: 

  • A person who is blind 
  • A person with a hearing impairment
  • A person who uses a wheelchair

Notice that when the word “person” comes before any mention of a disability, we are literally putting the person first.

Identity-First Dilemma

While it may appear to be a direct contradiction to person-first language, identity-first language places the disability before the person. Individuals who prefer this form of speech argue that having a disability has had a major influence on their lives and who they are as a person. Their disability is nothing that they need to hide or be ashamed of.  

Some examples of identity-first language:

  • A blind person
  • A hearing-impaired person
  • A disabled person

Given that these two types of language can appear to be in direct contradiction to each other, determining what’s the preferred form and when to use it can be confusing.

In general, it is fair to assume that person-first language does not offend anyone. It is a benign form of speech and if the individual does have a preference, they will usually inform you of such. However, if there is any doubt or discomfort about which form of speech a person with a disability prefers, it’s okay to just ask.

More Insights

Best intentions don’t guarantee against oversights or offer the ability to view the world from another person’s perspective. So let’s take a closer look at some of the terms that many of us use in our day-to-day language, as well as some outdated language, and some terms you should absolutely never use.

“Accessible” vs. “Disabled” or “Handicapped”

When talking about places with accommodations for people with disabilities, use the term "accessible" rather than "disabled" or "handicapped." 

This is how the importance of this distinction was explained to me. I was asked as a mother of a small child whether I ever used the “handicapped” bathroom stall in a public restroom. I said, “sure.” I was then asked why. 

“Well,” I said, “because it is large and easier to maneuver in with my child.” The person then said, “so you use the bathroom because it is more accessible.”

"Uses a Wheelchair" vs. "Wheelchair Bound"

“Wheelchair bound” is a term that many of us use in our daily language and we should avoid it. It has a restrictive connotation, and implies that the wheelchair is a negative thing, instead of something that broadens possibilities and makes a person’s life more manageable. The wheelchair is a tool that helps to provide access, not a punishment that the individual is bound to. Instead of “wheelchair bound,” try saying “uses a wheelchair” or “wheelchair enabled.” 

Strike from the Vocabulary!

There are certain terms we all should remove from our vocabulary entirely. They include retarded, retard, handicapable, cripple, crippled, victim, stricken, and special needs. All of these terms are negative, and the first two on the list are absolutely unacceptable. In every case, they imply that people with disabilities are not “normal.”

Some additional “don’ts”

  • Don’t ask a person with a disability how they became disabled.
  • Don’t assume that all disabilities are easily observed. The fact that a person using an accessible parking spot is not using a walking aid does not mean that they are lazy or disrespecting the needs of legitimate users of the space. They could have a pain condition or some other issue preventing them from walking long distances. Often, there is more to a situation than can be detected from a casual observation. 
  • When working remotely, don’t presume that you know everyone’s story. There is much that you may not know concerning a team member or client on the other end of a call -- even if it is a video call. Making language sensitivity a habit, in all circumstances, is not just the right thing to do. It’s good business. 

At Promet Source, we’re actually a lot more interested in the “do’s” of accessible web experiences than the “don’ts.” So if you are looking for an empowering web design that’s excellent and accessible, contact us today.

Sep 24 2019
Sep 24
Acquia partners with Vista Equity Partners

Today, we announced that Acquia has agreed to receive a substantial majority investment from Vista Equity Partners. This means that Acquia has a new investor that owns more than 50 percent of the company, and who is invested in our future success. Attracting a well-known partner like Vista is a tremendous validation of what we have been able to achieve. I'm incredibly proud of that, as so many Acquians worked so hard to get to this milestone.

Our mission remains the same

Our mission at Acquia is to help our customers and partners build amazing digital experiences by offering them the best digital experience platform.

This mission to build a digital experience platform is a giant one. Vista specializes in growing software companies, for example, by providing capital to do acquisitions. The Vista ecosystem consists of more than 60 companies and more than 70,000 employees globally. By partnering with Vista and leveraging their scale, network and expertise, we can greatly accelerate our mission and our ability to compete in the market.

For years, people have speculated about Acquia going public. Going public remains a great option for Acquia, but I'm also happy that we will stay a private and independent company for the foreseeable future.

We will continue to direct all of our energy to what we have done for so long: provide our customers and partners with leading solutions to build, operate and optimize digital experiences. We have a lot of work to do to help more businesses see and understand the power of Open Source, cloud delivery and data-driven customer experiences.

We'll keep giving back to Open Source

This investment should be great news for the Drupal and Mautic communities as we'll have the right resources to compete against other solutions, and our deep commitment to Drupal, Mautic and Open Source will be unchanged. In fact, we will continue to increase our current level of investment in Open Source as we grow our business.

Vista has a long history of promoting diversity and equality and of giving back to its communities. Together, we will invest even more in Drupal and Mautic. We will:

  • Improve the "learnability of Drupal" to help us attract less technical and more diverse people to Drupal.
  • Sponsor more Drupal and Mautic community events and meetups.
  • Increase the amount of Open Source code we contribute.
  • Fund initiatives to improve diversity in Drupal and Mautic, enabling people from underrepresented groups to contribute, attend community events, and more.

We will provide more details soon.

I continue in my role

I've been at Acquia for 12 years, most of my professional career.

During that time, I've been focused on making Acquia a special company, with a unique innovation and delivery model, all optimized for a new world. A world where a lot of software is becoming Open Source, and where businesses are moving most applications into the cloud, where IT infrastructure is becoming a metered utility, and where data-driven customer experiences make or break business results.

It is why we invest in Open Source (e.g. Drupal, Mautic), cloud infrastructure (e.g. Acquia Cloud and Site Factory), and data-centric business tools (e.g. Acquia Lift, Mautic).

We have a lot of work left to do to help businesses see and understand the power of Open Source. I also believe Acquia is an example for how other Open Source companies can do Open Source right, in harmony with their communities.

The work we do at Acquia is interesting, impactful, and, in a positive way, challenging. Working at Acquia means I have a chance to change the world in a way that impacts hundreds of thousands of people. There is nowhere else I'd want to work.

Thank you to our early investors

As part of this transaction, Vista will buy out our initial investors. I want to give a special shoutout to Michael Skok (North Bridge Venture Partners + Underscore) and John Mandile (Sigma Prime Ventures). I fondly remember Jay Batson and I raising money from Michael and John in 2007. They made a big bet on me, at the time a college student living in Belgium, when Open Source was anything but mainstream.

I'm grateful for the belief and trust they had in me and for the support and mentorship they have provided over the past 12 years. The opportunity they gave me will forever define my professional career. I'm thankful for their support in building Acquia into what it is today, and I am thrilled about what is yet to come.

Stay tuned for great things ahead! It's a great time to be an Acquia customer and Drupal or Mautic user.


Sep 24 2019

Published: 24 Sep 2019. Author: Colan Schwartz.
Drupal Planet, Aegir, DevOps

Aegir is often seen as a stand-alone application lifecycle management (ALM) system for hosting and managing Drupal sites. In the enterprise context, however, it’s necessary to provide multiple deployment environments for quality assurance (QA), development or other purposes. Aegir trivializes this process by allowing sites to be easily copied from one environment to another in a point-and-click fashion from the Web front-end, eliminating the need for command-line DevOps tasks.

Setting up the environments

An Aegir instance needs to be installed in each environment. We would typically have three of them:

  • Development (Dev): While generally reserved for integration testing, it is sometimes also used for development (e.g. when local environments cannot be used by developers or there are a small number of them).
  • Staging: Used for QA purposes. Designed to be a virtual clone of Production to ensure that tagged releases operate the same way as they would there, before being made live.
  • Production (Prod): The live environment visible to the public or the target audience, and the authoritative source for data.

(While outside the scope of this article, local development environments can be set up as well. See Try Aegir now with the new Dev VM for details.)

To install Aegir in each of these, follow the installation instructions. For larger deployments, common architectures for Staging and Prod would include features such as:

  • Separate Web and database servers
  • Multiple Web and database servers
  • Load balancers
  • Caching/HTTPS proxies
  • Separate partitions for (external) storage of:
    • The Aegir file system (/var/aegir)
    • Site backups (/var/aegir/backups)
    • Database storage (/var/lib/mysql)
  • etc.

As these are all out of scope for the purposes of this article, I’ll save these discussions for the future, and assume we’re working with default installations.

Allowing the environments to communicate

To enable inter-environment communication, we must perform the following series of tasks on each Aegir VM as part of the initial set-up, which only needs to be done once.

Back-end set-up

The back-ends of each instance must be able to communicate. For that we use the secure SSH protocol. As stated on Wikipedia:

SSH is important in cloud computing to solve connectivity problems, avoiding the security issues of exposing a cloud-based virtual machine directly on the Internet. An SSH tunnel can provide a secure path over the Internet, through a firewall to a virtual machine.


Steps to enable SSH communication:

  1. SSH into the VM.
    • ssh ENVIRONMENT.aegir.example.com
  2. Become the Aegir user.
    • sudo -sHu aegir
  3. Generate an SSH key. (If you’ve done this already to access a private Git repository, you can skip this step.)
    • ssh-keygen -t rsa -b 4096 -C "ORGANIZATION Aegir ENVIRONMENT"
  4. For every other environment from where you’d like to fetch sites:
    1. Add the generated public key (~/.ssh/id_rsa.pub) to the whitelist for the Aegir user on the other VM so that the original instance can connect to this target.
      • ssh OTHER_ENVIRONMENT.aegir.example.com
      • sudo -sHu aegir
      • vi ~/.ssh/authorized_keys
      • exit
    2. Back on the original VM, allow connections to the target VM.
      • sudo -sHu aegir
      • ssh OTHER_ENVIRONMENT.aegir.example.com
      • Answer affirmatively when asked to confirm the host (after verifying the fingerprint, etc.).
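The back-end steps above can be condensed into a one-time script. This is a sketch, not part of Aegir itself: the target hostname and key comment are placeholders, ssh-copy-id replaces the manual authorized_keys edit, and DRY_RUN=1 (the default here) only prints what would be run.

```shell
#!/bin/sh
# One-time SSH key exchange between two Aegir VMs (sketch).
# DRY_RUN=1 (default) prints commands instead of executing them.
: "${DRY_RUN:=1}"
run() { if [ "$DRY_RUN" = "1" ]; then echo "+ $*"; else "$@"; fi; }

TARGET="staging.aegir.example.com"   # the other environment (placeholder)

# 1. Generate a key pair unless one already exists.
[ -f "$HOME/.ssh/id_rsa" ] || \
  run ssh-keygen -t rsa -b 4096 -C "Example Aegir dev" -f "$HOME/.ssh/id_rsa" -N ""

# 2. Append our public key to the aegir user's authorized_keys on the target
#    (ssh-copy-id automates the manual vi step).
run ssh-copy-id "aegir@$TARGET"

# 3. Connect once so the host fingerprint can be verified and recorded.
run ssh "aegir@$TARGET" true
```

Run it as the aegir user on the source VM, once per target environment, then set DRY_RUN=0 to execute for real.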

Front-end set-up

These steps will tell Aegir about the other Aegir servers whose sites can be imported.

  1. On Aegir’s front-end Web UI, the “hostmaster” site, enable remote site imports by navigating to Administration » Hosting » Advanced, and check the Remote import box. Save the form. (This enables the Aegir Hosting Remote Import module.)
  2. For every other server you’d like to add, do the following:
    1. Navigate to the Servers tab, and click on the Add server link.
    2. For the Server hostname, enter the hostname of the other Aegir server (e.g. staging.aegir.example.com)
    3. Click the Remote import vertical tab, check Remote hostmaster, and then enter aegir for the Remote user.
    4. For the Human-readable name, you can enter something like Foo's Staging Aegir (assuming the Staging instance).
    5. You can generally ignore the IP addresses section.
    6. Hit the Save button.
    7. Wait for the server verification to complete successfully.

All of the one-time command-line tasks are now done. You or your users can now use the Web UI to shuffle site data between environments.

Select remote site to import

Deploying sites from one environment to another

Whenever necessary, this point-and-click process can be used to deploy sites from one Aegir environment to another. It’s actually a pull method as the destination Aegir instance imports a site from the source.

Reasons to do this include:

  • The initial deployment of a development site from Dev to Prod.
  • Refreshing Dev and Staging sites from Prod.

Steps:

  1. If you’d like to install the site onto a new platform that’s not yet available, create the platform first.
  2. Navigate to the Servers tab.
  3. Click on the server hosting the site you’d like to import.
  4. Click on the Import remote sites link.
  5. Follow the prompts.
  6. Wait for the batch job and the Import and Verify tasks to complete.
  7. Enable the imported site by hitting the Run button on the Enable task.
  8. The imported site is now ready for use!

The article Aegir DevOps: Deployment Workflows for Drupal Sites first appeared on the Consensus Enterprises blog.

We've disabled blog comments to prevent spam, but if you have questions or comments about this post, get in touch!

Sep 24 2019

The European edition of the 2019 DrupalCon likely features a more diverse and exciting palette of possible sessions to attend than any previous European ‘Con. There are so many of them that it’s not an easy task picking the ones you absolutely don’t want to miss. 

We at Agiledrop are especially excited by the Business + Marketing track. Since it’s practically impossible to cover all the tracks without missing most of the great sessions, we decided to focus on this track, as well as the more general Industry track. 

Without further ado, here are our picks for the must-see business, marketing and industry sessions at next month’s DrupalCon. Hope to catch you at some of them!

Business + Marketing track

The Art of Mentorship

Monday, October 28, 16:25 - 16:45 @ G104

Maria Totova, Drupal developer, trio-group communication & marketing gmbh, Coding Girls
Todor Nikolov, Drupal developer, Tech Family Ventures, Coding Girls

This session will dive into the importance of mentorship and how the relationship benefits both mentee and mentor. Being mentors themselves, Maria and Todor will share their experiences with teaching and give some tips on effective mentorship. 

If you’re thinking about becoming a mentor, but have some hesitations, or if you’re already mentoring someone, but feel like you could use some improvements, this is definitely a session you’ll want to attend.

The Good, The Bad and The Data: Marketing Strategies for Open Source Companies

Monday, October 28, 17:15 - 17:35 @ G102

Felix Morgan, Content Manager, Amazee Group

This is the perfect session for companies working with open source software that are struggling with marketing. Amazee’s Felix Morgan will present some marketing best practices for such companies by covering three different topics: personas and stakeholders; community and narrative; and data.

Winning and retaining long term clients

Tuesday, October 29, 17:15 - 17:55 @ G103

Owen Lansbury, Co-founder, PreviousNext

Acquiring clients is already a major challenge agencies have to deal with. Retaining these clients, then, and turning them into long-term clients is an even greater challenge. Owen’s session will provide insights on spotting and winning over the types of clients with whom you can forge a long-term relationship, as well as then cultivating that relationship.

Women on top: How to get (and keep) women in your leadership roles

Wednesday, October 30, 9:00 - 9:40 @ G109

Shannon Vettes, Factories Program Manager, Acquia
Lindsey Catlett, Drupal architect, Newell Brands
Jenn Sramek, Director of Learning Services, Acquia

It’s no secret that there’s quite a scarcity of women in technology, especially in positions of leadership. But this lack of diversity is actually harmful to business itself; teams with a greater percentage of women and with women as leaders are generally more productive and successful.

This session will talk about the bias towards women in IT and illustrate the challenges they face in this field, while also providing tips to combat this and attract and retain a diverse range of talent.

Industry track

How to start contributing to Drupal without Code

Monday, October 28, 15:25 - 15:45 @ G102

Paul Johnson, Drupal Director, CTI Digital

Non-code contributions to open source are just as welcome as code contributions, and often even more needed. Too often, however, they go underappreciated. 

Fortunately, Paul Johnson is remedying this in the Drupal community and encouraging contribution of any kind. His session will serve as a stepping stone for non-developers working in Drupal to get involved and start contributing.

Drupal’s place in an evolving landscape - Modernising your Commerce architecture

Tuesday, October 29, 10:30 - 11:10 @ G106

Richard Jones, CTO, Inviqa

One of the big buzzwords in Drupal right now is “headless” or “decoupled”. Alongside Drupal, another area where the “headless” approach is gaining ground is ecommerce. In his session, Richard will take a look at the evolution of commerce websites, as well as how Drupal can be used in the commerce ecosystem as the content and experience layer.

In Their Own Words: Stories of Web Accessibility

Wednesday, October 30, 15:25 - 15:45 @ G103

Helena McCabe, Technical Account Manager, Lullabot

Even though the situation is improving, accessibility is still much too often considered of secondary importance when setting up a website. During her session, Helena McCabe will share first-person stories of people with disabilities, with the aim of inspiring attendees to adopt a more inclusive and accessible mindset when designing experiences for the web.

4 Keys to a Successful Globalization Strategy and CMS Platform Architecture

Wednesday, October 30, 15:00 - 15:40 @ Auditorium

Ann-Marie Shepard, Domain Architect, IBM
Tina Williams, Digital and Content Strategist, IBM

For a business operating in international markets, it’s no easy task to keep producing relevant content and maintain web platforms for all the different audiences it’s trying to reach. A well thought-out globalization strategy is needed for this. 

In this session, you’ll learn both the business requirements and the technical solution behind IBM’s optimization of Drupal 8’s translation capabilities to support a successful globalization strategy. 

This was our selection of some of the most interesting sessions from the upcoming DrupalCon. Of course, with so many different tracks, there are many more great ones to attend - you can check out the whole day-by-day and track-by-track program here. See you in Amsterdam next month!
 

Sep 24 2019

Introduction

The e-learning boom is gaining momentum. This can be explained not only by convenience of use but also by low cost. Many businesses interested in developing their employees can save a lot of money by implementing e-learning instead of offline learning. Besides, according to SHIFT, 42% of companies say that implementing e-learning has led to a revenue increase. 

This article will show you why you should use Drupal for creating an e-learning platform. 

Drupal Learning Management System (LMS)

Take a quick read about Drupal advantages. 

Remember, online learning platforms for businesses should be tailored to the company’s goals and should fulfill employees' needs. For example, you may need to increase sales, but some sales employees don’t possess the necessary knowledge. That means you need to provide a course where the employees can learn this information. It’s also possible to create a course not only for providing knowledge but also for training existing skills.

Here are some tips the company can use while providing an educational course to the employees:

  • Use blended learning. Don’t concentrate on a single way of producing study materials. Drupal lets you mix learning tools such as video, texts, and flashcards, since it supports video-based modules, gaming and gamification content, scenario-based learning, microlearning modules, multilingual courses, images, and infographics.
     

  • Create learning paths. Tracking how different participants learn helps you identify problems that need to be eliminated and improvements that need to be made. For example, if your employee doesn’t know how to prioritize everyday tasks to achieve a company goal, the e-learning platform should train the employee on how to do it. 
     

  • Reward and recognize training achievements. Even on e-learning platforms, rewards motivate students. Here is a tip: build a micro-community of students where they can act as experts and students at the same time. Some students can help others by correcting their answers and receive nominations for these corrections, like “best correction”, “best mentor”, or “best-provided information”. Such gamification increases engagement.
     

  • Ask your employees what they need. This is the easiest way to engage employees in corporate learning and make the learning effective. After all, revenue generated per employee is 26% higher for companies that offer e-learning, according to Topyx.

Drupal modules and profiles for e-learning

First, define the company’s e-learning goal, then choose a tool to realize it. It can be a profile or a combination of modules. We have reviewed the Drupal modules and profiles that can be used for e-learning. Take a read and choose the best fit for your company.

  1. Opigno
    The Opigno profile includes a flexible access system based on roles such as student, teacher, coach, administrator, etc. It has the Theory and Quiz modules to create engaging and interactive content. Opigno offers adaptive learning path management, where training materials can be adapted to every student according to his/her previous achievements and conditional rules. There is a module that manages virtual classroom sessions and allows you to implement instructor-led sessions.
    One of Opigno's advantages is the included certificate module. If a student successfully completes the training, a PDF certificate is automatically generated. An e-learning platform is more valuable when it provides students with certificates.
     

  2. Open Academy
    This is a Drupal distribution that brings the best of web publishing for higher education to a customizable Drupal platform. It’s designed to provide academic departments in higher education with a ready-to-use website. It has sections for courses, news, people, events, and publications. Recent news, events, and publications are listed on the main page. Open Academy is easy to use thanks to its design simplicity, and it covers most academic departments' needs well.
     

  3. Julio
    Julio is a distribution targeted at schools, school districts, small colleges, and academic departments within universities. There are sections about faculty and staff, student life, academics, admissions, parents and guardians, and sports. Authorized users can create announcements, events, galleries, and group posts in all sections except faculty and staff.
     

  4. Course
    The Application Programming Interface (API) in Course helps define learning goals that can be added to an employee’s workflow.
    Course objects can be marked as graded or ungraded, and the Course module also provides a framework for integrating external learning applications.
     

  5. Room Reservations
    This module is created for managing the reservation of study rooms used while learning.
     

  6. Quiz
    The Quiz module in Drupal allows you to build graded quizzes; results can be displayed both during and after the quiz.
    It’s an effective way to track a student’s progress because it’s easy to analyze. Students are able to see their improvement and to receive feedback from an administrator. This module also works with the certificate module. Read about 10 Drupal modules for quizzes.
     

  7. User Progress API
    The User progress API module was sponsored by Pennsylvania State University. It has been developed for charting students’ progression throughout a system. 
     

  8. Badges
    The digital badging Drupal module helps provide a visual demonstration of an achievement, enhancing the accomplishment in the eyes of the student as well as college admission officers.
    Digital images such as badges help recognize a learner’s skills and achievements, and this kind of visualization gives students a tangible sense of progress during their studies.
     

  9. Certificate
    The Certificate module creates and awards PDF certificates using tokenized HTML templates.
     

  10. H5P - Create and Share Rich Content and Applications
    This module is an authoring tool for rich content - you can create interactive videos, flashcards, board games, etc. Besides, H5P also enables your site to import and export H5P files. H5P files are package files for HTML5 content and applications. You can easily upload them to your Drupal site and publish rich Internet content and applications. 
     

  11. Social learning/Messageboards
    Communication between students matters, just as it does at a university. The Social Login and Social Share Drupal modules let students log in using social network sites and help them share content with their networks.
    You can also read about 10 Drupal modules for communication with users and 10 free Drupal modules for integration with social media.

While providing an e-learning platform to your employees, don’t forget that a company learning management system should be well designed; otherwise, all good intentions will sink in a sea of failed user experience.

Also, guarantee a consistent instructor presence on the platform. This solves two types of problems. First, students can easily ask technical questions as they arise, so studying goes faster. Second, when a lack of motivation captures some students, they have someone to turn to for encouragement.

Sep 23 2019

The friendliness of Drupal 8 for content editors and website administrators grows every day. New handy features come thick and fast — updated Media Library interface, built-in drag-and-drop Layout Builder, media embed button in CKEditor, and so much more. 

Today, we are happy to announce another exciting improvement: a new and modern administration theme, Claro, is expected to arrive in the Drupal 8.8 core! 

Why Drupal 8 needed a new administration theme

The idea of a new administration theme arrived as part of the Drupal team’s striving to make Drupal more competitive in everything. 

Drupal creator Dries Buytaert wrote that he had talked in 2018 to almost a hundred Drupal agency owners to discover their key stumbling blocks in selling Drupal. One of their common replies was about the admin interface having an outdated look. 

The admin UI theme, Seven, was created back in 2008-2009, in the days of Drupal 7, and received only a few updates in Drupal 8. Since then, a lot of water has passed under the bridge and plenty of modern UI design trends have appeared.

There was also an admin UX study performed in 2018, in which content editors were asked in detail about their impressions of working with Drupal, and they suggested many improvements. According to Suzanne Dergacheva, a well-known contributor who worked on this study, Drupal 8 is “notoriously intimidating” for content editors who are newbies. 

So the great minds of the Drupal community agreed that the admin UI really needed a good brush-up and a new, clean, and modern theme.

The Claro theme in Drupal 8: a good core candidate

One of the results of the above-described decisions was the appearance of the Claro theme. It is a clean, concise, responsive theme with a modern look and an enhanced level of web accessibility. It is being built on top of the Seven theme. 

New Claro admin theme in Drupal 8.8

It is now a contributed project, but there is a proposal to add the Claro theme to Drupal 8.8.0 core as an experimental theme.

The development is in full swing, with the project’s new version, alpha 5, released in September 2019. The maintainers actively welcome any feedback and bug reports about the new theme to brush it up. 

Claro theme principles and features

Here are some features of Claro, both completed and planned:

  • a new, colder color scheme
  • higher contrasts
  • touchscreen readiness
  • the Quick Edit, Toolbar, and Contextual Links components
  • redesign of content pages and all their components
  • redesign of the file and image upload widgets

The Claro theme is built in accordance with the Admin UI & JavaScript Modernisation Initiative. It strictly follows the new Drupal 8 admin theme design guidelines:

  • precise shapes and good contrasts
  • clear hierarchy and relations between elements
  • the clear purpose of each element
  • rational use of white space
  • optimal readability
  • emphasis on what matters
  • visual clues
  • friendly and cheerful colors

and more.

New admin theme Claro expected in Drupal 8.8

Claro at the Drupal Usability meeting

The theme is receiving special attention at the Drupal Usability meeting, according to Gábor Hojtsy’s tweet. The meeting’s experts are asking for more feedback, so it looks like the new project is going to be polished to perfection.
 

Gábor Hojtsy about the new Claro admin theme

Claro as part of the top Drupal 8 distribution

Claro is already having a good test drive because it is part of the most popular D8 distribution, or installation kit — Lightning.

By the way, speaking about Drupal 8 for content editors, we must admit Lightning is a distribution well-tailored to their needs and great for media and publishing websites. It is actively used by almost 3,000 websites. By itself, Claro is installed on 500+ sites today.

New admin theme Claro is coming to Drupal 8.8

Use the benefits of Drupal 8 for content editors with us!

As you can see, the present and future features of Drupal 8 for content editors are very attractive. To keep up with them, contact our development and support team, who will smoothly update your website with every new release. 

Sep 22 2019

Our testing approach was two-fold, with one underlying question to answer: what is the most intuitive site structure for users?

Test #1: Top Task survey

During the Top Task survey, we had users rank a list of tasks we think they are trying to complete on the site, giving us visibility into their priorities. The results of this survey informed a revised version of the navigation labels and structure, which we then tested in the following tree test. The survey was conducted via Google Forms with existing Center audiences, aiming for 75+ completions.

We then used these audience-defined “top tasks” to inform the new information architecture, which we tested in our second test.

Test #2: IA tree test

During the tree testing of the information architecture, we stripped out any visuals and tested the bare outline of the menu structure. We began with a mailing list of about 2,500 people, split the list into two segments, and A/B tested the new proposed structure (Variant) against the current structure (Benchmark). Both trees were tested with the same tasks, but with different labels and structure, to see which tree let people complete the tasks faster and more successfully.
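As a toy illustration (not the Center's actual tooling), per-tree completion rates from results recorded as hypothetical "tree,completed" rows could be tallied like this:

```shell
#!/bin/sh
# Made-up results: one "tree,completed" row per participant-task.
cat > /tmp/tree_results.csv <<'EOF'
benchmark,1
benchmark,0
benchmark,1
variant,1
variant,1
variant,1
EOF

# Tally successes per tree and print a completion rate.
awk -F, '{ n[$1]++; ok[$1] += $2 }
  END { for (t in n)
          printf "%s: %d/%d completed (%.0f%%)\n", t, ok[t], n[t], 100 * ok[t] / n[t] }' \
  /tmp/tree_results.csv | sort
# benchmark: 2/3 completed (67%)
# variant: 3/3 completed (100%)
```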

Sep 20 2019

Drupal is well known for its out-of-the-box stability and security. However, we all know how dangerous the internet can be, with all the risks of having your site attacked. There are particular situations in which extra security measures are needed. This tutorial covers specific ways to secure your Drupal site and the modules involved in the process. 

Let’s get started!

The Out-of-the-Box Features

The Drupal theming system was improved in Drupal 8 with the adoption of the Twig templating language. Twig has many advantages over PHP templating. One is that templates are much easier to read in code. Another is that all variables passed to a template are escaped automatically, which minimizes the risk of a variable injecting output that could break the HTML, and therefore your site. The risk is also reduced because you no longer have to write custom PHP code in your templates.

The PHP input filter module was part of Drupal core in Drupal 7, but in Drupal 8 it is a contributed module. Keeping it out of core eliminates a whole class of vulnerabilities that come from evaluating PHP code entered through the UI.

Drupal 8 implemented the trusted hosts configuration. This allows you to associate the Drupal codebase with specific domains, to prevent HTTP Host header spoofing attacks.
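By way of illustration, this setting lives in settings.php as `$settings['trusted_host_patterns']`. Here is a sketch of adding it from the shell; the settings path and domain are placeholders, and SETTINGS deliberately defaults to a scratch file (point it at your real sites/default/settings.php in practice):

```shell
#!/bin/sh
# Append a trusted-host pattern to a Drupal settings file (sketch).
# SETTINGS defaults to a scratch file so the sketch is safe to run.
SETTINGS="${SETTINGS:-/tmp/settings.php.sketch}"

cat >> "$SETTINGS" <<'EOF'
$settings['trusted_host_patterns'] = [
  '^www\.example\.com$',
  '^example\.com$',
];
EOF
```

With this in place, requests whose Host header matches neither pattern are rejected.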

Due to Drupal 8's new programmatic approach, a module's functionality can be extended by adding plugins. These plugins add behaviors to the module while the code stays clean and easy to analyze.

The use of Composer as a package manager opens the door to new development possibilities for Drupal Open Source software, and it also helps to maintain all modules and keep their dependencies up to date and working properly. This is a key factor in the stability of Drupal systems.

How to Enhance Security on Access - Contrib Modules


There are two alternatives:

  • Lock attack vectors within the system
  • Limit vulnerabilities by restricting or changing system functions and operations, taking the responsibility from the user

Here are some modules, which provide this kind of functionality:

Automated Logout

Link: https://www.drupal.org/project/autologout

This module allows you to configure how long users may remain logged in on your site. You can configure different time periods for different user roles, on the assumption that a user with a higher role, i.e. with access to more valuable information or resources on the site, should need to log in more frequently than a user with a lower role. 

Session Limit

Link: https://www.drupal.org/project/session_limit

With the Session Limit module, you can cap the number of simultaneous sessions per user. If you set the session limit to 1 and open a new session in a second browser, the session in the first browser will automatically expire.
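The behavior can be modeled in a few lines. This toy sketch (file path and session names invented, unrelated to the module's real storage) keeps only the newest LIMIT sessions:

```shell
#!/bin/sh
# Toy model of Session Limit with a limit of 1: opening a second
# session expires the oldest one.
LIMIT=1
SESSIONS=/tmp/demo_sessions
: > "$SESSIONS"

open_session() {
  echo "$1" >> "$SESSIONS"
  # Keep only the newest $LIMIT sessions; older ones expire.
  tail -n "$LIMIT" "$SESSIONS" > "$SESSIONS.tmp" && mv "$SESSIONS.tmp" "$SESSIONS"
}

open_session "browser-A"
open_session "browser-B"
cat "$SESSIONS"   # prints only: browser-B
```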

Login Security

Link: https://www.drupal.org/project/login_security

Login Security allows site administrators to accept authentication only from valid IP addresses; that way, it is possible to grant or restrict access to your site for whole countries if needed, just by configuring the right address ranges. Login Security also restricts the number of login attempts, so brute-force attacks can be stopped. As an extra feature, this module can hide Drupal's login error messages, leaving an attacker with the additional problem of figuring out whether the account they want to access even exists. 

Security Kit    

Link: https://www.drupal.org/project/seckit

Security Kit allows developers to change response headers. This is useful for very specific security needs on sites with high probabilities of Cross-site scripting, Cross-site request forgery, and origin driven attacks.

Honeypot

Link: https://www.drupal.org/project/honeypot

Honeypot prevents robots from filling out the forms on your site by determining whether the user is human. It uses two identification methods. One is a hidden form field that is not visible to humans; if that field has been filled out, Honeypot detects this and blocks the submission. The other is measuring the time, in seconds, in which the form was filled out: a very low value (2-3 seconds) points to a robot, so the module blocks that submission too.
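The two checks reduce to a simple predicate. This sketch is an illustration of the idea, not Honeypot's actual code; the 3-second threshold mirrors the description above:

```shell
#!/bin/sh
# A submission is treated as a bot if the invisible honeypot field was
# filled in, or if the form came back suspiciously fast.
is_bot() {
  hidden_field="$1"     # value of the field humans never see
  elapsed_seconds="$2"  # seconds taken to fill out the form
  [ -n "$hidden_field" ] && return 0
  [ "$elapsed_seconds" -lt 3 ] && return 0
  return 1
}

is_bot ""            45 && echo blocked || echo accepted   # human: accepted
is_bot "http://spam" 40 && echo blocked || echo accepted   # bot: blocked
is_bot ""             1 && echo blocked || echo accepted   # too fast: blocked
```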

CAPTCHA 

Link: https://www.drupal.org/project/captcha

A CAPTCHA is a challenge-response test that determines whether the user is human. It blocks fake form submissions because bots cannot decipher the CAPTCHA text or image.

Auditing - Checking Procedures

It is important to review the system logs continuously, and even more so if your site has already been compromised. Analyzing this data also helps you track transactions within your system and check its ongoing state. A rule of thumb: always log as much data as possible.

Some modules that provide this type of functionality are listed below:

Site Audit

Link: https://www.drupal.org/project/site_audit

Site Audit performs a static analysis of the whole system against a set of recommended configurations, and stores a report of every audit. By performing this check, you can be confident that your site meets the required security standards.

Security Review

Link: https://www.drupal.org/project/security_review

Security Review, like Site Audit, analyzes the system, but against a checklist of potential security problems on your site, such as file permissions, text formats, and potentially malicious PHP or JavaScript code on the front end. It also stores reports.

Login History

Link: https://www.drupal.org/project/login_history 

Login History records every user login in the database, including the timestamp, IP address, and user agent. As stated before, it is always good to log as much information as possible.

Authentication Measures

Often, the more valuable the information, the more usability has to be sacrificed. In exchange, users accept the extra hassle of additional authentication procedures.

The modules that you can use for this purpose are listed below: 

Two-factor Authentication  

Link: https://www.drupal.org/project/tfa

Two-factor Authentication provides an additional verification step that confirms the identity of the authenticating user. It also provides an API for plugins (provided by other modules) that integrate various authentication services, such as the Google Authenticator module.

simpleSAMLphp Authentication 

Link: https://www.drupal.org/project/simplesamlphp_auth

This module lets you replace the default Drupal login with a single sign-on (SSO) implementation. It communicates with identity providers to authenticate users, so you can validate a user's identity through a service like Twitter, Facebook, or Google.

Password Policy

Link: https://www.drupal.org/project/password_policy

The Password Policy module defines a set of rules to force users to have strong passwords. It also forces users to change their password from time to time, depending on the configured options. Password Policy provides an API to define your own set of rules. 
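The idea of rule-based password constraints can be illustrated as follows (a hedged Python sketch, not the module's actual API; the rules shown are hypothetical examples):

```python
import re

# Hypothetical rules, in the spirit of Password Policy's configurable constraints.
RULES = [
    (lambda pw: len(pw) >= 12,                   "at least 12 characters"),
    (lambda pw: bool(re.search(r"[A-Z]", pw)),   "an uppercase letter"),
    (lambda pw: bool(re.search(r"[0-9]", pw)),   "a digit"),
    (lambda pw: bool(re.search(r"[^\w\s]", pw)), "a special character"),
]

def check_password(pw):
    """Return the list of unmet requirements; an empty list means the password passes."""
    return [message for test, message in RULES if not test(pw)]
```

A policy engine built this way would keep rejecting a password until check_password() returns an empty list.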

Password Strength

Link: https://www.drupal.org/project/password_strength

This module provides a star rating widget, for users to test the strength of their passwords. You can leverage the API of Password Policy to force users to enter a password with a high rating (minimum rating).

Encryption

Data at rest (data that is not actively in use) should be encrypted to prevent attacks of all kinds. Encryption can be applied at every level of the stack:

  • Hosting
  • Server
  • CDN
  • Drupal system

As a rule of thumb, and to guarantee the highest security, perform encryption as low in this stack as possible.

You should always serve all of your web traffic over HTTPS (TLS), and you should also ask your hosting or cloud provider about full disk encryption.

Some useful modules are:

Key

Link: https://www.drupal.org/project/key

The Key module manages system and API keys. It provides an API to store and retrieve sensitive keys, such as API keys or encryption keys, and lets the site administrator decide where in the system these keys are stored. Examples are the keys for services like AWS, PayPal, or MailChimp.

Encrypt

Link: https://www.drupal.org/project/encrypt

The Encrypt module provides a common API for encrypting and decrypting Drupal application data. Its API is leveraged by many modules: modules that supply encryption methods (Real AES, Diffuse), modules that encrypt data, like Field Encrypt and File Encrypt, and modules for more specific use cases, like Pubkey Encrypt.

File Encrypt

Link: https://www.drupal.org/project/file_encrypt

This module focuses on the file system: it ensures that files are stored encrypted.

Field Encryption

Link: https://www.drupal.org/project/field_encrypt

This module encrypts data at the field level, that is, it encrypts field values.

DevOps (Development and Operations)

190919 drupal security 004

The development process is critical to proactively maintaining security. You should always use code repositories and pull requests for all of your files. Other measures include performing regular code reviews, tagging each code release, keeping the site code (including JS libraries) up to date, and avoiding manual procedures, which invite mistakes. Automate your scripts and use a tool like Drush.

Some of the relevant modules in this category are:

Coder

Link: https://www.drupal.org/project/coder

This module provides PHP_CodeSniffer extensions that test your site's code against the Drupal.org coding standards. Coder does not change anything in your UI; it is a command-line tool.

Hacked

Link: https://www.drupal.org/project/hacked

The Hacked module scans the code of your core and contrib folders and compares it against the code hosted at Drupal.org. It shows the differences between both codebases, so you can take the proper measures regarding your code. 

Backup and migrate

Link: https://www.drupal.org/project/backup_migrate

This is a classic among Drupal modules. Backup and migrate performs regular backups of the codebase and of the database, so you can restore them, for example on a fresh installation. This is very useful if your system has been compromised, and you want to restore it. 

Environment

Securing the infrastructure in which the system is hosted is as important as securing Drupal itself. Always try to mitigate attacks before they happen. The following checklist should help with that purpose.

  1. Use a coding workflow - making sure that the best code ends up in the production environment. 
  2. Make a very detailed analysis of the logs - there are very useful tools for this matter, like Splunk or the ELK stack.
  3. If possible, use cloud-based environments - these are more secure than traditional hosted environments. 
  4. Use a CDN whenever you can - it acts as a firewall, stopping malicious attacks early in the process.
  5. Make sure you have an up-to-date failover environment and test what would happen in case of a failover. 

Please, leave us your comments and questions below.  Thanks for reading!


About the author

Jorge has lived in Ecuador and Germany and is now back in his homeland, Colombia. He spends his time translating from English and German into Spanish, and enjoys playing with Drupal and other Open Source content management systems and technologies.
Sep 19 2019

You will notice that, along with thousands of websites around the world, Drupal.org posted a banner message this week declaring we are opting in to a global Digital Climate Strike on 20th September.

Will @drupal website add the #DigitalClimateStrike banner? @baluertl requested it and mocked up a visual... https://t.co/XcIj9Gf173 pic.twitter.com/Zl0ctyc7G6

— ClimateAction.tech (@climateActTech) September 18, 2019

Of course, because Drupal.org is an essential service to over a million websites around the world, we have to be sure that we still allow them all to continue to access resources here. As such, the full page banner that will appear on websites on the 20th September will be configured to allow visitors to cancel it, should they need to.

Fundamentally, the Drupal Association wants to be a good steward of the environment and recognizes the impact that technology has on environmental issues. We are committed to exploring ways for the Drupal project to reduce its carbon footprint and to become a more eco-friendly platform. Today, we stand with others in the technology industry to educate and inform the general public about some of the ways that the tech industry can support environmental causes.

If the environmental sustainability of Drupal websites is a subject as close to your hearts as it is to ours, you might like to know that recently a #sustainable Slack channel was created for discussion on the topic.

Sep 19 2019

The Drupal 8 View Unpublished module is a simple module that provides a permission allowing specific roles to view unpublished content. It's useful for building out content editor workflows on your Drupal 8 website.

Download and install the View Unpublished module just like you would any other module.

composer require drupal/view_unpublished

After installing the module, go to the permissions page and search for the View Unpublished section. Here you can set which roles can view unpublished content on the site. After setting the permissions, you will likely need to clear the cache (e.g. drush cr) and rebuild the content access permissions before your users can see the unpublished content.

That’s all there is to this module!

Sep 19 2019

Ubercart, once the go-to commerce option for Drupal and the precursor to Drupal Commerce, is slowly fading away. Its usage has been declining for years, and a stable Drupal 8 release will never happen. Even one of the original creators has moved on to support a new Drupal ecommerce solution instead of continuing with Ubercart. If you're running an ecommerce site that uses Ubercart, this post is for you. Our goal is to show you why you should consider moving off of Ubercart now instead of waiting until it finally reaches end of life.

The decline of Ubercart today

As mentioned in the introduction, Ubercart usage has been declining for years. The Drupal 7 version of the module is where it saw most of its success, with usage peaking in 2014/2015, but it has been dropping continuously since then. The following graph is a snapshot of Ubercart's usage history as recorded on Drupal.org.

Ubercart usage history (source)

Ryan Szrama, one of the original creators of Ubercart, moved away from it and started the Commerce module for Drupal as a replacement. Since then, the majority of the Drupal ecommerce community has moved along with him, making Drupal Commerce the new go-to option for ecommerce built on Drupal. Not only does Commerce now have more installs for both Drupal 7 and Drupal 8, it also has a much more active development community.

Commerce usage history (source)

Ubercart and Drupal 8

The Ubercart module has never seen a proper Drupal 8 release. Development is stuck in alpha, and with no new release in over three years, a stable Drupal 8 version is never going to happen.

What “alpha” means

In software development, alpha is a term given to a software release that is still very much in development and not ready for production. Here’s the description of alpha from Drupal.org.

alpha: Most reported errors are resolved, but there may still be serious outstanding known issues, including security issues. Project is not thoroughly tested, so there may also be many unknown bugs. There is a README.txt/README.md that documents the project and its API (if any). The API and DB schema may be unstable, but all changes to these are reported in the release notes, and hook_update_N is implemented to preserve data through schema changes, but no other upgrade/update path. Not suitable for production sites. Target audience is developers who wants to participate in testing, debugging and development of the project.

In contrast, the Drupal Commerce module has had many full production-ready releases for Drupal 8 and follows a release schedule for bug fixes and new features. The group behind Drupal Commerce is actively developing the core software and the wider community is also active in supporting the project.

Ubercart and Drupal 7

What little Ubercart development still happens focuses on maintaining the Drupal 7 version. The catch is that Drupal 7 reaches end of life in November 2021, which will likely spell the effective end of Ubercart as well. If you're using Ubercart on Drupal 7 and want new features and active development, that realistically ended years ago, when the majority of the contributor community moved away from the project.

Here’s a couple snapshots of the commit history for both the core Ubercart module and the core Drupal Commerce module. A commit is a term given to code changes that have been added to the module. Commits are typically code improvements, new features, bug fixes and security updates that have been written, tested and approved for release.

Ubercart commit history

Commerce commit history

When looking at the graphs above, it's important to know that it's common to see the number of commits trail off over time. The majority of the core software is built early on, so fewer commits are made as development of the core ramps down. What matters is that development of Drupal Commerce, unlike Ubercart, is still continuing: new features and code improvements are actively being made to the core Commerce software but not to Ubercart.

Another point to note is that as commits to the core software ramp down, development effort typically shifts to community-built extensions, which isn't reflected in the graphs above. The extension ecosystem supplies add-ons and features not found in the core software. In Ubercart's case, this community development is very small and limited, whereas the Drupal Commerce community is very active and engaged.

Where to go from Ubercart?

You’ve probably guessed this already, but the clear path moving away from Ubercart is to Drupal Commerce. Commerce is the Ubercart replacement and it’s capable of so much more. It’s also Drupal 8 ready and will provide a painless transition to Drupal 9, when that happens.

Commerce improvements over Ubercart

The following is a list of improvements Commerce for Drupal 8 has over Ubercart:

Drupal 8 improvements over Drupal 7 include:

  • Robust caching and performance for authenticated or unique users, very important for any ecommerce site
  • Drupal's new rolling release schedule: no more large jumps between versions, which makes updates easier
  • Modern object-oriented design, which makes testing, extension, and use of 3rd-party libraries easier. Commerce follows all of the architectural improvements of Drupal 8 and has, in some cases, led the way by innovating first.

Commerce improvements over Ubercart include:

  • More secure payment architecture. Commerce encourages the lowest level of PCI risk possible and enforces good practices with its payment API, compared to Ubercart's primarily DIY payment model.
  • Proper variation based product model with unique SKUs for each variation
  • Robust and accurate promotions, discounts and pricing adjustments. If you’ve struggled with pricing accuracy in Ubercart you’ll understand.
  • Multi-store and multi-currency support is robust and built in.
  • And the list goes on…

Why move now instead of later?

While you could wait until Drupal 7's end of life to move your ecommerce site off of Ubercart and onto Drupal Commerce, this is not something we would ever recommend. The truth is that by waiting until the very end, you're taking on a lot of unnecessary risk for both your business and your customers. You don't want to be scrambling to make it happen quickly when suddenly you're no longer getting security updates for either Drupal 7 or Ubercart. That is a worst-case scenario, and you would be wise to avoid it.

Right now is an ideal time to consider making the switch. Both Drupal 8 and Commerce have been used in the wild for years now and the software is very stable. Most likely, all of the features and functionality that you currently use have already been ported over to the new versions, and tools exist to help migrate Drupal 7 and Ubercart over to Drupal 8 and Commerce. Really, from a technical standpoint there's no reason not to make the move now.

Of course, it can’t be denied that completing a migration to the latest and greatest does take time and effort to do, and there will be a cost involved. All the more reason to start the process now. Right now you have the time to find the help you need and to properly budget and plan how your migration will be executed. Right now it’s not a hassle, it’s an opportunity to make your business better for both you and your customers while at the same time correcting any of the little things that bother you about your site now.

Acro Media has been helping ecommerce owners and operators with consultation and development for well over 10 years. We’re intimate with both Ubercart and Drupal Commerce, and we even staff some of the talented people who built Commerce and the migration tools everyone uses to make the move. If you want to learn more about how your migration would happen, we would love to talk. Click the link below to get started.

Sep 19 2019

We're back!  Our normally scheduled call to chat about all things Drupal and nonprofits will happen TODAY, September 19, at 1pm ET / 10am PT. (Convert to your local time zone.)

Feel free to share your thoughts and discussion points ahead of time in our collaborative Google doc: https://nten.org/drupal/notes

We have an hour to chat so bring your best Drupal topics and let's do this thing!

Some examples to get your mind firing: how do I recreate [feature] on my Drupal 7 site in Drupal 8? I need to explain [complicated thing] to a non-technical stakeholder -- any advice? How can I get Drupal and my CRM to play nicely?

This free call is sponsored by NTEN.org but open to everyone.

View notes of previous months' calls.

Sep 19 2019

Attending DrupalCon is an investment in your skills, professional development, and in building community connections. 

A lot of attendees don't buy their own tickets—most need to convince someone else (their boss) of the value.

Sep 19 2019

To scale and sustain Open Source ecosystems in a more efficient and fair manner, Open Source projects need to embrace new governance, coordination and incentive models.

A scale that is in balance

In many ways, Open Source has won. Most people know that Open Source provides better quality software, at a lower cost, without vendor lock-in. But despite Open Source being widely adopted and more than 30 years old, scaling and sustaining Open Source projects remains challenging.

Not a week goes by that I don't get asked a question about Open Source sustainability. How do you get others to contribute? How do you get funding for Open Source work? But also, how do you protect against others monetizing your Open Source work without contributing back? And what do you think of MongoDB, Cockroach Labs or Elastic changing their license away from Open Source?

This blog post talks about how we can make it easier to scale and sustain Open Source projects, Open Source companies and Open Source ecosystems. I will show that:

  • Small Open Source communities can rely on volunteers and self-governance, but as Open Source communities grow, their governance model most likely needs to be reformed so the project can be maintained more easily.
  • There are three models for scaling and sustaining Open Source projects: self-governance, privatization, and centralization. All three models aim to reduce coordination failures, but require Open Source communities to embrace forms of monitoring, rewards and sanctions. While this thinking is controversial, it is supported by decades of research in adjacent fields.
  • Open Source communities would benefit from experimenting with new governance models, coordination systems, license innovation, and incentive models.

Some personal background

Scaling and sustaining Open Source projects and Open Source businesses has been the focus of most of my professional career.

Drupal, the Open Source project I founded 18 years ago, is used by more than one million websites and reaches pretty much everyone on the internet.

With over 8,500 individuals and about 1,100 organizations contributing to Drupal annually, Drupal is one of the healthiest and contributor-rich Open Source communities in the world.

For the past 12 years, I've also helped build Acquia, an Open Source company that heavily depends on Drupal. With almost 1,000 employees, Acquia is the largest contributor to Drupal, yet responsible for less than 5% of all contributions.

This article is not about Drupal or Acquia; it's about scaling Open Source projects more broadly.

I'm interested in how to make Open Source production more sustainable, more fair, more egalitarian, and more cooperative. I'm interested in doing so by redefining the relationship between end users, producers and monetizers of Open Source software through a combination of technology, market principles and behavioral science.

Why it must be easier to scale and sustain Open Source

We need to make it easier to scale and sustain both Open Source projects and Open Source businesses:

  1. Making it easier to scale and sustain Open Source projects might be the only way to solve some of the world's most important problems. For example, I believe Open Source to be the only way to build a pro-privacy, anti-monopoly, open web. It requires Open Source communities to be long-term sustainable — possibly for hundreds of years.
  2. Making it easier to grow and sustain Open Source businesses is the last hurdle that prevents Open Source from taking over the world. I'd like to see every technology company become an Open Source company. Today, Open Source companies are still extremely rare.

The alternative is that we are stuck in the world we live in today, where proprietary software dominates most facets of our lives.

Disclaimers

This article is focused on Open Source governance models, but there is more to growing and sustaining Open Source projects. Top of mind is the need for Open Source projects to become more diverse and inclusive of underrepresented groups.

Second, I understand that the idea of systematizing Open Source contributions won't appeal to everyone. Some may argue that the suggestions I'm making go against the altruistic nature of Open Source. I agree. However, I'm also looking at Open Source sustainability challenges from the vantage point of running both an Open Source project (Drupal) and an Open Source business (Acquia). I'm not implying that every community needs to change their governance model, but simply offering suggestions for communities that operate with some level of commercial sponsorship, or communities that struggle with issues of long-term sustainability.

Lastly, this post is long and dense. I'm 700 words in, and I haven't started yet. Given that this is a complicated topic, there is an important role for more considered writing and deeper thinking.

Defining Open Source Makers and Takers

Makers

Some companies are born out of Open Source, and as a result believe deeply and invest significantly in their respective communities. With their help, Open Source has revolutionized software for the benefit of many. Let's call these types of companies Makers.

As the name implies, Makers help make Open Source projects; from investing in code, to helping with marketing, growing the community of contributors, and much more. There are usually one or more Makers behind the success of large Open Source projects. For example, MongoDB helps make MongoDB, Red Hat helps make Linux, and Acquia (along with many other companies) helps make Drupal.

Our definition of a Maker assumes intentional and meaningful contributions and excludes those whose only contributions are unintentional or sporadic. For example, a public cloud company like Amazon can provide a lot of credibility to an Open Source project by offering it as-a-service. The resulting value of this contribution can be substantial; however, that doesn't make Amazon a Maker by our definition.

I use the term Makers to refer to anyone who purposely and meaningfully invests in the maintenance of Open Source software, i.e. by making engineering investments, writing documentation, fixing bugs, organizing events, and more.

Takers

Now that Open Source adoption is widespread, lots of companies, from technology startups to technology giants, monetize Open Source projects without contributing back to those projects. Let's call them Takers.

I understand and respect that some companies can give more than others, and that many might not be able to give back at all. Maybe one day, when they can, they'll contribute. We limit the label of Takers to companies that have the means to give back, but choose not to.

The difference between Makers and Takers is not always 100% clear, but as a rule of thumb, Makers directly invest in growing both their business and the Open Source project. Takers are solely focused on growing their business and let others take care of the Open Source project they rely on.

Organizations can be both Takers and Makers at the same time. For example, Acquia, my company, is a Maker of Drupal, but a Taker of Varnish Cache. We use Varnish Cache extensively but we don't contribute to its development.

A scale that is not in balance

Takers hurt Makers

To be financially successful, many Makers mix Open Source contributions with commercial offerings. Their commercial offerings usually take the form of proprietary or closed source IP, which may include a combination of premium features and hosted services that offer performance, scalability, availability, productivity, and security assurances. This is known as the Open Core business model. Some Makers offer professional services, including maintenance and support assurances.

When Makers start to grow and demonstrate financial success, the Open Source project that they are associated with begins to attract Takers. Takers will usually enter the ecosystem with a commercial offering comparable to the Makers', but without making a similar investment in Open Source contribution. Because Takers don't contribute back meaningfully to the Open Source project that they take from, they can focus disproportionately on their own commercial growth.

Let's look at a theoretical example.

When a Maker has $1 million to invest in R&D, they might choose to invest $500k in Open Source and $500k in the proprietary IP behind their commercial offering. The Maker intentionally balances growing the Open Source project they are connected to with making money. To be clear, the investment in Open Source is not charity; it helps make the Open Source project competitive in the market, and the Maker stands to benefit from that.

When a Taker has $1 million to invest in R&D, nearly all of their resources go to the development of proprietary IP behind their commercial offerings. They might invest $950k in their commercial offerings that compete with the Maker's, and $50k towards Open Source contribution. Furthermore, the $50k is usually focused on self-promotion rather than being directed at improving the Open Source project itself.

A visualization of the Maker and Taker math

Effectively, the Taker has put itself at a competitive advantage compared to the Maker:

  • The Taker takes advantage of the Maker's $500k investment in Open Source contribution while only investing $50k themselves. Important improvements happen "for free" without the Taker's involvement.
  • The Taker can out-innovate the Maker in building proprietary offerings. When a Taker invests $950k in closed-source products compared to the Maker's $500k, the Taker can innovate 90% faster. The Taker can also use the delta to disrupt the Maker on price.
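The arithmetic behind these two bullets can be made explicit (a toy Python calculation using the figures from the example above; it assumes, as the text does, that innovation speed is proportional to proprietary R&D spend):

```python
# R&D budgets from the example above.
maker_open_source = 500_000
maker_proprietary = 500_000

taker_open_source = 50_000
taker_proprietary = 950_000

# The Taker benefits from the Maker's Open Source investment while spending a tenth as much.
open_source_leverage = maker_open_source / taker_open_source          # 10x

# The Taker outspends the Maker on proprietary R&D by 90%.
proprietary_advantage = taker_proprietary / maker_proprietary - 1     # 0.9, i.e. "90% faster"
```

The $450k spending gap is the "delta" the Taker can also use to undercut the Maker on price.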

In other words, Takers reap the benefits of the Makers' Open Source contribution while simultaneously having a more aggressive monetization strategy. The Taker is likely to disrupt the Maker. On an equal playing field, the only way the Maker can defend itself is by investing more in its proprietary offering and less in the Open Source project. To survive, it has to behave like the Taker to the detriment of the larger Open Source community.

Takers harm Open Source projects. An aggressive Taker can induce Makers to behave in a more selfish manner and reduce or stop their contributions to Open Source altogether. Takers can turn Makers into Takers.

Open Source contribution and the Prisoner's Dilemma

The example above can be described as a Prisoner's Dilemma. The Prisoner's Dilemma is a standard example of game theory, which allows the study of strategic interaction and decision-making using mathematical models. I won't go into detail here, but for the purpose of this article, it helps me simplify the above problem statement. I'll use this simplified example throughout the article.

Imagine an Open Source project with only two companies supporting it. The rules of the game are as follows:

  • If both companies contribute to the Open Source project (both are Makers), the total reward is $100. The reward is split evenly and each company makes $50.
  • If one company contributes while the other company doesn't (one Maker, one Taker), the Open Source project won't be as competitive in the market, and the total reward will only be $80. The Taker gets $60 as they have the more aggressive monetization strategy, while the Maker gets $20.
  • If both players choose not to contribute (both are Takers), the Open Source project will eventually become irrelevant. Both walk away with just $10.

This can be summarized in a pay-off matrix:

                               Company A contributes       Company A doesn't contribute
Company B contributes          A makes $50, B makes $50    A makes $60, B makes $20
Company B doesn't contribute   A makes $20, B makes $60    A makes $10, B makes $10

In the game, each company must decide whether or not to contribute, but Company A doesn't know what Company B decides, and vice versa.

The Prisoner's Dilemma states that each company will optimize its own profit and not contribute. Because both companies are rational, both will make that same decision. In other words, when both companies use their "best individual strategy" (be a Taker, not a Maker), they produce an equilibrium that yields the worst possible result for the group: the Open Source project will suffer and as a result they only make $10 each.

A real-life example of the Prisoner's Dilemma that many people can relate to is washing the dishes in a shared house. By not washing dishes, an individual can save time (individually rational), but if that behavior is adopted by every person in the house, there will be no clean plates for anyone (collectively irrational). How many of us have tried to get away with not washing the dishes? I know I have.

Fortunately, the problem of individually rational actions leading to collectively adverse outcomes is not new or unique to Open Source. Before I look at potential models to better sustain Open Source projects, I will take a step back and look at how this problem has been solved elsewhere.

Open Source: a public good or a common good?

In economics, the concepts of public goods and common goods are decades old, and have similarities to Open Source.

Examples of common goods (fishing grounds, oceans, parks) and public goods (lighthouses, radio, street lighting)

Public goods and common goods are what economists call non-excludable, meaning it's hard to exclude people from using them. For example, everyone can benefit from fishing grounds, whether they contribute to their maintenance or not. Simply put, public goods and common goods have open access.

Common goods are rivalrous; if one individual catches a fish and eats it, the other individual can't. In contrast, public goods are non-rivalrous; someone listening to the radio doesn't prevent others from listening to the radio.

I've long believed that Open Source projects are public goods: everyone can use Open Source software (non-excludable) and someone using an Open Source project doesn't prevent someone else from using it (non-rivalrous).

However, through the lens of Open Source companies, Open Source projects are also common goods; everyone can use Open Source software (non-excludable), but when an Open Source end user becomes a customer of Company A, that same end user is unlikely to become a customer of Company B (rivalrous).

For end users, Open Source projects are public goods; the shared resource is the software. But for Open Source companies, Open Source projects are common goods; the shared resource is the (potential) customer.

Next, I'd like to extend the distinction between "Open Source software being a public good" and "Open Source customers being a common good" to the free-rider problem: we define software free-riders as those who use the software without ever contributing back, and customer free-riders (or Takers) as those who sign up customers without giving back.

All Open Source communities should encourage software free-riders. Because the software is a public good (non-rivalrous), a software free-rider doesn't exclude others from using the software. Hence, it's better to have a user for your Open Source project, than having that person use your competitor's software. Furthermore, a software free-rider makes it more likely that other people will use your Open Source project (by word of mouth or otherwise). When some portion of those other users contribute back, the Open Source project benefits. Software free-riders can have positive network effects on a project.

However, when the success of an Open Source project depends largely on one or more corporate sponsors, the Open Source community should not forget or ignore that customers are a common good. Because a customer can't be shared among companies, it matters a great deal for the Open Source project where that customer ends up. When the customer signs up with a Maker, we know that a certain percentage of the revenue associated with that customer will be invested back into the Open Source project. When a customer signs up with a customer free-rider or Taker, the project doesn't stand to benefit. In other words, Open Source communities should find ways to route customers to Makers.

Both volunteer-driven and sponsorship-driven Open Source communities should encourage software free-riders, but sponsorship-driven Open Source communities should discourage customer free-riders.

Lessons from decades of Common Goods management

Hundreds of research papers and books have been written on public good and common good governance. Over the years, I have read many of them to figure out what Open Source communities can learn from successfully managed public goods and common goods.

Some of the most instrumental research was Garrett Hardin's Tragedy of the Commons and Mancur Olson's work on Collective Action. Both Hardin and Olson concluded that groups don't self-organize to maintain the common goods they depend on.

As Olson writes in the beginning of his book, The Logic of Collective Action: "Unless the number of individuals is quite small, or unless there is coercion or some other special device to make individuals act in their common interest, rational, self-interested individuals will not act to achieve their common or group interest."

Consistent with the Prisoner's Dilemma, Hardin and Olson show that groups don't act on their shared interests. Members are disincentivized from contributing when other members can't be excluded from the benefits. It is individually rational for a group's members to free-ride on the contributions of others.

Dozens of academics, Hardin and Olson included, argued that an external agent is required to solve the free-rider problem. The two most common approaches are (1) centralization and (2) privatization:

  1. When a common good is centralized, the government takes over the maintenance of the common good. The government or state is the external agent.
  2. When a common good is privatized, one or more members of the group receive selective benefits or exclusive rights to harvest from the common good in exchange for its ongoing maintenance. In this case, one or more corporations act as the external agent.

The widespread advice to centralize and privatize common goods has been followed extensively in most countries; today, natural resources are typically managed either by the government or by commercial companies, but no longer directly by their users. Examples include public transport, water utilities, fishing grounds, parks, and much more.

Overall, the privatization and centralization of common goods has been very successful; in many countries, public transport, water utilities and parks are maintained better than volunteer contributors would have on their own. I certainly value that I don't have to help maintain the train tracks before my daily commute to work, or that I don't have to help mow the lawn in our public park before I can play soccer with my kids.

For years, it was a long-held belief that centralization and privatization were the only way to solve the free-rider problem. It was Elinor Ostrom who observed that a third solution existed.

Ostrom found hundreds of cases where common goods are successfully managed by their communities, without the oversight of an external agent. From the management of irrigation systems in Spain to the maintenance of mountain forests in Japan — all have been successfully self-managed and self-governed by their users. Many have been long-enduring as well; the youngest examples she studied were more than 100 years old, and the oldest exceed 1,000 years.

Ostrom studied why some efforts to self-govern commons have failed and why others have succeeded. She summarized the conditions for success in the form of core design principles. Her work led her to win the Nobel Prize in Economics in 2009.

Interestingly, all successfully managed commons studied by Ostrom switched at some point from open access to closed access. As Ostrom writes in her book, Governing the Commons: "For any appropriator to have a minimal interest in coordinating patterns of appropriation and provision, some set of appropriators must be able to exclude others from access and appropriation rights." Ostrom uses the term appropriator to refer to those who use or withdraw from a resource. Examples would be fishers, irrigators, herders, etc. — or companies trying to turn Open Source users into paying customers. In other words, the shared resource must be made exclusive (to some degree) in order to incentivize members to manage it. Put differently, Takers will be Takers until they have an incentive to become Makers.

Once access is closed, explicit rules need to be established to determine how resources are shared, who is responsible for maintenance, and how self-serving behaviors are suppressed. In all successfully managed commons, the regulations specify (1) who has access to the resource, (2) how the resource is shared, (3) how maintenance responsibilities are shared, (4) who inspects that rules are followed, (5) what fines are levied against anyone who breaks the rules, (6) how conflicts are resolved and (7) a process for collectively evolving these rules.

Three patterns for long-term sustainable Open Source

Studying the work of Garrett Hardin (Tragedy of the Commons), the Prisoner's Dilemma, Mancur Olson (Collective Action) and Elinor Ostrom's core design principles for self-governance, a number of shared patterns emerge. When applied to Open Source, I'd summarize them as follows:

  1. Common goods fail because of a failure to coordinate collective action. To scale and sustain an Open Source project, Open Source communities need to transition from individual, uncoordinated action to cooperative, coordinated action.
  2. Cooperative, coordinated action can be accomplished through privatization, centralization, or self-governance. All three work — and can even be mixed.
  3. Successful privatization, centralization, and self-governance all require clear rules around membership, appropriation rights, and contribution duties. In turn, this requires monitoring and enforcement, either by an external agent (centralization), a private agent (privatization), or by members of the group itself (self-governance).

Next, let's see how these three concepts — centralization, privatization and self-governance — could apply to Open Source.

Model 1: Self-governance in Open Source

For small Open Source communities, self-governance is very common; it's easy for its members to communicate, learn who they can trust, share norms, agree on how to collaborate, etc.

As an Open Source project grows, contribution becomes more complex and cooperation more difficult: it becomes harder to communicate, build trust, agree on how to cooperate, and suppress self-serving behaviors. The incentive to free-ride grows.

You can scale successful cooperation by having strong norms that encourage other members to do their fair share and by having face-to-face events, but eventually, that becomes hard to scale as well.

As Ostrom writes in Governing the Commons: "Even in repeated settings where reputation is important and where individuals share the norm of keeping agreements, reputation and shared norms are insufficient by themselves to produce stable cooperative behavior over the long run." And: "In all of the long-enduring cases, active investments in monitoring and sanctioning activities are quite apparent."

To the best of my knowledge, no Open Source project currently implements Ostrom's design principles for successful self-governance. To understand how Open Source communities might, let's go back to our running example.

Our two companies would negotiate rules for how to share the rewards of the Open Source project, and what level of contribution would be required in exchange. They would set up a contract where they both agree on how much each company can earn and how much each company has to invest. During the negotiations, various strategies can be proposed for how to cooperate. However, both parties need to agree on a strategy before they can proceed. Because they are negotiating this contract among themselves, no external agent is required.

These negotiations are non-trivial. As you can imagine, any proposal that does not involve splitting the $100 fifty-fifty is likely rejected. The most likely equilibrium is for both companies to contribute equally and to split the reward equally. Furthermore, to arrive at this equilibrium, one of the two companies would likely have to go backwards in revenue, which might not be agreeable.

Needless to say, this gets even more difficult in a scenario where there are more than two companies involved. Today, it's hard to fathom how such a self-governance system can successfully be established in an Open Source project. In the future, Blockchain-based coordination systems might offer technical solutions for this problem.

Large groups are less able to act in their common interest than small ones because (1) the complexity increases and (2) the benefits diminish. Until we have better community coordination systems, it's easier for large groups to transition from self-governance to privatization or centralization than to scale self-governance.

The concept of major projects growing out of self-governed volunteer communities is not new to the world. The first trade routes were ancient trackways which citizens later developed on their own into roads suited for wheeled vehicles. Privatization of roads improved transportation for all citizens. Today, we certainly appreciate that our governments maintain the roads.

The roads system evolving from self-governance to privatization, and from privatization to centralization

Model 2: Privatization of Open Source governance

In this model, Makers are rewarded unique benefits not available to Takers. These exclusive rights provide Makers a commercial advantage over Takers, while simultaneously creating a positive social benefit for all the users of the Open Source project, Takers included.

For example, Mozilla has the exclusive right to use the Firefox trademark and to set up paid search deals with search engines like Google, Yandex and Baidu. In 2017 alone, Mozilla made $542 million from searches conducted using Firefox. As a result, Mozilla can make continued engineering investments in Firefox. Millions of people and organizations benefit from that every day.

Another example is Automattic, the company behind WordPress. Automattic is the only company that can use WordPress.com, and is in the unique position to make hundreds of millions of dollars from WordPress' official SaaS service. In exchange, Automattic invests millions of dollars in the Open Source WordPress each year.

Recently, there have been examples of Open Source companies like MongoDB, Redis, Cockroach Labs and others adopting stricter licenses because of perceived (and sometimes real) threats from public cloud companies that behave as Takers. The ability to change the license of an Open Source project is a form of privatization.

Model 3: Centralization of Open Source governance

Let's assume a government-like central authority can monitor Open Source companies A and B, with the goal of rewarding and penalizing them for contribution, or the lack thereof. When a company follows a cooperative strategy (being a Maker), it is rewarded $25, and when it follows a defect strategy (being a Taker), it is charged a $25 penalty. We can update the pay-off matrix introduced above as follows:

                               Company A contributes       Company A doesn't contribute
Company B contributes          A makes $75 ($50 + $25)     A makes $35 ($60 - $25)
                               B makes $75 ($50 + $25)     B makes $45 ($20 + $25)
Company B doesn't contribute   A makes $45 ($20 + $25)     A makes -$15 ($10 - $25)
                               B makes $35 ($60 - $25)     B makes -$15 ($10 - $25)

We took the values from the pay-off matrix above and applied the rewards and penalties. The result is that both companies are incentivized to contribute and the optimal equilibrium (both become Makers) can be achieved.
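As a sketch (again illustrative, not from the original argument), the adjustment can be applied in code. Note that applying a flat $25 penalty arithmetically leaves two mutual Takers at $10 - $25 = -$15 each:

```python
# Base pay-off matrix from the earlier example (amounts in dollars).
base = {
    ("maker", "maker"): (50, 50),
    ("maker", "taker"): (20, 60),
    ("taker", "maker"): (60, 20),
    ("taker", "taker"): (10, 10),
}

def adjust(strategies, pays):
    """Apply the central authority's rule: +$25 for a Maker, -$25 for a Taker."""
    return tuple(pay + (25 if strategy == "maker" else -25)
                 for strategy, pay in zip(strategies, pays))

adjusted = {pair: adjust(pair, pays) for pair, pays in base.items()}

# With rewards and penalties in place, contributing is Company A's best
# strategy no matter what Company B does (and symmetrically for B):
for b_strategy in ("maker", "taker"):
    assert adjusted[("maker", b_strategy)][0] > adjusted[("taker", b_strategy)][0]
```

The loop verifies the text's claim: once the authority shifts the pay-offs, being a Maker dominates, and the cooperative equilibrium becomes individually rational.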

The money for rewards could come from various fundraising efforts, including membership programs or advertising (just as a few examples). However, more likely is the use of indirect monetary rewards.

One way to implement this is Drupal's credit system. Drupal's non-profit organization, the Drupal Association, monitors who contributes what. Each contribution earns you credits, and the credits are used to provide visibility to Makers. The more you contribute, the more visibility you get on Drupal.org (visited by 2 million people each month) or at Drupal conferences (called DrupalCons, visited by thousands of people each year).

A screenshot of an issue comment on Drupal.org. You can see that jamadar worked on this patch as a volunteer, but also as part of his day job working for TATA Consultancy Services on behalf of their customer, Pfizer.

While there is a lot more the Drupal Association could and should do to balance its Makers and Takers and achieve a more optimal equilibrium for the Drupal project, it's an emerging example of how an Open Source non-profit organization can act as a regulator that monitors and maintains the balance of Makers and Takers.

The big challenge with this approach is the accuracy of the monitoring and the reliability of the rewarding (and sanctioning). Because Open Source contribution comes in different forms, tracking and valuing Open Source contribution is a very difficult and expensive process, not to mention full of conflict. Running this centralized government-like organization also needs to be paid for, and that can be its own challenge.

Concrete suggestions for scaling and sustaining Open Source

Suggestion 1: Don't just appeal to organizations' self-interest, but also to their fairness principles

If, like most economic theorists, you believe that organizations act in their own self-interest, we should appeal to that self-interest and better explain the benefits of contributing to Open Source.

Despite the fact that hundreds of articles have been written about the benefits of contributing to Open Source — highlighting speed of innovation, recruiting advantages, market credibility, and more — many organizations still miss these larger points.

It's important to keep sharing Open Source success stories. One thing that we have not done enough is appeal to organizations' fairness principles.

While a lot of economic theories correctly assume that most organizations are self-interested, I believe some organizations are also driven by fairness considerations.

Despite the term "Takers" having a negative connotation, it does not assume malice. For many organizations, it is not apparent if an Open Source project needs help with maintenance, or how one's actions, or lack thereof, might negatively affect an Open Source project.

As mentioned, Acquia is a heavy user of Varnish Cache. But as Acquia's Chief Technology Officer, I don't know if Varnish needs maintenance help, or how our lack of contribution negatively affects Makers in the Varnish community.

It can be difficult to understand the consequences of our own actions within Open Source. Open Source communities should help others understand where contribution is needed, what the impact of not contributing is, and why certain behaviors are not fair. Some organizations will resist unfair outcomes and behave more cooperatively if they understand the impact of their behaviors and the fairness of certain outcomes.

Make no mistake though: most organizations won't care about fairness principles; they will only contribute when they have to. For example, most people would not voluntarily redistribute 25-50% of their income to those who need it. However, most of us agree to redistribute money by paying taxes, but only so long as all others have to do so as well.

Suggestion 2: Encourage end users to offer selective benefits to Makers

We talked about Open Source projects giving selective benefits to Makers (e.g. Automattic, Mozilla, etc), but end users can give selective benefits as well. For example, end users can mandate Open Source contributions from their partners. We have some successful examples of this in the Drupal community:

If more end users of Open Source took this stance, it could have a very big impact on Open Source sustainability. For governments, in particular, this seems like a very logical thing to do. Why would a government not want to put every dollar of IT spending back in the public domain? For Drupal alone, the impact would be measured in tens of millions of dollars each year.

Suggestion 3: Experiment with new licenses

I believe we can create licenses that support the creation of Open Source projects with sustainable communities and sustainable businesses to support it.

For a directional example, look at what MariaDB did with their Business Source License (BSL). The BSL gives users complete access to the source code so users can modify, distribute and enhance it. Only when you use more than x of the software do you have to pay for a license. Furthermore, the BSL guarantees that the software becomes Open Source over time; after y years, the license automatically converts from BSL to General Public License (GPL), for example.

A second example is the Community Compact, a license proposed by Adam Jacob. It mixes together a modern understanding of social contracts, copyright licensing, software licensing, and distribution licensing to create a sustainable and harmonious Open Source project.

We can create licenses that better support the creation, growth and sustainability of Open Source projects and that are designed so that both users and the commercial ecosystem can co-exist and cooperate in harmony.

I'd love to see new licenses that encourage software free-riding (sharing and giving), but discourage customer free-riding (unfair competition). I'd also love to see these licenses support many Makers, with built-in equity and fairness principles for smaller Makers or those not able to give back.

If, like me, you believe there could be future licenses that are more "Open Source"-friendly, not less, it would be smart to implement a contributor license agreement for your Open Source project; it allows Open Source projects to relicense if/when better licenses arrive. At some point, current Open Source licenses will be at a disadvantage compared to future Open Source licenses.

Conclusions

As Open Source communities grow, volunteer-driven, self-organized communities become harder to scale. Large Open Source projects should find ways to balance Makers and Takers or the Open Source project risks not innovating enough under the weight of Takers.

Fortunately, we don't have to accept that future. However, this means that Open Source communities potentially have to get comfortable experimenting with how to monitor, reward and penalize members in their communities, particularly if they rely on a commercial ecosystem for a large portion of their contributions. Today, that goes against the values of most Open Source communities, but I believe we need to keep an open mind about how we can grow and scale Open Source.

Making it easier to scale Open Source projects in a sustainable and fair way is one of the most important things we can work on. If we succeed, Open Source can truly take over the world — it will pave the path for every technology company to become an Open Source business, and also solve some of the world's most important problems in an open, transparent and cooperative way.

September 19, 2019


With the proliferation of touchpoints that enterprises use to connect with customers and deliver valuable experiences, optimizing content for every channel has become a tedious and challenging task.

Further, the devices that consumers use to access brand content (desktops, mobile phones, laptops, tablets, and smartwatches, with yet more looming on the horizon) each have their own restrictions and specifications, which adds to the complexity content creators and marketers face in delivering personalized content.

A Gartner report also suggests that marketers and decision-makers should adopt a unified experience strategy to streamline their customer-facing content. This can be done by implementing the latest technology and channels to promote dynamic personalization and optimize content in a forward-looking manner. All of this can be executed by means of Content-as-a-Service.

This blog provides further insights on CaaS, its use cases and features, and how enterprises and marketers can leverage Drupal as CaaS to manage their content efficiently.

What is Content as a Service?

Content-as-a-Service (CaaS) focuses on managing structured content into a unified repository or feed that other applications and properties consume.

The idea behind it is to provide a future-ready CMS that makes content readily available by employing API with or without developing the presentation tier. The presentation layer can be a website, a mobile app, or a feed into a device’s interface. 


This separation between the content itself and its presentation means that a RESTful API, for instance, can deliver the same content to both your website and an iOS or Android app.

Put simply, it draws a clear line between the people creating the content, the people delivering the content, and of course, the people consuming it.
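A minimal sketch of that separation (the content model and function names here are hypothetical): one structured content item, stored presentation-free, can be served as raw JSON to an app or rendered as HTML for a website.

```python
import json

# A structured content item: pure data, no presentation baked in.
article = {
    "type": "article",
    "title": "Content as a Service",
    "body": "CaaS separates content from its presentation.",
    "tags": ["caas", "cms"],
}

def to_api_json(item):
    """What a CaaS-style REST endpoint would return: the raw structure."""
    return json.dumps(item)

def to_html(item):
    """One possible presentation layer: a website rendering."""
    return f"<article><h1>{item['title']}</h1><p>{item['body']}</p></article>"

# The same stored content feeds a mobile app (JSON) and a website (HTML).
payload = json.loads(to_api_json(article))
assert payload["title"] == "Content as a Service"
assert to_html(article).startswith("<article><h1>")
```

Each new channel (smartwatch, voice assistant, and so on) only needs its own renderer; the stored content never changes.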

A long box with different elements inside

Source: Bloomreach

Characteristics of Content-as-a-Service solutions include:

  • Content disseminated across all channels via a REST-based API

  • A method of developing content according to prescribed content models

  • Structured formats for returning content via simple queries

  • Distributed authoring and workflow content administration

  • A content repository hosted in the cloud for universal access

  • Triggers that alert content-consuming customer experience applications to content updates

  • Metadata definitions that travel along with the content via the API

How Does CaaS work?

As with any architectural pattern, the actual implementation of CaaS can vary, but here is a general overview of how a CaaS platform may work:

 

Multiple boxes connected in flowchart

 

The content management UI is a web application that centralizes all content authoring and content management on the platform. Content is placed in centralized storage; note that the format and technology used do not matter at this point. What matters is that the data is stored correctly.

Finally, the content is made available through a technology-agnostic API, such as a REST API. There are products on the market that let you author content while also providing the presentation layer for a wide array of applications (for instance, web apps and mobile apps).

You could, as an alternative, provide access to the public APIs of these platforms, allowing others to build their own presentation layers and saving you the trouble of working on that.


Why CaaS?

Creating dedicated content for every specific medium becomes cumbersome to the point of being unworkable

 

Have you ever wondered how enterprises and marketers can tweak content for each channel, and yet ensure that the content remains safe and sustainable for any future modification? Creating dedicated content for every specific medium becomes cumbersome to the point of being unworkable.

So, how is it possible? The answer to this simple question is CaaS!

It can be valuable for enterprises that want to upgrade an existing CMS into one that can serve as CaaS, or for those starting with no CMS at all.

However, the key deciding factor in the end will be your current context. Common scenarios are described below:

  1. Siloed Content

Enterprises deal with an enormous amount of content coming in from many sources, and managing each source independently can prove labor-intensive. A company can either spend a lot of its time simply managing the content, or spend too many resources on a dedicated team and a set of independent tools, with the added overhead of getting them to collaborate with each other.

In either case, they are most likely dealing with one or more of the following situations:

  • They don’t own a uniform content format, which can be made use of for easy distribution. 

  • They don’t own a centralized method to make content available for consumers, be they internal or external ones.

  • Metadata is not given due importance in empowering their content and making it rich for consumers.

  • And centralized storage, so, companies have to put extra efforts to move from one source of data to the next.

The adoption of CaaS could be beneficial to anyone looking desperately to switch their content management strategies. A switch to content-centric approach, i.e., Content-as-a-Service, would significantly improve their performance.

2.   Limited formats for your content

 

Content has to be an abstract entity, and choosing the way how it should be consumed, should be your top priority

 

Your problem might not be managing your content, but reaching your target consumers inefficiently because of the limited number of formats you support. Content-as-a-Service is again a good fit for such scenarios.

Many CMSs, such as WordPress, take responsibility for presentation so that you don't have to worry about it. However, that also restricts you to the devices with which that representation of your content is compatible. On many devices, your content could be rejected outright or simply be unpleasant to consume. For instance, have you considered how your online trading WordPress website will show stocks on a smartwatch? What about a VR headset? Or a holographic projection? Granted, that last one does not exist yet, but you must ensure your company is well-equipped and future-ready for new technologies, especially when they are evolving at breakneck speed and being released to the public every day.

Even foldable phones are now becoming available to the public; what will happen to your content then?

Companies limit their odds of success if they keep their content tied to its representation. Content has to be an abstract entity, and choosing how it should be consumed should be your top priority.

3.  Native mobile app needing content

Content-as-a-Service provides you with the flexibility to use your content however you want, now or in the future

Since displaying content on mobile phones and in apps demands extra attention, most traditional CMSs fail to provide the necessary tools and facilities. They only provide web-compatible formats (e.g., HTML), which are unfit for your app.

You can work around this by using a headless, decoupled CMS or Content-as-a-Service to simplify your work. 

In a nutshell, Content-as-a-Service provides you with the flexibility to use your content however you want, now or in the future.

What Drives the Adoption of CaaS?

There are two groups primarily that can leverage this type of content delivery the most: developers and business users/content creators.

  1. Developers

Developers need CaaS whether they are mobile app developers who need a backend to feed their apps with content, or front-end developers who expect to interact with an API.

Such technologies have been around for a long time and are widely accepted, further fueling the demand for CaaS.

2.  Business
  • Content creators want to increase the reach of their content to as many platforms and channels as possible: web, mobile, social networks, smart devices, and so on.

  • It is exorbitant to have a separate solution for every channel, both development-wise and maintenance-wise.

  • It is convenient to manage a single editorial team and a single software stack for all channels.

  • CaaS solutions can help developers be more productive and efficient with the tools they like to use.

CaaS Use Cases

It’s often perceived that there is no single CMS that is equally good for maintaining both a personal blog and a huge online shop. Contrary to the assumptions, CaaS outperforms its harbingers in some use cases-

CaaS focuses on pushing wherever and whenever required, designers need not worry anymore

 

  • Mobile apps

Pushing content to a mobile app via CaaS is the most effective way to provide dynamic in-app content without needing to resubmit the app to the app marketplace.

  • Multi-channel publishing

A CaaS CMS is also beneficial when content needs to be delivered across various platforms, for example when you want to push the same content to a website as well as to mobile apps.

  • Rich Web apps

Modern front-end frameworks, such as AngularJS, React, and Ember, synchronize well with structured content delivered via APIs.

A CMS can considerably reduce complexity and simplify workflows in an existing project, for instance by eliminating hard-coded content from HTML pages and maintaining it in the CMS instead. The API at the heart of CaaS makes it highly integration-friendly and robust.
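As a rough sketch of what this looks like in practice, a front end might fetch a structured content record and decide on presentation entirely on its own. The endpoint and field names below are hypothetical, not a real CaaS API:

```javascript
// Hypothetical CaaS endpoint and field names, for illustration only.
// The CMS returns structured data; presentation lives in the client.
function renderArticle(article) {
  // Only this function is web-specific; the same record could feed
  // a mobile app, a smart device, or a chatbot unchanged.
  return `<article><h1>${article.title}</h1><p>${article.summary}</p></article>`;
}

async function showArticle(id) {
  const res = await fetch(`https://cms.example.com/api/articles/${id}`);
  const article = await res.json(); // e.g. { title: "...", summary: "..." }
  document.querySelector('#app').innerHTML = renderArticle(article);
}
```

Because the API returns data rather than markup, swapping React for Ember, or the website for a mobile app, changes only the rendering layer.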

  • Tailored UX

The CMSes of the web era imposed strong design restrictions. Though you could fully tweak the UI, building a WordPress-powered web app from scratch was not very likely.

On the other hand, as CaaS focuses on pushing wherever and whenever required, designers need not worry anymore!

It becomes tedious to manage existing content alongside content arriving from multiple sources. It is therefore best to consolidate everything into one unified repository by creating content via APIs.

Content structured via an API is much easier for AI-powered agents and chatbots to move around and repurpose for relevance than screen scraping or natural language processing of unstructured content.

How Can Drupal Prove to Be an Effective CaaS?

Drupal has embraced the idea of Content-as-a-Service (CaaS) to solve the dilemmas posed by our newfangled digital ecosystem and its extremely high demand for new and different types of content.

[Image: A square with multiple circles and squares connected to each other]

 

The following features show how Drupal can be an effective CaaS:

  1. Reusable future-proof content

Drupal content can easily exist in the form of reusable chunks

Generally, CMSes manage content in a back-end repository and push it to the front-end templates for serving an experience.

However, Drupal can decouple the back and front end whenever required, so Drupal content can exist in the form of reusable chunks: free from presentation and ready to be delivered to any site or app. Thus, content becomes future-ready.

  2.  Set front-end developers free to create a better experience

With Drupal's presentation-neutral content and a RESTful API, front-end developers can freely carry out their creative vision and build interactive sites and apps with tools like Node, Angular, Backbone, Ember, and others.

  3.  Fill the content bucket more easily

Content nowadays should not be restricted to a single source; it should move in and out freely. Drupal helps with that by ingesting third-party content (e.g., from aggregators and syndicators) into your Drupal ecosystem, making it easy to push onward to any site, app, or channel.

  4.  Share content beyond your sites

Today, organizations want to share content across the many channels where their audiences are, including the content aggregators disrupting the news business. Content teams need an optimal way to create content and then share it with minimal effort. Drupal does exactly that: the other sites and apps you choose can easily consume Drupal content.

  5.  Alter the look

The separation of back-end content from front-end presentation gives developers the freedom to refine an experience without worrying about the content in the CMS.

Additionally, Drupal 8.0 comes with a built-in REST API, which marked the beginning of its API-first initiative.

REST allows apps and websites to read and update information on a website over the web. Developers rely on standard HTTP methods to operate on resources managed by Drupal.
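For example, with Drupal core's REST module enabled, reading and updating a node maps onto plain HTTP requests roughly like this. The base URL is a placeholder, and you should check your site's REST configuration for the exact endpoints and required headers:

```javascript
// Sketch of Drupal core REST conventions: /node/{id}?_format=json
// with standard HTTP verbs. The base URL is a placeholder.
const BASE = 'https://example.com';

function nodeRequest(op, id, body) {
  const methods = { read: 'GET', update: 'PATCH', remove: 'DELETE' };
  return {
    method: methods[op],
    url: `${BASE}/node/${id}?_format=json`,
    // Write operations also typically need a Content-Type:
    // application/json header and an X-CSRF-Token (obtainable
    // from /session/token).
    body: body === undefined ? undefined : JSON.stringify(body),
  };
}

// nodeRequest('read', 42) describes a GET of node 42 as JSON;
// nodeRequest('update', 42, { title: [{ value: 'New title' }] })
// describes a PATCH that changes its title.
```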

Furthermore, in upcoming releases the Drupal community has been working on shipping individual Drupal modules with web service APIs instead of depending on a central API module.

Contenta, one of Drupal's distributions, is a ready-to-use HTTP API with fully auto-generated documentation. It offers modern API capabilities via JSON:API and feeds content to JS-driven websites, mobile applications, TVs, and even fridge applications.

Contenta supports Create Once, Publish Everywhere, whether for single-application development or multi-channel publishing.

Then there is another distribution, Reservoir, which helps implement a decoupled Drupal architecture and is flexible and simple to use for building content repositories for any application. It also helps with modeling content, governing content, and interacting with that content through HTTP APIs.

In a nutshell, Drupal's API-first approach offers the following benefits, which further bolster the CaaS model:

  • The decoupled approach separates the presentation layer from the service layer thus allowing a detailed and dedicated focus on each of them.

  • A foolproof approach that helps organizations connect to any number of digital signage displays to enhance customer experience

  • Increased interaction with customers on their preferred devices will eventually scale up your marketing efforts

  • The decoupled approach is flexible and open to changes, additions, and modifications of the structure.

  • Deploying a front-end framework like Angular or React leads to a sophisticated, enriched, and dynamic web experience

 

Learn more about Drupal's API-first initiative here:

[embedded content]

Features to Look Out For in CaaS

A CaaS offering comprises three vital parts: the editing interface (typically a web app), the CMS infrastructure capabilities, and the development ecosystem.

Web editor

  • Enables content architects to design the structure of the content

  • Enables content editors to manage content: creating it, updating it, and collaborating on it.

Technical infrastructure

  • Offers performance, uptime, and scalability to ensure that enterprises can rely on their vendor to deliver content in mission-critical applications.

  • SLAs with short incident response times and access to dedicated staff, so that in case of a problem with a mission-critical app, companies can be brought back up quickly.

  • Mobile delivery capabilities so that a great user experience can be delivered even in network-challenged environments (like subways and rural areas) and areas with high bandwidth costs (such as emerging markets).

  • API-based importing, management, and delivery for controlling content programmatically in both directions

  • All-inclusive and up-to-date documentation to help the development team start using the tools quickly.

  • A CDN (content delivery network) to deliver content rapidly

Development ecosystem

  • SDKs and libraries to speed up development, no matter what the tech stack is

  • Demo app source code so that developers don't have to reinvent the wheel.

  • Third-party integrations to obtain value from existing tools.

Other Characteristics of CaaS

The decoupled approach ensures that code and content are placed separately so that marketers and developers can do their respective work

  • Decoupled approach

The decoupled approach ensures that code and content are kept separate so that marketers and developers can each do their own work. Teams can also work in parallel on creative copy, enticing visuals, and expert integrations in one unified platform.

This is the quintessence of the headless CMS approach: agnosticism toward how content is presented. It frees developers to create highly custom front-ends and apps, since they get to define how content is displayed.

[Image: A box with various elements listed inside and interconnected]

Source: Gartner

  • Cloud setup

The complete separation of content management from content display allows organizations to migrate infrastructure between cloud and hybrid setups, even at the site or project level. Some projects can be installed locally and some in the cloud, depending on how the business chooses to optimize for its needs.

Centralized Content-as-a-Service lets businesses evaluate content consumption across the digital ecosystem. This stops businesses from duplicating their efforts and content when posting to microsites, international sites, or apps. It can also measure the use of that content by looking at the API connections used to deliver it and keeping track of where the content is going.

In the End

The digital revolution and breakthroughs in technology have accelerated the efforts of content creators, whether in creation, design, or dissemination. The goal is clear: a refined user experience.

With that said, the creation of content in abundance and its delivery as a service through thousands of APIs will generate more data, thereby helping content developers create more precise business models.

The technology is already in place, and the architectural patterns will allow enterprise systems to scale up without hampering their performance.

Content-as-a-Service ensures that developers are given maximum freedom and flexibility to realize their digital innovations. Drupal as a CaaS has been delivering a wonderful experience to content editors and developers alike.

It is definitely a convenient way to ensure that your strategy is future-proof and can handle any new media in the future.

Sep 18 2019
Sep 18

BADCamp is stoked to be offering two full days of trainings that can help advance your skills and career, led by some of the best teachers in the community.

Full-day trainings cost $50, and half-day trainings cost $25 to cover operating expenses. This year we are excited to be offering two free community workshops. 

Available Trainings:

 If you can't afford this or it is super complicated to get funding, please reach out to the BADCamp organizers via the contact form and we will help!

There are no training refunds after September 23. If you are not able to attend your training, do let us know so we can open up your spot to folks on the waiting list.

Community Workshops:

Do you think BADCamp is rad?

Willing to pay for your ticket?  If so, then you can give back to the camp by purchasing an individual sponsorship at the level most comfortable for you. As our thanks, we will be handing out some awesome BADCamp swag.

We need your help!

BADCamp is 100% volunteer driven and we need your hands! We need stout hearts to volunteer and help set up, tear down, give directions and so much more!  If you are local and can help us, sign up on our Volunteer Form.

Sponsors

A BIG thanks to our sponsors who have committed early. Without them, this magical event wouldn’t be possible. Interested in sponsoring BADCamp? Contact us via the contact form.

Sep 18 2019
Sep 18
[embedded content]

What is real-time collaborative editing, and what are some of the most compelling technologies available in the space? In the inaugural TAG Team Talk, hosted by Preston So (Contributing Editor, Tag1 Consulting), we conduct a wide-ranging discussion about both the business prerogatives and technical ins-and-outs of real-time collaborative editing and its landscape today, with our guests Kevin Jahns (creator of Yjs and collaborative editing expert at Tag1 Consulting), Fabian Franz (Senior Technical Architect and Performance Lead, Tag1 Consulting), and Michael Meyers (Managing Director, Tag1 Consulting). In this talk, we explore collaborative editing, diving into how it works and some of the challenges borne by shared editing. Through the lens of Yjs, a real-time collaboration framework that supports not just text but also collaborating on drawings and 3-D models, we take a look at Operational Transformation (OT) and how implementing Conflict-free Replicated Data Types (CRDT) drives decentralized server approaches in collaborative editing and supports more robust distributed applications with true real-time support.

Yjs: https://github.com/yjs/yjs

ProseMirror: https://prosemirror.net

Great Overview of CRDT
https://conclave-team.github.io/conclave-site/#conflict-free-replicated-data-type-crdt

Deep dive into CRDTs by the author of Automerge: https://www.youtube.com/watch?v=yCcWpzY8dIA

Yjs was inspired by:
Sharedb https://github.com/share/sharedb
DerbyJS https://derbyjs.com/

Text Transcript

- Hello, good morning or good evening, wherever you are. And welcome to the first ever Tag1 Team Talk. This is Preston So speaking to you loud and clear from New York City. I'm the contributing editor to Tag1 Consulting. And it's my pleasure today to jump into a deep dive into realtime collaboration tools. I'm joined by three guests today. First and foremost, I want to introduce my partner in crime Michael Meyers, the managing director at Tag1 Consulting. I'm also joined by two folks from around the globe. We've got Fabian Franz, senior technical architect and performance lead, as well as Kevin Jahns, creator of Yjs, a key contributor to realtime collaboration projects, and a maintainer of open source projects. So, I wanted to start off by introducing ourselves one by one, and then we'll jump into what Tag1 Consulting's all about and get into some of these meaty topics we've got for today. My name is Preston. I'm the principal product manager at GatsbyJS as well as contributing editor to Tag1 Consulting. Mike, you wanna go next?

- Sure. Mike Meyers, managing director of Tag1 Consulting. I have 20 years of experience working in technology, managing and running teams. I've been working with Tag1 first as a client for over a decade, and joined the team about two years ago.

- Hi, so, I'm Kevin Jahns. I live in Berlin, Germany. I'm the creator of Yjs open source framework for realtime collaborative editing. Because of that, I got many interesting opportunities to work for really cool, interesting companies on realtime editing. They mainly use my framework and need feedback or help to deliver a great product. I'm currently really happy to work for Tag1 and one of the Fortune 150 companies to make that happen.

- Thanks Kevin, pleasure to have you with us. And Fabian?

- Hey, my name is Fabian Franz. And I'm at Tag1, as you already said. I'm a senior technical architect. I really enjoy architecting large websites and making them really fast. And I'm so excited about realtime collaboration because I think realtime kind of is the future of the web. But we also did a DrupalCon presentation slightly related to that topic, in Amsterdam. And besides that, I'm a Drupal core 7 maintainer right now, and working with Tag1 is just fun.

- And we've known each other for a long time, Fabian. Well, perfect. I think first thing we want to do, though, is set the stage a little bit. What exactly is Tag1, and why are we all here in this room? From what I understand, Mike, you all are the emergency support and rescue, and performance and security experts. Can you tell us a little bit about Tag1 Consulting and what you're all about?

- Definitely. We're the company you turn to when you have difficult and challenging problems to solve. We build mission critical, highly scalable, highly available and secure applications for companies like Symantec. We've worked for them for over a decade. We manage and built their infrastructure work and oversee their agency partners, setting architecture design and doing manual code reviews for security and quality. What we're gonna talk about today is a project that we're working on for a top 50 Fortune company, Fortune 50 company. They are rebuilding their intranet, and realtime collaboration is a key component of it. And frankly, I can't believe that realtime collaboration isn't a default feature in every online application, certainly every CMS. When I think about my daily workflow, I'm constantly working in something like Google Docs, collaborating with the team in realtime. And I know a lot of people, we're talking to another client about a project and they do all of their work in Google Docs and then just pump it into their Django CMS. And so, we really believe that this is the future of a lot of applications, and we're excited to be working on this particular project, so we wanted to talk to everyone about it because it's got some awesome tech behind it.

- Absolutely, I agree. I think that realtime collaboration is one of those intractable problems. It's not quite on the level of some of the key problems in computer science, but it definitely is a very, very, a large problem set that actually hasn't been solved completely across the board. Google Docs obviously has solved a lot of those issues. But I want to dig a little bit more into why it is that we're all on this call, why it is that we're so interested in realtime collaboration. Obviously it's something that is important to all of us. But can you tell us more about this Fortune 50 company and what some of their requirements are, and what exactly realtime collaboration means to them?

- Sure. It's a gargantuan project, multiyear project. It's used by 20,000 active users at any one time, across 200 countries, in well over a dozen different languages. It interfaces with a large number of third-party tools and systems, from centralized authentication access control through to Slack integration, as well as integration with other third-party tools and a number of internal proprietary tools. The idea is that we live in a world where you want to integrate the best technology for the job as opposed to reinvent the wheel. And so, what this... Their new generation of their intranet is to really pull in and integrate all of these different systems into one place. Because the downside of integrating all these different applications is that it can be hard to collaborate and find things if you have files in Box, if you have documents in Quip, if you have communications in Slack. All these disparate systems have different organizational structures, typically are run by different groups and departments. The idea is that the intranet will be the central hub that pulls together all these tools and information, helps you find what you need, but also provides that in a universal search and collaborative layer on top of it as sort of like the metasystem where you can talk about everything from, and track those tasks and schedules, to what your roadmaps and sprint reports are for your particular initiative.

- There's actually, and I recognized a Dilbert comic about it, where he's like, I need this document on my desk tomorrow, the boss says. And he's like, would you like it via Dropbox, via Box, via mail, via Slack, via whatever, via Skype?

- That's right. Sometimes I work with a friend and we don't have an organization to work under. And I sometimes wonder how I can send a large file, like just 100 MB, to my friend. It's really hard right now.

- Yeah, it is really difficult. And I think this really calls out one of the things that you actually alluded to earlier, Mike, which was the fact that we have a lot of folks who are using CMS's as part of their daily workflows, but they need a realtime collaboration tool to bring it all together. I think the fact that we've heard from a lot of folks that they use Google Docs before copy and pasting over to a CMS or over to their desired tool shows that this is really the nexus of where all of these interactions occur. And so, I guess I'm curious just to dive into some of the business requirements just briefly before we get into the technology. I know that's where we definitely want to spend a lot of time. But why is it that something like Google Docs doesn't fit the bill for this use case?

- I can't talk too much about the company itself. But for business reasons, they are unable to use Google Docs. They're a competitor company and they can't have highly sensitive, mission critical information on third-party servers. This entire application is air gapped, highly secure. They have very different levels of projects, from public to ultra secure. There's even separation of data and communications. So, things that are for the highly secure projects are stored on different servers and the realtime communication goes through its own system. Everything from physical separation of data and information to air gap. And while they are able to use... Or sorry, not able to use Google Docs, the other reason is that they want this highly integrated into the system. A third-party collaboration tool, whether it be Slack, or Quip, or Google Docs, or something that facilitates that collaboration, has the same challenging problem that we talked about earlier. And so, by integrating this directly into their intranet it's a core feature. And I should also mention that they want to go well beyond text editing. We're working on collaborative drawing, for example, so that you could have whiteboards. In the future you could have your agile Kanban boards show up on this system even though they're coming from a third-party tool via APIs. You can see that information. Our goal isn't to replicate all of these systems in the intranet, but to present you with the most mission critical information that you need at a glance, and to then link you into these systems. In the case of collaborative editing, it's such a core feature to enable teams to work together that it's built in.

- Also, one of the challenges with using something like Google Docs always is access. Being a large company, obviously they have a central directory service where everything is managed and accessed, and you now need to enable this third-party system this access. That's pretty challenging because they're mostly scaled for a few users, in that you invite your team, and then you work with them, collaborate with them on that. But then you need to put it back into the CMS and there's this kind of gap between your workflow so you're always like, now you've published a version, but now some more changes need to be done. Who will synchronize that, who is dealing with that, et cetera. It's basically two systems you need to work with. Even for our workflows where Tag1 had a Drupal watchdog before, it was challenging, and kind of what is the source truth once you get to a certain point in that.

- Absolutely. I think the notion of something like realtime collaboration being built in, being an intrinsic part of an end-to-end system, is a kind of key feature. But I think what you just alluded to in terms of permissioning, and having to figure out, having to reconcile user roles and permissions across two very different systems can be very challenging. And having the ability to do that in a way that is really integrated with the overarching system I think makes a lot of sense. Let's go a little bit away from the business requirements and let's dig into a little bit of the main concepts behind realtime collaboration. I know that all of us on this webinar and podcast all know some of the basic components. But just so we can bring it down to the level of what some of the core fundamentals are, how does shared editing exactly work? How does this idea of realtime collaboration actually begin from the standpoint of clients, communication, concurrency? Where do we start with? What are the main elements that are involved?

- I do think we start with a very basic, just what is shared editing in general. And many companies also use already something like Etherpad where you can just have some text and collaborate it. I know we also used it for the Drupal community when we had challenging things to solve just to facilitate note keeping a little bit in that. The most basic thing about shared editing is really just text. It's not formatted text like Google Docs or something. It's really just text. And this is very simple. So, you have an example of like, for example, I have the text written, "This is some text." And now Michael wants to make it, "This is some nice text." But I want to write a little intro, so I wrote, "Hello Preston, this is some text." And now you need to know a little bit about how editors work, and an editor needs to know how this works. My, "Hello," would be inserted at position zero, so at the start. But Michael's, "Nice," would be inserted at position 12. But now, if I write before him and the editor still thinks he should put it in at position 12, then it's wrong, because we kinda... The positions changed because I've added some text, so editing has shifted a little bit. And the idea of CRDTs here is that instead of having this position-based system, just on a very, very basic level, they are very more complicated but just as a layman's explanation, is that for every character, you are giving it an identifier. And this identifier is also a little bit how a human would reduce this conflict, resolve this conflict. Because instead of saying, hey, insert this, "Nice," at position 12, you're just saying insert it after, "Some." And that's basically the idea, kind of. And because of that, however the text changes, Michael's, "Nice," will always be inserted after the, "Some." That's kind of how you can think about how you get shared editing to work on a very, very basic level.
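Fabian's "insert after an identifier" idea can be sketched in a few lines of JavaScript. This is a toy model for illustration, not how Yjs actually works; real CRDTs use richer IDs (typically a client ID plus a logical clock) and far more efficient data structures:

```javascript
// Toy sequence CRDT: every element has a unique ID, and an insert
// references the ID it should follow instead of a numeric position.
function apply(doc, op) {
  // op = { id, after, text }; after === null means "at the start".
  const copy = doc.slice();
  let i = op.after === null ? 0 : copy.findIndex((el) => el.id === op.after) + 1;
  // Deterministic tie-break for concurrent inserts after the same
  // element: order them by comparing IDs.
  while (i < copy.length && copy[i].after === op.after && copy[i].id > op.id) i++;
  copy.splice(i, 0, { id: op.id, after: op.after, text: op.text });
  return copy;
}

const text = (doc) => doc.map((el) => el.text).join('');

// "This is some text.", one element per word:
const base = ['This ', 'is ', 'some ', 'text.'].map((w, i) => ({
  id: 'a' + i,
  after: i === 0 ? null : 'a' + (i - 1),
  text: w,
}));

// Two concurrent edits, made without seeing each other:
const fabian  = { id: 'b0', after: null, text: 'Hello Preston, ' };
const michael = { id: 'c0', after: 'a2', text: 'nice ' }; // after "some "

// The arrival order does not matter; both peers converge:
const doc1 = apply(apply(base, fabian), michael);
const doc2 = apply(apply(base, michael), fabian);
// text(doc1) === text(doc2) === "Hello Preston, This is some nice text."
```

Because "nice" is anchored to the element with ID `a2` ("some ") rather than to position 12, Fabian's insertion at the start cannot displace it.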

- Let's talk a little bit about the communication. I think that that was a very, very great explanation of how concurrency would function in this kind of environment. I'm curious, though, there's a notion of a difference in communication between how some of the traditional collaborative tools have worked in the past and how, let's say Yjs or some of the more modern tools do it today. I know that, for example, in the past the way that people would do this kind of collaboration would be through a server and a client. A centralized server that houses all of the information about what's changing and manages all of that, as you said. But one of the key differences that I noticed about Yjs is that it's focusing on a more peer-to-peer, a more decentralized approach. Do you want to talk more about that, Fabian or Kevin? What does that mean to somebody who is evaluating some of these solutions?

- The classical approach is like, Google Docs in general, operational transformation, it mostly works in the client-server environment. So you have a central point where you do all the transformations between the document updates. For example, a user inserts something at position zero and another user inserts another character at position zero. The server decides how the document looks at the end and does the transformation before sending it to the other peers. So there's like a limited concurrency because there's only concurrency between the server and the client. And, to be fair, there are operational transformation approaches that do work peer-to-peer, but they have a lot of overhead, a lot of metadata. In CRDTs in general, you don't need a central point of transformation. Like, all CRDTs, they allow for commutative operations. So it doesn't matter in which order your changes come in. The document will look the same for every peer as long as all the peers exchange all the document updates. And that's a really cool feature. So, you don't rely on a central server anymore, and you can actually scale your server environment. You can do really cool things with that. Most people think, oh, peer-to-peer, you have peers communicating directly with each other, and I am a huge fan of that. But especially for businesses, it makes sense to still have a server environment. And here, the advantage of Yjs or CRDTs in general is that you can scale your servers indefinitely. Google Docs, for example, is limited to a certain number of users; I think the limit of users who can concurrently edit a document is about 15 people. And after that you will just see a copy of some snapshot, and maybe it gets updated. I haven't looked into it for quite some time, but I know there's a limit. Which is kind of strange.
The problem here is that you have centrality and you need to make sure that you never lose document updates because the centrality is also your central point of failure. So if this point, like if this server burns down, your hard disk, your document is gone. You can never merge again, you can never restore that document. But in peer-to-peer or decentralized systems, well, it doesn't matter if one system burns down. You still have another one.
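The commutativity Kevin describes is easiest to see in the simplest state-based CRDT, a grow-only set. This is a toy illustration of the property, not Yjs's actual data structure:

```javascript
// Grow-only set: merging is set union, which is commutative,
// associative, and idempotent. Peers can exchange states in any
// order, any number of times, and still converge, so no central
// point of transformation is needed.
const merge = (a, b) => new Set([...a, ...b]);

const peerA = new Set(['paragraph-1', 'figure-2']);
const peerB = new Set(['figure-2', 'table-3']);

const ab = merge(peerA, peerB); // A's state merged with B's
const ba = merge(peerB, peerA); // the same exchange, the other way round
// Both sets contain exactly: paragraph-1, figure-2, table-3.
```

Supporting deletions and ordered text requires much more machinery, but every CRDT ultimately leans on this same property: the merge result is independent of delivery order.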

- I think that single point of failure is a really key and worthy of emphasizing aspect here. Which is that one of the things that is a very key concern for enterprises today, and just companies in general, is the notion of having the ability to keep things decentralized. If you have that central point of failure, everything falls over, you don't have access to anything. And the notion, I think, of peer-to-peer, decentralized, shared editing, allows for local copies of those documents to be on every single node, let's say, of every single person who's working on the document.

- Yeah.

- Yeah, go ahead, Fabian.

- Even if it's very different from how, for example, something like Git works, because it has very different properties, you can use the same examples as advantages. For example, one of the nice examples that is always quoted with Git is that some developers can work on a plane together. They can collaborate on some code on the plane together because everyone has a whole repository of all changes. And that's kind of the same with Yjs, to some extent. You can optimize it to not keep the whole history, but you can keep the whole history, so two people that happen to be on the same flight could realtime collaborate on that document and later send it back to the server. But they would not need a central server component. They could just do it. They could do the work on the plane. And I think that's a really cool feature. And not just two, it could be a whole company, like 50 people finishing a last minute document collaboratively. Internet on planes has gotten better, yes, but it would be really bad if your Google Docs suddenly stopped working while you're on that plane. With Yjs, you just have the guarantee that your personal copy works and you can finish your last minute document.

- It was pretty amazing, we actually saw this in action, unintentionally. We were doing a demo to key stakeholders. We had just integrated the realtime collaborative system which we kind of built independently at first, with Drupal, the core of the internet. And we were doing a demo of the system in action completely in development environments. We're pre-alpha. And during this demo, the server failed. Yet the collaborative demo just kept going and working perfectly because it was peer-to-peer in the backend. And the infrastructure engineer on the call was able to get the server back online, and it was just absolutely seamless. So it was really cool. All of us were like, my god, we didn't intend to demo that. That was amazing.

- And I think that, I think Fabian, you mentioned something that was really key there, which is that idea of a spotty connection. Obviously people working on flights. This case that you just described, Mike, as well is a very good example of this. This kind of approach that focuses on the notion that everyone has a local copy, just as you said, with Git-based approaches as well, this is the kind of thing that even if you're in, let's say the middle of nowhere and have a really horrible 3G connection you can still get the editing capabilities, and get that sync going, and make sure that things stay up to date. I think it's a very, very key value proposition. I want to jump back into Yjs itself because I think that a lot of people are really interested in operational transformation. A lot of people are interested in what exactly CRDT is, and all of those things. But first let's talk about a very key issue in collaborative editing that I think really deserves a little bit of discussion. And that's the notion of an edit history. One of the biggest challenges that all content, all people working in content governance and content strategy have is the ability to distinguish between different undo histories. How does undo really work in the context of shared editing versus single editors? And how exactly does that work from a technical standpoint? Is there a global state, what's driving that state? How do you reconcile some of these things? We can talk about it from the perspective of Yjs or other tools as well.

- Man, undo/redo, it's a really, really hard problem, especially in CRDTs. In operational transformation, this problem is not that hard. You can just undo/redo your own operations. You can just do the transformations in the document history. But you don't really have that history in Yjs, or in CRDTs, because it just works differently. It's not really operations that you put in a linear log. It's more like the operations are in some kind of tree, and you can't really decide, okay, where do I need to start undoing. So the thing in Yjs is... Yeah, so, the way I implement it actually is, in this tree I keep a path of the undo/redo operations. So I basically say, this is the kind of state that you need to remove when you hit undo, and this is the kind of state that you need to add when you hit redo. And this is basically how I manage it. It's kind of complicated without deep diving into how Yjs works. But yeah, it's mostly a solved problem at this point. So, the undo/redo thing, it works on linear text really, really well. And there's great progress on structured documents, like in ProseMirror, which is one of the editors that Yjs supports. There you basically have this structured document, a paragraph which has a table, and the table has some lists maybe, and there you type and hit undo. You really need a good sense of what to undo and what the user expects when you hit undo/redo. Yeah, there's great progress. And the undo/redo implementation is configurable. I'm not saying that it's done yet. There are still users reporting, "Oh, that's weird," because it's pretty new, undo/redo on structured documents.

- Just to show how difficult undo/redo is, you can do some great experiments with other out-of-the-box solutions like Google Docs. You can have lots of fun. Also with conflicting things: Google Docs has some offline support, and I've taken the liberty to just test some scenarios. Like, you have a paragraph, a long line, and you do an enter break in the middle, and how does the system react? And it is really cool to just try this out on this, like, perfect product, built on the knowledge of giants, with Google Wave and all the technological and theoretical things behind it, and then you try some undo/redo scenarios, and you get some really weird results even in the top notch product on the market right now. It's really interesting.

- Yeah, I think the notion of, like, what happens when you get a merge conflict or a sync conflict in Google Docs. It's always a fun experiment. It's rare that I get those messages these days, but it's always an interesting experiment. I do want to go back to something you just mentioned though, Kevin. And I do want to dwell a little bit on some of those really nitty gritty features of Yjs and really do the deep dive. But I'm curious, you just mentioned that the undo/redo behavior is actually configurable in Yjs. Can you talk a little bit about how that is, or what sorts of configuration can be expected?

- Oh yeah. The thing is, in Yjs you have... Yjs basically, well, my idea of Yjs is that you can see it as a shared type system. So you can create shared data types, something like an array, or a map with a key-value store. These data types work concurrently. So one user can manipulate them and the other one receives the updates, and then they sync. This is how I see Yjs. And then there are bindings to editors like ProseMirror, and CodeMirror, and many others. And often you have this big Yjs structure, and it may be a structure of several documents. So, when many users, or when you are working on several documents at the same time, and then you hit undo, what do you expect to happen? Does it also undo something from other documents? I don't think so. In my mind, I work on one document, I only undo the changes of that document. So this is what I mean by being able to configure it. You can specify what can be undone by your undo manager. There can be several undo managers, several document histories, for your local user. And you can specify what can happen and what cannot happen. And there are two dimensions to this in Yjs. Mainly, you have a scope to a specific data type. For example, this undo manager can only undo stuff on this document. And then you also have the scope of... So, for example, what happens when I created a paragraph, you created this big chunk of text in that paragraph but I didn't know about that, and then I hit undo? Can I undo the creation of that paragraph or not? Most users, as I found out, would say no, you should not delete content if you didn't create it. So in this case, you are not able to undo the changes of other users, because sometimes these changes are intertwined. And the challenge here is really to figure out what it is that users expect when you hit undo.
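Kevin's idea of scoping what an undo manager tracks can be sketched in plain JavaScript. This is a toy model with hypothetical names, not the real Yjs API: a manager records only the edits made through it, so undo skips remote edits and other documents entirely.

```javascript
// Toy sketch (hypothetical names, not the real Yjs API): an undo manager
// scoped to one shared map, tracking only the edits made through it.
class ScopedUndoManager {
  constructor(doc) {
    this.doc = doc;        // the shared map this manager is scoped to
    this.undoStack = [];   // inverse operations for our own edits only
    this.redoStack = [];
  }
  // Record an edit we made ourselves so it can be undone later.
  set(key, value) {
    this.undoStack.push({ key, prev: this.doc.get(key) });
    this.redoStack.length = 0;
    this.doc.set(key, value);
  }
  undo() {
    const op = this.undoStack.pop();
    if (!op) return;
    this.redoStack.push({ key: op.key, prev: this.doc.get(op.key) });
    if (op.prev === undefined) this.doc.delete(op.key);
    else this.doc.set(op.key, op.prev);
  }
  redo() {
    const op = this.redoStack.pop();
    if (!op) return;
    this.undoStack.push({ key: op.key, prev: this.doc.get(op.key) });
    this.doc.set(op.key, op.prev);
  }
}

// Two documents in one app: undoing in docA never touches docB,
// and remote edits applied directly to the map are never undone.
const docA = new Map();
const docB = new Map();
const undoA = new ScopedUndoManager(docA);

undoA.set('title', 'Draft');      // local edit, tracked
docA.set('remote', 'not mine');   // remote edit, untracked
docB.set('other', 'untouched');   // different document, out of scope

undoA.undo();                     // removes only our own 'title' edit
```

The two dimensions Kevin mentions map onto this sketch: the manager is scoped to one data type (docA), and it only ever reverts state it recorded itself.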

- That's very interesting. I think that that really digs at, I think, the central issue of undo/redo, which is that people have different expectations of it. And having that as a configurable element I think is a very, very compelling idea. Well, okay, so let's dig out of undo history. Oh, go ahead. Sorry, Fabian.

- And that's also one of the key points why we think that CMSs themselves should have the collaborative technology. Drupal should have it out-of-the-box as a contributed module, and your CMS should have it out-of-the-box. It should be really as simple as installing a JavaScript library plus a module, and maybe a little server component if you don't want peer-to-peer. But the central point here is that every client, every user, has their own expectations, they have their own challenges, they have their own things. And also, across the whole editor space, to touch a little bit on what we want to go into in more detail maybe in future episodes, they are no longer shipping out-of-the-box editors. They are shipping editor toolkits, so every CMS can build their own editor to their needs. Every client, every project, can build their own editor. And I think that's a very, very key point for our current ecosystem of CMSs. And someone else might need completely different undo managers than I need, or than this project needs. And that's also the beauty of open source. You can just take it, refine it, extend it, and do it. That's so cool with Yjs being open source, that you just have this ability to do all these cool things.

- I'm not sure if we have the liberty to do this, but can we dig into a few of these potential scenarios where having this kind of differentiated configuration would be useful? Fabian, I think you mentioned a very interesting idea, which is that different users want to have different approaches to how they edit. How have you both seen this shake out in terms of the things you've tried, experiments you've tried? Or Mike, has this come up in terms of use cases? What sorts of things have you all seen in the real world that relate to this idea of having completely independent experiences?

- So basically, for a top Fortune 50 client, customizability is key. It's very important that they have, for example in this case, their own editor which is working exactly to their needs. And the collaborative functionality is everything you're expecting from Google Docs, like comments and, later, even track changes (we're still working on that, it's pretty challenging). That's all very key in that. And it's important that, for example, you use a technology like React upon which you can build your editor... Think of Confluence, which is built upon Atlaskit. And Atlaskit is, by the way, also open source and one of the components we're using here as a base. Then you can extend it, and you can build your own and you can customize it. That was really a key, central point: being able to not be stuck with some proprietary solution, or be stuck with some out-of-the-box solution where you cannot define anything, where you just get sent a black box and then you have to work with it. Because then if you need to change the undo thing, well, yeah, you can pay the vendor for it, obviously, but it's really out of your control. And here you can work directly with the code, and even later you can maintain it with a different team or whatever, because it's accessible code. Another advantage of open source, basically. But yeah, the extendability, and being able to, for example, also define the server component. In this case, a server component is needed, even though peer-to-peer would work, and works well within the same browser or even cross-browser with some tricks. Because you want, for example, authentication. If you store confidential data in a shared editing thing, you don't want just anyone to be able to access the server and see it. Which is usually a Node server, by the way. But really, you want to have authentication tokens like you're used to from different APIs, where you can secure your server.
Then you can say, yeah, this operation is allowed, or this operation is not allowed. And with Yjs messages, even if they're not exactly messages, it's still some transformation sent over the wire. So you could deep inspect it, and say, hey, this is not allowed. This document was not touched for 300 days or whatever, and now someone is deleting all the content? Maybe deny that. Or keep a snapshot before that. So here, a central server is nice and needed, because you want to have that control, and it gives us this flexibility. And you don't get that flexibility if you work just with a black box.
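The server-side gatekeeping Fabian describes can be sketched roughly like this. Everything here is hypothetical (the relay, the token check, the suspicious-update policy), not an actual Yjs server component; it just illustrates authenticating peers and vetoing an update before it is broadcast.

```javascript
// Conceptual sketch (hypothetical names, not a real Yjs server API):
// a relay that authenticates clients and can veto suspicious updates
// before broadcasting them to the other peers.
function createRelay({ validTokens, isSuspicious }) {
  const peers = new Set();
  return {
    connect(token, onUpdate) {
      if (!validTokens.has(token)) throw new Error('unauthorized');
      const peer = { onUpdate };
      peers.add(peer);
      return {
        // A peer submits an update; the relay inspects it first.
        send(update) {
          if (isSuspicious(update)) return false; // deny (or snapshot first)
          for (const p of peers) if (p !== peer) p.onUpdate(update);
          return true;
        },
      };
    },
  };
}

// Example policy: deny an update that deletes most of a document.
const relay = createRelay({
  validTokens: new Set(['secret-token']),
  isSuspicious: (u) => u.type === 'delete' && u.fraction > 0.9,
});

const received = [];
const a = relay.connect('secret-token', () => {});
const b = relay.connect('secret-token', (u) => received.push(u));

a.send({ type: 'insert', text: 'hello' });          // relayed to b
const ok = a.send({ type: 'delete', fraction: 1 }); // denied by the policy
```

A real deployment would inspect actual Yjs update payloads and keep snapshots, but the shape (authenticate, inspect, then relay) is the same.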

- Control is something that's a key theme of this project, this organization. While we don't want to reinvent the wheel and rebuild certain awesome third-party technologies, this client of ours has repeatedly been let down by vendors that have made commitments to add certain features or do certain things over time. And one of the reasons that they're rebuilding this system and investing in these kinds of technologies is that the business cost of not being in control of sort of core technologies is huge. And so, relying on open source projects, being able to own and manage this code internally for their needs, is critically important. This is gonna be a system that's gonna be around for 5+ years. The current system has been in place for a little over seven. And so, a key recurring theme is we need to be able to have long-term maintenance and control over key aspects of our technology.

- Yeah, and I think... I'm sorry, go ahead, Kevin.

- All right. I just wanted to add, one of the things that a lot of people appreciate when I talk to people who use Yjs is that you can easily use Yjs on top of your existing communication protocol. So there are many existing projects that want to integrate collaboration into their system, and they already have some kind of WebSocket connection or long polling setup, and they want to use that system. And for good reason. They don't want to change their whole infrastructure, add another communication layer like some proprietary WebSocket communication protocol, and build on that. They want to integrate Yjs into their existing protocol. That's something that you can easily do with Yjs. And this is also something that Fabian talked about just now. I completely agree with that. I didn't think about it when I implemented it like that, but it came up that it is really well appreciated. And because of that, I put everything into separate modules. Yjs is split up into about 20 modules right now. When you build a system, you basically say, okay, how do I communicate? I use this existing module, or I can write my own module on top of the Yjs code, or just on top of the documentation that you have. And if you want to support your own editor, it's so easy, because the editor support is not baked into Yjs. It's not just built for one thing, it's built for everything. And that makes it incredibly difficult to do, but I think it's a really nice concept.
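The pluggable-transport idea might look like this in miniature. This is toy code, not Yjs internals: the document only emits and applies opaque update payloads, and how those payloads travel (WebSocket, long polling, anything you already have) is left entirely to the application.

```javascript
// Toy sketch of the pluggable-transport idea (not Yjs internals):
// the document emits opaque updates and applies updates it receives;
// the transport is whatever function the application wires in.
function createDoc() {
  const state = new Map();
  const listeners = [];
  return {
    set(key, value) {
      state.set(key, value);
      const update = JSON.stringify({ key, value }); // opaque payload
      listeners.forEach((fn) => fn(update));
    },
    get: (key) => state.get(key),
    onUpdate: (fn) => listeners.push(fn),
    applyUpdate(update) {
      const { key, value } = JSON.parse(update);
      state.set(key, value); // applying does not re-broadcast
    },
  };
}

// Wire two docs together over any channel you already have.
const docA = createDoc();
const docB = createDoc();
const fakeWebSocket = (payload) => docB.applyUpdate(payload); // your transport
docA.onUpdate(fakeWebSocket);

docA.set('status', 'synced');
```

Swapping `fakeWebSocket` for a real WebSocket `send`, a long-polling queue, or WebRTC is the whole integration; the document never needs to know.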

- Yeah. And speaking to that, the editor support: if you have textareas somewhere and you want to make them collaborative, you can do that right now. There's a binding for a rich text area, like contenteditable, or just any textarea. You can just take it, put Yjs on it. And there are also some nice demo sites. We'll include some links with the post. There are even two demo sites. One really shows ProseMirror, Quill, and even Atlaskit, the whole Confluence experience. But there's also shared drawing, a nice thing, and there's even a 3D modeling demo baked in. It's insane, it's really insane. Also drawing. It's, like, really cool. The possibilities are so endless. It's so nice that it's so framework agnostic. Really not just editors, and really not just one editor. But really, if you have a textarea, you can use Yjs, it will be collaborative, it works. And even for older things, if someone wanted to invest in CKEditor 4, it would be possible, in theory, to make that collaborative. People wouldn't even need to adopt newer technologies. It would need work, investment, et cetera, of course, but it's possible.

- Yeah, I think this notion of flexibility and customizability, but also especially this notion of pluggability, where you can insert Yjs into any situation and it works. I think also the flexibility. Fabian, you mentioned earlier that in certain cases you do want to have a server component that acts as a kind of safety mechanism. But you might not want one, and Yjs leaves that option open as well. And I think just the notion of being able to insert it agnostically anywhere you want is a very, very key characteristic. I think one of the key characteristics that we identified a little bit earlier but haven't really dwelled on, though, is one that's very interesting to me. Which is that you can use Yjs for things like text, you can use Yjs for textareas and contenteditable, but what about drawings and 3D modeling? I know, Kevin, you've sort of framed Yjs as being for more than just text. Can you talk a little bit about the drawing capabilities? What does collaborative drawing mean, and how does Yjs fit into that?

- Yeah, definitely. This all comes from the idea that Yjs is not just for specific applications, it's a framework for shared data types. You can build any application on data types because, well, that's what we usually do as developers. We have these key data structures. In JavaScript we have only two or three main data structures, which are arrays and maps. You can build your own data structures on top of that, and they're abstractions, but basically you just have arrays and maps. And maybe set is also something; well, I really like the idea of having a set too. And you have the same capabilities in Yjs, but the difference is, these data structures or data types are collaborative, they are shared. So when you do an edit to this data type, the change is transmitted to all the other peers automatically, and they sync, and then you get updates of what changed. And you can design your application like that. Just use the shared data types and build your application. And drawing was something that is really, really easy to implement on top of shared data types. A line is basically just a list, an array of vectors. So, this is what is happening in the drawing demo. When you start drawing, you have a map, you insert a line into that map, and then you start drawing. It's really easy. And then you can maybe configure it by adding some options to the map, like the color, who created the line, all this information. You can just add it to the shared data structure. And this is also how the 3D modeling is created. It's really basic, but it's really cool, because it shows that everything you do, like rotating the 3D object, the rotation is a part of the shared data type. You can add annotations, and these annotations are just X, Y, Z values, and you add them to the shared data structure. And just like this you can build amazing applications, just on top of this idea.
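The drawing structure Kevin describes can be modeled in a few lines. This sketch only shows the data shape (a map of lines, each an array of points plus metadata); the actual syncing would be handled by the shared-type framework, and the helper names here are made up for illustration.

```javascript
// Toy model of the drawing demo's data: a map of lines, where each line
// is an array of {x, y} vectors plus some metadata. If this map were a
// shared data type, every peer would converge on the same drawing.
const drawing = new Map();

// Start a new line with metadata stored right next to the points.
function startLine(id, { color, author }) {
  drawing.set(id, { color, author, points: [] });
}

// Drawing is just appending vectors to the line's array.
function addPoint(id, x, y) {
  drawing.get(id).points.push({ x, y });
}

// One "peer" draws a red line...
startLine('line-1', { color: 'red', author: 'kevin' });
addPoint('line-1', 0, 0);
addPoint('line-1', 5, 3);

// ...and because the map is the single source of truth, any peer that
// receives the same edits renders the same drawing.
```

The 3D-modeling demo follows the same pattern: rotations and annotations are just more values written into the shared structure.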

- Yeah, I was thinking about the classic React example, the to-do list. The to-do list examples are often shown with Redux data stores, transformations, et cetera. Now take this final array that you put your data in, or this kind of map (probably an array in this case), put that in the Yjs data structure, and it's almost automatically collaborative. That's just great.

- Is one of these in particular harder than the others? Is collaborative text editing actually harder than drawing?

- Collaborative text editing... I want to distinguish between text editing and rich text editing. Because text editing is pretty easy. It's harder than having a 3D model, in my opinion. From my perspective as a developer of Yjs, it is harder because with text editing, you need to optimize a lot to make it performant and not block the thread. You want to be able to insert, like, a megabyte of text into your data structure. And that's not always easy. First of all, because of the time it takes to send it to the other peers, but also the time it takes to parse that operation, that change. So, the text data structure is heavily optimized in Yjs, and that's why it's performant. For you, it's as easy as using the array data structure. Rich text, on the other hand, is a bit weird, because you can either have structured documents. This is what I think of in ProseMirror: you have a paragraph, inside that you have a table, and so on. And then you also have formatting attributes on your text. So, when you write the text, "Hello world," and you want to make "world" bold, you don't remove the text and add tags, like in HTML, around this word, "world." You want to assign attributes to this text. And this is also possible in Yjs. It's like the rich text notion from classic Windows; I think that's where the term rich text was developed. You assign attributes to text. This is also one of the problems here. And as soon as you realize that this is going on, you can either have structured documents, or rich text documents, or both combined. In ProseMirror, they're kind of both combined. You have marks, so you can assign properties to text. And as soon as you realize what's going on here, it's fairly easy to build applications on top of that. But building editor support, that's a whole different level, if you build structured editor support, for example for ProseMirror. That was really, really hard to get right.
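The "attributes, not tags" distinction can be made concrete with a toy model. This is illustrative only, not Yjs's optimized representation: formatting is data attached to character ranges, so making "world" bold never touches the characters themselves.

```javascript
// Toy model of rich text as attributed characters (not Yjs's actual,
// heavily optimized representation): each character carries an attrs
// object, and formatting just assigns attributes over a range.
function makeText(str) {
  return Array.from(str, (ch) => ({ ch, attrs: {} }));
}

// Apply attributes to the half-open range [from, to).
function format(text, from, to, attrs) {
  for (let i = from; i < to; i++) Object.assign(text[i].attrs, attrs);
}

function toString(text) {
  return text.map((c) => c.ch).join('');
}

const text = makeText('Hello world');
format(text, 6, 11, { bold: true }); // make "world" bold

// The character sequence is unchanged; only the attributes differ.
// Contrast with HTML, where you would splice <strong> tags into the string.
```

This is also why concurrent formatting merges more cleanly than tag-splicing: two peers bolding overlapping ranges just assign attributes, with no tags to balance.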

- Yeah. And speaking of performance for inserting, we did a test. If you take a really, really long document and paste it 10 times into Google Docs, it takes longer and longer. It takes around seven seconds. And with Yjs and CRDTs, it's instant. It's really just instant. I was astonished. I also tested some other things, which I won't name so as not to shame them. But there I was even able to completely freeze the collaborative editor; it was not reacting anymore after a while, and undo stopped working. It's just not a pleasant experience. It was so nice to see this just working so well with Yjs. That was one of the key points I was able to present to the stakeholders who were evaluating the technology on the client side with me: hey, this is really working great in practice. Just try pasting a big document. Now try the same in Google Docs. It's night and day.

- Absolutely. Yeah, and I think one of the things that's interesting is if you look at Yjs in relation to some of the other tools that are out there and available. I know that, for example, some of the very common tools people use out there are things like CKEditor 5, ProseMirror's collab module, as well as Draft.js. But one of the things I know is that there are certain features that are lacking. I know that you don't want to name and shame, Fabian. But I'm curious, what makes Yjs superior? One of the things I know, for example, that Yjs does better is one thing you mentioned earlier, Kevin, around content annotations, being able to directly apply those annotations. What are some other examples of how Yjs is better than some of these other solutions and stacks, like ShareDB, Automerge, the CKSource service? What are some of the things that you both found were either better in Yjs, or lacking in others, or things that you noticed?

- First of all, just to get it out of the picture, the CKSource solution unfortunately is proprietary. It's a server black box. You have to pay for it. Which is fine, that's fine. But as I've already outlined a lot on this call, open source is just better. Because, well, I mean, Drupal is open source, many technologies are open source, and those open source technologies thrive, because companies invest in them, they mature, and everyone can use them, and that's so important. That rules out the proprietary solutions for me. They might be useful for one particular project, but they're not useful, in my mind, to the larger community. And for Automerge and ShareDB, I'll let Kevin speak; he's the expert.

- Yeah. So, about these two projects. ShareDB: it was always my goal to be as good as ShareDB. It's a really cool project. And I was inspired by a lot of the features they have, because they also have the notion that you can just manipulate your data and then everyone gets synced up. And I love that idea. At the time when I created Yjs, they didn't support, for example, objects as key values or so. And this was like, okay, it can't be so hard to implement that. I also wanted to make it peer-to-peer. So, this is why I created Yjs. And it only took me six years to do that, but that's fine. So, comparing Yjs against ShareDB: they're both great projects, but ShareDB is based on operational transformation, which is centralized, and I always loved the idea of having a decentralized system. And Automerge, also a really cool project. I really like the maintainer of that project. He is really active in the research community, and he's a lot more credible than I am, with the papers he has written. He is a really good writer. And if you get the chance, and are interested in CRDTs, you should definitely check out one of his talks about CRDTs, because he explains really well how it works. But comparing against Automerge: right now, Automerge is not as performant as Yjs. That's just what I want to say there. Yjs is really focused on shared text editing. And Automerge also supports text editing, but it's not as fast. I created some benchmarks. Maybe we can link those too. You can also find them on the GitHub repository of Yjs. But yeah, these are the main reasons. It still needs to be seen which algorithm is better. But the thing is, I had more time to work on Yjs, and I implemented a lot of optimizations, especially for shared text editing.
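To make the CRDT idea behind both projects concrete, here is one of the simplest convergent data types: a grow-only counter. This is illustrative only, far simpler than the text CRDTs in Yjs or Automerge: each replica only increments its own slot, and merging takes the per-slot maximum, so merges commute and every replica converges no matter the order updates arrive in.

```javascript
// A minimal CRDT: a grow-only counter. Each replica increments only its
// own slot; merge takes the per-slot maximum. Because max is commutative
// and idempotent, replicas converge regardless of delivery order.
function makeCounter(replicaId, ids) {
  const slots = Object.fromEntries(ids.map((id) => [id, 0]));
  return {
    increment: () => { slots[replicaId] += 1; },
    // The counter's value is the sum over all replica slots.
    value: () => Object.values(slots).reduce((sum, n) => sum + n, 0),
    state: () => ({ ...slots }),
    merge(other) {
      for (const id of Object.keys(slots)) {
        slots[id] = Math.max(slots[id], other[id]);
      }
    },
  };
}

const ids = ['a', 'b'];
const a = makeCounter('a', ids);
const b = makeCounter('b', ids);

a.increment(); a.increment(); // replica a counts 2 locally
b.increment();                // replica b counts 1 locally

// Exchange states in either order: both replicas converge.
a.merge(b.state());
b.merge(a.state());
```

Text CRDTs apply the same principle (commutative, convergent merges) to sequences of characters, which is where the real engineering difficulty lives.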

- Yeah, and that's also a very important point: if you want to build a collaborative system for an application with a JSON structure, try out Automerge, try out Yjs. Both are open source. There's always a good solution for the project that you have. But if you want to do text editing, then Yjs gives you this undo manager. It gives you these functionalities for rich text. It gives you this control. It gives you this basic server component if you need one. With Automerge, you build everything on your own. You can do that, it's fine. People have done that with certain editors. But Yjs really gives you a head start and a framework to work with, especially for shared editors.

- Yeah, and this is really interesting. I think one of the things... By the way, just to help our viewers and our listeners, we keep on throwing this CRDT acronym around. It's actually one of the big reasons why I think Yjs is so compelling. CRDT, by the way, stands for conflict-free replicated data type. There's a very, very useful article on it on Wikipedia. But just to dig into CRDT a little bit briefly, I think one of the really key aspects of Yjs that we've obviously been touching on here and there is the fact that because of this focus on data types, it's very powerful for rich text editing, especially for those who need more than just plain text editing. But I think this is actually just one example of some of the really interesting features in Yjs. One of the things that I found really interesting is that because of that kind of agnosticism, and because of that focus on that kind of lower level, we actually find that other things are possible with Yjs, like using multiple document formats. So you can use rich text, you can use Markdown, you can use HTML, anything that's kind of a parsable AST. What I'm curious about, though, is a topic very near and dear to my heart, which I know, Kevin, you've focused on, which is the notion of accessibility. So I'm curious, how exactly does accessibility work in the context of realtime collaboration for Yjs? Especially given the fact that rich text editing and realtime collaboration are both very, very difficult things to do in an accessible way. Are you using things like ARIA labels? What are your thoughts on accessibility with realtime collaboration?

- Accessibility is firstly an issue for the editors. Yjs is not really concerned with accessibility, because it's not really part of the framework. But if the editor supports accessibility, then that's preserved with Yjs. By the way... No, actually, that's all I want to say about that. Most editors nowadays are really accessible, so there's not a lot of concern there. I'm not an expert in accessibility, but I'm also really concerned about what happens when you hit undo/redo, for example. Which is, by the way, not working in Google Docs or in most editors. Try hitting Edit and then Undo in Google Docs. It doesn't work. And I can't figure out why.

- Very interesting. Wow.

- But maybe this is part of the discussion when we talk about editor support, or which editor we choose, Tag1, and for the company that we contract for.

- We'll do a followup talk on that. I think that the whole editor component, we did a ton of research. So maybe we'll do a Tag talk next on the whole editor component and how we ended up with ProseMirror and the integration of all that.

- Yeah, because that's an area I'd love to dig into in terms of ProseMirror's capability. CKEditor also has really great accessibility features. But how that relationship comes together, how Yjs really integrates with some of these editors and how those features get exposed, that's a really interesting topic that I know we're gonna have to save for next time. In these last few minutes here... We've covered a lot of ground here. We've covered realtime collaboration. We've covered some of the problems with concurrency in these editing tools. We've also talked about CRDT from the very, very high level standpoint. I'm curious now, I think one of the things people are interested in is, well, what is the future of Yjs, Kevin? What exactly do the next couple years hold for Yjs? What are some of your goals in the future, moving forward?

- My goals. My goal is to create editor support for all the major editors out there right now. There's currently editor support for code editors like Ace and CodeMirror, and there's rich text editor support, for example for ProseMirror and Quill. There are many interesting additions to Yjs, and I want to increase that number. I want a lot of people using this technology. Because if you use Yjs as the entry point for your data model, you can have different text editors. For example, you can have CodeMirror and Ace, where one user uses one and the other uses a different editor, and that's a really interesting idea for me. The other thing I'm really interested in is real decentralized systems. There's mainly the Dat project and the IPFS project, and building on top of those systems. A decentralized system, wow, it's so amazing. You can build an application without a central server, only having, well, peers, just workstations meshed together, and they somehow create an environment where you can store data. That's very interesting to me.

- I think that is a very, very compelling idea that I'd love to talk more about with you, Kevin. Yeah, the idea of having a completely serverless implementation I think is very interesting. Well, I did want to say that we are out of time. However, I think one of the things I wanted to call out is something, Fabian, you said at the very beginning of this whole broadcast, which is, this should be something that's part of every CMS and online application. We should be building interfaces that are editable. We should be really enabling these content editor workflows. And I think clearly Yjs has the right idea, has the right journey in mind. And I can see, given that Tag1 is focused not just on Drupal but also all of these other technologies, and building things like central Node.js backends for these editors, all of that sort of work really highlights, I think, the notion that realtime collaboration is a very important concern for everybody. Not just developers, but also our content editors and our marketers who are working with us on a daily basis. Any last words about realtime collaboration, Yjs, something you want to leave our viewers and listeners with?

- I think it's just an awesome technology. And really, even repeating myself, it is such a difference if you start using the system, even if we are just in demo mode as we are pre-alpha, to just collaborate with someone within Drupal itself. It feels so different from being in Google Docs or being isolated, because it's where you create your content normally, it's where you work together, it's the tools you use. You can even attach a file directly. You don't have to upload it to Google Docs and later upload it to Drupal. You really have the experience that you can use a media browser, your whole media library that's in the CMS, not in Google Docs. You select your file, it's interactive, and it's collaborative. Your peers will see it as well. And I think that's just fantastic, and that's why I'm saying realtime systems for editors, but also realtime updates. Like, I made an update, you see that I made an update directly on your screen. That's kind of, in my mind, the future of CMSs and, in a way, also the web.

- And this is intended to be a CMS-independent solution. We look forward to adding this to Django and Wagtail, to WordPress. Every CMS should have this. I'd also say that we just scratched the surface of all of these technologies. I think this is a really interesting conversation, so we'll definitely set up some future talks to dig more into the details, whether it's the editor or the underlying tech, to get into the nitty gritty.

- Absolutely. I think that today we've done a very good overview of the what, the how, and the Yjs. And I want to go ahead and wrap things up here. If you have any thoughts on the show, or certain topics you're interested in, please feel free to reach out to us. I want to thank our guests today: Fabian Franz, senior technical architect and performance lead at Tag1. Also Kevin Jahns, joining us all the way from Berlin, the creator of Yjs as well as a key expert on and contributor to realtime collaboration. And of course my partner in crime Michael Meyers, managing director at Tag1. Thank you all so much, and we'll see you next time for Tag1 Team Talk. Thank you.

Sep 18 2019

BADCamp is only a couple of days away!!

With the shift in venues, we thought we'd put together a list of local places that serve up good grub. The attendees of BADCamp are diverse, and so are the restaurants on our list. We have indicated where we have confirmed that they serve vegetarian and vegan cuisine.

Angeline's Louisiana Kitchen

Angeline's brings the flavor and atmosphere of a New Orleans neighborhood restaurant to downtown Berkeley, with great music, libations and the classic dishes invented in the Big Easy's greatest kitchens.

(Vegetarian, Vegan)

Cancun

Founded in 1994 by Jorge Saldana, Cancun opened its doors downtown providing a central location for easy, dine-in or takeout, healthy Mexican food in Berkeley. Beloved from the beginning, Cancun, with its local farm to table ingredients, fresh salsa bar, big open space and high ceilings, continues to offer homemade traditional Mexican dishes, made to order with love.

Eureka

Discover the Eureka! all-American culture through one-of-a-kind experiences, weekly events such as Steal The Glass, daily “Hoppy” Hour, and an inventive rotating beer and craft beverage program, small-batch whiskeys, and a mouthwatering menu featuring gourmet burgers.

(Vegetarian)

Jupiter

Housed in an old livery stable from the 1890's, with interior inspired by the oldest bar in Berlin, Jupiter exudes charm & rare atmosphere. Steps off BART, in the heart of Downtown Berkeley, this brewhouse features two stories of seating, a heated beer garden, live music, delicious food & incredible local beer.

The Butcher's Son

Yes, everything is vegan at The Butcher's Son.

(Vegetarian, Vegan)

Ippudo

Every steaming bowl is an “arigato” – a thank you, which Ippudo serves to their customers together with a smile.

The Veggie Grill

At Veggie Grill, vegetables are the rockstars! They see every season as an opportunity to create bold and delicious ways to bring people together.

(Vegetarian, Vegan)

top dog

top dog grew out of a boy's love of sausage, a staple in his German immigrants' New York home over the WWII years. Steaks? Tubesteaks! His paper route to a well-mixed neighborhood assured that Italian, Polish, even Hungarian sausages were soon no strangers to that developing appetite and palate. Nor had he far to go to a cart or stand offering kosher style "Franks", usually steeped but better off the griddle.

(Vegetarian)

Tuk Tuk Thai

Tuk Tuk Thai is a laid-back cafe serving popular Thai dishes like curries & noodle soups & offering delivery too.

(Vegetarian, Vegan)

Maison Bleue

Maison Bleue is a little taste of France.

(Vegetarian)

Revival

Revival Bar + Kitchen is a Sustainably Sourced California Brasserie and Cocktail Lounge.

(Vegetarian, Vegan)

The Flying Falafel

The Flying Falafel serves Mediterranean goodies with an aerial twist. Falafel cooked to order, hummus, and veggies galore. Serving catering and party platters.

(Vegetarian)

Venus

Custom California Cuisine made from local and sustainable ingredients.  Everything that comes from The Venus kitchen is handmade. No cans. No freezers.

(Vegetarian)

Sep 18 2019
Sep 18

Many thanks to Kaleem Clarkson (kclarkson) and his team for organizing a great DrupalCamp Atlanta. I had a wonderful time learning, connecting and being inspired!

Doug Vann, Carole Bernard, and Rudy Dodier in a selfie

I started my day at DrupalCamp Atlanta by participating in the workshop “Introduction to Drupal,” led by longtime Drupal community member Doug Vann (dougvann). Joining me was Rudy Dodier, from Platform.sh. Doug covered everything from the history of Drupal, to setting up a basic website, to how the word “system” in Content Management System can be an acronym for: Saves You Some Time, Energy, Money.

I took copious notes, as I continue to connect the dots to the power of the Drupal project - to how it is leading a digital transformation across industries. I absorbed it all, and was eager to learn more. I met other developers and individuals who contribute so much to the Drupal project and to the Drupal community. From my conversations with Ray Saltini (rgs) and Mike Anello (ultimike) to Suzanne Dergacheva (pixelite), I was struck by the level of commitment demonstrated by the community. You'll get a sense for this in Suzanne's slides for her Growing the Drupal Community talk.

Suzanne Dergacheva, Heather Rocker, and Carole Bernard standing together

Heather Rocker (hrocker) also attended and presented at the Career Fair. She spoke about the importance of the Association’s initiative on Diversity, Equity & Inclusion and the benefits that come from actively recruiting and welcoming new individuals (especially those from underrepresented communities) to lend their skills to the project.

I realize the extensive number of stories that are within this vast and passionate community, and I am excited to promote and talk about them. I am looking forward to being a communications and marketing advocate for the Drupal community, the Drupal project and the Drupal Association. From the specific needs of developers, to the importance of broadening our audience, to the necessity of career fairs to bring students on the Drupal train, and to the need for marketing to grow Drupal adoption, I heard and learned so much in a short visit to Atlanta. But, what impressed me as much as the day was the contagious enthusiasm for what the community is doing and for what it can accomplish!

Slide showing faces of the 8 Atlanta Camp Leaders - ATL Board Members - Kaleem Clarkson, Brandon Firfer, Sarah Golden, Adam Kirby, Trish Smith, Nikki Smith, Dominic Thomas, and Advisor Dave Terry
The DrupalCamp Atlanta Leadership Team, without whom the event wouldn't have been possible!

ATL team onsite together doing a power pose.

Thanks to everyone who came out for #DCATL this year! We loved meeting y'all. Let's continue to grow and support the @drupal community! pic.twitter.com/q9IgHjVRkA

— DrupalCamp Atlanta (@DrupalCamp_ATL) September 14, 2019

Sep 18 2019
Sep 18

The Drupal world is looking forward to the “ninth edition” of the great drop. It’s going to continue Drupal’s chosen path with its API-first orientation, the latest versions of third-party libraries, advanced editor-friendliness, and much more.

The Drupal 9 release is scheduled for June 3, 2020. One more big step, the release of Drupal 8.7.7, has just brought it closer.

This is a very special update indeed. In this blog post, we will tell you what’s new in it and why you should update to Drupal 8.7.7 as part of your site’s easy and smooth journey to Drupal 9.

What’s new in Drupal 8.7.7: a big surprise inside

Drupal 8.7.7 came on September 4. Although it’s not even a minor update — it's been called a patch release — it brought a major new feature by introducing a new core versioning system that helps websites be more ready for Drupal 9.

New core version requirement key in Drupal 8.7.7

The new core version requirement key is meant for module and theme creators to mark their project’s compatibility with Drupal core versions. This compatibility can apply to multiple versions — for example, D8 and D9. This was not possible with the old “core: 8.x” key.

Drupal core contributor Gábor Hojtsy described this feature in his blog post. He emphasized that 8.7.7 is the first release to support modules compatible with multiple major core versions.

Gábor Hojtsy's quote about Drupal 8.7.7

The key is to be added to the info.yml file of the module or theme:

name: My Module Name
type: module
core_version_requirement: ^8 || ^9

Websites that update to Drupal 8.7.7 now will have another benefit. The new requirement key also allows marking the compatibility with minor and patch releases (e.g. ^8.7, ^8.8, ^8.7.7, ^8.8.0, ^8.7.23, and so on). “Such precision was never possible before!”, Gábor Hojtsy writes.

What happens to the old “core: 8.x” key?

  • The old key still exists. Core versions prior to 8.7.7 do not recognize the new core_version_requirement key, so developers will need to list both keys in order to allow their code to work with both the older and newer versions.
  • If modules or themes are not supposed to work with versions older than 8.7.7, it’s enough to just use the new key.
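Putting the two cases together, a module that still supports cores older than 8.7.7 can declare both keys in its info.yml. A sketch (the module name is a placeholder, and the exact validation rules are described in the drupal.org announcement):

```yaml
name: My Module Name
type: module
# Read by cores older than 8.7.7, which ignore the key below.
core: 8.x
# Read by Drupal 8.7.7 and later; ^8.7 covers 8.7.x and newer 8.x releases.
core_version_requirement: ^8.7
```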

You can read more details about the key in the drupal.org announcement. But why is marking the Drupal 9 compatibility so important, and how should an update to D8.7.7 help your website’s future? The reason comes next.

Drupal 8.7.7 is a step towards your smooth upgrade to Drupal 9

The topic of D9 compatibility for modules and themes is so hot because many D8 websites have a chance to be fully ready for Drupal 9 the day it arrives.

D9 is being built on the basis of D8, but without deprecated code and with new versions of third-party libraries. Websites that are free of deprecated code and are kept updated will be instantly welcomed to “cloud number 9”!

So if you update to Drupal 8.7.7 now, you will be closer to D9 than ever. Thanks to the new core versioning system, your modules and themes will be ready to “jump” to the ninth Drupal as soon as it comes.

Update to Drupal 8.7.7 and prepare for Drupal 9 with our team!

Our development team will help you prepare for Drupal 9 easily:

  • perform your website’s update to Drupal 8.7.7
  • update your modules and themes
  • find and replace the deprecated code
  • apply the new core version requirement key
  • follow all subsequent updates for your Drupal 9 readiness
  • make an upgrade to Drupal 8 (if you are still with the seventh version)

The future looks really bright for websites that follow the latest recommendations. Let yours be among them!

Sep 18 2019
Sep 18

It’s great to live at a time when a robust CMS can share its content with an ultrafast JavaScript front-end. One of the best examples of this is combining Drupal and GatsbyJS. We are happy to see a new tool for it that is fresh from the oven — Gatsby Live Preview module for Drupal 8. 

It provides Drupal editors with easy content creation workflows, making Drupal and Gatsby integration a more lucrative idea for developers and website owners.

GatsbyJS: a great companion for Drupal

The GatsbyJS modern site generator inspires the Drupal community more and more. Here are a few of the reasons:

  • It is based on the hottest technologies such as the ReactJS front-end framework, the GraphQL query language, and the Webpack JavaScript module bundler. 
  • It is amazingly fast and provides real-time content updates. Every static Gatsby site is, in fact, a full-fledged React app. 
  • It comes packed with 1250+ source plugins to retrieve data from particular data sources. This includes the Drupal source plugin that connects your Drupal site as a data source to your Gatsby site.
  • It has 250+ starter kits to quickly set up a Gatsby website that will display your Drupal 8 data.
GatsbyJS starter kit examples

The Gatsby Live Preview module in Drupal 8

The contributed module called Gatsby Live Preview allows Drupal content editors to make content updates and immediately see how they will look on the Gatsby site before deployment.

This easy content creation is provided by showing Drupal on the left and Gatsby on the right:

Gatsby Live Preview module in Drupal 8

The maintainer of the module, Shane Thomas, gave a talk and showed slides of the Gatsby Live Preview module at Decoupled Days in New York on July 18, 2019.

Thomas explained the problem that the module solves. Previously, there was no easy way to see during content creation how changes would look before you click “save.” One of the available options was to run the Gatsby development server before deploying the changes to live, which required regenerating the entire site.

According to Shane Thomas, among the plans for the future is integrating the module with Drupal 8’s Content Moderation module. The core Content Moderation and Workflows modules take content creation to a new level through handy editorial workflows in Drupal 8.

The module is very new, with its alpha release out on August 14, 2019. It is based on a tool introduced by the Gatsby team — the Gatsby Preview Beta.

Steps to install and configure the module 

This part comes after the main setup is complete. So we assume you are done with:

  • Gatsby site creation
  • Gatsby Source Drupal plugin installation (version 3.2.3 or later)
  • configuring the gatsby-config.js file to list your Drupal website’s address
  • building up your Gatsby pages to display Drupal content

So the live preview setup steps are as follows:

  • install and enable the Gatsby Live Preview Drupal module the way you prefer
  • set up a Gatsby cloud preview account
  • set the “preview” flag to “true” in the “options” (the Gatsby Source Drupal plugin’s file)
  • Gatsby is now ready to follow the content changes at a particular URL
  • copy the preview URL from the Gatsby cloud to the “Gatsby Preview Server URL” (Configuration — System — Gatsby Live Preview Settings of your Drupal admin dashboard)
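Steps two through four above come together in the gatsby-source-drupal entry of gatsby-config.js. A minimal sketch, where baseUrl is a placeholder for your own Drupal site’s address:

```javascript
// gatsby-config.js (sketch; baseUrl is a placeholder)
module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-drupal`, // version 3.2.3 or later
      options: {
        baseUrl: `https://your-drupal-site.example/`,
        // The "preview" flag tells Gatsby to follow content changes
        // from Drupal at the preview URL.
        preview: true,
      },
    },
  ],
};
```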

Examples of easy content creation & preview with the module 

The Decoupled Days talk about the Gatsby Live Preview module greatly inspired the Drupal community. To make it easier for people to get started with Drupal and Gatsby integration, Drupal contributor Jesus Manuel Olivas decided to improve some features in the module.

The developer also added this setup to projects based on the Drupal Boina Distribution and shared his impressions about the module in the blog post with the video. Let’s have a look at this easy content creation process:

  • On the left side, we see the Drupal site where some content is added via the admin interface. 
Gatsby Live Preview module in Drupal 8 example
  • On the right side, we see the Gatsby site update immediately after the “Save” button is clicked in Drupal.
Gatsby Live Preview module in Drupal 8 example

 

Get the most out of Drupal and Gatsby integration!

Our developers will help you enjoy the incredible speed that GatsbyJS is able to give to your Drupal website! They can:

  • set up a Gatsby website and establish content retrieval from Drupal
  • build your Gatsby pages exactly according to your wishes thanks to GraphQL
  • install and configure the module for Drupal and Gatsby live preview for easy content creation

Our Drupal team are masters of modern JavaScript technologies. You can entrust this integration to us!

Sep 18 2019
Sep 18
many different people profile heads in different colors

Part 3 of the "Mastering Drupal 8 Multilingual" blog series provides site building and front-end tips and techniques to improve the multilingual experience for both editors and end-users.

Previous posts in this series covered planning and budgeting for multilingual, as well as the process for installing the modules needed and the basics of content, configuration and interface translation. If you missed posts one or two, you may want to read those posts before proceeding.

Sep 18 2019
Sep 18

TEN7-Podcast-Ep-070-Using-Kubernetes-for-Hosting.mp3

Summary

After months deep in the weeds of Kubernetes, our DevOps engineer Tess Flynn emerged with best practices for melding Docker, Flight Deck and Kubernetes to create a powerful open source infrastructure for hosting Drupal sites in production (powered by our partner, DigitalOcean). Ivan and Tess take a deep dive into why we chose this combination of tools, our journey to get here, and the nitty gritty of how everything works together.

Guest

Tess Flynn, TEN7 DevOps

Highlights

  • Why offer hosting ourselves now?
  • Differences in hosting providers
  • The beauty of containerization, and the challenge of containerization
  • The best container orchestrator
  • What’s with hosting providers and their opaque pricing? (and why we like DigitalOcean)
  • Kubernetes’ highly dynamic environment: updated with just a code push
  • Flight Deck, the genesis of our journey to Kubernetes
  • Docker enables consistent environments
  • Flight Deck + Kubernetes + DigitalOcean
  • You can do this all yourself! (or we can help you with our training)
  • It all runs on Drupal OR other platforms
  • In order to enjoy Drupal + Kubernetes, you must let go of your local file system and SSH, and reevaluate your email system
  • Complex files vs. static files and S3
  • Kubectl! (it sounds cuter when you say it out loud)
  • Cron jobs run differently in Kubernetes
  • A Tess talk isn’t complete without a car analogy: Kubernetes is like a garage that comes pre-stocked with all the tools you’ll need to work on your car

Links

Transcript

IVAN STEGIC: Hey everyone! You’re listening to the TEN7 podcast, where we get together every fortnight, and sometimes more often, to talk about technology, business and the humans in it. I am your host Ivan Stegic. We’ve talked about DevOps at TEN7 on the show before. We’ve done an episode on why we decided to expand our hosting offering to Linode back at the end of 2017. We’ve talked about why we think it’s important to have a good relationship with your hosting company. And, we’ve written about automation and continuous integration over the years as well.

For the last year or so, we’ve been working on our next generation of hosting service, and our DevOps Engineer, Tess Flynn, has been deep in the weeds with Kubernetes. Today, we’re going to spend some time talking about what we’ve done—and how you could be doing it as well—given that we’ve open sourced all of our work.

We’re also rolling out training at BadCamp this year, that’s in October of 2019, and we’ll be at DrupalCorn as well, in November. So, we’ll talk about that and what you might learn by attending. So, joining me again is our very own Tess Flynn. Hello, socketwench.

TESS FLYNN: Hello.

IVAN: Welcome, welcome. I’m so glad you’re on to talk shop with me. I wanted to start with why. Why are we hosting our own sites and those of our clients? There are so many good options out there for WordPress, for Drupal: you've got Acquia and Pantheon, Blue Host, and others. We typically use the provider that makes the most sense, based on our clients’ needs.

We’ve had a close relationship with ipHouse and their managed hosting services for a long time. But why start hosting now? For us, as an organization, it’s kind of been the perfect storm of circumstances, from the technology being mature, to the cost of it, and the availability of it, to where we are as an organization from a developmental point of view, to even being more conscious of vendor lock in and actively trying to avoid it.

So, I want to talk about technology a little bit more with you, Tess. What’s so different now than it was a few years ago? Why is it suddenly okay for us to be hosting ourselves?

TESS: There’s been kind of an explosion over the last few years of managed Kubernetes hosting providers. Now, we’ve had managed hosting providers forever. We’ve had things that are called Infrastructure as a service (IaaS) provider; that’s going to be things like AWS and Google Compute Cloud, as well as other providers, including DigitalOcean, but also say, Linode and other ones, which just provide raw hardware, virtual machine and root login. Lately, however, a lot of people would rather break up their workloads into containers, using something that’s similar to Docker. And I’ve talked about Docker before, but Docker is an alternative take on virtualization technologies, which works on taking applications and putting them in their own individual, virtual environment. I’m glossing over so many things when I say that, but it gets the general point across, with the two minutes before everybody else falls asleep.

IVAN: Right.

TESS: What’s really nifty about putting applications into a container is that now the container doesn’t really care where it is. You can run it on your system, you can run it somewhere else, you can run it on a hosting provider. And, the great thing about these containers is that you can download ones that other people have created. You can modify them, make your own, and you can string them together to build an entire application service out of them. And that’s really, really great. That’s like infrastructure Legos.

But the problem is, once you get the containers, how do you make sure that they’re on the systems, on the actual hardware where they are supposed to be, in the number of copies that there’s supposed to be, and that they can all talk to each other? And the ones that aren’t supposed to talk to each other, can’t? That’s a lot trickier. For a long time the problem has been that you really only have two solutions: you do it yourself, or you use something like Docker Swarm. I don’t have the greatest opinion of Docker Swarm. I have worked with it before in a production environment, it’s not my favorite.

IVAN: It’s a little tough, isn’t it? We’ve had a client experience on that.

TESS: It’s a little tough, yeah. It’s not really set up for something like a Drupal workload. It’s set up more for a stateless application. A prototypical example is, you need to calculate the progression of matter within the known galaxy, factoring a certain cosmological constant. Take that variable, set it into a compute grid and go, “Hey, tell me what the results are in 15 years.” But you don’t really do that with Drupal. With Drupal, you’re not just going to send off one thing and always get the same thing back. There’s going to be state, which is preserved. That’s going to be in the databases somewhere, and there are going to be files that are uploaded somewhere. And then you have to get load balancing involved, and then it gets really complicated, and it’s like ugh. I really didn’t like how Swarm did any of this stuff. It was very prescriptive. It was, you do it their way, and nothing else.

IVAN: No flexibility.

TESS: No flexibility at all. It was really, really not fun, and it meant that we had to do a lot of modification of how Drupal works, and incur several single points of failure in our infrastructure, in order to make it work in its form. That whole experience just did not get me interested or excited to make a broader Swarm deployment anywhere else.

Then I ran across Kubernetes, and Kubernetes has a very different mentality around it. Kubernetes has more different options for configurations, and you can tailor how Kubernetes manages your workload, rather than tailoring your workload to work with Docker Swarm. That’s why I really liked it. What's really nifty is, once you have Kubernetes, now you have an open source project, which is platform agnostic, which doesn’t care about which individual hosting provider you’re on, as long as you have containers, and you can send configuration to it somehow, it’s fine, it doesn’t care.

A lot of managed hosting providers are going, “Hey, you know, VMs [virtual machines] were kind of nifty, but we really want to get in on all this container stuff now, too.” “Oh, hey, there’s a container orchestrator,” which is what Kubernetes is, and what Docker Swarm is, as well, a container “orchestrator” which does all of the making sure the containers are on the right systems, are running, they can talk to the containers they're supposed to, and can’t talk to containers they're not supposed to.

That made a lot of infrastructure providers go, “This is not really a Platform as a service anymore. This is another form of Infrastructure as a service. As such, that is a segment that we can get into."

So, first it started with Google Kubernetes Engine, which is still considered today the de facto version. Amazon got into it, Azure got into it. And all of these are pretty good, but with a lot of these huge cloud service providers, you can’t get clear pricing out of them to save your life.

IVAN: Yeah. That’s so frustrating, as a client, as a business owner. How do you do that? It’s insane.

TESS: I mean, the only way that it seems that is deterministic, in order to figure out what your bill is going to be at the end of the month, is to spend the money and hope that it doesn’t kill your credit card. [laughing]

IVAN: Yeah, right, and then try to figure out what you did, and ways of changing it, and then hell, you’re supposed to be just charged that every month from now on, I suppose.

TESS: It’s just a pain. It wasn’t any fun, whatsoever. So, an alternative approach is, you could actually install Kubernetes yourself on an Infrastructure as a service provider with regular VMs.

IVAN: And, we considered that, right?

TESS: Oh, I considered it, and I even spun that up on a weekend myself. It worked. But the problem is, I’m a colossal cheapskate and I didn’t want to spend $30.00 a month for it. [laughing]

IVAN: [laughing] If only there was a supporting ISP that had free Kubernetes support, and just charged you for the compute engines that you used.

TESS: I was really kind of sad that there wasn’t one, until six or eight months ago, when DigitalOcean announced that they have in beta (now it’s in production) a Kubernetes service, where the pricing was incredibly clear. You go to the cluster page, you select the servers that you want to see (the nodes as it were). I know, Drupal nodes, infrastructure nodes, it’s really confusing. Don’t even get physics people involved, it gets really complicated. [laughing]

IVAN: No, please. No, don’t. [laughing]

TESS: But you select which servers that you want to have in your Kubernetes cluster, the sizing, and the price is just listed, right there, in numbers that you can understand! [laughing]

IVAN: Per month, not per minute.

TESS: I know, per month, not per minute.

IVAN: It’s just the small things. Crazy.

TESS: And, it really targeted the kind of market that we are in for a hosting provider, and it made me really excited, and I really wanted to start putting workloads on it, and that’s what started the entire process.

IVAN: It really was, kind of a fortuitous series of events, and the timing kind of just really worked out. I think one of the biggest things for us, for me, is that with Kubernetes, we don’t have to worry about patching and security updates, and monitoring them, and these large hardware machines that we have to keep patched and updated. Essentially, it’s updated every time we do a code push, right? I mean, we’re still concerned with it, but it’s a much easier burden to bear.

TESS: Right. Now what’s going on is that, every time that we do a push, we’re literally rebuilding every system image necessary to run the underlying application. Which means that if we need to push a system update, it’s really just a matter of updating the underlying container's base image to the newest version. We’re already using Alpine Linux as our base containers, which already is a security-focused minimal container set.

IVAN: So, this is actually a good segue to what I wanted to talk about next. A few years back (as opposed to six to nine months back), the origin of the road that got us to Kubernetes really was Flight Deck, and the desire for us to make it easy for developers who work at TEN7—and anyone else who uses Flight Deck, honestly—to have the same development environment locally. Basically, we wanted to avoid using MAMP and WAMP and different configurations so that we could eliminate that from any of the bug-squashing endeavors that we were going into. So, let’s talk about how this started with Docker and led into Flight Deck, and what a benefit it is to have the same environment locally as we do in staging and production.

TESS: So, there’s a joking meme that’s been going around, and DevOp cycles, of a clip of a movie where, I think a father and son are sitting and having a very quiet talk on a bench somewhere in a park, where the kid is saying, “But it works on my machine.” And then the Dad hugs him and says, “Well, then we’ll ship your machine.” [laughing] And, that’s kind of what Docker does. But joking aside, I wanted to get that out of the way so I’m not taking myself too seriously. [laughing]

So, one of the problems with a lot of local development environments—and we still have this problem—is that traditionally we’ve used what I consider a hard-installed hosting product. So, we’re using MAMP or WAMP or Acquia Dev Desktop, or if you’re on Linux you’re just installing Apache directly. And all of those work fine, except when you start working on more than one site and more than one client. So, suddenly you have this one problem where, this one client has this really specific php.ini setting, but this other client can’t have that setting. And MAMP and WAMP work around this through a profile mechanism which, underneath the covers is a huge amount of hyperlinking and weird configurations, and spoofing, and like eww, it makes me shudder.

IVAN: Yeah, it makes me cringe just to talk about it, yeah.

TESS: And, the problem is that, every time you have to do this, every developer has to do this themselves, they can’t just standardize on it. So, if somebody has an individual problem on their system, that only happens on their system at 3:45 on a Thursday, after they’ve had chili for lunch or something or other, then you can’t really reproduce it. So, the solution really is, you need to have replicatable, shareable, consistent development environments across your entire team. And that’s what Docker does.

Docker provides that consistency, that shareability, and makes sure that everybody does, in fact, have the same environment across the board. That’s the entire point of that, and that’s where the whole joke about, “Well, then we’ll ship your machine,” [laughing] because that is in essence what containers are. They are system images that run particular bits of software. Now, once we moved everyone to Docker for development, we now had a consistent environment between all of our systems, so that now we didn’t have to worry about a number of different problems.

Another good example is, this site uses PHP 5, this site uses PHP 7—a little out of date now, but it was very relevant two years ago—in which case, how do you make sure you’re on the right version? Well, with Docker, you change a text file, and then you boot the containers up, and that’s it.
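The “text file” here is typically the project’s Docker Compose definition. A hypothetical sketch (the service layout and image tag are illustrative, not Flight Deck’s actual configuration):

```yaml
# docker-compose.yml (illustrative; not Flight Deck's actual file)
version: "3"
services:
  web:
    # Changing this one tag switches the whole team's PHP version
    # the next time the containers are booted up.
    image: php:7.2-apache
    ports:
      - "8080:80"
    volumes:
      - ./:/var/www/html
```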

IVAN: And that text file lives in a code repository, right? So, everybody else gets that change?

TESS: Mm hmm, because you are literally sharing the same environment; you are enforcing a consistent development environment across your entire team for each individual project. And, if you use that strategy, you have something that is flexible, yet at the same time incredibly consistent.

IVAN: And this is really important across all of our developers, and all of our local development that we do, but the challenge then becomes, how do you consistently replicate this in a staging or in a test environment, and even in production? So, that’s kind of the genesis of how we thought Kubernetes could help us here, right?

TESS: Right.

IVAN: So, the challenge to you from me was, how do we make this work in production?

TESS: So, the nice thing about Flight Deck is, it was always designed with the intention of being put into production, but the orchestration component just wasn’t there, and the hosting component wasn’t there. Kubernetes showed up, and that solved the orchestration component, and then, eventually DigitalOcean showed up and now we have the hosting component. So, now, we have all the pieces together to create a consistent environment that is literally the same containers, from the first time someone starts working on the project, to when it gets deployed to production. That is the height of continuous integration ideals, to make sure that you have consistency across all of your environments. That you don’t have different, weird shared environments along the way, that everything is exactly the same so that you know that it will work.

IVAN: I want to stop right there, just so our listeners can appreciate the power of what you just said. You basically said, “I’m going to be working on a website, or a web application locally, with some sort of stack of required server components, whose version numbers and installation profile are configured in a certain way. My teammate is able to replicate that environment exactly, to the version, simply by using the same repo, and by using Flight Deck.

Moreover, all of those version numbers and the stack that is being used, is actually also the same now in staging and, most amazingly to me, in production. So, we can guarantee that what container is functioning in production on the Kubernetes cluster, is actually on staging and on everyone else’s machine. We’ve totally eliminated any variability and any chance that the environment is going to be causing an issue that one person may be seeing that another isn’t.

TESS: That’s correct.

IVAN: That’s pretty amazing!

TESS: It’s a really difficult thing to do, but starting with the containers and building that from the base up actually makes it a lot easier, and I don’t think that any other local development environment, even container-based ones such as DDEV and Lando, is doing this quite yet. Last I heard, DDEV was working on a production version of their containers, but it’s not the same containers, whereas with Flight Deck, it literally is the same container.

IVAN: It’s the same configuration. Everything is the same. That’s pretty amazing. I’m still kind of really impressed with all of the stuff that we’ve done, that you’ve done. And, honestly, this is all open source too. This is not like TEN7’s proprietary product, right? We’ve open sourced this, this is all on the web, you can download it yourself, you can figure it out yourself, you can do this as well. You can start your own hosting company.

TESS: That’s correct. The key item which puts all this together is the Ansible role called Flight Deck Cluster. What Flight Deck Cluster does is create a Flight Deck-flavored Kubernetes cluster, and it works perfectly well on DigitalOcean. There’s no reason why it can’t work on, say, Google Kubernetes Engine or AWS or anywhere else. The architecture that Flight Deck Cluster uses is meant to be simple, durable and transportable, which is something that a lot of other architectures I’ve seen just don’t have.
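As a rough sketch of how such a role might be applied (the role and variable names here are illustrative placeholders; the real ones are documented in the Flight Deck Cluster README on GitHub):

```yaml
# playbook.yml: build a Flight Deck-flavored Kubernetes cluster.
- hosts: localhost
  connection: local
  roles:
    - role: flightdeck_cluster            # placeholder name for the role
      vars:
        cluster_name: example-cluster     # placeholder variables
        provider: digitalocean
```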

IVAN: So, we’ve designed a lightweight set of Docker containers called Flight Deck that you can use locally. We’ve evolved them so that they work with Kubernetes, which you can deploy anywhere in staging and production. We’ve open sourced them. And, the fact that it runs Kubernetes, all you need is a service that supports Kubernetes and you should be able to run all of this in those other locations.

So, we’ve talked about how we started with Docker and how that evolved, and I talked about how we’ve open sourced it and it’s available to you. I want to spend a little bit of time getting into the details, into the nitty gritty of how you would actually do this for yourself. Is there an app I download? Is it all the YAML files that we’ve open sourced? What would someone who wants to try this themselves have to do?

TESS: The first thing that I would probably do is, start running Flight Deck locally. Because you don’t need to pay any extra money for it, you just need to use your local laptop, and it’s also a good experience for you to learn how to interact with Docker by itself. That looks good on a résumé and it’s a good skill to actually have.

I have a talk that I used to give about Docker, and I know that there’s a blog post series that I posted somewhere a long time ago about how Docker actually works under the covers. Both of those are going to be invaluable for understanding how to get Flight Deck working on your local environment. Once you have it working locally, the next problem is to figure out the build chain.

The way that our build chain works is that we have another server, which is a build server. The build server receives a job from GitLab; that job takes all of the files that constitute the site, builds them into a local file system, and then puts those inside of a container which is based on Flight Deck. Then it uploads that to a container registry somewhere else. So there we already have a few additional pieces of technology involved. But the nice thing is, GitLab is open source, Ansible is open source, and all of our build processes run through Ansible, and the Docker registry is also open source. It’s just a container that you can run somewhere. There are also services that will provide you a container registry on a fee basis. All of those are definitely options. Once you have the container in a registry somewhere, then you can run Flight Deck Cluster to build out the rest of the cluster itself.
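The build chain Tess describes could be sketched in a .gitlab-ci.yml along these lines (the registry URL, image name, and build steps are illustrative assumptions, not our actual pipeline):

```yaml
stages:
  - build

build-image:
  stage: build
  script:
    - composer install --no-dev                      # assemble the site's file tree
    - docker build -t registry.example.com/site:$CI_COMMIT_SHORT_SHA .
    - docker push registry.example.com/site:$CI_COMMIT_SHORT_SHA   # upload to the registry
```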

IVAN: You make it sound so easy. [laughing]

TESS: I make it sound easy. It’s a lot of code, but it is all open source and it is all there for you to use. Right now, our cluster is based on a development version of Flight Deck, which I’ve been calling Flight Deck 4, and this version is intentionally natively designed for a Kubernetes environment. But it still works perfectly fine under Docker Compose locally, and it is literally the containers that we are using in production right now, at this minute. All of those containers have been thoroughly documented. They have nice readmes which describe exactly how you configure each individual container. And the Flight Deck Cluster role on GitHub also has an extensive readme document which describes how every individual piece is supposed to work.

IVAN: So, the easiest way to get to all that documentation and the repo is to simply go to flight-deck.me. That will redirect you to a blog post about Flight Deck on the ten7.com website, and at the bottom of that post you’ll see links to the GitHub repos and all of the other information that you’ll need.

So, I wanted to talk about the fact that the hosting itself, the Kubernetes hosting that we have, is optimized for Drupal right now—I kind of struggle to say "optimized for Drupal." It’s just configured for Drupal. There’s no reason that Kubernetes, or what we’ve released, is locked into Drupal. We are hosting our own React app on there. We have a CodeIgniter app that’s running, we even have a Grav CMS site on it. There’s no reason why you couldn’t host WordPress on it, or ExpressionEngine, or any other PHP, MySQL, Apache, Varnish stack on it, right? There’s nothing that innately forces you to be Drupal on this, right?

TESS: Nope.

IVAN: And that’s also from a design perspective. That was always the intention.

TESS: It’s intended to be run for Drupal sites. However, it was designed with an eye towards being as flexible as possible.

IVAN: So, I think that’s an important thing to mention. Let’s talk about some of the challenges of running Kubernetes in a cluster in production. It’s not like running a server with a local file system, is it?

TESS: [laughing] No, it isn’t.

IVAN: [laughing] Okay. Let’s talk about the opportunities of things to learn.

TESS: The biggest, scariest thing about Kubernetes and Drupal is, you have to let go of your local file system. That is the most scary thing that I have to tell people about Kubernetes.

IVAN: So, no file system, huh?

TESS: No file system.

IVAN: Does that make it slow?

TESS: Well, not really. Let me describe why. The problem, and I’ve covered this in my Return of the Clustering talk, is that we’re used to something which is called “block storage.” Now, block storage is pretty great. It is a literal attached disk to the server. So, it is mounted on the server, you have direct access to it, and you can store all kinds of things on it. And it’s fast, and it’s right there. It has no failover, it can’t be shared across systems, but ehhh, whatever, we have one big server, who cares about that.

Then, if you do try building a traditional server cluster, well, you can’t quite do that. So then you get the network file system involved, NFS. And now all of the file reads and writes occur over the network to some other centralized server. Okay, it still looks like local block storage, it still works like block storage, so, okay, sure. But the problem is that network file systems, by their basic nature, introduce a single point of failure.

Now, that’s not good by itself. If the NFS server goes down, your entire site no longer looks or functions correctly. But the problem is that it doesn’t scale, either. There’s a natural limitation on the number of replicas of the frontend servers (the servers that intercept the actual requests from people, send them to the Drupal backend for processing, and then push back the responses) that can access NFS. And as soon as you have too many accesses, suddenly NFS is not going to keep up with you, and your performance drops to the floor.

Also, NFS is kind of persnickety. You have to tune it. You have to make sure that it has enough RAM, enough bandwidth. You have to make sure it’s physically proximate to the rest of the servers. And all of this is because it’s trying to replicate block storage. Now, block storage is great for a whole bunch of data, but from a cloud architect’s perspective, there are really two different kinds of data: complex data and static data.

And when I tell people about this, they go, “Well, what’s a complex file?” A lot of people will say, “Well, we have a whole bunch of files which are all linked together, that’s complex, right?” Nope. “Well, we have some Excel documents on an NFS share, that’s complex, right?” Not really. So, what is a complex file?

I spent hours trying to squeeze an answer [laughing] out of the internet for this, and eventually arrived at the answer from a cloud architect’s perspective: complex files are files such as those which constitute the actual underlying disk storage for, say, a MySQL database. That is data which is written sparsely, and seemingly randomly, in multiple locations at multiple times, with strict concurrency requirements. Now, when I say that, does that sound like anything that we actually upload to a Drupal site?

IVAN: Nope.

TESS: Nope. None of it does. Block storage is required for complex data. But for static data, which is virtually everything that a Drupal site hosts, we don’t need it, it’s too much. It’s way too complicated. And, it doesn’t scale. So, what’s the solution? The solution really is, we need to treat the file system like an API. We need to treat the file system like a database. We don’t care where the database is, as long as you have an IP, a login and the correct credentials to actually get to the database, and then we have multiple readers, multiple writers. That’s what we want for a file system, right? Well, it turns out, there’s a thing that does that already, it’s called S3.

IVAN: Yes, AWS, hello. [laughing]

TESS: And the nice thing about S3 is, it’s perfect for static data. It’s API accessible, and it can be made internally redundant. So, it has its own high availability built in that we don’t need to worry about. Even nicer than that: when we say S3, most people go, “Oh, Amazon.” No. S3 is, in fact, a de facto standard; it is not just Amazon’s implementation. There are multiple implementations of S3, so I usually like saying an S3-compatible hosting provider, and that’s going to include anybody who runs any kind of S3-compatible service. There’s actually an open source product called Ceph that provides an S3 frontend for file storage, and that is a service that DigitalOcean also provides. They have DigitalOcean Spaces, which provides an S3-compatible static file interface that’s actually powered by a Ceph cluster underneath the covers. So, it's open source all the way down to the core.

IVAN: Well, I didn’t know that Spaces was Ceph underneath the covers. That’s cool.

TESS: It’s just buried in there. You could find it though.

IVAN: Cool. So, file storage is a challenge, but we fix that by using S3.

TESS: Yep, because Drupal 7 and 8 actually have very good S3 support. There’s the S3 File System (s3fs) module, which is excellent for Drupal 7 sites. We’ve been using Flysystem for Drupal 8 for a few different reasons that happen to make things a little easier for us, but your mileage may vary.
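For Drupal 8, wiring Flysystem to an S3-compatible endpoint comes down to a settings.php fragment along these lines (the keys, bucket, region, and endpoint are placeholders; check the Flysystem S3 documentation for the exact schema):

```php
<?php
// settings.php (fragment): route Drupal's file storage to an
// S3-compatible service instead of the local file system.
$settings['flysystem'] = [
  's3' => [
    'driver' => 's3',
    'config' => [
      'key'      => getenv('S3_KEY'),        // placeholder credentials
      'secret'   => getenv('S3_SECRET'),
      'region'   => 'us-east-1',
      'bucket'   => 'example-bucket',
      // For DigitalOcean Spaces, point at the Spaces endpoint instead of AWS.
      'endpoint' => 'https://nyc3.digitaloceanspaces.com',
    ],
  ],
];
```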

IVAN: And, if you’re going to host something that’s not Drupal related, you would need to find some other S3-compatible layer module, right?

TESS: Like for the CodeIgniter application, we are currently looking at implementing that as well.

IVAN: And, there’s a React app as well that we’ve deployed. That uses the underlying Drupal site, though, doesn’t it?

TESS: Yes, that doesn’t actually need a local file system.

IVAN: There’s no SSH access to a cluster of Kubernetes, is there?

TESS: Yes, that’s the other thing. It’s like after I already brutalized you with saying, “No, you can’t have a local file system,” now I take your SSH away as well. [laughing]

IVAN: [laughing] But there is something to use to replace it, right?

TESS: There is. The problem is that, you really, really, really, really, really, really, really shouldn’t use SSH in Kubernetes. SSH is a very dangerous thing to have running anywhere, because it is a potential security access point that can be used and abused, both internally and externally. You really don’t want to have to run it, because if you want to run SSH in Kubernetes, you have to run it in a container. And if you run it in a container, you’re running it as root. And if you’re running it as root, you’re running it as root on the underlying hardware that’s powering the cluster, and that’s bad. [laughing] You don’t want to do that.

So, instead you want to access what is typically called “the backplane.” The backplane is access to the workload via the orchestration system. For Kubernetes, backplane access comes in the form of a command line application called kubectl, or “Kube control,” or “Kubey control,” or like 15 other different names. [laughing] I always say “Kubectl”; that’s my favorite.

IVAN: Let's spell it out. [laughing] I like that one too. k-u-b-e-c-t-l

TESS: And this application not only lets you interact with the orchestrator, but also allows you to directly access individual containers. Although getting to an individual container is a little bit more difficult, once you’ve done it a few times, it’s not that hard. Because Kubernetes is so popular, there are a lot of command line environments which have autocompletion assistance for kubectl as well. So, for me, if I enter a parameter to kubectl, say for namespace, I can hit tab and it will give me a list of the namespaces that I have, so I don’t actually have to type it out.
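To make that concrete, here is an illustrative kubectl session (it assumes an existing cluster and kubeconfig; the namespace and pod names are placeholders, not our actual workload):

```shell
kubectl get namespaces                              # list namespaces in the cluster
kubectl get pods -n example-site                    # list pods in one namespace
kubectl logs web-0 -n example-site                  # read a container's logs
kubectl exec -it web-0 -n example-site -- /bin/sh   # open a shell inside a container
```

The `exec` command is the closest equivalent to the SSH access Tess mentions giving up.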

IVAN: Pretty slick.

TESS: I use Z Shell (ZSH) but that’s me, I’m weird. Some people like using Fish or some other shell. And I’m sure there’s auto completion mechanisms for your favorite shell somewhere.

IVAN: There’s not a whole lot of challenges then, with Kubernetes. You’ve kind of mentioned a few that are surmountable. Is there anything else, a budding developer, a budding DevOps person should know about, that are looking to start to explore hosting for themselves?

TESS: Well, they should also keep in mind that email is a problem.

IVAN: Yes! We discovered that in the last few weeks, didn’t we?

TESS: Yes, we did.

IVAN: So, we decided that we were going to use an external, transactional email provider. We ended up on SendGrid. But you don’t think of these things when you’re working on managed servers because, hey, those machines all have Sendmail on them.

TESS: Yup, and that’s one thing that you really can’t rely on when you start working with a container-based workload. It exposes a lot of these things. But, we’re not where we were two or three years ago where this would’ve been a huge, scary, problem. These things have existing solutions, which are not that difficult to implement, even today.

IVAN: And there are some free tiers as well that you can use, especially if you don’t have a high volume of emails that you’re sending out.

TESS: If you’re only sending 500 emails a day, you can configure your G Suite email as the SMTP provider.

IVAN: Exactly. What about cron? Isn’t that a problem too?

TESS: Cron is a little bit different in Kubernetes. So, the thing with cron is that, in Kubernetes, cron isn’t just something that runs a command. In a traditional server workload, cron is some background process that exists in the system, and when a certain time shows up, it runs a certain command that you tell it to. And, it assumes that you’re running it on literally the same exact system that is running everything else, your web workload. Right?

IVAN: Right.

TESS: That’s not quite the case in Kubernetes. In Kubernetes, a cron job actually runs a container. So, when you actually have your web workload, you’re going to have one container, say, for Apache, somewhere, which is running your site. Then you have a cron job in Kubernetes, and that cron job will literally spin up a completely separate container in order to actually run that process.
So, that’s a bit different.

Now, the only real part of that which gets confusing is if you don’t have a nice separation of all of the different infrastructure we just finished talking about. If you don’t have any local disks to worry about, if you don’t have Sendmail to worry about, if you don’t have any of this stuff, and you can scale your web container out to 10 or 20 or more without a problem because they all rely on external API-based providers, then it doesn’t really matter what you do with cron. You just literally run the same container that you run for your web workload, with the same configuration and everything else, but you tell it to run a particular command instead of “run Apache.” And that’s it. That’s what we do. And it’s actually not very hard.
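As a sketch of what Tess describes (the names, image, schedule, and command here are placeholders, not our production configuration), a Kubernetes CronJob that spins up a separate container from the same image as the web workload might look like:

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: drupal-cron
spec:
  schedule: "*/15 * * * *"            # every 15 minutes
  jobTemplate:
    spec:
      template:
        spec:
          containers:
            - name: drupal-cron
              image: registry.example.com/site:latest   # same image as the web workload
              command: ["drush", "cron"]                # run one command instead of Apache
          restartPolicy: Never
```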

IVAN: What’s your favorite thing about Kubernetes? I’m only going to give you five minutes at the most. [laughing]

TESS: [laughing] I think the thing that I like the most about it, is probably the ability to easily scale things. Once you actually have solved all the underlying infrastructure problems, you basically have just a container-based workload that you can say, “I need to run three of these.” Then you can tell it and it will run three of them, and it will just run it, that’s it, you don’t need to worry about it. It already load balances it for you. How can I describe this? Well, let’s go back to the infamous car analogies again.
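The “I need to run three of these” step Tess describes is, in practice, a one-liner against the backplane (the deployment and namespace names here are placeholders):

```shell
kubectl scale deployment/web --replicas=3 -n example-site
kubectl get deployment web -n example-site    # watch it converge on 3/3 ready replicas
```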

IVAN: They work.

TESS: They work, but you know they work within a US cultural context of a certain decade period, of a certain geographic location, but let’s put that aside for a second.

So, a car analogy. Let’s say you have a car, and you want to do some work on it. And you go to your garage and what do you see? The car and an empty garage. That’s often what a lot of other systems look like. When you have to do traditional clustering with regular virtual machines, or even self-hosted physical machines, you have to go over to your local hardware store, buy all the tools, buy the car jack, buy an engine lift, buy an air compressor and a whole bunch of other stuff, in order to do your car stuff, and it’s a lot of work and a lot of investment.

With Kubernetes, it’s more like, okay, I go to my garage and I have Kubernetes, so I have all the tools already. All the tools are just there on the walls, right now. I can just start working. That’s what I really like about Kubernetes. It provides me a room with all the tools I need to make this workload do what I want it to do, rather than having to go and grab yet another thing, then another thing, then another thing, and then try to make compromises to get the two tools I do have to work together in place of the one I can’t get right now.

IVAN: I love the analogy. [laughing] I think that works, Tess. So, what about training? Wouldn’t it be great if, instead of trying to figure this all out yourself (like we did), you could just have us show you how to do it?

TESS: Gee, wouldn’t it? [laughing]

IVAN: Wouldn’t it be great? Well, guess what? That actually exists. We’re going to be doing some free trainings at BadCamp and then at DrupalCorn as well. We’ll be at BadCamp next month, at the beginning of October. Now, they’re free trainings, but there is a registration cost to attend: I think it’s $20 at BadCamp, or $10 at DrupalCorn. They’re free as far as we’re concerned.

Can you talk through, just a little bit about the format of the training that we have set up? What are you going to learn and who is it for?

TESS: So, we’ll briefly touch upon different kinds of Kubernetes hosting providers, as well as what Kubernetes actually is and what it does, and what it gives you. Then afterwards, we’re going to start containerizing your particular application. So, we’ll start working with containers, putting them onto Kubernetes, getting used to how to use Kubectl, how to work with individual definitions within Kubernetes, and making all of these pieces work together.

IVAN: And, it’s a four-hour workshop, it’s half a day, you get to spend time with Tess, and I think I’ll be there too. It’s going to be great. So, if you want to contribute to Flight Deck, or to the Kubernetes Flight Deck Cluster that we have, we’d love it. It’s all online. You can visit ten7.com, where you’ll find it on the What We Give Back page, and you can also visit us on github.com/ten7 and see all the repos there. We’d love your help. Thank you, Tess, so much for spending your time with me today. This has been truly great.

TESS: Not a problem.

IVAN: So, if you need help with your own hosting, or figuring out what makes most sense to you, we’d love to be there to help you, whether you’re a developer or a large university, or a small business, it doesn’t matter. We’re happy to provide consulting, whether that means deploying your own Kubernetes or having us do it for you, or even selecting another vendor that makes the most sense to you.

Just send us an email and get in touch. You can reach us at [email protected]. You’ve been listening to the TEN7 Podcast. Find us online at ten7.com/podcast. And if you have a second, do send us a message. We love hearing from you. Our email address is [email protected]. And don’t forget, we’re also doing a survey of our listeners. So, if you’re able to, tell us about what you are and who you are, please take our survey as well at ten7.com/survey. Until next time, this is Ivan Stegic. Thank you for listening.

Sep 18 2019

TEN7-Podcast-Ep-070-Using-Kubernetes-for-Hosting.mp3

Summary

After months deep in the weeds of Kubernetes, our DevOps Engineer Tess Flynn emerged with the best practices for melding Docker, Flight Deck and Kubernetes to create a powerful open source infrastructure for hosting Drupal sites in production (powered by our partner, DigitalOcean). Ivan and Tess take a deep dive into why we chose this combination of tools, our journey to get here, and the nitty gritty of how everything works together.    

Guest

Tess Flynn, TEN7 DevOps Engineer

Highlights

  • Why offer hosting ourselves now?
  • Differences in hosting providers
  • The beauty of containerization, and the challenge of containerization
  • The best container orchestrator
  • What’s with hosting providers and their opaque pricing? (and why we like DigitalOcean)
  • Kubernetes’ highly dynamic environment: updated with just a code push
  • Flight Deck, the genesis of our journey to Kubernetes
  • Docker enables consistent environments
  • Flight Deck + Kubernetes + DigitalOcean
  • You can do this all yourself! (or we can help you with our training)
  • It all runs on Drupal OR other platforms
  • In order to enjoy Drupal + Kubernetes, you must let go of your local file system and SSH, and reevaluate your email system
  • Complex files vs. static files and S3
  • Kubectl! (it sounds cuter when you say it out loud)
  • Cron jobs run differently in Kubernetes
  • A Tess talk isn’t complete without a car analogy: Kubernetes is like a garage that comes pre-stocked with all the tools you’ll need to work on your car

Links

Transcript

IVAN STEGIC: Hey everyone! You’re listening to the TEN7 podcast, where we get together every fortnight, and sometimes more often, to talk about technology, business and the humans in it. I am your host Ivan Stegic. We’ve talked about DevOps at TEN7 on the show before. We’ve done an episode on why we decided to expand our hosting offering to Linode back at the end of 2017. We’ve talked about why we think it’s important to have a good relationship with your hosting company. And, we’ve written about automation and continuous integration over the years as well.

For the last year or so, we’ve been working on our next generation of hosting service, and our DevOps Engineer, Tess Flynn, has been deep in the weeds with Kubernetes. Today, we’re going to spend some time talking about what we’ve done—and how you could be doing it as well—given that we’ve open sourced all of our work.

We’re also rolling out training at BadCamp this year, that’s in October of 2019, and we’ll be at DrupalCorn as well, in November. So, we’ll talk about that and what you might learn by attending. So, joining me again is our very own Tess Flynn. Hello, socketwench.

TESS FLYNN: Hello.

IVAN: Welcome, welcome. I’m so glad you’re on to talk shop with me. I wanted to start with why. Why are we hosting our own sites and those of our clients? There are so many good options out there for WordPress, for Drupal: you’ve got Acquia and Pantheon, Bluehost, and others. We typically use the provider that makes the most sense, based on our clients’ needs.

We’ve had a close relationship with ipHouse and their managed hosting services for a long time. But why start hosting now? For us, as an organization, it’s kind of been the perfect storm of circumstances, from the technology being mature, to the cost of it, and the availability of it, to where we are as an organization from a developmental point of view, to even being more conscious of vendor lock in and actively trying to avoid it.

So, I want to talk about technology a little bit more with you, Tess. What’s so different now than it was a few years ago? Why is it suddenly okay for us to be hosting ourselves?

TESS: There’s been kind of an explosion over the last few years of managed Kubernetes hosting providers. Now, we’ve had managed hosting providers forever. We’ve had things called Infrastructure as a Service (IaaS) providers; that’s going to be things like AWS and Google Compute Engine, as well as other providers, including DigitalOcean, but also, say, Linode and others, which just provide raw hardware, a virtual machine and a root login. Lately, however, a lot of people would rather break up their workloads into containers, using something like Docker. And I’ve talked about Docker before, but Docker is an alternative take on virtualization technologies, which works by taking applications and putting them in their own individual, virtual environment. I’m glossing over so many things when I say that, but it gets the general point across in the two minutes before everybody else falls asleep.

IVAN: Right.

TESS: What’s really nifty about putting applications into a container is that now the container doesn’t really care where it is. You can run it on your system, you can run it somewhere else, you can run it on a hosting provider. And, the great thing about these containers is that you can download ones that other people have created. You can modify them, make your own, and you can string them together to build an entire application service out of them. And that’s really, really great. That’s like infrastructure Legos.
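As a sketch of that “infrastructure Legos” idea, a docker-compose.yml can string a stock web container and a stock database container together into one small service (the images, port, and password here are illustrative placeholders, not Flight Deck’s actual configuration):

```yaml
version: "3"
services:
  web:
    image: php:7-apache            # off-the-shelf Apache + PHP container
    ports:
      - "8080:80"                  # expose the site on localhost:8080
    depends_on:
      - db
  db:
    image: mariadb:10.3            # off-the-shelf database container
    environment:
      MYSQL_ROOT_PASSWORD: example
```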

But the problem is, once you have the containers, how do you make sure that they’re on the systems, on the actual hardware where they’re supposed to be, in the number of copies that there are supposed to be, and that they can all talk to each other? And that the ones that aren’t supposed to talk to each other, can’t? That’s a lot trickier. For a long time the problem has been that you really only have two solutions: you do it yourself, or you use something like Docker Swarm. I don’t have the greatest opinion of Docker Swarm. I’ve worked with it before in a production environment; it’s not my favorite.

IVAN: It’s a little tough, isn’t it? We’ve had a client experience on that.

TESS: It’s a little tough, yeah. It’s not really set up for something like a Drupal workload. It’s set up more for a stateless application. A prototypical example is, you need to calculate the progression of matter within the known galaxy, factoring a certain cosmological constant. Take that variable, set it into a compute grid and go, “Hey, tell me what the results are in 15 years.” But you don’t really do that with Drupal. With Drupal, you’re not just going to send off one thing and always get the same thing back. There’s going to be state, which is preserved. That’s going to be in the databases somewhere, and there are going to be files that are uploaded somewhere. And then you have to get load balancing involved, and then it gets really complicated, and it’s like ugh. I really didn’t like how Swarm did any of this stuff. It was very prescriptive. It was, you do it their way, and nothing else.

IVAN: No flexibility.

TESS: No flexibility at all. It was really, really not fun, and it meant that we had to do a lot of modification of how Drupal works, and incur several single points of failure in our infrastructure, in order to make it work in its form. That whole experience just did not get me interested or excited to make a broader Swarm deployment anywhere else.

Then I ran across Kubernetes, and Kubernetes has a very different mentality around it. Kubernetes has more different options for configurations, and you can tailor how Kubernetes manages your workload, rather than tailoring your workload to work with Docker Swarm. That’s why I really liked it. What's really nifty is, once you have Kubernetes, now you have an open source project, which is platform agnostic, which doesn’t care about which individual hosting provider you’re on, as long as you have containers, and you can send configuration to it somehow, it’s fine, it doesn’t care.

A lot of managed hosting providers are going, “Hey, you know, VMs [virtual machines] were kind of nifty, but we really want to get in on all this container stuff now, too. Oh, hey, there’s a container orchestrator.” That’s what Kubernetes is, and what Docker Swarm is as well: a container orchestrator, which does all of the making sure the containers are on the right systems, are running, can talk to the containers they’re supposed to, and can’t talk to containers they’re not supposed to.

That made a lot of infrastructure providers go, “This is not really a Platform as a service anymore. This is another form of Infrastructure as a service. As such, that is a segment that we can get into."

So, first it started with Google Kubernetes Engine, which is still considered the de facto version today. Amazon got into it, Azure got into it. And all of these are pretty good, but with a lot of these huge cloud service providers, you can’t get clear pricing out of them to save your life.

IVAN: Yeah. That’s so frustrating, as a client, as a business owner. How do you do that? It’s insane.

TESS: I mean, the only way that seems deterministic to figure out what your bill is going to be at the end of the month is to spend the money and hope that it doesn’t kill your credit card. [laughing]

IVAN: Yeah, right, and then try to figure out what you did, and ways of changing it, and then hell, you’re supposed to be just charged that every month from now on, I suppose.

TESS: It’s just a pain. It wasn’t any fun, whatsoever. So, an alternative approach is, you could actually install Kubernetes yourself on an Infrastructure as a service provider with regular VMs.

IVAN: And, we considered that, right?

TESS: Oh, I considered it, and I even spun that up on a weekend myself. It worked. But the problem is, I’m a colossal cheapskate and I didn’t want to spend $30.00 a month for it. [laughing]

IVAN: [laughing] If only there was a supporting ISP that had free Kubernetes support, and just charged you for the compute engines that you used.

TESS: I was really kind of sad that there wasn’t one, until six or eight months ago, when DigitalOcean announced that they have in beta (now it’s in production) a Kubernetes service where the pricing was incredibly clear. You go to the cluster page and select the servers that you want (the nodes, as it were). I know, Drupal nodes, infrastructure nodes, it’s really confusing. Don’t even get physics people involved, it gets really complicated. [laughing]

IVAN: No, please. No, don’t. [laughing]

TESS: But you select which servers that you want to have in your Kubernetes cluster, the sizing, and the price is just listed, right there, in numbers that you can understand! [laughing]

IVAN: Per month, not per minute.

TESS: I know, per month, not per minute.

IVAN: It’s just the small things. Crazy.

TESS: And, it really targeted the kind of market that we are in for a hosting provider, and it made me really excited, and I really wanted to start putting workloads on it, and that’s what started the entire process.

IVAN: It really was, kind of a fortuitous series of events, and the timing kind of just really worked out. I think one of the biggest things for us, for me, is that with Kubernetes, we don’t have to worry about patching and security updates, and monitoring them, and these large hardware machines that we have to keep patched and updated. Essentially, it’s updated every time we do a code push, right? I mean, we’re still concerned with it, but it’s a much easier burden to bear.

TESS: Right. Now what’s going on is that, every time that we do a push, we’re literally rebuilding every system image necessary to run the underlying application. Which means that if we need to push a system update, it’s really just a matter of updating the underlying container's base image to the newest version. We’re already using Alpine Linux as our base containers, which already is a security-focused minimal container set.
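To make the rebuild idea concrete, here is a hypothetical sketch of a minimal Alpine-based PHP container; the package names and paths are illustrative, not the actual Flight Deck Dockerfile:

```dockerfile
# Hypothetical sketch, not the real Flight Deck container definition.
# Bumping the base image tag and rebuilding picks up security fixes.
FROM alpine:3.10

# Install only what the workload needs; apk keeps the image small.
RUN apk add --no-cache php7 php7-fpm php7-mysqli

# Bake the site files into the image at build time.
COPY ./web /var/www/html

CMD ["php-fpm7", "-F"]
```

Because every push rebuilds from the base image, a system update is just a new `FROM` tag plus a rebuild, rather than patching long-lived servers in place.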

IVAN: So, this is actually a good segue to what I wanted to talk about next: a few years back (as opposed to six to nine months back), and how we got down the road to Kubernetes. I think the origin of all this really is Flight Deck, and the desire for us to make it easy for developers who work at TEN7—and anyone else who uses Flight Deck, honestly—to have the same development environment locally. Basically, we wanted to avoid using MAMP and WAMP and different configurations so that we could eliminate that from any of the bug-squashing endeavors that we were going into. So, let’s talk about how this started with Docker and led into Flight Deck, and what a benefit it is to have the same environment locally as we do in staging and production.

TESS: So, there’s a joking meme that’s been going around in DevOps circles, a clip from a movie where, I think, a father and son are sitting and having a very quiet talk on a bench somewhere in a park, where the kid is saying, “But it works on my machine.” And then the Dad hugs him and says, “Well, then we’ll ship your machine.” [laughing] And, that’s kind of what Docker does. But joking aside, I wanted to get that out of the way so I’m not taking myself too seriously. [laughing]

So, one of the problems with a lot of local development environments—and we still have this problem—is that traditionally we’ve used what I consider a hard-installed hosting product. So, we’re using MAMP or WAMP or Acquia Dev Desktop, or if you’re on Linux you’re just installing Apache directly. And all of those work fine, except when you start working on more than one site and more than one client. So, suddenly you have this one problem where, this one client has this really specific php.ini setting, but this other client can’t have that setting. And MAMP and WAMP work around this through a profile mechanism which, underneath the covers, is a huge amount of symlinking and weird configurations, and spoofing, and like, eww, it makes me shudder.

IVAN: Yeah, it makes me cringe just to talk about it, yeah.

TESS: And, the problem is that, every time you have to do this, every developer has to do it themselves; they can’t just standardize on it. So, if somebody has an individual problem on their system, that only happens on their system at 3:45 on a Thursday, after they’ve had chili for lunch or something or other, then you can’t really reproduce it. So, the solution really is, you need to have replicable, shareable, consistent development environments across your entire team. And that’s what Docker does.

Docker provides that consistency, that shareability, and makes sure that everybody does, in fact, have the same environment across the board. That’s the entire point, and that’s where the whole joke about, “Well, then we’ll ship your machine,” comes from, [laughing] because that is in essence what containers are. They are system images that run particular bits of software. Now, once we moved everyone to Docker for development, we had a consistent environment between all of our systems, so that we didn’t have to worry about a number of different problems.

Another good example is, this site uses PHP 5, this site uses PHP 7—a little out of date now, but it was very relevant two years ago—in which case, how do you make sure you’re on the right version? Well, with Docker, you change a text file, and then you boot the containers up, and that’s it.
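A sketch of what that text file can look like, assuming hypothetical image names (the real Flight Deck images may differ):

```yaml
# Hypothetical docker-compose.yml fragment checked into the project repo.
# Switching PHP versions is just changing the image tag and re-running
# `docker-compose up -d`; every teammate gets the change on pull.
version: "3"
services:
  web:
    image: ten7/flight-deck-web:php7   # e.g. change to :php5 for a legacy site
    ports:
      - "8080:80"
    volumes:
      - ./:/var/www/html
```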

IVAN: And that text file lives in a code repository, right? So, everybody else gets that change?

TESS: Mm hmm, because you are literally sharing the same environment; you are enforcing a consistent development environment across your entire team for each individual project. And, if you use that strategy, you have something that is flexible, yet at the same time incredibly consistent.

IVAN: And this is really important across all of our developers, and all of our local development that we do, but the challenge then becomes, how do you consistently replicate this in a staging or in a test environment, and even in production? So, that’s kind of the genesis of how we thought Kubernetes could help us here, right?

TESS: Right.

IVAN: So, the challenge to you from me was, how do we make this work in production?

TESS: So, the nice thing about Flight Deck is, it was always designed with the intention of being put into production, but the orchestration component just wasn’t there, and the hosting component wasn’t there. Kubernetes showed up, and that solved the orchestration component, and then, eventually, DigitalOcean showed up and now we have the hosting component. So, now, we have all the pieces together to create a consistent environment that is literally the same containers, from the first time someone starts working on the project, to when it gets deployed to production. That is the height of continuous integration ideals: to make sure that you have consistency across all of your environments, that you don’t have different, weird shared environments along the way, that everything is exactly the same so that you know it will work.

IVAN: I want to stop right there, just so our listeners can appreciate the power of what you just said. You basically said, “I’m going to be working on a website, or a web application locally, with some sort of stack of required server components, whose version numbers and installation profile is configured in a certain way. My teammate is able to replicate that environment exactly, to the version, simply by using the same repo, and by using Flight Deck.

Moreover, all of those version numbers and the stack that is being used are actually also the same now in staging and, most amazingly to me, in production. So, we can guarantee that the container running in production on the Kubernetes cluster is exactly what’s on staging and on everyone else’s machine. We’ve totally eliminated any variability and any chance that the environment is going to be causing an issue that one person may be seeing that another isn’t.”

TESS: That’s correct.

IVAN: That’s pretty amazing!

TESS: It’s a really difficult thing to do, but starting with the containers and building that from the base up actually makes it a lot easier, and I don’t think that any other local development environment, even container-based ones such as DDEV and Lando, is doing this quite yet. Last I heard, I think DDEV was working on a production version of their containers, but it’s not the same containers, whereas with Flight Deck, it literally is the same container.

IVAN: It’s the same configuration. Everything is the same. That’s pretty amazing. I’m still kind of really impressed with all of the stuff that we’ve done, that you’ve done. And, honestly, this is all open source too. This is not like TEN7’s proprietary product, right? We’ve open sourced this, this is all on the web, you can download it yourself, you can figure it out yourself, you can do this as well. You can start your own hosting company.

TESS: That’s correct. The key item which puts all this together is the Ansible role called Flight Deck Cluster. What Flight Deck Cluster does is create a Flight Deck-flavored Kubernetes cluster, and it works perfectly well on DigitalOcean. There’s no reason why it can’t work on, say, Google Kubernetes Engine or AWS or anyone else. The architecture that Flight Deck Cluster uses is meant to be simple, durable and transportable, which is something that a lot of other architectures I’ve seen just don’t have.

IVAN: So, we’ve designed a lightweight set of Docker containers called Flight Deck that you can use locally. We’ve evolved them so that they work with Kubernetes, which you can deploy anywhere in staging and production. We’ve open sourced them. And, the fact that it runs Kubernetes, all you need is a service that supports Kubernetes and you should be able to run all of this in those other locations.

So, we’ve talked about how we started with Docker and how that evolved, and I talked about how we've open sourced it and it’s available to you. I want to spend a little bit of time getting into the details, into the nitty gritty of how you would actually do this for yourself. Is there an app I download? Is it all the YAML files that we’ve open sourced? What would someone who wants to try this themselves have to do?

TESS: The first thing that I would probably do is, start running Flight Deck locally. Because you don’t need to pay any extra money for it, you just need to use your local laptop, and it’s also a good experience for you to learn how to interact with Docker by itself. That looks good on a résumé and it’s a good skill to actually have.

I have a talk that I used to give about Docker, and I know that there’s a blog post series that I posted somewhere a long time ago, about how Docker actually works under the covers. Both of those are going to be invaluable to understand how to get Flight Deck working on your local environment. Once you have it working locally, the next problem is to figure out the build chain.

Now, the way that our build chain works is that we have another server, a build server. It receives a job from GitLab, and that job takes all of the files that constitute the site, builds them into a local file system, and then puts those inside of a container which is based on Flight Deck. Then it uploads that to a container registry somewhere else.

So there we already have a few additional pieces of technology involved. But the nice thing is, GitLab is open source, Ansible is open source, and all of our build processes run through Ansible. The Docker registry is also open source; it's just a container that you can run somewhere. There are also services that will provide you a container registry on a fee basis. All of those are definitely options. Once you have the container in a registry somewhere, then you can run Flight Deck Cluster to build out the rest of the cluster itself.

IVAN: You make it sound so easy. [laughing]

TESS: I make it sound easy, but it’s a lot of code, but it is all open source and it is all there for you to use. Right now, our cluster is based on a development version of Flight Deck, which I’ve been calling Flight Deck 4, and this version is intentionally natively designed for a Kubernetes environment. But it still works perfectly fine under Docker Compose locally, and it is literally the containers that we are using in production right now, at this minute. All of those containers have been thoroughly documented. They have nice readmes which describe exactly how you configure each individual container. And the Flight Deck Cluster role on GitHub also has an extensive readme document which describes how every individual piece is supposed to work.

IVAN: So, the easiest way to get to all that documentation into the repo is to simply go to flight-deck.me. That will redirect you to a blog post about Flight Deck on the ten7.com website, and at the bottom of that post you’ll see links to the GitHub repos and all of the other information that you’ll need to get to that.

So, I wanted to talk about the fact that the hosting itself, the Kubernetes hosting that we have, is optimized for Drupal right now—I kind of struggle to say "optimized for Drupal." It’s just configured for Drupal. There’s no reason that Kubernetes, and what we’ve released, is locked into Drupal. We are hosting our own React app on there. We have a CodeIgniter app that’s running, we even have a Grav CMS site on it. There’s no reason why you couldn’t host WordPress on it, or ExpressionEngine, or any other PHP, MySQL, Apache, Varnish stack on it. There’s nothing innately that forces you to be Drupal on this, right?

TESS: Nope.

IVAN: And that’s also from a design perspective. That was always the intention.

TESS: It’s intended to be run for Drupal sites. However, it always keeps an eye towards being as flexible as possible.

IVAN: So, I think that’s an important thing to mention. Let’s talk about some of the challenges of running Kubernetes in a cluster in production. It’s not like running a server with a local file system, is it?

TESS: [laughing] No, it isn’t.

IVAN: [laughing] Okay. Let’s talk about the opportunities of things to learn.

TESS: The biggest, scariest thing about Kubernetes and Drupal is, you have to let go of your local file system. That is the most scary thing that I have to tell people about Kubernetes.

IVAN: So, no file system, huh?

TESS: No file system.

IVAN: Does that make it slow?

TESS: Well, not really. Let me describe why. The problem is, that— and I’ve had this in my Return of the Clustering talk—is that we’re used to something which is called “block storage.” Now, block storage is pretty great. It is a literal attached disk to the server. So, it is mounted on the server, you have direct access to it, and you can store all kinds of things to it. And it’s fast, and it’s right there. It has no failover, it can’t be shared across the systems, but ehhh, whatever, we have one big server, who cares about that.

Then, if you do try building a traditional server cluster, well, you can’t quite do that. So then you get network file system involved, NFS. And then now, all of the file reads and writes occur over the network to some other centralized server. Okay, it still looks like a local block storage, it still works like block storage, so, okay, sure. But the problem with that is that network file systems, by their base nature, introduce a single point of failure.

Now, that’s not good by itself. If the NFS server goes down, your entire site no longer looks or functions correctly. But the problem is that it doesn’t scale either. There’s a natural limitation on how many frontend replicas, the servers that intercept the actual requests from people, send them to the Drupal backend for processing, and then push back the responses, can access NFS at once. As soon as you have too many accesses, suddenly NFS can’t keep up with you and your performance drops to the floor.

Also, NFS is kind of persnickety. You have to tune it. You have to make sure that it has enough RAM, enough bandwidth. You have to make sure it’s physically proximate to the rest of the servers. And, all of this is because it’s trying to replicate block storage. Now, block storage is great for a whole bunch of data, but in a cloud architect's perspective, there are really two different kinds of data. There’s complex data and static data.

And when I tell people about this, they go, “Well, what’s a complex file?” A lot of people will say, “Well, we have a whole bunch of files which are all linked together, that’s complex, right?” Nope. “Well, we have some Excel documents that’s on an NFS file, that’s complex, right?” Not really. So, what is a complex file? 

I spent hours trying to squeeze an answer [laughing] out of the internet for this, and eventually arrived at the answer from a cloud architect's perspective: “complex files, such as the files which constitute the actual underlying disk storage for, say, a MySQL database.” Data which is written sparsely and seemingly randomly, in multiple locations at multiple times, with strict concurrency requirements. Now when I say that, does that sound like anything that we actually upload in a Drupal site?

IVAN: Nope.

TESS: Nope. None of it does. Block storage is required for complex data. But for static data, which is virtually everything that a Drupal site hosts, we don’t need it, it’s too much. It’s way too complicated. And, it doesn’t scale. So, what’s the solution? The solution really is, we need to treat the file system like an API. We need to treat the file system like a database. We don’t care where the database is, as long as you have an IP, a login and the correct credentials to actually get to the database, and then we have multiple readers, multiple writers. That’s what we want for a file system, right? Well, it turns out, there’s a thing that does that already, it’s called S3.

IVAN: Yes, AWS, hello. [laughing]

TESS: And the nice thing about S3 is, it’s perfect for static data. It’s API accessible and it can be made internally redundant. So, it has its own high availability built in that we don’t need to worry about. What’s even nicer is that when we say S3, most people go, “Oh, Amazon.” No. S3 is, in fact, a standard; it is not just Amazon’s implementation. There are multiple implementations of S3. So, I usually like saying an S3-compatible hosting provider, and that’s going to include anybody who runs any kind of S3-compatible service. There’s actually an open source product called Ceph that provides an S3 frontend for file storage. And that is actually a service that DigitalOcean also provides. They have DigitalOcean Spaces, which provide an S3-compatible static file interface that’s actually powered by a Ceph cluster underneath the covers. So, open source all the way down to the core.

IVAN: Well, I didn’t know that spaces was Ceph underneath the covers. That’s cool.

TESS: It’s just buried in there. You could find it though.

IVAN: Cool. So, file storage is a challenge, but we fix that by using S3.

TESS: Yep, because Drupal 7 and 8 actually have very good S3 support. The S3FS module is excellent for Drupal 7 sites. We’ve been using Flysystem for Drupal 8 for a few different reasons that make things a little easier for us. But your mileage may vary.

IVAN: And, if you’re going to host something that’s not Drupal related, you would need to find some other S3-compatible layer module, right?

TESS: Like for the CodeIgniter application, we are currently looking at implementing that as well.

IVAN: And, there’s a React app as well that we’ve deployed. That uses the underlying Drupal site, though, doesn’t it?

TESS: Yes, that doesn’t actually need a local file system.

IVAN: There’s no SSH access to a cluster of Kubernetes, is there?

TESS: Yes, that’s the other thing. It’s like after I already brutalized you with saying, “No, you can’t have a local file system,” now I take your SSH away as well. [laughing]

IVAN: [laughing] But there is something to use to replace it, right?

TESS: There is. The problem is that, you really, really, really, really, really, really, really shouldn’t use SSH in Kubernetes. SSH is a very dangerous thing to have running anywhere, because it is a potential security access point that can be used and abused, both internally and externally. You really don’t want to have to run it, because if you want to run SSH in Kubernetes, you have to run it in a container. And if you run it in a container, you’re running it as root. And if you’re running it as root, you’re running it as root on the underlying hardware that’s powering the cluster, and that’s bad. [laughing] You don’t want to do that.

So, instead you want to access what is typically called “the backplane.” The backplane is going to be access to the workload via the orchestration system. So, for Kubernetes, the backplane access comes in the form of a command line application called kubectl, pronounced “kube control” or “kubey control” or “kube C-T-L” or like 15 other different ways. [laughing] I always say kubectl; that’s my favorite.

IVAN: Let's spell it out. [laughing] I like that one too. k-u-b-e-c-t-l

TESS: And this application not only lets you interact with the orchestrator, but also allows you to directly access individual containers as well. Although getting to an individual container is a little bit more difficult, once you’ve done it a few times, it’s not that hard. Because Kubernetes is so popular, there are a lot of command line environments with autocompletion assistance for kubectl as well. So, for me, if I enter in a parameter to kubectl, say a namespace, I can hit tab and it will give me a list of the namespaces that I have. So I don’t actually have to type it out.

IVAN: Pretty slick.

TESS: I use Z Shell (ZSH) but that’s me, I’m weird. Some people like using Fish or some other shell. And I’m sure there’s auto completion mechanisms for your favorite shell somewhere.

IVAN: There’s not a whole lot of challenges then, with Kubernetes. You’ve kind of mentioned a few that are surmountable. Is there anything else, a budding developer, a budding DevOps person should know about, that are looking to start to explore hosting for themselves?

TESS: Well, they should also keep in mind that email is a problem.

IVAN: Yes! We discovered that in the last few weeks, didn’t we?

TESS: Yes, we did.

IVAN: So, we decided that we were going to use an external, transactional email provider. We ended up on SendGrid. But you don’t think of these things when you’re used to working on managed servers because, hey, those machines all have Sendmail on them.

TESS: Yup, and that’s one thing that you really can’t rely on when you start working with a container-based workload. It exposes a lot of these things. But, we’re not where we were two or three years ago where this would’ve been a huge, scary, problem. These things have existing solutions, which are not that difficult to implement, even today.

IVAN: And there are some free tiers as well that you can use, especially if you don’t have a high volume of emails that you’re sending out.

TESS: If you’re only sending 500 emails a day, you can configure your G Suite email as the SMTP provider.

IVAN: Exactly. What about cron? Isn’t that a problem too?

TESS: Cron is a little bit different in Kubernetes. So, the thing with cron is that, in Kubernetes, cron isn’t just something that runs a command. In a traditional server workload, cron is some background process that exists in the system, and when a certain time shows up, it runs a certain command that you tell it to. And, it assumes that you’re running it on literally the same exact system that is running everything else, your web workload. Right?

IVAN: Right.

TESS: That’s not quite the case in Kubernetes. In Kubernetes, a cron job actually runs a container. So, when you actually have your web workload, you’re going to have one container, say, for Apache, somewhere, which is running your site. Then you have a cron job in Kubernetes, and that cron job will literally spin up a completely separate container in order to actually run that process.
So, that’s a bit different.

Now, the only part of that which gets really confusing is if you don’t have a nice separation of all the different infrastructure we just finished talking about. If you don’t have any local disks to worry about, if you don’t have Sendmail to worry about, if you don’t have any of this stuff, then you can scale out your web container to 10 or 20 or more and not have a problem, because they all rely on external, API-based providers. And then it doesn’t really matter what you do with cron. You just literally run the same container that you run for your web workload, with the same configuration and everything else, but you tell it to run a particular command instead of "Run Apache." And that’s it. That’s what we do. And, it’s actually not very hard.

IVAN: What’s your favorite thing about Kubernetes? I’m only going to give you five minutes at the most. [laughing]

TESS: [laughing] I think the thing that I like the most about it is probably the ability to easily scale things. Once you’ve solved all the underlying infrastructure problems, you basically have just a container-based workload where you can say, “I need to run three of these,” and it will run three of them, that’s it, you don’t need to worry about it. It already load balances them for you. How can I describe this? Well, let’s go back to the infamous car analogies again.

IVAN: They work.

TESS: They work, but you know they work within a US cultural context of a certain decade period, of a certain geographic location, but let’s put that aside for a second.

So, a car analogy. Let’s say you have a car, and you want to do some work on it. And you go to your garage and what do you see? The car and an empty garage. That’s often what a lot of other systems look like. When you have to do traditional clustering with regular virtual machines, or even self-hosted physical machines, you have to go over to your local hardware store, buy all the tools, buy the car jack, buy an engine lift, buy an air compressor and a whole bunch of other stuff, in order to do your car stuff, and it’s a lot of work and a lot of investment.

With Kubernetes, it’s more like, okay, I go to my garage and I have Kubernetes, so I have all the tools already. All the tools are just there on the walls, right now. I can just start working. That’s what I really like about Kubernetes. It provides me a room with all the tools for me to actually make this workload do what I want it to do, rather than having to go and grab yet another thing, then another thing, then another thing, and then make compromises to get the two tools I do have, rather than the one I actually need, to work together.

IVAN: I love the analogy. [laughing] I think that works, Tess. So, what about training? Wouldn’t it be great if, instead of trying to figure this all out yourself (like we did), you could just have us show you how to do it?

TESS: Gee, wouldn’t it? [laughing]

IVAN: Wouldn’t it be great? Well, guess what? That actually exists. We’re going to be doing some free trainings at BadCamp and then at DrupalCorn as well. We’ll be at BadCamp next month, at the beginning of October. Now, they’re free trainings as far as we’re concerned, but there is a registration fee to attend: I think it’s $20 at BadCamp and $10 at DrupalCorn.

Can you talk through, just a little bit about the format of the training that we have set up? What are you going to learn and who is it for?

TESS: So, we’ll briefly touch upon different kinds of Kubernetes hosting providers, as well as what Kubernetes actually is and what it does, and what it gives you. Then afterwards, we’re going to start containerizing your particular application. So, we’ll start working with containers, putting them onto Kubernetes, getting used to how to use Kubectl, how to work with individual definitions within Kubernetes, and making all of these pieces work together.

IVAN: And, it’s a four-hour workshop, it’s half a day, you get to spend time with Tess, and I think I’ll be there too. It’s going to be great. So, if you want to contribute to Flight Deck, or to the Kubernetes Flight Deck Cluster that we have, we’d love it. It’s all online. You can visit ten7.com, and you’ll find it there on the “What We Give Back” page. You can also visit us on github.com/ten7, and you’ll see all the repos there. We’d love your help. Thank you, Tess, so much for spending your time with me today. This has been truly great.

TESS: Not a problem.

IVAN: So, if you need help with your own hosting, or figuring out what makes most sense to you, we’d love to be there to help you, whether you’re a developer or a large university, or a small business, it doesn’t matter. We’re happy to provide consulting, whether that means deploying your own Kubernetes or having us do it for you, or even selecting another vendor that makes the most sense to you.

Just send us an email and get in touch. You can reach us at [email protected]. You’ve been listening to the TEN7 Podcast. Find us online at ten7.com/podcast. And if you have a second, do send us a message. We love hearing from you. Our email address is [email protected]. And don’t forget, we’re also doing a survey of our listeners. So, if you’re able to, tell us about what you are and who you are, please take our survey as well at ten7.com/survey. Until next time, this is Ivan Stegic. Thank you for listening.

Sep 18 2019

So, you’re thinking about starting an international web portal? Or maybe you have a website that is targeted for more than one language? You might have a B2B application and want to expand your market to other parts of the world? Then you should read on ...

Most of you probably think that in order to launch your product, site or service world-wide, all you need is to translate it. Guess again. That’s not enough. 

The world is made up of many different countries and, by extension, many different cultures. Each culture has its own “niche” habits, behaviors and even perspectives on things. The same sentence might appeal to one person while offending another.

Even the structure of the content can lead to bad conversion rates if it’s not tailored to the target audience. This is where localization comes into play. 

As the name implies, localization means to make something feel local. Something that connects with the audience you are targeting. This means that you need to get your hands dirty and do the research. 

For example, if you want to expand your product to China, make sure to study its culture and habits. How do most Chinese sites structure their content? What are the best practices for user experience? How does the navigation look? How big are the images? How do they read the text? Those are just a few questions that you need to answer. 

After you have most (if not all) of the answers, you need to start implementing the solutions. This means that you often need to drastically change the layout and the content of the site. Even changing an image on a blog post can have a positive effect on its performance. 

A great example of good localization is the MSN website. The screenshots below demonstrate the English and the Chinese website. Notice the difference?

English version of the MSN website

 

Chinese version of the MSN website

If you take the time to visit both msn.com and msn.cn you will see quite a difference in both the layout and the content itself. Comparing the two, we can see that the regular website favors imagery over text, while the opposite applies to the Chinese website. And this is only the homepage we’re talking about!

Another good example is Starbucks' website. Below you can see the comparison of Starbucks.com and the Japanese version. 

English version of the Starbucks website

 

Japanese version of the Starbucks website

As you can see again, the pages are vastly different. The Japanese website is packed with a lot more information compared to the regular website. Again, the difference in the balance between imagery and text is clearly visible. 

Localization by itself is a huge topic and we won’t cover all of its aspects in this post, but I want to briefly talk about one website feature that doesn’t need localization, because it’s considered a best practice in every culture: good website performance. 

Many of you might live in a part of the world where you get quite a decent internet connection. I like to think of internet speed like water. There are places in the world where there are large bodies of water with fast streams, but there are also places where water is scarce. The same applies to internet speed. 

This means that we need to make sure that our websites run as fast and are as optimized as they possibly can be. Not everyone can afford the luxury of fast internet access and if the page loads slowly you’re likely to lose a potential new client or user. Humans are not patient beings that are willing to wait for your page to load. 

Images are one of the biggest factors affecting the performance of a website. There are a lot of handy ways to optimize images in order to achieve faster-loading websites. If your site is built on the Drupal CMS, however, you don’t even need to do any extra coding - all the image optimization features are available right in core.

If you want to learn about more ways of improving the performance of your Drupal website, Ana has you covered with her tips to speed up your Drupal site.

This brings this post to a close, but just to recap: 

  • Translations are not enough.
  • Make sure to study your target audience and their habits.
  • Customize the structure and the content of the website.
  • Make sure to optimize your website for slow internet connections. 
  • Don’t be afraid to drastically customize the layout of the website.
  • Small changes can go a long way.
Sep 18 2019
Sep 18

These are interesting times for securing tech talent.

With the current unemployment rate at 3.7 percent, the job market is highly competitive, and that’s particularly true for the technology sector. 

Reaching out to agencies that can offer the right talent when and where it is needed is proving to be the solution among savvy organizations. 

Key among the advantages that the right agency relationship offers: the opportunity to leverage specific expertise on an ongoing, ad-hoc, or one-off basis.

In our rapidly changing market, here are six reasons why joining forces with an agency might be a better idea than hiring or relying on in-house talent:

  1. Hiring processes are costlier and more complicated than ever before. 
  2. Employees are an investment and represent overhead in benefits, office space, and training.
  3. Ensuring that an in-house staff consists of employees who have the depth and breadth of skills required to meet current and future needs may not be feasible for many organizations. 
  4. Top talent knows their worth. Along with their increasingly high salary demands, they tend to continue looking for new and better opportunities.
  5. If the hiring decision doesn’t work out, there are costs associated with parting ways along with the risk of legal liabilities.
  6. The market is moving forward at a rapid pace with constantly emerging needs for new types of specialization. The expectation that a few great hires or one exceptional, in-house team, can anticipate and proactively take on every new challenge impedes both agility and opportunity. 

 

Agility Rules

How to respond to these challenges in an environment where not keeping pace with what’s new and next is not an option? Strategic agency relationships. 

Reaching out to firms with targeted expertise that specialize in focused training, rapid-fire development, exceptional design, astute marketing, or incisive consultation on a depth and breadth of topics is proving to be the optimal path forward.  

While there is sometimes a tendency to view relationships with contractors as “all business” and lacking the connections that easily develop among in-house teams, I’ve often experienced a high degree of commitment and connectedness within agency and client relationships that are built upon a personal stake in clients’ success. Bridging this divide calls for an intentional focus, and can be facilitated by a workshop that’s designed to provide a deep understanding of a client's business and knowledge transfer to the agency partner.

There is no question that relationships optimize outcomes. Trust, genuine commitment, and true connections serve to drive game-changing synergies. In many respects, I’ve found that the quality of the relationships can make the difference between a transactional, task-focused approach, and a strategic, long-term vision.

And who can argue that work is considerably more fulfilling and fun when we’re connected with each other both professionally and personally?

Looking to join forces with an agency that offers industry-leading expertise, and launch a relationship that can ignite new possibilities in web development, human-centered design, strategic planning, and accessibility remediation? Contact us today.

Sep 18 2019
Sep 18

With every new release, Drupal is emerging as a valuable asset in the enterprise marketing stack. The additions to Drupal core, especially with Drupal 8 and later, have made it a digital platform that comes equipped for all the standard marketing best practices right out of the gate. In addition, the larger Acquia ecosystem is also delivering new solutions that empower Drupal to be more than just a CMS. These bring in some powerful martech capabilities that have made Drupal into a platform that’s ready to deliver the results that enterprise marketing teams want.

This post delves into the key modules and solutions that enable smart content management in Drupal, both in terms of creating and publishing content, as well as leveraging that content in diverse ways to drive results.

Smart Content

Smart Content is a Drupal toolset that can help deliver anonymous website personalization in real time, for Drupal 8 sites. Essentially, site admins get the ability to display different content to site visitors, based on whether they are authenticated or anonymous users.

Some examples of how you can leverage it include:

  • Displaying a smart block showcasing your latest offer or most popular blog to a first time visitor to the site
  • Displaying a smart block that showcases different industry specific case studies for different users in your database
  • Displaying a smart block only for mobile viewers of your site, maybe asking them to view it on your mobile app

Now this module in itself has limited functionality, but it becomes very useful when used in combination with two other Drupal modules:

Smart Content Blocks

Included within the Smart Content module, these allow you to insert a Smart Block on any page and set up conditions that govern the content being displayed within the block. These conditions can be used to hide or show specific content in certain cases, and form the basic personalization rules for your Drupal site. The blocks have an easy interface within the Drupal 8 backend, giving you the flexibility to add any number of blocks, anywhere on a page. 

It's important to note that all of your content, irrespective of format, is available to show and promote through Smart Content Blocks. Ebooks, videos, images, blogs, service pages—anything that’s already in the Drupal CMS can be delivered to a block. 

Smart Content Segments

A segment is a complete set of conditions grouped together into a reusable composite condition. For example, a set of the following three conditions:

  • showcase only on desktop
  • showcase if location is France
  • showcase for anonymous users

will create a smart content segment that can be applied to any smart content block to ensure that it's displayed to anonymous users from France, viewing the site on a desktop. This feature saves you time, as you don't have to set up the same set of individual conditions every time.
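Conceptually, a segment is just a logical AND over its conditions. The sketch below illustrates that idea in plain Python; the function and field names are hypothetical and are not the Smart Content module's actual API (the real conditions are configured through the Drupal UI):

```python
# Hypothetical sketch: a segment is a reusable group of conditions,
# and it matches only when every condition matches (logical AND).

def is_desktop(visitor):
    return visitor.get("device") == "desktop"

def is_from_france(visitor):
    return visitor.get("country") == "FR"

def is_anonymous(visitor):
    return not visitor.get("authenticated", False)

# The composite condition: desktop AND France AND anonymous.
france_desktop_anonymous = [is_desktop, is_from_france, is_anonymous]

def segment_matches(segment, visitor):
    """Return True only if the visitor satisfies every condition."""
    return all(condition(visitor) for condition in segment)

visitor = {"device": "desktop", "country": "FR", "authenticated": False}
print(segment_matches(france_desktop_anonymous, visitor))  # True
```

Because the segment is just a named list of conditions, the same group can be reused across any number of blocks without re-entering each rule.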

At the heart of Smart Content are the conditions, editable rules that allow you to govern the display of content. The interface is easy to manage, and familiar to marketers working on a Drupal site. 


You have your choice of the basic conditions for personalization like the browser, language, device type etc. You also have the more advanced options like targeting different industries based on third party IP lookups, or tapping into existing segmentations or campaigns from a marketing automation system. Essentially, anything that has an API with available data can be used as conditions to help drive your personalization strategy with Smart Content.

Layout Builder

The Layout Builder module, experimental in Drupal 8.5 and 8.6, had a stable release with Drupal 8.7. This module allows content authors to easily build and change page layouts and configure the presentation of individual content, content types, media and nodes. It also allows you to add user data, views, fields and menus. 

This is a huge asset for enterprise marketing and digital experience teams because:

  • The module gives a drag-and-drop interface to create custom layouts for specific website sections and pages, with the ability to override templates for individual landing pages when required
  • Content authors can seamlessly embed video across the site to create a more interactive user experience, and increase engagement and conversions
  • Marketers can now build and preview new pages at their own pace, without the fear of negatively affecting the existing user experience.

All of this means that marketing teams now have more control over the site, and can make changes and additions independently. This also reduces the turnaround time for new campaigns by reducing, or even eliminating, dependencies on development teams. Think high-impact landing pages designed exactly as you want, but without the waiting around or constant back-and-forth with developers.

Media Library

With the release of Drupal 8.7, the CMS now has a stable media library module.

It provides a visually appealing interface for browsing through all the media items in your site. With the new version, multimedia properties can be added to content either by selecting from existing media or by uploading new media through bulk upload support. Once uploaded, users can remove or reorder any images ready for import. 

It provides an easy way to upload several media assets to your Drupal website quickly, and lets you add alt text and check the images before uploading.

Powered by Views, it allows site builders to customize the display, sorting, and filtering options.

Acquia Lightning

As enterprise marketing teams launch large scale campaigns, they often need to put together new microsites that work flawlessly. And they usually have to do it at short notice, to leverage critical marketing opportunities in time. 

Having to depend upon the development teams to create one from scratch, and the constant coordination required to make that happen, can lead to the marketing team losing precious time. 

Acquia Lightning, an open source Drupal 8 distribution, is the perfect solution for this challenge. Lightning gives you a basic ready-to-launch site with pre-selected modules and configurations that can cut development time by 30%. This allows:

  • Development teams to publish optimized Drupal 8 sites in short time frames
  • Editorial teams to easily work with layout, media and content on these sites, and have them campaign-ready in no time

Some of the key features in Lightning that are particularly great for marketers are:

Moderation Dashboard

This dashboard gives you complete visibility into your Drupal content status, with a structured overview of where every piece of content is in the editorial process. Besides tracking content status, you can also manage access controls determining who can access which pieces of content at the backend.


The key pieces of information you can view on the dashboard are:

  • Current drafts in progress
  • Content you created
  • Content needing review
  • Recent site activity
  • Individual editor activity in the last 30 days

Moderation Sidebar


The moderation sidebar allows you to stay on the website frontend as much as possible while making edits and managing the editorial process for any piece of content. Actions like editing text and layout, publishing a piece, creating a new draft and more can be easily achieved with the sidebar. And it's quickly accessible by clicking "New Tasks" on any piece of content. For marketers not really keen on getting into the backend, this sidebar is a simple way to make the edits they need, with minimal chance of error. 

Scheduled Publishing

As the name suggests, this feature in Acquia Lightning allows you to set a piece to publish at a future date. This functionality gives you a better view of when content is set to launch, and also ensures that it launches at optimal times, according to reader preferences. And this happens without you having to be on the job at odd hours, just waiting around to publish content.


You can schedule publish times on individual pieces by editing the 'Current Status' field to select “Schedule a Status Change”. Then choose “Published” and select your preferred publishing date and time.
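Under the hood, scheduled publishing boils down to periodically comparing each piece's scheduled time against the current time and flipping its status when the time has arrived. Here is a simplified Python sketch of that idea; the field names and function are hypothetical and do not reflect Lightning's actual implementation:

```python
from datetime import datetime, timezone

# Hypothetical sketch of a scheduled-publishing pass (e.g. run on cron):
# any scheduled piece whose publish time has arrived becomes "published".

def publish_due_content(pieces, now=None):
    now = now or datetime.now(timezone.utc)
    for piece in pieces:
        scheduled = piece.get("scheduled_at")
        if piece["status"] == "scheduled" and scheduled and scheduled <= now:
            piece["status"] = "published"
    return pieces

pieces = [
    {"title": "Launch post", "status": "scheduled",
     "scheduled_at": datetime(2019, 9, 1, tzinfo=timezone.utc)},
    {"title": "Holiday post", "status": "scheduled",
     "scheduled_at": datetime(2030, 1, 1, tzinfo=timezone.utc)},
]
publish_due_content(pieces, now=datetime(2019, 10, 1, tzinfo=timezone.utc))
print([p["status"] for p in pieces])  # ['published', 'scheduled']
```

The editor only records the desired status change and timestamp; the periodic pass does the rest, which is why no one needs to be online at publish time.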

Acquia Lift

We cannot talk of smart content management with Drupal without talking about Acquia Lift. For enterprise sites built on Drupal, there’s nothing more suitable for personalization than Acquia Lift.

Acquia Lift is a solution designed to bring in-context, personalized experiences to life. It’s a powerful blend of data collection, content distribution, and personalization that enables enterprise marketing teams to closely tailor the user experience on the site. And all this without excessive dependence on development or IT teams.

Acquia Lift gives enterprises three key elements to drive their personalization and reflect it with their website content:

Profile Manager

This helps build a holistic 360 degree profile of your users, right from when they are anonymous visitors on the site, up until the stage where they are repeat customers. It collects user demographic data, historical behaviour data, and real-time interactions so you can get a complete understanding of who your users are, what they want, and then work on how best to deliver that.

Content Hub

The Content Hub is a cloud-based, secure content syndication, discovery and distribution tool. Any piece of content created within the enterprise can be aggregated and stored here, ready to be pushed out to any channel, in any format. 

Faceted search and automatic updates give visibility into the entire gamut of content being created within the enterprise - in different departments, across websites, and on different platforms.

Experience Builder

This is the heart of Acquia Lift - the element that allows you to actually build out a personalized experience from scratch. The Experience Builder is a completely drag-and-drop tool that lets you customize segments of your website to showcase different content to different target segments, based on data pulled from the Profile Manager.

Enterprise marketing teams can 

  • set up rules that define what content should be shown to which segment of site visitors
  • perform A/B tests to accurately determine what type of content drives more conversions for which user segments. 

All this can be done with simple overlays atop the existing website segments, without impacting the base site, and without depending on IT teams for implementation.

With a commitment to creating ambitious digital experiences, every new Drupal release has brought in new features to add to the marketing ecosystem. While the overarching focus is on being flexible and scalable, these solutions are creating real impact on customer experience, conversions, online sales and brand proliferation.

And for enterprise teams contemplating shifting to Drupal from diverse proprietary CMSes, the payoff from empowered marketing teams alone makes it worth the effort.

While most of the features mentioned here can be accessed by your teams easily if they are already using Drupal, some require guidance. Acquia Lightning and Acquia Lift in particular will need skilled teams to set them up for you, before marketers can start reaping the benefits. 

If you are looking to deploy Lift or Lightning, just drop us a line and our Drupal experts will be in touch.

Sep 17 2019
Sep 17

DrupalCon Amsterdam is approaching — next month! Are you still deciding about going? With the robust program, each day will be full of exciting sessions, social activities and contribution sprints galore! Check out the breadth and scope of our offerings, with insight you won’t want to miss.

The event kicks off with Trainings on the morning of Monday, October 28. Choose from these seven options to further your learning:
