Aug 17 2018

Modern digital services need to integrate and interact with each other to provide a seamless user experience and data integrity in every context — in the browser, in a native app or wherever. That means designing APIs to connect the systems in a fast, secure and standardised way. Here are some healthy practices for designing APIs for digital services.

In my previous blog post on managing microsites with Drupal 8 I promised to write something further and fuller about designing web APIs. This is less directly about Drupal 8, but I will comment on how to implement the recommendations here in Drupal 8.

These are the things that I take time to think about when building a web API.

Design the thing

As a developer, it’s all too easy, and too tempting, to just jump right into coding something. It’s certainly a weakness I suffer from and that I have to cope with.

Before putting the proverbial pen to paper, though, it’s really important to understand why we’re building an API in the first place. What are the problems we’re trying to solve? What do the users need or want?

With regard to building an API, that means thinking about the consumers of the data provided by your API. If you’re building a decoupled CMS, the main user is the frontend system. In other circumstances it may also mean other websites, embedded widgets, apps on mobile devices, and so on. Whatever it is, due consideration needs to be given to the needs of those consumers.

That means understanding your user’s needs, examining the patterns of behaviour of those users, and ultimately translating those into a design.

Sound like familiar language? Yes, that’s the language of visual designers and user experience specialists. In my book, I’d suggest that means you would do well to work closely with specialist design colleagues when designing and building an API.

Your web API needs to be designed: needs; behaviours; analysis; patterns; traits; design; feedback; improve.

Be an artisan with your API

Take time. Research. Think. Plan. Design.

Beware, Drupal

When you’re working with Drupal, it is all too easy to jump over the design step. Drupal does so much out of the box that it’s tempting to start coding without thinking properly about what we’re coding.

The availability bias that comes with being a specialist Drupal developer, with Drupal as the go-to toolkit, is that we think about solutions to problems (if we’ve even got as far as articulating the problems) in a Drupally way. For instance, since Drupal has a menu system it’s easy to think about navigation in a decoupled CMS in terms of the way Drupal handles menus, which stops you from considering other ways of handling navigation.

The same is true of Drupal 8’s support for REST. Drupal 8 core includes REST resources for most entities in a Drupal installation. That’s very useful. But it can also make you lazy: just using these core RESTful API endpoints for nodes or comments or whatever, with all the guff they include, without even thinking about whether they’re appropriate, or whether their content and format actually suit your consumers.

That goes for REST exports from Views, too. They can be useful, giving you a quick way of creating a RESTful API endpoint. The problem is, though, that it also confines you to the way Views works and what it can produce. You may find that a problem if, for instance, you want to let consumers optionally request additional objects embedded in the response (see below).

Resist the temptation! Instead, take the time to think from the other end first.

I’ll return to the question of designing your API below, but first we need to talk about documentation, since designing and documenting your API can be part of the same process.

Documentation

Yeah, I know. Most devs find this just the dullest thing in the world to write. With a web API, though, it’s incredibly important. If you want people to actually be able to use your API, they need to know how to work with it. It’s horrible trying to work with an undocumented or under-documented API.

So, what should go into the documentation for a web API? Here are some pointers.

The basics:

API reference

Yeah, this is probably what everyone thinks of when they think of documentation for a web API, but it is in fact only part of the documentation—maybe the most important part, but only part.

There are plenty of good blog posts and descriptions of what your API reference should include, so there’s no need for me to reiterate that here.

The most important thing to say, though, is that, beyond identifying resource paths, actions and parameters, your reference should describe in full both what the request should look like and what the response will contain.

Mock server

It is incredibly helpful to include a mock server with your API documentation. Preferably, your mock server will handle the documented requests and responses of each resource.

This will help those building apps and tools that will consume your API to get up-and-running quickly.

For gold stars and a round of applause:

Tutorials, guides, cookbooks

If your API gets to be any substantial scale then the developers who use your API will find it incredibly useful to have some tutorials and guides included in your documentation.

These should cover common tasks, or how to work with specific sections of your API. A guide to ‘best practices’ with your API may be appropriate to help people make the most out of your API.

Check out the guides in MailChimp’s API documentation for a good example. Twitter’s API docs ‘best practice’ section are great as well.

Quick start

One invaluable guide is the ‘getting started’ or ‘quick start’ guide. This can often be just a single page, with a succinct summary of the things you need to do to get going.

The YouTube API ‘getting started’ page is a useful example.

Useful tools

There are lots of useful tools out there to help you get started when you document your API design. Here are some suggestions.

API Blueprint is an open-source high-level API design language that is very useful for writing your documentation. The language is similar to Markdown, so it’s easy to work with. There are a number of SaaS tools offering services based on API Blueprint. One that I really like is Apiary.io (though they’ve recently been bought by Oracle, so who knows where that’ll take them), but there are others, like Gelato.
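For a flavour of the syntax, here’s a minimal, hypothetical API Blueprint description of a single resource (the paths and fields are invented for illustration):

FORMAT: 1A

# Articles API

## Article [/articles/{id}]

### Retrieve an article [GET]

+ Parameters
    + id: 5 (number) - The article ID.

+ Response 200 (application/json)

        {
          "id": "5",
          "title": "Example article"
        }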

You might also consider Read the Docs and daux.io amongst others. There’s also the Open API Initiative, which is ‘focused on creating, evolving and promoting a vendor neutral API Description Format,’ though the initiative is ‘based on the Swagger Specification.’ Open API is an initiative of Swagger.io, and they have a list of tools and frameworks using the specification. The OpenAPI specification is on GitHub.

Whatever you use, your documentation should (probably) end up in a public location so that other developers can use it. (An exception might be for an API used in a secure decoupled system.)

Keep it simple

So, let’s return more directly to the question of designing your web API.

An important rule of thumb for me is to ‘keep it simple, stupid.’ There is no need to include anything more in the resources of your API than is necessary.

I say this as a long-time Drupal developer, knowing full well that we have a superpower in overcomplicating things, all those extra divs and classes all over the markup, all those huge arrays.

This is still true in the core REST resources of Drupal 8. For example, when GETting the core Content resource for node 10 (/node/10?_format=json), the response gives us …

{
  "nid": [
    {
      "value": "10"
    }
  ],
  "uuid": [
    {
      "value": "6bfe02da-b1d7-4f9b-a77a-c346b23fd0b3"
    }
  ],
  "vid": [
    {
      "value": "11"
    }
  ],
  …
}

Each of those fields is an array that contains an array that contains the name:value pair as its only entry. Whew! Exhausting. An array within an array, each with only a single entry? Really? Maybe we could render that a little more simply as …

{
  "nid": "10",
  "uuid": "6bfe02da-b1d7-4f9b-a77a-c346b23fd0b3",
  "vid": "11",
  …
}

… which might help our API’s consuming applications to parse and use the JSON data more easily. Like I said above, I’d suggest that just using the core entity REST resources isn’t often the place to start.

The simplicity mantra should pervade your API design. Include only the data that is needed for the consuming apps. Pare it down, so it’s as easy to read as possible.

As a result, when you come to build that API in your Drupal 8 backend, it will demand good discipline of you: putting into the API resource responses not what’s easiest, but what’s best.
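In Drupal 8 terms, that discipline often means writing a small custom REST resource plugin rather than leaning on the core ones. As a rough sketch (the my_api module name, the URI path and the field selection are all just illustrative), a pared-down article resource might look something like:

<?php

namespace Drupal\my_api\Plugin\rest\resource;

use Drupal\node\Entity\Node;
use Drupal\rest\Plugin\ResourceBase;
use Drupal\rest\ResourceResponse;
use Symfony\Component\HttpKernel\Exception\NotFoundHttpException;

/**
 * Provides a pared-down representation of articles.
 *
 * @RestResource(
 *   id = "my_api_article",
 *   label = @Translation("Article"),
 *   uri_paths = {
 *     "canonical" = "/api/v1/articles/{id}"
 *   }
 * )
 */
class ArticleResource extends ResourceBase {

  /**
   * Responds to GET requests with only what consumers actually need.
   */
  public function get($id) {
    $node = Node::load($id);
    if (!$node) {
      throw new NotFoundHttpException();
    }
    // Flat name:value pairs, rather than the core arrays-in-arrays.
    return new ResourceResponse([
      'id' => $node->id(),
      'uuid' => $node->uuid(),
      'title' => $node->label(),
      'created' => $node->getCreatedTime(),
    ]);
  }

}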

What’s in a name?

This is true in particular when it comes to your naming conventions and API resource paths.

Don’t just add root-level endpoints ad infinitum. Use well-structured paths for your resources, where the depth of the path elements makes sense together. The result should be that your resources are explorable via a browser address bar. E.g.

GET /articles/5/comments/19

… makes intuitive sense as a path: get comment 19 on article 5.

On the other hand, don’t just add depth to your resource paths unnecessarily. Separating things out with some logic will help make things intelligible for developers using your API. E.g.

GET /articles/comments

Umm? What’s that? The comments on articles — why would I want that? However …

GET /comments?contenttypes=articles

… is more obvious — a path to get comments, with a content types filter. Obvious. It also suggests we might be able to filter content types with a comma-separated list of types—nice!

Find a straightforward naming convention. Make the names of resource endpoints and data fields obvious and sensible at first glance.

Overall, make the way you name things simple, intuitive and consistent. If the title field of a data object in your resources is called ‘title’ in one place, ‘name’ in others and ‘label’ in still others, for instance, then it adds unnecessary complexity for writing reusable code.

Easy peasy, lemon squeezy

When designing your web API, it needs to be simple to use and work with. Help users to get just what they want from your API.

Support limiting response fields

You’ll make developers smile if you provide a way of limiting the fields that are returned in a response. You don’t always want to get everything from a resource. Being able to choose exactly what you want can help speed up usage of an API.

For example, consider supporting a fields parameter that could be used like this:

GET /articles/5?fields=id,title,created
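Server-side, a hedged sketch of honouring that parameter inside a custom resource’s get() method (building on the sketch above; names are illustrative, and real code should validate the input):

// Build the full representation first.
$all = [
  'id' => $node->id(),
  'title' => $node->label(),
  'created' => $node->getCreatedTime(),
];
// If a fields parameter was supplied, return only those keys.
$fields = \Drupal::request()->query->get('fields');
$data = $fields
  ? array_intersect_key($all, array_flip(explode(',', $fields)))
  : $all;
return new ResourceResponse($data);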

Support auto-loading related resources

The opposite might also be important: being able to load extra resources in the same request. If a request can combine related resources then fewer requests need to be made, which again will help speed up using an API.

Supporting an embed query parameter could give you this. For example:

GET /articles/5?embed=author.name,author.picture,author.created

… would enable users to also load the article author’s name, their picture and the date their account was created. Note the dot syntax, which might be useful.
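One hedged sketch of expanding that dot syntax server-side, in the same get() method before the response is returned, whitelisting only the embeds you support (all names illustrative):

$embeds = explode(',', \Drupal::request()->query->get('embed', ''));
if (in_array('author.name', $embeds)) {
  // getOwner() returns the node's author entity.
  $data['author']['name'] = $node->getOwner()->getDisplayName();
}
if (in_array('author.created', $embeds)) {
  $data['author']['created'] = $node->getOwner()->get('created')->value;
}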

Flexible formats

Another way of making it easy for users is to support flexibility in the format of the data in the response. JSON is usually what people want to handle, but some do still prefer to use XML.

There’s also the problem that JSON has no support for hyperlinks, the building blocks of the web, which is a curiosity as the W3C admit. There are JSON protocol variants that attempt to address this, like HAL and JSON-LD, but I refer you to a fuller discussion of JSON and hypermedia and some useful resources on hypermedia and APIs from Javier Cervantes at this point.

Keep it steady, Eddy

When designing your API, you should expect it to have a certain lifetime. In fact, it’s bound to last long enough to need changing and improving. But what do you do about rolling out those changes?

Your devs will need the flexibility to change things, especially if they find bugs, and they’ll get frustrated if they can’t adapt the API to make improvements.

Your users need reliability and stability, though, and they’ll get frustrated if the API keeps changing and their consumer app dies without warning.

So, from the start, include versioning.

A pretty sensible approach is to use a path element to specify the version number. E.g.

GET /api/v1/articles/5

You could use a query parameter instead, of course, though since query parameters are optional that would mean that requests without the version parameter get the latest version. Consumers who’d inadvertently omitted the version from their requests would then be vulnerable to changes breaking their app, which might result in some flame-filled support emails.

Support that thing

Make sure there’s a way for your users to let you know when they have problems, find a bug, or whatever.

If it’s an internal API, as with a decoupled CMS and frontend, then that is probably your bug tracker.

If it’s a public API, then you’ll need some public way for people to contact you. If you host your repository on e.g. GitHub then there’s support for issues baked in.

Respond.

Giant lists of bugs that never get addressed are soul-crushing.

Some other things to consider

Authentication and security

You’ll probably want to add some authentication to your API. You shouldn’t rely on cookies or sessions, as your API should be stateless. Instead, over SSL (you’re using SSL, right? yes, you’re using SSL), you can implement a token-based authentication approach.
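For illustration, a token-based request typically carries the token in a header, something like this (placeholder token, hypothetical domain):

GET /api/v1/articles/5 HTTP/1.1
Host: api.example.com
Authorization: Bearer <access-token>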

However, where a token approach is inappropriate, OAuth 2 (with SSL) is probably the best way to go. Here’s some further discussion on API security and authentication, if you’d like to read in more depth.

Caching

HTTP has a caching mechanism built in — woot! Just add some response headers and do some validation on request headers and it’s there.

I’ll point you elsewhere to read more about the two key approaches, ETag and Last-Modified.
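As a flavour of the ETag approach, here’s a minimal sketch using the Symfony HttpFoundation classes Drupal 8 is built on ($json standing in for your serialized response body):

$response = new \Symfony\Component\HttpFoundation\Response($json, 200, [
  'Content-Type' => 'application/json',
]);
$response->setEtag(md5($json));
$response->setPublic();
// If the request’s If-None-Match header matches the ETag, this
// converts the response into an empty 304 Not Modified.
$response->isNotModified(\Drupal::request());
return $response;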

Use HTTP status codes

HTTP defines lots of meaningful status codes that can be returned in your API responses. By using them appropriately, your API consumers can respond accordingly.
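To give a flavour of the mapping I mean, a conventional (not exhaustive) selection:

  • 200 OK: the request succeeded (e.g. a successful GET)
  • 201 Created: a POST successfully created a new resource
  • 204 No Content: success with nothing to return (e.g. after a DELETE)
  • 400 Bad Request: the request was malformed
  • 401 Unauthorized / 403 Forbidden: authentication or permission failed
  • 404 Not Found: no resource at that path
  • 500 Internal Server Error: something broke at your end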

Useful errors

If a request has an error, don’t just return an error code. Your API should provide a useful error message in a format with which the consumer can work. You should use fields in your error message in the same way that a valid response does.
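For example, an error body might look something like this; the shape is illustrative, not a formal standard:

{
  "error": {
    "code": 400,
    "message": "Unknown field 'titel' in the fields parameter.",
    "documentation": "https://example.com/api/docs/errors"
  }
}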

Healthy API design

In summary, when building an API it’s not healthy to just jump in and start writing the code for the API from a specification. Neither is it healthy to just rely on the default resources of CMS tools like Drupal 8. APIs always need to be tailor-made for the task.

APIs need to be designed.

If you can make your web API simple to understand and adopt, easy to work with, incorporating plenty of flexibility, if it’s stable and reliable and well-supported, then you’re well on your way to being the proud owner of a healthy API.

Aug 17 2018

There are lots of situations in which you need to run a series of microsites for your business or organisation — running a marketing campaign; launching a new product or service; promoting an event; and so on. When you’re working with Drupal, though, what options do you have for running your microsites? In this article I review and evaluate the options in Drupal 8, make a recommendation and build a proof of concept.

So, I want to run some microsites …

A client brought me an interesting problem recently, something they need to solve for their production Drupal site. They are an international humanitarian agency who, alongside their main production website, want to run some microsites for a number of their public campaigns. Although they could run them on the main site, they’ve found too many limitations in trying to do that. Campaign teams, frustrated with the lack of flexibility and slow protocols for getting changes made to support their bespoke needs, have often gone off with their small budget and dynamic team to create something quick that fits their campaign with Squarespace or WordPress or something.

That made the campaigners really happy. But when the campaign or event lapsed, the campaign site quickly got out of date and went unloved; the campaign team moved on, no-one could remember how to log into the system, and the site became abandoned.

Hearing this story was so familiar — the same thing often happened when I was a senior developer at Oxfam International.

So, they said, could something be done about it? What, if anything, could be done with Drupal to help campaigners get their microsites running? What would give them a fast, bespoke solution to their microsite problems, whilst still keeping all the content well-managed and shareable with the main site or other microsites?

I scratched my chin and had a think.

How about Drupal multisites?

Since some of its earliest versions, Drupal has included a feature for multi-sites — running several sites from a single codebase installation, sharing the core system, contributed and custom modules and themes. Each multisite has its own database, its own settings and configuration, its own content, and so on. Ideally, it also means updates can be done once.
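For illustration, a multisite codebase might be laid out something like this, with each site directory (named for its domain, or mapped in sites/sites.php) holding its own settings.php and database credentials; the domains here are hypothetical:

sites/
  default/
    settings.php
  campaign-one.example.com/
    settings.php
  campaign-two.example.com/
    settings.php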

So, multisites could be an option. Many people find them to be a real workhorse for their context, and often they are right on the money.

Why use multisites

The Drupal.org documentation for multisites includes a simple rule-of-thumb for when to multisite:

As a general rule on whether to use multisite installs or not you can say:

- If the sites are similar in functionality (use same modules or use the same drupal distribution) do it.

- If the functionality is different don’t use multisite.

(DrupalCon Austin [June 2014] held an interesting debate on Drupal multi-sites, its pros and cons, gotchas and suggestions, which is available on YouTube.)

There are several compelling reasons to use them.

First, having a single codebase to maintain is a huge plus. Forked codebases can soon become orphaned, and unloved codebases become fraught with problems too quickly.

Second, multisites often mean there is also a single hosting platform to maintain, which is also a major advantage.

That can often mean, thirdly, that multisite installations can make better use of resources, both the server resources and financial, personnel or other physical resources. For example, since multi-sites share the same core and modules, that code need only go into the opcode cache once, saving server resources.

Caveat: is the end of multisites support on the horizon?

It should be noted that a proposal has been made to deprecate support for multisites in Drupal, with a view to removing it in the future.

The basic argument for this is that it’s an ‘old skool’ way of thinking about handling multiple sites. Git and Composer create practices and codebase structures that point in other directions.

The modern approach to multi-site is: git — Same code, different sites. Under your control. And well-maintainable.

There are a number of positive reactions to that proposal, which are variations on a theme:

+1. Multisite is a historical oddity at this point and I’d never tell anyone to use it.

But there are many more negative reactions, which largely go along these sorts of lines:

-1. Multisite has been a workhorse for a ton of Drupal sites and is well established in our code.

In that light, Drupal’s multi-site feature is likely to stay around for a while.

Classic problems with Drupal multisites …

It’s not all a bed of roses, though. There are some classic sticking points when working with Drupal multisites.

First off, handling traffic. One site’s traffic spike can be another site’s nightmare when the hosting resources are all hogged by The New York Times tweeting a link to a page on a site of yours; one site’s ‘BEST DAY EVA!’ can be the worst of times for all the rest.

The load on your database server may also be an issue. Multisites often use a single database server, and heavy load or slow queries in one DB can impact the performance of others. This might even be caused in the normal running of your Drupal sites, such as when running cron.

Running updates often causes headaches. When you update code, you’re updating all your sites at once. That means the updates are deployed, well, instantly across all your sites, but if they need update processes to run, such as updating the database, that can throw unexpected problems or errors.

And the worst of the worst: a small piece of poorly written, inadequately reviewed or tested code mysteriously jumps itself onto production — that never happens, right? No one ever lets that happen, do they? *ahem* — and takes down all your sites at once! It’s just an urban myth, a story to scare the children with at night, right? Never happens.

… and how to mitigate them

There are of course a number of ways to foresee these things happening and be ready for them.

On the performance questions, with smaller demands you can just ride it out — sites on the same hosting platform are fairly tolerant of resources being shared around, and the spare capacity is there for times just like these.

For larger performance demands, handling the pressure is a challenge in any hosting set-up, dedicated hosting just as much as shared. With modern cloud infrastructure, the option of scaling up your infrastructure or spinning up a new cluster when you’re experiencing ongoing heavy demand is much easier than in the past, especially if you plan for it as a possibility.

The next set of mitigations are all about best practice.

For starters, test, test, test. Don’t let any code onto production that hasn’t been tested thoroughly.

Have a solid release process that you always follow. If possible, include dev, staging and quality assurance stages. This should give you lots of points to catch things before they’re released onto your production sites.

Automate all the things. There are lots of ways of automating things to ensure they run consistently and quickly too, from shell scripts up to continuous integration tools. Use them.

And finally, be intelligent. With code changes that need database updates, for example, design your code so that it can be deployed to handle an interval before the database is updated. Or, with important but more volatile updates, be smart about choosing the time of day and week that you deploy it. Don’t ever push something out at 5pm on a Friday afternoon if you want to stay friends with your colleagues, your customers and your family.

So, should you use multisites for your microsites?

Well, yes, in short, kinda. You could run microsites using Drupal’s multi-site feature. Things would work fine, though of course you’d have all the problems described above and would have to take the mitigating actions.

However, it wouldn’t solve all the needs described above without some smart thinking. Plus, I’d suggest that you would also have some other problems to solve.

First, multisites all use different databases (sharing databases and tables is possible with Drupal multisites, but really inadvisable!) so the need for a single place to manage all the site content wouldn’t really be satisfied. The way around that would involve using web services, pushing and pulling content from one site to another.

Neither would we have a unified search. There are fairly straightforward ways around that, using a tool like Apache Solr. The sites would need to share an index, with each document in the index including a site field, and there’s a contrib module that does that already (although no Drupal 8 version yet).

Lastly, and maybe more pertinently, you would still have all the ‘Drupalisms’ to live with. First of those is the visual design layer, the public user’s interface for the sites, what gets called the ‘theme layer’ in Drupal lingo. Many designers really dislike Drupal’s theme layer, and would really prefer to work with the pure frontend tools they use in other contexts. Drupal 8 has made major strides forward with the theme layer so it’s not as tough for designers as it once was, it’s true, but many (most?) frontend specialists would still rather not work with it.

Some influential Drupal figures consider multisites ‘not enterprise grade’, and opinions like that are worth weighing if your situation is enterprise scale.

Other approaches with Drupal

There are a few other ways of supporting microsites with Drupal that might be worth considering.

Domain Access

The Domain Access project was created to support just this kind of functionality. The project overview says as much:

The Domain Access project is a suite of modules that provide tools for running a group of affiliated sites from one Drupal installation and a single shared database. The module allows you to share users, content, and configurations across a group of sites.

This might work. However, there are many of the same problems with core multisites described above with this approach, with one additional one: everything in one database.

Our experience of using it, and this is echoed by others too, is that with a small number of very similar sites Domain Access can work well. With a larger number of fairly different sites, it’s a right pain and actually makes things quite difficult, requiring lots of complicated custom code.

Organic Groups

The Organic Groups suite of modules could be a solution for building microsites. The project allows users to create a ‘group’ within a Drupal site. The group can have its own users, admins, content, menus, even its own visual design. However, it would need every microsite to sit internally, within the main site, so it does not solve the need to support external sites on their own domains. So, not really the perfect fit.

Best practice: with Git

I quoted above from @sun in the discussion on deprecating multisite support about the modern best practice:

The modern approach to multi-site is: git — Same code, different sites. Under your control. And well-maintainable.

This is certainly my standard recommendation and will give you many advantages: independence of sites for performance, design, etc; single codebase to maintain (though you’ll have a challenge developing and maintaining the variations you’ll want or need for each microsite); better control over updates; and so on.

You might even look at writing an install profile to make a full distribution, though with Drupal 8 there is less need to do this. With Drupal 8, I’d advocate using Drupal Composer to build your site and simply exporting your full site config into your repo (being careful to keep any sensitive settings out of the repo with your .gitignore file).
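For instance, a minimal sketch of that workflow, using the drupal-composer project template and Drush (the my-site directory name is arbitrary):

$ composer create-project drupal-composer/drupal-project:8.x-dev my-site --stability dev --no-interaction
$ cd my-site
$ drush cex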

Or you might also consider using Aegir to manage your multiple sites — use Drupal to deploy Drupal, if that’s not too much Inception.

Microsites and Drupal

So if multisites could work but would be a bit of a pain, the other Drupal approaches are even less appealing, and you’d rather not keep multiplying Drupal installations, how else could we do microsites with Drupal?

Well, there are two major moves in modern web development that might help here: RESTful web services, and decoupled CMS architectures (a.k.a. ‘headless’ CMS). My proposal for managing microsites in Drupal 8 depends on both these ideas:

  • Treat your Drupal site as a pure content management system (CMS) — a content hub that allows authors, editors and administrators to create, update and manage the content for which they’re responsible, but doesn’t have any meaningful frontend presentation layer to it.
  • Present the data of the content in the hub CMS via a RESTful API.
  • Implement a separate frontend for the visual presentation layer that communicates with the content hub CMS via the API.

There need be no limit to the number of frontends that use the CMS’s API (though practically you may limit access with firewalls, CORS or some other means) so you could power a primary public site, other sub-sites, native mobile apps or even another CMS or two, each potentially with their own visual design. The limit is your own imagination and situation.

RESTful web services and Drupal 8

A new addition to Drupal 8 is the RESTful Web Services API. REST resources can be exposed to allow other things to talk to/consume/feed a Drupal site. Many core entities have REST resources, but it is also fairly easy to build custom REST resources. (There are a number of interesting web services contrib projects worth considering, such as the GraphQL project, which presents a GraphQL schema, and the RELAXed Web Services project, which extends the core REST resources.)

Design your own web services API

The freedom to build custom REST resources in Drupal 8 allows a lot of freedom in designing a RESTful API.

In a forthcoming blog post I’ll write more about designing an API. For now, all I need to say is you need to actually design your API. Don’t simply use the out-of-the-box Drupal core REST resources — think about the RESTful API that would best serve the frontend you want to have.

My heartfelt recommendation is you do this, designing your API, using the skills of those who’re best at designing things — your designers. They understand best what your users want to do on your sites, will be able to describe what data they want for the frontend (content with/without markup, etc.) and help you design the API that is most appropriate to your needs.

There are some API design golden rules and best practices that you should consider. Also I’d recommend using an API design tool like Apiary.io or Swagger.io. They’re invaluable for many reasons, not least of which is the lovely documentation they generate and mock data servers they include that can help frontend devs get going quickly.

Decoupled frontend

With the content hub now presenting the managed content as RESTful data, we just need a standalone frontend system to present your website to your users: one for your primary site, and one for each of your microsites. Your frontend specialists can then work with the right tools for the task.

There are several advantages to consciously uncoupling the content management and the frontend.

Freedom: frontend specialists are free to implement the user experience with native tools built for the job.

Performance: everything in this architecture can be streamlined. The CMS simply presents the content data. The frontend focuses on the display logic.

Experience: the website can respond to users in real time, communicating back and forth with the CMS to give real-time interactions in the browser.

Future proof: it becomes much easier to replace any part of the system as you require, such as redesigning the website without re-building the CMS.

Microsites in Drupal 8

So, how might we do this practically in Drupal 8? Here’s how I tackled it.

First, I thought about designing a quick prototype API that could be used to describe microsites and their content. I used Apiary.io to design it, and you can view the API at docs.campaignmicrosites.apiary.io.

The final part is the standalone frontend tool. I used React to build my frontend app, but there are obviously plenty of other options depending on what the frontend needs to do. React worked for me because I just wanted the view layer, but Angular or Ember could be more appropriate if the frontend needed to be a more substantial app. You’d need to evaluate the frontend options carefully.

I’m not a frontend specialist, so my prototyping code is pretty fugly. Despite that, we’re able to serve two microsites simultaneously on different URLs, with a different theme, just by switching the campaign ID in the API request.

Bingo!

Deploying to production

There’s a few things I might do to deploy this to a production system.

Secure serving

As a good internet citizen, I’d want to put everything on SSL.

Frontend deployment

To deploy the frontend, I’d be looking at options to run the apps on a NodeJS server so that most of the scripts can run server side.

I’d probably want to put an Nginx instance in front of it, for SSL termination, caching static assets and reverse proxy.
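A sketch of what that Nginx server block might look like (the domain, certificate paths and app port are all hypothetical):

# SSL termination, static asset caching, and a reverse proxy
# through to the NodeJS app listening on port 3000.
server {
  listen 443 ssl;
  server_name campaign.example.com;

  ssl_certificate     /etc/ssl/certs/campaign.crt;
  ssl_certificate_key /etc/ssl/private/campaign.key;

  # Let browsers cache static assets for 30 days.
  location /static/ {
    expires 30d;
  }

  # Everything else goes to the NodeJS server.
  location / {
    proxy_pass http://127.0.0.1:3000;
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto https;
  }
}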

Use Drupal multisites ;-P

I think there is actually a neat way of using Drupal’s multi-sites feature here: use a different domain for the RESTful API. For example:

Editorial interface: hub.yourdomain.com
API interface: api.yourdomain.com

Both of these point to your Drupal codebase but you can then handle requests differently on each domain. For example, you might add an authentication provider that checks the domain to give you some access control, so there’s no access to the editorial interface on the API subdomain, and none to the API on the editorial domain.
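A minimal sketch of that mapping in sites/sites.php, with both (hypothetical) domains resolving to the same site directory so they share one codebase and database:

// sites/sites.php
$sites['hub.yourdomain.com'] = 'yourdomain';
$sites['api.yourdomain.com'] = 'yourdomain';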

Caching etc.

This would then allow you to do some smart things with caches and other parts of your hosting stack, offloading much of the pressure on the codebase to the caching architecture and removing the actions of editorial staff from affecting the RESTful API’s performance.

Databases

It might also be possible to configure GET requests to only use a replica database, which could be useful for performance — though it may be more hassle than it’s worth. POST, PUT, PATCH and DELETE requests would still need to go to the master.
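In Drupal 8 the read-only copy is called a ‘replica’ connection, declared in settings.php alongside the default target; a hypothetical example (credentials invented):

$databases['default']['replica'][] = [
  'database' => 'drupal',
  'username' => 'readonly',
  'password' => 'secret',
  'host' => 'replica-1.example.com',
  'driver' => 'mysql',
];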

In summary

This prototype worked really well for me and I was very happy with the results, which gave me something very interesting to discuss with the client.

The advances made in Drupal 8 to operate with current standard web practices are good news for developers and for web projects big and small. For this prototype, the particular improvements with providing RESTful resources means that I was able to create a decoupled Drupal system to support a main website and unlimited microsites in an amazingly short space of time.

… and something to take away

If you’re interested in following up this thought experiment with my Drupal 8 prototype, I’ve put the code into a repo in GitHub:

Just …

$ git clone git@github.com:ConvivioTeam/Convivio-ContentHub.git {some_empty_directory}
$ cd {some_empty_directory}
$ composer install

… and you’re away.

(My React code is shamefully dirty, so I’m not prepared to share that at the moment. ;-) I may tidy it up in the future and share it here.)

Aug 01 2018

This is part two of a two-part series.

In part one, we cogently convinced you that regardless of what your organization does and what functionality your website has, it is in your best interest to serve your website securely over HTTPS exclusively. Here we provide some guidance as to how to make the move.

How to transition to HTTPS

To fully transition to HTTPS from HTTP means to serve your website HTML and assets exclusively over HTTPS. This requires a few basic things:

  • A digital certificate from a certificate authority (CA)
  • Proper installation of the certificate on your website’s server
  • Ensuring all assets served from your website are served over HTTPS

Let’s break these down.

Acquire a digital certificate

As briefly discussed in part one, to implement HTTPS for your website you must procure a digital certificate from a certificate authority. Just as domain name registrars lease domain names, CAs lease digital certificates for a set time period. Each certificate has a public and a private component. The public component is freely shared and allows browsers to recognize that a “trusted” CA was the one to distribute the certificate; it is also used to encrypt data transmitted from the browser. The private component is shared only with the purchaser of the certificate, and can uniquely decrypt data encrypted by the public key. CAs use various methods, through email or DNS, to “prove” that the person who purchased the certificate is a rightful administrator of the domain for which they purchased it. Once you’ve received the private key associated with the certificate, you’ll need to install it on your server. Annual certificate costs can be as much as $1,000 or as little as nothing. More on that in a moment.
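For instance, with a manually purchased certificate, the dance typically begins with generating a private key and a certificate signing request (CSR) to hand to the CA. With OpenSSL, that might look like this (filenames illustrative):

$ openssl req -new -newkey rsa:2048 -nodes -keyout example.com.key -out example.com.csr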

Install the certificate


Installing an HTTPS certificate manually on a server is not a trivial engineering task. We explain this at a high-level in part one. It requires expertise from someone who is experienced and comfortable administering servers. There are many different installation methods unique to each permutation of server software and hosting platform, so I won’t expend any real estate here attempting to lay it out. If you have to do a manual installation, it’s best to search your hosting provider’s documentation. However, depending on the complexity of your website architecture, there are ways to ease this process. Some hosting platforms have tools that substantially simplify the installation process. More on that in a moment as well.

Serve all resources over HTTPS: avoid mixed content

Once you’ve installed your certificate, it’s time to ensure all assets served from your pages are served over HTTPS. Naturally, this entire process should be completed in a staging environment before making the switch to your production environment. Completing a full transition to HTTPS requires attention to detail and diligence. “Mixed content”, or serving assets from a page over both HTTP and HTTPS, can be both tedious and insidious to rectify. The longer your site has been around, the more content there is and the more historic hands have been in the pot of content creation, the more work there will be to make the switch. Depending on your platform (CMS or otherwise) and how it was designed, there may be many avenues for different stakeholders to have included assets within a page over time. Developers, site admins, and content editors usually have the ability to add assets to a page. If any assets start with http://, they’ll need to be updated to https:// to prevent mixed content warnings.

We have recently helped a client who has been publishing articles at a high cadence for over 10 years with many different stakeholders over that period. Practices weren’t consistent and uncovering all the ways in which HTTP resources were served from the page was a substantial undertaking. Therefore, be prepared for a time investment here – there may be many areas to audit to ensure all assets from all your pages are being served over HTTPS. Some common ways mixed content HTTP assets are served from a site whose HTML is served over HTTPS are:

  • Hard-coding a resource: e.g. http://www.example.com/img/insecure-image.jpg
  • Using a 3rd-party library or ad network reference: http://www.example.com/js/analytics.js
    • This is common for libraries that haven’t been updated in a while. Almost all of them now serve the same assets over the same path securely with HTTPS.

Even practices that were previously encouraged by Paul Irish, a leading web architect for the Google Chrome team (notably protocol-relative URLs like //example.com/script.js), may have contributed to your mixed content problem, so don’t feel bad. Just know, there will likely be work to be done.
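The fix itself is usually mundane. For the hard-coded image example above, it’s simply a matter of changing the scheme (URLs illustrative):

<!-- Mixed content: an image hard-coded over HTTP -->
<img src="http://www.example.com/img/insecure-image.jpg">

<!-- Fixed: the same asset requested over HTTPS -->
<img src="https://www.example.com/img/insecure-image.jpg">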

The risk of mixed content

These “not secure” bits of mixed-content expose the same risk that your HTML does when served over HTTP, so browsers rightfully show the user that the experience on your site is “not secure”.

Mixed content is categorized in two ways: active and passive. A passive mixed content asset, such as an image, doesn’t interact with the page but is merely presented on it. An active mixed content asset, such as a JavaScript file or stylesheet, exists to manipulate the page. Passive mixed content, albeit categorically less severe than active, still gives an attacker opportunities to learn a lot about an individual’s browsing patterns and even to trick them into taking actions they didn’t intend. Therefore passive mixed content still constitutes enough of a threat for the browser to issue a warning display.

[Image: mixed content browser errors]

In the case of a compromised active asset, say a JavaScript file, an attacker can take full control of the page and any interaction you have with it, like entering passwords or credit card information. The mechanism behind this is a somewhat sophisticated man-in-the-middle attack, but suffice it to say, if the browser recognizes the vulnerability, the best scenario is the poor user experience we discussed in part one; the worst is total data compromise. Your audience, and by association your organization, will be seeing red.

[Image: Chrome warning for misconfigured HTTPS]

The good news about moving to HTTPS

Ensuring your website serves assets exclusively over HTTPS is not as hard as it used to be, and is getting easier by the day.

There are free digital certificates

There’s no such thing as a free lunch, but a free certificate from a reputable CA? It would seem so. People are just giving them away these days… Seriously: many years in the making since its founding by two Mozilla employees in 2012, the Let’s Encrypt project has vowed to make the web a secure space and has successfully endeavored to become a trusted CA that literally does not charge for the certificates it provides. Its certificates have shorter lease cycles of 60 to 90 days, but it also offers tooling to automate the process of reinstalling newly issued certificates.
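Certbot, the EFF’s Let’s Encrypt client, handles both issuing and renewing; on an Nginx server, for example, the whole exchange can be as simple as this (domain illustrative):

$ sudo certbot --nginx -d www.example.com
$ sudo certbot renew --dry-run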

There are easier and cheaper ways to install certificates

With the advent of the aforementioned free certificate, many platform-as-a-service (PaaS) hosting options have incorporated low cost or free installation of certificates through their hosting platform sometimes as easily as a few clicks. Let’s Encrypt has been adopted across a broad range of website hosting providers like Squarespace, GitHub Pages, Dreamhost, all of which we use alongside many others.

For many of our Drupal partners, we prefer to use a platform as a service (PaaS) hosting option like Pantheon, Acquia, or Platform.sh. Both Pantheon and Platform.sh now provide a free HTTPS upgrade for all hosting plans; Acquia Cloud, another popular Drupal PaaS, is still a bit behind in this regard. We have found that the efficiency gains of spending less time in server administration translates to more value to our clients, empowering additional effort for the strategy, design, and development for which they hired us. In addition to efficiency, the reliability and consistency provided by finely tuned PaaS offerings are, in most cases, superior to manual installation.

A good example of the evolution of hosting platforms maturing into the HTTPS everywhere world is our own Jekyll-based site, which we’ve written about and presented on before. We first set up HTTPS over GitHub pages using CloudFlare guided by this tutorial since we found it necessary to serve our site over HTTPS. However, about a year later GitHub announced they would provide HTTPS support for GitHub pages.

Similarly, we had previously implemented Pantheon’s workaround to make HTTPS on all of their tiers accessible to our clients on their hosting platform. Then they announced HTTPS for all sites. We’re thankful both have gotten easier.

There are tools to help with the transition to HTTPS

Through its powerful Lighthouse suite, Google has a tool to help audit and fix mixed content issues. Given the aforementioned tedium and potential difficulty of tracking down all the ways in which people have historically added content to your site, this can be an invaluable time saver.

You can also use tools like Qualys SSL Labs to verify the quality of your HTTPS installation. See how our site stacks up.

Wrap-up

Given the much greater ease at which many modern hosting platforms allow for HTTPS, the biggest barrier, primarily a matter of effort, is to clean up your content and make sure all assets are served over HTTPS from all pages within your website. So, if you haven’t already, start the transition now! Contact us over the phone or email if you need assistance and feel free to comment below.

Jun 26 2018

This is part one of a two-part series on transitioning to HTTPS

For some time, major internet players have advocated for a ubiquitous, secure internet, touting the myriad benefits for all users and service providers of “HTTPS everywhere”. The most prominent and steadfast among them is Google. In the next week, continuing a multi-year effort to shepherd more traffic to the secure web, Google will make perhaps its boldest move to date which will negatively impact all organizations not securely serving their website over HTTPS.

To quote the official Google Security Blog

Beginning in July 2018 with the release of Chrome 68, Chrome will mark all HTTP sites as “not secure”

[Image: Chrome “not secure” message for HTTP (source: Google blog)]

Given the ambiguous “in July 2018”, with no clearly communicated release date for Chrome 68, it’s wise to err on the side of caution and assume it will roll out on the 1st. We have readied our partners with this expectation.

So what does this mean for your organization if your site is not served over HTTPS? In short, it’s time to make the move. Let’s dig in.

What is HTTPS?

HTTP, or HyperText Transfer Protocol, is the internet technology used to communicate between your web browser and the servers that the websites you visit are on. HTTPS is the secure version (the “s” is for secure), which is served over TLS: Transport Layer Security. What these technical acronyms equate to are tools for internet communication that verify you’re communicating with who you think you are, in the way you intended to, in a format that only the intended recipient can understand. We’ll touch on the specifics in a moment and why they’re important. Put simply, HTTPS enables secure internet communication.

Why secure browsing matters

Leaving aside the technical details for a moment and taking a broader view than communication protocols reveals more nuanced benefits your organization receives by communicating securely with its audience.

HTTPS improves SEO

Since Google accounts for 75-90% of global search queries (depending on the source), SEO is understandably often synonymous with optimizing for Google. Given their market domination, competitors are taking cues from Google, and in most cases it’s safe to assume what’s good for SEO in Google is good for optimizing for competing search engines.

In the summer of 2014, Google announced on their blog that they would begin to favorably rank sites who used HTTPS over HTTP. It’s already been nearly four years since we’ve known HTTPS to be advantageous for SEO. Since then, Google has consistently advocated the concept of HTTPS ubiquity, frequently writing about it in blog posts and speaking about it at conferences. The extent to which serving your site over HTTPS improves your SEO is not cut and dry and can vary slightly depending on industry. However, the trend toward favoring HTTPS is well under way and the scales are tipped irreversibly at this point.

HTTPS improves credibility and UX

Once a user has arrived at your site, their perceptions may be largely shaped by whether the site is served over HTTP or HTTPS. The user experience when interacting with a site served over HTTPS is demonstrably better, and SEMrush summarizes well what the data clearly indicate: people care a great deal about security on the web.

You never get a second chance to make a first impression.

When engaging a member of your target audience, you have precious few moments to instill a sense of credibility with them. This is certainly true the first time a user interacts with your site, but it’s also true for returning users. You have to earn your reputation every day, and it can be lost quickly. We know credibility decisions are highly influenced by design choices and are made in well under one second. Combine these insights with the visual updates Chrome is making to highlight the security of a user’s connection to your site, and drawing the user’s attention to a warning in the URL bar translates to a potentially costly loss of credibility. Unfortunately it’s the sort of thing users won’t notice unless there’s a problem, and per the referenced cliché, at that point it may be too late.

Browsers drawing attention to insecure HTTP

Much like search, browser usage patterns have evolved over the last five years to heavily favor Google Chrome. Therefore, what Google does carries tremendous weight internet-wide. Current estimates of browser usage put Chrome between 55% and 60% of the market (again, depending on the source). Firefox has followed suit with Chrome as far as HTTP security alerts go, and there’s no indication we should expect this to change. So it’s safe to assume a combined 60-75% of the market is represented by Chrome’s updates.

Google Chrome HTTP warning roll out

Google (and closely mirroring behind, Firefox) has been getting more stringent in their display of the security implications of a site served over HTTP (in addition to sites misconfigured over HTTPS). They’ve shared details on the six-step roll out on their general blog as well as on a more technical, granular level on the Chrome browser blog.

In January 2017, they began marking any site served over HTTP that collects a password or credit card information as subtly (in grey text) “not secure”.

[Image: Chrome “not secure” message for HTTP (source: Laravel News)]

Then, in October 2017, they tightened things up so that a site collecting any form information over HTTP would get the same “not secure” messaging. They also added a more action-based aspect: showing the warning in the URL bar when a user enters data into a form. This is an especially obtrusive experience on mobile due to space constraints, which more deeply engages the user cognitively as to exactly what is unsafe about how they’re interacting with the site.

[Image: Chrome “not secure” message for HTTP (source: Google blog)]

Next, in July 2018, all HTTP sites will be marked as not secure.

In September 2018, secure sites will be marked more neutrally, removing the green “secure” lock by default, connoting a continuing expectation that HTTPS is the norm and no longer special.

[Image: Chrome “not secure” message for HTTP (source: Google blog)]

In October 2018, any HTTP site that accepts any form fields will show affirmatively not secure with a bold red label, much like a misconfigured HTTPS site does now.

[Image: Chrome “not secure” message for HTTP (source: Google blog)]

Though they haven’t yet announced a date, Google intends to show an affirmative “not secure” for all HTTP sites. The drive is clearly to establish the norm that all web traffic should be served over HTTPS and that outdated HTTP is not to be trusted. This is a pretty strong message that if Google has their way (which they usually do) HTTPS will be virtually mandatory. And “inevitably”, in internet years, may be right around the corner.

HTTPS vastly improves security for you and your users

Returning to the technical, as mentioned previously, HTTPS helps secure communication in three basic ways.

  • Authentication: “you’re communicating with who you think you are”
  • Data integrity: “in the way you intended to”
  • Encryption: “in a format that only the intended recipient can understand”

What authentication does for you

In order for the browser to recognize and evaluate an HTTPS certificate, it must be verified by a trusted certificate authority (CA). There are a limited number of CAs entrusted to distribute HTTPS certificates. Through public-key cryptography (a fairly complex but interesting topic) and inherent trust in the CA that provided the HTTPS certificate for a given site, the browser can verify that any site visitor is positively communicating with the expected entity, with no way for anyone else to pose as that entity. No such verification is possible over HTTP, and it’s fairly simple to imagine what identity theft would be possible if you were communicating with a different website than you appeared to be. In the event any of the major browsers cannot validate the expected certificate, they will show a strong, usually red, warning that you may not be communicating with the expected website, and strongly encourage you to reconsider interacting at all.

[Image: Chrome warning for misconfigured HTTPS]

Therefore, the authentication gives your users the confidence you are who you say you are, which is important when you’re engaging with them in any way whether they’re providing an email, credit card or simply reading articles.

How data integrity helps you

Ensuring perfect preservation of communication over the internet is another guarantee HTTPS provides. When a user communicates with a website over HTTPS, the browser takes the input of that communication and, using a one-way hashing function, creates a unique “message digest”: a concise, alphanumeric string. The digest can only be reliably recreated by running the exact same input through the same hash algorithm, irrespective of where and when this is done. For each request the user makes to the website, the browser passes a message digest alongside it, and the server then runs the input it receives from the request through the hash algorithm to verify it matches the browser-sent digest. Since it is nearly computationally impossible to reverse engineer these hash functions, if the digests match, it proves the message was not altered in transit. Again, no such data integrity preservation is possible over HTTP; there is therefore no way to tell if a message has been altered en route from the browser to the server.
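A quick illustration of that deterministic, one-way property, sketched in PHP:

// Illustrative only: the same input always yields the same digest,
// while any change to the input yields a completely different one.
echo hash('sha256', 'Transfer £100 to Alice.');
echo hash('sha256', 'Transfer £900 to Alice.');
// Each call prints a fixed 64-character hex string; despite the
// one-character difference in input, the two digests are unrelated.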

What encryption does for you

Communicating over an unencrypted HTTP connection allows for some easily exploitable security risks in the case of authentication to a site. To demonstrate how easy it can be to take over someone’s account on an HTTP connection, a tool called Firesheep was developed and openly released in mid-2010. The major social media platforms Facebook and Twitter were both susceptible to this exploit for some time after Firesheep was released. The identity theft is carried out through a means called session hijacking. With Firesheep installed, a few clicks could log you in as another user who was browsing nearby over WiFi, on any HTTP website. This form of session hijacking is possible when the authentication cookies, small identifying pieces of information that live in your browser while you’re logged into a site, are transmitted to the server on each request over HTTP. Over WiFi these messages are broadcast into the air in plain text, and can be picked up by anyone listening. HTTPS prevents this since the communication is encrypted and unintelligible to eavesdroppers.

In the example of a CMS like Drupal, or any other system with a login, if an administrator with elevated site permissions is logged in over HTTP, they’re subject to the same risk if that traffic is monitored or “sniffed” at any point along its path from the browser to the server. This is especially easy over WiFi but is not confined to WiFi. The cookies are sent to the server upon every request, regardless of whether the user entered their password during the active session. Depending on the admin’s privileges, this access can easily be escalated to complete control of the website. Encryption is a big deal.

HTTPS is required for the modern web

One of the more promising developments of the last few years is the pervasiveness and effectiveness of Progressive Web Apps (PWAs). PWA is the name coined for a set of technologies that provide a feature-set for mobile browsing akin to native applications, yet served entirely through the web browser. PWAs require all communication to be done over HTTPS. Some of the possibilities with PWAs that were previously relegated to native applications only are:

  • Providing content and services based on the user’s location data
  • Providing interaction with the user’s camera and microphone within the browsing experience
  • Sending push notifications
  • Serving off-line content

If you aren’t taking advantage of any of these features that are possible through PWAs, it’s something your organization should strongly consider to further engage users. Even before their ambition of feature parity with native applications is fully borne out, PWAs will continue to evolve the power of layering deeper engagement with users on top of your existing mobile experience with minimal effort. PWAs simply do not work over HTTP. HTTPS is required to open the door to their possibilities.

Barriers to HTTPS have been lifted

Historically, considering a move to HTTPS has been held back by some valid concerns for webmasters whose job it was to select where and how their websites were hosted. A few of the fundamental apprehensions could be categorized as:

  • No perceived benefit. People often assumed if they weren’t collecting financial or personal information, it wasn’t necessary. We’ve covered why holding this belief in 2018 is a misconception. Savas Labs made the move in July 2017 to serve exclusively over HTTPS for our statically-generated Jekyll website even though at the time we had no forms or logins.
  • Performance costs. We know reducing latency is crucial for optimizing conversions and HTTPS does require additional communication and computation. However, with the broad adoption of the HTTP/2 protocol over the last few years, HTTPS now usually outperforms HTTP.
  • Financial costs. HTTPS was too complex and costly to implement for some. Large strides have been made across many hosting providers, who now bundle HTTPS into their hosting offerings by default, often at no additional cost. Let’s Encrypt, a relatively new and novel certificate authority, first began offering free certificates (which they still do) and then made it easy to automatically renew those certificates, helping to ease the burden and cost of implementation. A sketch of that workflow follows this list.
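
To give a sense of how low that barrier now is, obtaining and installing a Let’s Encrypt certificate with the certbot client is typically a one-line affair (a sketch assuming certbot is installed and the site runs nginx; the domain is a placeholder):

sudo certbot --nginx -d example.com -d www.example.com

# Renewal can then be automated, e.g. from cron:
sudo certbot renew --quiet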

We cover each of these in more detail in part two, which will help guide you through making the move to HTTPS.

Conclusion

To revisit Google’s announcement:

Beginning in July 2018 with the release of Chrome 68, Chrome will mark all HTTP sites as “not secure”.

Interpreting that and providing our perspective:

You’re not part of the modern web unless you’re exclusively using HTTPS.

A bold, if slightly controversial, statement, but for ambitious organizations like the folks we’re fortunate enough to work with each day, HTTPS-only is the standard in mid-2018 and beyond. Given the benefits, the lifted barriers, and the opportunity for the future, very few organizations have a good reason not to serve their sites exclusively over HTTPS.

Have we convinced you yet? Great! Read part two for some assistance on how to make the move.


Mar 25 2018

Updating Drupal 8 core with Composer has proven to be a problematic process for many developers. For some, this is nearly as upsetting as the fact that the Composer logo is actually a conductor, and some have abandoned the platform entirely, opting to stick with Drupal 7.

The process isn’t always as simple as running composer update drupal/core and going about your day — the update from 8.3 to 8.4 was notoriously difficult and I recently experienced an issue while updating from 8.4.5 to 8.5.0. In this article, I’ve provided instructions for updating D8 core with Composer, plus some tips for dealing with common issues.

This is especially important now as we await a highly critical security update to all versions of Drupal, to be released on Wednesday, March 28, 2018. This level of security update is quite rare, but the update needs to be implemented on all sites as soon as possible after its release.

As the PSA linked to above notes, the Drupal Security Team will be providing security releases for unsupported minor versions 8.3.x and 8.4.x due to the issues many have encountered when updating from these versions. If you’re still on one of those versions, the update may be more straightforward if you stick with the release for that minor version.

General instructions for updating core

First, let’s cover the steps needed to update Drupal 8 core with Composer.

  1. To update the core package, run:

     composer update drupal/core --with-dependencies -v

     It’s recommended to run the update command with the --with-dependencies flag to update any of Drupal core’s dependencies as well.

  2. To capture any included database updates, run drush updb -y.
  3. To capture any included configuration changes, run drush config-export -y and commit the changes.

All three of these steps are necessary whenever the core package is updated.

Dealing with errors

Core version doesn’t update

If you run the composer update command but core isn’t updating, edit your composer.json file to include the specific version of core you want, e.g. ^8.5. Then, run the composer update command again.
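
For instance, the core constraint in composer.json might look like the following (a sketch assuming you require drupal/core directly; projects built from a Composer template may manage this line differently):

"require": {
    "drupal/core": "^8.5"
}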

Composer command outputs errors

Composer may not be able to resolve all of the dependencies of core and will output an error like this:

Your requirements could not be resolved to an installable set of packages.

  Problem 1
    - Conclusion: don't install drupal/core 8.5.0
    - Conclusion: don't install drupal/core 8.5.0-rc1
    - Conclusion: don't install drupal/core 8.5.0-beta1
    - Conclusion: don't install drupal/core 8.5.0-alpha1
    - Conclusion: don't install drupal/core 8.6.x-dev
    - Conclusion: remove symfony/config v3.2.9
    - Installation request for drupal/core ^8.5 -> satisfiable by drupal/core[8.5.0, 8.5.0-alpha1, 8.5.0-beta1, 8.5.0-rc1, 8.5.x-dev, 8.6.x-dev].
    - Conclusion: don't install symfony/config v3.2.9
    - drupal/core 8.5.x-dev requires symfony/dependency-injection ~3.4.0 -> satisfiable by symfony/dependency-injection[3.4.x-dev, v3.4.0, v3.4.0-BETA1, v3.4.0-BETA2, v3.4.0-BETA3, v3.4.0-BETA4, v3.4.0-RC1, v3.4.0-RC2, v3.4.1, v3.4.2, v3.4.3, v3.4.4, v3.4.5, v3.4.6].
    - symfony/dependency-injection 3.4.x-dev conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.0 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.0-BETA1 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.0-BETA2 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.0-BETA3 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.0-BETA4 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.0-RC1 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.0-RC2 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.1 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.2 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.3 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.4 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.5 conflicts with symfony/config[v3.2.9].
    - symfony/dependency-injection v3.4.6 conflicts with symfony/config[v3.2.9].
    - Installation request for symfony/config (locked at v3.2.9) -> satisfiable by symfony/config[v3.2.9].

This happens when one of Drupal’s dependencies is updated and the new version requires an updated version of another package. To resolve this, include the dependency package causing the issue in the composer update command. The --with-dependencies flag ensures that the dependency’s dependencies are also updated. To fix the error above, I ran:

composer update drupal/core symfony/config --with-dependencies -v

You’re not alone

If you continue to run into problems, the best advice I can give you is to search for the specific update you’re trying to make. Every time I’ve had an issue I’ve been able to find discussions online regarding that specific update and potential resolutions.

In fact, when I got the error above while trying to update to 8.5.0, I found this helpful article by drupal.org user eiriksm and was able to resolve the issue. Check out the article and its comments for more discussion on how to deal with Composer issues when updating Drupal 8 core.

Jan 24 2018

Every year I participate in a number of initiatives introducing people to free software and helping them make a first contribution. After all, making the first contribution to free software is a very significant milestone on the way to becoming a leader in the world of software engineering. Anything we can do to improve this experience and make it accessible to more people would appear to be vital to the continuation of our communities and the solutions we produce.

During the time I've been involved in mentoring, I've observed that there are many technical steps in helping people make their first contribution that could be automated. While it may seem like creating SSH and PGP keys is not that hard to explain, wouldn't it be nice if we could whisk new contributors through this process in much the same way that we help people become users with the Debian Installer and Synaptic?

Paving the path to a first contribution

Imagine the following series of steps:

  1. Install Debian
  2. apt install new-contributor-wizard
  3. Run the new-contributor-wizard (sets up domain name, SSH and PGP keys, calls apt to install necessary tools, configures procmail or similar filters, joins IRC channels, creates a static blog with Jekyll, ...)
  4. write a patch, git push
  5. write a blog about the patch, git push

Steps 2 and 3 can eliminate a lot of "where do I start?" head-scratching for new contributors and it can eliminate a lot of repetitive communication for mentors. In programs like GSoC and Outreachy, where there is a huge burst of enthusiasm during the application process (February/March), will a tool like this help a higher percentage of the applicants make a first contribution to free software? For example, if 50% of applicants made a contribution last March, could this tool raise that to 70% in March 2019? Is it likely more will become repeat contributors if their first contribution is achieved more quickly after using a tool like this? Is this an important pattern for the success of our communities? Could this also be a useful stepping stone in the progression from being a user to making a first upload to mentors.debian.net?

Could this wizard be generic enough to help multiple communities, helping people share a plugin for Mozilla, contribute their first theme for Drupal or a package for Fedora?

Not just for developers

Notice I've deliberately used the word contributor and not developer. It takes many different people with different skills to build a successful community and this wizard will also be useful for people who are not writing code.

What would you include in this wizard?

Please feel free to add ideas to the wiki page.

All projects really need a couple of mentors to support them through the summer and if you are able to be a co-mentor for this or any of the other projects (or even proposing your own topic) now is a great time to join the debian-outreach list and contact us. You don't need to be a Debian Developer either and several of these projects are widely useful outside Debian.

Nov 08 2017

This is part two of a two-part series.

In part one, we discussed how Drupal 8’s adoption in its first two years was a bit lackluster compared to what many expected. Grounded in a better understanding of the shortcomings of the past two years, we’ll try to equip those of you considering Drupal 8 with the information you need to make the best decision for your organization as you continue to invest in the powerful framework of Drupal.

Image of Drupal 8 adoption curve
Credit: Angie Byron AGAIN on “Everything you need to know about the top 8 changes in Drupal 8” from May 2015

Drupal 8 adoption is certainly no longer in the “early adopter” phase, yet we still haven’t entered the “majority” phase. For most organizations not yet powered by Drupal 8, our stance is: it’s probably time to upgrade. The value you’re missing out on with the newer software is real. Perhaps less obviously, if you’re investing in your Drupal 7 site beyond passive maintenance, you may well be doubling your long-term costs by deferring and exacerbating what will need to be refactored later. If your organization has web staff, works with an agency, or does any non-trivial customization to its Drupal website, this applies to you.

A disclaimer upfront

Drupal agencies like ours benefit from upgrades in the short term because they are usually a substantial undertaking. This fact, in part, is why over the past two years people have more often written to encourage an upgrade to Drupal 8 rather than offering a more holistic and measured perspective. A small dose of healthy skepticism typically serves site owners best. If Savas Labs is to live into its values, we must factor in the needs of two other stakeholder groups when advising on an upgrade: our clients, and the collective Drupal community. Given the substantial effort to upgrade, if we focus solely on the short term, we do our clients a disservice. In doing that, the next time those site owners and admins have the option to select a tool to power their web systems, they may look elsewhere, remembering the pain and disappointment of their recent experience. This ripple effect has the potential to create many former Drupal users. Imbued with the open source ethos, we believe we owe it to the broader Drupal community, from which we’ve gained so much, to consult with honesty and integrity.

What you’re missing out on

As we discussed in part 1, we lived through the challenges of the complete re-architecture of the Drupal application from 7 to 8.

Angie Byron, the person I apparently can’t stop referencing, said in 2013:

For people who grew up learning PHP on Drupal, and there are a lot of people for whom that’s true, I think Drupal 8 will be kind of a big adjustment for them.

Though it wasn’t easy, at Savas Labs we feel strongly that it was a wise investment that’s just beginning to pay off. At this point in its maturation, we believe now (as other Drupal leaders have felt for some time), that Drupal 8 is superior to previous versions in nearly all use cases for which organizations currently use Drupal. Some argue that Drupal 8 has become too complex and left smaller sites behind, but it’s important to consider their incentives for a well-rounded perspective. Via Acquia, Pantheon and other hosting providers, you can serve up a Drupal 8 website within minutes equipped with more features and a superior user experience to previous versions on a free tier to boot! While simultaneously catering better to those not writing code, the engineers who have always pushed Drupal to its physical limits have more power to build sophisticated tools and integrations that can do more for their clients than ever before.

In exploring this deeper, let’s start with the technical, and dig into the more nuanced to answer the Drupal 8 question: “What’s in it for me?” (WIIFM?), for you.

WIIFM? Features.

It’s fairly easy to find information touting Drupal 8’s strengths around the web, and it’s pretty straightforward that software we write today (and have been writing for 4 years) is superior to software written 8.5 years ago (or 10.5 years ago with Drupal 6). Let’s look briefly at some high-impact improved features for site owners and admins.

Design/UX/Usability Improvements with Drupal 8

In developing Drupal 8, perhaps for the first time, the Drupal leadership took user experience work seriously and developed a cohesive strategy to improve UX for Drupal 8. The results paid off.

  1. Responsive out of the box: Given that Drupal 8’s release came long after responsive web design became popular enough to garner its own acronym, naturally, all themes (administrative and otherwise) were developed to be responsive. RWD has been a must for years, but it took heavy lifting to achieve in Drupal 7.
  2. Better content authoring: Drupal 8 has adopted a more WordPress-like UX for editors; for many years this had been cited as a distinction between the two, rightfully favoring WordPress. Content authoring layout improvements coupled with responsiveness have made administration from a phone a pleasant experience.
  3. Accessibility at lower cost: Accessibility efforts, though not prioritized by all, continue to gain traction as we continue to expand our ability to be inclusive. We’ve seen clients threatened with lawsuits over not adhering to accessibility standards. Whether motivated by benevolence or risk-aversion, accessibility should be on your radar, and it’s easier in Drupal 8.
  4. Multilingual in core: With a cohesive system now in core, we have been able to build a couple of multilingual sites with relative ease, not having to dedicate substantial additional time to the translation component.

Drupal 8 multilingual is a world of difference. What would take 22 or more modules in Drupal 7 you would do with 4 (and all in core). - @kristen_pol

RESTful possibilities

One of the developmental focuses of Drupal 8 that we believe has tremendous impact on how organizations can maximize the value of their content is the API-first Initiative. We will likely write an entire post about this in the future, but in short, the initiative makes Drupal 8 much better equipped to serve as a central content repository that can expose content to many types of devices in the formats they require for display. Historically, Drupal has been almost exclusively focused on producing HTML (one format) for a web browser (the device/software). Drupal 8 now treats Roku, iOS and Android apps, video game systems, and the web browser all as first-class citizens for content consumption. As the number and variety of devices that connect to the web continues to rapidly grow, Drupal 8 can serve as a powerful hub that provides relevant content and experiences to end-users. You’d be remiss to snooze on this one. To get an idea of the possibilities, check out Contenta CMS, a Drupal 8 distribution built by some of the people behind the initiative.

WIIFM? Performance.

If you take performance seriously, which you should, there’s a lot to like about Drupal 8. Sticking with the theme of sophistication, Drupal 8 provides a much more granular ability to cache specific components than its predecessors. And as we know in the high-performance web world, cache is king. When Drupal 8 first came out, a leading Acquia engineer showed some mixed results on Drupal 8 performance. The heavier codebase invariably means having to swim upstream to make it outperform the lighter codebase in Drupal 7, but I’m happy to say the architects had their flippers on when working through these challenges. Take these two fundamental points:

  1. Regardless of how fast the underlying code executes, what matters to users is perceived performance, i.e. how long they have to wait to interact with the page. Perceived delay has been drastically reduced by an experimental-turned-core module (more on that later) called BigPipe. BigPipe loads components of a page in the order in which a user is expected to interact with them while delivering more expensive components as they’re available. This breaks with the tradition of all-or-nothing webpages served by Drupal that were either in the cache or not, lending itself to a Facebook-like experience, which is where BigPipe came from.

    GIF of Drupal 8 BigPipe demo video

  2. Modern performance tactics derive the largest gains from outside of the application, leveraging services like a Content Delivery Network (CDN) and/or a web application accelerator like Varnish to serve resources to anonymous traffic (users not signed in) as quickly as possible. For most sites, anonymous traffic comprises a majority of overall traffic. Traditionally, there have been limitations to improving performance for authenticated traffic, and that’s where Drupal 8 shines. With BigPipe and a more granular caching system, Drupal 8 can substantially outperform Drupal 7’s authenticated user experience, so it’s a win-win. A sketch of that granular cache metadata follows this list.
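
To make that granular caching concrete, here is a minimal sketch of how a Drupal 8 render array declares its own cacheability; the markup, context, and tag choices are illustrative:

use Drupal\Core\Cache\Cache;

$build = [
  '#markup' => 'Latest articles',
  '#cache' => [
    // Vary the cached copy by the viewer's roles.
    'contexts' => ['user.roles'],
    // Invalidate automatically whenever the set of nodes changes.
    'tags' => ['node_list'],
    // Otherwise keep the entry until a tag is invalidated.
    'max-age' => Cache::PERMANENT,
  ],
];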

If you’re made of time today, check out our other articles we’ve written about performance for a deeper dive into this broad and complex topic.

WIIFM? People.

We know that behind any powerful movement are powerful people. To quote Dries, as I did in my Drupalcon talk in New Orleans:

fostering the Drupal community is actually more important than just managing the code base.

Also, atop the Drupal.org homepage it used to read:

Come for the code, stay for the community.

Without needing to resonate with all the warm and fuzzies that many within the community do, these sentiments show the richness and value of the Drupal community. And that rich community, not out of neglect but rather necessity, has moved on from Drupal 7. Top designers, developers, and strategists are working on few Drupal 7 projects these days, and most would prefer to move on. For those who work with web designers and engineers (or used to be one like me), you know that they often have an insatiable appetite for learning, and want to feed it with tools that remain relevant to their growth and output. Sticking to dated software is an effective way to weed out the best and brightest.

The Improved Developer Experience (DX) of Drupal 8

Just like happy customers tend to be repeat customers, happier developers also produce returns; they’re more productive.

There are a few improvements in Drupal 8 that make life substantially better for developers. The Configuration Management Initiative (CMI) was a boon to developers who struggled with a module called “features”, which was not designed to do what most of us used it for. CMI replaces the previous workaround, rife with inconsistencies, for moving site configuration from development to staging and production environments. Although it may seem trivial, developers love this better system in Drupal 8, and it means more efficient development and therefore higher ROI.
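
A minimal sketch of that workflow with Drush (Drush 8 command names shown; later versions alias them as cex and cim):

# On the development site: export active configuration to YAML files.
drush config-export -y

# Commit the exported files, deploy the code, then on the target environment:
drush config-import -y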

Proudly found elsewhere / not invented here / getting off the island

A primary Drupal 8 philosophy that has largely been successful, but has yet to fully bear fruit, is the drive to drastically reduce “Drupalisms”: conventions specific to Drupal alone that had proven a challenge for newcomers, who had to learn a whole suite of them. The proudly-found-elsewhere paradigm seeks to mitigate this by leveraging the best of other open source tools when possible rather than reinventing the wheel. A few of the tools Drupal 8 now exploits are Symfony components, the Twig templating engine, and the Composer dependency manager. This “borrowing” has two positive consequences: 1) it reduces the workload for Drupal core contributors by utilizing what’s freely available and well vetted through other communities, and 2) it gives people familiar with those other frameworks a smoother onramp to productivity in Drupal. I believe we haven’t yet seen the majority of the benefit to the Drupal 8 project from the many people who were already versed in Symfony and Twig before working with Drupal.

To quote Angie Byron for the thousandth time (full video here):

For people who are classically trained or have experience in other languages, Drupal 8 is going to make a lot more sense to them than Drupal 7 did. We’re just falling more in line with what the larger people are doing… within the broader PHP community.

WIIFM? Cost savings.

The active decision to upgrade or passive indecision to wait both have cost implications. Perhaps this is the most useful section for readers whose primary responsibilities aren’t technical.

Continuing to invest in Drupal 7 (or earlier) can be costly in ways that may not be abundantly apparent on the surface. For most organizations who work with an agency, custom development is where the brunt of the effort is spent, and it is therefore the primary cost driver. “Custom development” occurs when the functionality a client requests is either not freely available on the open-source market, or the agency is unaware of its existence, and a developer writes code for the specific use case to “extend” the out-of-the-box functionality. The 80-20 rule applies well to software development in Drupal: roughly 20% of the functionality a client requests accounts for 80% of the effort of a project, since that 20% must be built from scratch. When site owners request various functionality, it can be difficult for them to differentiate what may constitute custom development versus what is freely available from the contributed community. Given the high effort of customization and the related technical debt it accumulates, site owners should request a high degree of transparency about what requires custom development when establishing project budgets. This way, the organization can do a cost/benefit analysis on a granular, per-feature basis. The goal for developers should always be to start by exploring what ready-made wheels are out there for the turning before crafting their own; be wary of thinking that runs the other way. Yet, as extensible and rich as the Drupal community is, nearly all of our engagements require customization.

Easy Drupal upgrades forever

A happy Drupal sunrise
Image from Dries’s blog post

To the surprise of the community, in an abrupt departure from business-as-usual in early 2017, Dries committed to “easy upgrades forever”, starting with Drupal 8, of course. The short of it is that Drupal 8 to 9 upgrades should be far easier (and less expensive) than any previous major version upgrade, and so it will be from here on. That means for those not yet on Drupal 8, you have only one final difficult upgrade left in your Drupal journey until the end of time.

This is a fairly natural outcome given the possibilities afforded by a more structured, object-oriented architecture coupled with the growing desire to ease upgrade pain that has been building for some time. Although difficult technical work is needed to flesh out exactly how this will be done, the commitment from the top is worth putting stock in and the community is on the way to making this grand proclamation a reality. When upgrades are far easier, they will help rectify some of the sentiment of leaving the smaller sites behind, since major version upgrades will be a much less daunting task with Drupal 8 and beyond.

Drupal 6 or 7 custom development is especially expensive

However, perhaps the most important point is that you may be doubling your efforts for the final time if you’re doing custom development on Drupal 6 or 7, since it will invariably need to be rewritten to work on Drupal 8 at the same 80-20 rate we mentioned earlier. Given the commitment to easy upgrades and guidelines for backwards compatibility, it’s quite likely that custom code written for Drupal 8 will be highly portable to Drupal 9 and 10, without an arduous rewrite.

A Drupal 6 house
Not our actual house, and it’s not this bad.

I live in the equivalent of a Drupal 6 house. My partner and I keep putting off things we’d like to do now in prep for a more substantial renovation “on the horizon.” We’re not going to get solar panels before replacing the roof, and we won’t upgrade to a high energy efficiency HVAC system until we restructure some of the foundation. We’re being mindful of mitigating our overall costs, which makes sense, but this all sets up a perverse incentive to make no improvements in the immediate. The same can be true for an older Drupal site. As she frequently reminds me, I’ll remind you: it’s probably time to take the plunge and build your Drupal 8 house.

Cost to upgrade is going down

While there remains one final decidedly not easy upgrade if you’re not yet on Drupal 8, the good news is the cost to upgrade has gone down and will continue to. As of the release of 8.4.0, migrating from Drupal 6 is nearly all the way there:

Core provides migrations for most Drupal 6 data and can be used for migrating Drupal 6 sites to Drupal 8, and the Drupal 6 to 8 migration path is nearing beta stability.

The sentiment on 7, expectedly so, is not as far along:

The Drupal 7 to Drupal 8 migration is incomplete but is suitable for developers who would like to help improve the migration and can be used to test upgrades especially for simple Drupal 7 sites. Most high-priority migrations are available.

So migration from 7 still requires some work. More on this ahead.
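
One way to gauge how much of your site comes across today is to run the core migration path using the contributed Migrate Upgrade module’s Drush command. A sketch, with placeholder connection details:

# Requires the migrate_upgrade module on the new Drupal 8 site.
drush migrate-upgrade \
  --legacy-db-url=mysql://user:password@localhost/olddrupal \
  --legacy-root=https://www.example.com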

Early adopters paved the way

We all owe a debt of gratitude to those who were willing to take the risk of building on Drupal 8 in its earlier days. We commend both organizations and agencies who were ambitious and willing to incur some risk to help push the rest of the project forward. We’re proud to put ourselves on that list, starting 2.5 years ago, but it unsurprisingly came with challenges and lessons learned. Mistakes that come with experience are virtually entirely positive for the future since we’ve learned what to do and what to avoid. It’s time for you to benefit from the work of the early adopters.

WIIFM? The Future.

The future is uncertain; the only things guaranteed are death and taxes. Actually, even those I’m not so sure about :wink: Regardless, the future for Drupal 6 and 7 is a known quantity not likely to get much better. The upside of Drupal 8, while partially known, is largely still in the making and will keep getting better over time.

Continuous innovation with experimental modules! Who doesn’t want that?

Among a suite of other firsts, Drupal 8 has updated its minor-version approach to accommodate innovations in core, and this is another game changer. Previously, the first version of Drupal 7 (7.0) was essentially functionally the same as the latest (currently 7.56). Now new minor releases introduce experimental modules, which are driven by agreed-upon priorities and are then vetted over time to see if they’ll graduate from the “experimental” label and be fully baked into core. Therefore Drupal 8 can and will adapt; Drupal 7 cannot. The transparent structure leadership established provides a good balance of innovation and predictability, with two minor version releases a year. To track these for your own planning, you can check the development roadmap at any time.

JavaScript

The Drupal community has been abuzz with “headless” or “decoupled” Drupal since the advent of Drupal 8. The basic idea is that Drupal can lean on its strength of being an excellent tool for highly structured and organized data in the backend while allowing freedom and flexibility of choice on the presentation layer (front end). Though discussed two years ago to no formal conclusion, Dries has recently cited React as the go-to presentation layer for Drupal administrative interfaces come early 2018. This is a fairly big deal, and formally moving forward to more tightly link with React has many implications we haven’t yet fully explored. As the lines between websites and web applications continue to blur, this proudly-found-elsewhere addition may prove to be a powerful one that will not be possible for Drupal 6 or 7. We see this as another wise move to be more in-sync with the rapidly growing impact of JS frameworks.

Access to complementary tools

The re-architecture towards object orientation helped Drupal join the development practices of the modern PHP community’s frameworks (Symfony, Laravel, Cake, Phalcon, Zend, Slim, CodeIgniter, Yii, and Fuel, to name the most popular). One subtle yet substantial benefit of that move is that many tools built to support these other frameworks are now available to Drupal as well. As the toolkit for modern web development grows richer and more robust, the more Drupal can utilize, the better. A couple of examples we’ve recently used to help inform project quality and future maintenance costs are Code Climate and Scrutinizer. These tools have much less value analyzing a Drupal 6 or 7 site.

Our advice

So we’ve dumped a lot of information on you at this point, but it may still not be entirely clear what you should do with your outdated Drupal site. Ahead we provide general suggestions as well as what is pertinent to site owners for each version separately (Drupal 6 and 7).

To everyone on Drupal <8

We still have <3 for Drupal <8. Drupal 5, 6, and 7 got us to where we are. But here’s what we think you should consider about where you’re going.

  1. Plan 3 years out if possible. A stitch in time saves nine, and every minute of planning saves 10 in execution. They’re clichés, but true. Planning well requires a real dedication to strategic and investigative work; there’s no way around it. The upside is it allows you to be intentional about when to incorporate an upgrade, rather than being at the mercy of expiring security support. Organizational stakeholders are usually not compelled by upgrading for the sake of upgrading without other bells and whistles that come with it. An experienced partner can help shepherd the long-term planning process to provide guidance on efforts and things to consider. If you’re not working with an agency, do it yourself. Expect to redesign and do a software upgrade every 3-5 years and time those together if possible. Factor in upgrades to other systems that integrate with your website as well as any initiatives that may require functional improvements. Put all of these larger investments on a roadmap with a timeline and be clear about what components are dependent on or impacted by other components. With technical work, the devil is in the details, so a thorough assessment or “discovery project” is usually the best next step. Discovery work is light on upfront investment yet thorough enough to guide your organization through the many choices in your roadmap. This is really the best way to use your resources most efficiently. If your organization hasn’t historically done this, it handicaps you a bit at the moment, but if you’ll excuse one final cliché: there’s no time like the present.
  2. Be mindful of what you don’t need. We all get excited about the possibilities of new functionality. However, when things we’ve built have outlived their purpose, let them go. Given the complexity and interdependence of the tools we build, customizations take the form of mounting, insidious and potentially crippling technical debt if left unaddressed. The cost to upgrade that technical debt is likely the major cause for most of those who have not yet upgraded; it is certainly the case for all of our partners who haven’t. This debt can be hard to track, and it’s not something most agencies proactively share, since they have a hand in creating it and can also be shortsighted. Ask for answers as to how your partner is managing your technical debt. If you don’t get good answers, keep asking. Another subset of this concept is that even if you want to maintain certain functionality, it needn’t be done with the same modules on the Drupal 8 platform. So don’t take a given module’s absence in Drupal 8 as a certainty that its functionality cannot be efficiently achieved in Drupal 8. In many cases, it can.
  3. Training will be required. If you plan to build with the same team that built your <8 site, and they have not worked on any other Drupal 8 or object-oriented PHP projects, make sure you dedicate time and budget resources for substantial training.

To those with production Drupal 6 sites.

I wrote a Drupal 6 series as Drupal 8’s release was announced (the overview, the risks, the options, Drupal 7 or 8) which is a good reference for both what was true then and what has changed since. Then, I certainly encouraged a conversation with your partner about what is right for you. That has not changed. What has changed is the maturity of the migration system, making it easier to port your content from 6 to 8. An upgrade to Drupal 8 by way of migration should be where you start, based on all of the above, and the job before committing to that path is to thoroughly vet all requirements of the migration. Given that migrations are high-effort, you should explore alternatives with your development team. How much it’s worth investing in a migration depends on how valuable your old content is to preserve, which varies widely among organizations.

If through research you uncover that you’re still not ready for Drupal 8, you should make an action plan to follow up on the components that will allow you to upgrade to Drupal 8 and track those over time. You should look into efforts to upgrade to Drupal 7, being mindful of how you can mitigate the costs of the eventual Drupal 8 upgrade. You should consider support from the MyDropWizard (MDW) team in the immediate term. It’s led by David Snopek, who is on the core security team and has an impressive Drupal resume. It’s hard to assess how the support provided by MDW compares to the core security team, but it’s much better than not having any security support. I would also caution against using the relief of having coverage through MDW as a reason to rest on your laurels. If Drupal is still working for you, you should be thinking about how to get to Drupal 8. Additionally, since as we agreed earlier there’s no longer even certainty in death and taxes, it’s possible that things could change for MDW, and you’d be without support again.

To those with production Drupal 7 sites.

As far as building new on Drupal 7, I have a hard time conceptualizing for whom that is the right choice. MDW wants to keep the door open to building new in Drupal 7, but others again point out the incentives at play. Much like the hat tip to Dries for accepting criticism, I must commend MDW for accepting these comments on their blog.

If you have a high degree of customization and technical debt, keep track of the development of Drupal 9. “Upgrade now” cannot be prescriptive for the 900,000+ sites still on Drupal 7. We generally agree with Angie’s recent presentation at Acquia Engage, called Drupal 9 and Backwards Compatibility: Why now is the time to upgrade to Drupal 8, for those on Drupal 7:

If it’s working for you that’s fine! (Until Drupal 9) But if D8 offers features you want, consider earlier adoption.

So, if you’ve determined that you’ll remain on Drupal 7 for some time, your development team should be aware of a couple Drupal modules (xautoload and service_container) that make writing Drupal-8 like code possible in Drupal 7. These tools will help familiarize developers with Drupal 8 paradigms and possibly reduce substantial technical debt in the future.

To those not on Drupal, but considering it

If you’re in the market for a CMS and have ambitious web goals, you should at least check out Drupal. It’s been holding fairly steady in the CMS market, and given some of the problems of the past with versions before Drupal 8, we think this speaks very promisingly of Drupal’s future. This is not to say that it is the right fit for all websites. It is not. However, with free hosting options, an improved and simplified admin experience, and the most powerful backend of the open source CMSes, it does fit a lot of needs.

We want to hear from you

We want to hear about your experience, whether or not it resonates with what we’ve presented here. Are you having challenges upgrading that you feel went unaddressed here? Notice anything we overlooked? Comment away, or write us privately if that’s more appropriate. I’m also often on the Drupal Slack (@chrisarusso) checking in on our local #TriDUG meetup conversations.

Nov 01 2017

Yesterday a project on github was moved, causing Drupal’s own build process, and that of many other websites, to “break”. Here’s what happened:

  1. The coder library was removed from github (its main home is on drupal.org).
  2. Drupal core’s composer.json had a reference to the now non-existent repository.
  3. Anyone attempting to obtain Drupal by downloading its source and running composer install couldn’t, due to the broken link.
  4. Many other developers who tried to build their websites were similarly left disappointed.

This issue on drupal.org captures the problem in more detail.

We’ve been here before

In March 2016 a JavaScript library called left-pad was removed from the npm package manager, causing the builds of many front-end projects that used it to break.

This seems to be a risk that comes with dependency management, and it raises the question: should vendor be committed to version control? I’m hoping that this post will help you answer that.

Notice that I’m only talking about full applications or website codebases here, not libraries. If you’re working on some sort of standalone component for use within a larger project (like a contrib module), you definitely don’t want to ship vendor to your users. Nor do you want to include composer.lock.

A lean codebase

The keep-vendor-outside-git argument favours a lean codebase. There’s some merit to this. After all, upgrading a module is often considered a single, atomic change, and it’s nice when a pull request of the form “Upgrade the EVA module to 1.2” comes down to a single line:

diff --git a/composer.lock b/composer.lock
index 378e3be3fa..8f6d7d31a9 100644
--- a/composer.lock
+++ b/composer.lock
@@ -637,17 +637,17 @@
         },
         {
             "name": "drupal/eva",
-            "version": "1.1.0"
+            "version": "1.2.0"
         },

It’s not hard to cross-reference that with a changelog in the other project, should you need to. And downloading the code each time prevents you from modifying a module without auditing it. Every patch is explicitly listed.
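
In a Drupal Composer workflow, that explicit patch listing is commonly handled by the cweagans/composer-patches plugin, which records every applied patch in composer.json; a sketch with a hypothetical patch entry:

"extra": {
    "patches": {
        "drupal/eva": {
            "Hypothetical bug fix description": "https://www.drupal.org/files/issues/example.patch"
        }
    }
}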

It’s interesting that composer’s own documentation recommends this approach.

Resilience

The commit-vendor-to-git argument focuses on resilience. You don’t want your build process to be dependent on external services that may or may not be available. Furthermore, you want to be able to track all changes to your source code in one place, both the bespoke code and any other libraries.

Pascal Morin articulates this well here: Do you really need composer in production? The CocoaPods dependency manager for iOS also leans towards this position.

Can we have our cake and eat it?

We’re looking at two objectives - the resilience of not being dependent on packagist/github/drupal.org for building, plus the advantages that come with a lean codebase. By trying to achieve both, are we trying to have our cake and eat it?

In an ideal world, each project would have some kind of artefact repository. One that’s under your control and from where you can obtain every library/version combination you’ve ever used in the project. This is what Maven, a Java dependency management tool, suggests.

Do I think the core Drupal repository should contain a vendor directory? No, I don’t. I think it’s Drupal’s job to be a component, augmented with contributed modules and perhaps custom code. Every Drupal website is different; as soon as you add a contributed module the contents of vendor change.

But unless you have the resources to manage an artefact repository, your website’s repository probably should contain vendor. You can still have the benefits that composer brings, but why introduce another point of failure at build time?
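
If you do take that route, committing vendor is a small, one-time change; a sketch, assuming vendor/ is currently listed in your .gitignore:

# Remove the "vendor/" entry from .gitignore, then:
composer install    # rebuild vendor exactly as pinned in composer.lock
git add .gitignore vendor
git commit -m "Track vendor for build resilience"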


Oct 27 2017

Drupal 8’s official release was nearly two years ago, and many ask: how is it doing? Has it lived up to its ambition to revolutionize Drupal websites?

In the first of a two-part series, we’ll provide our insight into the evolution of Drupal 8 over its first two years in the wild. In part two, we’ll look at important factors to consider in your Drupal investments going forward.

(Drupal) Change is hard

To launch a website (much like to rock a rhyme that’s right on time) is tricky; to operate a web system in a way that uplifts your organization is just plain hard. Although keeping up with the most current software is typically advisable, there are costs to doing so, even for free software like Drupal. Without the vendor lock-in that comes with proprietary Content Management Systems, Drupal site owners have a high degree of freedom to consider how best to invest their web resources. However, this freedom also has a price (see a pattern emerging?) in the form of time and stress incurred from the responsibility to select the best digital tools to drive your organization for years with limited information to evaluate the nearly limitless options.

Of the 1 million+ organizations whose main window to the digital world is powered by Drupal, many have priorities that compete in time and budget with web system investments. When considering these priorities, determining the right time to invest in an improved user experience, design, feature-set, or software upgrade can be difficult.

With the ever-increasing complexity and interconnectedness of the software systems we build, even the world’s most prominent organizations, often with legions of engineers, have had colossal mishaps with upgrades. By default, upgrades are not easy.

To upgrade or not? That is the question.

Like any decision, whether you’re building new or upgrading, the fundamental question is: do the benefits outweigh the costs? In the specific case of investing in Drupal the question becomes: when does making the leap to Drupal 8 rather than continuing to invest in Drupal 7 (or possibly 6… don’t tell me it’s 5 :wink: ) outweigh the costs to take that leap? For any organization to properly answer that question, it’s necessary to look 3-5 years out with regard to budget and organizational goals. It’s also helpful to better understand how the broader community has approached this same decision over the past two years. Let’s take a look.

Taking stock of Drupal 8’s adoption


After nearly two years since its public release, how has the adoption of Drupal 8 gone?

Analysis from the top

Before DrupalCon North America in May 2016 in New Orleans, Drupal founder and current project lead Dries Buytaert blogged that Drupal 8 was doing “outstanding,” citing statistics to substantiate his optimistic view.

Based on my past experience, I am confident that Drupal 8 will be adopted at “full-force” by the end of 2016.

Many in the community contested the veracity of his optimism in the article’s comments and I commended Dries (yes that’s me and not him, and definitely not him) for facilitating an open conversation that elicited a broad perspective.

About a month later, some six months after Drupal 8 was released, Savas Labs attended DrupalCon NOLA.

During the perennial “Driesnote,” Dries continued to present Drupal 8 as well on its way to match if not exceed the success of Drupal 7.

I really truly believe, Drupal 8 will take off. My guess is that by the end of this year [2016] Drupal 8 will reach escape velocity… it will become the de facto standard.

and

The new architecture, features, as well as frequent releases: all of these things make me feel really, really optimistic and bullish about Drupal 8.

Adoption by the numbers

According to the usage statistics available on the Drupal website, at the time of writing nearly 80% of the world’s Drupal websites were powered by version 7.

A graph of Drupal version adoption, started by Angie Byron of Acquia, that I updated to the present.

When Drupal 7 was released on January 5, 2011, there were already more Drupal 7 sites than sites powered by the major version two releases prior: Drupal 5 (A). The same feat for Drupal 8 took over nine months after its release to achieve (C). Total Drupal 7 sites eclipsed total sites of its predecessor version (6) about 13 months after the release of Drupal 7 (B). After nearly 2 years from the release of Drupal 8, it has not yet eclipsed Drupal 7 installations, and at present there are over 700,000 more Drupal 7 sites than Drupal 8.

Our take on Dries’s bullish-ness

To his credit, the future is notoriously difficult to predict, and even when predicting it, Dries spoke of the significant work that lay ahead to see his vision come to fruition. He also made the referenced comments well over a year ago, and I’ll concede speaking in hindsight is infinitely easier. Having said that, comparing the total number of upgraded Drupal 8 sites to Drupal 7 sites over the same period from release in a community that had grown ~220% since Drupal 7’s release, while factually indisputable, was probably not as accurate as using adoption percentages to analyze overall trends.

Even the most conservative interpretations of “escape velocity” or “full-force” would have to concede that we’re at least a year behind Dries’s hopes when he was reporting from DrupalCons Barcelona and New Orleans on impending rapid Drupal 8 adoption. But, what’s a dictator worth his salt to do, benevolent or not, other than to stretch the stats a bit to show what he would like to be true for his beloved community, from which he also profits?

Our assessment

After two years, the data unequivocally show, as I began discussing at DrupalCon New Orleans, the rate of Drupal 8 adoption is objectively slower than Drupal 7. At this point, a majority of organizations have not yet upgraded from 7 to 8, though likely many have begun efforts. Taking a simplistic view, this means Drupal 8 has either been more costly to upgrade, a comparatively less valuable product, or perhaps both.

Regardless, since it matters to our partners, we found it important to explore the reasons behind the slow adoption rather than to pretend it’s not happening. After architecting Drupal 8 web systems for 2.5 years, we have gained insight into the relatively slow adoption.

Drupal 8 adoption challenges

Drupalers haven’t written much retrospective analysis of the Drupal 8 adoption challenges. But without being able to take a real, honest look inward, we cannot improve. We must know thyself, because the examined Drupal problems are worth fixing! We highlight here the most prominent challenges that have slowed Drupal 8 adoption.

1. Complete code re-architecture

The massive shift of the underpinnings of the Drupal code is a decision that has long been debated within the community. There’s no question it has proven a challenge for proficient Drupal 7 developers to develop on Drupal 8: for most, substantial training and learning is required. Training takes time, and time can often mean money. The loss in short-term efficiency for seasoned Drupal developers made early adoption riskier, and typically added to a project’s expense. Joining with other prominent frameworks known outside of Drupal like Twig and Symfony (colloquially referred to as “getting off the island”) was a collective decision by wise Drupal leadership with the long-term value of the product in mind, but in the short-term, for the average Drupal developer, it meant more new things to learn.

2. Slow contributed module porting

Historically Drupal has derived much of its usefulness from the rich contributed module ecosystem that extends the features of Drupal core. Contributed modules, although crucial to most live Drupal websites, by definition are not directly driven by those that oversee Drupal core development. This disconnect invariably leads to some important modules not having a usable upgraded version when a new major version of Drupal core is released. This is well-known within the Drupal community, explained at great length by Angie Byron (second reference), and not unique to the Drupal 8 release. Tremendous amounts of individual and community efforts are required to upgrade modules to the latest major version. Due to #1 from above, these efforts were further exacerbated by the re-architecture. Costs to upgrade even one module (it’s common for a Drupal 7 site to use 100) are often greater than clients or agencies are willing to absorb on a given project.

3. Incomplete upgrade path

We often describe websites as comprised of three main asset groups: the code (Drupal core, contributed and custom modules), the files (think media assets like images), and the database, where content and site configuration live. When upgrading, you download the new Drupal code, which includes a set of instructions that must be run to apply complex updates to the database. Files remain unchanged. A well-oiled upgrade process is required to update the content and configuration from the site being upgraded into a format intelligible to the new system. The approach to performing those upgrades has also changed in Drupal 8, to what is now referred to as “a migration”. The release notes for the most recent minor release of Drupal 8, from this October, state:

…Drupal 6 to 8 migration path is nearing beta stability. Some gaps remain, such as for some internationalization data. The Drupal 7 to Drupal 8 migration is incomplete but is suitable for developers who would like to help improve the migration and can be used to test upgrades especially for simple Drupal 7 sites. Most high-priority migrations are available.

“Nearing beta stability” two years after release is not ideal, though it is reality, since perfecting these migration tasks is hard work. One can discern from the Drupal 7 -> 8 migration snippet that it’s clearly further afield, and for those who need to preserve their content, perhaps a non-starter for a 7 -> 8 upgrade. The inability to efficiently update database structures adds to project expense. Whatever doesn’t come over “for free” with the migration will need to be manually replicated by a human, and humans are costly, as our time is precious.

4. Stance on backwards compatibility

Drupal’s approach to backwards compatibility is famously “for data, not code”. Briefly put, in their words: “While the upgrade path will reliably preserve your data, there is no backward compatibility with the previous Drupal code.” If you want to dig deeper, there’s a lot of good discussion on this topic.

WordPress’s approach, perhaps more than anything, explains its ubiquity and ability to better keep sites on the latest version. In their words:

Major releases add new user features and developer APIs. Though typically a “major” version means you can break backwards compatibility (and indeed, it normally means that you have), WordPress strives to never break backwards compatibility. It’s one of our most important philosophies, and makes updates much easier on users and developers alike.

Albeit a bit confusing, even for the non-technologist, you get the sense they’re more worried about breaking stuff and want upgrades to Just Work™. The strength of the Drupal approach is it allows for more innovation, and in some ways, less baggage since preserving backwards compatibility often means hanging on to outdated code. The trade-off is, once old code is determined to be holding innovation back, it’s cast to the side, and new structures must be implemented in the updated version. Historically, this paradigm has caused many to get stuck in an outdated Drupal version for longer than they’d like because they cannot afford an upgrade.

5. Inertia, perceived value, and expense

A modern organization is focused on more than just its website, and stakeholders often overlook investments that don’t deliver direct, visual, tangible change, even when they present value. Examples where the value is often invisible to clients include investing in an automated testing framework that ensures perpetual site integrity, or vigilantly applying security updates as they become available. In either case, the client may perceive them as optional, but foregoing them is likely to cost the organization in the long run.

Since Drupal only provides security support for two major versions at a time (presently 7 and 8), for many the prime motive to upgrade upon a new release, often framed as a mandate, is that the version two major releases prior falls out of support. When Drupal 8 came out, Drupal 6 fell out of support after a grace period of three months, generously extended from the day of release given the community’s recognition of some of the challenges we’ve documented here.

If an organization doesn’t heed the security warnings and doesn’t find enough value in the new features, it may choose to ignore the upgrade completely. The truth is that it’s hard to estimate the future risk of using outdated software. However, that future risk is very real, and digital security compromises show no signs of slowing down. Savas Labs always advocates for timely security coverage, but it has not always been a budgetary possibility for our partners to upgrade from Drupal 6 to Drupal 8 upon the release of 8.

An answer to the Drupal 6 problem

In addition to our experience, the usage data show many organizations did not plan sufficiently to upgrade from Drupal 6 to 7 or 8 upon Drupal 8’s release. Recognizing that, a Drupal agency, MyDropWizard (MDW), set up long-term security support for the many Drupal 6 sites that were not ready to upgrade to Drupal 8. It’s debatable whether or not this was a good thing for the community. People forced to change will often change sooner than they would otherwise, but they may resent you for it. Conversely, you’d be hard-pressed to find an MDW client who didn’t experience relief from anxiety when offered an inertia-compliant alternative.

Organizations that don’t perceive opportunity in the value the new software provides will look at an upgrade strictly as an expense to avoid, likely citing topics we’ve covered here.

Takeaways

Through experience and analysis, we see there are many understandable and justifiable reasons why many organizations haven’t yet upgraded to Drupal 8. Now that we’ve done the hard reflection, the good news is that the present is a much brighter place for not only Drupal 8 but all future versions of Drupal. We have made it through most of the difficult growing pains, and there’s great reason to believe that the community has invested wisely in the future. In part two, we cover the costs of investing in Drupal 7, and why it’s probably time to move to Drupal 8.

Oct 08 2017
Oct 08

Have you been to an event recently involving free software or a related topic? How did you find it? Are you organizing an event and don't want to fall into the trap of using Facebook or Meetup or other services that compete for a share of your community's attention?

Are you keen to find events related to your interest areas in foreign destinations, to coincide with other travel plans?

Have you been concerned when your GSoC or Outreachy interns lost a week of their project going through the bureaucracy to get a visa for your community's event? Would you like to make it easier for them to find the best events in the countries that welcome and respect visitors?

In many recent discussions about free software activism, people have struggled to break out of the illusion that social media is the way to cultivate new contacts. Wouldn't it be great to make more meaningful contacts by attending a more diverse range of events rather than losing time on social media?

Making it happen

There are already a number of tools (for example, Drupal plugins and WordPress plugins) for promoting your events on the web and in iCalendar format. There are also a number of sites, like Agenda du Libre and GriCal, which aggregate events from multiple communities so that people can browse them.

How can we take these concepts further and make a convenient, compelling and global solution?

Can we harvest event data from a wide range of sources and compile it into a large database using something like PostgreSQL or a NoSQL solution or even a distributed solution like OpenDHT?

Can we use big data techniques to mine these datasources and help match people to events without compromising on privacy?

Why not build an automated iCalendar "to-do" list of deadlines for events you want to be reminded about, so you never miss the deadlines for travel sponsorship or submitting a talk proposal?
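As a rough sketch of that last idea, a deadline can be expressed as an iCalendar VTODO that calendar apps understand. Here it is in PHP, with every name and value purely illustrative:

<?php

// Build an iCalendar VTODO for a single event deadline.
// All identifiers and dates below are made-up examples.
$deadline = new DateTime('2018-03-01 23:59', new DateTimeZone('UTC'));
$ics = "BEGIN:VCALENDAR\r\n"
  . "VERSION:2.0\r\n"
  . "PRODID:-//example//free-software-events//EN\r\n"
  . "BEGIN:VTODO\r\n"
  . "UID:cfp-example-conf-2018@example.org\r\n"
  . "SUMMARY:Submit talk proposal for Example Conf\r\n"
  . "DUE:" . $deadline->format('Ymd\THis\Z') . "\r\n"
  . "END:VTODO\r\n"
  . "END:VCALENDAR\r\n";

// Any calendar app subscribed to this file will show the deadline.
file_put_contents('deadlines.ics', $ics);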

I've started documenting an architecture for this on the Debian wiki and proposed it as an Outreachy project. It will also be offered as part of GSoC in 2018.

Ways to get involved

If you would like to help this project, please consider introducing yourself on the debian-outreach mailing list and helping to mentor or refer interns for the project. You can also help contribute ideas for the specification through the mailing list or wiki.

Mini DebConf Prishtina 2017

This weekend I've been at the MiniDebConf in Prishtina, Kosovo. It has been hosted by the amazing Prishtina hackerspace community.

Watch out for future events in Prishtina, the pizzas are huge, but that didn't stop them disappearing before we finished the photos:

Jul 20 2017
Jul 20

Simple Style Guide was created to be a fully flexible style guide for Drupal developers and site builders.

I’ve been using style guides for a while now. I can’t imagine developing any site without one, regardless of size. The idea behind this module was to enable devs and site builders to create a fully functional, living, style guide with only the elements you want and nothing more.

What I wanted was the ability to create one in a fast, efficient manner. No elements are required. No elements are added by default. And all this functionality is fully accessible to site builders without having to write a single line of code.

Style Guide Settings

Default Patterns
You can choose from a set of very basic default patterns such as headings, text, lists, blockquote, horizontal rules, table, alerts, breadcrumbs, forms, buttons, and pagination. Chosen elements will appear on the style guide page. Choose as many default options as you like, or choose none.

Color Palette
You also have the ability to create a color palette by adding a hex color code, a class name, and usage descriptions (if desired).

Live Example of a Color Palette

Custom Patterns
You can also create custom patterns. Custom patterns can be any chunk of html that you want. There are no restrictions.

Add Any Custom Patterns

With these tools, I hope you will be able to create a very flexible style guide/pattern library. To view a live example of a working style guide, you can check out this page:

https://eisforeveryone.com/simple-styleguide

Jul 20 2017
Jul 20

Simple Password Reveal alters password fields on user login and user edit forms to show plain text by default, while also adding a checkbox for concealing the password as needed.

Rather than creating friction for a user to show a password every time by clicking a checkbox, the password is revealed by default. In my own experience, I generally prefer password fields to be plain text almost all the time. It’s only when in public or during a presentation that I want to conceal passwords. And I’m not the only one…

There is another module that provides similar functionality, but Simple Password Reveal takes a different approach than the Password Toggle module, which uses JavaScript to add a checkbox to each and every password field in any form. Password Toggle also has a Drupal 7 version.

This module attempts to keep things simple by concentrating solely on the user login and user edit pages. If you need this feature on custom forms, on forms loaded by ajax, or for a Drupal 7 site then this module may not be for you.

Simple Password Reveal also uses form alters to add one checkbox per form, rather than one checkbox per input. So, for example, when you are on the user edit page you have three password fields — current password, new password, and confirm password. Rather than having a checkbox for each password field, this module only has one.
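To illustrate the approach, here's a minimal sketch of that kind of form alter (the module name, library and details are hypothetical, not the module's actual code):

<?php

use Drupal\Core\Form\FormStateInterface;

/**
 * Implements hook_form_FORM_ID_alter() for the user login form.
 */
function mymodule_form_user_login_form_alter(array &$form, FormStateInterface $form_state) {
  // One checkbox for the whole form, not one per password field.
  $form['conceal_password'] = [
    '#type' => 'checkbox',
    '#title' => t('Conceal password'),
    '#default_value' => FALSE,
  ];
  // A small JS behavior (not shown) toggles the password inputs between
  // type="text" and type="password" whenever the checkbox changes.
  $form['#attached']['library'][] = 'mymodule/conceal_password';
}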

Jul 20 2017
Jul 20

The Simple MailChimp module for Drupal 8 intends to be the easiest way to add MailChimp integration to your site.

There is already a MailChimp module for Drupal, of course. There are several of them.

The main MailChimp module itself does a lot…

The MailChimp module allows users to manage email marketing efforts through MailChimp’s service. The module supports the creation and sending of campaigns, management of email lists and individual subscribers, and offers standalone subscribe and unsubscribe forms.

The problem with these modules is that they either do too much, or they are too specific in their use case. What I often need on my sites, more than anything else, is just a checkbox at the bottom of a form that will allow me to subscribe users to my MailChimp list if they choose to do so. Most likely, I need a checkbox at the bottom of many forms.

I don't need to manage campaigns, lists, etc. from within my Drupal site. I just need a checkbox. Maybe a few options (MailChimp groups), but that's it. And, again, I need it on all forms. I need it on my subscription form of course, but I also need it on warranty registrations, user registrations, webforms, or any other form that may be included with my site.

This is where the Simple MailChimp module comes in.

Example form with “group” options.

Again, this module is not meant in any way to be as robust as the MailChimp module. You can’t manage subscribers. You can’t work with lists. It simply gives you the ability to add a checkbox for subscribing to a single MailChimp list, and also allows a field for one interest group option.

To configure this module, you will need your MailChimp API key, list ID, and a mapping of fields. See screenshot below. Under “Enabled Forms” you would enter one Drupal form id per line, and then map the email field (and other fields) to the appropriate merge fields.

Simple MailChimp supports most MailChimp field types:

  • text
  • zip_code
  • number
  • address
  • date
  • phone
  • birthday
  • website

The idea of this module is to be simple. It does not make any assumptions. It does not provide any public facing forms. It’s simply for adding a checkbox to existing forms.

Download Simple MailChimp and try it out.

Jul 20 2017
Jul 20

One of my pet peeves is searching for a local event and finding details for that event… 3+ years ago.

Many Drupal sites feature some sort of event node type. It's really anything with a start date and, likely, an end date. The problem is, most developers don't take into account whether or not that content should live on once the end date has come and gone.

Perhaps, in some instances, keeping that content on your site makes sense. In most cases though, it does not.

For instance, my 3-year-old was really into dinosaurs. I knew there was a dinosaur exhibit coming to town, but I didn't quite remember the name. Searching online provided quite a few local results, and many of those results were for events in the past.

Examples

Discover the Dinosaurs (06/21/2014)
http://www.evansvilleevents.com/home/events/discover-the-dinosaurs
(event has since been unpublished!)

DISCOVER THE DINOSAURS ROARS INTO EVANSVILLE! (12/14/2012)
http://www.evansvilleevents.com/home/2012/12/discover-dinosaurs-roars-evansville
(event has since been unpublished!)

Dino Dig! (06/02/2015)
http://www.cmoekids.org/events/community-events/dino-dig

Event from 2 Years Ago

Discover the Dinosaurs Unleashed (02/18/????)
http://www.evansvilleliving.com/event/discover-the-dinosaurs-unleashed

Sometimes sites will even have past events ranking higher in search results than upcoming events.

There’s a whole other blog post I could write about how useful it is to have the year accompanying the day and month on web content — particularly tech blog posts. Was this written in February of this year or 2006? How can I know?!?

The Drupal Solution

For Drupal sites, there’s a relatively easy fix. It requires a small custom module and the contributed Scheduler module.

The Scheduler module is simple and great. Enable it for your content type and turn on, at the very least, the unpublish setting. Once that is set up, create a custom module and implement hook_entity_presave().
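A minimal sketch of such a hook, assuming a node type of event and a date range field named field_event_date (your machine names may differ):

<?php

use Drupal\Core\Entity\EntityInterface;

/**
 * Implements hook_entity_presave().
 */
function mymodule_entity_presave(EntityInterface $entity) {
  if ($entity->getEntityTypeId() === 'node' && $entity->bundle() === 'event'
      && $entity->hasField('field_event_date')) {
    // Use the end date if present, otherwise fall back to the start date.
    $end = $entity->field_event_date->end_value ?: $entity->field_event_date->value;
    if ($end) {
      // Scheduler's unpublish_on field takes a Unix timestamp.
      $entity->set('unpublish_on', strtotime($end . ' +1 day'));
    }
  }
}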

This code is pretty self-explanatory. All I'm doing is checking that it's an event node type being saved, and if so, finding the start and end date values to use when setting the "unpublish_on" field.

You’ll of course have to make sure your node type and field names match up.

Once that’s set up, any time an event is saved, your node is scheduled to unpublish one day after the end date.

If you have a Drupal 7 site, this same idea can be applied. The code in the hook_entity_presave() will be a bit different.

I wish I could start a massive movement to help clean up web content that should have been unpublished or removed long ago. Until then, hopefully this article finds a few devs so that they can ensure their site isn’t one of those sending out poor results.

Jul 06 2017
Jul 06
7 July 2017

Drupal has a thriving community in Bristol and the south-west, and they’ve put on some excellent events over the last few years. Last weekend they had their third DrupalCamp Bristol, and I was fortunate to be able to attend and speak.

The day opened with a keynote by Emma Karayiannis on self care and supporting others within open source communities.

Emma shared some of her contributions to Drupal, where she is part of the Community Working Group and track chair for the Being Human sessions at DrupalCon.

Look after yourself. Don’t feel that you can only contribute a little bit or that your opinion doesn’t matter. Just find something rewarding and start small. Getting involved can be daunting, so get to know the part of the community that deals with your interest, ask how to get involved and ask for someone to help you.

Burnout is real, and happens much more when we work alone, taking on lots of responsibility without anyone to partner with. So look for someone to co-work or co-lead with rather than try to be a lone superhero. It gives you the freedom to step away if necessary. Ask yourself: if I had to stop this tomorrow, what would happen?

It’s easy to become overwhelmed without realising it. You need to regularly check you’re looking after yourself, are still motivated, and aren’t taking on too much. Be accountable to your family, colleagues and friends, and step back if necessary.

Look after others. It’s healthy for an open source community to have people who think differently to you. Be respectful of other people and aware that miscommunication is very easy online, particularly with people whose native language is different to yours. But also accept that you’ll never be able to make everyone happy.

Make sure people are really OK even if they appear fine. Experienced contributors: remember that you were once a beginner, and provide opportunities and safe spaces to include new people. Appreciate people for who they are and not just the work they do.

After a short break we split into two tracks.

Deji Akala provided an interesting look into the technical details of what happens on each page request. Along the way he summarised various parts of Drupal and concepts such as the autoloader, Symfony handlers, the service container and event handling.

It's an interesting exercise to unpick the index.php file line by line and discover what happens behind the scenes in a single line of code:

$response = $kernel->handle($request);

I then gave a short talk about Composer and Drupal. I’ve spoken to a number of people recently and it’s become clear that there’s still a bit of confusion surrounding how to use Composer with Drupal. I certainly found it unclear and started to look into it.

I pitched this at beginner developers, and the main things I wanted people to go away understanding were:

  • what the require, update and install commands really do
  • the difference between Drupal itself and the various template projects available

That was the first time I'd given that talk. It felt a bit raw, but it led to some interesting Q&A time and gave me valuable insight for enhancing it in future.

To others contemplating public speaking - do it! Events like this are an ideal place to start, everyone’s friendly and on your side. You’ll gain knowledge, experience and friends from doing it. I was really glad to see that several of the speakers here were first timers - well done!

Ross Gratton shared some insights into using front end task runners like Gulp with Drupal.

Ross has been working on a large Drupal site utilising several themes, in 24 languages and with over 125 custom modules. He discussed the pros and cons of different architectural decisions, such as where to put source code and assets, what to put in version control and how to manage conflicts on such a large site.

He then shared some of the process of separating assets out to the brand level as opposed to a project level, treating a style guide or pattern library as a separate deliverable.

After lunch, George Boobyer spoke on web security, a topic often overlooked in the planning and budgeting of projects.

Security is perceived as complex but isn’t that hard, and any effort you make is rewarded. Recently we’ve seen a lot of ransomware attacks, but often these just have the same impact as a disk failure, so alongside keeping software up to date, have backups and test them.

George gave some examples of websites that had been attacked and were now hosting spam content, very often not visible to the naked eye but only to search engines. Often user data is obtained by way of database dumps that have been left accessible to the world - don’t put these in the document root.

Ana Hassel shared some insights as a freelancer. As a site builder, Ana has been able to use Drupal to focus on her clients’ needs and come up with a repeatable process for estimating and selling her work.

Ana also shared how she had invested some time learning the command line and setting up scripts for everyday tasks. This has given her a better, more repeatable workflow and more predictable deployment and hosting.

An interesting perspective on personal development came from Johan Gant. I felt it complemented Emma’s keynote well with some recurring themes, and gave the day a nice mix of technical and human elements.

Johan covered issues such as imposter syndrome, depression and burnout. Burnout often comes from a lack of engagement, and seems to be a particular risk for knowledge workers. If the values you have aren't aligned with those of your employer or project, you can burn out very quickly.

Be selective about what you learn—patterns and techniques will last for a long time, whereas frameworks come and go. You need to make time to explore new things, but make sure you are following your interests rather than trends. Avoid stagnation—ask yourself if what you’re doing is satisfying. It’s healthy to seek new challenges, but means getting out of your comfort zone.

Lee Stone finished by sharing about how his organisation does extensive code reviews.

As well as preventing bugs, code reviews aid in training. New developers can learn the business by reading code, and junior developers can grow by asking why something is the way it is, or by asking about things they don’t understand. They often bring fresh ideas this way.

It’s important to review the code, not the person writing it. So don’t make these things too personal, and don’t take them personally! Prefer terms like “we” rather than “I” and “you” to foster a sense of team, and provide solutions rather than just stating something’s wrong.

After the talks we headed to ZeroDegrees in Bristol for a social time. It was great to catch up over dinner with people I hadn’t seen for a while, and make some new friends.

Thanks to everyone who helped make DrupalCamp Bristol such a great event. See you next year!

Jun 04 2017
Jun 04
5 June 2017

This is the last part of a series on improving the way date ranges are presented in Drupal, by creating a field formatter that can omit the day, month or year where appropriate, displaying the date ranges in a nicer, more compact form:

  • 24–25 January 2017
  • 29 January–3 February 2017
  • 9:00am–4:30pm, 1 April 2017

The first post looked at porting some existing code from Drupal 7 to Drupal 8, adding an automated test along the way. In the second post, we made the format configurable.

There’s currently no administrative interface though, so site builders can’t add and edit formats from Drupal’s UI. We’ll add that in this last post.

Routing

According to the routing overview on drupal.org, a route is a path which is defined for Drupal to return some sort of content on.

For our administrative interface, we want to define a number of routes:

  • /admin/config/regional/date_range_format - show a list of the formats, with links to:
  • /admin/config/regional/date_range_format/add
  • /admin/config/regional/date_range_format/*/edit
  • /admin/config/regional/date_range_format/*/delete

There are two ways in which our module can provide routes. We could include a routing.yml file along with our module. This file contains the same kind of information as would have been in hook_menu in Drupal 7. But it’s a static file—if we want something that’s dynamic we can provide it at runtime using a route provider.

For dealing with entities, it’s often much easier to use Drupal’s bundled AdminHtmlRouteProvider class. This examines various properties on the entity annotation—we’ll look at those next—and provides suitable routes for us automatically.

To use this route provider, we add the following to the entity annotation:

@ConfigEntityType(
  …
  handlers = {
    "route_provider" = {
      "html" = "Drupal\Core\Entity\Routing\AdminHtmlRouteProvider",
    },
  },
  …
)

At this point we need to run the drupal router:rebuild command from Drupal console. We must do this whenever we change a routing.yml file or any of the properties in the entity that affect routes.

The collection view

An entity can define a collection view—typically a page showing a list of entities with links to edit them. Drupal provides a list builder which can be used to show a list of entities with buttons for common add/edit/delete type tasks. We’ll create one of these for our new configuration entity:

<?php

namespace Drupal\daterange_compact;

use Drupal\Core\Config\Entity\ConfigEntityListBuilder;
use Drupal\Core\Entity\EntityInterface;

class DateRangeFormatListBuilder extends ConfigEntityListBuilder {

  public function buildHeader() {
    /* return an array of column headings */
  }

  public function buildRow(EntityInterface $entity) {
    /* return an array of column values for the given entity */
  }

}
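To give an idea of the shape of those methods, here's a minimal sketch with a hypothetical pair of columns (the real implementation adds example date ranges too):

public function buildHeader() {
  // Column headings, merged with the standard operations column.
  $header['label'] = $this->t('Name');
  $header['id'] = $this->t('Machine name');
  return $header + parent::buildHeader();
}

public function buildRow(EntityInterface $entity) {
  // One cell per column defined in buildHeader().
  $row['label'] = $entity->label();
  $row['id'] = $entity->id();
  return $row + parent::buildRow($entity);
}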

We then associate this list builder with our entity by declaring it within the @ConfigEntityType annotation:

handlers = {
  "list_builder" = "Drupal\daterange_compact\DateRangeFormatListBuilder",
}

The actual list builder is quite rich, showing examples of different ranges. You can see the full implementation here.

The collection page

Once we have the list builder in place, we can add the collection link to our @ConfigEntityType annotation. The route provider will pick up on this link template and provide a route for the entity collection page automatically.

links = {
  "collection" = "/admin/config/regional/date_range_format"
}

By defining the link, our page appears at the appropriate URL. Note that the add/edit/delete links won’t show just yet—we still have to define those.

The screen for listing date/time range formats, provided by the entity list builder.

Updating the main configuration page

In order to reach this new page, we’ll create a menu link on the main configuration page, within the regional and language section. We do that by supplying a daterange_compact.links.menu.yml file:

entity.date_range_format.collection:
  title: 'Date and time range formats'
  route_name: entity.date_range_format.collection
  description: 'Configure how date and time ranges are displayed.'
  parent: system.admin_config_regional
  weight: 0

That link gives us the starting point for our interface:

Date/time range formats are accessed via the main configuration page.

We can now view all the date and time range formats from the main administrative interface in Drupal. Next we'll build some forms to maintain them, after which the add/edit/delete links should start to appear on our collection page.

Forms

We need a form to be able to edit date range formats. The same form is used to create new ones. Drupal provides a lot of built-in functionality via the EntityForm class which we can extend. Drupal will then take care of loading and saving the entity. We just need to provide the form elements to map values on to our entity’s properties.

Adding & editing

We can add any number of forms, but we only need one to edit an existing format, and we can reuse the same form for adding a new format. This form is defined as a class, and lives in src/Form/DateRangeFormatForm.php:

<?php

namespace Drupal\daterange_compact\Form;

use Drupal\Core\Entity\EntityForm;

class DateRangeFormatForm extends EntityForm {
  /* implementation */
}

Configuration entities don’t use the field API, so we need to build the form ourselves. Although the form looks quite complicated and has a lot of options, it’s reasonably easy to build—each property in the configuration entity can be populated by a single element, like this:

$form['label'] = [
  '#type' => 'textfield',
  '#title' => $this->t('Label'),
  '#maxlength' => 255,
  '#default_value' => $this->entity->label(),
  '#description' => $this->t("Name of the date time range format."),
  '#required' => TRUE,
];

The full implementation of the form is here.
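Beyond the individual elements, config entity forms typically also override save() to redirect somewhere sensible afterwards. A minimal sketch:

public function save(array $form, FormStateInterface $form_state) {
  $status = $this->entity->save();
  // After saving, return to the list of formats.
  $form_state->setRedirectUrl($this->entity->toUrl('collection'));
  return $status;
}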

We also need to tell Drupal about this form, which we can do by adding the following to the @ConfigEntityType annotation:

"form" = {
  "add" = "Drupal\daterange_compact\Form\DateRangeFormatForm",
  "edit" = "Drupal\daterange_compact\Form\DateRangeFormatForm",
}

We also add some links, to match up operations such as add and edit with the new form. These are also defined in the @ConfigEntityType annotation:

links = {
  "add-form" = "/admin/config/regional/date_range_format/add",
  "edit-form" = "/admin/config/regional/date_range_format/{date_range_format}/edit",
}

If we look at the collection view again we see that alongside each format there is a link to edit it. That is because of the edit-form link declared in the annotation.

We also want a link at the top of that page, to add a new format. We can do that by providing an action link that refers to the add-form link. This belongs in the daterange_compact.links.action.yml file:

entity.date_range_format.add_form:
  route_name: 'entity.date_range_format.add_form'
  title: 'Add format'
  appears_on:
    - entity.date_range_format.collection

At this point we have a means of adding and editing formats. Our form looks like this:

The date and time range configuration page, showing our new format for editing The screen for editing date/time range formats.

Deletion

Deleting entities is slightly different. We want to show a confirmation page before performing the actual deletion. The EntityDeleteForm class does just that. All we need to do is subclass it and provide the wording for the question:

<?php

namespace Drupal\daterange_compact\Form;

use Drupal\Core\Entity\EntityDeleteForm;

class DateRangeFormatDeleteForm extends EntityDeleteForm {

  public function getQuestion() {
    return $this->t('Are you sure?');
  }

}

We declare this form and link on the @ConfigEntityType annotation in the same way as for add/edit:

"form" = {
  "delete" = "Drupal\foo\Form\DateRangeFormatDeleteForm"
}
links = {
  "delete-form" = "/admin/config/regional/date_range_format/{date_range_format}/delete",
}

Conclusion

That's it. We've got a field formatter to render date and time ranges in a very flexible way. Users can define their own formats through the web interface, and these are represented as configuration entities, giving us all the benefits of the configuration management initiative, such as predictable deployments and multilingual support.

The module is available at https://www.drupal.org/project/daterange_compact.

I hope you found this write-up useful.

Want to help?

I’m currently working on getting this module up to scratch in order to have coverage from the Drupal security team. If you want to help make that happen, please review the code following this process and leave a comment on this issue. Thanks :-)

May 02 2017
May 02
3 May 2017

This is the second part of a series on improving the way date ranges are presented in Drupal, by creating a field formatter that can omit the day, month or year where appropriate, displaying the date ranges in a nicer, more compact form, e.g.:

  • 24–25 January 2017
  • 29 January–3 February 2017
  • 9:00am–4:30pm, 1 April 2017

The first post dealt with porting some existing code from Drupal 7 to Drupal 8, adding an automated test along the way.

In this post, we'll do some of the work to make the format customisable:

  • a new config entity to store formats
  • moving the rendering logic into a service

This is a long post, and we’ll cover a lot of ground. You may want to make a cup of tea before we start.

Configurable formats

Drupal’s core date formats are stored as configuration entities. Each one consists of a single pattern made up of PHP’s date & time formatting symbols. We’ll create another type of configuration entity for date range formats. Each format will need several different patterns:

  • a default date pattern. This is used to show the full single date if the start and end dates are the same. It’s also used to show both dates fully where the range spans several years.
  • patterns for the start and end dates, where a range spans several days within the same month
  • patterns for the start and end dates, where a range spans several months within the same year

We can also support times in a similar fashion:

  • a default pattern. This is used to show the full single date & time if the start and end values are the same. It’s also used to show both dates & times fully if the range spans several days.
  • optional patterns for the start and end values, where we have a range contained within a single day

All of these 8 patterns can be stored within a single configuration entity, alongside custom separator text.

Defining a configuration entity

To create the configuration entity, we need some custom code. We can use Drupal console to generate a certain amount of boilerplate code as a starting point. Be aware that Drupal console will produce a lot of code all in one go, including the admin interface which we’ll look at in part 3.

I find it helpful to generate this boilerplate into a temporary location, then copy the files over one at a time, editing them as I go. It slows things down in a good way, and forces me to understand the code I’m going to be responsible for maintaining.

Entity class and definition

Let’s start with the entity class, which lives in src/Entity/DateRangeFormat.php:

<?php

namespace Drupal\daterange_compact\Entity;

use Drupal\Core\Config\Entity\ConfigEntityBase;

class DateRangeFormat extends ConfigEntityBase implements DateRangeFormatInterface {
  /* implementation */
}

In order to tell Drupal that this is a configuration entity, we need to add an annotation to the class. Similar to the annotation on the field formatter in part 1, we’re telling Drupal about the existence of a new date range format configuration entity, plus some information about it.

/**
 * @ConfigEntityType(
 *   id = "date_range_format",
 *   label = @Translation("Date range format"),
 *   config_prefix = "date_range_format",
 *   entity_keys = {
 *     "id" = "id",
 *     "label" = "label",
 *     "uuid" = "uuid"
 *   }
 * )
 */
class DateRangeFormat...

The class adheres to a corresponding DateRangeFormatInterface. We’ll refer to the interface, rather than the entity class directly, elsewhere in the code.

The complete implementation is here.

Schema

End users will create instances of the configuration entity—one per format. These can be represented as a single YAML file, and imported and exported as such. The schema describes the structure of these YAML files, dictating the type of data we’re storing inside the entity.

The schema definition is itself a YAML file, and lives in config/schema/daterange_compact.schema.yml:

daterange_compact.date_range_format.*:
  type: config_entity
  label: 'Date range format config'
  mapping:
    # properties of the config entity

The complete schema implementation is here.

Updating entity definitions

If we look at the status report of our site, we’ll see that there is an error: Mismatched entity and/or field definitions. Each time we add or change code that defines entities, we need to update Drupal’s internal copy of the definitions. We can do this with the drush entup command, and we should get the following output:

The following updates are pending:
date_range_format entity type :
  The Date range format entity type needs to be installed.

Do you wish to run all pending updates? (y/n): y
 [success] Cache rebuild complete.
 [success] Finished performing updates.

That’s the bare minimum for defining a config entity. Right now the only way to manage them is by editing YAML files by hand, but that’s enough to start working on the improved functionality for now. In the next post we’ll look at an administrative interface for editing these formats.

Providing a default entity

Any Drupal module can provide configuration entities as part of its install process. The contact module is a good example—enabling the module will create the feedback and personal contact forms, each a configuration entity. You can then change those forms, remove them or add new ones.

Let’s make our module provide a date range format called Medium, following the same naming convention as Drupal’s standard date formats. We do that by providing a file called {$modulename}.{$config_entity_type}.{$machine_name}.yml in the config/install directory.

So the module will contain a config/install/daterange_compact.date_range_format.medium.yml file that looks like the following. You’ll see the properties follow the schema defined earlier. The patterns are made up of PHP date & time formatting symbols.

langcode: en
status: true
dependencies: {  }
id: medium
label: Medium
date_settings:
  default_pattern: 'j F Y'
  separator: ' - '
  same_month_start_pattern: j
  same_month_end_pattern: 'j F Y'
  same_year_start_pattern: 'j F'
  same_year_end_pattern: 'j F Y'
datetime_settings:
  default_pattern: 'j F Y H:i'
  separator: ' - '
  same_day_start_pattern: 'j F Y H:i'
  same_day_end_pattern: 'H:i'

We’ll need to re-install the module for this to take effect, but if we do, and then export the site configuration, we should get a copy of this YAML file along with the rest of the configuration.

A note on cacheability

Normally, whenever we produce some sort of output in Drupal 8, we need to provide its cacheability metadata, which describes how it may be cached. Whenever something changes, the render cache can be examined and anything that depended on it can be cleared.

Certain items, like date formats, are so widely used they are treated differently. From drupal.org:

The DateFormat config entity type affects rendered content all over the place: it's used pretty much everywhere. It seems appropriate in this case to not set cache tags that bubble up, but to just clear the entire render cache. Especially because it hardly ever changes: it's a set-and-forget thing.

Invalidating the rendered cache tag should be done sparingly. It’s a very expensive thing to do, clearing the entire render cache. But it’s appropriate here to follow what the core date format entity does.

We need to add this to the @ConfigEntityType annotation:

list_cache_tags = { "rendered" }

and this to the DateRangeFormat class:

public function getCacheTagsToInvalidate() {
  return ['rendered'];
}

Refactoring

Until now, the code for rendering date ranges has been part of the field formatter. As it's getting more complicated, it makes sense to move it out of there into its own, distinct location. That means we'll be able to use it outside the context of a field.

We’ll use a Drupal service for this. A service is a separate class in which we can handle the business logic independently of field formatters. When Drupal manages a request, it will take care of creating an instance of the service class and making it available to other parts of the system.

Our service class is quite straightforward:

<?php

namespace Drupal\daterange_compact;

class DateRangeFormatter implements DateRangeFormatterInterface {

  public function __construct(…) {
    /* implementation */
  }

  public function formatDateRange($start_timestamp, $end_timestamp, $type = 'medium', …) {
    /* implementation */
  }

  public function formatDateTimeRange($start_timestamp, $end_timestamp, $type = 'medium', …) {
    /* implementation */
  }

}

The complete implementation of DateRangeFormatter is here.

We then tell Drupal about the service by including it in the module’s daterange_compact.services.yml file. We can also specify dependencies on other services, and these will be passed to our object’s constructor.

services:
  daterange_compact.date_range.formatter:
    class: Drupal\daterange_compact\DateRangeFormatter
    arguments: ['@entity_type.manager', '@date.formatter']

Now our formatting functions are available to use within other parts of Drupal via the daterange_compact.date_range.formatter service. We’ll access it from the field formatter next.
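In the meantime, any code can already reach it through the service container. A quick sketch, using the medium format defined earlier:

$formatter = \Drupal::service('daterange_compact.date_range.formatter');
$text = $formatter->formatDateRange($start_timestamp, $end_timestamp, 'medium');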

You can find more documentation about Drupal 8 services on drupal.org.

Pulling it all together

We need to revisit the field formatter and make a couple of changes. First we remove the hardcoded formatting logic that was there previously, and instead delegate that work to the service. Second, we need a way to let the site builder choose a particular format.

Dependency injection

The field formatter needs access to the new service. Drupal 8 makes use of dependency injection for this sort of thing—a way to access dependencies at runtime without being tied to any particular implementation.

There are a few steps involved in getting to use our service this way:

First, we want it for the lifetime of the field formatter, so it needs to be in the constructor. The constructor for FieldFormatterBase takes quite a lot of parameters. We need to accept them all as well, plus our formatter. Some of the other parameters are hidden with … here for clarity.

function __construct(…, DateRangeFormatterInterface $date_range_formatter) {
  parent::__construct(…);

  $this->dateRangeFormatter = $date_range_formatter;
}

Next, we need to state that this formatter makes use of dependency injection. We do that by making the class implement ContainerFactoryPluginInterface, which declares a create function that should be used to create instances. The create function is passed the container, an object from which we can get services by name. In the create function we get the service by name and pass it to the constructor:

static function create(ContainerInterface $container, …) {
  return new static(…,
    $container->get('daterange_compact.date_range.formatter')
  );
}

Now we will always have access to a formatter via the $this->dateRangeFormatter property.

Field formatter settings

Whenever we use this formatter, we can choose a particular date range format. We store that choice in the field formatter settings, once for each time a field is displayed.

Adding field formatter settings is documented quite thoroughly on drupal.org, but it involves a YAML file describing the type of data we want to store, some default settings, a form and a summary.

The schema entry is named field.formatter.settings.{$formatter_name}:

field.formatter.settings.daterange_compact:
  type: mapping
  label: 'Date/time range compact display format settings'
  mapping:
    format_type:
      type: string
      label: 'Date/time range format'

The extra functions we need to implement in the DateRangeCompactFormatter class are as follows:

public static function defaultSettings() {
  /* an array of default values for the settings */
}

public function settingsForm(array $form, FormStateInterface $form_state) {
  /* form from which to choose from a list of formats */
}

public function settingsSummary() {
  /* text describing what format will be used */
}
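As an illustration, a sketch of settingsForm() that offers every available format in a select list (assuming the DateRangeFormat entity class is imported):

public function settingsForm(array $form, FormStateInterface $form_state) {
  // Build a select list of all defined date range format entities.
  $options = [];
  foreach (DateRangeFormat::loadMultiple() as $id => $format) {
    $options[$id] = $format->label();
  }
  $form['format_type'] = [
    '#type' => 'select',
    '#title' => $this->t('Date/time range format'),
    '#options' => $options,
    '#default_value' => $this->getSetting('format_type'),
  ];
  return $form;
}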

Now when anyone opts to use the compact formatter to render a date range field, they will be prompted to choose a date range format.

Display

Finally, we have everything we need to render the date range using the chosen format. We can change the viewElements function, removing the hardcoded stuff we had before, and delegating to our date range formatter service:

$format = $settings['format_type'];
$formatter = $this->dateRangeFormatter;
$output = $formatter->formatDateRange($start_timestamp, $end_timestamp, $format, …);

The complete implementation of the formatter is here.

Test it!

We've added substantial functionality, so we need to make sure there have been no regressions in what we had before. We also want to test the new configurable formats.

The test should pass as before, with one small tweak. In the setUp function, we need to load the configuration for the daterange_compact module, so that the medium format is present.

We’ll also define a new usa format, for US-style month, day, year display. That is created in the setUp function:

protected function setUp() {
  parent::setUp();
  /* existing set up code */
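  // Load this module's default configuration so the "medium" format
  // exists for the tests (a sketch: KernelTestBase::installConfig()).
  $this->installConfig(['daterange_compact']);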
  /* create a new date range format called "usa" */
}

We’ll add another test specifically for rendering the USA format:

function testUSAFormats() {
  $all_data = [
    ['start' => '2017-01-01', 'end' => '2017-01-01', 'expected' => 'Jan 1, 2017'],
    ['start' => '2017-01-02', 'end' => '2017-01-03', 'expected' => 'Jan 2–3, 2017'],
    ['start' => '2017-01-04', 'end' => '2017-02-05', 'expected' => 'Jan 4–Feb 5, 2017'],
    ['start' => '2017-01-06', 'end' => '2018-02-07', 'expected' => 'Jan 6, 2017–Feb 7, 2018'],
  ];

  foreach ($all_data as $data) {
    /* 1. programmatically create an entity and populate start/end dates */
    /* 2. programmatically render the entity */
    /* 3. assert that the output contains the expected text */
  }
}

Whilst the goal should be to have as much test coverage as possible, it isn't feasible to cover every combination of dates, formats and settings. But we should try to test lots of variations of date and datetime ranges, edge cases, possible formats and results that would vary by timezone. And if something doesn't behave as expected later, we can write a test to demonstrate it before changing the code. Later we can verify any fixes via the new test, and make sure there are no other regressions too!

You can find the full test implementation here.

Phew, that was a lot!

In the final post, we’ll provide an admin interface for editing the date range formats and look at some implications of using this in a multilingual environment.

Apr 20 2017
Apr 20
21 April 2017

A while ago I wanted to present events with a date range on a Drupal 7 site. Drupal’s contributed date module provided a way to store the data, but the display wasn’t to my liking, always showing the complete start and end date.

Wouldn’t it be nice to show the dates in a more compact fashion, by not repeating the day, month or year where they aren’t necessary? Like this:

  • 24–25 January 2017
  • 29 January–3 February 2017
  • 9:00am–4:30pm, 1 April 2017

(and yes, those are en dashes!)

There didn’t seem to be a contributed module available, so I wrote some bespoke code for the project. It only supports one particular UK-specific date format, and there’s no support for different languages.

Over the last few days, I’ve spent some time porting it to Drupal 8 and improving it, suitable for release as a new contributed module. I thought I’d write about this process in the form of a tutorial over a few posts. I hope you’ll find it useful.

Let’s port this to Drupal 8

Firstly, as of Drupal 8.3, there’s an experimental core date range module, so we’re no longer reliant on another contributed module for data storage.

As mentioned, the original code doesn't offer much in the way of customisation. Neither are there any guarantees about what will happen when different languages and timezones are introduced. While there are lots of things we can improve later, for now let's get what was there before working with Drupal 8.

This module consists of a field formatter. These are well documented on drupal.org, and make for quite a gentle introduction to Drupal 8 development.

It’s helpful to think of field formatters in two parts—a definition, describing what it is, and an implementation, concerned with how it works. This is quite a common pattern in Drupal. In D7, we’d see this pattern implemented as two hook functions; in D8, as a plugin and annotation.

D7 .module file → D8 plugin

In Drupal 7, lots of code lives in a single, monolithic .module file. Drupal 8 makes use of object oriented programming, so individual components such as field formatters are each defined in their own classes. A plugin is a class that implements particular functionality and is discoverable at runtime.

Our plugin is defined in src/Plugin/Field/FieldFormatter/DateRangeCompactFormatter.php and looks like this:

<?php

namespace Drupal\daterange_compact\Plugin\Field\FieldFormatter;

use Drupal\Core\Field\FormatterBase;

class DateRangeCompactFormatter extends FormatterBase {
  /* implementation */
}

D7 info hook → D8 annotation

Our Drupal 7 implementation has a hook function that specifies there is a formatter called daterange_compact (and labelled Compact), that is suitable for date/time fields:

/**
 * Implements hook_field_formatter_info().
 */
function daterange_compact_field_formatter_info() {
  $info['daterange_compact'] = array(
    'label' => t('Compact'),
    'field types' => array('datetime'),
    'settings' => array(),
  );
  return $info;
}

In Drupal 8, we supply the same information but using an annotation. Note the change of field type: Drupal 8.3 comes with an experimental date range field type that's separate from the singular date field.

/**
 * @FieldFormatter(
 *   id = "daterange_compact",
 *   label = @Translation("Compact"),
 *   field_types = {"daterange"}
 * )
 */
class DateRangeCompactFormatter...

D7 → D8 implementation

In Drupal 7, the formatting itself happens in another hook, that gets passed the field values (via the $items parameter) and returns the desired output. I’ve left out the actual implementation for brevity; complete versions are available here (D7) and here (D8).

/**
 * Implements hook_field_formatter_view().
 */
function daterange_compact_field_formatter_view($entity_type, $entity,
      $field, $instance, $langcode, $items, $display) {
  /* given the field values, return a render array */
}

This is really similar in Drupal 8, except that we define a function called viewElements in our class, and the field values are accessible through an object.

function viewElements(FieldItemListInterface $items, $langcode) {
  /* given the field values, return a render array */
}
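To give a flavour of the logic, here's a much-simplified, date-only sketch of that method (no time support, no translation or timezone handling, and far less flexible than the real implementation):

function viewElements(FieldItemListInterface $items, $langcode) {
  $elements = [];
  foreach ($items as $delta => $item) {
    $start = \DateTime::createFromFormat('Y-m-d', $item->value);
    $end = \DateTime::createFromFormat('Y-m-d', $item->end_value);
    if ($start->format('Y-m-d') === $end->format('Y-m-d')) {
      // Same day: a single date, e.g. "1 January 2017".
      $text = $start->format('j F Y');
    }
    elseif ($start->format('Y-m') === $end->format('Y-m')) {
      // Same month: "2–3 January 2017".
      $text = $start->format('j') . '–' . $end->format('j F Y');
    }
    elseif ($start->format('Y') === $end->format('Y')) {
      // Same year: "4 January–5 February 2017".
      $text = $start->format('j F') . '–' . $end->format('j F Y');
    }
    else {
      // Different years: both dates in full.
      $text = $start->format('j F Y') . '–' . $end->format('j F Y');
    }
    $elements[$delta] = ['#plain_text' => $text];
  }
  return $elements;
}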

Test it!

To see this in action, let’s set up a content type with a field of type date range. The field is date only—at the moment this formatter doesn’t support times (something we’ll change later). We’ll populate the field with four pairs of values, all of which should appear differently:

  • 1 January 2017 (start and end date are the same)
  • 2–3 January 2017 (same month)
  • 4 January–5 February 2017 (different months, same year)
  • 6 January 2017–7 January 2018 (different years)

A quick check reveals the output is as expected, so we’ve successfully ported the formatter to Drupal 8!

Now is an ideal time to automate that check with a unit test. We’re going to be adding more functionality to this module, during which we may well inadvertently introduce regressions. The test will help flag those up.

Writing a test in D8

Testing field formatters is done with PHPUnit. The timestamp formatter does a similar thing to our formatter, so we can examine that to see how it works. There is a TimestampFormatterTest class that extends KernelTestBase, so let’s create a similar class in modules/daterange_compact/src/Tests/DateRangeCompactFormatterTest.php:

<?php

namespace Drupal\daterange_compact\Tests;

use Drupal\KernelTests\KernelTestBase;

class DateRangeCompactFormatterTest extends KernelTestBase {
  /* implementation */
}

The setUp function will create an arbitrary entity type with fields. It makes use of the entity_test module to define that entity type and bundle, to which we add an appropriate daterange field and define its default display settings.

protected function setUp() {
  parent::setUp();
  /* 1. install the entity_test schema */
  /* 2. programmatically create a field of type daterange */
  /* 3. programmatically create a field instance based on the above */
  /* 4. set the display settings to use our new formatter */
}

Each function whose name begins with test corresponds to a single test. Within each test we can iterate through a set of values, populating an entity, rendering it and comparing the expected output with the actual output.

function testCompactFormatter() {
  $all_data = [
    ['start' => '2017-01-01', 'end' => '2017-01-01', 'expected' => '1 January 2017'],
    ['start' => '2017-01-02', 'end' => '2017-01-03', 'expected' => '2–3 January 2017'],
    ['start' => '2017-01-04', 'end' => '2017-02-05', 'expected' => '4 January–5 February 2017'],
    ['start' => '2017-01-06', 'end' => '2018-02-07', 'expected' => '6 January 2017–7 February 2018'],
  ];

  foreach ($all_data as $data) {
    /* 1. programmatically create an entity and populate start/end dates */
    /* 2. programmatically render the entity */
    /* 3. assert that the output contains the expected text */
  }
}

You can see the full implementation of the test class here.

This test won’t stop bugs, but it will mean that if the behaviour changes in such a way that the given dates start producing different output, we’ll have a way of knowing. At that point, either the code or the test might need some work.

Running the test

To run PHPUnit, we need to set it up by creating a phpunit.xml file in the core directory. This is documented on drupal.org, and there's an example file provided.

We also need a database (different to the one used for the site itself).

To run all the tests in our module, run the following command from the core directory:

../vendor/bin/phpunit ../modules/custom/daterange_compact/

If everything worked, we should get a result like the following:

PHPUnit 4.8.27 by Sebastian Bergmann and contributors.

.

Time: 4.92 seconds, Memory: 6.75Mb

OK (1 test, 6 assertions)

The test passed! That’s a good first version of our updated contrib module.

In the next post we’ll look at some improvements, namely making the format configurable and adding support for times.

Apr 12 2017
Apr 12

As a Swiss-based Drupal agency, we have to create a lot of multilingual sites. Since Switzerland has three official languages (German, French, Italian) and even one more national language (Rumantsch), we are used to this requirement and we have found our way with Drupal to make this an easy task (usually). We mainly used node translations in Drupal 7 for maximum flexibility. We used to separate languages from each other using the various i18n modules, language-specific menus, blocks, URL patterns, terms and so on.

With Drupal 8, things changed.
I struggled a little doing multilingual sites in Drupal 8 the same way I was used to in Drupal 7, because node translation is not available anymore (which is good), so I had to find another way to achieve a similarly easy-to-handle translation system, for us and for our clients. Let me explain what I have learned.

Image: drupal8multilingual.org

Drupal 8 multilanguage challenges

Challenge 1: Node add / edit menu handling

The main challenge I had using Drupal 8 was the ease of building your menus directly from the node creation page. You can do it, but only for the initial language. If you try to add a translated node to another menu or rename the item, it always ends up moving or renaming the source node instead of adding a link to the translation. So building a navigation directly from the node creation page, or adding translations to the menu, can become quite confusing. A workaround was to add all navigation items manually in the menu administration if you are using a menu per language. With lots of languages and menus/items, this is not really a convenient task. Fortunately, translations from the node creation page have been implemented in a later release of Drupal 8.

Challenge 2: Untranslated Nodes show up in Menu

Another thing which bothered me was that untranslated nodes show up in the navigation (if you use only one menu). This can be quite confusing, since most of the time not every page is translated into every language, or in some languages you need a little more than in others. You can read a lot about this topic and the reasons behind it (e.g. here and here). However you do it, it's always wrong in some situations and perfectly fine in others. But being "limited" and "locked in" to a certain way is not nice, and you have to deal with it. To sum up: once a node is put into a menu, it will show up everywhere, regardless of whether there are translations or not.

Challenge 3: Language Switcher shows all languages – always.

Somewhat confusing is the language switcher. In Drupal 7, a language link was not available, or was struck through, if there was no translation. In Drupal 8, every language is always visible and linked. So if you look at a German page which is only available in German, the language switcher will still present links to the same node in every language. A click on those language links mainly changes the interface language, but the node content remains the same (since it's not translated), usually also with a drupalish URL (node/xxxx), because there is no translation for the node and therefore no URL alias available. This behavior is confusing and wrong, from my point of view.

An example to illustrate the above-written challenges.


English Front-Page with mixed navigation items.

The screen above shows an installation with 2 languages (English and German). The English Page is a basic page which has a translation. English is selected. If you choose Deutsch on the language switcher, the English Page becomes Deutsche Seite (see image below) and shows the German content. So far so good. But the second menu item you see with the title Über uns (nur Deutsch) should not appear here since it’s only available in German. But it does. And if you actually go on this page, you will see the German text with everything English around it and no URL-Alias (/node/2 in this example). This is usually not very useful for us.


German only Page – Language Switcher visible.

Also, the language switcher shown in the image above is, from my point of view, wrong or at least not very useful. It shows a link to the English version, but there is no English translation for this node. So why is it there? To see a German page with English decoration? Not sure. But I want to get rid of this link, or at least modify it to be struck through if the language is not available.

How to improve this?

Luckily, the Drupal community is always good for help. After some “research” on the web, I finally found (besides lots of discussions and comments in the issue queues) a way to achieve the desired setup.

To sum up again: I want to see only menu items which are available in my language, and only see a link to another language if a translation is available.

Since there is no patch yet and there are still some ongoing discussions on drupal.org, you need to implement it on your own. Create the following two small custom modules.

Hide untranslated menu items

Code from https://www.drupal.org/node/2466553#comment-11991690. Credits go to michaelkoehne.


<?php

use Drupal\Core\Menu\MenuLinkInterface;
use Drupal\menu_link_content\Plugin\Menu\MenuLinkContent;
use Drupal\Core\Language\LanguageInterface;

/**
 * Implements hook_preprocess_menu().
 */
function MYMODULE_preprocess_menu(&$variables) {
  if ($variables['menu_name'] == 'main') {
    $language = Drupal::languageManager()
      ->getCurrentLanguage(LanguageInterface::TYPE_CONTENT)
      ->getId();
    foreach ($variables['items'] as $key => $item) {
      if (!$variables['items'][$key] = MYMODULE_checkForMenuItemTranslation($item, $language)) {
        unset($variables['items'][$key]);
      }
    }
  }
}

function MYMODULE_checkForMenuItemTranslation($item, $language) {
  $menuLinkEntity = MYMODULE_load_link_entity_by_link($item['original_link']);
  if ($menuLinkEntity != NULL) {
    $languages = $menuLinkEntity->getTranslationLanguages();
    // Remove links which are not translated to the current language.
    if (!array_key_exists($language, $languages)) {
      return FALSE;
    }
    else {
      if (count($item['below']) > 0) {
        foreach ($item['below'] as $subkey => $subitem) {
          if (!$item['below'][$subkey] = MYMODULE_checkForMenuItemTranslation($subitem, $language)) {
            unset($item['below'][$subkey]);
          }
        }
      }
      return $item;
    }
  }
}

function MYMODULE_load_link_entity_by_link(MenuLinkInterface $menuLinkContentPlugin) {
  $entity = NULL;
  if ($menuLinkContentPlugin instanceof MenuLinkContent) {
    $menu_link = explode(':', $menuLinkContentPlugin->getPluginId(), 2);
    $uuid = $menu_link[1];
    $entity = \Drupal::service('entity.repository')
      ->loadEntityByUuid('menu_link_content', $uuid);
  }
  return $entity;
}

Hide untranslated languages in language switcher

Code from https://www.drupal.org/node/2791231#comment-12004615, slightly adapted: untranslated links get a class instead of being removed by default. Credits to Leon Kessler.

<?php

/**
 * @file
 * Hide language switcher links for untranslated languages on an entity.
 */

use Drupal\Core\Entity\ContentEntityInterface;

/**
 * Implements hook_language_switch_links_alter().
 */
function MYOTHERMODULE_language_switch_links_alter(array &$links, $type, $path) {
  if ($entity = MYOTHERMODULE_get_page_entity()) {
    $new_links = array();
    foreach ($links as $lang_code => $link) {
      try {
        if ($entity->getTranslation($lang_code)->access('view')) {
          $new_links[$lang_code] = $link;
        }
      }
      catch (\InvalidArgumentException $e) {
        // This language is untranslated, so mark the link with a class
        // instead of removing it.
        $link['attributes']['class'][] = 'not-translated';
        $new_links[$lang_code] = $link;
      }
    }
    $links = $new_links;
    // If we're left with less than 2 links, then there's nothing to switch.
    // Hide the language switcher.
    if (count($links) < 2) {
      $links = array();
    }
  }
}

/**
 * Retrieve the current page entity.
 *
 * @return Drupal\Core\Entity\ContentEntityInterface
 *   The retrieved entity, or FALSE if none found.
 */
function MYOTHERMODULE_get_page_entity() {
  $params = \Drupal::routeMatch()->getParameters()->all();
  $entity = reset($params);
  if ($entity instanceof ContentEntityInterface) {
    return $entity;
  }
  return FALSE;
}

Please note: the code above comes from Drupal.org, so thanks go to the original authors linked above.

Enable those two modules and you’re all set!

I have not encountered any issues with these two modules yet. If the way Drupal handles these cases ever changes, you just need to switch off the modules and everything should be back to normal. So there is nothing to lose, right?

There are other approaches that alter the menu block. One of them is Menu Block Current Language, but I had no luck with it. On my most recent project it worked with a single menu, but not when the menu was split across two blocks (with different starting levels).

I would love to hear how you handle these cases, or how you deal with i18n in general. I’m sure there are a gazillion other ways to do it.

Apr 07 2017
Apr 07

After implementing some larger enterprise Drupal 8 websites, I would like to share some insights into solving common issues in the deployment workflow with Drupal 8 CMI.

Introduction to Drupal CMI

First of all, you need to understand how configuration management in Drupal 8 works. CMI allows you to export all configuration and its dependencies from the database into yml text files. To make sure you never end up in an inconsistent state, CMI always exports everything; by default, you cannot exclude certain configurations.

Example:

If you change some configuration on the live database, those changes will be reverted during the next deployment when you run

drush config-import

This is helpful and ensures you have the same configuration on all your systems.

How can I have different configurations on local / stage / live environments?

Sometimes you want different configurations per environment. For example, we have the Devel module installed only on our local environment, but we want it disabled on the live environment.

This can be achieved by using the configuration split module: https://www.drupal.org/project/config_split

What does Configuration Split do?

This module slightly modifies CMI by implementing a Config Filter (https://www.drupal.org/project/config_filter). Importing and exporting works the same way as before, except that some configuration is read from and written to different directories. Importing configuration still removes configuration not present in the files, so the robustness and predictability of the configuration management remains. And the best thing is: you can still use the same drush commands if you have at least Drush 8.1.10 installed.

Configuration Split Example / Installation Guide

Install config_split using Composer. You need at least version 8.x-1.0-beta4 of the module and Drush 8.1.10 or later for this guide.

composer require drupal/config_split "^1.0"

Enable config_split and navigate to “admin/config/development/configuration/config-split”

drush en config_split -y

Optional: installing the Chosen module makes selecting blacklist / greylist entries much easier. You can enable Chosen only on admin pages.

composer require drupal/chosen "^1.0"

I recommend creating an “environments” subfolder in your config folder. Inside this folder you will have a separate directory for every environment:

Drupal 8 Configuration Management Folders

Now you can configure your environments:

Config Split in Drupal 8 Configuration Management

The most important thing is that you set every environment to “Inactive”. We will activate them later, according to the environment, via settings.php.

Config Split settings with the Drupal 8 Configuration Management

Here is my example where I enable the devel module on local:

Dev Environment Example

Activate the environments via settings.php

This is the most important part of the whole setup. Normally, we never commit the settings.php into git, but we do have an [environment]-settings.php in git for every environment:


settings.php (not in git)

variables-dev.php (in git and included in the settings.php of dev)

variables-live.php (in git and included in the settings.php of live)

settings.local.php (in git and included locally)
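For illustration, here is a minimal sketch of how each environment's settings.php can pull in its variables file (file names follow the list above; the exact include logic is up to your project):

<?php

// settings.php on the dev environment (not in git) - a minimal sketch.
// Include the committed per-environment variables file if it exists.
if (file_exists(__DIR__ . '/variables-dev.php')) {
  include __DIR__ . '/variables-dev.php';
}

// Local development overrides, if present.
if (file_exists(__DIR__ . '/settings.local.php')) {
  include __DIR__ . '/settings.local.php';
}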

You need to add the following line to variables-[environment].php. Change the variable name according to your environment's machine name:


// This enables the config_split module

$config['config_split.config_split.dev']['status'] = TRUE;

If you have done everything correctly and cleared the cache, you will see “active (overridden)” in the config_split overview next to the current environment.

Now you can continue using


drush config-import -y

drush config-export -y

and config_split will do the magic.

How can I exclude certain config files and prevent them from being overridden / deleted on my live environment?

The most prominent candidates for this workflow are webforms and contact forms. In Drupal 7, webforms are nodes and you were able to give your CMS administrator the opportunity to create their own forms.

In Drupal 8, webforms are config entities, which means they will be deleted during deployment if their yml files are not in git.

After testing a lot of different modules and drush scripts, I finally came up with an easy-to-use workflow that solves this issue and gives CMS administrators the ability to create webforms without git knowledge:

Set up an “Excluded” environment

First of all, we need an “excluded” environment. I created a subfolder in my config-folder and added a .htaccess file to protect the content. You can copy the .htaccess from an existing environment, if you are lazy. Don’t forget to deploy this folder to your live system before you do the next steps.

Folders

Excluded

Now you can mark some config files to be excluded / grey-listed on your live environment:


webform.webform.*

contact.form.*

Greylist Webform in Config Split

Set the excluded environment to “Inactive”. We will later enable it on the live / dev environment via settings.php.

Enable “excluded” environment and adapt deployment workflow

We enable the “excluded” environment on the live system via variables-live.php (see above):


// This will allow module config per environment and exclude webforms from being overridden

$config['config_split.config_split.excluded']['status'] = TRUE;

In your deployment workflow / script you need to add the following line before you do a drush config-import:

#execute some drush commands
echo "-----------------------------------------------------------"
echo "Exporting excluded config"
drush @live config-split-export -y excluded
echo "-----------------------------------------------------------"
echo "Importing configuration"
drush @live config-import -y

The drush command “drush @live config-split-export -y excluded” will export all webforms and contact forms created by your CMS administrators into the folder “excluded”. The “drush config-import” command will therefore not delete them and your administrators can happily create their custom forms.

Benefit of disabling “excluded” on the local environment

We usually disable the “excluded” environment on our local environment. This allows us to create complex webforms on our local machines for our clients and deploy them as usual. In the end you can have a mix of customer-created webforms and your own webforms, which is quite helpful.
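For completeness, a minimal sketch of the relevant lines in settings.local.php, assuming the split machine names used above:

// settings.local.php - enable the local "dev" split and keep "excluded"
// inactive, so webforms built locally are exported with the regular config.
$config['config_split.config_split.dev']['status'] = TRUE;
$config['config_split.config_split.excluded']['status'] = FALSE;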

Final note

CMI is a great tool, and I would like to thank the maintainers of the config_split module for their great extension. It is a huge step forward in making Drupal 8 a real enterprise CMS tool.

If you have any questions, don’t hesitate to post a comment.

Mar 06 2017
Mar 06

Creating and publishing quality content within time constraints is a common challenge for many content authors. As web engineers, we are focused on helping our clients overcome this challenge by delivering systems that are intuitive, stable, and a pleasure to operate.

During the architectural phase, it’s critical to craft the editorial experience to the specific needs of content authors to ensure the best content editing experience possible. Drupal 8 makes it even easier than previous versions for digital agencies to empower content creators and editors with the right tools to get the job done efficiently, and more enjoyably.

Our five tips to enhance the content editing experience with Drupal 8 are:

1. Don’t make authors guess - use structured content

2. Configure the WYSIWYG editor responsibly

3. Empower your editorial team with Quick-Edit

4. Enrich content with Media Embeds

5. Simplify content linking with LinkIt

1. Don’t make authors guess - use structured content

The abundance of different devices, screen sizes and form factors warrants the use of structured content. Structured content is content separated into distinct parts, each of which has a clearly defined purpose and can be edited and presented independently from one another according to context.

“How does that relate to a content editor’s experience?” - you may ask.

In years past, it was very popular to give content editors the ability to create “pages” using one big “MS Word-like” text box for writing their articles, news releases, product descriptions, etc. This approach produced content that was not reusable and could only be presented in one strict way. Who wants to navigate within one enormous text area just to move images around?

Those days are long behind us, but even though we all know about the importance of structured content, we sometimes still fail to apply the concept correctly.

Drupal was one of the first Content Management Systems (CMS) to introduce the concept of structured content (the node system, in Drupal 3 back in 2001). In fact, Drupal is no doubt the best CMS for implementing structured content, but its ability to provide a good content authoring experience long lagged behind this solid foundation.

Today, in Drupal 8, editing structured content is a joy!

With the WYSIWYG (What You See Is What You Get) editor and Quick Edit functionality in Drupal core, we can equip our content editors with a best-in-class authoring experience and workflow!

You can see the difference between unstructured and structured D8 content below. Instead of one field containing all text, images, etc., structured content stores each definitive piece of information in its own field, making content entry fast and presentation flexible!

Structured vs unstructured content

The benefits of Drupal 8's structured content approach:

  • The author clearly understands where each piece of information should reside and does not have to factor in markup, layout, and design while editing (see tip #2). Content entry becomes remarkably efficient and allows the author to concentrate on the essence of their message instead of format.
  • The publishing platform is easier to maintain while supporting system scalability.
  • The modular nature of structured content makes migrations between CMS versions or to a completely different CMS much more streamlined. A huge plus for those long-term thinkers!

2. Configure the WYSIWYG editor responsibly

Drupal 8 ships with a WYSIWYG text editor in core. The editor even works great on mobile! In a society so dependent on mobile devices, who wouldn't like to be able to quickly fix a missed typo right from their phone?

Drupal 8 provides superior enhancements to the UX (User Experience) for content authors and editors out of the box. However, with a little configuration, things can be further improved.

When establishing the UI (User Interface) for content authors, site builders should focus on refining rather than whole-sale adoption of the available features. Customizing the WYSIWYG editor is the perfect example of subtle improvements that can immediately make a big difference.

The WYSIWYG text editor is an effective tool for simple content entry since it does not require the end user to be aware of HTML markup or CSS styles. Many great functions like text formatting options (font family, size, color, and background color), source code viewing, and indentation are available at our fingertips, but as site builders we should think twice before adding all those options to the text editor toolbar!

With great power comes great responsibility! When you give content editors control over the final appearance of the published content (e.g. text color, font family and size, image resizing, etc.), it can lead to inconsistent color schemes, skewed image ratios, and unpredictable typography choices.

How do we help our content authors avoid common design / formatting mistakes? Simple!

Use a minimalist approach when configuring the WYSIWYG text editor. Give authors access to the most essential text formatting options that they will need for the type of content they create and nothing more. If the piece of content edited should not contain images or tables - do not include those buttons in the editor. The text editor should be used only for sections of text, not for the page layout.

A properly configured CMS should not give content editors the ability to change the size of the text, play with image positioning within a text section, or add H1 headers within auxiliary content.

Below is an example of a bad vs. good WYSIWYG configuration.

WYSIWYG editor configuration compared

Benefits of the minimal (thoughtful) WYSIWYG configuration:

  • Easy to use
  • Less confusion (though there are edge cases, most editors don’t use all the buttons)
  • Better usability on mobile devices
  • Less risk of breaking established website design

Let's keep our content editors happy and not overcrowd their interfaces when it's not absolutely necessary. It is our duty as software engineers to deliver systems that are easy to use, intuitive, scalable, and uphold design consistency.

3. Empower your editorial team with Quick-Edit

The Quick Edit module is one of the most exciting new features that is included in Drupal 8 core. It allows content authors and editors to make updates to their content without ever leaving the page.

The days of clicking “Edit” and waiting for a separate page to load just to fix a tiny typo are gone! The Quick Edit module eliminates that extra step and allows content editors to save a great deal of time on updating content. As an added bonus - content editors can instantly see how updated content will look within the page flow.

Here’s the Quick Edit functionality in action.

Quick Edit module demo

Quick Edit configuration tip for back-end and front-end developers

To make use of the Quick Edit functionality within the website pages, entities have to be rendered on the page via View Modes and not as separate fields.

This restriction presents a challenge when there is a need to provide Quick Edit functionality for a page constructed by the Views module. More often than not, Views is used to single out and output individual fields from entities. The most-used Views formats, “Table” and “Grid”, currently do not support Quick Edit functionality for usability reasons.

A workaround is to use custom view modes for entities and to create a custom Twig template for each view mode output by Views, in order to accommodate custom layout options.
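As a rough sketch of the kind of glue this workaround needs (MYTHEME and the suggestion pattern are placeholders, not code from any particular theme), a theme can expose per-view-mode Twig templates like this:

/**
 * Implements hook_theme_suggestions_node_alter().
 *
 * Adds a suggestion such as node--article--my-view-mode.html.twig for each
 * view mode, so Views can render full entities with custom templates.
 */
function MYTHEME_theme_suggestions_node_alter(array &$suggestions, array $variables) {
  $node = $variables['elements']['#node'];
  $view_mode = $variables['elements']['#view_mode'];
  $suggestions[] = 'node__' . $node->bundle() . '__' . $view_mode;
}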

4. Enrich content with Media Embeds

In the era of social media, content editors can’t imagine their daily routine without being able to embed their Tweets or videos into the stories they publish on their sites. In Drupal 6 and the early days of Drupal 7, it was pretty challenging to provide this functionality within the WYSIWYG editor. Developers had to configure many different plugins and modules and ask them politely to cooperate.

The Drupal 8 Media initiative has placed the content author’s experience and needs at the forefront of community efforts. As a result, we have access to a great solution for handling external media - CKEditor Media Embed Module. It allows content editors to embed external resources such as videos, images, tweets, etc. via WYSIWYG editor. Here’s an example of the Tweet embed – the end result looks beautiful and requires minimal effort.

"If you're going to build a new site, build it in D8." - someone who knows what they're talking about quotes @jrbeaton @TriDUG pic.twitter.com/8w9GAuuARu

— Savas Labs (@Savas_Labs) January 27, 2017

With all this media goodness available to us, there is no reason why we shouldn’t go the extra mile and configure the CKEditor Media Embed module for our content authors!

5. Simplify content linking with LinkIt

Linking to content has always been a clumsy experience for content editors, especially when linking internally within the same site.

There was always the risk of accidentally navigating away from the page that you were actively editing (and losing any unsaved information) while searching for the page to link to. Also, the default CKEditor link button allowed editors to insert a link, assign it a target value, title, maybe an anchor name, but that was about it. If the link to the internal content changed, there was no way for the page to update and links throughout the website would end up broken.

Let’s not put our content editors through that horrible experience again. LinkIt module for Drupal 8 to the rescue!

With the LinkIt module, the user does not have to copy / paste the URL or remember it. LinkIt provides search for internal content via an autocomplete field. Users can link not only to pages, but also to files stored within the Drupal CMS.

The new and improved linking method is much more sustainable, as it recognizes when the URL of the linked content changes, and automatically produces the correct link within the page without the need to update that content manually.

LinkIt File link demo

Linking to files with LinkIt

My personal favorite feature of the LinkIt module is the flexible configuration options. The LinkIt module makes it possible to limit the type of entities (pages, posts, files) that are searchable via the link field. You can also create a custom configuration of the LinkIt autocomplete dialog for each WYSIWYG editor profile configured on your site. Plus, it is fully integrated with Drupal 8 configuration synchronization.

Final Thoughts

As site builders, there are many improvements that we can make in order to streamline the process of content authoring.

With the right mix of forethought and understanding, Drupal 8 allows web engineers to deliver content publishing platforms that are unique to the client’s specific needs, while making web authoring a productive and satisfying experience.

Mar 05 2017
Mar 05
5 March 2017

This weekend I’ve been at the fifth DrupalCamp London - a gathering of 500 or so designers, developers and business owners using Drupal.

I blogged previously about the CxO day on Friday and day 2 on Saturday. Today was the final day!

Breakfast time!! #drupal #dclondon pic.twitter.com/2EPGgkisZU

— DrupalCamp London (@DrupalCampLDN) March 5, 2017

We kicked off with Jeffrey “jam” McGuire, whose work involves advising companies on the value that open source and contribution can bring.

This keynote presented a challenge to the audience: selling Drupal isn’t enough anymore. We need to provide value to businesses at a higher level.

There was much concern over whether Drupal 8 would make things “too hard” and alienate users with small sites. But those technical aspects aren’t really the problem. Increasingly, IT is becoming commoditised. Work that was previously high value is now much more readily available. WordPress, Squarespace and Shopify all provide a means to get a website online for no or very low cost. Medium and Facebook take things one step further - you can have an online presence without a website at all.

Jam referred to a TED talk by Simon Sinek on the what, how and why:

  • what - “do what I ask”
  • how - “help me to think”
  • why - “think for me”

By focusing on the why at the center of this circle, we can begin to create more value for clients.

Going to the "why" before the "what".. Reversing our way of thinking and understanding more. Inspirational keynote by @HornCologne #dclondon pic.twitter.com/sXc3L8Z5Dl

— Tawny Bartlett (@littlepixiez) March 5, 2017

This idea is something I’ve been thinking about for a while, and had some discussions about in yesterday’s freelancers’ BoF. I’m keen to explore ways to diversify my Drupal offering to clients, perhaps with training, workshops or research.

After a coffee break, I heard Florian Lorétan speak on Elasticsearch. I don’t have any experience with this, but as a volunteer at DrupalCamp I was monitoring the room, and sometimes that means getting to hear talks that you otherwise wouldn’t have thought about going to.

Elasticsearch looked interesting - more widely used than Solr, and with a developer-friendly RESTful API. Florian showed an example of using it with Drupal’s serialization API to provide the search index with appropriate content.

Elasticsearch is new to me, and some of what was covered went over my head. But I’ve seen enough to pique my interest, particularly with regard to saved searches. I hope to play around with it more in future.

Next up was Mark Conroy on the all-too-common scenario of a client signing off a Photoshop design without considering real content or the variety of devices on which the site is browsed.

Mark’s most hated design tool is Photoshop, and his rant against it was a fun moment towards the end of the weekend. But it was good to have someone articulate the problem with using Photoshop for web design. I found his explanation of how browsers and operating systems render fonts differently, and his definition of a PSD as an approximation of how a website might look, a helpful way for me to explain this to people in turn.

Mark followed that up with an explanation of atomic design, and demonstrated using Pattern Lab with Drupal.

The weekend finished with a closing keynote by Danese Cooper. Danese has been involved with open source since 1999 and is currently chair of the Node.js foundation.

Danese gave a history of open source, and some of the pioneers of various projects - Larry Wall (Perl), Richard Stallman (GNU), Ian Murdock (Debian), Mitchell Baker (Mozilla) amongst others. People had to fight to get open source taken seriously, but now that they do, there is a new generation of developers who take that for granted. Many younger developers don’t know or care about the “open source vs free software” debate, for example.

Keynote by @DivaDanese at #dclondon pic.twitter.com/3g7183uWrv

— tvn (@tvnweb) March 5, 2017

Transparency is non-negotiable; however, companies like to control things, and people need to be reminded of that from time to time.

New recruits often don’t know when to push back; they expect code to always be transparent and aren’t aware of the rights they have because of open source. We need to stand up against things that we believe are wrong, both social (bullying etc.) and technical - but be sure to support your argument.

I thought of the popularity of React at this point, and its controversial licence.

We need to keep embracing the community, which is big in Drupal. It’s important to have a variety of people involved in any open source project, and Danese referenced an article by Josh Berkus on how to destroy a community, a warning of what can happen if we aren’t careful.

There are no “open source companies” per se. Any for-profit company will always assess open source as a strategic move. But everyone needs to water the grass for projects to be sustainable, and companies must be encouraged and given painless ways to financially support projects.

Ultimately, open source is about people.

To wrap up, I had a wonderful time at DrupalCamp London. It’s been the biggest DrupalCamp in the world (and it had the biggest cake).

A huge thanks to the speakers, sponsors, volunteers and core team that organised such a fantastic event!

See you next year for DrupalCamp London 2018?

Mar 04 2017
Mar 04
4 March 2017

This weekend I’ve been at the fifth DrupalCamp London - a gathering of 500 or so designers, developers and business owners using Drupal.

Friday was the CxO day, which I blogged about earlier. Saturday and Sunday are more technically focussed.

Cake is ready! Come grab some by ELG01. #dclondon pic.twitter.com/hAzZ8FhPSi

— DrupalCamp London (@DrupalCampLDN) March 4, 2017

The day kicked off with a keynote by Matt Glaman - a US based developer working on the Drupal Commerce project. Matt spoke on open source, what it is, and the impact it’s had on his life and career.

@nmdmatt from @CommerceGuys kicks off #dclondon 2017 to a record breaking crowd #drupal pic.twitter.com/3eQDywrnsM

— Paul Johnson (@pdjohnson) March 4, 2017

Matt’s Drupal journey began while working in a bar, using Drupal as a hobbyist by night. With no formal education in programming, Matt taught himself to program using open source, via the mentors he was able to find through the Drupal community.

Community is vital to any open source project. We all have things to contribute, not just code but support, inspiration and mentoring. Open source creates a demand for skills, and creates opportunities for people to learn and teach each other.

#opensource: "Be as knowledgeable as you choose to be." @nmdmatt #dclondon keynote. pic.twitter.com/F6a36pp0eE

— Jeffrey A. McGuire (@HornCologne) March 4, 2017

After coffee, the rest of the day was broken down into parallel talks.

Phil Wolstenholme spoke about accessibility, and demonstrated some of the improvements that had gone into Drupal 8. I really liked the new announce feature, used to audibly announce new content that appears outside of a full page request. Phil showed it working in conjunction with an autocomplete field, where a list of suggested results appears as you type the first few letters.

In web development you can inadvertently make something difficult or impossible to use for people who have some form of disability or impairment. I asked Phil what resources he’d advise people to look at to learn more about how to avoid this. WebAIM is a great place to start, but also learn how to use a screen reader like VoiceOver, which gives you a totally different perspective on your site.

Next, I gave my offline first talk. I’ve enjoyed doing this talk at various events over the last year. The audience asked a lot of questions which I’ll take as a good sign! There’s obviously an interest in this topic and I’m keen to see how we can use it with Drupal in the near future.

For anyone contemplating speaking at an event like this, I’d recommend it. I wrote some thoughts on this recently.

@erikerskine on Offline First - how to deliver good user experience on poor on intermittent internet https://t.co/inINg6iBOF #DCLondon pic.twitter.com/ZR0FgESkMT

— Paul Johnson (@pdjohnson) March 4, 2017

After lunch, Justine Pocock shared some basic design principles for developers. This was really helpful for me: although I don’t do a lot of design work, I still want to be able to make things that look presentable, and it’s useful to have some constraints to work within. Justine took the DrupalCamp website apart and showed how just a few hours’ work (sped up to a few minutes) made a huge improvement, using:

  • contrast, to make elements stand out and catch the eye
  • repetition, to bring uniformity and consistency
  • alignment, to organise and keep things tidy, like Tetris
  • proximity, to delineate things according to information architecture

Learn the rules like a pro, so you can break them like an artist

—Pablo Picasso

I followed that with a meaty technical talk on microservices by Ronald Ashri. Ronald explained how, rather than being about size, microservices are individual components each with a clear, well-defined scope and purpose.

With microservices, every part of the system does one thing well, with a defined, bounded context in which it operates. Different components can then be composed together to create a larger system. The goal is a system in which change is easy, safe and possible at scale.

OO has traditionally focused on objects, but the messages between them are arguably more important. Ronald advised not to start by designing a data model, but rather to focus on business capabilities.

I finished the day with a BoF for freelancers. A BoF is a “birds of a feather” session - often arranged on the spur of the moment with no set agenda, by like-minded people who “flock together”. It was great to chat to others and get perspectives from those contracting as well as companies that employ freelancers. Thanks to Farez for organising!

At the end of the day we retired to the Blacksmith & Toffeemaker pub round the corner to continue the great conversations over a well earned pint.

Looking forward to tomorrow!

Mar 03 2017
Mar 03
3 March 2017

This year is the fifth DrupalCamp London, and today was my first time attending the CxO day. The CxO day is a single track event aimed at business leaders who provide Drupal services. I reckon there were about 100 people there, and more will come over the weekend.

It’s great that DrupalCamps cater for a wide audience, but I can’t help wonder if a separate CxO day leads to a bit of a divide between business and technical. I’d love to hear more talks that cross this divide. There must be many people who could share and learn from each other but don’t get to meet.

Social for the CxO day in full swing #dclondon pic.twitter.com/javR8En13U

— DrupalCamp London (@DrupalCampLDN) March 3, 2017

The day kicked off with breakfast, followed by a talk by Andre Spicer on the stupidity paradox, a pitfall for many companies. Companies often start off well, wanting to appear smart, and hire the “brightest and best”. But smart people think independently, and this is inconvenient. Sadly companies sometimes revert to managing that stupidity, through bureaucracy, copying each other and taking the safe option. This can lead to an action-oriented or results-oriented culture with an overemphasis on leadership. Workers realise that it doesn’t pay to be smart or get in the way, and are rewarded by climbing the corporate ladder.

Andre shared the example of the VW emissions scandal as one such example of this that brought short term gains but a much larger long term repercussion.

So what can we do about this? Teams need people that ask questions, that play devil’s advocate. People that ask “why?”, and don’t accept “because I said so” or “because we’ve always done it this way” in response.

"Smart people think independently, which is inconvenient" #dclcxo #dclondon pic.twitter.com/DuUp2hPWQc

— Chandeep Khosa (@ChandeepKhosa) March 3, 2017

The next talk was Paul Reeves’ “Drupal and I”. Paul followed on from the first talk by sharing his journey with Drupal, beginning with Drupal 4, and candidly sharing his mistakes along the way. Initially hating Drupal and preferring to write his own code, he reached a turning point at DrupalCon 2008. It was the community behind Drupal, with solutions to problems that other people had already found. Work he’d been involved in for a client was incorporated into, and improved, Drupal 5.

Paul advised avoiding a monolithic architecture where you need to learn the entire system to get things done. You don’t need to (and can’t) do that with Drupal. Instead, use it wisely and people can be productive on different levels.

Next was Ben Finn. Ben is one of the co-founders of Sibelius, and shared some insights into how ideas for new features for Sibelius were prioritised.

Functionality isn’t the only thing that can be deemed a new feature. Rather, anything that’s a marketable benefit to a customer should be considered. That could be an improved interface, compatibility, better speed etc. There are always more ideas than it’s possible to implement, so we need to choose carefully.

Ideas can come from yourself, your customers or developers. Often these people alone don’t have a big enough picture of what you’re doing to suggest solutions, nor can they tell you how to prioritise your features.

Companies often choose features by gut feel, but gut feel is a really unreliable way to prioritise. People tend to choose features that excite them personally, and there is a bias towards bigger features. Instead we should do a cost-benefit analysis: feature priority = benefit / cost. These values are hard to predict, but we can start by taking the proportion of users who will use a feature, multiplying by how much they will pay, and dividing by the development time. Then review your estimates later to see how accurate they were. Ben went into detail about how they did this at Sibelius.
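To make the arithmetic concrete, here is a hypothetical worked example (all numbers invented for illustration):

\[
\text{priority} = \frac{\text{benefit}}{\text{cost}} = \frac{\text{share of users} \times \text{amount they would pay}}{\text{development time}}
\]

A headline feature used by 40% of users who would each pay $10, at 20 days of development, scores (0.4 × 10) / 20 = 0.2. A niche feature used by 5% of users who would each pay $100, at 5 days, scores (0.05 × 100) / 5 = 1.0, so the “smaller” feature should win, despite gut feel favouring the bigger one.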

After lunch, Alex Elkins spoke (at short notice!) on problems, not solutions. Alex advised caution against focusing on solutions too early. Identifying a problem is the key to a successful product or service, and this must happen before we try to come up with a solution. Many unsuccessful products are the result of not solving a real problem. So what makes a problem a good candidate to solve? It needs to be valid, important, well defined, actionable, and not already have a solution.

To finish, Sarah Wood, founder of Unruly, gave a keynote interview. Sarah shared the story of how she came to found Unruly after working in academia, wanting more time with family and to make a bigger impact. Unruly’s work includes the We’re the Superhumans video promoting the Paralympics.

Infectious energetic passionate presentation with @sarahfwood on startups, viral social media, data insights in advertising #dclondon pic.twitter.com/2EIUT9QVfn

— Paul Johnson (@pdjohnson) March 3, 2017

Successful video content needs two things in order to be shared. First, it must elicit an emotion: make someone laugh or cry, surprise, shock, or inspire. Second, it must invoke a social motivation to do something with that feeling.

Sarah wouldn’t do anything differently if she did it all again. She advised not to focus too much on what you would have done, instead look at what you’re doing now, and what you want to do differently both now and in future.

It was a great day, I learnt a lot and had some great conversations. I’m looking forward to the rest of the weekend.

Mar 01 2017
Mar 01
To improve SEO, we need to clean up our URLs. Drupal provides a clean URLs option in its configuration by default. In Drupal 7, we can also manage URL aliases. For instance, suppose you have a content type called Services and want each service page to have a URL like services/page-name. For that, Drupal 7 has the Pathauto module. Pathauto lets us manage URL patterns for every content type, as well as files, taxonomy terms and users, and it can also remove unnecessary words like “a”, “an” and “the” from URLs.

The Pathauto module can remove unnecessary words like “a”, “an” and “the”, and special characters like !, @ and $. Unfortunately, it does not include some other symbols such as copyright (©), trademark (™) and registered (®). It does, however, provide a hook for adding new symbols to the punctuation settings: hook_pathauto_punctuation_chars_alter(). After creating content containing the symbols mentioned above, your page URL looks like the image below:

Drupal 7 - remove special characters from url using pathauto module

/**
 * Implements hook_pathauto_punctuation_chars_alter().
 */
function phponwebsites_pathauto_punctuation_chars_alter(array &$punctuation) {
  $punctuation['copyright']          = array('value' => '©', 'name' => t('Copyright'));
  $punctuation['registered']         = array('value' => '®', 'name' => t('Registered trademark'));
  $punctuation['trademark']          = array('value' => '™', 'name' => t('Trademark'));
}
After adding the above code to your module, you should see the new symbols listed on the Pathauto module's settings page at /admin/config/search/path/settings. If the symbols don't appear, clear the cache and test again. It looks like the image below:

Drupal 7 - pathauto settings after hook_pathauto_punctuation_chars_alter

Now, when you create content containing those symbols, the Pathauto module no longer adds them to the URL.

Now I hope you know how to remove special characters from URL aliases using the Pathauto module in Drupal 7.

Related articles:
Add new menu item into already created menu in Drupal 7
Add class into menu item in Drupal 7
Create menu tab programmatically in Drupal 7
Add custom fields to search api index in Drupal 7
Clear views cache when insert, update and delete a node in Drupal 7
Create a page without header and footer in Drupal 7
Login using both email and username in Drupal 7
How to set multiple URL alias for a node using pathauto module in Drupal 7
Update multiple fields using #ajax in Drupal 7
Mar 01 2017
Mar 01
As discussed in my previous post, clean URLs are one way to improve SEO. In Drupal 7 we have the Pathauto module to clean URLs. It allows us to set aliases for content types, files, users and taxonomies. But by default we can set only one URL alias pattern per content type in Drupal 7, at admin/config/search/path/patterns. It looks like the image below:

Pathauto module patterns in drupal 7

Suppose you need two paths for a piece of content. For instance, an article should be reachable both at its node title and at article/node-title. Is it possible to set multiple path aliases for a content type in Drupal 7? Yes, it is: we can set multiple URL aliases for a content type programmatically using the Pathauto module. We need to insert our extra path alias into the url_alias table when inserting and updating a node, and remove it when deleting a node.

Add a URL alias programmatically when inserting and updating a node using the Pathauto module in Drupal 7:


For instance, I've chosen the article content type. We need to insert and update the URL alias in the url_alias table using hook_node_insert() and hook_node_update() in Drupal 7.


/**
 * Implements hook_node_insert()
 */
function phponwebsites_node_insert($node) {
  if ($node->type == 'article') {
    //save node alias
    _phponwebsites_insert_update_alias($node);
  }
}

/**
 * Implements hook_node_update()
 */
function phponwebsites_node_update($node) {
  if ($node->type == 'article') {
    //update node alias
    _phponwebsites_insert_update_alias($node);
  }
}

/**
 * Insert or update the extra URL alias for an article node.
 */
function _phponwebsites_insert_update_alias($node) {
  module_load_include('inc', 'pathauto');
  $title = pathauto_cleanstring($node->title);

  $values['source'] = 'node/' . $node->nid . '/article';
  $values['alias'] = 'article/' . $title;

  $all_values = array($values);

  foreach ($all_values as $all) {
    $query = db_merge('url_alias')
      ->fields(array('source' => $all['source'], 'alias' => $all['alias'], 'language' => LANGUAGE_NONE))
      ->key(array('source' => $all['source']))
      ->execute();
  }
}

Where,
 pathauto_cleanstring() applies the Pathauto module's cleaning rules configured at admin/config/search/path/settings. For more details on pathauto_cleanstring(), please visit http://www.drupalcontrib.org/api/drupal/contributions!pathauto!pathauto.inc/function/pathauto_cleanstring/7
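As a quick illustration (hypothetical input; the exact output depends on your Pathauto settings):

module_load_include('inc', 'pathauto');
// With default settings, common words and punctuation are stripped.
$clean = pathauto_cleanstring('The Quick Brown Fox!');
// $clean is now something like 'quick-brown-fox'.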

After adding the hook code above to your custom module (and clearing the cache), create an article. Then check your URL aliases in Pathauto's list at admin/config/search/path. It looks like the image below:

Pathauto module URL alias list in drupal 7

Now you can access the article both at node-title and at article/node-title.

Multiple URL alias for a node using pathauto module in drupal 7

Delete the URL alias programmatically when deleting a node using the Pathauto module in Drupal 7:


We've inserted an extra URL alias for the node, so we need to delete it from the url_alias table when the node is deleted. We can trigger this using hook_node_delete() in Drupal 7. Consider the code below:


/**
 * Implements hook_node_delete()
 */
function phponwebsites_node_delete($node) {
  if ($node->type == 'article') {
    //delete the custom alias for the article
    module_load_include('inc', 'pathauto');
    $source[0] = 'node/' . $node->nid . '/article';

    foreach ($source as $s) {
      $path = path_load(
        array('source' => $s)
      );
      path_delete($path['pid']);
    }

  }
}

Where,
  path_load() returns the details of a URL alias, such as source, alias, path ID and language. For more details on path_load(), please visit https://api.drupal.org/api/drupal/includes!path.inc/function/path_load/7.x.
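For example (with a hypothetical node ID):

// Load the alias record for the custom source path used above.
$path = path_load(array('source' => 'node/12/article'));
// $path is an associative array with 'pid', 'source', 'alias' and 'language'.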

After adding the delete hook above to your custom module (and clearing the cache), delete a node and check your URL aliases at admin/config/search/path. The alias should no longer be displayed there.

Now I hope you know how to set multiple URL aliases for a content type.

Related articles:
Remove special characters from URL alias using pathauto module in Drupal 7
Add new menu item into already created menu in Drupal 7
Add class into menu item in Drupal 7
Create menu tab programmatically in Drupal 7
Add custom fields to search api index in Drupal 7
Clear views cache when insert, update and delete a node in Drupal 7
Create a page without header and footer in Drupal 7
Login using both email and username in Drupal 7
Update multiple fields using #ajax in Drupal 7
Mar 01 2017
Mar 01
This blog describes how to disable future dates in the date pop-up in Drupal 7. One of the features of the Date module is displaying a date picker in a pop-up.

Disable future dates in the date pop up - drupal 7

The use case: you want the pop-up to offer only past and current dates rather than all dates. How do you do that in Drupal 7? The Date module provides an API hook called hook_date_popup_process_alter() to alter the date_popup widget elements.

Example for disabling future dates in Drupal 7:


For instance, I am going to disable future dates for a date field in the article content type. Please consider the following code snippet.

/**
 * Implements hook_date_popup_process_alter().
 */
function phponwebsites_date_popup_process_alter(&$element, &$form_state, &$context) {

  if ($form_state['complete form']['#form_id'] == 'article_node_form' && $element['#field']['field_name'] == 'field_date') {
    $max = 0;
  }

  if (isset($element['#datepicker_options']['maxDate'])) {
    $max = $element['#datepicker_options']['maxDate'];
  }

  if (isset($max)) {
    $element['#datepicker_options'] = array(
      'maxDate' => "+$max D",
    );
  }
  $element['date'] = date_popup_process_date_part($element);
}


I've disabled the dates only if the form is the article node form and the field name is field_date. After adding the above code to your module, you will see future dates disabled in the date pop-up. It looks like the image below:


Disable future dates in the date pop-up in drupal 7

Now I hope you know how to disable future dates with the Date module in Drupal 7.

Related articles:
Remove special characters from URL alias using pathauto module in Drupal 7
Add new menu item into already created menu in Drupal 7
Add class into menu item in Drupal 7
Create menu tab programmatically in Drupal 7
Add custom fields to search api index in Drupal 7
Clear views cache when insert, update and delete a node in Drupal 7
Create a page without header and footer in Drupal 7
Login using both email and username in Drupal 7
Update multiple fields using #ajax in Drupal 7
Mar 01 2017
Mar 01
We know how to replace a single field using #ajax in a form. Can we update multiple fields using #ajax? Yes, we can update multiple fields using #ajax in Drupal 7, with ajax_command_replace(). For more details about AJAX commands in Drupal 7, please visit https://api.drupal.org/api/drupal/7.x/search/ajax_command.

Update fields using #ajax in a Drupal form:


Consider the following example. I've created a form with the fields First, Second and Third name, and I update the Second and Third name fields when focus leaves the First name field.
/**
 * Form builder for the example form.
 */
function phponwebsites_ajax_form($form, &$form_state) {
  $form['firstname'] = array(
    '#title' => t('First name'),
    '#type' => 'textfield',
    '#ajax' => array(
      'callback' => '_generate_textfield',
      'wrapper' => 'copied-text-field',
    )
  );

  $form['secondname'] = array(
    '#title' => t('Second name'),
    '#type' => 'textfield',
    '#prefix' => '<div id="copied-secondname">',
    '#suffix' => '</div>',
  );

  $form['thirdname'] = array(
    '#title' => t('Third name'),
    '#type' => 'textfield',
    '#prefix' => '<div id="copied-thirdname">',
    '#suffix' => '</div>',
  );

  $form['submit'] = array(
    '#type' => 'submit',
    '#value' => 'Submit'
  );

  return $form;
}

function _generate_textfield($form, &$form_state) {
  if (!empty($form_state['values']['firstname'])) {
    $form['secondname']['#value'] = $form_state['values']['firstname'];
    $form['thirdname']['#value'] = $form_state['values']['firstname'];
  }
  $commands = array();
  $commands[] = ajax_command_replace('#copied-secondname', drupal_render($form['secondname']));
  $commands[] = ajax_command_replace('#copied-thirdname', drupal_render($form['thirdname']));
  return array('#type' => 'ajax', '#commands' => $commands);
}
When you execute the above code, it populates the same name into the other two fields. It looks like the image below:
Update multiple form fields using #ajax in Drupal 7

Now I hope you know how to populate multiple fields using #ajax in Drupal 7 forms.

Related articles:
Remove special characters from URL alias using pathauto module in Drupal 7
Add new menu item into already created menu in Drupal 7
Add class into menu item in Drupal 7
Create menu tab programmatically in Drupal 7
Add custom fields to search api index in Drupal 7
Clear views cache when insert, update and delete a node in Drupal 7
Create a page without header and footer in Drupal 7
Login using both email and username in Drupal 7
Disable future dates in date pop-up calendar Drupal 7
Mar 01 2017
Mar 01
This blog describes how to add a date pop-up calendar to a custom form in Drupal 7.

Use date pop-up calendar in custom form - drupal 7

The use case: you want to use a date pop-up calendar in a custom form in Drupal 7. The Drupal 7 Form API provides many form element types, such as textfield, checkbox and checkboxes, for building custom forms. Similarly, the Date module provides a form element type called date_popup, which we can use to display a date pop-up in a custom form.

Use date pop-up calendar with the custom form in drupal 7:


Let's consider the code snippet below:

 
function phponwebsites_menu() {
  $items = array();

  $items['customform'] = array(
    'title' => t('Custom Form'),
    'type' => MENU_CALLBACK,
    'page callback' => 'drupal_get_form',
    'page arguments' => array('phponwebsites_display_date_popup_form'),
    'access callback' => TRUE,
  );

  return $items;
}

function phponwebsites_display_date_popup_form($form, &$form_state) {
  $form['date'] = array(
    '#type' => 'date_popup',
    '#default_value' => date('Y-m-d'),
    '#date_format'   => 'Y-m-d',
    '#date_year_range' => '0:+5',
    '#datepicker_options' => array('minDate' => 0, 'maxDate' => 0),
  );

  return $form;
}


    Where,
      '#date_format'   => 'Y-m-d' if you need to display only date
      '#date_format'   => 'Y-m-d H:i:s' if you need to display date & time
      '#date_year_range' => '0:+5' if you need to display only future 5 years
      '#datepicker_options' => array('minDate' => 0, 'maxDate' => 0) if you want to display only current date. We can hide the future & past dates using this option.

Add the above code to your module file and visit the “customform” page. It looks like the image below:

Display only current date in date pop-up - drupal 7
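If you instead want to allow past dates but block future ones in this custom form (the same behaviour as in the previous post), you could adjust the options; a sketch:

$form['past_date'] = array(
  '#type' => 'date_popup',
  '#default_value' => date('Y-m-d'),
  '#date_format' => 'Y-m-d',
  // Omitting minDate allows any past date; maxDate 0 disables future dates.
  '#datepicker_options' => array('maxDate' => 0),
);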

Now I hope you know how to add a date pop-up calendar to a custom form in Drupal 7.

Related articles:
Remove special characters from URL alias using pathauto module in Drupal 7
Add new menu item into already created menu in Drupal 7
Add class into menu item in Drupal 7
Create menu tab programmatically in Drupal 7
Add custom fields to search api index in Drupal 7
Clear views cache when insert, update and delete a node in Drupal 7
Create a page without header and footer in Drupal 7
Login using both email and username in Drupal 7
Disable future dates in date pop-up calendar Drupal 7
Update multiple fields using #ajax in Drupal 7
Feb 20 2017
Feb 20

Overview

Savas Labs has been using Docker for our local development and CI environments for some time to streamline our systems. On a recent project, we chose to integrate Phase 2’s Pattern Lab Starter theme to incorporate more front-end components into our standard build. This required building a new Docker image for running applications that the theme depends on. In this post, I’ll share:

  • A Dockerfile used to build an image with Node, npm, PHP, and Composer installed
  • A docker-compose.yml configuration and Docker commands for running theme commands such as npm start from within the container

Along the way, I’ll also provide:

  • A quick overview of why we use Docker for local development
    • This is part of a Docker series we’re publishing, so be on the lookout for more!
  • Tips for building custom images and running common front-end applications inside containers.

Background

We switched to using Docker for local development last year and we love it - so much so that we even proposed a DrupalCon session on our approach and experience, which we hope to deliver. Using Docker makes it easy for developers to quickly spin up consistent local development environments that match production. In the past we used Vagrant and virtual machines, even a Drupal-specific flavor, DrupalVM, for these purposes, but we've found Docker to be faster when switching between multiple projects, which we often do on any given workday.

Usually we build our Docker images from scratch to closely match production environments. However, for agile development and rapid prototyping, we often make use of public Docker images. In these cases we’ve relied on Wodby’s Docker4Drupal project, which is “a set of docker containers optimized for Drupal.”

We're also fans of the atomic design methodology and present our clients with interactive style guides early to facilitate better collaboration throughout. Real interaction with the design is necessary from the get-go; gone are the days of the static Photoshop file at the outset that “magically” translates to a living design at the end. So when we heard of the Pattern Lab Starter Drupal theme, which leverages Pattern Lab (a tool for building pattern-driven user interfaces using atomic design), we were excited to bake the front-end components into our Docker world. Oh, the beauty of open source!

Building the Docker image

To experiment with the Pattern Lab Starter theme we began with a vanilla Drupal 8 installation, then quickly spun up our local Docker development environment using Docker4Drupal. We then copied the Pattern Lab Starter code to a new custom/theme/pattern_lab_starter directory in our Drupal project.

Running the Phase 2 Pattern Lab Starter theme requires Node.js, the node package manager npm, PHP, and the PHP dependency manager Composer. Node and npm are required for managing the theme’s node dependencies (such as Gulp, Bower, etc.), while PHP and Composer are required by the theme to run and serve Pattern Lab.

While we could install these applications on the host machine, outside of the Docker image, that defeats the purpose of using Docker. One of the great advantages of virtualization, be it Docker or a full VM, is that you don’t have to rely on installing global dependencies on your local machine. One of the many benefits of this is that it ensures each team member is developing in the same environment.

Unfortunately, while Docker4Drupal provides public images for many applications (such as Nginx, PHP, MariaDB, Mailhog, Redis, Apache Solr, and Varnish), it does not provide images for running the applications required by the Pattern Lab Starter theme.

One of the nice features of Docker though is that it is relatively easy to create a new image that builds upon other images. This is done via a Dockerfile which specifies the commands for creating the image.

To build an image with the applications required by our theme we created a Dockerfile with the following contents:

FROM node:7.1
MAINTAINER Dan Murphy <[email protected]>

RUN apt-get update && \
    apt-get install -y php5-dev && \
    curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer && \
    # Directory required by Yeoman to run.
    mkdir -p /root/.config/configstore && \
    # Clean up.
    apt-get clean && \
    rm -rf \
      /root/.composer \
      /tmp/* \
      /usr/include/php \
      /usr/lib/php5/build \
      /var/lib/apt/lists/*

# Permissions required by Yeoman to run: https://github.com/keystonejs/keystone/issues/1566#issuecomment-217736880
RUN chmod g+rwx /root /root/.config /root/.config/configstore

EXPOSE 3001 3050

The commands in this Dockerfile:

  • Set the official Node 7 image as the base image. This base image includes Node and npm.
  • Install PHP 5 and Composer.
  • Make configuration changes necessary for running Yeoman, a popular Node scaffolding system used to create new component folders in Pattern Lab.
  • Expose ports 3001 and 3050 which are necessary for serving the Pattern Lab style guide.

From this Dockerfile we built the image savaslabs/node-php-composer and made it publicly available on DockerHub. Please check it out and use it to your delight!
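For reference, building and publishing such an image from the directory containing the Dockerfile looks something like the following sketch (the tag matches ours; substitute your own Docker Hub namespace):

# Build the image from the Dockerfile in the current directory and tag it.
docker build -t savaslabs/node-php-composer:1.2 .
# Push the tagged image to Docker Hub so teammates and CI can pull it.
docker push savaslabs/node-php-composer:1.2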

One piece of advice I have for building images for local development: while Alpine Linux based images may be much smaller in size, their bare-bones nature and lack of common packages bring trade-offs that make them more difficult to build upon. For that reason, we based our image on the standard Debian Jessie-based Node image rather than the Alpine variant.

This is also why we didn’t simply start from the wodby/drupal-php:7.0 image and install Node and npm on it. Unfortunately, the wodby/drupal-php image is built from alpine:edge, which lacks many of the dependencies required to install Node and npm.

Now a Docker purist might critique this image and recommend running only one process per container. This is a drawback of our approach, especially since Wodby already provides a PHP image with Composer installed. Ideally, we’d use that in conjunction with separate images that run Node and npm.

However, the theme’s setup makes that difficult. Essentially PHP scripts and Composer commands are baked into the theme’s npm scripts and gulp tasks, making it difficult to untangle them. For example, the npm start command runs Gulp tasks that depend on PHP to generate and serve the Pattern Lab style guide.

Due to these constraints, and since this image is for local development, isn’t being used to deploy a production app, and encapsulates all of the applications required by the Pattern Lab Starter theme, we felt comfortable with this approach.

Using the image

To use this image, we specified it in our project’s docker-compose.yml file (see full file here) by adding the following lines to the services section:

node-php-composer:
  image: savaslabs/node-php-composer:1.2
  ports:
    - "3050:3050"
    - "3001:3001"
  volumes_from:
    - php

This defines the configuration that is applied to a node-php-composer container when spun up. This configuration:

  • Specifies that the container should be created from the savaslabs/node-php-composer image that we built and referenced previously
  • Maps the container ports to our host ports so that we can access the Pattern Labs style guide locally
  • Mounts the project files (that are mounted to the php container) so that they are accessible to the container.

With this service defined in the docker-compose.yml we can start using the theme!

First we spin up the Docker containers by running docker-compose up -d.

Once the containers are running, we can open a Bash shell in the theme directory of the node-php-composer container by running the command:

docker-compose run --rm --service-ports -w /var/www/html/web/themes/custom/pattern_lab_starter node-php-composer /bin/bash

We use the --service-ports option to ensure the ports used for serving the style guide are mapped to the host.

Once inside the container in the theme directory, we install the theme’s dependencies and serve the style guide by running the following commands:

npm install --unsafe-perm
npm start

Voila! Once npm start is running we can access the Pattern Lab style guide at the URLs that are output, for example http://localhost:3050/pattern-lab/public/.

Note: Docker runs containers as root, so we use the --unsafe-perm flag to run npm install with root privileges. This is okay for local development, but would be a security risk if deploying the container to production. For information on running the container as an unprivileged user, see this documentation.

Gulp and Bower are installed as theme dependencies during npm install, therefore we don’t need either installed globally in the container. However, to run these commands we must shell into the theme directory in the container (just as we did before), and then run Gulp and Bower commands as follows:

  • To install Bower libraries run $(npm bin)/bower install --allow-root {project-name} --save
  • To run arbitrary Gulp commands run $(npm bin)/gulp {command}

Other commands listed in the Pattern Lab Starter theme README can be run in similar ways from within the node-php-composer container.

Conclusion

Using Docker for local development has many benefits, one of which is that developers can run applications required by their project inside containers rather than having to install them globally on their local machines. While we typically think of this in terms of the web stack, it also extends to running applications required for front-end development. The Docker image described in this post allows several commonly used front-end applications to run within a container like the rest of the web stack.

While this blog post demonstrates how to build and use a Docker image specifically for use with the Pattern Lab Starter theme, the methodology can be adapted for other uses. A similar approach could be used with Zivtech’s Bear Skin theme, which is another Pattern Lab based theme, or with other contributed or custom themes that rely on npm, Gulp, Bower, or Composer.

If you have any questions or comments, please post them below!

Feb 15 2017
Feb 15

We use Docker for our development environments because it helps us adhere to our commitment to excellence. It ensures an identical development platform across the team while also achieving parity with the production environment. These efficiency gains (among others we’ll share in an ongoing Docker series) over traditional development methods enable us to spend less time on setup and more time building amazing things.

Part of our workflow includes a mechanism to establish and update the seed database which we use to load near-real-time production content to our development environments as well as our automated testing infrastructure. We’ve found it’s best to have real data throughout the development process, rather than using stale or dummy data which runs the risk of encountering unexpected issues toward the end of a project. One efficiency boon we’ve recently implemented and are excited to share is a technique that dramatically speeds up database imports, especially large ones. This is a big win for us since we’re often importing large databases multiple times a day on a project. In this post we’ll look at:

  • how much faster data volume imports are compared to traditional database dumps piped to mysql
  • how to set up a data volume import with your Drupal Docker stack
  • how to tie in this process with your local and continuous integration environments

The old way

The way we historically imported a database was to pipe a SQL database dump file into the MySQL command-line client:

mysql -u{some_user} -p{some_pass} {database_name} < /path/to/database.sql

An improvement upon the default method above, which we’ve been using for some time, is to monitor import progress with the pv command. Large imports can take many minutes, so having insight into how much time remains is helpful to our workflow:

pv /path/to/database.sql | mysql -u{some_user} -p{some_pass} {database_name}

On large databases, though, MySQL imports can be slow. If we look at a database dump SQL file, we can see why. For example, a 19 MB database dump file we use in one of our test cases later in this post contains these instructions:

--
-- Table structure for table `block_content`
--

DROP TABLE IF EXISTS `block_content`;
/*!40101 SET @saved_cs_client     = @@character_set_client */;
/*!40101 SET character_set_client = utf8 */;
CREATE TABLE `block_content` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `revision_id` int(10) unsigned DEFAULT NULL,
  `type` varchar(32) CHARACTER SET ascii NOT NULL COMMENT 'The ID of the target entity.',
  `uuid` varchar(128) CHARACTER SET ascii NOT NULL,
  `langcode` varchar(12) CHARACTER SET ascii NOT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `block_content_field__uuid__value` (`uuid`),
  UNIQUE KEY `block_content__revision_id` (`revision_id`),
  KEY `block_content_field__type__target_id` (`type`)
) ENGINE=InnoDB AUTO_INCREMENT=12 DEFAULT CHARSET=utf8mb4 COMMENT='The base table for block_content entities.';
/*!40101 SET character_set_client = @saved_cs_client */;

--
-- Dumping data for table `block_content`
--

LOCK TABLES `block_content` WRITE;
/*!40000 ALTER TABLE `block_content` DISABLE KEYS */;
set autocommit=0;
INSERT INTO `block_content` VALUES (1,1,'basic','a9167ea6-c6b7-48a1-ac06-6d04a67a5d54','en'),(2,2,'basic','2114eee9-1674-4873-8800-aaf06aaf9773','en'),(3,3,'basic','855c13ba-689e-40fd-9b00-d7e3dd7998ae','en'),(4,4,'basic','8c68671b-715e-457d-a497-2d38c1562f67','en'),(5,5,'basic','bc7701dd-b31c-45a6-9f96-48b0b91c7fa2','en'),(6,6,'basic','d8e23385-5bda-41da-8e1f-ba60fc25c1dc','en'),(7,7,'basic','ea6a93eb-b0c3-4d1c-8690-c16b3c52b3f1','en'),(8,8,'basic','3d314051-567f-4e74-aae4-a8b076603e44','en'),(9,9,'basic','2ef5ae05-6819-4571-8872-4d994ae793ef','en'),(10,10,'basic','3deaa1a9-4144-43cc-9a3d-aeb635dfc2ca','en'),(11,11,'basic','d57e81e8-c613-45be-b1d5-5844ba15413c','en');
/*!40000 ALTER TABLE `block_content` ENABLE KEYS */;
UNLOCK TABLES;
commit;

When we pipe the contents of the MySQL database dump to the mysql command, the client processes each of these instructions sequentially in order to (1) create the structure for each table defined in the file, (2) populate the database with data from the SQL dump and (3) do post-processing work like creating indices to ensure the database performs well. The example here processes pretty quickly, but if your site has a lot of historic content, as many of our clients do, then the import process can take enough time that it throws a wrench in our rapid workflow!

What happens when mysql finishes importing the SQL dump file? The database contents (often) live in /var/lib/mysql/{database}, so for example for the block_content table mentioned above, assuming you’re using the typically preferred InnoDB storage engine, there are two files called block_content.frm and block_content.ibd in /var/lib/mysql/{database}/. The /var/lib/mysql directory will also contain a number of other directories and files related to the configuration of the MySQL server.
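For example, assuming the default data directory, you could verify this from inside the database container with something like:

# List the InnoDB files backing the block_content table (path may vary).
ls /var/lib/mysql/{database}/block_content.*
# block_content.frm  block_content.ibd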

Now, suppose that instead of sequentially processing the SQL instructions contained in a database dump file, we were able to provide developers with a snapshot of the /var/lib/mysql directory for a given Drupal site. Could swapping in that snapshot be faster than the traditional database import methods? Let’s have a look at two test cases to find out!

MySQL import test cases

The table below shows the results of two test cases, one using a 19 MB database and the other using a 4.7 GB database.

Method                       Database size   Time to drop tables and restore (seconds)
Traditional mysql            19 MB           128
Docker data volume restore   19 MB           11
Traditional mysql            4.7 GB          606
Docker data volume restore   4.7 GB          85

In other words, the MySQL data volume restore completes, on average, in about 11% of the time of a traditional MySQL dump import: roughly 9 times faster!

Since a GIF is worth a thousand words, compare these two processes side-by-side (both are using the same 19 MB source database; the first is using a data volume restore process while the second is using the traditional MySQL import process). You can see that the second process takes considerably longer!

Docker data volume restore

Traditional MySQL database dump import

Use MySQL volume for database imports with Docker

Here’s how the process works. Suppose you have a Docker stack with a web container and a database container, and that the database container has data in it already (your site is up and running locally). Assuming a database container name of drupal_database, to generate a volume for the MySQL /var/lib/mysql contents of the database container, you’d run these commands:

# Stop the database container to prevent read/writes to it during the database
# export process.
docker stop drupal_database
# Now use the carinamarina/backup image with the `backup` command to generate a
# tar.gz file based on the `/var/lib/mysql` directory in the `drupal_database`
# container.
docker run --rm --volumes-from drupal_database carinamarina/backup backup \
--source /var/lib/mysql/ --stdout --zip > db-data-volume.tar.gz

With the 4.7 GB sample database above, this process takes 239 seconds and results in a 702 MB compressed file.

We’re making use of the carinamarina/backup image produced by Rackspace to create an archive of the database files.

You can then distribute this file to your colleagues (at Savas Labs, we use Amazon S3), or make use of it in continuous integration builds (more on that below), using these commands:

# Copy the data volume tar.gz file from your team's AWS S3 bucket.
if [ ! -f db/db-data-volume.tar.gz ]; then aws s3 cp \
s3://{your-bucket}/mysql-data-volume/db-data-volume.tar.gz db-data-volume.tar.gz; fi
# Stop the database container to prevent read/writes during the database
# restore process.
docker stop drupal_database
# Remove the /var/lib/mysql contents from the database container.
docker run --rm --volumes-from drupal_database alpine:3.3 rm -rf /var/lib/mysql/*
# Use the carinamarina/backup image with the `restore` command to extract
# the tar.gz file contents into /var/lib/mysql in the database container.
docker run --rm --interactive --volumes-from drupal_database \
carinamarina/backup restore --destination /var/lib/mysql/ --stdin \
--zip < db-data-volume.tar.gz
# Start the database container again.
docker start drupal_database

So, not too complicated, but it will require a change in your processes for generating seed databases to distribute to your team for local development, or for CI builds. Instead of using mysqldump to create the seed database file, you’ll need to use the carinamarina/backup image to create the .tar.gz file for distribution. And instead of mysql {database} < database.sql you’ll use carinamarina/backup to restore the data volume.

In our team’s view this is a small cost for the enormous gains in database import time, which in turn boosts productivity to the tune of faster CI builds and refreshes of local development environments.

Further efficiency gains: integrate this process with your continuous integration workflow

The above steps can be manually performed by a technical lead responsible for generating and distributing the MySQL data volume to team members and your testing infrastructure. But we can get further productivity gains by automating this process completely with Travis CI and GitHub hooks. In outline, here’s what this process looks like:

1. Generate a new seed database SQL dump after production deployments

At Savas Labs, we use Fabric to automate our deployment process. When we deploy to production (not on a Docker stack), our post-deployment tasks generate a traditional MySQL database dump and copy it to Amazon S3:

def update_seed_db():
    run('drush -r %s/www/web sql-dump \
    --result-file=/tmp/$(date +%%Y-%%m-%%d)--post-deployment.sql --gzip \
    --structure-tables-list=cache,cache_*,history,search_*,sessions,watchdog' \
    % env.code_dir)
    run('/usr/local/bin/aws s3 cp /tmp/$(date +%Y-%m-%d)--post-deployment.sql.gz \
    s3://{bucket-name}/seed-database/database.sql.gz --sse')
    run('rm /tmp/$(date +%Y-%m-%d)--post-deployment.sql.gz')

2. When work is merged into develop, create a new MySQL data volume archive

We use git flow as our collaboration and documentation standard for source code management on our Drupal projects. Whenever a developer merges a feature branch into develop, we update the MySQL data volume archive dump for use in Travis CI tasks and local development. First, there is a specification in our .travis.yml file that calls a deployment script:

deploy:
  provider: script
  script:
    - resources/scripts/travis-deploy.sh
  skip_cleanup: true
  on:
    branch: develop

And the travis-deploy.sh script:

#!/usr/bin/env bash

set -e

make import-seed-db
make export-mysql-data
aws s3 cp db-data-volume.tar.gz \
s3://{bucket-name}/mysql-data-volume/db-data-volume.tar.gz --sse

This script: (1) imports the traditional MySQL seed database file from production, and then (2) creates a MySQL data volume archive. We use a Makefile to standardize common site provisioning tasks for developers and our CI systems.
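For illustration, here is a rough sketch of what those two Makefile targets might wrap, based on the commands shown earlier in this post (the target names come from travis-deploy.sh; the exact recipes are assumptions):

# import-seed-db (sketch): fetch the traditional SQL dump and import it.
aws s3 cp s3://{bucket-name}/seed-database/database.sql.gz db/database.sql.gz
gunzip -f db/database.sql.gz
pv db/database.sql | docker exec -i drupal_database mysql -u{some_user} -p{some_pass} {database_name}

# export-mysql-data (sketch): archive /var/lib/mysql as a data volume tarball.
docker stop drupal_database
docker run --rm --volumes-from drupal_database carinamarina/backup backup \
--source /var/lib/mysql/ --stdout --zip > db-data-volume.tar.gz
docker start drupal_database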

3. Pull requests and local development make use of the MySQL data volume archive

Now, whenever developers want to refresh their local environment by wiping the existing database and re-importing the seed database, or, when a Travis CI build is triggered by a GitHub pull request, these processes can make use of an up-to-date MySQL data volume archive file which is super fast to restore! This way, we ensure we’re always testing against the latest content and configuration, and avoid running into costly issues having to troubleshoot inconsistencies with production.

Conclusion

We’ve invested heavily in Docker for our development stack, and this workflow update is a compelling addition to that toolkit since it has substantially sped up MySQL imports and boosted productivity. Try it out in your Docker workflow; we welcome comments with any questions and would love to hear about your successes. Stay tuned for further Docker updates!

Jan 11 2017
Jan 11
11 January 2017

In September last year I gave a talk at DrupalCon in Dublin, on the topic of offline first. I wanted to reflect a little on that experience.

The topic covered how modern browsers allow us to build websites to work better under poor or non-existent network conditions. I chose this because my previous experience writing native mobile apps has given me some insight into the issues with mobile connectivity. Until recently, that’s something that native apps have generally been better at addressing than the web, but now that’s changing.

I wanted to talk for a number of reasons. Firstly, I felt by sharing I could contribute to the Drupal community at large. This is a relatively new area, one in which people are still figuring out opportunities, particularly how to apply it to Drupal. Secondly, I hoped that by stretching myself there would be a sense of personal and career development for myself.

When I received the email informing me my talk had been accepted I was really excited. I hadn’t spoken at this size of event before, so I wasn’t expecting my proposal to be picked. My aim had been to build up my speaking experience and perhaps try for something like this the following year!

If you want to speak at a conference, you do need to start off at a smaller event. DrupalCon itself asks for speakers to have had previous experience, and ideally you want to be giving an established talk you’re confident in rather than a brand new one!

If you’re based near London, Drupal Show and Tell is an ideal place to start - 3 short talks once a month, about 15-20 minutes each. They are always looking for new speakers too! It’s friendly and informal, a nice opportunity to test out a topic without the huge commitment of a larger event.

I first spoke there in May. I came away feeling like the subject was received well, but also that the talk needed restructuring. A useful exercise, as these are the kind of things you only find out by putting yourself in front of a real audience.

I then worked on the material, turning it into a longer, 35-40 minute session, which I presented at the Brighton and Bristol DrupalCamps. It’s easy to underestimate how much effort goes into such a talk. The abstract itself takes quite a lot of time to prepare, but it’s what delegates will use to decide whether to attend or not. You want to get this right - who are the people that will most benefit from the talk? What experience do they need to have beforehand, in order to learn something new?

If you aren’t that experienced in public speaking, get some help! One of the benefits to being based at The Skiff co-working space is the community, and I was fortunate enough to attend a public speaking workshop run by Steve Bustin. Public speaking isn’t something that comes naturally to me, so I learnt a lot and was certainly out of my comfort zone!

Leading up to giving a talk, I’d recommend you run through it as often as you can, preferably with an audience. Local companies will often be willing to have you give it to their staff - you get the practice and they get a conference talk for free. In my case, The Unit in Brighton were willing to be guinea pigs.

Some practicalities at the conference itself: firstly, pace yourself during the event and do something relaxing the night before. Secondly, get to the venue early and spend some time in the room while it’s quiet. It will take away any unfamiliarity of the room, and also you’ll have plenty of time to deal with any technical problems! I had some projector issues, but the conference staff were excellent at sorting them out.

I really enjoyed the talk itself. I felt prepared, something that was probably the biggest contributor to how well it went. People in the audience asked some good questions, and a nice little spontaneous discussion happened straight after.

A couple of things weren’t so good. Firstly, I wished there had been a more diverse audience. This is something that we as an industry have to get better at, and I’m glad work is being done in the Drupal community to help. Secondly, DrupalCon has an anonymous feedback mechanism for talks. My feedback was generally positive, but it still left me with thoughts of what I could have done better. After the talk, when the adrenaline has gone, it’s easy to feel quite vulnerable.

Overall though, I’m really glad I did it. I learnt a lot from the experience and developed personally as a result. I’d love to talk again, perhaps jointly with someone else next time.

Jan 10 2017
Jan 10

Drupal is an open source project and really depends on its community to move forward. It is all about getting to know the CMS, spreading the knowledge and contributing to projects.
I will give you some ways to get involved; even if you are not a developer, there is a task for you!

A group of Drupal mentors at DrupalCon 2016 in Dublin

Drupal Mentors – DrupalCon Dublin 2016 by Michael Cannon is licenced under CC BY-SA 2.0

Participating in user support

Sharing your knowledge with others is very important to the community: it is a nice thing to do and you might also learn some things by doing so. Whatever your skill level, you can give back to the community with online support. There are many places where you can give support, starting with the Support Forums. You can also go to Drupal Answers, which is more active than the forums, or subscribe to the Support Mailing list. If you prefer real-time chat, you can also join the #drupal-support channel on IRC or the Slack channels.

Helping out on documentation

Community members can write, review and improve different sorts of documentation for the project: community documentation on drupal.org, the programming API reference, help pages inside the core software, documentation embedded in contributed modules and themes, etc.
Contributing is a good way to learn more about Drupal and share your knowledge with others. Beginners are particularly encouraged to participate as they are more likely to notice where documentation is lacking.
If you are interested, check out the new contributor tasks for anyone and for writers.

Translating Drupal interface in your own language

The default language of the administration interface is English, but there are about 100 languages available for translation. There is always a need for translations, as many of these translation sets are incomplete or can be improved, both for core and for contributed modules.
All translations are now managed by the translation server. If you are willing to help, all you have to do is log into drupal.org and join a language team. There is even a video showing how the translation system works, along with documentation.

You can also help translate documentation into your language. Most language-specific communities have their own documentation, so you should get in touch with them directly. To learn more, see the dedicated page.

Improving design and usability

The idea is to improve usability, especially of the administration interface in Drupal 8. The focus is mainly on content creation and site building. The community has done a lot of research to understand the problems that users run into and how the new improvements perform. The purpose is also to educate developers and engage designers in order to grow the UX team. You can visit the Drupal 8 UX page for more details and join the usability group.

Writing a blog post about Drupal

Writing a blog post about Drupal is a good way to share your knowledge and expertise. There are many subjects to explore, technical or not: talking about a former project you developed, writing a tutorial, describing the state of a release, or sharing impressions of an event you attended… And if you are lucky, your post may be published in the Weekly Drop, the official Drupal newsletter!

Don’t forget to reference your blog post on Planet Drupal, an aggregated list of feeds from around the web that shares relevant Drupal-related knowledge and information.

You can also find our Drupal related blog posts on the Liip blog.

Testing core and modules

Testing Drupal projects is necessary to keep the platform stable, and there are many things to test! If you have a technical background, you can help review patches or write unit tests.
If you are non-technical, you can provide feedback about the usability of the administration interface, which will help improve the user experience. Follow the process to give proper feedback.

Contributing to development

There are many ways to contribute code to core and “contrib” projects such as modules or themes.
You can first help improve existing projects by submitting patches. This is the natural thing to do when you work with a module and notice a bug or a missing feature: search the corresponding issue queue to see whether the problem has been reported before. If not, post a message explaining the issue and add a snippet of code if you have found a potential fix. Then you can create a patch and submit it to the issue queue.
You can also contribute new projects by creating your very own module or theme, or by creating a sandbox for more experimental projects.

Attending events

The Drupal Association organizes many events all around the world to promote the CMS and gather the community.

Among the biggest events are the DrupalCons. A DrupalCon gathers thousands of people and lasts about one week, including 3 full days of conferences. These conferences cover many topics: site building, user experience, security, content authoring etc. You can also join sprints to contribute to Drupal projects, and social events to meet the members of the community. Check out our report about DrupalCon Barcelona 2015!

“Drupal Dev Days” conferences occur once a year and gather developers to discuss and present topics technically relevant to the community. You can join sprints, intensive coding sessions and technical conferences.

You can also join DrupalCamps to meet your local community. These events last one or two days and focus on sharing knowledge amongst the community. You can attend conferences and sprints.

There are also many Drupal meetups which are free events happening in many cities in the world. Presentations and discussions finish around nice drinks and appetizers.

Sponsoring events

The community holds conventions and meetups in many countries, and being a sponsor will not only help Drupal development but also make you noticeable within the community. There are different levels of sponsorship, offering anything from mentions on social media to online advertising and a presence at the exhibition space of the event. All you have to do is get in touch with the event organizers. By the way, Liip will sponsor the Drupal Mountain Camp in Davos this year!

Offering a donation

You can give donations to the Drupal Association through the website in order to support drupal.org infrastructure and maintenance, as well as worldwide events such as DrupalCons. Donations can be made in either Euros or Dollars.

You can also become a member of the Drupal Association for the same purpose, either as an individual member or as an organization member. The minimum fee is 15 Euros. Find more information about membership on drupal.org.

Conclusion

Drupal projects are constantly improving thanks to passionate volunteers who work on many fronts: development, documentation, marketing, event organization, support… There is surely a task that will suit you, and even a small time commitment can make a difference.
So join the great Drupal community and start getting involved!

Jan 09 2017
Jan 09
9 January 2017

Today, I started looking at some of the proposals to include layouts within Drupal core from version 8.3 onwards.

This initiative aims to take the functionality that currently exists for laying out blocks and regions, and to use it for displaying other things, such as content entity view and form modes.

Some of this work started life in contrib, in the Layout Plugin module. Although this module is still in alpha, both the Panels and Display Suite modules use it. Those modules can, therefore, share layouts. However, this module seems to be a stepping stone for what will eventually end up as a core module. Somewhat confusingly, it has a different name.

I’ve decided to focus only on two small modules, either in or planned for Drupal core:

Layout discovery module

Layout discovery is currently in Drupal 8.3 as an experimental module.

This is a very simple API module that allows for the discovery of layouts provided by other modules. It replaces the Layout plugin module mentioned above.

Providing your own layouts is pretty straightforward and documented. The most basic use case is a YAML file that defines a layout and its regions, along with a corresponding Twig template. More complicated things can be done too - a dynamic layout builder could provide the layout definitions to be discovered by this module as well, probably by implementing a deriver class.

I was able to create a very simple layout with ease:

two_column:
  label: Two column
  category: Erik's layouts
  template: templates/two-column
  regions:
    main:
      label: Main content
    secondary:
      label: Secondary content

The template is equally easy: just put the markup you want for the layout and refer to {{ content.main }} and {{ content.secondary }} in the appropriate places.

Field layout module

Field layout is a proposed new module, not yet added to Drupal core. [Update: as of 26 Jan this is now part of Drupal core]

This alters the manage display and manage form display settings forms. Currently a Drupal site builder can use these forms to control the order in which each field is displayed. If you want to do anything more involved, you need to write a Twig template for a particular display. The Field layout module enhances this, allowing the site builder to choose a predefined layout and populate its regions with fields.

Think of it as a cut down version of display suite.

The manage display form with a two column layout enabled.

Rendering

I studied the field layout module to see how it works, and how I might use layouts in other settings. It turns out rendering a layout programmatically is quite straightforward. To use the two_column layout defined above, my render array would look like this:

$output = [
  '#theme' => 'layout__two_col',
  'main' => [ /* render array for main content */ ],
  'secondary' => [ /* render array for secondary content */ ],
];

I think this is going to be really useful to have in Drupal core.

Jan 05 2017
Jan 05

As a follow-up to my previous blog post about the usage of the Migrate API in Drupal 8, I would like to give an example of how to import multilingual content and translations in Drupal 8.

Prepare and enable translation for your content type

Before you can start, you need to install the “Language” and “Content Translation” modules. Then head over to “admin/config/regional/content-language” and enable translation for the node type or the taxonomy vocabulary you want to be able to translate.

As a starting point for setting up the Migrate module, I recommend my blog post mentioned above. To import data from a CSV file, you also need to install the migrate_source_csv module.
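If you manage modules with Drush, enabling everything mentioned so far might look like the following one-liner (the module list is an assumption based on the setup described here):

drush en language content_translation migrate migrate_tools migrate_source_csv -y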

Prerequisites for migrating multilingual entities

Before you start, please check the requirements. You need at least Drupal 8.2 to import multilingual content. We need the destination option “translations”, which was added in a patch in Drupal 8.2. See the corresponding drupal.org issue here.

Example: Import multilingual taxonomy terms

Let’s do a simple example with taxonomy terms. First, create a vocabulary called “Event Types” (machine name: event_type).

Here is a simplified dataset:

Id   Name      Name_en
1    Kurs      Course
2    Turnier   Tournament

You may save this as a CSV file:

Id;Name;Name_en
1;Kurs;Course
2;Turnier;Tournament

The recipe to import multilingual content

As you can see, the example data contains the base language (“German”) and also the translations (“English”) in the same file.

But here comes a word of warning:

Don’t try to import the term and its translation in one migration run. I am aware that there are some workarounds with post-import events, but these are hacks and you will run into trouble later.

The correct way of importing multilingual content is to:

  1. Create a migration for the base language and import the terms / nodes. This will create the entities and their fields.
  2. Then, with an additional dependent migration for each translated language, add the translations for the fields you want.

In short: you need a base migration and a migration for every language. Let’s try this out.

Taxonomy term base language config file

In my example, the base language is “German”. Therefore, we first create a migration configuration file for the base language:

This is a basic example of migrating a taxonomy term in my base language ‘de’.

Put the file into <yourmodule>/config/install/migrate.migration.event_type.yml and import the configuration using the drush commands explained in my previous blog post about Migration API.

id: event_type
label: Event Types
source:
  plugin: csv
  # Full path to the file. Is overriden in my plugin
  path: public://csv/data.csv
  # The number of rows at the beginning which are not data.
  header_row_count: 1
  # These are the field names from the source file representing the key
  # uniquely identifying each node - they will be stored in the migration
  # map table as columns sourceid1, sourceid2, and sourceid3.
  keys:
    - Id
ids:
  id:
    type: string
destination:
  plugin: entity:taxonomy_term
process:
  vid:
    plugin: default_value
    default_value: event_type
  name:
    source: Name
    language: 'de'
  langcode:
    plugin: default_value
    default_value: 'de'
# Absolutely necessary if you don't want an error
migration_dependencies: {}
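As a quick sketch of those Drush steps (the commands come from Drush 8 and the migrate_tools module; the module path matches the placeholder above), importing the configuration and running the base migration could look like:

drush config-import --partial --source=modules/custom/yourmodule/config/install -y
drush migrate-status
drush migrate-import event_type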

Taxonomy term translation migration configuration file:

This is the example file for the English translation of the name field of the term.

Put the file into <yourmodule>/config/install/migrate.migration.event_type_en.yml and import the configuration using the drush commands explained in my previous blog post about Migration API.

id: event_type_en
label: Event Types english
source:
  plugin: csv
  # Full path to the file. Is overriden in my plugin
  path: public://csv/data.csv
  # The number of rows at the beginning which are not data.
  header_row_count: 1
  keys:
    - Id
ids:
  id:
    type: string
destination:
  plugin: entity:taxonomy_term
  translations: true
process:
  vid:
    plugin: default_value
    default_value: event_type
  tid:
    plugin: migration
    source: id
    migration: event_type
  name:
    source: Name_en
    language: 'en'
  langcode:
    plugin: default_value
    default_value: 'en'
# Absolutely necessary if you don't want an error
migration_dependencies:
  required:
    - event_type

Explanation and summary of the learnings

The key to importing multilingual content in the migrate configuration is the following lines:

destination:
  plugin: entity:taxonomy_term
  translations: true

These configuration lines instruct the Migrate module that a translation should be created.

tid:
  plugin: migration
  source: id
  migration: event_type

This is the real secret. Using the migration process plugin, we maintain the relationship between the term and its translation. The wiring via the tid field makes sure that Migrate API will not create a new term with a new term ID. Instead, the existing term is loaded and the translation of the migrated field is added. And that’s exactly what we need!
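With both configurations imported, a typical run (again using the migrate_tools Drush commands; a sketch) executes the base migration first and then the translation migration:

drush migrate-import event_type
drush migrate-import event_type_en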

Now go ahead and try to create a working example based on my explanation. Happy Drupal migrations!

Nov 30 2016
Nov 30

Our latest site with Drupal Commerce 1.x went live in July 2016: Freitag. Since then we’ve been adding several new commerce-related features, so I feel it’s time to write a wrap-up. The site has several interesting solutions; this article will focus on commerce.

First, a few words about the architecture. platform.sh hosts the site. The stack is Linux + nginx + MySQL + PHP, and the CMS is Drupal 7. Fastly caches HTTP responses for anonymous users and also for authenticated users having no additional role (that is, logged-in customers). The Authcache module takes care of lazy-loading the personalized parts (like the user menu and the shopping cart). Freitag has an ERP system to which we connect using the OCI8 PHP library. We write Behat and SimpleTest tests for QA.

We use the highly flexible Drupal Commerce suite. 23 of the enabled Freitag contrib modules have a name starting with ‘commerce’. We applied around 45 patches to them. Most of the patches are authored by us and 15 of them have already been committed. Even with this commitment to solving everything we could in an open-source way, we wrote 30,000+ lines of commerce-related custom code. Still, in March 2016 Freitag was the 3rd largest Drupal customer contributor.

The words ‘product’ and ‘product variation’ I’ll be using throughout the article correspond to ‘product display node’ and ‘product’ in Drupal Commerce lingo.

ERP

The ERP is the source of all products and product variations. We import this data into Drupal on a regular basis using Feeds. (Now I would use Migrate instead; it’s better supported and easier to maintain.) The ERP also lets Drupal know about order status changes, sends the shipping tracking information and informs Drupal about products sent back to Freitag by customers.

There is data flowing in the opposite direction as well. ERP needs to know about all Drupal orders. Also, we create coupons in Drupal and send them to ERP too for accounting and other reasons.

Emails

We send commerce-related emails using the Message stack. This way we can have order-related tokens in our mails and we can manage and translate them outside the Rules UI. Mandrill takes care of the mail delivery.

Payment gateway

It was a client requirement to use the Swiss Datatrans payment gateway. However, at the time the project started, Commerce Datatrans (the connector module on drupal.org) was in a dev state and lacked several features we needed. Pressed for time, we opted to buy a Datatrans Drupal module from a company offering this solution. It turned out to be a bad choice. When we discovered that the purchased module still did not cover all our needs and looked at the code, we found it was obfuscated and pretty much impossible to change. Also, the module could be used only on one site instance, which made it impossible to use on our staging sites.

We ended up submitting patches to the Commerce Datatrans module hosted on drupal.org. The module maintainer, Sascha Grossenbacher (the well-known Drupal 8 core contributor) helped us solving several issues and feature requests by reviewing our patches. This process has lead to a stable release of Commerce Datatrans with a dozen of feature improvements and bugfixes.

Additional to Datatrans we use Commerce Custom Offline Payments to enable offline store purchases by store staff and bank transfer payments.

Currencies

The site works with 7 different currencies, some of them having two different prices depending on the shipping country. Prices come from ERP and we store them in a field collection field on the product. We do not use the commerce_price field on the product variation.

Tax

Freitag ships to countries all around the world. VAT calculations are performed for EU, Switzerland, UK, Japan, South Korea and Singapore. To implement this functionality our choice fell on the commerce_vat module. Adding commerce_eu_vat and commerce_ch_vat released us from having to maintain VAT rates for EU and Switzerland ourselves. For the 3 Asian countries we implemented our own hook_commerce_vat_rate_info().

We have two different VAT rates for most of the countries. This is because usually a lower VAT rate applies to books. Drupal imports the appropriate VAT rate from the ERP with the product variation data. This information is handled by price calculation rules in Drupal.

Shipping

Freitag delivers its products using several shipping providers (like UPS, Swiss Post) all around the world. Most shipping providers have many shipping rates depending on the destination country, speed and shipped quantity (weight or volume). On checkout the customer can choose from a list of shipping services. This list needs to be compatible with the order.

We used Rules to implement the shipping services in Drupal, based on the Commerce Flat Rate module. To this end we trained our client to set up and maintain these rules themselves. It was not easy: shipping rules are daunting even for experienced commerce developers. First we needed to set up the “Profile Address” Rules components. Then we configured the “Place of Supply” components. We applied these in turn in the condition part of the shipping rules components themselves.

The weakest point of any implementation based on Rules is maintenance. It’s not easy to find a specific rule after you’ve created it, and having 250 Rules components for shipping alone made this feeling stronger.

The shipping line item receives the VAT rate of the product with the highest VAT rate in the order.

Coupons

Freitag has 6 different coupon types. They differ in who can create them, who and where (online/offline) can redeem them, whether partial redemption is possible, whether it’s a fixed amount or percentage discount and whether Freitag accounting needs to know about them or not.

Based on these criteria we came up with a solution featuring Commerce Coupon. Coupons can be discount coupons or giftcards. Giftcard coupons can only have a fixed value. Discount based coupons can also apply a percentage discount. The main difference between them is that customers can partially redeem giftcards, while discount-based coupons are for one-time use.

Making coupons work with VAT was quite tricky. (To make things simpler we only allowed one coupon per order.) Some coupon types work as money, which means that from an accounting point of view they do not actually decrease the order total (and thus the VAT) but work as a payment method. Other coupon types, however, do decrease the order total (and thus the VAT). At the same time, Drupal handles all coupons as line items with a negative price, and the Drupal order total decreases in either case.

The solution we found was to use Commerce Proportional VAT. Axel Rutz maintains this brilliant little module, and he does so in a very helpful and responsive manner. All the module does is add negative VAT price components to coupon line items to account for the VAT decrease. It decreases the order total VAT amounts correctly even if we have several different VAT rates inside the order.

Conclusion

Although there’s always room for increasing the complexity of the commerce part of the site (let’s find some use case for recurring payments!), it’s already the most complicated commerce site I’ve worked on. For this, Drupal Commerce provided a solid foundation that is pleasant to work with. In the end, Drupal enabled us to deliver a system that tightly integrates content and commerce.

I also would like to say thanks to Bojan Živanović from Commerce Guys who provided me with valuable insights on the legal aspects of tax calculation.

Nov 30 2016
Nov 30
Unconference | Erik Erskine 30 November 2016

On Saturday I took a trip up to Manchester for an unconference organised by the North West Drupal user group.

An unconference is like a conference, but without a pre-planned programme. You go, ideally prepared to speak about a topic or lead a discussion, and the schedule is defined on the day. It’s a great environment for anyone new to public speaking, or for trying out a new talk. You needn’t worry about small audiences either - sessions often naturally turn into discussion groups.

The topics were really varied, starting off with a discussion around recruitment and things that get in the way of that process. We discussed cooperation with other conferences, and how we can benefit from attending conferences outside of the Drupal world. On that note we had Brian Teeman from the Joomla project talking about address formats in different parts of the world. We also covered usability testing, Mike Bell explained debugging using his central heating system to illustrate, and JP Stacy talked about the forthcoming Tour de Drupal cycle ride to Vienna.

I gave an abridged version of my offline first talk. It felt a little rushed trying to squeeze a longer talk into 25 minutes, but was a good learning exercise on preparing a talk to be different lengths. It seemed to go well though, and led to some good conversations around progressive web apps afterwards.

The afternoon finished with some very funny presentation poker: 2-minute off-the-cuff talks on aspects of Agile development, with random images as slides!

All in all I had a fun day, learnt new things and met some new people both from the Drupal community and outside.

Thanks to the team that organised it, and to Auto Trader for sponsoring and the top notch lunch!

Nov 15 2016
Nov 15

Our clients are often looking to reach their audiences via email campaigns, and MailChimp is one of the solutions we frequently recommend for this. MailChimp makes it easy to create and manage email campaigns while also providing beneficial analytics on user behavior.

Earlier this year I wrote a blog post showing how to use Composer Manager along with the Mailchimp API v2.0 PHP package to subscribe users to mailing lists in a Drupal 6 or 7 custom module without the need for the Mailchimp contributed module.

However, since then, MailChimp API v3.0 was released and Mailchimp announced that v2.0 (and all prior versions) will no longer be supported after 2016.

So in this blog post, I’ll demonstrate how to accomplish the same objective using the new MailChimp API v3.0, and I’ll expand the tutorial to also include some Drupal 8 specifics.

Background

To quickly summarize the key takeaways from my previous blog posts on Composer Manager and subscribing users to MailChimp lists using the old API:

  • Composer is a tool for managing PHP libraries that your project depends on.
  • Challenges arise managing project-wide dependencies when custom and contributed modules specify their own unique dependencies.
  • Composer Manager is a contributed module for Drupal 7 (and formerly Drupal 6) that addresses these challenges and allows contributed and custom modules to depend on PHP libraries managed via Composer.
  • Using a Composer managed PHP package for the MailChimp API, we can easily subscribe users to MailChimp lists in a Drupal custom module without relying on the Mailchimp module.
  • While the Mailchimp contributed module is great, sometimes all you need is a simple, lightweight method for subscribing users to mailing lists.

One important development since my previous posts is that Composer Manager has been deprecated for Drupal 8. Improvements introduced in Drupal 8.1.0 allow modules to rely on Composer managed dependencies without the need for the Composer Manager module.

Implementation

There are a few steps we must take so that we can subscribe users to mailing lists in our custom module. We’ll review each of these steps in detail:

  • Add the MailChimp API v3.0 PHP library as a dependency of our custom module.
  • Ensure that the library is installed for our project.
  • Properly use the library in our custom module to subscribe users to mailing lists.

Specify the dependency

ThinkShout maintains the Mailchimp contributed module, and we were very excited to see that, as part of the effort to “get Drupal off the island”, they also released a PHP library for MailChimp API v3.0.

To use this new library, we must specify it as a dependency of our custom module. We do that in a composer.json file that sits in our custom module’s root directory and requires that library via the following code:

{
  "require": {
    "thinkshout/mailchimp-api-php": ">=1.0.3"
  }
}

Install the library

Composer is intended for projects and therefore requires a Drupal site to have a single composer.json, so things get complicated when individual modules specify their own dependencies.

For Drupal 7 sites (or still active Drupal 6 sites), the Composer Manager contributed module handles this by merging the requirements specified by each custom and contributed module’s composer.json files into a single, consolidated, site-wide composer.json file.

So for Drupal 6/7 projects we’ll need Composer Manager installed and enabled.

Once enabled, we can generate the consolidated composer.json and then install all of the site’s dependencies that file specifies (including the MailChimp API v3.0 PHP library specified by our custom module) in one of two ways:

From the command line, we can run the following drush commands:

$ drush composer-json-rebuild
$ drush composer-manager install

Alternatively, we could include the following lines in an update hook:

// Re-build Composer Manager composer.json and run composer update.
drush_composer_manager_composer_json_rebuild();
drush_composer_manager('update');

For Drupal 8 sites, the process is slightly different. As mentioned previously, as of release 8.1.0, Drupal core directly uses Composer to manage dependencies and the Composer Manager module is no longer necessary. For Drupal 8 sites, we should follow the Drupal.org instructions for managing dependencies for a custom project. Following those instructions ensures that all of the site’s dependencies, including the MailChimp library specified by our custom module, are installed.
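In practice, for a Composer-managed Drupal 8 project this boils down to requiring the library from the project root, for example (version constraint taken from our module’s composer.json):

composer require "thinkshout/mailchimp-api-php:>=1.0.3"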

Use the library

Once we have the MailChimp API v3.0 PHP library installed, we can use it in our custom module to subscribe users to mailing lists.

We suggest creating a dedicated function for subscribing users to email lists, which can then be called throughout the custom module. For our purposes, we modeled that function on the Mailchimp module’s (version 7.x-4.6) mailchimp_subscribe_process() function.

We implemented the following function, which can be reviewed and modified for your specific purposes:

<?php
/**
 * Add an email to a MailChimp list.
 *
 * This code is based on the 7.x-4.6 version of the Mailchimp module,
 * specifically the mailchimp_subscribe_process() function. That version of
 * the Mailchimp contrib module makes use of the ThinkShout PHP library for
 * version 3.0 of the MailChimp API. See the following for more detail:
 * https://www.drupal.org/project/mailchimp
 * https://github.com/thinkshout/mailchimp-api-php.
 *
 * @see Mailchimp_Lists::subscribe()
 *
 * @param string $api_key
 *   The MailChimp API key.
 * @param string $list_id
 *   The MailChimp list id that the user should be subscribed to.
 * @param string $email
 *   The email address for the user being subscribed to the mailing list.
 */
function mymodule_subscribe_user($api_key, $list_id, $email) {

  try {
    // Set the timeout to something that won't take down the Drupal site:
    $timeout = 60;
    // Get an instance of the MailchimpLists class.
    $mailchimp = new \Mailchimp\MailchimpLists($api_key, 'apikey', $timeout);

    // Use MEMBER_STATUS_PENDING to require double opt-in for the subscriber. Otherwise, use MEMBER_STATUS_SUBSCRIBED.
    $parameters = array(
      'status' => \Mailchimp\MailchimpLists::MEMBER_STATUS_PENDING,
      'email_type' => 'html',
    );

    // Subscribe user to the list.
    $result = $mailchimp->addOrUpdateMember($list_id, $email, $parameters);

    if (isset($result->id)) {
      watchdog('mymodule', '@email was subscribed to list @list.',
        array('@email' => $email, '@list' => $list_id), WATCHDOG_NOTICE
      );
    }
    else {
      watchdog('mymodule', 'A problem occurred subscribing @email to list @list.', array(
        '@email' => $email,
        '@list' => $list_id,
      ), WATCHDOG_WARNING);
    }
  }
  catch (Exception $e) {
    // The user was not subscribed so log to watchdog.
    watchdog('mymodule', 'An error occurred subscribing @email to list @list. Status code @code. "%message"', array(
      '@email' => $email,
      '@list' => $list_id,
      '%message' => $e->getMessage(),
      '@code' => $e->getCode(),
    ), WATCHDOG_ERROR);
  }
}

With that function defined, we can then subscribe an email address to a specific Mailchimp mailing list through the following function call in our custom module:

mymodule_subscribe_user($api_key, $list_id, $email);
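In practice that call usually lives somewhere like a form submit handler. The following is a hypothetical sketch (the form callback and variable names are illustrative, not part of the Mailchimp library or any contrib module):

<?php
/**
 * Submit handler for a hypothetical newsletter signup form.
 */
function mymodule_signup_form_submit($form, &$form_state) {
  // In a real module, store the API key and list id in configuration
  // (e.g. via variable_set()) rather than hard-coding them.
  $api_key = variable_get('mymodule_mailchimp_api_key', '');
  $list_id = variable_get('mymodule_mailchimp_list_id', '');
  mymodule_subscribe_user($api_key, $list_id, $form_state['values']['email']);
}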

Conclusion

By taking advantage of the modern PHP ecosystem built on reusable, Composer-managed packages, we can easily build or adapt a custom module to subscribe users to mailing lists without the MailChimp contributed module.

Lastly, a special thanks to ThinkShout for their hard work maintaining the MailChimp module and creating the library, on which this approach depends!

Oct 24 2016
Oct 24

In this blog post I will present how, in a recent e-commerce project built on top of Drupal 7 (the previous major version of the Drupal CMS), we made Drupal 7, SearchAPI and Commerce play together to efficiently retrieve grouped results from Solr in SearchAPI, with no duplication of indexed data.

We used the SearchAPI and FacetAPI modules to build a search index for products. So far so good: available products and product variations could be searched and filtered using a set of pre-defined facets. Later, a new need arose from our project owner: provide a list of products where each result includes, in addition to the product details, a picture of one of the available product variations, while keeping the ability to apply facets to the listing. Furthermore, the product variation picture displayed in the list must also match the filters applied by the user, so as not to confuse users and to provide a better user experience.

An example use case here is simple: allow users to browse the list of available products and filter it by the color, size, or similar fields of the available product variations, while displaying a picture of a variation that actually matches the filters rather than a generic sample picture.

For the sake of simplicity, and for consistency with the Drupal Commerce module’s terminology, I will use the term “Product” to refer to any product variation, and the term “Model” to refer to a product.

Solr Result Grouping

We decided to use Solr (the well-known, fast and efficient search engine built on top of the Apache Lucene library) as the search backend of the e-commerce platform: not only for its full-text search features, but also because it let us build a fast retrieval system for the huge number of products we expected to be available online.

To solve the requirement around displaying product models, facets and available products, I intended to use a Solr feature called Result Grouping, which seemed suitable for our case: Solr can return just a subset of results by grouping them on a single-valued field (previously indexed, of course). The facets can then be configured to be computed from the grouped set of results, from the ungrouped items, or just from the first result of each group.
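Purely as an illustration of the raw Solr parameters involved (the core name products and the field names model_id and color are placeholders, not values from the project), a grouped query with facets might look like this:

$ curl "http://localhost:8983/solr/products/select?q=*:*&group=true&group.field=model_id&group.limit=1&group.ngroups=true&facet=true&facet.field=color"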

This handy Solr feature can be used in combination with the SearchAPI module by installing the SearchAPI Grouping module. The module allows returning results grouped by a single-valued field while still building the facets from all the results matched by the query; this behavior is configurable.

That allowed us to:

  • group the available products by the referenced model and return just one result per model;
  • compute the attribute facets over the entire collection of available products;
  • reuse the data in the product index for multiple views based on different grouping settings.

Result Grouping in SearchAPI

Due to some limitations of the SearchAPI module and its query-building components, this plan was not feasible with the existing configuration: it would have required us to create a copy of the product index for each view, just to apply that view’s specific Result Grouping settings.

The reason is that the SearchAPI Grouping module is built on top of SearchAPI’s “Alterations and Processors”. These are sets of functions that the SearchAPI module can configure and invoke both at indexing time and at query time: Alterations programmatically alter the content sent to the underlying index, while Processors run when a search query is built and executed and when its results are returned. Both can be defined and configured only per index.
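To make the query-time side of that API concrete, here is a minimal, hypothetical Drupal 7 processor skeleton (the module, class and field names are illustrative; only the hook and base class come from SearchAPI itself):

<?php
/**
 * Implements hook_search_api_processor_info().
 */
function mymodule_search_api_processor_info() {
  return array(
    'mymodule_grouping' => array(
      'name' => t('Example grouping processor'),
      'description' => t('Alters the query sent to the server and handles grouped results.'),
      'class' => 'MymoduleGroupingProcessor',
    ),
  );
}

/**
 * Example processor running at query time.
 */
class MymoduleGroupingProcessor extends SearchApiAbstractProcessor {

  /**
   * Alter the query before it is sent to the server.
   */
  public function preprocessSearchQuery(SearchApiQuery $query) {
    // For example, flag the query so a backend alter hook can add the
    // grouping parameters for a hypothetical "model_id" field.
    $query->setOption('mymodule_grouping_field', 'model_id');
  }

  /**
   * Post-process the raw results returned by the server.
   */
  public function postprocessSearchResults(array &$response, SearchApiQuery $query) {
    // For example, re-map grouped results into the flat structure that
    // Views expects.
  }
}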

As shown in the following image, the SearchAPI Grouping module could therefore only be configured on the index, not per query.

Image 1: SearchAPI configuration for the Grouping Processor.

Because the SearchAPI Grouping module is implemented as a SearchAPI Processor (it needs to alter the query sent to Solr and to handle the returned results), it would have forced us to create a new index for each different result grouping configuration.

Such a limitation would have introduced a lot of useless data duplication in the index, with a consequent decrease in performance whenever products are saved and then indexed into multiple indexes. The duplication is especially hard to justify because the changes performed by the Processor are merely an alteration of:

  1. the query sent to Solr;
  2. the handling of the raw data returned by Solr.

In other words, there is no need to index the same data multiple times.

Since the possibility of defining processors per query sounded really promising, and since such a feature could be used extensively in the same project, a new module was implemented and published on Drupal.org: the SearchAPI Extended Processors module. (Thanks to SearchAPI’s maintainer, DrunkenMonkey, for the help and review!)

The Drupal SearchAPI Extended Processors module

The new module extends the standard SearchAPI Processor behavior, letting administrators configure the execution of SearchAPI Processors per query rather than only per index.

With the new module, any index can now be used with multiple different Processor configurations; no new indexes are needed, which avoids data duplication.

The new configuration is exposed, as shown in the following image, when editing a SearchAPI view under “Advanced > Query options”. The SearchAPI processors can be altered and redefined for the given view, and a checkbox allows completely overriding the current index settings rather than merely adding processors to them.

Image 2: View’s “Query options” with the SearchAPI Extended Processors module.

Conclusion: the new SearchAPI Extended Processors module has now been used for a few months in a complex e-commerce project at Liip and has allowed us to easily implement new search features without creating multiple, separate indexes.
We are able to index product data in one single (and compact) Solr index, and use it with different grouping strategies to build product listings, model listings and model-category navigation pages without duplicating any data.
Since all those listings leverage Solr’s filter query (fq) parameter to select the correct set of products to display, Solr can make use of its internal caches, specifically the filterCache, to speed up subsequent searches and facets. This, in addition to the use of only one index, allows caches to be shared among multiple listings, which would not be possible if separate indexes were used.
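As a hypothetical illustration (core and field names are placeholders), two listings built on the same index might issue the following queries; since they share the fq filter on status, Solr can answer that shared filter from its filterCache instead of recomputing it:

$ curl "http://localhost:8983/solr/products/select?q=*:*&fq=status:1&fq=color:red"
$ curl "http://localhost:8983/solr/products/select?q=*:*&fq=status:1&fq=category:shoes"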

For further information, questions or curiosity, drop me a line; I will be happy to help you configure Drupal SearchAPI and Solr for your needs.

Sep 23 2016
Sep 23

Here at Savas Labs, we listen to our clients’ needs, and what many of our clients need is to reach their target audiences effectively. Let’s be honest: a perfectly coded, pretty-looking website will be of little use if it doesn’t produce leads, increase brand awareness, facilitate conversions or generate revenue! So how do we help our clients achieve their goals? We balance website objectives while giving priority to lead generation via SEO. The more quality traffic that comes to the website, the more conversions we can achieve. It’s that simple!

There are multiple ways of generating traffic to a website. The most popular methods are SEO (Search Engine Optimization) and PPC (Pay Per Click). Both bring traffic through search engines. The difference between the two is that SEO brings long-term results boosting organic traffic while PPC helps marketers achieve short-term goals by gaining instant exposure throughout the duration of an ad campaign.

In this post I’ll share some insight about current SEO trends. I’ll also describe new features of Drupal 8 that make it the most SEO-friendly content management framework available today.

Let’s start by taking a look at what exactly Search Engine Optimization is.

What is SEO?

Search Engine Optimization (SEO) is a marketing discipline focused on optimizing a website’s architecture and content so that it performs well (read “ranks high”) in organic search engine results.

Search engines (Google, Bing, etc.) change their search algorithms many times throughout the year. There are over 200 ranking factors, and they are adjusted with every algorithmic change. It is worth noting that Google, the leading search engine, is steadily growing its market share both worldwide and in the United States, for desktop and mobile search alike (see chart below).

Search Engine Market Share 2016

Google’s dominance in web search makes it clear that in 2017 marketers should pay close attention to Google’s algorithmic updates in order to stay ahead of the curve. Standards introduced to SEO by Google are likely to satisfy all other search engines. Given this reality, there are many techniques that website owners can use to optimize their digital property for search engine consumption. So where do we start? How do we know what efforts will bring us the best Return on Investment (ROI) and let our marketers do their job effectively in the long run?

SEO Outlook 2017

The Savas Labs team stays dialed in on current Search Engine Optimization trends. By leveraging aggregated research data and first-hand experience, we’ve developed a solid, yet constantly evolving, foundation of currently effective marketing methods.

Here are four ranking factors that we’ve identified as being most important as of Q4 2016. Our forecast is that these four factors will likely remain at the top of the list throughout 2017.

1. Content

Content is still king! Yes, that’s right. It is and it always will be! You’ve got to be relevant in order to even appear in search. And nothing will make you more relevant than carefully crafted, practical, awesome, juicy, shareable, actionable (you name it) CONTENT! It is important to note that marketers should stop thinking of content as purely text and focus their efforts on providing visual content that supports storytelling, is engaging and matches user intent.

2. Backlinks

Not just any good ol’ link to your website, though. Good backlinks come from high-authority domains in the same niche as your website. Strong backlinks bring quality traffic and are therefore considered highly desirable for your SEO cause.

3. Responsive Design

With more people using their handheld devices to browse the internet, it has become increasingly important to make a website look good across multiple platforms (smartphone, tablet, etc.). It is not an option in 2017 - it is a necessity! While we won’t get into the notoriously labeled Google algorithm update “Mobilegeddon” that happened in April 2015, we will provide some interesting statistics to back up the importance of responsive design.

There are more mobile internet users than desktop internet users; 52.7% of global internet users access the internet via mobile, and 75.1% of U.S. internet users access the internet via mobile.

4 out of 5 consumers use a smartphone to shop.

4. Page Speed

In response to the substantial growth in mobile traffic, search engines have acknowledged the importance of page speed and its effect on user experience (UX), and now give more weight to fast-loading websites.

40% of people abandon a website that takes more than 3 seconds to load.

A 2-second delay in load time during a transaction can result in abandonment rates of up to 87%, significantly higher than the baseline abandonment rate of 67%.

Can your business handle the loss in revenue that may occur from slow page load speed?

Drupal 8 - Built with SEO in Mind

The base of all our SEO efforts lies within the website’s architecture. There are many website engines and CMSs to choose from, and most of them will claim to be SEO-optimized. Don’t be fooled! No CMS comes search engine optimized out of the box. It may have some features which, if configured correctly, may bring you some SEO benefits. SEO is not only about code, though it does start there. SEO is also about the continuous efforts of your marketing team. We all know that time = money. The more efficiently your marketing team can perform tasks within your CMS, the more ROI you get!

A good CMS must provide the means for your marketing team to work independently of your development team. Drupal 8 does just that! It provides a solid framework that can be tuned into a powerful marketing machine.

Let’s take a look at some of the new features in core that make search engines love Drupal 8.

Drupal 8 is Responsive out of the Box

Drupal 8 comes with responsive themes in core. Both the public-facing and admin-facing themes are responsive, making the user experience great on any device.

Drupal 8 Page Load is Fast

There has been a lot of debate about Drupal 8 vs. Drupal 7 performance and page load since Drupal 8’s release. It is a fact that vanilla Drupal 8 runs much more code than vanilla Drupal 7: it runs vendor code like Symfony, which adds some overhead. However, Drupal 8 includes a significant number of performance improvements that make up for that overhead:

  • JavaScript files now load in the footer. Thanks to this change, pages render sooner and users can see and use them earlier.

  • Pluggable CSS/JS aggregation and minification, which supports better optimization algorithms.

  • Highly improved caching. Drupal 8 uses “cache tags”, which make caching more efficient, and includes the Cache Context API, which provides context-based caching. This means pages load faster while visitors always see the latest version of your site.

  • The BigPipe render pipeline, which sends pages in a way that allows browsers to show them much faster: the cacheable parts of the page are sent first, followed by the dynamic/uncacheable parts.

These improvements have the potential to make your Drupal 8 website fly! And if after all that it is still not “flying”, then you need someone to review the code that powers your website’s features. Contact us.

Semantic Markup

Search engines appreciate clean markup that explicitly describes the purpose of on-page elements. Thanks to the HTML5 Initiative for Drupal 8 development, we now have a number of great markup improvements right in Drupal core:

  • HTML5 themes with new semantic elements in core templates

  • Support for the new HTML5 form elements in Drupal’s Form API

  • Rich media handling with <video> and <audio> elements

  • ARIA roles in markup to improve accessibility

  • Resource Description Framework (RDF) support that provides a standardized model for data interchange and facilitates Schema.org mappings

  • Twig theming engine - makes it harder for developers to create messy, non-semantic code

Content-as-a-Service

Another exciting new feature of Drupal 8 is flexible content delivery.

Today, content owners want to get their content onto as many platforms and channels as possible: web, mobile, social networks, smart devices, etc. It is expensive to maintain a separate solution for every channel; it is much more efficient to have a single editorial team and a single software platform for well-organized content management. Drupal 8 and its content-as-a-service capability provide a one-stop solution where content is created and managed via a unified web interface and then consumed by other channels with minimal effort.
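As a trivial sketch of what that looks like in practice, with Drupal 8’s core REST module enabled and configured to expose nodes, any channel can fetch a piece of content as JSON over HTTP (the domain and node ID below are placeholders):

$ curl "https://example.com/node/1?_format=json"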

Drupal 8 Multilingual Capabilities

To reach audiences around the world, companies need to speak to users in their native language. In 2017, producing content in English alone is not enough, even if English is considered an internationally accepted language. The United States is now the world’s second-largest Spanish-speaking country after Mexico, which amplifies the need to serve multilingual content to a U.S.-based audience. To help put things in perspective, we checked recent statistics.

English is the #1 language used on the web, but it only accounts for 26.3% of the online market.

There are 41 million native Spanish speakers in the U.S., and around 79% of them use search engines daily to gather information about a future purchase.

Reaching a global audience with Drupal has never been easier! Previous versions of Drupal had only partial support for multilingual websites. Luckily, Drupal 8’s multilingual system received a fundamental overhaul: every single component is translatable out of the box in Drupal core without any additional modules, and core natively supports 94 languages. The administration interface is now entirely translatable, and media assets (files or images) can be assigned to a language or shared between languages. This gives a huge advantage to businesses that aim to reach a global audience.

SEO for Drupal 8 is off to a good start with just the core features! Drupal 8 also has a growing number of contributed modules that can amplify your SEO efforts. Just to name a few: Metatag, Google Analytics, Pathauto, Redirect, and more.

Drupal 8 satisfies current SEO trends, enabling marketers to do their jobs effectively and efficiently! Even with minimal configuration, Drupal 8 lays a solid base for future marketing performance.
