May 01 2018

A lot of universities use Drupal in some capacity. Universities don't typically have just one site; they're made up of a ton of different pieces put together for course registrations and calendars and events and alumni and so on. So a couple of those pieces might use Drupal. Or one or two departments might use Drupal even if others do not.

Many educational institutions like Drupal because it's open source. Universities are often publicly funded and favor open stuff more than proprietary products. Plus, they need to manage a ton of content by a ton of different people, so they need a really big robust CMS.


Introducing OpenEDU 3.0

The new OpenEDU 3.0 is a Drupal distribution set up for educational institutions. The older version was mostly a set of custom configurations, whereas 3.0 actually has unique functionality. It has analytics and monitoring built right into it, for instance. There's a new analytics dashboard that allows a central admin to see what's going on in all the different sections without having to check a whole bunch of different accounts, which is pretty cool. There's also new functionality related to content management, workflows and the editing flows that universities need to handle.

OpenEDU is also being integrated with Drupal Commerce (keep an eye on commercekickstart.com), so you can have both of them together.

The Commerce Disconnect

Strangely, a ton of universities are using Drupal, but they are not using Commerce. Even those that use Drupal and do ecommerce are typically using pretty terrible, antiquated systems, if they have a system at all.

Lack of awareness is a big factor in this. A lot of universities are so focused on the publishing end that they don't even think about commerce. Another stumbling block is security—they don't want to deal with the compliance issues around online payments, so they just keep doing what they're doing (i.e. accepting cash or taking credit card details over the phone, which is even less secure).

The reality is that businesses or organizations within a university could really benefit from using Commerce, particularly if they already use Drupal. They could just tack on a bit of Commerce and easily sell club memberships and accept donations (remember: Commerce has a built-in point of sale). There could be one central system that IT could maintain and keep secure, and everyone could still spin up their own customized version of it.

TL;DR: Educational institutions already use Drupal, so they should really adopt Drupal Commerce to replace their old, antiquated payment systems.

More from Acro Media

Chat with us

Our team understands that one size does not fit all, especially in the education space, so we listen and work together to bring your students and staff the most secure and integrated open source solution available in the Commerce arena. Contact us today to discuss how Drupal Commerce can fit in with your existing systems.

Contact Acro Media Today!

May 01 2018

Drupal is a big deal, and Bootstrap is too - but they are entirely separate projects, so where do we stand? What follows is an overview of the current relationship between Drupal and Bootstrap 3 and 4, along with my thoughts on it all.

Bootstrap 3

With all previous large scale Drupal projects, we have used Bootstrap 3. It has worked well as an all-round solid framework with a good structure for handling mobile and desktop styling. The grid system is the most useful and helpful thing about it, saving us time and brain power for the things that matter more to the particularities of the site design.

However, there are some downfalls to using Bootstrap's framework out of the box:

  • You are tied down to some of the styles for buttons, form controls etc. due to the complexity of the SCSS/LESS.
  • Parts of the grid system become a hindrance for things like equal-height columns, and things can get a bit mixed up with other approaches.
  • It can become something that you "just use" rather than understand.

There is also the fact that, being a distributed theme, Bootstrap 3 is considered contrib code. That means the core of the theme should technically not be modified, because those changes will all be lost on update. So anything custom needs to override or go on top of the contrib theme, and this can get messy. The theme does provide hooks and theme alters to tailor some of the markup requirements and classes while keeping the theme itself intact - the theme being pure Bootstrap. But you rarely see the benefit of these without a good understanding of the way Drupal renders things, and of how that may interfere with the expected relationship between Bootstrap's markup and its styling.

A lot of the problems faced when theming some of Drupal's strange markup can be handled independently of Bootstrap - and sometimes are, even where Bootstrap provides some or half of the help. Drupal wasn't developed alongside Bootstrap, so a lot of the work Bootstrap 3's theme does goes unused or unnoticed. In many ways it is a theme best suited to quickly theming up a site without writing any CSS - so if you have a specific design, why use it at all? The best example of a Drupal website that makes good use of the theme is the project's own documentation site.

Nevertheless, the positives generally outweigh the negatives:

  • A lot of time saved, since you don't have to create a grid system specific to your site.
  • The theme typically follows best practices and has knowledgeable developers involved in improving aspects of it.
  • It can teach you a lot about the way Drupal's theme layer works and how hooks and alters are used (though the ones used in the contrib theme are often not needed).


Bootstrap 4

In recent months, Bootstrap has been updated to version 4, where quite a bit has changed. The main differences are:

  • Moved to using Sass by default.
  • Bigger fonts and headings.
  • No glyphicons :)
  • The underlying architecture now uses rem and em rather than px, and the grid system uses flexbox - which is good for equal-height columns.
  • Several components (modals, list groups, navs) have been rewritten to use flexbox.
  • More grid breakpoints and bumped-up sizes (.col-md-6 in v3 is now .col-lg-6 in v4).
  • Instead of .col-xs-6, it’s now just .col-6
  • You can specify specific gutter widths at each breakpoint.
  • Grid breakpoints and container widths are now handled via Sass maps ($grid-breakpoints and $container-max-widths).
  • Mixins for media queries, so instead of writing @media (min-width: @screen-sm-min) { ... }, you can write @include media-breakpoint-up(sm) { ... }.
  • Simpler classes, with more of them tailored for larger screens.
  • "Cards" replace wells and panels and are nicely customizable. They are light and the markup is simple.
  • The navbar has changed significantly - classes are simplified and responsiveness choices improved.
  • Forms have been revisited, with more specificity in how form controls, sizes, containers etc. are handled.
  • The JS now uses ES6, which doesn't really affect much here because we wouldn't touch the source code.
  • The Affix jQuery plugin has been dropped.
  • Nearly all components have been refactored to use more un-nested class selectors instead of over-specific child selectors.
  • Support for IE8 and IE9 has been dropped.
  • The compiled CSS is roughly 30% lighter.
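
To make the Sass-related points concrete, here is a rough sketch of migrating a v3-style media query to v4. The `.sidebar` selector and the import paths are made up for the example; the `media-breakpoint-up` mixin and `$grid-breakpoints` map are Bootstrap 4's documented defaults:

```scss
// Assumes Bootstrap 4's Sass sources are importable from your build.
@import "bootstrap/scss/functions";
@import "bootstrap/scss/variables";
@import "bootstrap/scss/mixins";

// Bootstrap 3 (LESS) equivalent:
// @media (min-width: @screen-sm-min) { .sidebar { display: block; } }

// Bootstrap 4 (Sass): the same rule via the breakpoint mixin.
.sidebar {
  display: none;

  @include media-breakpoint-up(sm) {
    display: block;
  }
}
```

Because the breakpoints live in the `$grid-breakpoints` Sass map, overriding that map before the imports re-tunes every mixin call at once.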


In the Drupal community, what sites have been built with Bootstrap 4, if any? I can't find any.

As you can see, the list above is quite long and - even without thinking about Drupal - it represents quite a big change. So whether or not to use it with Drupal is kinda scary.

What does the Drupal Bootstrap Theme give us that we would need to replicate in a Bootstrap 4 project?

It gives a lot of integration with how Drupal works and renders things like forms, sidebars, nav tabs etc., via hooks and alters using Drupal's API.

At the moment, Bootstrap 4 in a Drupal project would essentially be a completely custom, "on top" theme. Everything the Drupal Bootstrap project (https://drupal-bootstrap.org/api/bootstrap) provides would have to be written from scratch, much as you would when building on another base theme such as Classy.

This means that a developer's cognitive load would be increased for two reasons: 1) learning Bootstrap 4, and 2) writing your own hooks and alters to modify markup etc.
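
To give a feel for point 2, here is the kind of small alter you would now write yourself in a custom theme's .theme file. This is purely a hedged sketch - THEMENAME and the class choices are placeholders - showing the sort of markup integration the Bootstrap 3 contrib theme used to do for you:

```php
<?php

/**
 * Implements hook_preprocess_input__submit().
 *
 * Adds Bootstrap 4 button classes to submit buttons, since there is no
 * contrib theme doing this integration for us.
 */
function THEMENAME_preprocess_input__submit(array &$variables) {
  $variables['attributes']['class'][] = 'btn';
  $variables['attributes']['class'][] = 'btn-primary';
}
```

Multiply that by every form control, table, pager and nav element, and you can see where the extra effort goes.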

So, should you use Bootstrap 4 for a new Drupal project?

Yes if:

  • You want a change from Bootstrap 3 and are fond of all the changes above, at the cost of no Drupal integration.
  • You are willing to learn Bootstrap 4 and implement/integrate it with Drupal.

No if:

  • You find the hooks and alters that the current Drupal Bootstrap project provides extremely useful.
  • You don't have enough time to learn Bootstrap 4 properly.

Summary

  • Using Bootstrap for Drupal projects is a good idea, but beware of the constraints.
  • There's a big enough jump between v3 and v4 that it'll require some learning time before diving in headfirst.
Apr 25 2018

Here are some random useful snippets for dealing with caches in Drupal 8, just because I keep having to dig them up from the API.

I’ll try to add more here as I go.

use Drupal\Core\Cache\CacheBackendInterface;

// Set a cache item that expires at a given timestamp.
\Drupal::cache()->set('cache_key', 'cache_data', $expiration_timestamp);

// Set a permanent cache item
\Drupal::cache()->set('cache_key', 'cache_data', CacheBackendInterface::CACHE_PERMANENT);

// Set a permanent cache item with tags.
\Drupal::cache()->set('cache_key', 'cache_data', CacheBackendInterface::CACHE_PERMANENT, array('tag_one', 'second_tag'));

// Fetch an item from the cache
$cache = \Drupal::cache()->get('cache_key');
if (!empty($cache->data)) {
  // Do something with $cache->data here.
}

// Invalidate a cache item
\Drupal::cache()->invalidate('cache_key');

// Invalidate multiple cache items
\Drupal::cache()->invalidateMultiple($array_of_cache_ids);

// Invalidate specific cache tags
use Drupal\Core\Cache\Cache;

Cache::invalidateTags(['config:block.block.YOURBLOCKID', 
  'config:YOURMODULE.YOURCONFIG', 'node:YOURNID']);

// Note that the invalidation functions also exist for deleting caches,
// by just replacing invalidate with delete.

// Flush the entire site cache.
drupal_flush_all_caches();
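
One pattern worth spelling out, since it combines the get and set calls above: only compute a value on a cache miss. This is a hedged sketch rather than a drop-in - the cache ID, the tag and the expensive_calculation() helper are made up:

```php
<?php

use Drupal\Core\Cache\CacheBackendInterface;

/**
 * Returns a computed value, caching it permanently on first use.
 */
function mymodule_get_expensive_data() {
  $cid = 'mymodule:expensive_data';
  // Cache hit: return the stored data without recomputing.
  if ($cache = \Drupal::cache()->get($cid)) {
    return $cache->data;
  }
  // Cache miss: compute, then store with a tag so it can later be
  // cleared via Cache::invalidateTags(['mymodule_data']).
  $data = expensive_calculation();
  \Drupal::cache()->set($cid, $data, CacheBackendInterface::CACHE_PERMANENT, ['mymodule_data']);
  return $data;
}
```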

The end!

Apr 25 2018

Mission: you have 2 fields in a Drupal paragraph bundle, one a node reference field and one a boolean field. Show certain fields in the referenced node depending on the value of the boolean field.

That's a question that popped up today in the DrupalTwig Slack. Here's my response, which I implemented a version of recently. (In that particular case, we had an 'Event' content type with fields for 'address', 'phone number', etc. and also a reference field for 'Amenity'. If the fields were filled in on the event content type, they were presented; but if they were left blank, we had to pull in the corresponding fields for address, phone number, etc. from the referenced amenity.) Anyway, my response:

{# Check the value of the boolean field #}
{% if paragraph.field_boolean.value == '1' %}
  {# Just render the title of the referenced node #}
  {{ paragraph.field_reference.0.entity.label }}
{% else %}
  {# Render the title and the image field #}
  {{ paragraph.field_reference.0.entity.label }}
  {{ paragraph.field_reference.0.entity.field_image.url }}
{% endif %}

{# Ensure that the cache contexts can bubble up by rendering the {{ content }} variable #}
{{ content|without('field_boolean', 'field_reference') }}

Just for clarity - variables in that code snippet are simply made up off the top of my head (this is what happens when answering questions on Slack). I'm sure I have things slightly wrong and you'll need to play with them to get them to work correctly.

Also, the reason for the cache contexts bit? Say thanks to Lee Rowlands from PreviousNext for his blog post Ensuring Drupal 8 Block Cache Tags bubble up to the Page.

Apr 23 2018

There will be a security release of Drupal 7.x, 8.4.x, and 8.5.x on April 25th, 2018 between 16:00 - 18:00 UTC. This PSA is to notify that the Drupal core release is outside of the regular schedule of security releases. For all security updates, the Drupal Security Team urges you to reserve time for core updates at that time because there is some risk that exploits might be developed within hours or days. Security release announcements will appear on the Drupal.org security advisory page.

This security release is a follow-up to the one released as SA-CORE-2018-002 on March 28.

  • Sites on 7.x or 8.5.x can immediately update when the advisory is released using the normal procedure.
  • Sites on 8.4.x should immediately update to the 8.4.8 release that will be provided in the advisory, and then plan to update to 8.5.3 or the latest security release as soon as possible (since 8.4.x no longer receives official security coverage).

The security advisory will list the appropriate version numbers for each branch. Your site's update report page will recommend the 8.5.x release even if you are on 8.4.x or an older release, but temporarily updating to the provided backport for your site's current version will ensure you can update quickly without the possible side effects of a minor version update.

Patches for Drupal 7.x, 8.4.x, 8.5.x and 8.6.x will be provided in addition to the releases mentioned above. (If your site is on a Drupal 8 release older than 8.4.x, it no longer receives security coverage and will not receive a security update. The provided patches may work for your site, but upgrading is strongly recommended as older Drupal versions contain other disclosed security vulnerabilities.)

This release will not require a database update.

The CVE for this issue is CVE-2018-7602. The Drupal-specific identifier for the issue will be SA-CORE-2018-004.

Neither the Security Team nor any other party is able to release any more information about this vulnerability until the announcement is made. The announcement will be made public at https://www.drupal.org/security, over Twitter, and by email for those who have subscribed to our email list. To subscribe to the email list: log in on Drupal.org, go to your user profile page, and subscribe to the security newsletter on the Edit » My newsletters tab.

Journalists interested in covering the story are encouraged to email [email protected] to be sure they will get a copy of the journalist-focused release. The Security Team will release a journalist-focused summary email at the same time as the new code release and advisory.
If you find a security issue, please report it at https://www.drupal.org/security-team/report-issue.

Apr 20 2018

We hope you find this an interesting and useful article. However, if you just want the quick and dirty facts...

(TL;DR - jump to takeaways)

 

UPDATE:

A week after this article was published, another critical Drupal core security update was released - SA-CORE-2018-004 - on Wednesday, April 25, 2018. It's a great time to be a Thinkbean client, as every client was (yet again) already protected against this exploit - even before it became public knowledge. If you are not a client, go talk with your developer right now and make certain your Drupal instance has had this latest vulnerability patched. Then, come back and read this article. If you don't have a reliable, professional Drupal development company monitoring your site, contact us now.

 

What is it?

As you may have gathered from the “2.0” portion of the title… for the second time in less than four years, a major vulnerability in Drupal core has been announced. As was the case in 2014 with SA-CORE-2014-005 (aka “Drupalgeddon”), the latest security announcement, SA-CORE-2018-002 (aka “Drupalgeddon 2.0”), is highly critical (25/25 on the Drupal Security Team (DST) “Security risk” score). Both vulnerabilities allow for remote code execution by an attacker. If you are technically-inclined (and are really interested in some of the technical details behind this vulnerability), check out this article from Check Point Research.

 

How was it discovered?

Drupal is (still) one of the most secure platforms available on the web today. Ironically, this is because of, not in spite of, these recent security releases. Part of the reason for this is the thousands of contributors who regularly work with it. This particular vulnerability was discovered by Jasper Mattsson as part of general Drupal security research. Jasper works for a Drupal development company in the Netherlands.

 

What does it mean?

Basically, this means your Drupal 6, 7 and/or 8 site has a flaw in the core programming of Drupal itself, which allows anyone who knows about it to take over your site. There are several reasons why these vulnerabilities have been treated with such gravity:

  • It’s very easy for an attacker to leverage the vulnerability.

  • There is no privilege level required for an exploit to succeed.

  • All data (including non-public data) could be exposed/accessed.

  • All data could be modified or deleted.

  • Known exploits already exist “in the wild”.

  • For all intents and purposes, all Drupal sites are affected.

What do I need to do?

Put into the simplest terms, you need to be certain your developer has already updated your site (at the very least, on the day the update was made available). If you do not have a developer and your site was not updated, you need to consider your site compromised. Do you have thoughts of asking your developer to “check and see” if your site has been hacked? On the surface, that’s a reasonable initial thought. However, be advised it could take tens of hours or hundreds of hours to make such a concrete determination. Even then, it may very easily not be discovered. Our strong recommendation would be: don’t waste the resources (time, money, effort).

This is a situation where the quality of your web hosting provider really paid off if you were using a good one. There were a few providers who were already protected from this vulnerability. That is a very short list (Platform.sh, Acquia and Pantheon (there may be others)). Shameless plug: Thinkbean hosts its Drupal web projects only with the aforementioned hosting solution providers. Check out this post from one of our main hosting partners.

Even if your site was hosted on one of the protected hosting platforms, it is absolutely imperative your site be updated… for a number of reasons. It is not important to go into the reasons why you must update even if you are hosting with one of the protected providers. Just know - you need to. Ask your developer to explain the reasons to you if you are truly interested.

 

What if my site is compromised?

If your site needs to be considered compromised because you did not update immediately, you can find some information here. Basically, your only real options are:

  • Replace your current installation with a backup from the time immediately preceding the announcement (preferably, no more than 12 hours prior).

  • Re-build the site.

  • Close the site down and leave it closed.

These may seem like extreme options. However, the reality is if you did not update in time and you continue to allow your site to run, you are not only jeopardizing the safety of all the information on your own website (public and private) but you are also very likely to be an unwitting accomplice to the compromising of your users and other websites. Although, since you are now reading this article, you really couldn’t consider yourself an “unwitting accomplice” any longer.

Daily, automated backups are yet another example of why high quality, Drupal-specific hosting is well worth the average costs. If you didn’t upgrade immediately then you should have a recent, complete backup available to restore the site from the day before the announcement.

 

Why would anyone want to hack my little site?

It is important to understand... big or small, you are no exception to this exploit. There is no site on the web today (Drupal or otherwise) which is not on some hacker’s radar, somewhere... more specifically, some hacker’s bot’s radar, somewhere. Ask your developer to tell you how many hacking attempts, per day (yes, that’s per day), your site gets.

We use our own, internally-developed monitoring application on all our client sites which gives us access to a great deal of client site information (such as illicit access attempts). It may take some digging but your developer should be able to give you that information. Trust us, you will be amazed at that number.

If for no other reason, your site will be hacked to have a backdoor installed. That way, an attacker will have undetectable access to your site at-the-ready. Should your site become of interest at any point in the future, they have an easy way in. Even if your site holds no interest, in and of itself, it will be hacked and used as a vector - an access point from which to launch other attacks against more desirable targets.

One of the most common reasons for hacking into a site is to gain access to computing resources (e.g. processing power). Hackers love to take over other machines and launch surreptitious, unrelated attacks on others (e.g. DDOS attacks). Another, recent use-case for this exploit is cryptocurrency mining. There is an interesting article on exactly that topic here.

There are approximately one million active Drupal sites on the web today. One of the Drupal-centric hosting companies mentioned earlier (Acquia) has stated publicly they have been stopping in excess of 100,000 exploit attempts per day – and that’s just one Drupal hosting company. You are not an ostrich. Don’t act like one. You can’t stick your head in the sand and hope you won’t be affected by this. If you do, you’ll be hurting yourself and others.

The takeaways of this article are:

  • If your site was not updated by 04.11.18 (at the latest), consider your site compromised. Restore it from a backup from 03.27.18 (the day preceding the Drupal Security Team public service announcement (PSA)).

  • If you do not have a backup, talk to a developer about your options.

  • In most cases, the best course of action will be to restore the site from a backup or re-build it.

  • Attempting to sanitize your site by asking a developer to determine if you have been hacked is resource-prohibitive (time, effort, money) and will not guarantee your site is secure.

  • With vanishingly few exceptions, this update should be performed by a professional Drupal development company.

  • This is an urgent update. If your site was not updated then contact a developer to determine your next steps.

  • Bots attack any vulnerable site, regardless of “importance” or “size”. Just like search engines, bots crawl the web 24 hours a day, 7 days a week, 365 days a year looking for vulnerabilities (this one in particular).

  • Vulnerable sites will have been hacked if for no other reason than to become vectors.

  • DST risk score 25/25 - the highest possible urgency.

  • The vulnerability is being actively exploited, at present (likely, millions of daily attempts).

  • This vulnerability (as well as Drupalgeddon 1.0) can remain hidden and used when desired.

  • If your site was hosted by one of the three providers mentioned in this article then you were protected from this exploit but you still need to update ASAP.

  • Drupal remains one of (if not the) most secure platforms available today.

  • Professional, Drupal-specific hosting is usually somewhat more expensive but well worth the expense (case in point).

An objective and thorough Thinkbean site audit is like no other.

We'll determine the status of your Drupal site - beyond question.
Set your site (and yourself) at ease.

Protect My Site!
Apr 20 2018

This is the first blog post about how we are building the team portal for Roparun.

But first, what is Roparun? The Roparun is a relay race of over 500 kilometres from Paris and Hamburg to Rotterdam, in which teams take part in an athletic event to raise money for people with cancer. It’s also called an adventure for life. This is clear from the motto, which for years has been: ‘Adding life to days, when days often can’t be added to life’.

So each year Roparun organizes this race, and around 400 teams participate in the event. The first part of the project was to set up the donation functionality, and that is working right now.

The next part of the project is to create a new portal for team captains where they can manage their team data (e.g. the name of the team, the start location and the individual team members). We have chosen to have this in a separate Drupal website.

In CiviCRM each team captain is registered as a participant in the Roparun event with the role of team captain. The team captain can log into the portal as soon as he has been registered, and until the event is over.

The first part of this project was to let the team captains log in, so we created a module called CiviMRF User Sync. This module builds on top of the CiviMRF framework.

This user sync module uses the CiviCRM API to create Drupal user accounts. See the screenshot below for the configuration.

What you can see is that we use a custom API to retrieve the team captains. This custom API returns the email, contact id and team id of the team captain. We store the e-mail address as the username and in the email field on the user record.

As soon as a new team captain is registered, a new user record is created and the team captain receives an e-mail with a link to create a password.

As soon as an existing team captain is removed from CiviCRM, the user account is cancelled and the team captain receives an email indicating that his account has been disabled.

We have also created a Drupal module to store the team id on the Drupal user record and to use this team id in views (see https://github.com/CiviCooP/roparun_team_portal).

So the first bit is done, meaning team captains can log in. The next bit is to build the portal with Drupal Views and Webforms. The building blocks we are going to use for that are CiviMRF Webform, CiviMRF Views and, on the CiviCRM side, the form processor. I will keep you posted on the developments of the next steps.

Filed under

Apr 19 2018

Last week I wrote a Drupal module that uses face recognition to automatically tag images with the people in them. You can find it on Github, of course. With this module, you can add an image to a node, and automatically populate an entity_reference field with the names of the people in the image. This isn’t such a big deal for individual nodes of course; it’s really interesting for bulk use cases, like Digital Asset Management systems.

Automatic tags, now in a Gif.

I had a great time at Drupalcon Nashville, reconnecting with friends, mentors, and colleagues as always. But this time I had some fresh perspective. After 3 months working with Microsoft’s (badass) CSE unit - building cutting edge proofs-of-concept for some of their biggest customers - the contrast was powerful. The Drupal core development team are famously obsessive about code quality and about optimizing the experience for developers and users. The velocity in the platform is truly amazing. But we’re missing out on a lot of the recent stuff that large organizations are building in their more custom applications. You may have noticed the same: all the cool kids are posting about Machine Learning, sentiment analysis, and computer vision. We don’t see any of that at Drupalcon.

There’s no reason to miss out on this stuff, though. Services like Azure are making it extremely easy to do all of these things, layering simple HTTP-based APIs on top of the complexity. As far as I can tell, the biggest obstacle is that there aren’t well defined standards for how to interact with these kinds of services, so it’s hard to make a generic module for them. This isn’t like the Lucene/Solr/ElasticSearch world, where one set of syntax - indeed, one model of how to think of content and communicate with a search-specialized service - has come to dominate. Great modules like search_api depend on these conceptual similarities between backends, and they just don’t exist yet for cognitive services.

So I set out to try and explore those problems in a Drupal module.

Image Auto Tag is my first experiment. It works, and I encourage you to play around with it, but please don’t even think of using it in production yet. It’s a starting point for how we might build an analog to the great search_api framework, for cognitive services rather than search.

I built it on Azure’s Cognitive Services Face API to start. Since the service is free for up to 5000 requests per month, this seemed like a place that most Drupalists would feel comfortable playing. Next up I’ll abstract the Azure portion of it into a plugin system, and try to define a common interface that makes sense whether it’s referring to Azure cognitive services, or a self-hosted, open source system like OpenFace. That’s the actual “hard work”.

In the meantime, I’ll continue to make this more robust with more tests, an easier UI, asynchronous operations, and so on. At a minimum it’ll become a solid “Azure Face Detection” module for Drupal, but I would love to make it more generally useful than that.

Comments, Issues, and helpful PRs are welcome.

Apr 16 2018

Ever gotten this error: User error: “attributes” is an invalid render array key? Here's what I do to get around it. If you've a better solution, let me know.

When building PatternLab-based Drupal themes, I try to get the Twig in PatternLab to match what I expect from Drupal. So, if I know Drupal has a line like this in its node.html.twig:

<article{{ attributes.addClass(classes) }}>

I want to be able to put the same thing into my PatternLab template - even though I am not going to use the {{ attributes }} in PatternLab. This means I can simply let the Drupal template extend from the PatternLab one and not need to worry about anything.

However, when you do this, you will often get an error to say "attributes” is an invalid render array key. How do I get that error message to go away? Simple - I just add attributes to my Pattern's .yml file, like so:

attributes:
  Attribute():
    class:

Note: to use this, you need to have the plugin-data-transform by @aleksip (thanks to Aleksip for pointing this out to me on Slack). This can be added to your composer.json require-dev section:

"aleksip/plugin-data-transform": "^1.0.0",
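
For context, that line belongs in the require-dev section of your project's composer.json - a minimal sketch, with the name field invented for the example:

```json
{
  "name": "example/patternlab-theme",
  "require-dev": {
    "aleksip/plugin-data-transform": "^1.0.0"
  }
}
```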

The data.json File

You can do this for each individual pattern, but then you might get an error somewhere else complaining that "title_attributes" is an invalid render array key. To get around all these errors, I simply add these items globally to the default data.json file, like so:

  "attributes": {
    "Attribute()": {
      "class": []
    }
  },
  "content_attributes": {
    "Attribute()": {
      "class": []
    }
  },
  "title_attributes": {
    "Attribute()": {
      "class": []
    }
  },
  "rows": {
    "Attribute()": {
      "class": []
    }
  },
  "teaser": {
    "Attribute()": {
      "class": []
    }
  }

The PatternLab Teaser Twig File

Taking the teaser view mode as an example, here's what my PatternLab twig file looks like:

{%
set classes = [
  'node',
  'node--type-' ~ node.bundle|clean_class,
  node.isPromoted ? 'node--promoted',
  node.isSticky ? 'node--sticky',
  not node.isPublished ? 'node--unpublished',
  view_mode ? 'node--view-mode-' ~ view_mode|clean_class,
]
%}


  {% if display_submitted %}
    

Published: {{ node.created.value|date("D d M Y") }}

{% endif %} {{ title_prefix }}

{{ label }} </h2> {{ title_suffix }} {{ content.field_intro }}

Apr 13 2018
Apr 13

Description

This Public Service Announcement is a follow-up to SA-CORE-2018-002 - Drupal core - RCE. This is not an announcement of a new vulnerability. If you have not updated your site as described in SA-CORE-2018-002 you should assume your site has been targeted and follow directions for remediation as described below.

The security team is now aware of automated attacks attempting to compromise Drupal 7 and 8 websites using the vulnerability reported in SA-CORE-2018-002. Due to this, the security team is increasing the security risk score of that issue to 24/25.

Sites not patched by Wednesday, 2018-04-11 may be compromised. This is the date when evidence emerged of automated attack attempts. It is possible targeted attacks occurred before that.

Simply updating Drupal will not remove backdoors or fix compromised sites. You should assume that the host is also compromised and that any other sites on a compromised host are compromised as well.

If you find that your site is already patched, but you didn’t do it, that can be a symptom that the site was compromised. Some attacks in the past have applied the patch as a way to guarantee that only that attacker is in control of the site.

What to do if your site may be compromised

Attackers may have copied all data out of your site and could use it maliciously. There may be no trace of the attack.

Take a look at our help documentation, "Your Drupal site got hacked, now what."

Recovery

Attackers may have created access points for themselves (sometimes called “backdoors”) in the database, code, files directory and other locations. Attackers could compromise other services on the server or escalate their access.

Removing a compromised website’s backdoors is difficult because it is very difficult to be certain all backdoors have been found.

If you did not patch, you should restore from a backup. While recovery without restoring from backup may be possible, this is not advised because backdoors can be extremely difficult to find. The recommendation is to restore from backup or rebuild from scratch. For more information please refer to this guide on hacked sites.

Contact and More Information

We prepared a FAQ that was released when SA-CORE-2018-002 was published. Read more at FAQ on SA-CORE-2018-002.

The Drupal security team can be reached at security at drupal.org or via the contact form at https://www.drupal.org/contact.

Learn more about the Drupal Security team and their policies, writing secure code for Drupal, and securing your site.

Apr 06 2018
Apr 06

A straightforward mission doesn't always mean there's a simple path. When we kicked off the Mass.gov redesign, we knew what we wanted to create: a site where users could find what they needed without having to know which agency or bureaucratic process to navigate. At DrupalCon Baltimore in 2017, we shared our experience with the first nine months of the project: building a pilot website with Drupal 8, getting our feet wet with human-centered (AKA "constituent-centric") design, and beginning to transform Mass.gov into a data-driven product.


Interested in a career in civic tech? Find job openings at Digital Services.
Follow us on Twitter | Collaborate with us on GitHub | Visit our site

DrupalCon 2017 Presentation: Making Mass.gov data driven and constituent centric was originally published in Massachusetts Digital Service on Medium, where people are continuing the conversation by highlighting and responding to this story.

Mar 30 2018
Mar 30

This is not a new phenomenon, and it is a testament to the efficiency and professionalism of the Drupal Security Team that these vulnerabilities are found, fixed, and the releases managed appropriately.

The release meant we had to update every single client site quickly, across multiple versions spanning Drupal 6 to Drupal 8.5, so our team immediately swung into action, developing a plan for each site. 

We've got your back

On Wednesday 28 March, around 8PM, the new versions of Drupal were released. Our team were poised, fingers on keyboards around Ireland, the UK and France, and rather than panic in the face of a large, time-sensitive job, we set to work.

Over the next couple of hours our shared spreadsheet tracked all the updates, steadily turning green as site after site was updated. Chat, jokes and on-mission discussions flowed freely through our chat channel as the team worked with one mind and one goal. In truth, it was a fun and exciting evening! By midnight we surveyed the end result: 70 sites updated, development and staging environments updated, and one redesign project even deployed to the live server!

1 team, 70 sites, 6 versions, no problem

Every member of the Annertech team did us proud, and because of our efforts on Wednesday evening, not a single client site of ours remained vulnerable to the exploit. Job well done!

Have you updated yet?

If you have a site that is not yet updated, and you need help doing it, don't delay: please get in touch - we'd be glad to help.

Mar 28 2018
Mar 28

UPDATE: 1:27pm PT After analyzing the vulnerability and the most obvious remote exploitation path, we have deployed a platform wide mitigation and are logging potential exploits. At this time we do not see any systematic attacks. Patching your site is the only way to be sure you are safe, so please do that as soon as possible.

— — —

The Drupal Security Team has published Drupal SA-2018-002 to address a critical vulnerability. This is the first update of this magnitude since SA-2014-005 (aka "Drupageddon") back in 2014. In that case, the time from release to automated exploitation was around seven hours.

As soon as 8.5.1 (and related releases) came out, we immediately pushed the update to all site dashboards, where it can be deployed with a few clicks or via scripted mass-updates. Please update your Drupal sites now before continuing to read this post.

We’ve been planning for this since the Security Team issued a PSA last week, and have engineers standing by if additional response is needed.

As with SA-2014-005, we will update our status page as well as this blog post with any additional information, and will follow up with any interesting findings we can observe at a platform level.

However, I cannot emphasize enough that the only way to be sure your sites are safe is to deploy the core update. Please do not delay in rolling that out today.

Mar 23 2018
Mar 23

I'm working on creating a Drupal 8 installation profile and learning how profiles can override the default configuration that their modules provide at install time.

All Drupal 8 modules can provide a set of configuration that should be installed to the site when the module is installed. This configuration is placed in the module's config/install or config/optional directory. The only difference is that the configuration objects placed in the config/optional directory will only be installed if all of their dependencies are met. For example, the core "media" module has a config file config/optional/views.view.media.yml which will install the standard media listings view, but only if the views module is available on your site at the time of install.
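As a minimal, hypothetical sketch of that pattern, an optional config object declares the modules it needs in its dependencies section; if any of them is missing at install time, the object is simply skipped (module name, view ID and labels below are illustrative):

```yaml
# my_module/config/optional/views.view.articles.yml (hypothetical, abridged)
langcode: en
status: true
dependencies:
  module:
    - node
    - views
id: articles
label: 'Articles'
```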

The power of installation profiles is that they can provide overrides for any configuration objects that a module would normally provide during its installation. This is accomplished simply by placing the config object file in the installation profile's config/install or config/optional directory. This works because when Drupal's ConfigInstaller is installing any configuration object, it checks to see if that config object exists in your installation profile, and uses that version of it if it exists.

However, overriding default configuration that a module would normally provide is a double edged sword and brings up some interesting challenges.

If you dramatically alter a configuration object that a module provides, what happens when that module releases a new version that includes an update hook to modify that config? The module maintainers may write the update hook assuming that the config object installed on your site is identical to the one provided out of the box at install time; if it isn't, fatal errors could be thrown. I think it falls on the module maintainer to write update hooks that first check that the config object is mostly what they expect before modifying it.
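A defensive update hook along those lines might look like this sketch (module name, config name and keys are all hypothetical, not from any real module):

```php
<?php

/**
 * Example update hook: only touch the view if it still looks like ours.
 */
function mymodule_update_8001() {
  $config = \Drupal::configFactory()->getEditable('views.view.articles');
  // Bail out if a profile or site builder has replaced or removed our config.
  if ($config->isNew() || $config->get('base_table') !== 'node_field_data') {
    return t('views.view.articles was overridden; skipping update.');
  }
  $config->set('label', 'Articles (updated)')->save();
}
```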

Another challenge that I ran into recently is more complicated. My installation profile was overriding an entity browser config object provided by the Lightning Media module. Entity browsers use views to display lists of entities on your site that an editor can choose from. My override changed this config object to point to a custom view that my installation profile provided (placed in its config/install directory), but it didn't work. When installing a site with the profile, I was met with an UnmetDependenciesException which claimed that the entity browser override I provided depended on a view that didn't exist. Well, it did exist; it's right there in the install folder for the profile! After some debugging, I found this happens because Drupal doesn't install config from the installation profile until all of the modules the profile depends on are installed first. So to summarize: it's not possible for a module's default config objects to depend on config that is provided by an install profile.

Mar 21 2018
Mar 21
  • Advisory ID: DRUPAL-PSA-2018-001
  • Project: Drupal Core
  • Version: 7.x, 8.x
  • Date: 2018-March-21

Description

There will be a security release of Drupal 7.x, 8.3.x, 8.4.x, and 8.5.x on March 28th 2018 between 18:00 - 19:30 UTC, one week from the publication of this document, that will fix a highly critical security vulnerability. The Drupal Security Team urges you to reserve time for core updates at that time because exploits might be developed within hours or days. Security release announcements will appear on the Drupal.org security advisory page.

While Drupal 8.3.x and 8.4.x are no longer supported and we don't normally provide security releases for unsupported minor releases, given the potential severity of this issue, we are providing 8.3.x and 8.4.x releases that include the fix for sites which have not yet had a chance to update to 8.5.0. The Drupal security team strongly recommends the following:

  • Sites on 8.3.x should immediately update to the 8.3.x release that will be provided in the advisory, and then plan to update to the latest 8.5.x security release in the next month.
  • Sites on 8.4.x should immediately update to the 8.4.x release that will be provided in the advisory, and then plan to update to the latest 8.5.x security release in the next month.
  • Sites on 7.x or 8.5.x can immediately update when the advisory is released using the normal procedure.

The security advisory will list the appropriate version numbers for all three Drupal 8 branches. Your site's update report page will recommend the 8.5.x release even if you are on 8.3.x or 8.4.x, but temporarily updating to the provided backport for your site's current version will ensure you can update quickly without the possible side effects of a minor version update.

This will not require a database update.

Patches for Drupal 7.x and 8.3.x, 8.4.x, 8.5.x and 8.6.x will be provided.

The CVE for this issue is CVE-2018-7600. The Drupal-specific identifier for the issue is SA-CORE-2018-002.

Neither the Security Team nor any other party is able to release more information about this vulnerability until the announcement is made. The announcement will be made public at https://www.drupal.org/security, over Twitter, and by email for those who have subscribed to our email list. To subscribe to the email list: log in on drupal.org, go to your user profile page and subscribe to the security newsletter on the Edit » My newsletters tab.

Journalists interested in covering the story are encouraged to email [email protected] to be sure they will get a copy of the journalist-focused release. The Security Team will release a journalist-focused summary email at the same time as the new code release and advisory.

If you find a security issue, please report it at https://www.drupal.org/security-team/report-issue.

updated 2018-03-22: Added information about database updates

updated 2018-03-27: Added information about patches

updated 2018-03-28: Added information about CVE and identifiers

Mar 09 2018
Mar 09
Group photo from the Splash Awards. Photo: Lars Stauder Photography / Drupal Business Deutschland e.V.

On 8 March 2018 it was that time again: the Drupal Splash Awards 2018 were presented at the Brotfabrik in Frankfurt. The aim of the award is to showcase special Drupal projects and give them a stage. Jeffrey A. "jam" McGuire, Marcus Maihoff, Robert Douglas, Meike Jung, Rouven Volk and Marc Dinse were among this year's international jury.

The nominees

In total, 27 German projects were nominated this year, with great entries in fields such as e-learning, e-commerce, non-profit, government and publishing/media. You can find all nominees here. We entered the race this year with our project ispo.com in the publishers/media category. Other contenders for the coveted award in this category were SUPERillu (by UEBERBIT), Der Nordschleswiger (by Comm-Press) and Das Haus (by Galani Projects).

The Splash Award was international

New this year: for the first time there were also ten Austrian nominations. The Austrian Drupal community had originally aimed to host its own national competition for Drupal projects, but there were not enough applicants, so the German Splash Awards organizing team gave the Austrian colleagues an extra category. In the specially established categories "Austria", "E-Commerce", "Enterprise" and "Publishing / Media", all Austrian projects competed against each other.

The award

As in the previous year, Jeffrey A. "jam" McGuire guided the audience through the ceremony in a charming manner, presenting the nominated projects in each category. Significantly more spectators attended than the year before, so in addition to the awards there were also plenty of new contacts to be made. All nominees received certificates, and the agencies with the winning projects also took a nice blue award back to the office.

The winners

We are pleased to have been nominated this year with our project ispo.com in the publishers/media category. We congratulate our colleagues from Galani Projects, who won the Splash Award in that category with their project Das Haus. More of this year's winners will be listed here soon.

Review Splash Awards 2017

Mar 03 2018
Mar 03

Today I'm introducing a new contrib module I've created that allows "add to cart" (or wishlist) buttons to be rendered as links instead of forms. This helps circumvent some unfortunate Drupal core limitations when you want to build overview pages or blocks.

Have you ever built blocks or pages containing product listings, like category pages, bestseller blocks or related product blocks? Have you included the add to cart form in them, in order to be able to add items directly from the overview page? Currently, building product listing blocks that include the form often comes with some headaches due to existing Drupal core limitations. That's why I have written the Drupal Commerce Add To Cart Link module. Here's an overview of the limitations I'm talking about:

Disappearing forms

The biggest problem is that Drupal forms won't work when they disappear on the submitting request, like in the following scenario:

You want to build a "related products" block, showing up to 4 related products. Let's say, there are 20 possible candidates for a given product, and you want to choose the 4 products randomly. You may want to set some Cache conditions though, but even then the 4 shown products are subject to change.

So, what happens if you use a form to show the add to cart button, and the selected product is no longer displayed in the request in which you pressed the "add to cart" button? Nothing, just nothing happens. No message, no warning, no error, but also no product in your cart. That's just how the Form API works in Drupal, and there's little to nothing that Commerce could do to prevent that.

Offering a dedicated "add to cart" link instead of using the Form API prevents that problem.

Ajaxified Views are hijacking your forms

Ever tried to build a View with Ajax enabled (e.g. for infinite pagination) that lists products including the add to cart form? You'll fail. The forms will only work on the initial page load. After the first Views Ajax link is clicked, you'll run into 404 errors because Views has "stolen" the forms from Commerce. This is especially nasty because you can easily overlook the malfunction: if you first build and test your view, later add the Ajax paging and only quickly try it (first adding something to the cart, then doing some paging), you probably won't notice it immediately.

Normally, you only have two choices: either disable Ajax on that view, or remove the add to cart form completely from the teaser view mode and only link to the detail page.

Again, eliminating the form here and replacing it with a link will not break anything, and it allows the peaceful co-existence of an Ajax-enabled View and the cart button.

See these links for more information on this topic:

Only show the default variation in the catalog view

Not an issue as such, but something you would normally have to solve with form alteration: if you have products with multiple variations, you probably do not want the variations selector on overview pages and other product listings. It's easy to hide the selector, but you need to implement hook_form_alter(), which may be a problem for non-developers. With the Commerce Add To Cart Link module, you can achieve this through entity view configuration alone.
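For comparison, hiding the variations selector without the module would normally take a hook_form_alter() implementation along these lines (a sketch only; the form ID prefix matches Commerce 2's add to cart forms, but treat the details as illustrative):

```php
<?php

use Drupal\Core\Form\FormStateInterface;

/**
 * Implements hook_form_alter().
 */
function mymodule_form_alter(array &$form, FormStateInterface $form_state, $form_id) {
  // Commerce 2 add to cart form IDs start with this base form ID.
  if (strpos($form_id, 'commerce_order_item_add_to_cart_form') === 0) {
    // Hide the variation selector so only the default variation is offered.
    if (isset($form['purchased_entity'])) {
      $form['purchased_entity']['#access'] = FALSE;
    }
  }
}
```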

Introducing Commerce Add To Cart Link

The problems described above were the main motivation behind implementing the Commerce Add To Cart Link module. The module adds an "add_to_cart_link" pseudo field (via hook_entity_extra_field_info()) to both Product and Product Variation entities. Why both? Because semantically it belongs to the variation, and being there offers the most flexibility; but that option is also more complicated to set up correctly and brings some needless overhead for very simple use cases. It's more flexible because, on products with multiple variations, a link is printed for every single variation. With preprocessors or a view alter implementation, one could still decide to show only one link, but again, this would require some extra coding. Further, the field wouldn't show up among the injected variation fields in the product template, so you would have to set up a custom view mode for variations that only contains that link field, and configure the variations reference field to render the variation in that view mode: needless complexity.

So, if you only have 1:1 references to variations, or anyway want to show a single "add to cart" button for the default variation only, configuring the field on the Product entity view display is much easier: just drag the field into visibility on the desired product view display, and you're finished! By the way, the fields are automatically available for all your product and product variation bundles.

Gettin' Twiggy with it

To offer best comfort and flexibility, the link is rendered via Twig template, letting you easily customize the appearance of the link. You have full Twig power and can extend and enrich the markup as you want. The link text is also part of the template and therefore easily swappable. Use an icon? No problem, insert it in your Twig template and it's done. You can even provide different templates per variation type or ID.
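I'm not certain of the module's exact template variables, but an override along these lines gives the idea (variable names and classes illustrative):

```twig
{# Hypothetical override of the module's add-to-cart link template. #}
<a href="{{ url }}" class="add-to-cart-link" rel="nofollow">
  <span class="icon icon--cart" aria-hidden="true"></span>
  {{ 'Add to cart'|t }}
</a>
```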

That's it. Enjoy the module; may it help you as much as it helps us. :)

PS: The links are secured with a CSRF token, so that users can't be tricked into clicking a link and adding products to the cart that they don't want. In order for these tokens to work correctly, a session will be enforced for anonymous users if one doesn't already exist.

Feb 16 2018
Feb 16

Chances are that you already use Travis or another cool CI to execute your tests, and everyone politely waits for the CI checks before even thinking about merging, right? More likely, waiting your turn becomes a pain and you click merge anyway: it's a trivial change and you need it now. If this happens often, then it's up to those who maintain the scripts that Travis crunches to make some changes. There are some trivial and not-so-trivial options for making the team always willing to wait for completion.

This blog post is for you if you have a project with Travis integration and you'd like to maintain and optimize it, or if you are just curious about what's possible. Users of other CI tools, keep reading: many of these areas may apply in your case too.

Unlike other performance optimization areas, doing before-and-after benchmarks is not so crucial here, as Travis mostly collects the data for you; you just have to do the math and present the numbers proudly.

Caching

To start, if your .travis.yml lacks the cache: directive, then you can begin in the easiest place: caching dependencies. For a Drupal-based project, it's a good idea to think about caching all the modules and libraries that must be downloaded to build the project (it uses a build system, doesn't it?). Even a simple variant helps:

cache:
  directories:
    - $HOME/.composer/cache/files

or for Drush

cache:
  directories:
    - $HOME/.drush/cache

It’s explained well in the verbose documentation at Travis-ci.com. Before your script is executed, Travis populates the cache directories automatically from a successful previous build. If your project has only a few packages, it won’t help much, and actually it can make things even slower. What’s critical is that we need to cache slow-to-generate, easy-to-download materials. Caching a large ZIP file would not make sense for example, caching many small ones from multiple origin servers would be more beneficial.

From this point, you could just read the standard documentation instead of this blog post, but we also have icing on the cake for you. A Drupal installation can take several minutes, initializing all the modules, executing the logic of the install profile and so on. Travis is kind enough to provide a bird’s-eye view on what eats up build time:

Execution speed measurements built in the log

Mind the bottleneck when making a decision on what to cache and how.

For us, that means caching the installed, initialized Drupal database and the full document root. Cache invalidation is hard and we can't change that, but this turned out to be a good compromise between complexity and execution speed gain. Check our examples.

Do your homework and cache what’s the most resource-consuming to generate, SQL database, built source code or compiled binary, Travis is here to assist with that.
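That idea can be sketched in .travis.yml terms like this (paths, database name and build commands are illustrative, not from the original examples): cache a DB dump plus the built docroot, and rebuild only on a cold cache.

```yaml
cache:
  directories:
    - $HOME/.composer/cache/files
    - $HOME/cache          # holds db.sql and a tarball of the built docroot

install:
  # Restore from cache when possible; otherwise build once and save it.
  - |
    if [ -f $HOME/cache/db.sql ]; then
      mysql drupal < $HOME/cache/db.sql
      tar -xzf $HOME/cache/docroot.tar.gz
    else
      composer install
      drush site-install --yes
      mysqldump drupal > $HOME/cache/db.sql
      tar -czf $HOME/cache/docroot.tar.gz docroot
    fi
```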

Software Versions

There are two reasons to pay attention to software versions.

Use Pre-installed Versions

Travis uses containers of different distributions. Let's say you use trusty, the default one these days: if you choose PHP 7.0.7, it's pre-installed, whereas 7.1 needs to be fetched separately, and that takes time for every single build. When you have production constraints, matching those versions is almost certainly more important, but in some cases using a pre-installed version can speed things up.

And moreover, let’s say you prefer MariaDB over MySQL, then do not sudo and start to install it with the package manager, as there is the add-on system to make it available. The same goes for Google Chrome, and so on. Stick to what’s inside the image already if you can. Exploit that possibility of what Travis can fetch via the YML definition!

Use the Latest and (or) Greatest

If you have ever read an article about the performance gain from migrating to PHP 7, you sense the importance of selecting versions carefully. If your build is PHP-execution heavy, fetching PHP 7.2 (another leap, but mind the backward incompatibilities) could totally make sense, and it's as easy as can be once your code is compatible:

language: php
php:
  - '7.2'

Almost certainly, a similar thing could be written about Node.js, relational databases, etc. If you know what the bottleneck is in your build and find the best-performing versions, newer or older, it will improve your speed. Does that conflict with the previous point about pre-installed versions? Not really; just measure which one helps your build the most!

Make it Parallel

When a Travis job is running, 2 cores and 4 GBytes of RAM are available, and that's something to rely on! Downloading packages should happen in parallel. drush make, gulp and other tools like that might do this out of the box: check your parameters and config files. On a higher level, let's say you'd like to execute a unit test and a browser-based test as well. You can ask Travis to spin up two (or more) containers concurrently. In the first, you can install the unit testing dependencies and execute the unit tests; the second one can take care of only the functional test. We have a fine-grained example of this approach in our Drupal-Elm Starter, where 7 containers are used for various testing and linting. In addition to the great reduction in execution time, the benefit is that the result is also more fine-grained: instead of a single boolean value, just by checking the build you get an overview of what is broken.
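One hedged way to fan jobs out like that is a build matrix keyed by an environment variable, roughly (job names and the runner script are placeholders):

```yaml
env:
  - REVIEW=phpunit
  - REVIEW=behat
  - REVIEW=eslint

script:
  # Travis starts one container per env entry; each runs only its slice.
  - ./ci/run.sh "$REVIEW"
```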

All in all, it’s a warm fuzzy feeling that Travis is happy to create so many containers for your humble project:

If it's independent, no need to serialize the execution

Utilize RAM

The available memory is currently between 4 and 7.5 GBytes, depending on the configuration, and it should be used as much as possible. One example is moving the database's main working directory to a memory-based filesystem. For many simpler projects that's absolutely doable, and at least for Drupal it's a solid speedup. Needless to say, we have an example, and on client projects we saw a 15-30% improvement in SimpleTest execution. For a traditional RDBMS, you can give it a try. If your DB cannot fit in memory, you can still ask InnoDB to fill memory.
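The MySQL-on-tmpfs trick can be sketched like this in .travis.yml (a sketch under the assumption of a sudo-enabled environment; mount point and size are illustrative):

```yaml
sudo: required

before_install:
  # Move MySQL's datadir onto a RAM-backed filesystem for faster I/O.
  - sudo service mysql stop
  - sudo mkdir /mnt/ramdisk
  - sudo mount -t tmpfs -o size=1024m tmpfs /mnt/ramdisk
  - sudo mv /var/lib/mysql /mnt/ramdisk/
  - sudo ln -s /mnt/ramdisk/mysql /var/lib/mysql
  - sudo service mysql start
```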

Think about your use case – even moving the whole document root there could be legitimate. Also if you need to compile a source code, doing it there makes sense as well.

Build Your Own Docker Image

If your project is really exotic or a legacy one, it potentially makes sense to maintain your own Docker image and then download and execute it in Travis. We did this in the past and then converted away from it. Maintaining your own image means recurring effort, fighting with outdated versions and unavailable dependencies; that's what to expect. Still, it can be a type of performance optimization if you have lots of software dependencies that are hard to install on the current Travis container images.

+1 - Debug with Ease

To work on various improvements in the Travis integration for your projects, it’s a must to spot issues quickly. What worked on localhost, might or might not work on Travis – and you should know the root cause quickly.

In the past, we propagated video recording, now I’d recommend something else. You have a web application, for all the backend errors, there’s a tool to access the logs, at Drupal, you can use Drush. But what about the frontend? Headless Chrome is neat, it has built-in debugging capability, the best of which is that you can break out of the box using Ngrok. Without any X11 forwarding (which is not available) or a local hack to try to mimic Travis, you can play with your app running in the Travis environment. All you need to do is to execute a Debug build, execute the installation part (travis_run_before_install, travis_run_install, travis_run_before_script), start Headless Chrome (google-chrome --headless --remote-debugging-port=9222), download Ngrok, start a tunnel (ngrok http 9222), visit the exposed URL from your local Chrome and have fun with inspection, debugger console, and more.

Takeaway

Working on such improvements has benefits of many kinds. The entire development team can enjoy the shorter queues and faster merges, and you can go ahead and apply part of the enhancements to your local environment, especially if you dig deep into database performance optimization and make the things parallel. And even more, clients love to hear that you are going to speed up their sites, as this mindset should be also used at production.

Feb 14 2018
Feb 14

A quick run-through of one the many exciting new features that Twig brings to Drupal 8: extendable, easily-overridden templates.

One of the many good things about Drupal 8 is the introduction of Twig. It's a lovely templating engine that, in my opinion, is far superior to PHPTemplate. For us frontenders, it has a much nicer user syntax that's more akin to handlebars or other JS templating engines.

Imagine this scenario (I'm sure many developers have encountered it before): you're building a site and some of your page templates need to be full-width, while others have a width-restricted content section. Furthermore, on some pages you want to add a class to change your nav's styling.

The Drupal 7 way went something like this:

  • Take page.tpl.php and duplicate it, e.g. page--featured-content.tpl.php
  • Add the new wrapper to the new template
  • Clear the cache
  • Job done

This is fine, but it is definitely WET rather than DRY. What happens when the markup for, say, the footer or the header needs to change? The answer is a mass find and replace, leaving you open to missed pages, etc.

Thanks to Twig we have a much cleaner solution for this. Twig brings to the table the concept of blocks (not to be confused with Drupal blocks!). You can define a named section in a template that can be overridden by a sub-template. This means that rather than copying and pasting the whole file and changing one line, you simply override that one block in your new file.

Let's start with the wrapper example from above…

First we create a named block to hold the content we want to override:

{% block header %}
  {% set alt_header = FALSE %}
  {# Header code #}
{% endblock %}

{% block wrapper %}
  {% block content %}
    {# Content code #}
  {% endblock %}

  {% block footer %}
    {# Footer code #}
  {% endblock %}
{% endblock %}


Now for our page--featured-content.html.twig we can just do:

{% extends 'page.html.twig' %}

{% block wrapper %}
  <div class="width-restricted">
    {{ parent() }}
  </div>
{% endblock %}


That first line just tells Twig which template to use as a base. The new wrapper block will be used in place of the base template's version, and the rest will stay the same.

The parent() line is pretty cool: it tells Twig to just whack the content of the 'master' block in the new block, so you don't have to retype the whole thing. If you have nested blocks (as in this example), it doesn't matter, you just say "this in here, thanks".

What about variables?

In the example of our dark nav, our Twig template has this:

{% set dark_nav = FALSE %}
{% block header %}
  {% set nav_class = '' %}
  {% if dark_nav %}
    {% set nav_class = 'header--dark-nav' %}
    {# Nav markup uses nav_class #}
  {% endif %}
{% endblock %}

We simply declare the variable outside of the block, and then redefine it inside the new block, e.g.:

{% block header %}
  {% set dark_nav = TRUE %}
  {{ parent() }}
{% endblock %}


So there we have it, in a nutshell. All our new templates can inherit the defaults from one base template, and contain only the customisations we need in the sub-templates.

Life is better, and updating blocks is just a case of directly editing one place. Happy days!

Feb 14 2018

Yes, a blog post about Drupal 7!

I recently worked on an enhancement for a large multi-site Drupal 7 platform to allow its users to import news articles from RSS feeds. Pretty simple request, and given the maturity of the Drupal 7 contrib module ecosystem, it wasn't too difficult to implement.

One somewhat interesting requirement was that images from the RSS feed be imported to an image field on the news article content type. RSS doesn't have direct support for an image element, but it has indirect support via the enclosure element. According to the RSS spec:

It has three required attributes. url says where the enclosure is located, length says how big it is in bytes, and type says what its type is, a standard MIME type.

RSS feeds will often use the enclosure element to provide an image for each item in the feed.
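For example, an item that carries an image via enclosure might look like this (the URL, length, and type values here are made up):

```xml
<item>
  <title>Example news article</title>
  <link>https://example.com/news/example-article</link>
  <!-- enclosure: all three attributes (url, length, type) are required -->
  <enclosure url="https://example.com/images/example.jpg"
             length="24576"
             type="image/jpeg"/>
</item>
```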

Despite still being in beta, the Drupal 7 Feeds module is considered quite stable and mature, with its most recent release in September 2017. It has a robust interface that suited my use case quite well, allowing me to map RSS elements to fields on the news article content type. However, it doesn't support pulling data out of enclosure elements in the source. Luckily, there's an 8-year-old issue containing a very small patch that adds this ability.

With that patch installed, the final step is to find the proper "target" to map its data to. It's not immediately clear how this should work: Feeds needs to be smart enough to accept the URL of the image, download it, create a file entity from it, and assign the appropriate data to the image field on the node. Feeds exposes four different targets for an image field:

Feeds image field targets

Selecting the "URI" target is the proper choice. Feeds will recognize that you're trying to import a remote image and download it.

Feb 07 2018

Drupal 8 has been out for quite a while now, and it keeps maturing. Drupal 8.5 is scheduled for release on March 7, 2018, and it brings some good features and significant improvements. You can see the list of them in the Drupal Core Roadmap (8.5 + 8.6).

Add a Layout Builder to core: this makes it very convenient to set up layouts without needing to install and configure additional modules. Now that Layout Builder is in Drupal core, site builders are able to set up a default layout per content/entity type with its fields pre-placed.

Drupal Layout Builder

Media initiative: Essentials - first round of improvements in core: I remember this was one of the deficiencies of Drupal 6 and 7 in comparison with other CMSs like WordPress and Joomla - there was no convenient solution in core for working with media. The Drupal community released contrib modules to fill this gap, but now we have brilliant media support in Drupal 8 core, and new changes make it even more efficient in Drupal 8.5.

Drupal 8 Media support

Introduce "outside in" quick edit pattern to core : off course Inside-Out editing is one of the coolest features of Drupal 8 for Content managers, edit the contents inline in view of your website without going far way to go to back-office content management section and finding a specific item between other contents. 

Outside-in editing - inline editing in Drupal 8.5

Some other significant features are also going to be released in Drupal 8.5. Stay calm - Drupal 8.5 is coming.

Feb 02 2018

Installing a site with existing config has been a bit of a moving target in Drupal 8. At different times, I've recommended at least three different approaches. I won't go into too much detail, but basically we've used the following at times:

  • Manually change site UUIDs (Sloppy)
  • Use --config-dir option with drush site-install (Only supports minimal profile)
  • Use patch from Issue #2788777 (Config needs to be stored in profile directory)

You can read more about previous approaches here. The one thing that hasn't changed is the user story:

As a user I want to be able to install Drupal from a package of configuration that is maintained in git.

The issue that most closely addresses this user story is #1613424 "Allow a site to be installed from existing configuration". That issue is currently postponed on another thorny issue which involves the special way that Drupal treats dependencies of profiles. In the meantime, alexpott has provided a standalone install profile that handles installing a site from existing config. This is the Configuration installer profile.

It takes a minute to wrap your head around the concept because when you use the Configuration installer profile, you don't end up with a site running the Configuration installer profile. At the end of the install, your site will be running the profile that is defined in the config you provided the Config installer.

So for a new project, you would initially install using the profile of your choice. Then, once you have exported your config, you would use the Config installer for subsequent installs.

Considerations

  • For ease of use, your settings file should not be writable by the installer and should not contain the install_profile key. If your settings file contains your profile, you'll get an exception when trying to run the install. And if it is writable, Drupal will write that value every time you run an install.
  • The Config installer profile requires two patches in order to work properly with Drush 9.
  • Config must not have direct dependencies on a profile. Lightning 3.0.1 requires the patch in issue #2933445 to be compliant.
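For reference, the line in question, which the installer writes to settings.php, looks like this ('lightning' here is just an example profile name):

```php
// This is the line to find and delete in settings.php; 'lightning' is
// just an example profile name.
$settings['install_profile'] = 'lightning';
```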

Instructions

  1. For new sites, install using the profile or sub-profile of choice.
$ drush site-install lightning
  2. Ensure that the install_profile key is not present in your settings.php file. Drupal will write this value by default, but it is not required in Drupal >= 8.3.0. You can prevent Drupal from writing it by disallowing write access to the settings file. If the installer wrote the profile during initial installation, you can manually delete it. Then revoke write access:
$ chmod 444 docroot/sites/default/settings.php
  3. Define the following patches in your composer.json file if you are using config_installer < 1.6.0 and/or lightning < 3.0.2.
"patches": {
    "drupal/config_installer": {
        "2916090 - Support Drush 9":
        "https://www.drupal.org/files/issues/drush9-support-2916090-6.patch",
        "2935426 - Drush 9: Call to undefined function drush_generate_password":
        "https://www.drupal.org/files/issues/config_installer-drush_9_call_undefined_function_drush_generate_password-2935426-2.patch"
    },
    "drupal/lightning_layout": {
        "2933445 - user.role.layout_manager has dependency on Lightning":
        "https://www.drupal.org/files/issues/2933445.patch"
    }
},
  4. Add the Configuration installer profile to your codebase.
$ composer require drupal/config_installer
  5. Export your site's configuration.
$ drush config-export
  6. Use the Configuration installer profile in all subsequent site installs. The resulting install will run on the profile used in step 1.
$ drush site-install config_installer
Feb 02 2018

A few years ago I started using the PhpStorm IDE for PHP development, was immediately smitten and, after a bit of use, wrote a blog post with some tips I found for making better use of the tools PhpStorm gives you.

In the four years since then there have been some new developments. Firstly, of course, Drupal 8 was finally released – and, consequently, the one complaint I had back in 2013 about the $MODULE$ variable only working in the module file itself became more of a problem. (Also, I added one more live template that's very useful for Drupal 8.)
But secondly, a few weeks ago PhpStorm finally added scripting support for live templates, so it's now possible to write more powerful templates that way – and fix the $MODULE$ variable.

The new di live template

In general, when writing OOP code for Drupal 8 (that is, for almost all Drupal 8 code) you should use dependency injection as much as possible. There are several different styles for doing that; I'm using one which uses setter methods and calls them in create() (instead of adding all injected objects to the constructor). This makes inheritance easier and keeps the constructor “cleaner” – and becomes much easier with a good live template:

  /**
   * The $NAME$.
   *
   * @var $INTERFACE$|null
   */
  protected $$$PROP_NAME$;

  /**
   * Retrieves the $NAME$.
   *
   * @return $INTERFACE$
   *   The $NAME$.
   */
  public function get$UC_PROP_NAME$() {
    $plugin->set$UC_PROP_NAME$($container->get('$SERVICE$'));

    return $this->$PROP_NAME$ ?: \Drupal::service('$SERVICE$');
  }

  /**
   * Sets the $NAME$.
   *
   * @param $INTERFACE$ $$$VAR_NAME$
   *   The new $NAME$.
   *
   * @return $this
   */
  public function set$UC_PROP_NAME$($INTERFACE$ $$$VAR_NAME$) {
    $this->$PROP_NAME$ = $$$VAR_NAME$;
    return $this;
  }

Variable definitions:

Name           Expression                          Skip if defined
VAR_NAME                                           N
SERVICE                                            N
INTERFACE      clipboard()                         Y
NAME           underscoresToSpaces(VAR_NAME)       Y
UC_NAME        underscoresToCamelCase(VAR_NAME)    Y
UC_PROP_NAME   capitalize(PROP_NAME)               Y

Usage:

  1. Copy the service interface's FQN to your clipboard.
  2. Put the service ID either into a secondary clipboard (e.g., middle mouse button on Linux) or remember it.
  3. Execute live template (at the position where you want the getter and setter).
  4. Input variable name (in snake_case), then input service name.
  5. Move the property definition and the create() line (temporarily stored as the first line of the getter in the template) to their appropriate places.
  6. In the code, always use the getter method for accessing the service.

Fixing the $MODULE$ variable

Since the code for this is quite complex, we better just put it into a separate file. So, first download the script file and save it to some known location, then simply use the (absolute) path to the script file as the argument for groovyScript(), like this:

groovyScript("/path/to/script.groovy")

This can be used for all the live templates that contain a $MODULE$ variable (though it will, of course, be less useful for the procedural ones, than for the simple m template).

Feb 01 2018

As a digital agency we need to have a good CMS solution for our clients. Even in situations where we are developing more custom apps than content web applications, we still need a good, modular CMS solution. In this article, I will show you what we learned and how you can build things using Symfony inside Drupal.

The post Drupal for Symfony Developers appeared first on php[architect].

Jan 24 2018

Every year I participate in a number of initiatives introducing people to free software and helping them make a first contribution. After all, making the first contribution to free software is a very significant milestone on the way to becoming a leader in the world of software engineering. Anything we can do to improve this experience and make it accessible to more people would appear to be vital to the continuation of our communities and the solutions we produce.

During the time I've been involved in mentoring, I've observed that there are many technical steps in helping people make their first contribution that could be automated. While it may seem like creating SSH and PGP keys is not that hard to explain, wouldn't it be nice if we could whisk new contributors through this process in much the same way that we help people become users with the Debian Installer and Synaptic?

Paving the path to a first contribution

Imagine the following series of steps:

  1. Install Debian
  2. apt install new-contributor-wizard
  3. Run the new-contributor-wizard (sets up domain name, SSH, PGP, calls apt to install necessary tools, procmail or similar filters, join IRC channels, creates static blog with Jekyll, ...)
  4. write a patch, git push
  5. write a blog about the patch, git push

Steps 2 and 3 can eliminate a lot of "where do I start?" head-scratching for new contributors and it can eliminate a lot of repetitive communication for mentors. In programs like GSoC and Outreachy, where there is a huge burst of enthusiasm during the application process (February/March), will a tool like this help a higher percentage of the applicants make a first contribution to free software? For example, if 50% of applicants made a contribution last March, could this tool raise that to 70% in March 2019? Is it likely more will become repeat contributors if their first contribution is achieved more quickly after using a tool like this? Is this an important pattern for the success of our communities? Could this also be a useful stepping stone in the progression from being a user to making a first upload to mentors.debian.net?

Could this wizard be generic enough to help multiple communities, helping people share a plugin for Mozilla, contribute their first theme for Drupal or a package for Fedora?

Not just for developers

Notice I've deliberately used the word contributor and not developer. It takes many different people with different skills to build a successful community and this wizard will also be useful for people who are not writing code.

What would you include in this wizard?

Please feel free to add ideas to the wiki page.

All projects really need a couple of mentors to support them through the summer and if you are able to be a co-mentor for this or any of the other projects (or even proposing your own topic) now is a great time to join the debian-outreach list and contact us. You don't need to be a Debian Developer either and several of these projects are widely useful outside Debian.

Jan 24 2018
Splash Awards Deutschland

For the second time, the Splash Awards Germany will take place on March 08, 2018. The awarding of the coveted prize for successful Drupal projects takes place in the Brotfabrik in Frankfurt am Main.

About the Splash Awards

The Splash Awards originally come from the Netherlands. Since 2014, they have honored Drupal service providers who realize extraordinary projects. Various interesting projects from different areas are presented and judged by a jury.

Categories

Nominations are open in the categories Education, E-Commerce, Enterprise, Healthcare, Solutions, Non-Profit, Government, Social / Community and Publishing / Media. Agencies and freelancers can submit their projects to the Splash Awards 2018 until 31 January 2018. The only requirement for participation is a company headquarters in Germany. Sponsors are still being accepted.

This year, we submitted our project ISPO.com in the category Publishing / Media, and we are curious to see who will win the race this year. We look forward to lots of interesting Drupal projects and old/new faces from the Drupal community.

Jan 19 2018

After two years of planning, discussing, and (eventually) coding, the "Out of the Box" initiative has just been committed to Drupal Core.

One of the things most often requested of Drupal has been a better experience "Out of the Box", so that when a new user installs it they see something more positive than a message saying "You have not created any frontpage content yet".

To that end, a strategic initiative called the "Out of the Box Initiative" was set up. I was a member of that team. What we sought to do over the past two years was create a website for a (fictional) publishing company. We decided upon the name "Umami" for a food publishing company, publishing articles and recipes. We went through the full web design process - user stories, validation, requirements gathering, wireframes, design, development ... up to creating what we called the "MEGA Patch". And then submitted about 50 versions of it.

This week we hoped our work would be committed to Drupal 8.5.0-alpha1, but we just missed that deadline. Instead, we had a meeting with the product owners last night to have the final "Needs Product Owners Review" tag removed from the "Create experimental installation profile" issue. Here's the video of that demonstration and meeting:

Following that meeting, the tag was removed and our code was committed to Drupal 8.6.x. This means you'll see it shipping in Drupal in September at the latest, but we hope to get the final beta blockers fixed to have it backported to 8.5.0-beta. If you'd like to help squash some of the bugs, follow these "Out of the Box" issues. Here's the tweet from @webchick (THANKS!) announcing it.

So, what is in this commit?

This commit brings a new installation profile to Drupal. The profile is called "Umami" and has a corresponding "Umami" theme. It creates three content types - basic page, article, and recipe. It has listing pages for articles and recipes, some promotional blocks, a homepage, contact form, and search page. It is a fully-featured (small) website built using only (stable) Drupal core modules.

We are not using experimental modules such as content moderation, layout builder, media, etc. Once they are stable, we hope to bring them into the "Out of the Box" experience as well.

If you'd like to install it, try this link on SimplyTest.me.

Jan 18 2018

Keeping up with the times

ReactJS is all the rage these days, and for good reason. It's a great library, it's well known, it's well documented, and it's easy to jump into if you have any JavaScript experience. Between its nice learning curve and the ability to use JSX it makes sense to use it on applications that you want to be fast and scalable.

Headless is all the rage these days in the Drupal community, and for good reason. It gives Drupal the chance to do what it's best at, managing content. Sure, Drupal is great at other stuff, but the first two words of CMS are "Content" and "Management".

When we look back at the release of Drupal 8, we notice that one of the biggest pieces of D8 was a built in REST API with minimal configuration needed. This opened up a world of possibility for developers to merge the speed and scalability of JavaScript with the content management framework strengths of Drupal.

How Decoupled Should You Go

In a recent blog post, Dries goes into the Future of Decoupled Drupal. In it he mentions progressive vs fully decoupled. Personally, with a background specifically in Drupal and PHP and only a side of JavaScript, I prefer the progressively decoupled approach that keeps all the familiar parts of the Drupal ecosystem available, like the theme layer, the admin side, the block rendering engine, etc., but also allows the flexibility to render an app that uses React or Angular or any of the other libraries out there that can consume data from a REST API.

I have more familiarity with theming Drupal than anything, so it's less painful for me to build something with Drupal than it would be to try and build a SPA with a completely different front end that still consumes the Drupal data.  For this post I'm going to focus primarily on a progressively decoupled solution.

My Progressively Decoupled Solution

The project that brought us to this involved a site that revolved around a single search app and we wanted it to be fast and easy to use, but there were other aspects to the site that would benefit from being Drupally.  We needed blocks in certain places, a user configurable menu system, login/logout capabilities, and a decent content management side for users to enter content in a friendly way, but we also wanted to have some decoupled/headless elements using the ReactJS library.

For React to work inside Drupal, the React side needed something to hold onto, known as a Mount Node.  To the Drupal dev, the term "mount node" can be misleading since nodes have a completely different meaning in the Drupal world, but in the React world, a mount node is a node in the DOM that the JavaScript side can grab onto and take control of.  The index.js file contains the code that defines the expected ID of the mount node and it looks a little like this:

ReactDOM.render(<App />, document.getElementById('react-div'));

Where 'react-div' is the ID of the div that React will be mounting to.  Essentially, the React app will take that div and inject all of its code inside of it as the parent of the React app's DOM.  What does that mean for us in the Drupal world?  Basically, we need to have a div somewhere inside a page or Drupal node for the React app to grab onto.
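In markup terms, that just means making sure something like this ends up in the rendered page (with the ID matching whatever index.js expects):

```html
<!-- The React app mounts here and takes over this element's subtree -->
<div id="react-div"></div>
```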

There are a few different ways that we could handle this.

  • A custom twig file built specifically for a single use case
  • Full HTML body field in a Drupal node with a matching ID
  • A custom content type with a twig file
  • Probably more, but I don't like lists going longer than they should

For my project, I decided to use the third option:  A custom content type with an associated twig file, and I wanted it to be reusable so I made a module for that, React Mount Node, and I contributed that module back to Drupal so that anyone can use it.

Thanks to the magic of Drupal Console I was able to quickly bootstrap everything that I needed for a module including the .module file, the .info.yml file, and a composer.json file in case I wanted to contribute the module back to the community.

After getting the base of the module set up, there were a few other things that needed to happen.  Specifically, I needed a content type to be built within the module.  Fortunately, there's a pretty simple way to do that outlined in this documentation.  We need a config/install/node.type.{machine_name}.yml in our module to give us some info about our new content type, and then we need our config to build out a content type.  A few ways to do this are by exporting config and cleaning it up a bit via drush or through the UI, or we can use Drupal Console to do it.  Either way, we need to make sure that the UUID gets stripped so we don't run into any errors.

Depending on how complex your content type is, you may only need to edit one or two files, maybe more.  The important thing is that you look through and delete the UUID line in every relevant .yml file that comes from your new content type.

uuid: be001aff-1508-485c-916b-86062ebdb811 <<< GET RID OF ME
langcode: en
status: true

Drupal Console does this automatically when you export a content type.

The content type created by this module is called React Mount Node because it creates a node that you use to mount a React app.  Get it?  GET IT???

React Mount Node has three fields: Title, Div ID, and Library. The Title field is just for the title of the node that you're creating.  The Div ID is where you set the ID that you're using inside of your React app that you set in index.js. The Library field is where you tell the module what library it should be looking for.  We'll get to that in a moment.

If you're building the React side of things in addition to the Drupal side, you may notice that, out of the box, React spits out code using the naming convention app-name.hash.min.js. I highly recommend changing that to something a little more friendly.  For the setup I use, create-react-app, I can do that inside the webpack config file, in the output object, under filename, which should look something like:

  output: {
    // The build folder.
    path: paths.appBuild,
    // Generated JS file names (with nested folders).
    // There will be one main bundle, and one file per asynchronous chunk.
    // We don't currently advertise code splitting but Webpack supports it.
    filename: 'path/to/your/libraries/react-app/react-app.js',
    chunkFilename: 'static/js/[name].[chunkhash:8].chunk.js',
  },

It's my suggestion that you do this so that every time you compile your React app you don't have to change your libraries files.  It's a pain.  I speak from experience.  Trust me.

This will give you a reusable filename that you can, as the infomercials say, "SET IT AND FORGET IT!"

What this doesn't give you, however, is the actual library.  In Drupal 8 we've changed from drupal_add_js() and drupal_add_css() to something a little more elegant.  Libraries.

Creating and Connecting Your Library

Libraries are essentially collections of assets.  If you look in your theme folder, you'll see a file that ends in .libraries.yml.  This is true for every theme, including Bartik and Stark.  The themename.libraries.yml file is where the libraries are built, at least those associated with that theme.  It's slightly different for libraries defined in modules, but the base concepts remain the same.  If you were inclined to place your compiled React app into a module, it would be a very similar procedure.

For our purposes though, we added the React app into the theme folder and defined the library in themename.libraries.yml.  It looks like this:

react-app:
  js:
    libraries/react-app/react-app.js: {}

This is where the Library field on our React Mount Node content type comes into play.  Because we defined the library in our theme, the library name is themename/react-app and that's what we place in the field.

Another beauty of D8 is that we don't have to load every JS file on every page.  That library will only be loaded on the node(s) it's attached to.  How do we do this? Through the magic of twig.  Twig allows us to attach a library on a per-node/page/block/site basis with one simple line of code:

  {{ attach_library(my_lib) }}

That's the line from the twig file that's used to render the React Mount Node content type and it tells Drupal to attach this library to this node that has this ID.  The end result is a React app living inside of a Drupal node.

React app in a node

This is really the tip of the iceberg though.  With this module and the flexibility of React and a REST API, we're able to do a ton of stuff.  I highly suggest getting to know the Fetch API, which makes getting data a breeze, and learning more about how to set up Drupal as a REST endpoint.
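As a small sketch of the consuming side (the endpoint path and response shape here are assumptions based on a typical Drupal REST export, not code from this project), a helper for pulling article titles out of a JSON response might look like:

```javascript
// Sketch only: '/api/articles?_format=json' is a hypothetical REST export
// path. The response shape assumes Drupal's default JSON serialization,
// where each field is an array of { value: ... } objects.

function extractTitles(nodes) {
  // e.g. node.title === [{ value: 'My article' }]
  return nodes.map((node) => node.title[0].value);
}

async function loadArticleTitles(baseUrl) {
  const response = await fetch(`${baseUrl}/api/articles?_format=json`);
  if (!response.ok) {
    throw new Error(`Request failed with status ${response.status}`);
  }
  return extractTitles(await response.json());
}
```

The same pattern extends to any field the REST export exposes; the React app just maps the JSON into components.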

I'll be doing posts about the REST API, normalization, and one of my new favorite things -- REST Flagging -- in the near future.  In the meantime, what excites you the most about going decoupled with Drupal?

Jan 18 2018

Under certain circumstances, it might be necessary to build a specific version of Lightning with dependencies exactly as they were when it was released. But sometimes building older versions of Lightning can be problematic. For example, maybe the older version assumes an older version of a dependency, or a patch no longer applies with an updated dependency.

In that case, you can use the new "Lightning Strict" package to pin all of Lightning's dependencies (and their dependencies recursively) to the specific versions that were included in Lightning's composer.lock file when it was released. (If this sounds familiar, a "Drupal Core Strict" package also exists that does the same thing for core. But note that package is incompatible with Lightning Strict since Lightning uses PHP 7.0 when building its lock file.)

In this example, we want to build Lightning 2.2.4 - which contains the migration to Content Moderation, Workflows, and Lightning Scheduler:

$ composer require acquia/lightning:2.2.4 balsama/lightning_strict:2.2.4 --no-update
$ composer update

Assuming you were updating from Lightning 2.2.3, you could then follow the update instructions for 2.2.4 found in our release notes. In this case, they are:

$ drush updatedb && drush cache-rebuild
$ drupal update:lightning --since=2.2.3

Once you've updated to the most recent version, you can remove the dependency on balsama/lightning_strict.

The package will automatically be updated when new versions of Lightning are released. Hopefully this will solve some of the problems people have experienced when trying to build older versions of Lightning.

Jan 10 2018

I’m excited to announce that I’ve signed with Microsoft as a Principal Software Engineering Manager. I’m joining Microsoft because they are doing enterprise Open Source the Right Way, and I want to be a part of it. This is a sentence that I never believed I would write or say, so I want to explain.

First I have to acknowledge the history. I co-founded my first tech company just as the Halloween documents were leaked. That’s where the world learned that Microsoft considered Open Source (and Linux in particular) a threat, and was intentionally spreading FUD as a strategic counter. It was also the origin of their famous Embrace, Extend, and Extinguish strategy. The Microsoft approach to Open Source only got more aggressive from there, funneling money to SCO’s lawsuits against Linux and its users, calling OSS licensing a “cancer”, and accusing Linux of violating MS intellectual property.

I don’t need to get exhaustive about this to make my point: for the first decade of my career (or more), Microsoft was rightly perceived as a villain in the OSS world. They did real damage and disservice to the open source movement, and ultimately to their own customers. Five years ago I wouldn’t have even entertained the thought of working for “the evil empire.”

Yes, Microsoft has made nice movements towards open source since the new CEO (Satya Nadella) took over in 2014. They open sourced .NET and Visual Studio, they released Typescript, they joined the Linux Foundation and went platinum with the Open Source Initiative, but come on. I’m an open source warrior, an evangelist, and developer. I could see through the bullshit. Even when Microsoft announced the Linux subsystem on Windows, I was certain that this was just another round of Embrace, Extend, Extinguish.

Then I met Josh Holmes at the Dutch PHP Conference.

First of all, I was shocked to meet a Microsoft representative at an open source conference. He didn’t even have bodyguards. I remember my first question for him was “What are you doing here?”.

Josh told me a story about visiting startup conferences in Silicon Valley on behalf of Microsoft in 2007, and reporting back to Ballmer’s office:

“The good news is, no one is making jokes about Microsoft anymore. The bad news is, they aren’t even making jokes about Microsoft anymore.”

For Josh, this was a big “aha” moment. The booming tech startup space was focused on Open Source, so if Microsoft wanted to survive there, they had to come to the table.

That revelation led to the creation of the Microsoft Partner Catalyst Team. Here’s Josh’s explanation of the team and its job, from an interview at the time I met him:

“We work with a lot of startups, at the very top edge of the enterprise mix. We look at their toughest problems, and we go solve those problems with open source. We’ve got 70 engineers and architects, and we go work with the startups hand in hand. We’ll sit down for a little pair programming with them, sometimes it will be a large enough problem that will take it off on our own and we’ll work on it for a while, and we’ll come back and give them the code. Everything that we do ends up in Github under typically an MIT or Apache license if it’s original work that we’re doing on our own, or a lot of times we’re actually working within other open source projects.”

Meeting with Josh was a turning point for my understanding of Microsoft. This wasn’t just something that I could begrudgingly call “OK for open source”. This wasn’t just lip service. This was a whole department of people that were doing exactly what I believe in. Not only did I like the sound of this; I found that I actually wanted to work with this group.

Still, when I considered interviewing with Microsoft, I knew that my first question had to be about “Embrace, Extend, and Extinguish”. Josh is a nice guy, and very smart, but I wasn’t going to let the wool be pulled over my eyes.

Over the next months, I would speak with five different people doing exactly this kind of work at Microsoft. I did my research, I plumbed all my back-channel resources for dirt. And everything I came back with said I was wrong.

Microsoft really is undergoing a fundamental shift towards Open Source.

CEO Satya Nadella is frank that closed-source licensing as a profit model is a dead-end. Since 2014, Microsoft has been transitioning their core business from licensed software to platform services. After all, why sell a license once, when you can rent it out monthly? So they move all the licensed products they can online, and rent, instead of selling them. Then they rent out the infrastructure itself, too - hence Azure. Suddenly flexibility is at a premium. As one CTO put it, for Azure to be Windows-only would be a liability.

This shift is old news for most of the world. As much as the Hacker News crowd still bitches about it as FUD, this strategic direction has been in and out of the financial pages for years now. Microsoft has pivoted to platform services. Look at their profits by product over the last 8 years:

Microsoft profits by product, by year.

The trend is obvious: server and platform services are the place to invest. Office only remains at the top of the heap because it transitioned to SaaS. Even Windows license profits are declining. This means focusing on interoperability. Make sure everything can run on your platform, because anything else is to handicap the source of your biggest short- and medium-term profit. In fact, remaining adversarial to Open Source would kill the golden goose. Microsoft has to change its values in order to make this shift.

So much for financial and strategic direction; but this is a hundred-thousand-person company. That ship doesn’t turn on a dime, no matter what the press releases tell you. So my second interview question became “How is the transition going?” This sort of question makes people uncomfortable: the answer is either transparently unrealistic, or critical of your environment and colleagues. Over and over again, I heard the right answer: It’s freakin' hard.

MS has more than 40 years of proprietary development experience and institutional momentum. All of their culture and systems - from hiring, to code reviews, to legal authorizations - have been organized around that model. That’s very hard to change! I heard horror stories about the beginning of the transition, having to pass every line of contribution past the Legal department. I heard about managers feeling lost, or losing a sense of authority over their own team. I heard about development teams struggling to understand that their place in an OSS project was on par with some Rando Calrissian contributor from Kansas. And I heard about how the company was helping people with the transition, changing systems and structures to make this cultural shift happen.

The stories I heard were important evidence, which contradicted the old narrative I had in my head. Embrace, extend, extinguish does not involve leadership challenges, or breaking down of hierarchies. It does not involve personal struggle and departmental reorganization. The stories I heard evidenced an organization trying a real paradigm shift, for tens of thousands of people around the world. It is not perfect, and it is not finished, but I believe that the transition is real.

When you accept that Microsoft is trying to reorient its own culture to Open Source, suddenly all those “transparent” PR moves you dismissed get re-framed. They are accomplishments. It’s incredibly difficult to change the culture of one of the biggest companies in the world… but today, almost half of Azure users run Linux. Microsoft’s virtualization work made them the fifth largest contributor to the 3.x Linux kernel. Microsoft maintains the biggest project on Github (by contributor count). They maintain a BSD distribution and a Linux distribution. And a huge part of LXD (the container-based virtualization system for Linux) comes from Microsoft’s work with Canonical.

That’s impressive for any company. But Microsoft? It boggles the mind. This level of contribution is not lip-service. You don’t maintain a 15,000-person community just for PR. Microsoft is contributing as much or more to open source than many other major players who have had this in their culture from the start (Google, Facebook, Twitter, LinkedIn…). It’s an accomplishment, and it’s impressive!

In the group I’m entering, a strong commitment to Open Source is built into the project structure, the team responsibilities, and the budgeting practice. Every project has time specifically set aside for contribution; developers' connections to their communities are respected and encouraged. After a decade of working with companies who try to engage with open source responsibly, I can say that this is the strongest institutional commitment to “giving back” that I have ever seen. It’s a stronger support for contribution than I’ve ever been able to offer in any of my roles, from sole proprietor to CTO.

This does mean a lot more work outside of the Drupal world, though. I will still attend Drupalcons. I will still give technical talks, participate, and help make great open source communities for Drupal and other OSS projects. If anything, I will do those things more. And I will do them wearing a Microsoft shirt.

Microsoft is making a genuine, and enormous, push to being open source community members and leaders. From everything I’ve seen, they are doing it extremely well. From the outside at least, this is what it looks like to do enterprise Open Source The Right Way.

Jan 02 2018

I tell my kids all the time that they can’t have both - whether it’s ice cream and cake or pizza and donuts - and they don’t like it. It’s because kids are uncorrupted, and their view of the world is pretty straightforward - usually characterized by a simple question: why not?

And so it goes with web projects:

Stakeholder: I want it to be like [insert billion dollar company]’s site where the options refresh as the user makes choices.

Me: [Thinks to self, “Do you know how many millions of dollars went into that?”] Hmm, well, it’s complicated…

Stakeholder: What do you mean? I’ve seen it in a few places [names other billion dollar companies].

Me: [Gosh, you know, you’re right] Well, I mean, that’s a pretty sophisticated application, and well, your current site is Drupal, and well, Drupal is in fact really great for decoupled solutions, but generally we’d want to redo the whole architecture… and that’s kind of a total rebuild…

Stakeholder: [eyes glazed over] Yeah, we don’t want to do that.

But there is a way.

Have your cake and eat it too - ©Leslie Fay Richards (CC BY 2.0)

Elm in Drupal Panels

Until recently, we didn’t have a good, cost-effective way of plugging a fancy front-end application into an existing Drupal site. The barrier to develop such a solution was too high given the setup involved and the development skills necessary. If we were starting a new project, it would be a no-brainer. But in an “enhancement” scenario, it was never quite worth the time and cost. Over time, however, our practiced approach to decoupled solutions has created a much lower barrier of entry for these types of solutions.

We now have a hybrid approach, using Elm “widgets” nested inside of a Drupal panel. Our reasons for using Elm - as opposed to some of the other available front-end frameworks (React, Angular, Ember) - are well-documented, but suffice it to say that Drupal is really good at handling content and its relationships, and Elm is really good at displaying that content and allowing a user to interact with it. Add to that the fact that Elm offers significant guarantees (such as no runtime exceptions) and lets us write unit tests that we never could with jQuery or Ajax, and all of a sudden we have a solution that is not only slicker, but more stable and cost efficient.

A nifty registration application with lots of sections and conditional fields built as an Elm widget inside a Drupal Panel. Also shown, the accompanying tests that ensure the conditions yield the proper results. Oh how we miss the Drupal Form API!
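To make that idea concrete, here is a rough, hypothetical sketch (the field names are invented, and it is TypeScript rather than our Elm code): once the conditional-field logic is a pure function, it can be asserted on directly instead of being poked at through the DOM.

```typescript
// Hypothetical registration form state - the real field names differ.
interface Registration {
  attendeeType: "teen" | "parent";
  needsFinancialAid: boolean;
}

// Returns the form sections that should be visible for the current
// answers. Because it is pure, a unit test can check every condition
// without rendering anything.
function visibleSections(form: Registration): string[] {
  const sections = ["basic-info"];
  if (form.attendeeType === "teen") {
    sections.push("guardian-consent");
  }
  if (form.needsFinancialAid) {
    sections.push("aid-application");
  }
  return sections;
}
```

This is the kind of test that is effectively impossible to write against jQuery-style show/hide logic, but trivial against a pure function.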

In these cases, where our clients have existing Drupal websites that they don’t want to throw away, and big ideas for functionality that their users have come to expect, we can now deliver something better. This is groundbreaking in particular for our non-profit clients, as it gives them an opportunity to have “big box” functionality at a more affordable price point. Our clients can have their proverbial cake and eat it too.

What’s more, it helps us drive projects even further using our “Gizra Way” mindset: looking at a project as the sum of its essential parts. Because - in these scenarios - we don’t need to use Drupal for everything (and likewise, we don’t need to use Elm for everything either), we can pick and choose between them, and mix and match our approach depending upon what a particular function requires. In a way, we can ask: would this piece work nicely as a single-page application (SPA)? Yes? Let’s drop an Elm widget into a Panel. Is this part too tightly tied to the other Drupal parts of the apparatus? Fine, let’s use Drupal.

Building a Summer Planner

FindYourSummer.Org is a program operated by the Jewish Education Project in New York (jointly funded by UJA-Federation of New York and the Jim Joseph Foundation) and is dedicated to helping teens find meaningful Jewish summer experiences. They have amassed a catalogue of nearly 400 summer programs, and when they decided to expand their Drupal site into a more sophisticated tool for sorting options, comparing calendars, and sharing lists, the expected functionality exceeded Drupal’s ability to deliver.

Separating out the functional components into smaller tasks helped us to achieve what we needed without going for a full rebuild.

For instance, some of the mechanisms we left to Drupal entirely:

Adding a program to the summer planner (an action similar to adding an item to a shopping cart) is a well known function in commerce sites and in Drupal can be handled by Ajax pretty well. Just provide an indication that the item is added and increment the shopping cart and we’re all set.

Ajax does just fine for adding a program to the summer planner.

The new feature set also required more prompts for users to login (because only a logged-in user can share their programs), and again, Drupal is up for the task. Dropping the user login/registration form into a modal provides a sophisticated and streamlined experience.

Login prompts provided by a Ctools Modal lets a user know that to continue using the planner, they need to login. A key performance indicator for the project was to increase signups in order to track users who actually register for summer experiences.

When a user gets into the planner (the equivalent of the shopping cart on a commerce site), the team had big ideas for how users would interact with the screen: things like adding dates and labels, sharing programs with friends and families, and removing items from the planner altogether.

Drupal could certainly handle those actions, but given the page refreshes that would be needed, the resulting interface would be sluggish, prone to error, and not at all in line with users’ expectation of a modern “shopping” experience. But, because we could define all of the actions that we wanted on one screen, we began to think of the “cart” page as a SPA. As such, it was a perfect opportunity to use Elm inside a Panel and provide a robust user experience.

Fast and stable (and well-tested) performance, slick user experience, and cost efficiency using Elm.

While the biggest benefit to the user is the greatly enhanced interaction, perhaps the biggest benefit to the client was the cost: handling this feature with an Elm application was only marginally more costly than doing it in Drupal alone. The most significant extra development was providing the necessary data to the Elm app via RESTful endpoints. Everything else - from the developer experience perspective - was vastly improved, because Elm is so much easier to deal with and provides so many guarantees.

Elm Apps Everywhere!

Maybe not. Sometimes - with new projects or in cases where the functionality can’t be boiled down into a single page - it’s more beneficial to start fresh with a fully decoupled solution. In these cases though, where there’s an existing Drupal site, and the functionality can be easily segmented, projects can have it both ways. It’s not surprising that we’ve been using this technique quite a bit lately, and as we get more adept, it only means the barrier to cost effectiveness is getting lower.

Dec 30 2017

Integrating a Drupal Text with Image Paragraph Bundle with Patternlab

Let's get to grips with setting up a text-with-image paragraph bundle in PatternLab, including options for placing the image to the left or right of the text.

PatternLab Image with Text Component

It's a fairly common design pattern for clients to request - upload an image, place text beside it, and choose whether we have the image on the left with the text on the right or vice versa. You can see my PatternLab version of it here.

Okay, first off, in my twig file, I have the following:

{%
set classes = [
  'image-with-text',
  'layout-contained',
  paragraph.field_p_it_alignment.value,
]
%}

<div{{ attributes.addClass(classes) }}>
  {{ content.field_p_it_image }}
  {{ content.field_p_it_text }}
</div>

The only thing that is in any way special here is the paragraph.* variables. I have named them like so because this is what Drupal is going to give me back, since the machine name of those fields is p_it_alignment (I namespace all my entity fields with the first letter of the entity type - in this case the name stands for Paragraph Image (with) Text Alignment). This then allows me to have options in PatternLab for alignment and background style (light/dark). To achieve this, I have the following in my pattern's .yml file:

paragraph:
  field_p_it_alignment:
    value: left
  field_p_it_style:
    value: light

And in my image-with-text~right.yml file, I just need to override those variables like so:

paragraph:
  field_p_it_alignment:
    value: right

Following that, I have variables named content.field_p_it_image and content.field_p_it_text. Again, these are deliberately named like so, because this is what Drupal will give us back after I create fields with those machine names. Time and again, I try to keep my pattern variables in PatternLab the same as what I will get back from Drupal, so that when I come to adding the Drupal template, it's just one line of code to say "Hi Drupal template, you'll find the code for this file over here!". So, you can decide in PatternLab what the machine names for the Drupal fields are going to be and then get your site-builders to create fields with matching names, or you can ask your site-builders what machine names are being used in Drupal and then use those in PatternLab.

In my pattern's .yml file, I then set those variables like this:

content:
  field_p_it_text: '<h2>A Short Heading</h2><p>Fusce dapibus, tellus ac cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo sit amet risus.</p>'
  field_p_it_image: ''

Finally, in our paragraph--image-with-text.html.twig file we have just one line of code:

{% extends "@building-blocks/image-with-text/image-with-text.twig" %}

You can probably guess what the Sass looks like:

.image-with-text {
  display: flex;

  &.left {
    flex-direction: row;
  }

  &.right {
    flex-direction: row-reverse;
  }
}

The image with text above is an example of this pattern in use on a Drupal website.

Dec 25 2017

If you happen to know Brice - my colleague and Gizra’s CEO - you probably have picked up that he doesn’t get rattled too easily. While I find myself developing extremely annoying ticks during stressful situations, Brice is a role model for stoicism.

Combine that with the fact that he knows I dislike speaking on the phone - let alone at 6:53pm, almost two hours after my work day is over - and you’d probably understand why I was surprised to get a call from him. “Surprised” as in, immediately getting a stomach ache.

The day I got that call from him was a Sale day. You see, we have this product we’ve developed called “Circuit Auction”, which allows auction houses to manage their catalog and run live, real-time auction sales - the “Going once, going twice” type.

- “Listen Bruce,” (that’s what I call him) “I’m on my way to working out. Did something crash?” I don’t always think that the worst has happened, but you did just read the background.
- “No.”

I was expecting a long pause. In a way, I think he kind of enjoys those moments, where he knows I don’t know if it’s good or bad news. In a way, I think I actually do somehow enjoy them myself. But instead he said, “Are you next to a computer?”

- “No. I’m in the car. Should I turn back? What happened?”

I really hate to do this, but in order for his next sentence to make sense I have to go back exactly 95 years, to 1922 Tokyo, Japan.

Professor Albert Einstein was visiting there, and the story goes that he scribbled a note in German and handed it to a bellboy when he did not have cash for a tip:

“A calm and modest life brings more happiness than the pursuit of success combined with constant restlessness,” it reads.

I wonder if it’s really the way it went. I’d like to believe it is. Seriously, just imagine that event!

Anyway, back to late October of 2017. Professor Einstein is long dead. The bellboy, even if still alive, is surely no longer a bellboy. Me, in my car, waiting for the light to turn Green - it’s either a left to go workout, or a u-turn back home. And the note. The note!

That note was up for sale that day. The opening price was $2,000, and it was estimated to be sold between $5,000 to $8,000.

- “It’s just passed a million dollars!”

That’s what he said next. Mind the exclamation mark. Brice almost never pronounces it, but this time I could swear I heard it. Heck, if we were next to each other we might have ended up hugging and crying together, and marvelling at how something we’ve created ended up selling a note for $1.6M!

Yes, the same note that reads “A calm and modest life brings more happiness than the pursuit of success combined with constant restlessness” was finally purchased, after a hectic thirty-minute bidding war, for lots and lots of money. I always enjoy good irony as much as I enjoy a good story. And by the way - it totally happened.

Screenshot of the live sale

We’re now launching a new version of the webapp. It has Headless Drupal in the backend, Elm in the client, and it’s sprinkled with Pusher and Serverless for real-time response.

Elm

Even after almost three years, Elm doesn’t cease to amaze me. I honestly don’t get why people are still directly JSing without at least TypeScript, to get a taste of something better and move on to a better solution. For our needs, Elm is definitely the right solution. If rewriting 60 files with zero bugs once it compiles doesn’t impress you, then probably nothing I’ll present here will.

There are many advantages to Elm, and one of the bigger ones is how we can help the compiler help us using types. Here’s an example of how we model the notion of an Item status. When selling an item it transitions through many different states. Is it open for sale? Is it the currently selected item? Was it withdrawn by the auctioneer? Is it Active, Going, Gone?

Below is our way of telling the compiler which states are allowed. You cannot have a Going status while the Item is actually Withdrawn, as that would be an “impossible state”. Ruling out impossible states is the holy grail of webapps. If you don’t allow certain states to happen, it means you simply don’t have to think about certain edge cases or bugs, as they cannot be written!
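As an illustrative stand-in for our Elm types (the type and constructor names below are invented, and it is TypeScript rather than Elm), the idea looks roughly like this - a discriminated union where the impossible combinations simply cannot be constructed:

```typescript
// An item's status as a discriminated union. A "going" phase only exists
// inside the "active" variant, so "withdrawn + going" is unrepresentable.
type ItemStatus =
  | { kind: "upcoming" }
  | { kind: "active"; phase: "open" | "going" | "gone" }
  | { kind: "withdrawn" };

// The compiler forces us to handle every variant, and narrows the type
// in each branch - status.phase only exists in the "active" case.
function label(status: ItemStatus): string {
  switch (status.kind) {
    case "upcoming":
      return "Not yet on sale";
    case "active":
      return `Sale in progress (${status.phase})`;
    case "withdrawn":
      return "Withdrawn by the auctioneer";
  }
}
```

In Elm the same shape is a custom (union) type, and the compiler's exhaustiveness checking gives the same guarantee: no branch forgotten, no impossible state handled.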

Drupal

We decided to go with a super complex Drupal 8 setup. The kind that you at home probably don’t have, and never will. It’s a super secret branch that …

No, just kidding. It’s Drupal 7, with RESTful 1.x and just the custom code we need, along with some key modules such as Entity API, Message, and Features - and it is all tested with a large number of SimpleTests.

Here is a short Q&A to questions no one really asked me, probably because I offer an answer before they are asked:

Q: Why not Drupal 8?
A: Could you also ask me why not Haskell? It would be easier to answer both these questions together.

Q: Why not Haskell?
A: Great questions! I’ll start with the latter. We’ve been dabbling with Haskell for some time now, and after doing Elm for so long we can definitely appreciate the language. However, our team was missing two important things: experience and mastery.

I think that often, in the (silly) arguments over which is the best framework/system, we are presented only with the language’s features. But we need to also take experience into account. After 10 years with Drupal, there are very few problems we haven’t encountered, and we have had a chance to iterate on and improve our solutions. We have the manpower in Gizra that is most experienced with Drupal, so scaling the dev team is easier. Combine that with a big ecosystem - such as Pantheon for hosting, and Blackfire.io integrated into our CI to prevent performance regressions - and Drupal was, in the end, the right choice for the budget.

So back to Drupal 8. I’ve never been too shy about my opinion of Drupal 8. It’s probably the best CMS out there, but from my Drupal 7 perspective and current needs, it doesn’t offer anything that is much better. For example, we’ve never had config problems in Drupal 7 we couldn’t solve, so Drupal 8’s config feature that everybody raves about isn’t that appealing. Also, Drupal 8’s DX is indeed better, but at the cost of way more complexity. In the end, the way I see it - if you scratch Drupal 8 in some part, you will find Drupal 7 buried underneath.

So Drupal 8 is on one hand not so far from Drupal 7, and on the other not radically different enough to be worth the learning curve.

Don’t get me wrong, we do develop on Drupal 8 for clients in Gizra. But for new projects, we still recommend starting with Drupal 7. And for non-CMS (similar to our Circuit Auction webapp), we’re looking to start using Yesod - a Haskell framework.

If I had to choose one topic I’m proud of in this project on the Drupal side, I’d have to pick our attention to docs and automatic testing.

The PHPDocs are longer than the actual code. CI testing is extensive.

Static data with S3 & Lazy Loading with Pusher

Drupal, with PHP 7, isn’t slow. It actually performs quite well - though probably not as scalable as what we’d get from Haskell. But even if we went with a super fast solution, we realized that all clients hitting the server at the same time could and should be avoided.

As we’re dealing with a real-time app, we know all the bidders are looking at the same view – the live sale. So, instead of having to load the items ahead of time, we’ve taken a different path. The item is actually divided into static and dynamic info. The static info holds the item’s name, uuid, image, description, etc. We surely can generate it once, upload it to S3, and let Amazon take the hit for pennies.

As for the calculated data (minimum price, starting price, etc.), Drupal will serve it via RESTful. However, the nifty feature we’ve added is that once a Clerk hits the Gone button on an item and the sale jumps to the next item, we don’t have the clients request the item from the server; rather, the server sends a single message to Pusher, which in turn distributes it to all the clients. Again, Drupal is not taking the hit, and the cost is low.

It’s actually a bit more complicated than that, as different people should see slightly different data. A winning bidder should see different info - for example, a “You are the highest bidder” message - but that winning bidder can change any second. So caching or Varnish wouldn’t cut it. Instead, we’re actually using Pusher’s public and private channels, and make sure to send the right messages to the right people. It’s working really fast, and the Drupal server stays calm.
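As a rough sketch of that idea (all type and field names here are invented, not our production code), the per-audience payloads can be computed separately: one message for the public channel, and one per bidder for their private channel.

```typescript
// Invented shapes, for illustration only.
interface BidState {
  itemId: string;
  highestBid: number;
  highestBidderId: string;
}

interface BidMessage {
  itemId: string;
  highestBid: number;
  note: string;
}

// Payload for the public channel: everyone sees the price,
// nobody's identity is leaked.
function publicMessage(state: BidState): BidMessage {
  return { itemId: state.itemId, highestBid: state.highestBid, note: "New highest bid" };
}

// Payload for one bidder's private channel: only the winner is told so.
function privateMessage(state: BidState, bidderId: string): BidMessage {
  const note =
    bidderId === state.highestBidderId
      ? "You are the highest bidder"
      : "You have been outbid";
  return { itemId: state.itemId, highestBid: state.highestBid, note };
}
```

Because each payload is computed per recipient at send time, there is nothing to cache and nothing to invalidate when the highest bidder changes.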

Keen.io & Serverless

We’re using keen.io to get analytics. It’s pretty exciting to see the reactions of the clients - the auction house owners, when we tell them about this. Because they can suddenly start getting answers for questions they didn’t know they could ask.

- “Which user hovered over the Place Bid button but didn’t press it?”
- “Who used the carousel, and to which item did they scroll?”
- “Do second time bidders bid more?”
- “When do most bidders join the sale?”

Keen.io is great, since it allows us to create analytics dashboards per auction house, without any of them having access to other auction houses’ data.

Showing the number of times a user hovered over the `Place bid` button

Serverless is useful when we want to answer some of those questions in real time. That is, the question “Which user hovered over the Place Bid button but didn’t press it?” is useful for some post-sale conclusions, but what if we wanted to ask “Which user is currently hovering over the Place Bid button?”, so the auctioneer could stall the sale, giving a hesitating bidder a chance to place their bid?

Even though the latency of Keen is quite low (about 30 seconds), it’s not good enough for a real-time experience - certainly when each item’s sale can take less than a minute. This is where Serverless comes in. It acts as a proxy server: each client sends MouseIn and MouseOut events, and Serverless is responsible for broadcasting them via Pusher to the auctioneer’s private channel.
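Conceptually (this is an invented sketch, not our actual Serverless function), the proxy’s job boils down to folding the MouseIn/MouseOut stream into a “who is hovering right now” set that gets broadcast to the auctioneer:

```typescript
// Invented event shape for illustration.
interface HoverEvent {
  userId: string;
  kind: "MouseIn" | "MouseOut";
}

// Given the stream of hover events so far, return the users currently
// hovering over the Place Bid button, in a stable (sorted) order.
function currentlyHovering(events: HoverEvent[]): string[] {
  const hovering = new Set<string>();
  for (const e of events) {
    if (e.kind === "MouseIn") hovering.add(e.userId);
    else hovering.delete(e.userId);
  }
  return [...hovering].sort();
}
```

The real function would then push this set to the auctioneer’s private channel on every change; the state fold itself stays this simple.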

Setting up Serverless was lots of fun, and knowing there’s zero thought we need to give to the infrastructure - along with its low cost - made it fit nicely into our product.

Dec 22 2017

Why is Website Development so Important for SEO?

Every day, millions of people use search engines (such as Google and Bing) to find information. Most company websites exist because they have answers, solutions and services people need. How can a company increase the odds that its information will be shown in search engine results pages (SERPs)? If many websites offer the same information, the better-optimized site is more likely to get more visitors.

Search engine optimization (SEO) is a framework of rules and techniques that web developers and digital marketers use to improve search engine rankings for websites. In general, sites developed with proper SEO techniques tend to see better organic search result placement (page 1 vs. page 10 of a Google search), because speed, user experience (UX) and good structure are some of the criteria search engines use to determine rankings.

Drupal is an exemplary platform for SEO, especially for organizations that want to maintain a website in-house after initial development. Here are three ways Drupal can help your website perform well in search engine results:

The 3 S’s of Drupal SEO

1. Security

Quality score criteria for search engines is more comprehensive than an evaluation of keyword use and content value. Search engines aim to protect their users by ranking secure sites. Drupal is a stable CMS, designed with strong security features. A secure website will have a better reputation. So, it will rank higher in search engine results.

2. Structure

Without structure, nothing works efficiently. This is especially true of websites. There are many coding requirements for excellent SEO (for example, the proper incorporation of metadata). One of the reasons hiring an experienced web development company (such as Thinkbean) is so important is that many elements cannot be set up properly without extensive knowledge of search engine code preferences and the content management system (CMS) in use.

Drupal provides a strong structural foundation for SEO, in part because of its evolving core code. Other Drupal elements that contribute to SEO are its empowering design for content managers and marketers (who update SEO elements continuously) and its simplified approach to content publishing.

3. Support

In addition to the Drupal core code, multiple modules have been developed to help Drupal web development companies build search-engine-friendly websites. These nine top-ranked Drupal SEO modules perform the following functions:

  1. Pathauto - Automatically generates URL aliases for content.
  2. Page Title - Enhances control of the default title node. Unique and relevant titles can be created for content, and different automation patterns can be set.
  3. Metatag - Enhances control over meta tags: page title, description and keywords. Gives the web development team the option to set default meta tags for an entire site, or for different groups of pages.
  4. Search 404 - Redirects a user to the internal site search if they encounter a 404 error. This prevents users from leaving the site when a page is not found.
  5. Redirect - Directs a user from an existing URL to another one without generating a 404 error, and helps prevent instances of duplicate content.
  6. Global Redirect - Verifies that URLs are being implemented correctly and prevents URL duplication.
  7. Content Optimizer - Shows statistical SEO analysis of the website and provides recommendations for improvement.
  8. SEO Checklist - Lists the most important SEO tasks and modules needed to improve on-site SEO. Creates a to-do list of modules and tasks that need to be completed. The module is updated regularly, and breaks down tasks according to function.
  9. Drupal SEO Tools - This SEO suite covers keywords, titles, tags, paths, redirects, sitemaps, Google analytics, webmaster tools, and more. Designed for integration with other SEO modules.

Once installed and properly configured, these modules help improve the performance of the website and give SEO professionals the tools they need to run a successful, ongoing SEO campaign.

Choose a Drupal Web Development Company to Master SEO

Drupal’s robust core programming, contributed modules, and potential for customization make it a perfect choice for companies looking to out-rank their competitors. However, it takes education, experience and skill to build a site that performs well. That’s why choosing a qualified Drupal web development company is vital.

Thinkbean has developed superior websites for healthcare companies, educational companies and more, with an endless variety of functionality. With years of Drupal experience behind them, Thinkbean provides custom Drupal web development and support, leveraging Drupal’s capabilities to achieve search engine optimization as only an expert Drupal development company can.

Thinkbean’s advanced Drupal development skills allow businesses to establish a user-friendly web presence, achieve business goals, and exceed the needs and expectations of their customers. Read case studies of our most interesting Drupal 8 projects or talk to a Drupal expert. Get the security, structure and support you need with Thinkbean, today.

Dec 21 2017

Integrating a Simple Drupal Text Paragraph Bundle with Patternlab

This is the first post in a series about how to integrate Drupal with PatternLab. In this first blog post, we'll look at a simple text paragraph bundle, which just has one field: text (formatted, long).

PatternLab Text Building Block Example

I see a lot of blog posts and talks around about the benefits of using component-based design, about how we should use design in the browser principles to present designs to our clients, about how styleguides are the best way to have sustainable frontends. I've even written some and given many presentations about it myself. What I don't see a lot of is blog posts about how to actually do it.

So, here's how to (or at least how I) integrate my PatternLab theme (it's based on the Phase 2 PatternLab Drupal theme) with a simple paragraph type.

PatternLab

Create a pattern - you can put it wherever your setup says it should go. Paragraph bundles are probably molecules, but I'm not sure how you set up yours. In my case, I have hacked PatternLab and created a folder called "Building Blocks" - this is where all my paragraph bundles go (and then I also have a "Building Blocks" field in each content type - more about my setup in another blog post).

Call the pattern something meaningful - in this case, I call mine "Text". Next, we write the Twig for the text pattern. This can be as simple as this:

{%
set classes = [
  "text"
]
%}

<div class="{{ classes|join(' ') }}">
  {{ content }}
</div>

Then in my corresponding text.yml or text.json file, I put in some sample content, like so (I like yml):

content: >
  <h2>This is a Level 2 Heading</h2>
  <p>This is a paragraph of text followed by a list. Sed posuere consectetur est at lobortis.
  <strong>This is strong</strong> while <em>this is emphasised</em>. Morbi leo risus, porta ac
  consectetur ac, vestibulum at eros. Aenean lacinia bibendum nulla sed consectetur. Curabitur
  blandit tempus porttitor. Integer posuere erat a ante venenatis dapibus posuere velit aliquet.
  Vestibulum id ligula porta felis euismod semper.</p>
  <ul>
    <li>A text item in a list</li>
    <li>Another text item
      <ul>
        <li>A sub-item</li>
        <li>Another sub-item</li>
      </ul>
    </li>
    <li>A third item in a list</li>
  </ul>
  <h3>This is a Level 3 Heading</h3>
  <p>Followed by some more text. <a href="#">This is a link</a>, sed posuere consectetur est at
  lobortis. Morbi leo risus, porta ac consectetur ac, vestibulum at eros. Aenean lacinia bibendum
  nulla sed consectetur. Curabitur blandit tempus porttitor. Integer posuere erat a ante venenatis
  dapibus posuere velit aliquet. Vestibulum id ligula porta felis euismod semper.</p>

Drupal

Finally, in my Drupal paragraph--text.html.twig file, I extend the above PatternLab file, like so:

{% extends "@building-blocks/text/text.twig" %}

Yes, there is only one line of code in that file.

Some Explanation

Why do I call my variable {{ content }}? Simple: I know that the default variable in Drupal's paragraph module for printing the contents of the paragraph is {{ content }}, so if I give my pattern in PatternLab the same variable name, I won't have to worry about matching up variables. I do not need to do something like this:

{% include '@building-blocks/text/text.twig' with {
  content: text
  }
%}

This will become much clearer when we work with more complex paragraph types in later blog posts.
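To see why this matters, imagine a hypothetical "card" paragraph whose pattern uses variables named title and body; the field names below are invented for illustration, and the Drupal template would then need an explicit mapping:

```twig
{# Hypothetical example: the pattern's variable names (title, body) don't
   match Drupal's render array keys, so each one has to be mapped by hand. #}
{% include '@building-blocks/card/card.twig' with {
  title: content.field_card_title,
  body: content.field_card_body
} %}
```

Naming the pattern variables after Drupal's defaults avoids all of this bookkeeping.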

You can see an example of this pattern in PatternLab here, and the text you are currently reading is an example of it in use in a Drupal template. Simple, eh?

Dec 13 2017
hw

Today, I came across an interesting bug with composer-patches plugin (it really is a git-apply bug/behavior). TL;DR, it is fixed in the latest version of composer-patches plugin as of this writing – 1.6.4. All you have to do is run composer require cweagans/composer-patches:^1.6.4 to get the fix.

The problem is simple: patches are not applied even though you might see them in the composer log output. In fact, even a PATCHES.txt file is generated with the correct listing of patches. There are no errors and no indication that the patching failed for any reason. The cause is that git apply fails silently: it gives no output and does not set an error exit code.

The problem was first documented and fixed in cweagans/composer-patches#165; however, the fix used there relies on git apply outputting log messages saying patches were skipped. Unfortunately, that behavior was only introduced in git 2.9.0, and the Docker image I was using contained version 2.1.4. All I needed was either those “Skipped patch” messages or an exit code and the plugin would have fallen back on the patch command, but neither happened.

Also, this is actually documented git behavior: git apply will refuse to do anything when run inside a local checkout of a git repository other than the one the patch is made for, such as when you are patching a module inside a site that is itself under Git version control. The suggestion is to use patch -p1 < path/file.patch instead. While an obvious solution was to upgrade git, a quick Google search told me that this problem has happened to others, so I decided to dig deeper. All variations of git apply simply didn't work, nor would they set the exit code. I followed the debugging steps in one of the comments on the pull request, but I still didn't see any messages similar to “Skipped patch”, any sign of a patch being applied, or any error.
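The patch fallback is easy to sanity-check in isolation. The sketch below (file and directory names invented for illustration) applies a minimal unified diff with patch -p1, which works regardless of any surrounding git checkout:

```shell
# Create a throwaway package directory with one file to patch.
tmp=$(mktemp -d)
mkdir -p "$tmp/module"
printf 'hello\n' > "$tmp/module/file.txt"

# A minimal unified diff, with the a/ b/ prefixes that -p1 strips.
cat > "$tmp/fix.patch" <<'EOF'
--- a/file.txt
+++ b/file.txt
@@ -1 +1 @@
-hello
+patched
EOF

# Apply it from inside the package directory, as the plugin's fallback does.
(cd "$tmp/module" && patch -p1 < ../fix.patch)
cat "$tmp/module/file.txt"
```

Unlike git apply on an old git, patch reports what it did and sets a non-zero exit code on failure, which is exactly what the plugin needs to detect problems.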

In testing, I found a way to get it working with the '--directory' parameter to 'git apply'. The command executed by the plugin is similar to the following:


git -C package-directory apply -p1 /tmp/patchfile.patch

Instead, this command worked (even if there is no git repository involved at all, even in the current directory):


git apply --directory=package-directory -p1 /tmp/patchfile.patch

On digging a bit more, I found another workaround that worked for me in cweagans/composer-patches#175. What’s more, this was already merged and tagged. All I had to do was update to the latest version 1.6.4 and it would work. This change checks for the path being patched and if it actually is a git repository. If the package is not a git repository, it does not attempt a git apply at all, but falls back to patch, which works perfectly.

Dec 10 2017

Some of our applications are deployed to Amazon Elastic Beanstalk. They are based on PHP and Symfony, and of course use Composer to download their dependencies. This can take a while, approx. 2 minutes for our application when starting on a fresh instance. That can be annoyingly long, especially when you're scaling up to more instances due to, for example, a traffic spike.

You could include the vendor directory when you run eb deploy, but then Beanstalk doesn't run composer install at all anymore, so you have to make sure the local vendor directory has the right dependencies. There are other caveats with that approach, so it was not a real solution for us.

Composer cache to the rescue. Sharing the composer cache between instances (with a simple up and download to an s3 bucket) brought the deployment time for composer install down from about 2 minutes to 10 seconds.

For that to work, we have this on a file called .ebextensions/composer.config:

commands:
  01updateComposer:
    command: export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update
  02extractComposerCache:
    command: ". /opt/elasticbeanstalk/support/envvars && rm -rf /root/cache && aws s3 cp s3://your-bucket/composer-cache.tgz /tmp/composer-cache.tgz && tar -C / -xf /tmp/composer-cache.tgz && rm -f /tmp/composer-cache.tgz"
    ignoreErrors: true

container_commands:
  upload_composer_cache:
    command: ". /opt/elasticbeanstalk/support/envvars && tar -C / -czf composer-cache.tgz /root/cache && aws s3 cp composer-cache.tgz s3://your-bucket/ && rm -f composer-cache.tgz"
    leader_only: true
    ignoreErrors: true

option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: COMPOSER_HOME
    value: /root

It downloads the composer-cache.tgz on every instance before running composer install and extracts that to /root/cache. And after a new deployment is through, it creates a new tar file from that directory on the "deployment leader" only and uploads that again to S3. Ready for the next deployment or instances.

One caveat we haven't solved yet: that .tgz file will grow over time (since it also retains old dependencies). Some process should clear it from time to time, or simply delete it on S3 when it gets too big. The ignoreErrors options above make sure the deployment doesn't fail when the tgz file doesn't exist or is corrupted.
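One way to handle that cleanup, sketched as a hypothetical extra container command for .ebextensions/composer.config (the bucket name and the ~500 MB threshold are placeholders, not part of our actual setup):

```yaml
container_commands:
  prune_composer_cache:
    # Delete the cache tarball once it exceeds ~500 MB, so the next
    # deployment rebuilds a fresh, smaller one from scratch.
    command: |
      SIZE=$(aws s3api head-object --bucket your-bucket --key composer-cache.tgz --query ContentLength --output text 2>/dev/null || echo 0)
      if [ "$SIZE" -gt 524288000 ]; then aws s3 rm s3://your-bucket/composer-cache.tgz; fi
    leader_only: true
    ignoreErrors: true
```

The first deployment after pruning pays the full 2 minutes again, but every one after that is fast.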

Dec 06 2017

We had a scenario where a client runs a cluster of events and folks sign up for these; usually a registrant signs up for all the events, but then they might invite Mum to the Dinner, brother John to the Talk, and so on.

We wanted to achieve this on a single form with a single payment. We explored both CiviCart and Drupal Commerce but in the end concluded we could achieve this in a much lighter way with good old webforms.

The outcome is that up to 6 people can be registered for any combination of events, e.g.

  • c1 registers for Events A, B, C, D, E and F
  • c2 registers for B, C and D
  • c3 registers for A and B
  • c4 registers for A and F
  • etc

To see the full gory details of the conditionals approach we took, please read the full blog on Fuzion's site.

Filed under

Dec 06 2017

The Rise of Assistants

In the last couple of years we have seen the rise of assistants. AI is enabling more and more of our lives, and with the help of devices like Google Home and Amazon Echo it is now entering our living rooms and changing how we interact with technology. Though assistants have been around for a couple of years through the Google Assistant app on Android, the UX is changing rapidly with home devices: we are now experiencing conversational UI, i.e. being able to talk to devices. No more typing and searching; you can converse with your device to book a cab or play your favourite music. Though the verdict on home devices like Echo and Google Home is still pending, the underlying technology, AI-based assistants, is here to stay.

In this post, we will explore Google Assistant Developer framework and how we can integrate it with Drupal.

Google Assistant Overview


Google Assistant works with the help of apps that define actions, which in turn invoke operations to be performed on our products and services. These apps are registered with Actions on Google, a platform that connects different products and services via apps. Unlike traditional mobile or desktop apps, users interact with Assistant apps through conversation: natural-sounding back-and-forth exchanges (voice or text) rather than traditional click-and-touch paradigms.

The first step in the flow is understanding user requests through actions, so let's learn more about it.

How do Actions on Google work with the Assistant?

To get an overview of the workflow, it is important to understand how Actions on Google actually works with the Assistant. From a development perspective, it's crucial to understand the Google Assistant and Google Action model as a whole, so that extending it becomes easier.

 

Actions on Google

 

It all starts with the user requesting an action, followed by Google Assistant invoking the best corresponding app using Actions on Google. It is then the duty of Actions on Google to contact the app by sending it a request. The app must be prepared to handle the request, perform the corresponding action, and send a valid response back to Actions on Google, which is then passed to Google Assistant. Google Assistant renders the response in its UI, displays it to the user, and the conversation begins.

Let's build our own action. The following tools are required:

  • Ngrok - exposes a local web server over HTTPS.
  • Editor - Sublime/PHPStorm
  • Google Pixel 2 - Just kidding! Although you can order 1 for me :p
  • Bit of patience and 100% attention

STEP 1: BUILD YOUR ACTION APP

The very first step is building our Actions on Google app. Google provides three ways to accomplish this:

  1. With Templates
  2. With Dialogflow
  3. With Actions SDK

The main purpose of this app is to match user requests with actions. For now, we will go with Dialogflow (for beginner convenience). To develop with Dialogflow, we first need to create an Actions on Google developer project and a Dialogflow agent. Having a project gives us access to the developer console to manage and distribute our app.

  1. Go to the Actions on Google Developer Console.
  2. Click on Add Project, enter YourAppName for the project name, and click Create Project.
  3. In the Overview screen, click BUILD on the Dialogflow card and then CREATE ACTIONS ON Dialogflow to start building actions.
  4. The Dialogflow console appears with information automatically populated in an agent. Click Save to save the agent.

After saving the agent, we start improving and developing it. Think of this step as training our newly created agent with structured training data sets; those data sets are called intents. An individual intent comprises the query patterns a user may utter to perform an action, along with the events and actions associated with it; together these define the purpose the user wants to fulfill. Every task the user wants the Assistant to perform is mapped to an intent. The events and actions can be considered definitive representations of the associated event and task, which our products and services use to understand what the end user is asking for.

So, here we define all the intents that define our app. Let's start with creating an intent to do cache rebuild.

  1. Create a new intent with name CACHE-REBUILD.
  2. Add all the query patterns we can think of that a user might say to invoke this intent. (Query patterns may contain parameters too; we will cover this later.)
  3. Add event cache-rebuild.
  4. Save the intent.
Intent Google Actions

For now, this is enough to understand the flow; we will focus on entities and other aspects later. To verify that the intent you created gets invoked when a user says “do cache rebuild”, use the “Try it now” panel on the right side of the Dialogflow window.

STEP 2: BUILD FULFILLMENT

After defining the action in Dialogflow, we need to prepare our product (the Drupal app) to fulfill the user request. Once Actions on Google has understood the user request and matched it with an intent and action, it invokes our Drupal app. This is accomplished using webhooks: Google sends a POST request with all the details. Under the Fulfillment tab, we configure our webhook, and we need to ensure that our web service fulfills the webhook requirements.

According to those requirements, the web service must use HTTPS and the URL must be publicly accessible; hence we need to install ngrok, which exposes a local web server to the internet.

NGROK

Once we have a publicly accessible URL, we add it under the Fulfillment tab. This URL will receive the POST requests and processing happens there, so it should point at our request handler, just like an endpoint. (It may look like http://yourlocalsite.ngrok.io/google-assistant-request.)

add webhook url

Now, we need to build the corresponding fulfillment to process the intent.

OK! It seems simple: we just need to create a custom module with a route and a controller to handle the request. It is indeed simple; the only important point is understanding the flow, which we covered above.

So, what are we waiting for? Let’s start.

Create a custom module and a routing file:

droogle.default_controller_handleRequest:
  path: '/google-assistant-request'
  defaults:
    _controller: '\Drupal\droogle\Controller\DefaultController::handleRequest'
    _title: 'Handle Request'
  requirements:
    _access: 'TRUE'

Now, let’s add the corresponding controller:

namespace Drupal\droogle\Controller;

use Drupal\Core\Controller\ControllerBase;
use Drupal\Core\Logger\LoggerChannelFactoryInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\RequestStack;

/**
 * Handles webhook requests sent by Actions on Google.
 */
class DefaultController extends ControllerBase {

  protected $requestStack;

  protected $loggerFactory;

  public function __construct(RequestStack $request_stack, LoggerChannelFactoryInterface $loggerFactory) {
    $this->requestStack = $request_stack;
    $this->loggerFactory = $loggerFactory;
  }

 /**
  * {@inheritdoc}
  */
 public static function create(ContainerInterface $container) {
   return new static(
     $container->get('request_stack'),
     $container->get('logger.factory')
   );
 }

 /**
  * Handles the webhook request.
  *
  * @return \Symfony\Component\HttpFoundation\JsonResponse
  *   The fulfillment response.
  */
 public function handleRequest() {
   $this->loggerFactory->get('droogle')->info('droogle triggered');
   $this->processRequest();
   $data = [
     'speech' => 'Cache Rebuild Completed for the Site',
     'displayText' => 'Cache Rebuild Completed',
     'data' => '',
     'contextOut' => [],
     'source' => 'uniworld',
   ];
   return JsonResponse::create($data, 200);
 }

 protected function processRequest() {
   $params = $this->requestStack->getCurrentRequest();
    // Here we will process the request to get the intent
    // and fulfill the action.
 }
}

Done! We are ready with a request handler to process the request that will be made by Google Assistant.
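The processRequest() stub can be fleshed out to act on the matched intent. The sketch below is illustrative only: it assumes the Dialogflow v1 webhook payload shape (intent name under result.metadata.intentName), and it picks cache rebuild as the fulfillment:

```php
// Sketch only: assumes the Dialogflow v1 webhook payload, where the
// matched intent name is found under result.metadata.intentName.
protected function processRequest() {
  $request = $this->requestStack->getCurrentRequest();
  $payload = json_decode($request->getContent(), TRUE);
  $intent = $payload['result']['metadata']['intentName'] ?? '';

  if ($intent === 'CACHE-REBUILD') {
    // Fulfill the cache-rebuild intent.
    drupal_flush_all_caches();
    $this->loggerFactory->get('droogle')->info('Cache rebuilt via Assistant.');
  }
}
```

Each new intent you add in Dialogflow gets a matching branch here, keeping the route and controller unchanged.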

 

STEP 3: DEPLOY FULFILLMENT AND TEST THE APP

Part of the deployment has already been done, since we are developing locally. Now we need to enable our custom module. After that, let's get back to Dialogflow and establish the connection with the app to test it. Earlier we configured the fulfillment URL details; make sure the webhook is enabled for all domains.

deployment

 

Let’s get back to the intent that we built, enable the webhook there too, and save the intent.

intent enable webhook

Now, to test this we need to integrate it with a device or a live/sandbox app. Under the Integrations tab, Google provides several options for this. Enable "Web Demo" and open the URL in a new tab to test it:

Integration Web Demo

Speak up and test your newly built app, and let Google Assistant do its work.

So, as seen in the screenshot, there can be two types of responses: one where our server is not able to handle the request properly, and one where the Drupal server sends a valid JSON response.

GREAT! The connection is now established. You can now add intents to your Google Action app and handle each intent and action correspondingly on the Drupal end. This is just a taste: conversational UX and assistant technology will definitely shape how we interact with technology, and we believe Drupal has a great role to play as a robust backend.

Nov 28 2017

Websites that do something are interactive products which require careful planning to execute, both for the client and for the web development company. Websites fail when businesses and Drupal web development (webdev) companies don’t plan, or don’t plan together. While Drupal is a superior platform in many ways (that’s why Thinkbean uses Drupal), there is no substitute for a well-crafted Drupal website strategy. The crux of that strategy is the evolving discussion between the client and their Drupal web development partner.

What Your Drupal WebDev Team Needs to Know Now

A creative and encouraging way to open discussion is to establish a good rapport before communicating project requirements. Ask the Drupal web development team what their biggest challenges are when communicating with clients. The answers show what information the web development company needs most, which will go a long way toward minimizing risks.

Some common information webdev companies need

  1. Identify your business goals and how the new website is expected to fulfill them: Are you trying to grow sales? Expand product mix? Be a thought leader? Capture emails? Your business objectives will drive site features to deliver the best outcome.
     
  2. Identify your target audience AND their expectations: The audience must drive the user experience (UX). Drupal webdev teams use an established target audience persona to optimize UX for the primary users of the website. Target audience identification allows research about the target demographic to inform website design and function choices that can make or break UX. What do users expect a website to be able to do to meet (or exceed) the standard for its sector or industry?
    A few years ago many city governments didn’t have official websites. Now, from Boston.gov’s homepage, local residents can search for property information, pay their real estate taxes, apply for a city job, pay a parking ticket, view the schedule for Boston food trucks, get a resident parking permit, learn how to vote by absentee ballot and report non-emergency issues... and that’s just on the home page.
     
  3. Know your scope and budget for it: This really falls under website strategy. Talking through the website’s functionality and desired features internally, as a team and then again with the Drupal web development company, is an essential step to getting it right without rushing to fulfill unmet needs just before launch. The scope needs to be defined as specifically as possible in terms of goals, deliverables, features, tasks and costs. Without a solid scope included in a website strategy, timeframes and costs can increase dramatically.
     
  4. Know what people should do on your website: The website is the virtual representation of your organization to everyone who visits it. What’s the best possible outcome when a user visits the website? The primary purpose of a website is to convert a visitor into a lead. How can the website achieve this? Will users download something? Are there forms they will fill out? Should users have the ability to call the company directly? To convert visitors into leads, a website has to offer them valuable content.

Communicating With a Web Development Company

Website strategy isn’t just for the benefit of the webdev team. It’s just as important for the business’s outcomes as it is for the web development company. The process of creating a website strategy with a Drupal web development company can head off many potential problems before they become actual problems.

Some common communication issues clients have with webdev companies are:

  1. It’s difficult to get in touch with my webdev partner: There are many factors in this growing industry that make it difficult for web development companies to manage capacity. While building a website strategy with the web development partner your organization chooses, discuss what kind of (and how much) communication both parties will require. Ask how often you’ll be speaking and via what platforms, and establish what information you want updated at every meeting. Manage the project for efficiency by appointing one person who can make decisions on behalf of your business.
  2. There were unacceptable delays for deliverables: A client can do a lot to ensure their webdev partner meets deadlines. There is a degree of following-up that anyone involved in a project should do to ensure forward progress. The squeaky wheel gets the grease. The most important thing a client can do to make sure their deliverables are delivered on-time is to make sure the web development company gets all of the information they need to complete the project. While a seemingly minor question might seem trivial to a client, web development companies can be completely road-blocked by unanswered questions. Take a look at the webdev company’s processes too. Good processes mean more successful outcomes.
  3. They over-promised and couldn’t deliver what we wanted: This is a common complaint from clients. In this industry, there are a great many possibilities for features, functionality and aesthetics. With such extensive combinations of potential requirements, it can be difficult for a client and a web development company to communicate exact specifications in ways each can clearly understand. As a result, expectations can be shortchanged. Understand there may be a language barrier, and set good communication standards and expectations early to prevent major miscommunications.
    • Start with a discovery period so the web development company can establish a clear understanding of the business and user goals
    • Develop a strong scope of work around a minimum viable product (MVP)
    • Don’t let either party scope creep
    • When changes occur, document them with a change order
    • Get demos weekly, or bi-weekly to ensure steady progression
    • Get a budget update regularly
    • Adequately staff your team

** If a project is 1000 hours in development, your team should expect to put in 600-800 hours for approvals, reviews, demos, research, content, and communication.

How Thinkbean Starts When Working a Website Strategy

Thinkbean begins every web project with a Discovery: a web development strategy conversation that identifies a defined website strategy, business and user goals, and what users need to be able to do when they reach the website. All projects begin this way because Thinkbean has encountered communication challenges before and has overcome them with this simple, straightforward conversation.

Find the right Drupal Developer - the first time. Thinkbean’s advanced Drupal development skills allow businesses to establish a user-friendly web presence, achieve business goals and exceed the needs and expectations of users.

Read case studies of Thinkbean’s most interesting Drupal 8 projects to discover how we meet and exceed the expectation of our clients - or talk to a Boston Drupal expert today.


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
