Nov 02 2018

As part of my session at Drupal Europe, REST Ready: Auditing established Drupal 8 websites for use as a content hub, I presented a module called “Entity Access Audit”.

This has proved to be a useful tool for auditing our projects for unusual access scenarios as part of our standard go-live security checks or when opening sites up to additional mechanisms of content delivery, such as REST endpoints. Today this code has been released on Drupal.org: Entity Access Audit.

There are two primary interfaces for viewing access results, the overview screen and a detailed overview for each entity type. Here is a limited example of the whole-site overview showing a few of the entity types you might find in core or custom modules:

Entity access audit

Here is a more detailed report for a single entity type:

Entity access audit

The driving motivation behind these interfaces was being able to visually scan entity types and ensure that the access results align with our expectations. This has so far helped identify various bugs in custom and contributed code.

To conduct a thorough access test, the module takes a predefined set of dimensions and tests their Cartesian product, i.e. every combination. The dimensions tested out of the box, where applicable to the given entity type, are:

  • All bundles of an entity type.
  • If the current user is the entity owner or not.
  • The access operation: create, view, update, delete.
  • All the available roles.

It’s worth noting that these are only common factors used to determine access results; they are not comprehensive. If access were determined by other factors, there would be no visibility of this in the generated reports.
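To make the idea concrete, here is a rough sketch of how a Cartesian product of those dimensions could be built in PHP. This is illustrative only (the values are hypothetical and this is not the module's actual code); the real module derives the dimensions from the site's configuration and checks each combination against the entity access API.

<?php

// Hypothetical dimension values; the module discovers these per entity type.
$bundles = ['article', 'page'];
$roles = ['anonymous', 'authenticated', 'editor'];
$ownership = [TRUE, FALSE];
$operations = ['create', 'view', 'update', 'delete'];

$combinations = [];
foreach ($bundles as $bundle) {
  foreach ($roles as $role) {
    foreach ($ownership as $is_owner) {
      foreach ($operations as $operation) {
        $combinations[] = compact('bundle', 'role', 'is_owner', 'operation');
      }
    }
  }
}

// Each combination would then be checked via the entity access API,
// e.g. $entity->access($operation, $account), and the result tabulated.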

The module is certainly not a silver bullet for validating the security of Drupal 8 websites, but has proved to be a useful additional tool when conducting audits.

Photo of Sam Becker

Posted by Sam Becker
Senior Developer

Dated 2 November 2018


Oct 08 2018

In this blog post, we'll have a look at how contributed Drupal modules can remove their usages of deprecated core code and be compatible with both Drupal 8 and Drupal 9.

Ever since Drupal Europe, we know Drupal 9 will be released in 2020. As per @catch’s comment in 2608496-54:

We already have the continuous upgrade path policy which should mean that any up-to-date Drupal 8 module should work with Drupal 9.0.0, either with zero or minimal changes.

Drupal core has a proper deprecation process so it can be continuously improved. Core also has a continuous process of removing deprecated code usages: thanks to proper deprecation testing, core should not trigger deprecated code except in tests and during updates.

The big problem for contributed modules (aka contrib) is the removal of deprecated code usage. To allow contrib to keep up with core's removal of deprecated code, contrib needs proper deprecation testing, which is being discussed in support deprecation testing for contributed modules on Drupal.org.

However, the DrupalCI build process can be controlled by a drupalci.yml file found in the project. The documentation about it can be found at customizing DrupalCI Testing for Projects.

It is very easy for contributed modules to start removing their usage of deprecated code. All we need to do is add the following drupalci.yml file to the contributed module and fix the failures.

# This is the DrupalCI testbot build file for Dynamic Entity Reference.
# Learn to make one for your own drupal.org project:
# https://www.drupal.org/drupalorg/docs/drupal-ci/customizing-drupalci-testing
build:
  assessment:
    validate_codebase:
      phplint:
      phpcs:
        # phpcs will use core's specified version of Coder.
        sniff-all-files: true
        halt-on-fail: true
    testing:
      # run_tests task is executed several times in order of performance speeds.
      # halt-on-fail can be set on the run_tests tasks in order to fail fast.
      # suppress-deprecations is false in order to be alerted to usages of
      # deprecated code.
      run_tests.phpunit:
        types: 'PHPUnit-Unit'
        testgroups: '--all'
        suppress-deprecations: false
        halt-on-fail: false
      run_tests.kernel:
        types: 'PHPUnit-Kernel'
        testgroups: '--all'
        suppress-deprecations: false
        halt-on-fail: false
      run_tests.functional:
        types: 'PHPUnit-Functional'
        testgroups: '--all'
        suppress-deprecations: false
        halt-on-fail: false
      run_tests.javascript:
        concurrency: 15
        types: 'PHPUnit-FunctionalJavascript'
        testgroups: '--all'
        suppress-deprecations: false
        halt-on-fail: false

This drupalci.yml will check all the Drupal core coding standards. This can be disabled by the following change:

      phpcs:
        # phpcs will use core's specified version of Coder.
        sniff-all-files: false
        halt-on-fail: false

This file also only runs PHPUnit tests; to run legacy Simpletest tests you have to add the following block:

      run_tests.simpletest:
        types: 'Simpletest'
        testgroups: '--all'
        suppress-deprecations: false
        halt-on-fail: false

But if you still have those, you probably want to start there, because they won't be supported in Drupal 9.

Last but not least, if you think the module is not yet ready to fix all the deprecation warnings, you can set suppress-deprecations: true.
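For example, to suppress deprecation failures for the unit test run only, that block would become:

      run_tests.phpunit:
        types: 'PHPUnit-Unit'
        testgroups: '--all'
        suppress-deprecations: true
        halt-on-fail: false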

As a contrib module maintainer or a contrib module consumer, I encourage you to add this file to all the contrib modules you maintain or use, or at least create an issue in the module's issue queue, so that by the time Drupal 9 is released all of your favourite modules will be ready. The JSON API module added this file in https://www.drupal.org/node/2982964 which inspired me to add this to DER in https://www.drupal.org/node/3001640.

Photo of Jibran Ijaz

Posted by Jibran Ijaz
Senior Drupal Developer

Dated 8 October 2018

Comments

What Alex said. I did this for the JSON API module because we want it to land in Drupal core. But we are running into the problems explained in the issue linked by Alex.

Despite Drupal 9 being announced, the Drupal Continuous Integration system is not yet ready for modules trying to keep current with all deprecations for Drupal 9, while remaining compatible with both the minors that have security team coverage (current + previous, ATM 8.6 + 8.5) and the next minor (ATM 8.7). Hopefully we soon will be :)

Thanks for writing about this though, I do think it's important that more module maintainers get in this mindset!

DrupalCI always runs contrib tests against the latest core branch. As a contrib module maintainer, if I have to make a compatibility change for a core minor version then I create a new release and note that in the release notes after the stable release of that core minor, e.g. 8.x-2.0-alpha8. I have never had to create a new release for a core patch release, at least not so far, and I don't know how I would name the new release if I ever have to, but then again that's a contrib semver issue.

DrupalCI runs against whichever core branch the maintainer has configured it to run against.

If a contributed module wants to remove usages of deprecations, it should probably never do that against the "Development" branch, as there isn't a way for a contrib module to both remove those deprecations *and* still be compatible with supported or security branches. The earliest that a contrib module should try to remove new deprecations is the pre-release phase, as at that point we're unlikely to introduce new deprecations.


Sep 25 2018

Drupal 8.6 has shipped with the Media Library! It’s just one part of the latest round of improvements from the Media Initiative, but what a great improvement! Being brand new it’s still in the “experimental” module state but we’ve set it up on this website to test it out and are feeling pretty comfortable with its stability.

That said, I highly encourage you to test it thoroughly on your own site before enabling any experimental module on a production site. Don’t just take my word for it :)

What it adds

The Media Library has two main parts to it...

Grid Listing

There’s the Grid Listing at /admin/content/media, which takes precedence over the usual table of media items (which is still available under the “Table” tab). The grid renders a new Media Library view mode showing the thumbnail and compact title, as well as the bulk edit checkbox.

The new media library grid listing page

Field Widget

Then there’s the field widget! The field widget can be set on the “Manage Form Display” page of any entity with a Media Reference Field. Once enabled, an editor can either browse existing media (by accessing the Grid Listing in a modal) or create a new media item (utilising the new Media Library form mode - which is easy to customise).

Media reference field with the new Media Library form widget

Media Library widget once media has been added, which shows a thumbnail of the media

The widget is very similar to what the ‘Inline Entity Form’ module gave you, especially when paired with the Entity Browsers IEF submodule. But the final result is a much nicer display and in general feels like a nicer UX. Plus it’s in core so you don’t need to add extra modules!

The widget also supports bulk upload, which is fantastic. It respects the Media Reference Field's cardinality: limit it to one and only one file can be uploaded or selected from the browser; allow more than one and you can upload or select up to that exact number. The field even tells you how many you can add and how many you have left. And yes, the field supports drag and drop :)

What it doesn’t add

WYSIWYG embedding

WYSIWYG embed support is now being worked on for a future release of Drupal 8 core; you can follow this Meta issue to keep track of the progress. It sounds like some version of Entity Embed (possibly limited to Media) will make its way in, and some form of CKEditor plugin or button will be available to achieve something similar to what the Media Entity Browser, Entity Browser, Entity Embed and Embed module set provides currently.

Until then though, we’ve been working on integrating the Media Library's Grid Listing into a submodule of Media Entity Browser, to provide editors with the UX improvements that came with Media Library while keeping the same WYSIWYG embed process (and the contrib modules behind it) they’re currently used to (assuming they’re already using Media Entity Browser, of course). More on this submodule below.

This is essentially a temporary solution until the Media Initiative team and those who help out on their issue queue (all the way from UX through to dev) have the time and mental space to get it into core. It should hopefully have all the same bulk upload features the field widget has; it might even be able to support bulk embedding too!

View mode or image style selectors for editors

Site builders can set the view mode of the rendered media entity from the manage display page, which in turn allows you to set an image style for that view mode, but editors can’t change this per image (without needing multiple different Media reference fields).

There is work on supporting this idea for images uploaded via CKEditor directly, which has nothing to do with Media, but I think it would be a nice feature for Media embedding via WYSIWYG as well. Potentially also for Media Reference Fields. But by no means a deal breaker.

Advanced cropping

From what I can gather there are no plans to add any more advanced cropping capabilities into core. This is probably a good thing since cropping requirements can differ greatly and we don’t want core to get too big. So contrib will still be your go-to for this. Image Widget Crop is my favourite for this, but there’s also the simpler Focal Point.

You can test out the submodule from the patch on this issue and let us know what you think! Once the patch is added, enable the submodule then edit your existing Entity Browsers and swap the View widget over to the “Media Entity Browser (Media Library)” view.

Form for changing the Entity Browser view widget

It shouldn’t matter if you’ve customised your entity browser. If you’ve added something like Dropzone for drag-and-drop support it *should* still work (if not, check the Dropzone or Entity Browser issue queues). If you’ve customised the view it uses however, you might need to redo those customisations on the new view.

I also like updating the Form Mode of the Entity Browser's IEF widget to use the new Media Library form display, which I always pare back to just the essential fields (who really needs to manually set the author and created time of uploaded media?).

You still can’t embed more than one media item at a time. But at least now you also can’t select more than one item when browsing so that’s definitely an improvement.

Modal of the Media Entity Browser showing the same Grid listing

Plus editors will experience a fairly consistent UX between browsing and uploading media on fields as they do via the WYSIWYG.

Once set up and tested (ensuring you’ve updated any Media Reference Fields to use the new Media Library widget too) you can safely disable the base Media Entity Browser module and delete any unused configuration - it should just be the old “Media Entity Browser” view.

Please post any feedback on the issue itself so we can make sure it’s at its best before rolling another release of the module.

Happy days!

I hope you have as much fun setting up the Media Library as I did. If you want to contribute to the Media Initiative I’m sure they’ll be more than happy for the help! They’ve done a fantastic job so far but there’s still plenty left to do.

Photo of Rikki Bochow

Posted by Rikki Bochow
Front end Developer

Dated 25 September 2018

Comments

Nice and useful article on using the core Media Library in Drupal 8 projects.
Thank you!


Aug 17 2018

Allow sitebuilders to easily add classes onto field elements with the new element_class_formatter module.

Adding classes onto a field element (for example a link or image tag - as opposed to the wrapper div) isn't always the easiest thing to do in Drupal. It requires preprocessing the element's render array, using special Url::setOptions() calls, or drilling down through a combination of objects and arrays in your Twig template.

The element_class_formatter module aims to make that process easier. At PreviousNext we love field formatters! We write custom ones where needed, and have been re-using a few generic ones for quite a while now. This module extends our generic ones into a complete set, to allow for full flexibility, sitebuilding efficiency and re-usability of code. 

To use this module, add and enable it just like any other, then visit one of your Manage Display screens. The most widely available formatter is the Wrapper (with class) one, but the others follow a similar naming convention; "Formatter name (with class)". The majority of these formatters extend a core formatter, so all the normal formatter options should still be available.

The manage display page with the formatter selected for three different field types

The manage display page with new (with class) field formatters selected

Setting classes on the configuration pane of a link field

The field formatter settings, with all the default options

Use this module alongside Reusable style guide components with Twig embed, Display Suite with Layouts and some Bare templates to get optimum Drupal markup. Or just use it to learn how to write your own custom field formatters!
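If you do want to roll your own, a minimal formatter that adds a class to link elements might look something like the sketch below. This is a hypothetical example (the module and class names are made up) and not the element_class_formatter code itself, but it shows the general pattern of extending a core formatter and decorating the elements it returns.

<?php

namespace Drupal\my_module\Plugin\Field\FieldFormatter;

use Drupal\Core\Field\FieldItemListInterface;
use Drupal\link\Plugin\Field\FieldFormatter\LinkFormatter;

/**
 * A hypothetical "Link (with class)" field formatter.
 *
 * @FieldFormatter(
 *   id = "my_link_with_class",
 *   label = @Translation("Link (with class)"),
 *   field_types = {"link"}
 * )
 */
class LinkWithClassFormatter extends LinkFormatter {

  /**
   * {@inheritdoc}
   */
  public function viewElements(FieldItemListInterface $items, $langcode) {
    $elements = parent::viewElements($items, $langcode);
    foreach ($elements as $delta => $element) {
      // With the default settings each element is a 'link' render element,
      // so we can add a class to the attributes in its URL options.
      $elements[$delta]['#options']['attributes']['class'][] = 'my-component__link';
    }
    return $elements;
  }

}

In a real formatter you would expose the class as a formatter setting rather than hard-coding it, which is what the module's "(with class)" formatters do.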

For feature requests or issues please see the module's issue queue on Drupal.org

Photo of Rikki Bochow

Posted by Rikki Bochow
Front end Developer

Dated 17 August 2018


Aug 14 2018

There is not a lot of documentation available about the difference between running a browser in WebDriver mode vs headless mode, so I did some digging...

Apparently, there are two ways to run Chrome for testing:

  • As WebDriver
  • As Headless

WebDriver:

There are two ways to run Chrome as WebDriver:

Using Selenium:

Run the Selenium standalone server in WebDriver mode and pass the path of the ChromeDriver binary along with the config, e.g. Selenium Dockerfile

This works fine with Nightwatch standard setup, \Drupal\FunctionalJavascriptTests\JavascriptTestBase and also with Drupal core's new \Drupal\FunctionalJavascriptTests\WebDriverTestBase.

Using ChromeDriver:

Run ChromeDriver in WebDriver mode e.g. chromedriver Dockerfile

This works fine with Nightwatch, JTB, and WTB.

Headless:

Using Chrome

Run the Chrome browser binary in headless mode, e.g. Chrome headless Dockerfile
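Outside of a container, this is roughly equivalent to starting the binary with the headless flags yourself (the remote debugging port here is an arbitrary choice):

$ google-chrome --headless --disable-gpu --remote-debugging-port=9222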

Nightwatch is not working with this set up, at least I was unable to configure it. See https://github.com/nightwatchjs/nightwatch/issues/1390 and https://github.com/nightwatchjs/nightwatch/issues/1439 for more info. \DMore\ChromeDriver can be used to run the javascript tests.

Using ChromeDriver

Using Selenium, ChromeDriver can be run in headless mode with something like this:

const fs = require('fs');
const webdriver = require('selenium-webdriver');
const chromedriver = require('chromedriver');

const chromeCapabilities = webdriver.Capabilities.chrome();
chromeCapabilities.set('chromeOptions', {args: ['--headless']});

const driver = new webdriver.Builder()
  .forBrowser('chrome')
  .withCapabilities(chromeCapabilities)
  .build();

DrupalCI is running ChromeDriver without Selenium and testing Nightwatch and WTB on it.

Conclusion

The question is which is the best solution to run Nightwatch and JTB/WTB tests using the same setup?

  • We had seen some memory issues with Selenium containers in the past, but we haven't run into any issues recently, so I prefer this option; you can also swap the Selenium container to use different browsers for testing.
  • We have also seen some issues while running ChromeDriver in WebDriver mode: it just stops working mid-test run.
  • I was unable to get headless Chrome working with Nightwatch, but it needs more investigation.
  • The headless ChromeDriver setup on DrupalCI is quite stable. For JTB this would mean that we could use either \Drupal\FunctionalJavascriptTests\DrupalSelenium2Driver or DMore\ChromeDriver.

Please share your ideas and thoughts, thanks!

For more info:

Photo of Jibran Ijaz

Posted by Jibran Ijaz
Senior Drupal Developer

Dated 14 August 2018

Comments

We are also having a discussion about this in 'Drupal testing trait' merge request, see merge_requests/37.

Might be worth adding to this list that there's an alternative setup I successfully tested in April using the browserless/chrome image with Cucumber and Puppeteer for behavioural tests.

YMMV but to give a rough idea, here's the relevant docker-compose.yml extract:

b-tester:
  image: node:9-alpine
  command: 'sh -c "npm i"'
  volumes:
    - ./tests:/tests:cached
  working_dir: /tests
  network_mode: "host"

# See https://github.com/GoogleChrome/puppeteer/issues/1345#issuecomment-3451…
chrome:
  image: browserless/chrome
  shm_size: 1gb
  network_mode: "host"
  ports:
    - "3000:3000"

... and in tests/package.json:

"devDependencies": {
  "chai": "^4.1.2",
  "cucumber": "^4.0.0",
  "puppeteer": "^1.1.0"
},
"scripts": {
  "test": "cucumber-js"
}

... and to connect, open a page and take screenshots, e.g. in tests/features/support/world.js:

const { setWorldConstructor } = require("cucumber");
const { expect } = require("chai");
const puppeteer = require("puppeteer");
const PAGE = "http://todomvc.com/examples/react/#/";

class TodoWorld {

  // ... (snip)

  async openTodoPage() {
    // See https://github.com/joelgriffith/browserless
    // (via https://github.com/GoogleChrome/puppeteer/issues/1345#issuecomment-3451…)
    this.browser = await puppeteer.connect({ browserWSEndpoint: 'ws://localhost:3000' });
    this.page = await this.browser.newPage();
    await this.page.goto(PAGE);
  }

  // ... (snip)

  async takeScreenshot() {
    await this.page.screenshot({ path: 'screenshot.png' });
  }

  // ... (snip)
}

setWorldConstructor(TodoWorld);

→ To run tests, e.g. :
$ docker-compose run b-tester sh -c "npm test"

(result in tests/screenshot.png)

I ran out of time to make a prototype repo, but my plan was to integrate https://github.com/garris/BackstopJS


Aug 08 2018

Malicious users can intercept or monitor plaintext data transmitting across unencrypted networks, jeopardising the confidentiality of sensitive data in Drupal applications. This tutorial will show you how to mitigate this type of attack by encrypting your database queries in transit.

With attackers and data breaches becoming more sophisticated every day, it is imperative that we take as many steps as practical to protect sensitive data in our Drupal apps. PreviousNext use Amazon RDS for our MariaDB and MySQL database instances. RDS supports SSL encryption for data in transit, and it is extremely simple to configure your Drupal app to connect in this manner.

1. RDS PEM Bundle

The first step is ensuring your Drupal application has access to the RDS public certificate chain to initiate the handshake. How you achieve this will depend on your particular deployment methodology - we have opted to bake these certificates into our standard container images. Below are the lines we've added to our PHP Dockerfile.

# Add Amazon RDS TLS public certificate.
ADD https://s3.amazonaws.com/rds-downloads/rds-combined-ca-bundle.pem  /etc/ssl/certs/rds-combined-ca-bundle.pem
RUN chmod 755 /etc/ssl/certs/rds-combined-ca-bundle.pem

If you use a configuration management tool like Ansible or Puppet, the same principle applies - download that .pem file to a known location on the app server.

If you have limited control of your hosting environment, you can also commit this file to your codebase and have it deployed alongside your application.

2. Drupal Database Configuration

Next you need to configure Drupal to use this certificate chain if it is available. The PDO extension makes light work of this. This snippet is compatible with Drupal 7 and 8.

$rds_cert_path = "/etc/ssl/certs/rds-combined-ca-bundle.pem";
if (is_readable($rds_cert_path)) {
  $databases['default']['default']['pdo'][PDO::MYSQL_ATTR_SSL_CA] = $rds_cert_path;
}

3. Confirmation

The hard work is done; you'll now want to confirm that the connections are actually encrypted.

Use drush to smoke check the PDO options are being picked up correctly. Running drush sql-connect should give you a new flag: --ssl-ca.

$ drush sql-connect

mysql ... --ssl-ca=/etc/ssl/certs/rds-combined-ca-bundle.pem

If that looks OK, you can take it a step further and sniff the TCP connection between Drupal and the RDS server.

This requires root access to your server, and the tcpflow package installed - this tool will stream the data being transmitted over port 3306. You want to see illegible, garbled data - definitely not content that looks like SQL queries or responses!

Run this command, and click around your site while logged in (to ensure minimal cache hits).

$ tcpflow -i any -C -g port 3306

This is the type of output which indicates the connection is encrypted.

tcpflow: listening on any

x1c
"|{mOXU{7-rd 0E
W$Q{C3uQ1g3&#a]9o1K*z:yPTqxqSvcCH#Zq2Hf8Fy>5iWlyz$A>jtfV9pdazdP7
tpQ=
i\R[dRa+Rk4)P5mR_h9S;lO&/=lnC<U)U87'^[email protected]{4d'Qj2{10
YKIXMyb#i(',,j4-\1I%%N>F4P&!Y5_*f^1bvy)Nmga4jQ3"W0[I=[3=3\NLB0|8TGo0>I%^Q^~jL
L*HhsM5%7dXh6w`;B;;|kHTt[_'CDm:PJbs$`/fTv'M .p2<KTE
lt3'[*z]n6)O*Eiq9w20Rq453*mm=<gwJ_'tn]#p`SQ]5hGDLnn?YQ
DDujr!e7D#d^[email protected]+v3Hy(('7O\2.6{0+
V{+m'[cq|6t!Zhv,_/:EJbBF9D8Qz+2t=E(6}jR}qDezq'~)ikO$Y:F:G,UjC[{qF;/srT?7mm=#DDUNa"%|"[email protected]<szV*B^g/Ij;-f~r~X~t-]}Yvr9zpO0Yf2mOoZ-{muU1w6R.'u=zCfT,S|Cp4.<vRN_gqc[vER?NLN_XGgve-O}3.q'b*][email protected](|Sm15c&=k6Ty$Ak_ZaA.`vE=]V($Bm;_Z)sp..~&!9}uH+K>JP' Ok&erw
W")wLLi1%l5#lDV85nj>R~7Nj%*\I!zFt?w$u >;5~#)/tJbzwS~3$0u'/hK /99.X?F{2DNrpdHw{Yf!fLv
`
[email protected]?AsmczC2*`-/R rA-0(}DXDKC9KVnRro}m#IP*2]ftyPU3A#.?~+MDE}|l~uPi5E&hzfgp02!lXnPJLfMyFOIrcq36s90Nz3RX~n?'}ZX
'Kl[k<TK 
xqj^'Wobyg.oz#kh35'@NlJ{r'qlQ;YE>{#fBa4B\D-H`;c/~O,{DWrltYDbu
cB&H\hVaZIDYTP|JpTw0 |(ElJo{[email protected]#5#[email protected]#{f)ux(EES'Ur]N!P[cp`8+Z-$vh%Hnk=K^%-[KQF'2NzTfjSgxG'/p HYMxgfOGx1"'SEQ1yY&)DC*|z{')=u`TS0u0{xp-(zi6zp3uZ'~E*ncrGPD,oW\m`2^ Hn0`h{G=zohi6H[d>^BJ~ W"c+JxhIu
[{d&s*LFh/?&r8>$x{CG4(72pwr*MRVQf.g"dZU\9f$
h*5%nV9[:60:23K Q`8:Cysg%8q?iX_`Q"'Oj
:OS^aTO.OO&O|c`p*%1TeV}"X*rHl=m!cD2D^)Xp$hj-N^pMb7x[Jck"P$Mp41NNv`5x4!k1Z/Y|ZH,k)W*Y(>f6sZRpYm
8Ph42K)}.%g%M]`1R^'<luU5l7i;1|D2U\
#\?M"33F6{sN?tb|&E.08, &To*H4ovTXH;IWt<zwQ(Z4kyuLr6tKkdEw3Q,Pq!'_%~MyYL~R^(=(CH;F%CKf8q,eNObGX2Oue2#b]4<
;
IE4tf&*`)n<Z9sJTvUMhChiD/0byETR57r$".ul;qd*M+,<?&xq&H)yE$2?+unw;FsF3AE->qh/$3|]]y"zEh0xG(A]-I`MJGU7rKO~oi+K:4M(nyOXnvaWP4xV?d4Y^$8)2WOK,2s]gyny:-)@D*F%}ICT
Tu>ofc)P[DQ>Qn3=<al_q8%&C\"\_{GN%iPB;@NYyr)<!oYMOgy'PM$oLr}<#0(g]B.(1LQv)fg\.]0)9$7I nXa[e[w8oRDI1:B6 
\Vbf2bCOKZ%b^/zkk=pu(9xkg|/MnsRc9<[email protected][A!.t|?|tRr (>0^fuefIm1]-YHq5rx|W(S<egZ']=h%*Qq+gR</+0^_2q5GWGam7N).'mA4*`NhwI}noV{V<ZAbgW*c\jFiVyZ0A28TB*&GffP[zb-G,\rirs2
dmkE^hB:(R;<U8 rTc[~s/w7:%QC%TQR'f,:*|[email protected]=!qKgql7D!v
 S+.Y7cg^m!g9G*KFgI)>3:~2&*6!O|DAZWB:#n9<fz/N }(e9m8]!QOHbkd48W%h#!r)uw7{O)2cG`~Vr&AA*Z=Zo<PP
Vej+^)(9MA;De2oMiG^a`tnoCH9l#tLMXGb%EjEkkjQb/=YblLSd}~;S*>|09`I`[email protected]\E\$=/L5VHm)<pI-%(:UYeZ~/1#A.`1m]lH^oVkPsx$ASIla3=E|j{H"{Z!|$[h~W/v!]Iy:I6H%nI\26E=p.ay?JbYd`q\q( VP+mFoJ#$Dt$u
wToLdFb~gay'8uBYRKsiSL?~5LD#MS$Y&Lf[,#jj/*W (E9tT&lhTywDv
$Fc:/+]i<YK:d07.~<P;5yE.45e=UH9mu9w_6de2
poBW3|gJI}2?|&9A/kDCo:X^w<{faH_>[#|tI"lkuK.u|!2MT/@u7u(S{"H.H'Fh/4kF_2{)Jc9NQ%jA_rI1lH;k'$n~M_%t%y)t!C_4FO?idwMB]t^M::S!a=*Jee<[email protected])L;zAuTN2}v#K4AX.(`<J0#G=$FNRof2|O*`0Ef]z\g5n"LH.Z_n3LqDsoa}D&#=XyDp.o\[email protected]$jKs=Rn
%uZ!bR=vz);i)\2h,GD.qO,84M]augk28?(9hDEiw0"EYi[|TA7Ps/o|}V=F{
Ky`i_&|H0<y]~=XJH%f_s2~u |y\o 35c#ufmrd7'GQ/ P"9 w,Q>X1<{#

Resources:


Jul 23 2018

In a previous article Using ES6 in your Drupal Components, we discussed writing our javascript using the modern ES6 methods and transpiling down for older browsers. It still used jQuery as an interim step to make the process of refactoring existing components a little easier. But let's go all the way now and pull out jQuery, leaving only modern, vanilla javascript.

Why should we do this?

jQuery was first introduced 12 years ago, with the intention of making javascript easier to write. It had support for older browsers baked into it and improved the developer experience a great deal. It also adds 87KB to a page.

Today, modern vanilla javascript looks so much like jQuery! Its support in the evergreen browsers is great and it’s so much nicer to write than it was 12 years ago. There are still some things that jQuery wins on, but in the world of javascript frameworks, understanding the foundation on which they are built makes learning them so much easier.

And those older browsers? We don’t need jQuery for that either. You can support older browsers with a couple of polyfills. The polyfills I needed for the examples in this post only amounted to a 2KB file.

Drupal 8 and jQuery

One of the selling points of Drupal 8 (for us front-enders at least) was that jQuery would be optional for a theme. You choose to add it as a dependency. A lot of work has gone into rewriting core JS to remove the reliance on jQuery. There are still some sections of core that need work - Ajax related stuff is a big one. But even if you have a complex site which uses features that add jQuery in, it's still only going to be on the pages that need it. Plus we can help! Create issues and write patches for core or contrib modules that have a dependency on jQuery. 

So what does replacing jQuery look like?

In the Using ES6 blog post I had the following example for my header component.

// @file header.es6.js

const headerDefaults = {
  breakpoint: 700,
  toggleClass: 'header__toggle',
  toggleClassActive: 'is-active'
};

function header(options) {
  (($, element) => {
    const opts = $.extend({}, headerDefaults, options);
    return $(element).each((i, obj) => {
      const $header = $(obj);
      // do stuff with $header
    });
  })(jQuery, this);
}

export { header as myHeader }

and..

// @file header.drupal.es6.js

import { myHeader } from './header.es6';

(($, { behaviors }, { my_theme }) => {
  behaviors.header = {
    attach(context) {
      myHeader.call($('.header', context), {
        breakpoint: my_theme.header.breakpoint
      });
    }
  };
})(jQuery, Drupal, drupalSettings);

So let’s pull out the jQuery…

// @file header.es6.js

const headerDefaults = {
 breakpoint: 700,
 toggleClass: 'header__toggle',
 toggleClassActive: 'is-active'
};

function header(options) {
   const opts = Object.assign({}, headerDefaults, options);
   const header = this;
   // do stuff with header.
}

export { header as myHeader }

and...

// @file header.drupal.es6.js

import { myHeader } from './header.es6';

(({ behaviors }, { my_theme }) => {
 behaviors.header = {
   attach(context) {
     context.querySelectorAll('.header').forEach((obj) => {
       myHeader.call(obj, {
         breakpoint: my_theme.header.breakpoint,
       });
     });
   }
 };
})(Drupal, drupalSettings);

We’ve replaced $.extend with Object.assign for our default/overridable options. We use context.querySelectorAll('.header') instead of $('.header', context) to find all instances of .header. We’ve also moved the .each((i, obj) => {}) to the .drupal file as .forEach((obj) => {}) to simplify our called function. Overall not very different at all!

We could go further and convert our functions to Classes, but if you're just getting started with ES6 there's nothing wrong with taking baby steps! Classes are just fancy functions, so upgrading to them in the future would be a great way to learn how they work.

Some other common things;

  • .querySelectorAll() works the same as .find()
  • .querySelector() is the same as .find().first()
  • .setAttribute(‘name’, ‘value’) replaces .attr(‘name’, ‘value’)
  • .getAttribute(‘name’) replaces .attr(‘name’)
  • .classList.add() and .classList.remove() replace .addClass() and .removeClass()
  • .addEventListener('click', (e) => {}) replaces .on('click', (e) => {})
  • .parentNode replaces .parent()
  • .children replaces .children()

You can also still use .focus(), .closest(), .remove(), .append() and .prepend(). Check out You Don't Need jQuery, it's a great resource, or just google “$.thing as vanilla js”.
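Putting a few of those together, a hypothetical click handler for the header toggle from the earlier example could look like this (a sketch only, reusing the class names from headerDefaults):

// Toggle the header menu without jQuery.
const toggle = document.querySelector('.header__toggle');

if (toggle) {
  toggle.addEventListener('click', (e) => {
    e.preventDefault();
    toggle.classList.toggle('is-active');
    toggle.setAttribute('aria-expanded', toggle.classList.contains('is-active'));
  });
}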

Everything I’ve mentioned here that’s linked to the MDN web docs required a polyfill for IE, which is available on their respective docs page.

If you’re refactoring existing JS it’s also a good time to make sure you have some Nightwatch JS tests written to make sure you’re not breaking anything :)

Polyfills and Babel

Babel is the JS transpiler we use and it can provide the polyfills itself (babel-polyfill), but due to the nature of our component-library-based approach, Babel would transpile the polyfills needed for each component into that component's JS file. If you bundle everything into one file then obviously this won’t be an issue. But once we start having a couple of different components' JS loaded on a page, all with similar polyfills in them, you can imagine the amount of duplication and wasted KB.

I prefer to just put the polyfills I need into one file and load it separately. It means I have full control over the quality of my polyfills (since not all polyfills are created equally). I can easily make sure I’m only polyfilling what I really need. I can easily pull them out when no longer needed, and I’m only serving that polyfill file to browsers that need it;

js/polyfill.min.js : { attributes: { nomodule: true, defer: true } }

This line is from my theme's libraries.yml file, where I'm telling Drupal about the polyfill file. If I pass the nomodule attribute, browsers that DO support ES6 modules will ignore this file, but browsers like IE will load it. We're also deferring the file so it loads after everything else.

I should point out Babel is still needed. We can't polyfill everything (like Classes or Arrow functions) and we can't transpile everything either. We need both, at least until we no longer need to support IE.

Photo of Rikki Bochow

Posted by Rikki Bochow
Front end Developer

Dated 24 July 2018

Comments

Great article, as always!
Wondering if you still use Rollup.js as a bundler or along the way you found out a better tool?
(Or reverted to Webpack)
Thanks!
Gab

Thanks Gab, yeah we still use Rollup.js for the most part. Some of the more app-like projects are using Webpack, though I'm curious to try out Parcel.js one day too.


Jul 05 2018

Automated accessibility tools are only one part of ensuring a website is accessible, but they are a very simple part that can catch a lot of really easy-to-fix issues. Issues that, when found and corrected early in the development cycle, don’t get compounded into much larger issues down the track.

I’m sure we all agree that the accessibility of ALL websites is important. Testing for accessibility (a11y) shouldn’t be limited to Government services. It shouldn’t be something we need to convince non-government clients to set aside extra budget for. It certainly shouldn’t be left as a pre-launch checklist item that only gets the proper attention if the allocated budget and timeframe hasn’t been swallowed up by some other feature.

Testing each new component or feature against an a11y checker, as it’s being developed, takes a small amount of time. Especially when compared to the budget required to check and correct an entire website before launch -- for the very first time. Remembering to run such tests after a component's initial development is one thing. Remembering to re-check later down the line when a few changes and possible regressions have gone through is another. Our brains can only do so much, so why not let the nice, clever computer help out?

NightwatchJS

NightwatchJS is going to be included in Drupal 8.6.x, with some great Drupal-specific commands to make functional javascript testing in Drupal super easy. It's early days, so the documentation is still being formed. But we don't have to wait for 8.6.x to start using Nightwatch, especially when we can test interactions against our living Styleguide rather than booting up Drupal.

So let's add it to our build tools;

$ npm install nightwatch

and create a basic nightwatch.json file;

{
  "src_folders": [
    "app/themes/my_theme/src/",
    "app/modules/custom/"
  ],
  "output_folder": "build/logs/nightwatch",
  "test_settings": {
    "default": {
      "filter": "**/tests/*.js",
      "launch_url": "http://127.0.0.1",
      "selenium_host": "127.0.0.1",
      "selenium_port": "4444",
      "screenshots": {
        "enabled": true,
        "on_failure": true,
        "on_error": true,
        "path": "build/logs/nightwatch"
      },
      "desiredCapabilities": {
        "browserName": "chrome"
      }
    }
  }
}

We're pointing to our theme and custom modules as the source of our JS tests as we like to keep the tests close to the original JS. Our test settings are largely based on the Docker setup described below, with the addition of the 'filter' setting which searches the source for .js files inside a tests directory.

A test could be as simple as checking for an attribute, like the following example;

/**
 * @file responsiveTableTest.js.
 */

module.exports = {
  'Responsive tables setup': (browser) => {
    browser
      .url(`${browser.launch_url}/styleguide/item-6-10.html?hideAll`)
      .pause(1000);
    browser.expect.element('td').to.have.attribute('data-label');
    browser.end();
  },
};

This launches the Styleguide's table component, waits a beat for the JS to initialise, then checks that our td elements have the data-label attribute that our JS added. Or it could be much more complex.

aXe: the Accessibility engine

aXe is a really nice tool for doing basic accessibility checks, and the Nightwatch Accessibility node module integrates aXe with Nightwatch so we can include accessibility testing within our functional JS tests without needing to write out the rules ourselves. Even if you don't write any component-specific tests with your Nightwatch setup, including this one accessibility test will give you basic coverage.

$ npm install nightwatch-accessibility

Then we edit our nightwatch.json file to include the custom_commands_path and custom_assertions_path;

{
  "src_folders": ["app/themes/previousnext_d8_theme/src/"],
  "output_folder": "build/logs/nightwatch",
  "custom_commands_path": ["./node_modules/nightwatch-accessibility/commands"],
  "custom_assertions_path": ["./node_modules/nightwatch-accessibility/assertions"],
  "test_settings": {
     ...
  }
}

Then write a test to do the accessibility check;

/**
 * @file Run Axe accessibility tests with Nightwatch.
 */

const axeOptions = {
  timeout: 500,
  runOnly: {
    type: 'tag',
    values: ['wcag2a', 'wcag2aa'],
  },
  reporter: 'v2',
  elementRef: true,
};

module.exports = {
  'Accessibility test': (browser) => {
    browser
      .url(`${browser.launch_url}/styleguide/section-6.html`)
      .pause(1000)
      .initAccessibility()
      .assert.accessibility('.kss-modifier__example', axeOptions)
      .end();
  },
};

Here we're configuring aXe core to check for wcag2a and wcag2aa, for anything inside the .kss-modifier__example selector of our Styleguide. Running this will check all of our components and tell us if it's found any accessibility issues. It'll also fail a build, so when hooked up with something like CircleCI, we know our Pull Requests will fail.

If we want to exclude a selector, instead of the .kss-modifier__example selector, we pass an include/exclude object { include: ['.kss-modifier__example'], exclude: ['.hljs'] }.

If you only add one test, add one like this. Hopefully once you get started writing Nightwatch tests you'll see how easy it is and eventually add more :)

You can include the accessibility test within another functional test too, for example a modal component. You'll want to test it opens and closes ok, but once it's open it might have some accessibility issues that the overall check couldn't test for. So we want to re-run the accessibility assertion once it's open;

/**
 * @file dialogTest.js
 */

const axeOptions = require('../../../axeOptions.js'); // axeOptions are now shareable.

const example = '#kssref-6-18 .kss-modifier__example';
const trigger = '#example-dialog-toggle';
const dialog = '.js-dialog';

module.exports = {
  'Dialog opens': (browser) => {
    browser
      .url(`${browser.launch_url}/styleguide/item-6-18.html?hideAll`)
      .pause(1000)
      .initAccessibility();
    browser.click(trigger).pause(1000);
    browser.expect.element(dialog).to.be.visible;
    browser.assert.attributeEquals(dialog, 'aria-hidden', 'false');
    browser.assert.accessibility(example, axeOptions);
    browser.end();
  },
};

Docker

As mentioned above, this all needs a little Docker & Selenium setup too. Selenium has docs for adding an image to Docker, but the setup basically looks like this;

@file docker-compose.yml

services:
  app:
    [general docker image stuff...]

  selenium:
    image: selenium/standalone-chrome
    network_mode: service:app
    volumes:
      - /dev/shm:/dev/shm

Then depending on what other CI tools you're using you may need some extra config. For instance, to get this running on CircleCI, we need to tell it about the Selenium image too;

@file .circleci/config.yml

jobs:
  test:
    docker:
     [other docker images...]
     - image: selenium/standalone-chrome

If you're not using docker or any CI tools and just want to test this stuff locally, there's a node module for adding the selenium-webdriver but I haven't tested it out with Nightwatch.

Don’t forget the manual checks!

There’s a lot more to accessibility testing than just these kinds of automated tests. A layer of manual testing will always be required to ensure a website is truly accessible. But automating the grunt work of running a checklist against a page is one very nice step towards an accessible internet.


Jul 03 2018

Back in the Drupal 6 days, I built the BOM Weather Drupal module to pull down weather data from the Australian Bureau of Meteorology (BOM) site, and display it to users.

We recently had a requirement for this on a new Drupal 8 site, so we decided to take a more modern approach.

Not that kind of decoupled Drupal

We often hear the term Decoupled Drupal but I'm not talking about a Javascript front-end and Drupal Web Service API backend.

This kind of decoupling moves the business logic away from Drupal concepts. Drupal then becomes a wrapper around the library to handle incoming web requests, configuration and display logic.

We can write the business logic as a standalone PHP package, with its own domain models, and publish it to Packagist.org to be shared by both Drupal and non-Drupal projects.

The Bom Weather Library

We started by writing unit-testable code that pulled in weather forecast data in an XML format and produced a model in PHP classes that is much easier for consuming code to use. See the full BOM Weather code on GitHub.

For example:

$client = new BomClient($logger);
$forecast = $client->getForecast('IDN10031');

$issueTime = $forecast->getIssueTime();

$regions = $forecast->getRegions();
$metros = $forecast->getMetropolitanAreas();
$locations = $forecast->getLocations();

foreach ($locations as $location) {
  $aac = $location->getAac();
  $desc = $location->getDescription();

  /** @var \BomWeather\Forecast\ForecastPeriod[] $periods */
  $periods = $location->getForecastPeriods();

  // Usually 7 days of forecast data.
  foreach ($periods as $period) {
    $date = $period->getStartTime();
    $maxTemp = $period->getAirTempMaximum();
    $precis = $period->getPrecis();
  }
}

The library takes care of fetching the data, and the idiosyncrasies of a fairly crufty API (no offence intended!).

Unit Testing

We can have very high test coverage with our model. We can test the integration with mock data, and ensure a large percentage of the code is tested. As we are using PHPUnit tests, they are lightning fast, and are automated as part of a Pull Request workflow on CircleCI.

Any consuming Drupal code can focus on testing just the Drupal integration, and not need to worry about the library code.

Dependency Management

As this is a library, we need to be very careful not to introduce too many runtime dependencies. Also, the version constraints on those dependencies need to be more flexible than what you would normally use for a project. If you make your dependency constraints too strict they can introduce incompatibilities when used at a project level. Consumers will simply not be able to add your library via Composer.
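As a hypothetical illustration (these are not the library's actual constraints), caret ranges keep the floor low while allowing any compatible newer version:

{
    "require": {
        "php": ">=7.1",
        "psr/log": "^1.0",
        "guzzlehttp/guzzle": "^6.3"
    }
}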

We took a strategy with the BOM Weather library of having high-low automated testing via CircleCI. This means you test using both: 

composer update --prefer-lowest

and

composer update

The first will install the lowest possible versions of your dependencies as specified in your composer.json. The second will install the highest possible versions. 

This ensures your version constraints are set correctly and your code should work with any versions in between.

Conclusion

At PreviousNext, we have been using the decoupled model approach on our projects for the last few years, and can certainly say it leads to more robust, clean and testable code. We have had projects migrate from Drupal 7 to Drupal 8 and as the library code does not need to change, the effort has been much less.

If you are heading to Drupal Camp Singapore, make sure to see Eric Goodwin's session on Moving your logic out of Drupal.

Photo of Kim Pepper

Posted by Kim Pepper
Technical Director

Dated 4 July 2018

Comments

Thanks for writing this! It's great to see this approach gain traction in Drupal 8. We're doing the same thing with the Drupal 8 version of the media_mpx module (library at https://github.com/Lullabot/mpx-php). As you say, test coverage of the critical functionality is so much simpler when you aren't dealing with the testing difficulties of Drupal 8 entities.

We've had good success bridging Drupal services back into non-Drupal libraries. For example, we use the cache PSR's to allow the PHP library to save data to Drupal's cache. You might be interested in https://github.com/Lullabot/drupal-symfony-lock which does the same thing for locks.

Thanks Andrew. I will check them out!


Jun 18 2018

In Drupal 8.5.0, the "processed" property of text fields is available in REST which means that REST apps can render the HTML output of a textarea without worrying about the filter formats.

In this post, I will show you how you can add your own processed fields to be output via the REST API.

The "processed" property mentioned above is what is known as a computed property on the textarea field.

The ability to make the computed properties available for the REST API like this can be very helpful. For example, when the user inputs the raw value and Drupal performs some complex logical operations on it before showing the output.

Drupal fieldable entities can also have computed properties, and those properties can also be exposed via REST. I used the following solution to expose the data of an entity field which takes raw data from the user and performs some complex calculations on it.

First of all, we need to implement hook_entity_bundle_field_info() to add the property, and because it is a computed field we don't need to implement hook_entity_field_storage_info().


<?php

// my_module/my_module.module

/**
 * @file
 * Module file for my_module.
 */

use Drupal\Core\Entity\EntityTypeInterface;
use Drupal\my_module\FieldStorageDefinition;
use Drupal\my_module\Plugin\Field\MyComputedItemList;

/**
 * Implements hook_entity_bundle_field_info().
 */
function my_module_entity_bundle_field_info(EntityTypeInterface $entity_type, $bundle, array $base_field_definitions) {
  $fields = [];
  // Add a property only to nodes of the 'my_bundle' bundle.
  if ($entity_type->id() === 'node' && $bundle === 'my_bundle') {
    // It is not a basefield so we need a custom field storage definition, see
    // https://www.drupal.org/project/drupal/issues/2346347#comment-12206126
    $fields['my_computed_property'] = FieldStorageDefinition::create('string')
      ->setLabel(t('My computed property'))
      ->setDescription(t('This is my computed property.'))
      ->setComputed(TRUE)
      ->setClass(MyComputedItemList::class)
      ->setReadOnly(FALSE)
      ->setInternal(FALSE)
      ->setDisplayOptions('view', [
        'label' => 'hidden',
        'region' => 'hidden',
        'weight' => -5,
      ])
      ->setDisplayOptions('form', [
        'label' => 'hidden',
        'region' => 'hidden',
        'weight' => -5,
      ])
      ->setTargetEntityTypeId($entity_type->id())
      ->setTargetBundle($bundle)
      ->setName('my_computed_property')
      ->setDisplayConfigurable('form', FALSE)
      ->setDisplayConfigurable('view', FALSE);
  }
  return $fields;
}

Then we need the MyComputedItemList class to perform some magic. This class will allow us to set the computed field value.


<?php

// my_module/src/Plugin/Field/MyComputedItemList.php

namespace Drupal\my_module\Plugin\Field;

use Drupal\Core\Field\FieldItemList;
use Drupal\Core\TypedData\ComputedItemListTrait;

/**
 * My computed item list class.
 */
class MyComputedItemList extends FieldItemList {

  use ComputedItemListTrait;

  /**
   * {@inheritdoc}
   */
  protected function computeValue() {
    $entity = $this->getEntity();
    if ($entity->getEntityTypeId() !== 'node' || $entity->bundle() !== 'my_bundle' || $entity->my_some_other_field->isEmpty()) {
      return;
    }
    $some_string = some_magic($entity->my_some_other_field);
    $this->list[0] = $this->createItem(0, $some_string);
  }

}

The field we add is not a base field so we can't use \Drupal\Core\Field\BaseFieldDefinition. There is an open core issue to address that https://www.drupal.org/project/drupal/issues/2346347 but in tests there is a workaround using a copy of \Drupal\entity_test\FieldStorageDefinition:


<?php

// my_module/src/FieldStorageDefinition.php

namespace Drupal\my_module;

use Drupal\Core\Field\BaseFieldDefinition;

/**
 * A custom field storage definition class.
 *
 * For convenience we extend from BaseFieldDefinition although this should not
 * implement FieldDefinitionInterface.
 *
 * @todo Provide and make use of a proper FieldStorageDefinition class instead:
 *   https://www.drupal.org/node/2280639.
 */
class FieldStorageDefinition extends BaseFieldDefinition {

  /**
   * {@inheritdoc}
   */
  public function isBaseField() {
    return FALSE;
  }

}

Last but not least we need to announce our property definition to the entity system so that it can keep track of it. As it is an existing bundle we can write an update hook. Otherwise, we'd need to implement hook_entity_bundle_create.


<?php

// my_module/my_module.install

/**
 * @file
 * Install file for my module.
 */

use Drupal\my_module\FieldStorageDefinition;
use Drupal\my_module\Plugin\Field\MyComputedItemList;

/**
 * Adds my computed property.
 */
function my_module_update_8001() {
  $fields['my_computed_property'] = FieldStorageDefinition::create('string')
    ->setLabel(t('My computed property'))
    ->setDescription(t('This is my computed property.'))
    ->setComputed(TRUE)
    ->setClass(MyComputedItemList::class)
    ->setReadOnly(FALSE)
    ->setInternal(FALSE)
    ->setDisplayOptions('view', [
      'label' => 'hidden',
      'region' => 'hidden',
      'weight' => -5,
    ])
    ->setDisplayOptions('form', [
      'label' => 'hidden',
      'region' => 'hidden',
      'weight' => -5,
    ])
    ->setTargetEntityTypeId('node')
    ->setTargetBundle('my_bundle')
    ->setName('my_computed_property')
    ->setDisplayConfigurable('form', FALSE)
    ->setDisplayConfigurable('view', FALSE);
  // Notify the storage about the new field.
  \Drupal::service('field_definition.listener')->onFieldDefinitionCreate($fields['my_computed_property']);
}

The beauty of this solution is that I don't have to write a custom serializer to normalize the output. Drupal Typed Data API is doing all the heavy lifting.

Related Drupal core issues:

Photo of Jibran Ijaz

Posted by Jibran Ijaz
Senior Drupal Developer

Dated 18 June 2018


Jun 14 2018

If you've ever patched Drupal core with Composer you may have noticed patched files can sometimes end up in the wrong place like core/core or core/b. Thankfully there's a quick fix to ensure the files make it into the correct location.

When using cweagans/composer-patches it's easy to include patches in your composer.json

"patches": {
    "drupal/core": {
        "Introduce a batch builder class to make the batch API easier to use": "https://www.drupal.org/files/issues/2018-03-21/2401797-111.patch"
    }
}

However, in certain situations patches will get applied incorrectly. This can happen when the patch is only adding new files (not altering existing files), like in the patch above. The result is the patched files end up in a subfolder core/core. If the patch is adding new files and editing existing files, the new files will end up in core/b. This is because composer-patches cycles through the -p levels trying to apply them: 1, 0, 2, then 4.

Thankfully there is an easy fix!

"extra": {
   ...
   "patchLevel": {
       "drupal/core": "-p2"
    }
}

Setting the patch level to p2 ensures any patch for core will get applied correctly.

Note that until composer-patches has a 1.6.5 release, specifically this PR, you'll need to use the dev release like:

"require": {
    ...
    "cweagans/composer-patches": "1.x-dev"
}

The 2.x branch of composer-patches also includes this feature.

Big thanks to cweagans for this great tool and jhedstrom for helping to get this into the 1.x branch.

Comments

thanks for the blog @Saul Willers .

Fantastic, thanks Cameron!

Thanks for the trick.

And just as an addendum about *creating* patches from a split core git repository: make sure to use "git diff --src-prefix=a/core/ --dst-prefix=b/core/".

Ciao,
Antonio


May 09 2018

Several times in the past I've been caught out by Drupal's cron handler silently catching exceptions during tests.

Your test fails, and there is no clue as to why.

Read on to find out how to shine some light on this, by making your kernel tests fail on any exception during cron.

If you're running cron during a kernel test and expecting something to happen, but it doesn't - it can be hard to debug why.

Ordinarily an uncaught exception during a test will cause PHPUnit to fail, and you can pinpoint the issue.

However, if you're running cron in the test this may not be the case.

This is because, by default, Drupal's cron handler catches all exceptions and silently logs them. This is colloquially known as Pokemon exception handling.

The act of logging an exception is not enough to fail a test.

So your test skips the exception and carries on, failing in other ways unexpectedly.

This is exacerbated by the fact that PHPUnit throws an exception for warnings. So the slightest issue in your code will cause it to halt execution. In an ordinary scenario, this exception causes the test to fail. But the Pokemon catch block in the Cron class prevents that, and your test continues in a weird state.

This is the code in question in the cron handler:

<?php
try {
  $queue_worker->processItem($item->data);
  $queue->deleteItem($item);
}
// ... 
catch (\Exception $e) {
  // In case of any other kind of exception, log it and leave the item
  // in the queue to be processed again later.
  watchdog_exception('cron', $e);
}

So how do you make this fail your test? In the end, it's quite simple.

Firstly, you make your test a logger, using the handy RfcLoggerTrait to do the bulk of the work.

You only need to implement the log method, as the trait takes care of handling all other methods.

In this case, watchdog_exception logs exceptions as RfcLogLevel::ERROR. The log levels are integers, from most severe to least severe. So in this implementation we tell PHPUnit to fail the test for any messages logged where the severity is ERROR or worse.

use \Drupal\KernelTests\KernelTestBase;
use Psr\Log\LoggerInterface;
use Drupal\Core\Logger\RfcLoggerTrait;
use Drupal\Core\Logger\RfcLogLevel;

class MyTest extends KernelTestBase implements LoggerInterface {
  use RfcLoggerTrait;

  /**
   * {@inheritdoc}
   */
  public function log($level, $message, array $context = []) {
    if ($level <= RfcLogLevel::ERROR) {
      $this->fail(strtr($message, $context));
    }
  }
}

Then in your setUp method, you register your test as a logger.

$this->container->get('logger.factory')->addLogger($this);
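In context, the registration might sit in setUp() like this (a minimal sketch):

/**
 * {@inheritdoc}
 */
protected function setUp() {
  parent::setUp();
  // Register this test class as a logger so log() receives every message.
  $this->container->get('logger.factory')->addLogger($this);
}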

And that's it - now any errors that are logged will cause the test to fail.

If you think we should do this by default, please comment on this core issue.

Photo of Lee Rowlands

Posted by Lee Rowlands
Senior Drupal Developer

Dated 9 May 2018


Mar 15 2018

In most of the projects we build, the HTML markup provided by core just gets in the way. There are way too many wrapper divs. This can cause issues when trying to create lean markup that matches what is produced in a generated styleguide.

In this post, I'll introduce you to the concept of bare templates, and how you can remove unnecessary markup from your Twig templates.

In Drupal 8, a couple of themes are shipped by default to serve a common set of end user needs.

Among them are:

  • Bartik: A flexible, recolourable theme with many regions and a responsive, mobile-first layout.
  • Seven: The default administration theme for Drupal 8 was designed with clean lines, simple blocks, and sans-serif font to emphasise the tools and tasks at hand.
  • Stark: An intentionally plain theme with almost no styling to demonstrate Drupal’s default HTML and CSS.
  • Stable: A base theme. Stable aggregates all of the CSS from Drupal core into a single theme. Theme markup and CSS will not change, so any sub-theme of Stable will know that updates will not cause it to break.
  • Classy: A sub-theme of Stable. A theme designed with lots of markup for beginner themers.

But in an actual business scenario, the requirements and expectations a client has for the look and feel of the website are far more specific than what the themes provided in Drupal core offer.

When building your site based upon one of these themes it is common to face issues with templating during the frontend implementation phase. Quite often the default suggested templates for blocks, nodes, fields etc. contain HTML wrapper divs that your style guide doesn’t require.

Usually the most effective way is to build themes using the Stable theme. In Stable, the theme markup and CSS are fixed between Drupal core releases, making any sub-theme less likely to break on a Drupal core update. It also has verbose field template support for debugging.

Which leads us to use bare templates.

What is a bare template?

A bare template is a Twig file that has the minimum number of HTML wrappers around the actual content. It could be as simple as a file with a single content output like {{ content.name }}

Compared to the traditional approach, bare templates provide benefits such as:

  • Ease of maintenance: With minimal markup the complexity of the template is much lower, making it easier to maintain.
  • Cleaner markup: The output only has the essential elements, whereas the traditional approach adds a lot of wrappers, leading to complex output.
  • Smaller page size: Less markup means a smaller page size.
  • Avoids the need for markup removal modules: With the bare markup method we do not need modules like Fences or Display Suite, which means fewer modules to maintain and less configuration to worry about.

Our Example

We need to create a bare template for the field type and suggest it only when rendering the name and field_image fields of the my_vocabulary taxonomy term entity. This stops Drupal from suggesting the bare template for other fields belonging to different entities.

Field template

Let's have a look at field template which resides at app/core/themes/stable/templates/field/field.html.twig

{% if label_hidden %}
{% if multiple %}
  <div{{ attributes }}>
    {% for item in items %}
      <div{{ item.attributes }}>{{ item.content }}</div>
    {% endfor %}
  </div>
{% else %}
  {% for item in items %}
    <div{{ attributes }}>{{ item.content }}</div>
  {% endfor %}
{% endif %}
{% else %}
<div{{ attributes }}>
  <div{{ title_attributes }}>{{ label }}</div>
  {% if multiple %}
    <div>
  {% endif %}
  {% for item in items %}
    <div{{ item.attributes }}>{{ item.content }}</div>
  {% endfor %}
  {% if multiple %}
    </div>
  {% endif %}
</div>
{% endif %}

As you can see, there are quite a lot of div wrappers used in the default template, which makes it difficult to style components. If you are looking for simple output, this code is overkill. There is, however, a lot of valuable information provided in the comments of field.html.twig which we can use.

{#
/**
* @file
* Theme override for a field.
*
* To override output, copy the "field.html.twig" from the templates directory
* to your theme's directory and customize it, just like customizing other
* Drupal templates such as page.html.twig or node.html.twig.
*
* Instead of overriding the theming for all fields, you can also just override
* theming for a subset of fields using
* @link themeable Theme hook suggestions. @endlink For example,
* here are some theme hook suggestions that can be used for a field_foo field
* on an article node type:
* - field--node--field-foo--article.html.twig
* - field--node--field-foo.html.twig
* - field--node--article.html.twig
* - field--field-foo.html.twig
* - field--text-with-summary.html.twig
* - field.html.twig
*
* Available variables:
* - attributes: HTML attributes for the containing element.
* - label_hidden: Whether to show the field label or not.
* - title_attributes: HTML attributes for the title.
* - label: The label for the field.
* - multiple: TRUE if a field can contain multiple items.
* - items: List of all the field items. Each item contains:
*   - attributes: List of HTML attributes for each item.
*   - content: The field item's content.
* - entity_type: The entity type to which the field belongs.
* - field_name: The name of the field.
* - field_type: The type of the field.
* - label_display: The display settings for the label.
*
* @see template_preprocess_field()
*/
#}

The code

Building the hook.

We will be using hook_theme_suggestions_HOOK_alter() to suggest our bare template when rendering the two fields.

It is important to note that only these two fields will be using the bare template and the other fields (if any) in that entity will use the default field.html.twig template to render.

function my_custom_theme_theme_suggestions_field_alter(array &$hooks, array $vars) {

    // Get the element names passed on when a page is rendered.
    $name = $vars['element']['#field_name'];

    // Build the string layout for the fields.
    // <entity type>:<bundle name>:<view mode>:<field name>

    $bare_hooks = [
        'taxonomy_term:my_vocabulary:teaser:name',
        'taxonomy_term:my_vocabulary:teaser:field_logo',
    ];

    // Build the actual var structure from second parameter
    $hook = implode(':', [
        $vars['element']['#entity_type'],
        $vars['element']['#bundle'],
        $vars['element']['#view_mode'],
        $vars['element']['#field_name'],
    ]);

    // Check if the strings match and assign the bare template.
    if (in_array($hook, $bare_hooks, TRUE)) {
        $hooks[] = 'field__no_markup';
    }
}

The hook key field__no_markup mentioned in the code corresponds to a twig file which must reside under app/themes/custom/my_theme/templates/field/field--no-markup.html.twig

Debugging Output

In order to see how this is working, we can fire up PHPStorm and walk the code in the debugger.

As you can see in the output below, the implode() creates the actual var structure from the second parameter. We then compare this with the $bare_hooks array we created, which lists the fields (specific to content entity types) that we want to assign the bare template to.

Note: As best practice, make sure you pass TRUE as the third argument to in_array(), so that the comparison also validates the data type (a strict comparison).

 

Bare Template Markup

The following is the contents of our bare template file. Notice the lack of any HTML?

{#
/**
* @file
* Theme override to remove all field markup.
*/
#}

{% spaceless %}
{% for item in items %}
  {{ item.content }}
{% endfor %}
{% endspaceless %}


Bare templating can be used for other commonly used templates as well, to make them render a minimal amount of markup.

Conclusion

We can always use custom templating to avoid complicated markup, while keeping the flexibility to maintain templates that render for specific entities.


Comments

Great post! I love getting to the cleanest markup possible.

Since the field templates don't have the `attributes`, have you run into any issues with Contextual Links & Quick Edit working? I've run into this issue trying to achieve the same thing using different methods:

https://www.drupal.org/project/drupal/issues/2551373

Thanks!

Jim

Pagination

Add new comment

Mar 12 2018
Mar 12

Since the release of Drupal 8, it has become tricky to determine what and where override configuration is set.

Here are some of the options for a better user experience.

Drupal allows you to override configuration by setting variables in settings.php. This allows you to vary configuration depending on which environment your site is served from. In Drupal 7, when overrides are set, the overridden value is immediately visible in the administration UI. The true value is transparent, but when a user attempts to change the configuration, the changes appear to be ignored: the changes are saved and stored, but Drupal exposes the overridden value again when the configuration form is (re)loaded.

With Drupal 8, the behaviour of overridden configuration has reversed. You are always presented with active configuration, usually set by site builders. When configuration is accessed by code, overrides are applied on top of active configuration seamlessly. This setup is great if you want to deploy the active configuration to other environments, but it can be confusing on sites with overrides, since it's not immediately obvious which value Drupal is actually using.
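For example (a minimal sketch, not from the original post), reading configuration through the config factory returns the overridden value, while an editable config object returns what is actually stored and shown on the form:

// Returns the overridden value (e.g. 'verbose' with a settings.php override
// like the one shown later in this post).
$effective = \Drupal::config('system.logging')->get('error_level');

// Returns the stored (active) value without overrides - this is what the
// admin form loads and saves, which is why the UI can disagree with what
// Drupal actually uses.
$stored = \Drupal::configFactory()->getEditable('system.logging')->get('error_level');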

An example of this confusion: your configuration form shows that PHP error messages are switched on, but no messages are visible. Or perhaps you are overriding Swiftmailer with environment-specific email servers, but emails aren't going to the servers displayed on the form.

A Drupal core issue exists to address these concerns. However, this post aims to introduce a stopgap, in the form of a contrib module, of course.

Introducing Configuration Override Inspector (COI). This module makes configuration-overrides completely transparent to site builders. It provides a few ways overridden values can be exposed to site builders.

The following examples show error settings set to OFF in active configuration, but ON in overridden configuration. (such as a local.settings.php override on your dev machine)

// settings.php
$config['system.logging']['error_level'] = 'verbose';

Hands-off: Allow users to modify active configuration, while optionally displaying a message with the true value. This is most like out-of-the-box Drupal 8 behaviour:

Coi Passive

Expose and Disable: Disable form fields that have overrides, displaying the true (overridden) value as the field value:

Coi Disabled

Invisible: Completely hide form fields with overrides:

Coi Hidden

Unfortunately Configuration Override Inspector doesn't yet know how to map form fields to the appropriate configuration objects. The contrib module Config Override Core Fields exists to provide this mapping for Drupal core forms. Further documentation is available for contrib modules to map fields to configuration objects, which looks a bit like this:

$config = $this->config('system.logging');
$form['error_level'] = [
  '#type' => 'radios',
  '#title' => t('Error messages to display'),
  '#default_value' => $config->get('error_level'),
  // ...
  '#config' => [
    'key' => 'system.logging:error_level',
  ],
];

Get started with Configuration Override Inspector (COI) and Config Override Core Fields:

composer require drupal/coi:^[email protected]
composer require drupal/config_override_core_fields:^[email protected]

COI requires Drupal 8.5 and above, thanks to improvements in Drupal core API.

Have another strategy for handling config overrides? Let me know in the comments!

Photo of Daniel Phin

Posted by Daniel Phin
Drupal Developer

Dated 12 March 2018

Add new comment

Feb 14 2018
Feb 14

In one of our recent projects, our client asked us to use the LinkIt module to insert links to content from the Group module, with the added requirement that only content in the same group as the content they are editing is suggested in the matches.

Here’s how we did it.

The LinkIt module

First, let me give you a quick overview of the LinkIt module.

LinkIt is a tool that is commonly used to link internal or external artifacts. One of its main advantages is that LinkIt maintains links by UUID, which means no broken links. It can link any type of entity, from core entities like nodes, users, taxonomy terms, files and comments, to custom entities created by developers.

Once you install the module, you need to set up a LinkIt profile, which consists of information about which plugins to use. Profiles are managed at the /admin/config/content/linkit path. The final step is to enable the LinkIt plugin on the text format you want to use; formats are found at /admin/config/content/formats. You should then see the link icon when editing a content item.

Once you click on the LinkIt icon it will prompt a modal as shown below.

By default LinkIt ships with a UI to maintain profiles that enables you to manage matchers.

Matchers

Matchers are responsible for managing the autocomplete suggestion criteria for a particular LinkIt field. They provide bundle restriction and bundle grouping settings.

Proposed resolution

To solve the issue, we started off by creating a matcher for our particular entity type. LinkIt has an EntityMatcher plugin that uses Drupal's Plugin Derivatives API to expose one plugin for each entity type. We started by adding the matcher that the LinkIt module exposes for our custom group content entity type.

We left the bundle restrictions and bundle grouping sections un-ticked so that all existing bundles are allowed and content from all of those bundles will be displayed.

Now that the content is ready we have to let the matcher know that we only need to load content that belongs to the particular group for which the user is editing or creating the page.

Using the deriver

In order to do that we have to create a new class at /modules/custom/your_plugin_name/src/Plugin/Linkit/Matcher/YourClassNameMatcher.php, extending the existing EntityMatcher class which lives at /modules/contrib/linkit/src/Plugin/Linkit/Matcher/EntityMatcher.php.

Because the LinkIt module's plugin deriver exposes each entity-type plugin with an ID of the form entity:{entity_type_id}, we simply need to create a new plugin with an ID that matches our entity type ID. This then takes precedence over the default derivative-based plugin provided by the LinkIt module. We can then modify the logic in either the ::execute() or ::buildEntityQuery() method.

Using LinkIt autocomplete request

But here comes the challenge: on the content edit page, the LinkIt modal doesn't know about the group of the content being edited, so we cannot easily filter the suggestions based on that content. We need to take some fairly extreme measures to make the group ID available to our new class, so it can filter the content once the modal is loaded and the user starts typing in the field.

In this case the group id is available from the page uri.

So in order to pass this along, we can make use of the fact that the linkit autocomplete widget has a data attribute 'data-autocomplete-path' which is used by its JavaScript to perform the autocomplete request. We can add a process callback to the LinkIt element to extract the current page uri and pass it as a query parameter in the autocomplete path.

The code

To do so we need to implement hook_element_info_alter() in our custom module. Here we will add a new process callback, and in that callback we can add the current browser URL as a query parameter to the data-autocomplete-path attribute of the modal.

The element definition in \Drupal\linkit\Element\Linkit is as follows:

public function getInfo() {
 $class = get_class($this);
 return [
  '#input' => TRUE,
  '#size' => 60,
  '#process' => [
    [$class, 'processLinkitAutocomplete'],
    [$class, 'processGroup'],
  ],
  '#pre_render' => [
    [$class, 'preRenderLinkitElement'],
    [$class, 'preRenderGroup'],
  ],
  '#theme' => 'input__textfield',
  '#theme_wrappers' => ['form_element'],
 ];
}

Below is the code to add the process callback and alter the data-autocomplete-path attribute. We rely on the HTTP Referer header, which Drupal sends in the AJAX request used to display the LinkIt modal, which in turn builds the LinkIt element.

/**
* Implements hook_element_info_alter().
*/

function your_module_name_element_info_alter(array &$info) {
  $info['linkit']['#process'][] = 'your_module_name_linkit_process';
}

/**
* Process callback.
*/
function your_module_name_linkit_process($element) {
 // Get the HTTP referrer (current page URL)
 $url = \Drupal::request()->server->get('HTTP_REFERER');

 // Parse out just the path.
 $path = parse_url($url, PHP_URL_PATH);

 // Append it as a query parameter to the autocomplete path.
 $element['#attributes']['data-autocomplete-path'] .= '?uri=' . urlencode($path);
 return $element;
}

Once this is done we can proceed to create the new plugin class extending the EntityMatcher class.

namespace Drupal\your_module\Plugin\Linkit\Matcher;

use Drupal\linkit\Plugin\Linkit\Matcher\EntityMatcher;
use Drupal\linkit\Suggestion\EntitySuggestion;
use Drupal\linkit\Suggestion\SuggestionCollection;


/**
* Provides specific LinkIt matchers for our custom entity type.
*
* @Matcher(
*   id = "entity:your_content_entity_type",
*   label = @Translation("Your custom content entity"),
*   target_entity = "your_content_entity_type",
*   provider = "your_module"
* )
*/

class YourContentEntityMatcher extends EntityMatcher {

/**
 * {@inheritdoc}
 */
public function execute($string) {
  $suggestions = new SuggestionCollection();
  $query = $this->buildEntityQuery($string);
  $query_result = $query->execute();
  $url_results = $this->findEntityIdByUrl($string);
  $result = array_merge($query_result, $url_results);

  if (empty($result)) {
    return $suggestions;
  }

  $entities = $this->entityTypeManager->getStorage($this->targetType)->loadMultiple($result);

  $group_id = FALSE;
  // Extract the Group ID from the uri query parameter.
  if (\Drupal::request()->query->has('uri')) {
    $uri = \Drupal::request()->query->get('uri');
    list(, , $group_id) = explode('/', $uri);
  }

  foreach ($entities as $entity) {
    // Check the access against the defined entity access handler.
    /** @var \Drupal\Core\Access\AccessResultInterface $access */
    $access = $entity->access('view', $this->currentUser, TRUE);
    if (!$access->isAllowed()) {
      continue;
    }

    // Exclude content that is from a different group
    if ($group_id && $group_id != $entity->getGroup()->id()) {
      continue;
    }

    $entity = $this->entityRepository->getTranslationFromContext($entity);
    $suggestion = new EntitySuggestion();
    $suggestion->setLabel($this->buildLabel($entity))
      ->setGroup($this->buildGroup($entity))
      ->setDescription($this->buildDescription($entity))
      ->setEntityUuid($entity->uuid())
      ->setEntityTypeId($entity->getEntityTypeId())
      ->setSubstitutionId($this->configuration['substitution_type'])
      ->setPath($this->buildPath($entity));
    $suggestions->addSuggestion($suggestion);
  }

  return $suggestions;
 }
}

Conclusion

And we are done.

By re-implementing the execute() method of the EntityMatcher class, we are now able to make the LinkIt field display only content from the same group as the content the user is editing or creating.

So the next challenge here is to create some test coverage for this, as we're relying on a few random pieces of code - a plugin, some JavaScript in the LinkIt module, an element info alter hook and a process callback - any of which could change and render all of this non-functional. But that's a story for another post.

Photo of Pasan Gamage

Posted by Pasan Gamage
Drupal Developer

Dated 14 February 2018

Comments

I am the maintainer for the Linkit module, and I really liked this post. Glad you found it (quite) easy to extend.

Hey there, I've got linkit 8.x-4.3 and the class EntitySuggestion and SuggestionCollection don't even seem to exist at all? So the use statements fail and everything after that. Is there some aspect you did not include in this description?

Pagination

Add new comment

Feb 07 2018
Feb 07

Great to see this project a really good page builder is badly needed for Drupal - looks like a very good start, well done Lee.

Not sure if you are familiar with the layout builder and visual composer build by NikaDevs (a theme company) but you could do a lot worse then having a look at their approach, it's a very good page builder - which they ave on all their themes.

https://themeforest.net/user/nikadevs

Thanks,

Shane

Jan 24 2018
Jan 24

When optimising a site for performance, one of the options with the best effort-to-reward ratio is image optimisation. Crunching those images in your Front End workflow is easy, but how about author-uploaded images through the CMS?

Recently, a client of ours was looking for ways to reduce the size of uploaded images on their site without burdening the authors. To solve this, we used the module Image Optimize which allows you to use a number of compression tools, both local and 3rd party.

The tools it currently supports include:

  • Local
    • PNG 
    • JPEG
  • 3rd Party 

We decided to avoid the use of 3rd party services, as processing the images on our servers could reduce processing time (no waiting for a third party to reply) and ensure reliability.

In order to pick the tools which best served our needs, we picked an image that closely represented the type of image the authors often used - a person's face with a complex background - in both PNG and JPEG versions, and ran it through each of the tools with a moderately aggressive compression level.

PNG Results

Compression Library                    Compressed size  Percentage saving
Original (Drupal 8 default resizing)   234kb            -
AdvPng                                 234kb            0%
OptiPng                                200kb            14.52%
PngCrush                               200kb            14.52%
PngOut                                 194kb            17.09%
PngQuant                               63kb             73.07%

Compression Library                    Compressed size  Percentage saving
Original                               1403kb           -
AdvPng                                 1403kb           0%
OptiPng                                1288kb           8.19%
PngCrush                               1288kb           8.19%
PngOut                                 1313kb           6.41%
PngQuant                               445kb            68.28%

JPEG Results

Compression Library                    Compressed size  Percentage saving
Original (Drupal 8 default resizing)   57kb             -
JfifRemove                             57kb             0%
JpegOptim                              49kb             14.03%
JpegTran                               57kb             0%

Compression Library                    Compressed size  Percentage saving
Original                               778kb            -
JfifRemove                             778kb            0%
JpegOptim                              83kb             89.33%
JpegTran                               715kb            8.09%

Using a combination of PngQuant and JpegOptim, we could save anywhere between 14% and 89% in file size, with larger images bringing greater percentage savings.

Setting up automated image compression in Drupal 8

The Image Optimize module allows us to set up optimisation pipelines and attach them to our image styles. This allows us to set both site-wide and per-image style optimisation.

After installing the Image Optimize module, head to the Image Optimize pipelines configuration (Configuration > Media > Image Optimize pipeline) and add a new optimization pipeline.

Now add the PngQuant and JpegOptim processors. If they have been installed to the server Image Optimize should pick up their location automatically, or you can manually set the location if using a standalone binary.

JpegOptim has some additional quality settings. I'm setting "Progressive" to always and "Quality" to a sweet spot of 60; 70 could also be used as a more conservative target.

JpegOptim Settings

The final pipeline looks like the following:

pipeline

Back to the Image Optimize pipelines configuration page, we can now set the new pipeline as the sitewide default:

Default sitewide pipeline

And boom! Automated sitewide image compression!

Overriding image compression for individual image styles

If the default compression pipeline is too aggressive (or conservative) for a particular image style, we can override it in the Image Styles configuration (Configuration > Media > Image styles). Edit the image style you’d like to override, and select your alternative pipeline:

Override default pipeline

Applying compression to existing images

Flushing the image cache will recreate existing images with compression the next time the image is loaded. This can be done with the drush command 

drush image-flush --all
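If you'd rather do this from code, for example during a deployment, the same flush is available on the image style entities themselves. A minimal sketch (the update hook name is hypothetical):

use Drupal\image\Entity\ImageStyle;

/**
 * Flush all image styles so derivatives are regenerated with compression.
 */
function mymodule_update_8001() {
  foreach (ImageStyle::loadMultiple() as $style) {
    // Deletes the existing derivatives; they are recreated (and optimised)
    // the next time each image is requested.
    $style->flush();
  }
}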

Conclusion

Setting up automated image optimisation is a relatively simple process, with potentially large impacts on site performance. If you have experience with image optimisation, I would love to hear about it in the comments.

Photo of Tony Comben

Posted by Tony Comben
Front end Developer

Dated 24 January 2018

Comments

Did you consider using mod_pagespeed at all? If you did what made you decide against it?

Good question, Drupal performs many of the optimisations that mod_pagespeed does but allows us more granular control. One of the benefits of this approach is being able to control compression levels per image style. As Drupal is resizing and reshaping images then anyway, I feel it makes sense to do the compression at the same time.

Hi Tony,
Nice of you to post some details on this.

How does this integrate with Responsive Images and the Picture fields?

Can it crop and scale immediately after upload to get multiple files for multiple view ports?

Regards

Hi Saj, Responsive Images picks up its variants from the Image Styles so this works seamlessly. You can set your image dimensions and cropping in the image style, and the compression is applied after that.

Nice write up! I never knew about this Drupal module.

It'd be nice to compare the Original Image + Drupal Compression + Final Selected compression library output through some image samples.

Also might worth mentioning that PngQuant is a lossly image compression algorithm - and the others aren't (hence the big compression difference).

I'd recommend running optipng or pngcrush after pngquant to get an even more compressed image. Careful though, this can burn CPU cycles, especially with the module's obsessive parameter choices. Have a look at the $cmd entries in binaries/*.inc if you're curious.

Hi Christoph,

How do you define the order by which these compressions are applied?

Great article btw! The comparison metric is quite useful in knowing which tool is the best performer. I initially went for jpegtran but jpegoptim is producing way better results.

Thanks

I recommend one of the external services when you’re on a host where you can’t install extra server software (like Pantheon).

Hi! Nice post. Which version did you use with Drupal 8 ?

8.x-2.0-alpha3 ? No issues with alpha version ? Thanks

Hi! Did you try this version of the module with actual Drupal version?
What result have you got?
Thanks

I am glad to read such a huge stuff about the picture optimisation in Drupal. This looks really interesting to me and I would love to try out out after my bus tours from washington dc

Hi Tony,
Nice explanation of use of imageAPI module. m a newbie in D8, working on a live site.. I had a question regarding, setting manual path of pngquant.. As understood in windows, php folder should have its dll file, but as i am working on server directly, i dont know how to proceed from that step. Please do help.

Pagination

Add new comment

Jan 22 2018
Jan 22

In November 2017 I presented at Drupal South on using Dialogflow to power conversational interfaces with Drupal.

The video and slides are below, the demo in which I talk to Drupal starts in the first minute.

by Lee Rowlands / 23 January 2018 Open slides in new window

Tagged

Conversational UI, Drupal 8, Chatbots, DrupalSouth
Jan 22 2018
Jan 22

All PreviousNext Drupal 8 projects are now managed using Composer. This is a powerful tool, and allows our projects to define both public and private modules or libraries, and their dependencies, and bring them all together.

However, if you require public or private modules which are hosted on GitHub, you may run into the API rate limits. In order to overcome this, it is recommended to add a GitHub personal access token to your Composer configuration.

In this blog post, I'll show how you can do this in a secure and manageable way.

It's common practice when you encounter a Drupal project to see the following snippet in a composer.json file:

"config": {
    "github-oauth": {
        "github.com": "XXXXXXXXXXXXXXXXXXXXXX"
    }
},

What this means is, everyone is sharing a single account's personal access token. While this may be convenient, it's also a major security risk should the token accidentally be made public, or a team member leaves the organisation, and still has read/write access to your repositories.

A better approach is to have each team member configure their own personal access token locally. This ensures that individuals can only access repositories they have read permissions for, and once they leave your organisation they can no longer access any private dependencies.

Step 1: Create a personal access token

Go to https://github.com/settings/tokens and generate a new token.

Generate GitHub Token

You will need to specify all repo scopes.

Select GitHub Scopes

Finally, hit Generate Token to create the token.

GitHub token

Copy this, as we'll need it in the next step.

Step 2: Configure Composer to use your personal access token

Run the following from the command line:

composer config -g github-oauth.github.com XXXXXXXXXXXXXXXXXXXXXXX

You're all set! From now on, composer will use your own individual personal access token which is stored in $HOME/.composer/auth.json

What about Automated Testing Environments?

Fortunately, composer also accepts an environment variable COMPOSER_AUTH with a JSON-formatted string as an argument. For example:

COMPOSER_AUTH='{"github-oauth": {"github.com": "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"}}'

You can simply set this environment variable in your CI Environment (e.g. CircleCI, TravisCI, Jenkins) and have a personal access token specific to the CI environment.

Summary

By using personal access tokens, you can now safely remove any tokens from the project's composer.json file, removing the risk that they get exposed. You also know that by removing access for any ex-team members, they are no longer able to access your organisation's repos using a token. Finally, in the event of a token being compromised, you have reduced the attack surface, and can more easily identify which user's token was used.

Photo of Kim Pepper

Posted by Kim Pepper
Technical Director

Dated 22 January 2018

Add new comment

Jan 18 2018
Jan 18

After reading a blog post by Matthias Noback on keeping an eye on code churn, I was motivated to run the churn php library over some modules in core to gauge the level of churn.

Is this something you might like to do on your modules? Read on for more information.

What is churn

As Matthias details in his blog post - churn is a measure of the number of times a piece of code has been changed over time. The red flags start to crop up when you have high complexity and high churn.

Enter churn-php

Churn php is a library that analyses PHP code that has its history in git to identify high churn/complexity scores.

You can either install it with Composer:

composer require bmitch/churn-php --dev

or run it using Docker:

docker run --rm -ti -v $PWD:/app dockerizedphp/churn run /path/to/code

Some results from core

So I ran it for some modules I look after in core, as well as the Drupal\Core\Entity namespace.

Block Content

File                                                                   Times Changed  Complexity  Score
core/modules/block_content/src/Entity/BlockContent.php                41             6           1
core/modules/block_content/src/BlockContentForm.php                   32             6           0.78
core/modules/block_content/src/Plugin/Block/BlockContentBlock.php     20             6           0.488
core/modules/block_content/src/Tests/BlockContentTestBase.php         16             6           0.39
core/modules/block_content/src/BlockContentTypeForm.php               18             4           0.347
core/modules/block_content/src/Controller/BlockContentController.php  8              6           0.195

Comment

File                                                                               Times Changed  Complexity  Score
core/modules/comment/src/CommentForm.php                                           60             45          1
core/modules/comment/src/Entity/Comment.php                                        55             25          0.548
core/modules/comment/src/Tests/CommentTestBase.php                                 33             29          0.426
core/modules/comment/src/Controller/CommentController.php                          32             20          0.274
core/modules/comment/src/CommentViewBuilder.php                                    37             16          0.25
core/modules/comment/src/Plugin/Field/FieldFormatter/CommentDefaultFormatter.php   32             18          0.24
core/modules/comment/src/Form/CommentAdminOverview.php                             29             17          0.191
core/modules/comment/src/CommentAccessControlHandler.php                           17             28          0.19
core/modules/comment/src/CommentLinkBuilder.php                                    15             29          0.17
core/modules/comment/src/CommentManager.php                                        29             15          0.157

Drupal\Core\Entity

File                                                                Times Changed  Complexity  Score
core/lib/Drupal/Core/Entity/ContentEntityBase.php                   115            173         0.808
core/lib/Drupal/Core/Entity/Sql/SqlContentEntityStorage.php         61             196         0.465
core/lib/Drupal/Core/Entity/Sql/SqlContentEntityStorageSchema.php   56             203         0.427
core/lib/Drupal/Core/Entity/Entity.php                              131            43          0.212
core/lib/Drupal/Core/Entity/ContentEntityStorageBase.php            41             105         0.16

Conclusion

So, what to do with these results?

Well I think if you're looking to simplify your code-base and identify places that would warrant refactoring, those with a high 'churn' score would be a good place to start.

What do you think? Let us know in the comments.

Photo of Lee Rowlands

Posted by Lee Rowlands
Senior Drupal Developer

Dated 19 January 2018

Comments

Pagination

Add new comment

Jan 15 2018
Jan 15

Managing technical debt is important for the health of all software projects. One way to manage certain types of technical debt is to revisit code and decide if it's still relevant to the project, and to potentially remove it. Doing so can reduce complexity and the amount of code developers are required to maintain.

To address this we’ve been experimenting with adding simple annotations to code, which indicate an “expiry”. A nudge to developers to go and reevaluate if some bit of code will still be needed at some point in the future. This can be integrated into CI pipelines to fail builds which have outstanding expiry annotations.

Some scenarios where this has proved to be helpful have been:

  • Removing workarounds in CSS to address bugs in web browsers which have since been fixed.
  • Removing uninstalled modules, which were required only for hook_uninstall (a hypothetical sketch of this follows the CSS example below).
  • Removing code that exists for features which are gradually being superseded, like an organisation gradually migrating content from nodes into a new custom entity.

Here is a real snippet of code we were able to recently delete from a project, based on a bug which was fixed upstream in Firefox. I don't believe we would have been able to confidently clean this up without an explicit prompt to revisit the code, which was introduced many months earlier.


// @expire Jan 2018
// Fix a bug in firefox which causes all form elements to match the exact size
// specified in the "size" or "cols" attribute. Firefox probably will have
// fixed this bug by now. Test it by removing the following code and visiting
// the contact form at a small screen size. If the elements dont overflow the
// viewport, the bug is fixed.
.form-text__manual-size {
  width: 529px;
  @media (max-width: 598px) {
    width: 100%;
  }
}
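A hypothetical PHP equivalent of the second scenario in the list above (a module kept in the codebase only so its uninstall hook can run) might look like the following; the module and field names are made up:

use Drupal\field\Entity\FieldStorageConfig;

/**
 * Implements hook_uninstall().
 */
function legacy_cleanup_uninstall() {
  // @expire Jun 2018
  // This module only remains so this hook can remove the legacy field when
  // the module is uninstalled on each environment. Once that has happened
  // everywhere, delete the module (and this annotation) from the codebase.
  if ($field_storage = FieldStorageConfig::loadByName('node', 'field_legacy')) {
    $field_storage->delete();
  }
}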

The code we've integrated into our CI pipeline to check these expiry annotations simply greps the code base for strings matching the expiry pattern for the last n months worth of time:


#!/bin/bash

SEARCH_FORMAT="@expire %s"
DATE_FORMAT="+%b %Y"
DIRS="./app/modules/custom/ ./app/themes/"
SEARCH_LAST_N_MONTHS=4

# Cross-platform date formatting with a month offset.
case `uname` in
  Darwin)
    function date_offset_month() {
      date -v $1m "$DATE_FORMAT";
    }
    ;;
  Linux)
    function date_offset_month() {
      date --d="$1 month" "$DATE_FORMAT"
    }
    ;;
  *)
esac

for i in $(seq 0 $SEARCH_LAST_N_MONTHS); do
    FORMATTED_DATE=$(date_offset_month -$i)
    SEARCH_STRING=$(printf "$SEARCH_FORMAT" "$FORMATTED_DATE")
    echo "Searching codebase for \"$SEARCH_STRING\"."
    grep -rni "$SEARCH_STRING" $DIRS && exit 1
done

exit 0
Photo of Sam Becker

Posted by Sam Becker
Senior Developer

Dated 16 January 2018

Comments

Nice!
Do you integrate this into your project issue tracking? Maybe have a tech debt story?

Pagination

Add new comment

Dec 04 2017
Dec 04

With the release of Drupal 8.4.x and its use of ES6 (ECMAScript 2015) in Drupal core, we've started the task of updating our jQuery plugins/widgets to use the new syntax. This post will cover what we've learnt so far and what the benefits are of doing this.

If you've read my post about the Asset Library system you'll know we're big fans of the Component-Driven Design approach, and having a JavaScript file per component (where needed, of course) is ideal. We also like to keep our JS widgets generic so that the entire component (the entire styleguide, for that matter) can be used outside of Drupal as well. Drupal behaviours and settings are still used, but live in a different JavaScript file to the generic widget, and simply call its function, passing in Drupal settings as "options" as required.

Here is an example with an ES5 jQuery header component, with a breakpoint value set somewhere in Drupal:

@file header.js

(function ($) {

 $.fn.header = function (options) {
   var opts = $.extend({}, $.fn.header.defaults, options);
   return this.each(function () {
     var $header = $(this);
     // do stuff with $header
   });
 };

 // Overridable defaults
 $.fn.header.defaults = {
   breakpoint: 700,
   toggleClass: 'header__toggle',
   toggleClassActive: 'is-active'
 };

})(jQuery);
@file header.drupal.js

(function ($, Drupal, drupalSettings) {
 Drupal.behaviors.header = {
   attach: function (context) {
     $('.header', context).header({
       breakpoint: drupalSettings.my_theme.header.breakpoint
     });
   }
 };
})(jQuery, Drupal, drupalSettings);

Converting these files into a different language is relatively simple as you can do one at a time and slowly chip away at the full set. Since ES6 is used in the popular JS frameworks it’s a good starting point for slowly moving towards a “progressively decoupled” front-end.

Support for ES6

Before going too far I should mention support for this syntax isn’t quite widespread enough yet! No fear though, we just need to add a “transpiler” into our build tools. We use Babel, with the babel-preset-env, which will convert our JS for us back into ES5 so that the required older browsers can still understand it.

Our Gulp setup will transpile any .es6.js file and rename it (so we're not replacing our working file), before passing the renamed file into our minifying Gulp task.

With the Babel ENV preset we can specify which browsers we actually need to support, so that we’re doing the absolute minimum transpilation (is that a word?) and keeping the output as small as possible. There’s no need to bloat your JS trying to support browsers you don’t need to!

import gulp from 'gulp';
import babel from 'gulp-babel';
import rename from 'gulp-rename';
import path from 'path';
import config from './config';

// Helper function for renaming files
const bundleName = (file) => {
 file.dirname = file.dirname.replace(/\/src$/, '');
 file.basename = file.basename.replace('.es6', '');
 file.extname = '.bundle.js';
 return file;
};

const transpileFiles = [
 `${config.js.src}/**/*.js`,
 `${config.js.modules}/**/*.js`,
 // Ignore already minified files.
 `!${config.js.src}/**/*.min.js`,
 `!${config.js.modules}/**/*.min.js`,
 // Ignore bundle files, so we don’t transpile them twice (will make more sense later)
 `!${config.js.src}/**/src/*.js`,
 `!${config.js.modules}/**/src/*.js`,
 `!${config.js.src}/**/*.bundle.js`,
 `!${config.js.modules}/**/*.bundle.js`,
];

const transpile = () => (
 gulp.src(transpileFiles, { base: './' })
   .pipe(babel({
     presets: [['env', {
       modules: false,
       useBuiltIns: true,
       targets: { browsers: ["last 2 versions", "> 1%"] },
     }]],
   }))
   .pipe(rename(file => (bundleName(file))))
   .pipe(gulp.dest('./'))
);

transpile.description = 'Transpile javascript.';
gulp.task('scripts:transpile', transpile);

Which uses:

$ yarn add path gulp gulp-babel gulp-rename babel-preset-env --dev

On a side note, we’ll be outsourcing our entire Gulp workflow real soon. We’re just working through a few extra use cases for it, so keep an eye out!

Learning ES6

Reading about ES6 is one thing, but I find getting into the code to be the best way for me to learn things. We like to follow Drupal coding standards, so we point our eslint config to extend what's in Drupal core. Upgrading to 8.4.x obviously threw a LOT of new lint errors, and linting was usually disabled until time permitted their correction. But you can use these errors as a tailored ES6 guide. Tailored because it's directly applicable to how you usually write JS (assuming you wrote the first code).

Working through each error, looking up the description, correcting it manually (as opposed to using the --fix flag) was a great way to learn it. It took some time, but once you understand a rule you can start skipping it, then use the --fix flag at the end for a bulk correction.

Of course you're also a Google away from a tonne of online resources and videos to help you learn if you prefer that approach!

ES6 with jQuery

Our original code is usually in jQuery, and I didn’t want to add removing jQuery into the refactor work, so currently we’re using both which works fine. Removing it from the mix entirely will be a future task.

The biggest gotcha was probably our use of this, which needed to be reviewed once converted to arrow functions. Taking our header example from above:

return this.each(function () { var $header = $(this); });

Once converted into an arrow function, using this inside the loop is no longer scoped to the function. It doesn’t change at all - it’s not an individual element of the loop anymore, it’s still the same object we’re looping through. So clearly stating the obj as an argument of the .each() function lets us access the individual element again.

return this.each((i, obj) => { const $header = $(obj); });

Converting the jQuery plugins (or jQuery UI widgets) to ES6 modules was a relatively easy task as well… instead of:

(function ($) {

 $.fn.header = function (options) {
   var opts = $.extend({}, $.fn.header.defaults, options);
   return this.each(function () {
     var $header = $(this);
     // do stuff with $header
   });
 };

 // Overridable defaults
 $.fn.header.defaults = {
   breakpoint: 700,
   toggleClass: 'header__toggle',
   toggleClassActive: 'is-active'
 };

})(jQuery);

We just make it a normal-ish function:

const headerDefaults = {
 breakpoint: 700,
 toggleClass: 'header__toggle',
 toggleClassActive: 'is-active'
};

function header(options) {
 (($, self) => {
   const opts = $.extend({}, headerDefaults, options);
   return $(self).each((i, obj) => {
     const $header = $(obj);
     // do stuff with $header
   });
 })(jQuery, this);
}

export { header as myHeader }

Since the exported ES6 module has to be a top level function, the jQuery wrapper was moved inside it, along with passing through the this object. There might be a nicer way to do this but I haven't worked it out yet! Everything inside the module is the same as I had in the jQuery plugin, just updated to the new syntax.

I also like to rename my modules when I export them so they’re name-spaced based on the project, which helps when using a mix of custom and vendor scripts. But that’s entirely optional.

Now that we have our generic JS using ES6 modules it’s even easier to share and reuse them. Remember our Drupal JS separation? We no longer need to load both files into our theme. We can import our ES6 module into our .drupal.js file then attach it as a Drupal behaviour. 

@file header.drupal.js

import { myHeader } from './header';

(($, { behaviors }, { my_theme }) => {
 behaviors.header = {
   attach(context) {
     myHeader.call($('.header', context), {
       breakpoint: my_theme.header.breakpoint
     });
   }
 };
})(jQuery, Drupal, drupalSettings);

So a few differences here, we're importing the myHeader function from our other file,  we're destructuring our Drupal and drupalSettings arguments to simplify them, and using .call() on the function to pass in the object before setting its arguments. Now the header.drupal.js file is the only file we need to tell Drupal about.

Some other nice additions in ES6 that have less to do with jQuery are template literals (being able to say $(`.${opts.toggleClass}`) instead of $('.' + opts.toggleClass)) and the more obvious use of const and let instead of var, which are block-scoped.

Importing modules into different files requires an extra step in our build tools, though. Because browser support for ES6 modules is also a bit too low, we need to “bundle” the modules together into one file. The most popular bundler available is Webpack, so let’s look at that first.

Bundling with Webpack

Webpack is super powerful and was my first choice when I reached this step. But it's not really designed for this component-based approach - few bundlers truly are. Bundlers are great for taking one entry JS file which has multiple ES6 modules imported into it. Those modules might be broken down into smaller ES6 modules, and at some level are components much like ours, but ultimately they end up being bundled into ONE file.

But that’s not what I wanted! What I wanted, as it turned out, wasn’t very common. I wanted to add Webpack into my Gulp tasks much like our Sass compilation is, taking a “glob” of JS files from various folders (which I don’t really want to have to list), then to create a .bundle.js file for EACH component which included any ES6 modules I used in those components.

The each part was the real clincher. Getting multiple entry points into Webpack is one thing, but multiple destination points as well was certainly a challenge. The vinyl-named npm module was a lifesaver. This is what my Gulp task looked like:

import gulp from 'gulp';
import webpackStream from 'webpack-stream';
import webpack from 'webpack'; // Use newer webpack than webpack-stream
import named from 'vinyl-named';
import path from 'path';
import config from './config';

const bundleFiles = [
 config.js.src + '/**/src/*.js',
 config.js.modules + '/**/src/*.js',
];

const bundle = () => (
 gulp.src(bundleFiles, { base: "./" })
   // Define [name] with the path, via vinyl-named.
   .pipe(named((file) => {
     const thisFile = bundleName(file); // Reuse our naming helper function
     // Set named value and queue.
     thisFile.named = thisFile.basename;
     this.queue(thisFile);
   }))
   // Run through webpack with the babel loader for transpiling to ES5.
    .pipe(webpackStream({
     output: {
       filename: '[name].bundle.js', // Filename includes path to keep directories
     },
     module: {
       loaders: [{
         test: /\.js$/,
         exclude: /node_modules/,
         loader: 'babel-loader',
         query: {
           presets: [['env', { 
             modules: false, 
             useBuiltIns: true, 
             targets: { browsers: ["last 2 versions", "> 1%"] }, 
           }]],
         },
       }],
     },
   }, webpack))
   .pipe(gulp.dest('./')) // Output each [name].bundle.js file next to it’s source
);

bundle.description = 'Bundle ES6 modules.';
gulp.task('scripts:bundle', bundle);

Which required:

$ yarn add path webpack webpack-stream babel-loader babel-preset-env vinyl-named --dev

This worked. But Webpack has some boilerplate JS that it adds to its bundle output file, which it needs for module wrapping etc. This is totally fine when the output is a single file, but adding this (exact same) overhead to each of our component JS files, it starts to add up. Especially when we have multiple component JS files loading on the same page, duplicating that code.

It only made each component a couple of KB bigger (once minified, an unminified Webpack bundle is much bigger), but the site seemed so much slower. And it wasn’t just us, a whole bunch of our javascript tests started failing because the timeouts we’d set weren’t being met. Comparing the page speed to the non-webpack version showed a definite impact on performance.

So what are the alternatives? Browserify is probably the second most popular but didn’t have the same ES6 module import support. Rollup.js is kind of the new bundler on the block and was recommended to me as a possible solution. Looking into it, it did indeed sound like the lean bundler I needed. So I jumped ship!

Bundling with Rollup.js

The setup was very similar so it wasn’t hard to switch over. It had a similar problem about single entry/destination points but it was much easier to resolve with the ‘gulp-rollup-each’ npm module. My Gulp task now looks like:

import gulp from 'gulp';
import rollup from 'gulp-rollup-each';
import babel from 'rollup-plugin-babel';
import resolve from 'rollup-plugin-node-resolve';
import commonjs from 'rollup-plugin-commonjs';
import path from 'path';
import config from './config';

const bundleFiles = [
 config.js.src + '/**/src/*.js',
 config.js.modules + '/**/src/*.js',
];

const bundle = () => {
 return gulp.src(bundleFiles, { base: "./" })
   .pipe(rollup({
     plugins: [
       resolve(),
       commonjs(),
       babel({
         presets: [['env', {
           modules: false,
           useBuiltIns: true,
           targets: { browsers: ["last 2 versions", "> 1%"] },
         }]],
         babelrc: false,
         plugins: ['external-helpers'],
       })
     ]
   }, (file) => {
     const thisFile = bundleName(file); // Reuse our naming helper function
     return {
       format: 'umd',
       name: path.basename(thisFile.path),
     };
   }))
   .pipe(gulp.dest('./')); // Output each [name].bundle.js file next to it’s source
};

bundle.description = 'Bundle ES6 modules.';
gulp.task('scripts:bundle', bundle);

We don’t need vinyl-named to rename the file anymore, we can do that as a callback of gulp-rollup-each. But we need a couple of extra plugins to correctly resolve npm module paths.

So for this we needed:

$ yarn add path gulp-rollup-each rollup-plugin-babel babel-preset-env rollup-plugin-node-resolve rollup-plugin-commonjs --dev

Rollup.js does still add a little bit of boilerplate JS but it’s a much more acceptable amount. Our JS tests all passed so that was a great sign. Page speed tests showed the slight improvement I was expecting, having bundled a few files together. We're still keeping the original transpile Gulp task too for ES6 files that don't include any imports, since they don't need to go through Rollup.js at all.

Webpack might still be the better option for more advanced things that a decoupled frontend might need, like Hot Module Replacement. But for simple or only slightly decoupled components Rollup.js is my pick.

Next steps

Some modern browsers can already support ES6 module imports, so this whole bundle step is becoming somewhat redundant. Ideally the bundled file, with its overhead and old-fashioned code, is only used on those older browsers that can't handle the new and improved syntax, and the modern browsers use straight ES6...

Luckily this is possible with a couple of script attributes. Our .bundle.js file can be included with the nomodule attribute, alongside the source ES6 file with a type=”module” attribute. Older browsers ignore the type=module file entirely because modules aren’t supported and browsers that can support modules ignore the ‘nomodule’ file because it told them to. This article explains it more.

Then we'll start replacing the jQuery entirely, even look at introducing a Javascript framework like React or Glimmer.js to the more interactive components to progressively decouple our front-ends!
 

Photo of Rikki Bochow

Posted by Rikki Bochow
Front end Developer

Dated 5 December 2017

Comments

Is it absolutely necessary to bring all our jQuery plugins to ES6, or would they remain fine as it is?

They would be fine as is, you just won't get the full benefits of the ES6 module imports/exports. Being able to import a particular function (or all of them) from another file just means you can make things more reusable. You can be selective about what you convert too and just do the parts you know would benefit most from it.

Pagination

Add new comment

Nov 28 2017
Nov 28

At DrupalSouth 2017, I presented a session on the new Workflows module, which just went stable in Drupal 8.4.0. Workflows was split out from content moderation as a separate module, and can be used independently to create custom workflows. In this presentation, I gave a demonstration of how to create a basic workflow for an issue tracker.

Since 2011 we have had access to a content moderation tool in Drupal 7 in the form of Workbench Moderation. This module introduced the concept of Draft ➡ Review ➡ Published workflows, with different user roles having specific permissions to move from one state to the next.

Unfortunately, the underlying Drupal core revision API was not designed to deal with this, and there were some pretty crazy workarounds.

Content moderation has long been a key feature request for Drupal, and so effort was made to port Workbench Moderation across to Drupal 8. 

Content Moderation drove a lot of cleanup in Drupal core APIs, including proper support for forward revisions, and adding revision support to other content entities besides Content Types, such as Custom Blocks. More are on the way.

In Drupal 8.3, the Workflows module was split out of Content Moderation. Why you may ask? Well, because the Workflows module provides the state machine engine that Content Moderation relies on.

What is a State Machine?

A state machine defines a set of states and rules on how you can transition between those states.

A door state machine

In our simple example of a door, it can only be opened, closed or locked. However, you can't go directly from locked to open, you need to unlock it first.

Content Moderation Workflow Configuration

Content Moderation provides a set of Workflow states and transitions by default.

Content Moderation States
Content Moderation Transitions

If we were to put this together in a state machine diagram, it would look like the following:

Content Moderation State Machine

From the above diagram, it becomes clear what the allowed transitions are between states.

So now Workflows has been configured with our Content Moderation states and transitions, what is left for Content Moderation to do?

What Does Content Moderation Do?

It turns out quite a lot. Remember that Workflows only provides the state machine; it in no way prescribes how you should manage the current state of a particular entity.
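To make that concrete, here is a minimal sketch of querying the state machine from code (the 'editorial', 'draft' and 'published' names assume the default editorial workflow configuration that ships with Content Moderation):

use Drupal\workflows\Entity\Workflow;

// Load the workflow config entity and its state machine plugin.
$workflow = Workflow::load('editorial');
$type = $workflow->getTypePlugin();

// The states and transitions defined in configuration.
$states = $type->getStates();
$transitions = $type->getTransitions();

// Ask the state machine whether a given transition is allowed.
if ($type->hasTransitionFromStateToState('draft', 'published')) {
  // E.g. "Publish".
  $label = $type->getTransitionFromStateToState('draft', 'published')->label();
}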

Content Moderation provides:

  • Default Workflows configuration
  • A fairly complex WorkflowType plugin which works with the Revision API.
  • Storage for individual states on content entities
  • Configuration of which entity bundles (Content types, etc.) should have Content Moderation
  • A number of admin forms for configuring the workflows and how they apply
  • Permissions

Building an Issue Tracker

We want to build a very simple issue tracker for our example. The state machine diagram is the following:

Issue Tracker State Machine

That's the simple bits out of the way. Now, in order to build an issue tracker, we will need to replicate the rest what Content Moderation does!

Fortunately there is a module that can do most of the heavy lifting for us.

Workflows Field

“This module provides a field which allows you to store states on any content entity and ensure the states change according to transitions defined by the core workflows module.” 

Perfect! Let's download and install it.

Next we want to add a new Workflow. We can assign it a label of Issue Status and you'll see that we have a new Workflows Field option in the Workflow Type dropdown.

Add new workflow

We can then configure the desired workflow states and transitions.

Issue States
Issue Transitions

That's our workflow configured. Now we need to create a new Issue content type to attach our workflow to. It's assumed you know how to create a content type already. If not, check out the User Guide.

Next, we need to add our Workflows Field to our Issue content type. Follow the usual steps to add a field, and in the drop down choose Workflows as the field type, and our previously created Issue Status workflow.

Add workflows field

Test it out!

Now we can test out our workflow by creating a new Issue from the Content page. If everything was configured correctly, we should see a new field on the edit form for Status.

Issue status form

Given the transitions we defined in our workflow, we should only be allowed to see certain values in the drop-down, depending on the current state.

Testing workflow constraints

What next?

That's it for setting up and configuring a custom workflow using Workflows Field. Some next steps would be:

  • Add permissions for certain users (there's an issue for that #2904573 )
  • Add email notifications

How would you use the new Workflows API?

Let me know in the comments!

Photo of Kim Pepper

Posted by Kim Pepper
Technical Director

Dated 29 November 2017

Add new comment

Nov 23 2017
Nov 23

As seen in the recent Uber hack, storing secrets such as API tokens in your project repository can leave your organisation vulnerable to data breaches and extortion. This tutorial demonstrates a simple and effective way to mitigate this kind of threat by leveraging Key module to store API tokens in remote key storage.

Even tech giants like Uber are bitten by poor secret management in their applications. The snippet below describes how storing AWS keys in their repository resulted in a data breach, affecting 57 million customers and drivers.

Here’s how the hack went down: Two attackers accessed a private GitHub coding site used by Uber software engineers and then used login credentials they obtained there to access data stored on an Amazon Web Services account that handled computing tasks for the company. From there, the hackers discovered an archive of rider and driver information. Later, they emailed Uber asking for money, according to the company.

Uber could have avoided this breach by storing their API keys in a secret management system. In this tutorial, I'll show you how to do exactly this using the Key module in conjunction with the Lockr key management service.

This guide leverages a brand-new feature of Key module (as of 8.x-1.5) which allows overriding any configuration value with a secret. In this instance we will set up the MailChimp module using this secure config override capability.

Service Set-Up

Before proceeding with the Drupal config, you will need a few accounts:

  • Mailchimp offer a "Forever Free" plan.
  • Lockr offer your first key and 1,500 requests for free.

These third-party services provide us with a simple example. Other services are available.

Dependencies

There are a few modules you'll need to add to your codebase.

composer require \
  "drupal/key:^1.5" \
  "drupal/lockr" \
  "drupal/mailchimp"

Configuration

  1. Go to /admin/modules  and enable the MailChimp, Lockr and Key modules.
  2. Go to /admin/config/system/lockr
  3. Use this form to generate a TLS certificate that Lockr uses to authenticate your site. Fill out the form and submit.
    Lockr Create Auth Certificate
  4. Enter the email address you used for your Lockr account and click Sign up.
  5. You should now be re-prompted to log in - enter the email address and password for your Lockr account.
  6. In another tab, log into the MailChimp dashboard
    1. Go to the API settings page - https://us1.admin.mailchimp.com/account/api/
    2. Click Create A Key
    3. Note down this API key so we can configure in Drupal in the next step.
      MailChimp API Dashboard
  7. In your Drupal tab, go to /admin/config/system/keys and click Add Key
  8. Create a new Key entity for your MailChimp token. The important values here are:
    1. Key provider - ensure you select Lockr
    2. Value - paste the API token you obtained from the MailChimp dashboard.
      Key Create MailChimp Token
  9. Now we need to set up the configuration overrides. Go to /admin/config/development/configuration/key-overrides and click Add Override
  10. Fill out this form, the important values here are:
    1. Configuration type: Simple configuration
    2. Configuration name: mailchimp.settings
    3. Configuration item: api_key
    4. Key: The name of the key you created in the previous step.
      Key Override Create

... and it is that simple.

Result

The purpose of this exercise is to ensure the API tokens for our external services are not saved in Drupal's database or code repository - so let's see what those look like now.

MailChimp Config Export - Before

If you configured MailChimp in the standard way, you'd see a config export similar to this. As you can see, the api_key value is in plaintext - anyone with access to your codebase would have full access to your MailChimp account.

api_key: 03ca2522dd6b117e92410745cd73e58c-us1
cron: false
batch_limit: 100
api_classname: Mailchimp\Mailchimp
test_mode: false

MailChimp Config Export - After

With the key overrides feature enabled, the api_key value in this file is now null.

api_key: null 
cron: false
batch_limit: 100
api_classname: Mailchimp\Mailchimp
test_mode: false

There are a few other relevant config export files - let's take a look at those.

Key Entity Export

This export is responsible for telling Drupal where Key module stored the API token. If you look at the key_provider and key_provider_settings values, you'll see that it is pointing to a value stored in Lockr. Still no API token in sight!

dependencies:
 module:
   - lockr
   - mailchimp
id: mailchimp_token
label: 'MailChimp Token'
description: 'API token used to authenticate to MailChimp email marketing platform.'
key_provider: lockr
key_provider_settings:
 encoded: aes-128-ctr-sha256$nHlAw2BcTCHVTGQ01kDe9psWgItkrZ55qY4xV36BbGo=$+xgMdEzk6lsDy21h9j….
key_input: text_field

Key Override Export

The final config export is where the Key entity is mapped to override MailChimp's configuration item. 

status: true
dependencies:
 config:
   - key.key.mailchimp_token
   - mailchimp.settings
id: mailchimp_api_token
label: 'MailChimp API Token'
config_type: system.simple
config_name: mailchimp.settings
config_item: api_key
key_id: mailchimp_token
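If you want to double-check the override at runtime, immutable config reads should now return the real key from Lockr, while the raw stored value stays empty. A quick sketch (run it with drush php-eval or similar):

// Overrides apply to immutable config, so this should return the real API key.
$api_key = \Drupal::config('mailchimp.settings')->get('api_key');

// Editable (raw) config bypasses overrides, so this stays null - nothing
// sensitive lives in the database or the config export.
$raw = \Drupal::configFactory()->getEditable('mailchimp.settings')->get('api_key');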

Conclusion

Hopefully this tutorial shows you how accessible these security-hardening techniques have become. 

With this solution implemented, an attacker cannot take control of your MailChimp account simply by gaining access to your repository or a database dump. Also remember that this exact technique can be applied to any module which uses the Configuration API to store API tokens.

Why? Here are a few examples of ways popular Drupal modules could harm your organisation if their configs were exposed (tell me about your own worst-case scenarios in the comments!).

  • s3fs - An attacker could leak or delete all of the data stored in your bucket. They could also ramp up your AWS bill by storing or transferring terabytes of data.
  • SMTP - An attacker could use your own SMTP server against you to send customers phishing emails from a legitimate email address. They could also leak any emails the compromised account has access to.

What other Drupal modules could be made more secure in this way? Post your ideas in the comments!

Go forth, and build secure Drupal projects!

Photo of Nick Santamaria

Posted by Nick Santamaria
Systems Operations Developer

Dated 24 November 2017


Nov 21 2017
Nov 21

Need a way to mix fields from referenced entities with regular fields from managed display?

Then the Display Suite Chained Fields module might be for you.

So how do you go about using the module?

Step 1: Enable a display suite layout for the view mode

To use the chained fields functionality, you must enable a display suite layout for the view mode. Select a layout other than none and hit Save.

Enabling a layout

Step 2: Enable the entity reference fields you wish to chain

To keep the manage display list from being cluttered, you must manually enable the entity reference fields you wish to show chained fields from. For example, to show the author's picture, you might enable the 'Authored by' entity reference field, which points to the author. After you've enabled the required fields, press Save.

Enabling fields for chaining

Step 3: Configure the chained fields as required

Finally, just configure the chained fields as normal.

Configuring chained fields

That's it - let me know your thoughts in the comments or the issue queue.

Photo of Lee Rowlands

Posted by Lee Rowlands
Senior Drupal Developer

Dated 22 November 2017


Nov 21 2017
Nov 21

We recently Open Sourced our temporary environment builder, M8s. In this blog post we will be demoing everything you need to get started!

by Nick Schuch / 21 November 2017

Introduction

In this video we will introduce you to M8s and the problem which it is solving.

[embedded content]

Provisioning a M8s cluster

Now that you are acquainted with the M8s project, it's time to get a cluster provisioned!

In this video we will setup a Google Kubernetes Engine cluster and deploy the M8s API components.

[embedded content]

Setting up CircleCI

Now that our M8s cluster is up and running, it's time to setup our pipeline to run a build.

In this video we will be configuring CircleCI to run the M8s CLI.

[embedded content]

Pushing a topic branch

It's time to put it all together!

In this video we will be pushing a topic branch to demonstrate how M8s interacts with a Pipeline.

[embedded content]

Finale

You made it to the finale! In this video we will be checking out the build environment and how a developer can access the Mailhog and Solr containers.

[embedded content]

Conclusion

To learn more about the M8s project, you can go and check out:

We welcome any and all feedback via Twitter and our Github Project issues page.

Tagged

m8s, Kubernetes, Drupal Development

Photo of Nick Schuch

Posted by Nick Schuch
Sys Ops Lead

Dated 21 November 2017


Nov 20 2017
Nov 20

In my recent talk at DrupalSouth Auckland 2017 I took a hard look at the hyperbole of Drupal supposedly powering over a million websites. Where does Drupal really sit in relation to other CMS platforms, both open source and proprietary? What trends are emerging that will impact Drupal's market share? The talk looked outside the Drupal bubble and took a high level view of its market potential and approaches independent firms can take to capitalise on Drupal's strengths and buffer against its potential weaknesses.

But, Drupal powers over a million websites!

One of the key statistics that Drupalers hold onto is that it's powered over a million websites since mid 2014 when Drupal 7 was in ascendance. However, since Drupal 8 was released in late 2015, Drupal's overall use has stalled at around 1.2m websites, as seen circled in red on the Drupal Core usage statistics graph below.

Drupal install graph

The main reason for this stall in growth was that Drupal 8 was a major architectural re-write that wasn't essential or even affordable for many Drupal 7 sites to migrate to. Many clients considering major new projects held off on committing to Drupal 8 until there were more successful case studies in the wild, and didn't commission new Drupal 7 sites given that version was nearing a decade old. Anecdotally, 2016 was a tough year for many Drupal firms as they grappled with this pause in adoption.

Of course, Drupal 8 is now a well-proven platform and is experiencing steady uptake as circled in green on the usage graph above. This uptake corresponds with a down tick in Drupal 7 usage, but also indicates a softening of total Drupal usage. If we extrapolate these trend lines in a linear fashion, then we can see that Drupal 8 might surpass Drupal 7 usage around 2023.

Drupal usage extrapolation

Of course, technology adoption doesn't move in a straight line! Disruptive technologies emerge that rapidly change the playing field in a way that often can't be envisaged. The example that springs to mind is that Nokia's market share was still growing when the iPhone 4 was released in 2010. By the time the iPhone 4s was released in 2011, Nokia's sales volumes had almost halved, leading to Microsoft's catastrophic purchase of the handset division in 2013 and subsequent re-sale for 5% of the purchase value in 2016. Oops!

Builtwith stats

Despite this downward trend in overall Drupal usage, we can take comfort that its use on larger scale sites is growing, powering 5.7% of the Top 10,000 websites according to Builtwith.com. However, its market share of the Top 100,000 (4.3%) and Top Million (3%) websites is waning, indicating that other CMS are gaining ground with smaller sites. It's also worth noting that Builtwith only counts ~680,000 Drupal websites, indicating that the other ~500,000 Drupal.org is detecting are likely to be development and staging sites.

So, where are these other sites moving to when they're choosing a new CMS? 

Wordpress usage

Looking at the stats from W3Techs, it's clear to see that Wordpress accounts for almost all of the CMS growth, now sitting at around 30% of total market share.

Wordpress has been able to achieve this dominance by being a fantastic CMS for novice developers and smaller web agencies to build clients' websites with. This is reinforced by Wordpress having an exceptional editor experience and a hugely popular SAAS platform at Wordpress.com.

Drupal's place in the CMS market

The challenge Wordpress poses to other open-source CMS platforms, like Joomla, Typo3 and Plone, all with under 1% market share and falling, is that their development communities are likely to direct their efforts to other platforms. Drupal is able to hedge against this threat by having a large and highly engaged community around Drupal 8, but it's now abundantly clear that Drupal can't compete as a platform for building smaller brochure-ware style sites that Wordpress and SAAS CMS like Squarespace are dominating. We're also seeing SAAS platforms like Nationbuilder eat significantly into Drupal's previously strong share of the non-profit sector.

With all the hype around Headless or Decoupled CMS, Drupal 8 is well positioned to play a role as the backend for React or Angular Javascript front-ends. Competitors in this space are SAAS platforms like Contentful and Directus, with proprietary platforms like Kentico pivoting as a native cloud CMS service designed to power decoupled front-ends.

We often talk of Drupal as a CMS Framework, where it competes against frameworks like Ruby on Rails, .NET and Django to build rich web based applications. Drupal 8 is still well placed to serve this sector if the web applications are also relying on large scale content and user management features.

Which brings us to the Enterprise CMS sector, where Drupal competes head to head with proprietary platforms like Adobe Experience Manager, Sitecore and legacy products from Opentext, IBM and Oracle. The good news is that Drupal holds its own in this sector and has gained very strong market share with Government, Higher Education, Media and "Challenger" Enterprise clients.

This "Comfort zone" for Drupal usage is characterised by clients building large scale platforms with huge volumes of content and users, high scalability and integration with myriad third party products. Operationally, these clients often have well established internal web teams and varying degrees of self reliance. They're often using Agile delivery methods and place high value on speed to market and the cost savings associated with open-source software.

Where Drupal is gaining a competitive edge since the release of Drupal 8 is against the large proprietary platforms like Adobe Experience Manager and Sitecore. These companies market a platform of complementary products in a unified stack to their clients through long standing partnerships with major global digital agencies and system integrators. It's no surprise then that Acquia markets their own platform in a similar way to this sector where Drupal serves as the CMS component, complemented by subscription-based tools for content personalisation, customer segmentation and cloud based managed hosting. Acquia have actively courted global digital media agencies with this offering through global partnerships to give Drupal a toe hold in this sector.

Gartner Magic Quadrant CMS

This has meant Acquia has made significant headway into larger Enterprise clients through efforts like being recognised as a "Leader" in the Gartner Magic Quadrant for CMS, lending Drupal itself some profile and legitimacy as a result. This has driven Enterprise CIOs, CTOs and CMOs to push their vendors to offer Drupal services, who have looked to smaller Drupal firms to provide expertise where required. This is beneficial to independent Drupal services firms in the short term, but the large digital agencies will quickly internalise these skills if they see a long term market for Drupal with their global clients.

As one of those independent Drupal firms, PreviousNext have staked a bet that not all Enterprise customers will want to move to a monolithic platform where all components are provided by a single vendor's products. We're seeing sophisticated customers wanting to use Drupal 8 as the unifying hub for a range of best-of-breed SAAS platforms and cloud services. 

Drupal 8 hub

This approach means that Enterprise customers can take advantage of the latest, greatest SAAS platforms whilst retaining control and consistency of their core CMS. It also allows for a high degree of flexibility to rapidly adapt to market changes. 

What does this all mean for Drupal 8?

The outcome of our research and analysis has led to a few key conclusions about what the future looks like for Drupal 8:

  • Drupal's overall market share will steadily fall as smaller sites move to SAAS CMS and self-managed Wordpress installs.
  • The "comfort zone" of Government, Media, Higher Education and "Challenger" Enterprise clients will grow as many of these clients upgrade or switch to Drupal 8 from Drupal 7 or proprietary platforms.
  • Drupal will gain traction in the larger Enterprise as the global digital agencies and system integrators adopt Drupal 8 as a direct alternative to proprietary CMS products. 
  • Independent Drupal services firms have a good opportunity to capitalise on these trends through partnerships with larger global agencies and specialisation in technologies that complement Drupal 8 as a CMS.
  • A culture of code contribution needs to grow within the larger clients and agencies moving to Drupal to ensure the burden of maintaining Drupal's development isn't shouldered by smaller independent firms and individual developers. 

Despite the fact that we've probably already passed "Peak Drupal", we're firm believers that Drupal 8 is the right tool for large scale clients and that the community has the cohesion to adapt to these existential challenges!


Nov 16 2017
Nov 16

At PNX, style guide driven development is our bag. It’s what we love: building a living document that provides awesome reference for all our front end components. And Drupal 8, with its use of Twig, complements this methodology perfectly. The ability to create a single component, and then embed that component and its markup throughout a Drupal site in a variety of different ways without having to use any tricks or hacks is a thing of beauty.

Create a component

For this example we are going to use the much loved collapsible/accordion element. It’s a good example of a rich component because it uses CSS, JS, and Twig to provide an element that’s going to be used everywhere throughout a website.

To summarise, the component is made up of the following files:

collapsible.scss
collapsible.widget.js
collapsible.drupal.js
collapsible.twig
collapsible.svg

The .scss file will end up compiling to a .css file, but we will be using SASS here because it’s fun. The widget.js file is a jQuery UI Widget Factory plugin that gives us some niceties - like state. The drupal.js file is a wrapper that adds our accordion widget as a drupal.behavior. The svg file provides some pretty graphics, and finally the twig file is where the magic starts.

Let’s take a look at the twig file:

{{ attach_library('pnx_project_theme/collapsible') }}
<section class="js-collapsible collapsible {{ modifier_class }}">
  <h4 class="collapsible__title">
    {% block title %}
      Collapsible
    {% endblock %}
  </h4>
  <div class="collapsible__content">
    {% block content %}
      <p>Curabitur blandit tempus porttitor. Cum sociis natoque penatibus et
        magnis dis parturient montes, nascetur ridiculus mus. Morbi leo risus,
        porta ac consectetur ac, vestibulum at eros. Praesent commodo cursus
        magna, vel scelerisque nisl consectetur et. Fusce dapibus, tellus ac
        cursus commodo, tortor mauris condimentum nibh, ut fermentum massa justo
        sit amet risus.</p>
    {% endblock %}
  </div>
</section>

This is a standard-ish BEM based component. It uses a js-* class to attach the widget functionality. We also have a {{ modifier_class }} variable, which can be used by kss-node to alter the default appearance of the collapsible (more on this later). There are two elements in this component: title and content. They are expressed inside a twig block. What this means is we can take this twig file and embed it elsewhere. Because the component is structured this way, when it's rendered in its default state by KSS we will have some default content, and the ability to show its different appearances/styles using modifier_class.

Our twig file also uses the custom Drupal attach_library function which will bring in our component's CSS and JS from the following theme.libraries.yml entry:

collapsible:
  css:
    component:
      src/components/collapsible/collapsible.css: {}
  js:
    src/components/collapsible/collapsible.widget.js : {}
    src/components/collapsible/collapsible.drupal.js : {}
  dependencies:
    - core/jquery
    - core/drupal
    - core/jquery.once
    - core/jquery.ui
    - core/jquery.ui.widget

This is a pretty meaty component so it's got some hefty JavaScript requirements. Not a problem in the end, as it's all going to get minified and aggregated by Drupal core's library system.

And there we have it - a rich javascript component. It’s the building block for all the cool stuff we are about to do.

Use it in a field template override

As it stands we can throw this component as-is into KSS which is nice (although we must add our css and js to KSS manually, attach_library() won't help us there sadly - yet), but we want Drupal to take advantage of our twig file. This is where Twig's embed comes in. Embed in twig is a mixture of the often used include, and the occasionally used extend. It's a super powerful piece of kit that lets us do all the things.

Well, these things anyway: include our twig template's contents, add variables to it, and add HTML to it.

Because this is an accordion, it’s quite likely we’ll want some field data inside it. The simplest way to get this happening is with a clunky old field template override. As an example I’ll use field--body.html.twig:

{% for item in items %}
  {% embed '@pnx_project_theme/components/collapsible/collapsible.twig' %}
    {% block title %}
      {{ label }}
    {% endblock %}
    {% block content %}
      {{ item.content }}
    {% endblock %}
  {% endembed %}
{% endfor %}

Here you can see the crux of what we are trying to achieve. The collapsible markup is specified in one place only, and other templates can include that base markup and then insert the content they need to use in the twig blocks. The beauty of this is any time this field is rendered on the page, all the markup, css and js will be included with it, and it all lives in our components directory. No longer are meaty pieces of markup left inside Drupal template directories - our template overrides are now embedding much richer components.

There is a trick above though, and it's the glue that brings this together. See how we have a namespace in the embed path - all Drupal themes/modules get a twig namespace automatically which is just @your_module_name or @your_theme_name - however it points to the theme or module's templates directory only. Because we are doing style guide driven development and have given so much thought to creating a rich self-contained component, our twig template lives in our components directory instead, so we need to use a custom twig namespace to point there. To do that, we use John Albin's Component Libraries module. It lets us add a few lines to our theme.info.yml file so our theme's namespace can see our component templates:

component-libraries:
  pnx_project_theme:
    paths:
      - src
      - templates

Now anything in /src or /templates inside our theme can be included with our namespace from any twig template in Drupal.
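For example, any other template in the site can now pull the component in directly; a minimal Twig sketch (the template you include it from is up to you), where the blocks simply fall back to their default content if you don't override them:

{# Anywhere in the theme or a module template: #}
{% include '@pnx_project_theme/components/collapsible/collapsible.twig' %}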

Use it in a field formatter

Now let’s get real because field template overrides are not the right way to do things. We were talking about making things DRY weren’t we?

Enter field formatters. At the simple end of this spectrum our formatter needs an accompanying hook_theme entry so the formatter can render to a twig template. We will need a module to give the field formatter somewhere to live.

Setup your module file structure as so:

src/Plugin/Field/FieldFormatter/CollapsibleFormatter.php
templates/collapsible-formatter.html.twig
pnx_project_module.module
pnx_project_module.info.yml

Your formatter lives inside the src directory and looks like this:

<?php

namespace Drupal\pnx_project_module\Plugin\Field\FieldFormatter;

use Drupal\Core\Field\FieldItemListInterface;
use Drupal\Core\Field\FormatterBase;
use Drupal\Core\Form\FormStateInterface;

/**
 * A field formatter for trimming and wrapping text.
 *
 * @FieldFormatter(
 *   id = "collapsible_formatter",
 *   label = @Translation("Collapsible"),
 *   field_types = {
 *     "text_long",
 *     "text_with_summary",
 *   }
 * )
 */
class CollapsibleFormatter extends FormatterBase {

  /**
   * {@inheritdoc}
   */
  public function viewElements(FieldItemListInterface $items, $langcode = NULL) {
    $elements = [];

    foreach ($items as $delta => $item) {
      $elements[$delta] = [
        '#theme' => 'collapsible_formatter',
        '#title' => $items->getFieldDefinition()->getLabel(),
        '#content' => $item->value,
        '#style' => NULL,
      ];
    }

    return $elements;
  }

}

And the hook_theme function lives inside the .module file:

<?php

/**
 * @file
 * Main module functions.
 */

/**
 * Implements hook_theme().
 */
function pnx_project_module_theme($existing, $type, $theme, $path) {
  return [
    'collapsible_formatter' => [
      'variables' => [
        'title' => NULL,
        'content' => NULL,
        'style' => NULL,
      ],
    ],
  ];
}

Drupal magic is going to look for templates/collapsible-formatter.html.twig in our module directory automatically now. Our hook_theme template is going to end up looking pretty similar to our field template:

{% embed '@pnx_project_theme/components/collapsible/collapsible.twig' with { modifier_class: style } %}
  {% block title %}
    {{ title }}
  {% endblock %}
  {% block content %}
    {{ content }}
  {% endblock %}
{% endembed %}

Now jump into the field display config of a text_long field, and you’ll be able to select the collapsible and it’s going to render our component markup combined with the field data perfectly, whilst attaching necessary CSS/JS.

Add settings to the field formatter

Let's take it a bit further. We are missing some configurability here. Our component has a modifier_class with a mini style (a cut down smaller version of the full accordion). You'll notice in the twig example above, we are using the with notation which works the same way for embed as it does for include to allow us to send an array of variables through to the parent template. In addition our hook_theme function has a style variable it can send through from the field formatter. Using field formatter settings we can make our field formatter far more useful to the site builders that are going to use it. Let's look at the full field formatter class after we add settings:

class CollapsibleFormatter extends FormatterBase {

  /**
   * {@inheritdoc}
   */
  public function viewElements(FieldItemListInterface $items, $langcode = NULL) {
    $elements = [];

    foreach ($items as $delta => $item) {
      $elements[$delta] = [
        '#theme' => 'collapsible_formatter',
        '#title' => !empty($this->getSetting('label')) ? $this->getSetting('label') : $items->getFieldDefinition()->getLabel(),
        '#content' => $item->value,
        '#style' => $this->getSetting('style'),
      ];
    }

    return $elements;
  }

  /**
   * {@inheritdoc}
   */
  public function settingsSummary() {
    $summary = [];
    if ($label = $this->getSetting('label')) {
      $summary[] = 'Label: ' . $label;
    }
    else {
      $summary[] = 'Label: Using field label';
    }
    if (empty($this->getSetting('style'))) {
      $summary[] = 'Style: Normal';
    }
    elseif ($this->getSetting('style') === 'collapsible--mini') {
      $summary[] = 'Style: Mini';
    }
    return $summary;
  }

  /**
   * {@inheritdoc}
   */
  public function settingsForm(array $form, FormStateInterface $form_state) {
    $form['label'] = [
      '#title' => $this->t('Label'),
      '#type' => 'textfield',
      '#default_value' => $this->getSetting('label'),
      '#description' => t('Customise the label text, or use the field label if left empty.'),
    ];
    $form['style'] = [
      '#title' => t('Style'),
      '#type' => 'select',
      '#options' => [
        '' => t('Normal'),
        'collapsible--mini' => t('Mini'),
      ],
      '#description' => t('See <a href="https://www.previousnext.com.au/styleguide/section-6.html#kssref-6-1" target="_blank">Styleguide section 6.1</a> for a preview of styles.'),
      '#default_value' => $this->getSetting('style'),
    ];
    return $form;
  }

  /**
   * {@inheritdoc}
   */
  public static function defaultSettings() {
    return [
      'label' => '',
      'style' => '',
    ];
  }

}

There are a few niceties there: it allows us to set a custom label (for the whole field), it automatically assigns the correct modifier_class, it links to the correct section in the style guide in the settings field description, and it adds a settings summary so site builders can see the current settings at a glance. These are all patterns you should repeat.
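If you'd rather not click through the UI on every environment, the same formatter assignment can be made in code, for instance from an update hook. A rough sketch using core's entity display API - the article bundle and the setting values here are illustrative only:

// Use the collapsible formatter for the body field on article's default view display.
$display = \Drupal\Core\Entity\Entity\EntityViewDisplay::load('node.article.default');
$display->setComponent('body', [
  'type' => 'collapsible_formatter',
  'label' => 'hidden',
  'settings' => [
    'label' => 'More details',
    'style' => 'collapsible--mini',
  ],
])->save();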

Let's sum up

We've created a rich interactive BEM component with its own template. The component has multiple styles and displays an interactive demo of itself using kss-node. We've combined its assets into a Drupal library and made the template - which lives inside the style guide's component src folder - accessible to all of Drupal via the Component Libraries module. We've built a field formatter that allows us to configure the component's appearance/style. Without having to replicate any HTML anywhere.

The component directory itself within the style guide will always be the canonical source for every version of the component that is rendered around our site.

Photo of Jack Taranto

Posted by Jack Taranto
Front end developer

Dated 16 November 2017


Nov 06 2017
Nov 06

It's extremely important to have default values that you can rely on for local Drupal development; one of those is "localhost". In this blog post we will explore what is required to make our local development environment appear as "localhost".

In our journey migrating to Docker for local dev we found ourselves running into issues with "discovery" of services eg. Solr/Mysql/Memcache.

In our first iteration we used linking, allowing our services to talk to each other, some downsides to this were:

  • Tricky to compose an advanced relationship; let's use PHP and PhantomJS as an example:
    • PHP needs to know where PhantomJS is running
    • PhantomJS needs to know the domain of the site that you are running locally
    • Wouldn't it be great if we could just use "localhost" for both of these configurations?
  • DNS entries are only available within the containers themselves, so you cannot run utilities outside of the containers, eg. a MySQL admin tool

With this in mind, we hatched an idea...

What if we could just use "localhost" for all interactions between all the containers.

  • If we wanted to access our local projects Apache, http://localhost (inside and outside of container)
  • If we wanted to access our local projects Mailhog, http://localhost:8025 (inside and outside of container)
  • If we wanted to access our local projects Solr, http://localhost:8983 (inside and outside of container)

All this can be achieved with Linux Network Namespaces in Docker Compose.

Network Namespaces

Linux Network Namespaces allow us to isolate processes into their own "network stacks".

By default, the following happens when a container gets created in Docker:

  • Its own Network Namespace is created
  • A new network interface is added
  • Provided an IP on the default bridge network

However, if a container is created and told to share the same Network Namespace with an existing container, they will both be able to interface with each other on "localhost" or "127.0.0.1".
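You can see this with plain docker run before involving Compose at all; a rough sketch (the image names are only examples):

# Starts a container with its own network namespace, as usual.
docker run -d --name php previousnext/php:7.1-dev

# Joins the php container's network namespace - both containers can now reach
# each other on 127.0.0.1 / localhost.
docker run -d --network container:php mariadb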

Here are working examples for both OSX and Linux.

OSX

  • MySQL and Mail share the PHP container's Network Namespace, giving us "localhost" for "container to container" communication.
  • Port mapping for host to container "localhost"
version: "3"

services:
  php:
    image: previousnext/php:7.1-dev
    # You will notice that we are forwarding ports which do not belong to PHP.
    # We have to declare them here because these "sidecar" services are sharing
    # THIS containers network stack.
    ports:
      - "80:80"
      - "3306:3306"
      - "8025:8025"
    volumes:
      - .:/data:cached

  db:
    image: mariadb
    network_mode: service:php

  mail:
    image: mailhog/mailhog
    network_mode: service:php

Linux

All containers share the Network Namespace of the user's host; nothing else is required.

version: "3"

services:
  php:
    image: previousnext/php:7.1-dev
    # This makes the container run on the same network stack as your
    # workstation. Meaning that you can interact on "localhost".
    network_mode: host
    volumes:
      - .:/data

  db:
    image: mariadb
    network_mode: host

  mail:
    image: mailhog/mailhog
    network_mode: host

Trade offs

To facilitate this approach we had to make some trade offs:

  • We only run 1 project at a time. Only a single process can bind to port 80, 8983 etc.
  • Split out the Docker Compose files into 2 separate files, so that each OS can have its own approach.

Bash aliases

Since we split out our Docker Compose file to be "per OS" we wanted to make it simple for developers to use these files.

After a couple of internal developer meetings, we came up with some bash aliases that developers only have to set up once.

# If you are on a Mac.
alias dc='docker-compose -f docker-compose.osx.yml'

# If you are running Linux.
alias dc='docker-compose -f docker-compose.linux.yml'

A developer can then run all the usual Docker Compose commands with the shorthand dc command eg.

dc up -d

This also keeps the command docker-compose available if a developer is using an external project.

Simple configuration

The following solution has also provided us with a consistent configuration fallback for local development.

We leverage this in multiple places in our settings.php, here is 1 example:

$databases['default']['default']['host'] = getenv("DB_HOST") ?: '127.0.0.1';

  • Dev / Stg / Prod environments set the DB_HOST environment variable
  • Local is always the fallback (127.0.0.1)
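The same pattern works for any other service the containers expose. A sketch for the Memcache module - the exact settings structure depends on the module you're configuring, so treat this as illustrative:

$settings['memcache']['servers'] = [
  (getenv('MEMCACHE_HOST') ?: '127.0.0.1') . ':11211' => 'default',
];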

Conclusion

While the solution may have required a deeper knowledge of the Linux kernel, it has yielded a much simpler experience for developers.

How have you managed Docker local dev networking? Let me know in the comments below.

Photo of Nick Schuch

Posted by Nick Schuch
Sys Ops Lead

Dated 7 November 2017


Nov 02 2017
Nov 02

From time to time you may find you need to extend another module's plugins to add new functionality.

You may also find you need to alter the signature of the constructor in order to inject additional dependencies.

However plugin constructors are considered internal in Drupal's BC policy.

So how do you safely do this without introducing the risk of breakage if things change?

In this article we'll show you a quick trick learned from Search API module to avoid this issue.

So let's consider a plugin constructor that has some arguments.

Here's the constructor and factory method for Migrate's SQL map plugin

/**
   * Constructs an SQL object.
   *
   * Sets up the tables and builds the maps,
   *
   * @param array $configuration
   *   The configuration.
   * @param string $plugin_id
   *   The plugin ID for the migration process to do.
   * @param mixed $plugin_definition
   *   The configuration for the plugin.
   * @param \Drupal\migrate\Plugin\MigrationInterface $migration
   *   The migration to do.
   */
  public function __construct(array $configuration, $plugin_id, $plugin_definition, MigrationInterface $migration, EventDispatcherInterface $event_dispatcher) {
    parent::__construct($configuration, $plugin_id, $plugin_definition);
    $this->migration = $migration;
    $this->eventDispatcher = $event_dispatcher;
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition, MigrationInterface $migration = NULL) {
    return new static(
      $configuration,
      $plugin_id,
      $plugin_definition,
      $migration,
      $container->get('event_dispatcher')
    );
  }

As you can see, there are two additional dependencies injected beyond the standard plugin constructor arguments - the event dispatcher and the migration.

Now if you subclass this and extend the constructor and factory to inject additional arguments, should the base plugin change its constructor, you're going to be in trouble.

Instead, you can use this approach that Search API takes - leave the constructor as is (don't override it) and use setter injection for the new dependencies.

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container, array $configuration, $plugin_id, $plugin_definition, MigrationInterface $migration = NULL) {
    $instance = parent::create(
      $container,
      $configuration,
      $plugin_id,
      $plugin_definition,
      $migration
    );
    $instance->setFooMeddler($container->get('foo.meddler'));
    return $instance;
  }
    
  /**
   * Sets foo meddler.
   */
  public function setFooMeddler(FooMeddlerInterface $fooMeddler) {
    $this->fooMeddler = $fooMeddler;
  }

Because the signature of the parent create method is enforced by the public API of \Drupal\Core\Plugin\ContainerFactoryPluginInterface you're guaranteed that it won't change.

Thanks to Thomas Seidl for this pattern

Photo of Lee Rowlands

Posted by Lee Rowlands
Senior Drupal Developer

Dated 3 November 2017

Comments

Nice!! Thank you for sharing it!



Oct 26 2017
Oct 26

Services like dialogflow (formerly api.ai) do a much better job of natural language parsing (NLP) if they're aware of your entity names in advance.

For example, it can recognize that show me the weather in Bundaberg is a request for weather in Bundaberg, if you've told it ahead of time that Bundaberg is a valid value for the City entity.

Having the entity values automatically update in your service of choice when they're created and changed in Drupal makes this much more efficient.

This article will show you how to achieve that.

This is where the chatbot_api_entities sub-module comes in.

When you enable this module you can browse to Admin -> Config -> Web Services -> Entity Collections to create a collection.

The UI looks something like this:

Adding an entity collection to send to dialogflow in Drupal

Each collection comprises an entity-type and bundle as well as a push handler and a query handler.

By default Chatbot API Entities comes with a query handler for each entity-type and a specific one for Users to exclude blocked users.

The api_ai_webhook module comes with a push handler for pushing entities to your dialogflow/api.ai account.

By default, these plugins query based on available entities and the push handler pushes the entity labels.

Writing your own query handler

If, for example, you don't want to extract entities from entity labels - e.g. you might wish to collect unique values from a particular field - you can write your own query handler.

Here's an example that will query speaker names from a session content type. The collection handed to the push handler will contain all published sessions.

namespace Drupal\your_module\Plugin\ChatbotApiEntities\QueryHandler;

use Drupal\chatbot_api_entities\Entity\EntityCollectionInterface;
use Drupal\chatbot_api_entities\Plugin\QueryHandlerBase;
use Drupal\Core\Entity\EntityTypeManagerInterface;


/**
 * Defines a query handler that just uses entity query to limit as appropriate.
 *
 * @QueryHandler(
 *   id = "speakers",
 *   label = @Translation("Query speakers from sessions"),
 * )
 */
class SpeakerQuery extends QueryHandlerBase {

  /**
   * {@inheritdoc}
   */
  public function query(EntityTypeManagerInterface $entityTypeManager, array $existing = [], EntityCollectionInterface $collection) {
    $storage = $entityTypeManager->getStorage('node');
    return $storage->loadMultiple($storage->getQuery()
      ->condition('type', 'session')
      ->exists('field_speaker_name')
      ->condition('status', 1)
      ->execute());
  }

  /**
   * {@inheritdoc}
   */
  public function applies($entity_type_id) {
    return $entity_type_id === 'node';
  }

}

Writing your own push handler

Whilst we've written our own query handler to load entities that we wish to extract values from, we need to write our own push handler to handle sending anything other than the label.

Here's an example push handler that will push field values as entities to Api.ai/dialogflow

<?php

namespace Drupal\your_module\Plugin\ChatbotApiEntities\PushHandler;

use Drupal\api_ai_webhook\Plugin\ChatbotApiEntities\PushHandler\ApiAiPushHandler;
use Drupal\chatbot_api_entities\Entity\EntityCollection;
use Drupal\Core\Entity\EntityInterface;

/**
 * Defines a handler for pushing entities to api.ai.
 *
 * @PushHandler(
 *   id = "api_ai_webhook_speakers",
 *   label = @Translation("API AI entities endpoint (speakers)")
 * )
 */
class SpeakerPush extends ApiAiPushHandler {

  /**
   * {@inheritdoc}
   */
  protected function formatEntries(array $entities, EntityCollection $entityCollection) {
    // Format for API.ai/dialogflow.
    return array_map(function ($item) {
      return [
        'value' => $item,
        'synonyms' => [],
      ];
    },
    // Key by name to remove duplicates.
    array_reduce($entities, function (array $carry, EntityInterface $entity) {
      $value = $entity->field_speaker_name->value;
      $carry[$value] = $value;
      return $carry;
    }, []));
  }

}

Learn more

If you're interested in learning more about Chatbots and conversational UI with Drupal, I'm presenting a session on these topics at Drupal South 2017, the Southern Hemisphere's biggest Drupal Camp. October 31st is the deadline for getting your tickets at standard prices, so if you plan to attend, be sure to get yours this week to avoid the price hike.

I hope to see you there.

Photo of Lee Rowlands

Posted by Lee Rowlands
Senior Drupal Developer

Dated 27 October 2017


Oct 26 2017
Oct 26

In this week's Lightning talk, I go through a case study on an investigation into Deadlocks and Render caching and why cache contexts are so important to get right. Check out the video below to find out how we were able to withstand 10x the throughput with smarter caching.
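As a quick refresher on what getting cache contexts right looks like in code, render arrays declare the contexts their output varies by. A minimal, generic sketch (the markup itself is illustrative):

$build['greeting'] = [
  '#markup' => t('Hello @name', ['@name' => \Drupal::currentUser()->getDisplayName()]),
  '#cache' => [
    // The output varies per user, so declare it. A missing context serves one
    // user's cached markup to everyone; an overly broad one multiplies cache
    // entries and hurts hit rates under load.
    'contexts' => ['user'],
  ],
];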

Oct 19 2017
Oct 19

In a recent project we were outputting CSV and wanted to test that the file contents were valid.

Read on for a quick tip on how to achieve this with Drupal 8's BrowserTestBase

Basically, the easiest way to validate and parse CSV in PHP is with the built in fgetcsv function.

So how do you go about using that inside a functional test? In that instance we're not dealing with a file, so it's not your ordinary approach for fgetcsv.

The answer is to put the contents into an in-memory stream (php://memory) and use fgetcsv on that.

The code looks something like this:

    $response = $this->getSession()
      ->getDriver()
      ->getContent();
    // Put contents into a memory stream and use fgetcsv to parse.
    $stream = fopen('php://memory', 'r+');
    fwrite($stream, $response);
    rewind($stream);
    $records = [];
    // Get the header row.
    $header = fgetcsv($stream);
    while ($row = fgetcsv($stream)) {
      $records[] = $row;
    }
    fclose($stream);

There you have it: you now have the header in $header and the rows in $records, and can do any manner of asserts that you need to validate that the CSV generation works as expected.
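From there it's ordinary PHPUnit assertions; a quick sketch where the column names and counts are purely illustrative:

    $this->assertEquals(['Title', 'Author', 'Created'], $header);
    $this->assertCount(10, $records);
    $this->assertEquals('My first article', $records[0][0]);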

Photo of Lee Rowlands

Posted by Lee Rowlands
Senior Drupal Developer

Dated 20 October 2017


Oct 16 2017
Oct 16

Drupal 8.4 is stable! With 8.3 coming to end of life, it's important to update your projects to the latest and greatest. This blog will guide you through upgrading from Drupal core 8.3 to 8.4 while avoiding those nasty and confusing composer dependency errors.

The main issues with the upgrade to Drupal core 8.4 are dependency conflicts between Drush and Drupal core. The main conflict is that both Drush 8.1.x and Drupal 8.3 use the 2.x version of Symfony libraries, while Drupal 8.4 has been updated to use Symfony 3.x. This means that when using composer to update Drupal core alone, composer will complain about conflicts in dependencies, since Drush depends on Symfony 2.x.

Updating your libraries

Note: If you are using Drush 8.1.15 you will not have these issues as it is now compatible with both Symfony 2.x and 3.x

However, if you are using Drush < 8.1.15 (which a lot of people will be on), running the following command will give you a dependency conflict:

composer update drupal/core --with-dependencies

Resulting in an error message, followed by a composer trace:

Your requirements could not be resolved to an installable set of packages.

The best way to fix this is to update both Drupal core and Drush at the same time. Drush 8.x is not compatible with Drupal 8.4 so you will need to update to Drush 9.x.

composer update drupal/core drush/drush --with-dependencies
composer require "drush/drush:~9.0"

Some people have reported success with simply running a require on both updated versions of Drupal and Drush at the same time, but this did not work for me:

composer require "drupal/core:~8.4" "drush/drush:~9.0"

What next?

Great, you're on the latest versions of both core and drush, but what's next? Well, that depends on a lot of things like what contributed and custom modules your project is running, how you're deploying your site, and what automated tests you are running. As I can't possibly cover all bases, I'll go through the main issues we encountered.

First things first, you'll need to get your site's database and configuration updated. I highly recommend running your database update hooks and exporting your site's configuration before proceeding any further.

Next, you'll want to ensure that all of your deployment tools are still working. Here at PreviousNext our CI/CD tools call Make commands which are essentially just wrappers around one or more Drush commands.

For the most part, the core Drush commands (that is, the commands that ship with drush) continued working as expected, with a couple of small caveats:

1. You can no longer pipe a SQL dump into the drush sql-cli (sqlc) command.

Previously, we had:
drush sqlc < /path/to/db.sql
Now we have:
`eval drush sql-connect` < /path/to/db.sql

Note: As of Drush 9.0-beta7 this has now been fixed, meaning the old version will work again!

2. The drush --root option no longer works with relative paths

Previously, our make commands all ran Drush with the --root (or -r) option relative to the repository root:
./bin/drush -r ./app some-command
Now it must be an absolute path, or Drush will complain about not being able to find the Drupal settings:
./bin/drush -r /path/to/app some-command

3. Custom Drush commands

For custom Drush commands, you will need to port them to use the new object oriented style approach and put the command into a dedicated module. Since version 9.0-beta5, Drush has dropped support for the old drush.inc style approach that could be used to add commands to a site without adding a new module.

For an example on this, take a look at our drush_cmi_tools library which provides some great extensions for importing and exporting config. This PR shows how we ported these commands to the new Drush 9 format.

For more information on porting commands to Drush 9, check out Moshe Weitzman's blog on it.

Other gotchas

Following the Drush upgrades, your project will need various other updates based on the modules and libraries it uses. I'll detail some issues I faced when updating the Transport for NSW site below.

1. Stale bundles in the bundle field map key value collection

Added as part of this issue, views now throws warnings similar to "A non-existent config entity name returned by FieldStorageConfigInterface::getBundles(): field name: field_dates, bundle: page" for fields that are in the entity bundle field map but no longer exist on the site. We had a handful of these fields which threw warnings on every cache clear. To fix this, simply add an update hook which clears out these stale fields from the entity.definitions.bundle_field_map keyvalue collection:

/**
 * Fix entity.definitions.bundle_field_map key store with old bundles.
 */
function my_module_update_8001() {
  /** @var \Drupal\Core\KeyValueStore\KeyValueFactoryInterface $key_value_factory */
  $key_value_factory = \Drupal::service('keyvalue');
  $field_map_kv_store = $key_value_factory->get('entity.definitions.bundle_field_map');
  $node_map = $field_map_kv_store->get('node');
  // Remove the field_dates field from the bundle field map for the page bundle.
  unset($node_map['field_dates']['bundles']['page']);
  $field_map_kv_store->set('node', $node_map);
}

2. Custom entities with external uri relationships throw fatal errors when deleted while menu_link_content is installed

The menu_link_content module now has an entity_predelete hook that looks through an entity's uri relationships and tries to find any menu links that link to that specific route, and if so deletes them. When the uri is external, an error is thrown when it tries to get the route name: "External URLs do not have an internal route name.". See this issue for more information.

3. Tests that submit a modal dialog window will need to be altered

This is a very edge case issue, but will hopefully help someone! In older versions of jQuery UI, the buttons that were added to the bottom of the modal form for submission had an inner span tag which could be clicked as part of a test. For example, in Linkit's LinkitDialogTest. This span no longer exists, and attempting to "click" any other part of that button in a similar way will throw an error in PhantomJS. To get around that simply change your test to do something similar to the following:

$this->click('.ui-dialog button:contains("Save")');

Kudos to jhedstrom for finding this one. See this issue for more information.

Conclusion

Personally, I found the upgrade to be quite tedious for a minor version upgrade. Thankfully, our project has a large suite of functional/end-to-end tests which really helped tease out the issues and gave us greater confidence that the site was still functioning well post-upgrade. Let me know in the comments what issues you're facing!

Finally, take a look at Lee's blog on some of the major changes in 8.4 for some more insight into what you might need to fix.

Photo of Adam Bramley

Posted by Adam Bramley
Senior Drupal Developer

Dated 16 October 2017


Oct 03 2017
Oct 03

Last week I was fortunate enough to attend and deliver a session at DrupalCon Vienna. The session was based around leveraging and getting productive with the automated testing tools we use in the Drupal community.

For the kind of large scale projects we work on, it's essential that automated testing is a priority and firmly embedded in our technical culture. Stability and maintainability of the code we're working on helps to build trusting relationships and happy technical teams. I have for a long time been engaged with the developments of automated testing in Drupal core and internally we've worked hard to adapt these processes into the projects we build and fill-in any blanks where required.

I was fortunate enough to be selected to share this at DrupalCon Vienna. Without further ado, I present, Test all the things! Get productive with automated testing in Drupal 8:

[embedded content]

Our current testing ethos is based around using the same tools we use for core and contrib on our bespoke Drupal project builds. Doing so allows us to context-switch between our own client work and contributed project or core work. To make this work we've addressed a few gaps in what's available to us out of the box.

Current State of Testing

I had some great conversations after the session with developers who were just starting to explore automated testing in Drupal. While the tools at our disposal are powerful, there is still lots of Drupal-specific knowledge required to become productive. My hope is the session helped to fill in some of the blanks in this regard.

E2E Testing

Because all of the test cases in core use isolated, individually set-up environments/installations, end-to-end testing is tricky without some additional work. One of the touch points in the session was based around skipping the traditional set-up processes and using the existing test classes against pre-provisioned environments. Doing so replicates production-like environments in a test suite, which helps to provide a high level of confidence that tests are asserting behaviours of the whole system. Bringing this into core as a native capability is being discussed on drupal.org and was touched on in the session.

JS Unit Testing

One thing Drupal core has yet to address is JavaScript unit testing. For complex front-ends, testing JS application code with a browser can become clumsy and hard to maintain. One approach we've used to address this is Jest. This nicely complements front-ends where individual JavaScript modules can be isolated and individually tested.
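To give a flavour of it, a trivial Jest test looks something like this - the function under test is inlined purely to keep the sketch self-contained:

// sum.test.js
function sum(a, b) {
  return a + b;
}

test('sum adds two numbers', () => {
  expect(sum(1, 2)).toBe(3);
});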

Summing up, attending DrupalCon Vienna, presenting the session and meeting the members of the broader community was a great experience. I'm hopeful my session was able to contribute to the outstanding quality of sessions and technical discussions.

Photo of Sam Becker

Posted by Sam Becker
Senior Developer

Dated 3 October 2017


Sep 24 2017
Sep 24

Drupal 8.4.0 comes out in October, and at that time 8.3.x will be end-of-life (EOL).

There are two major vendor updates in 8.4.0 so the time to test your contrib and client projects is now.

In this post we talk about the coming changes and how to test your client and contrib projects.

The two major vendor updates in Drupal 8.4.0 are as follows:

You can start testing now by updating to Drupal 8.4.0-rc2.

Symfony 3.x

If your project interacts with Symfony directly at a lower level (rather than using Drupal core APIs that in turn use Symfony), you should be sure to review your code to make sure you're not using any of the APIs impacted by the BC breaks between 2.x and 3.x. Hopefully, your automated testing will reveal these regressions for you (you have automated testing, right?). See the Symfony change list for the details of BC breaks.

One thing to note with the Symfony update is that whilst core dependencies were updated, your project may rely on other third-party PHP libraries that have dependencies on Symfony 2.x components. This may cause you issues with your update - and require you to update other dependencies at the same time - including drush - so testing sooner rather than later is recommended. If you find you're having issues with composer dependencies, we have another blog post dedicated to debugging them.

jQuery 3.x

While it's most likely that you'll have automated tests to catch any issues with the Symfony upgrade, it's less likely that you'll have test coverage for the jQuery update, as JavaScript test coverage is typically low in Drupal projects, particularly in contrib modules.

Of note in the jQuery update are several BC breaks - listed here http://blog.jquery.com/2016/06/09/jquery-3-0-final-released/ and http://jquery.com/upgrade-guide/3.0/. This may have a major impact on contrib projects that are heavy on JavaScript - and your client project code if you have a large amount of custom JavaScript, both in modules and your theme.

Of particular interest

  • .load removed
  • .unload removed
  • .error removed
  • .bind deprecated (use .on)
  • .delegate deprecated
  • .on('ready', function() {}) removed
  • jQuery('#') and .find('#') throw invalid syntax errors
  • .andSelf() removed (use .addBack())

A recommended approach to auditing and tackling this is to add the jQuery migrate plugin to your project, and begin testing whilst watching the JavaScript console to detect deprecation notices thrown from the plugin.
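One way to wire that in - a sketch that assumes you've copied the migrate plugin into your theme, with theme and file names purely illustrative - is to declare it as a library that depends on core/jquery and attach it from your theme's info file:

# mytheme.libraries.yml
jquery-migrate:
  js:
    js/jquery-migrate.js: {}
  dependencies:
    - core/jquery

# mytheme.info.yml - attaches the library on every page rendered by the theme.
libraries:
  - mytheme/jquery-migrate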

A word on testing

Finally, if you are reading this and thinking "I really need to add some test coverage to my project", one of our team, Sam Becker, is presenting on all things testing at DrupalCon Vienna this week. If you can't wait that long, check out his session from the last Drupal South.

Photo of Lee Rowlands

Posted by Lee Rowlands
Senior Drupal Developer

Dated 25 September 2017


Sep 18 2017
Sep 18

Conversational UIs are the next digital frontier.

And as always, Drupal is right there on the frontier, helping you leverage your existing content and data to power more than just web-pages.

Want to see it in action? Click 'Start chatting' and chat to our Drupal site.


So what's going on here?

We're using the Chatbot API module in conjunction with the API AI webhook module to respond to intents. We're using API.ai for the natural language parsing and machine learning. And we're using the new Chatbot API entities sub-module to push our Drupal entities to API.ai so it is able to identify Drupal entities in its language parsing.

A handful of custom Chatbot API intent plugins wire up the webhook responses, and that's it - as we create content, users and terms on our site, our chatbot automatically knows how to surface them. As we monitor the conversations in the API.ai training area, we can expand on our synonyms and suggestions to increase our matching rates.

So let's consider our team member Eric Goodwin. If I ask the chatbot about Eric, at first it doesn't recognise my question.

Screenshot: Eric isn't recognised as an entity in the chatbot conversation

So I edit Eric's user account and add some synonyms

Screenshot: adding synonyms to Eric's account

And then after running cron - I can see these show up in API.ai

Screenshot: Eric's synonyms now available in the API.ai console

So I then ask the bot again 'Who is eric?'

Screenshot: the default response

But again, nothing shows up. Now I recognise the response 'Sorry, can you say that again' as what our JavaScript shows if the response is empty. But just to be sure - I check the API.ai console to see that it parsed Eric as a staff member.

Screenshot: the intent is matched as Bio and Eric is identified as a Staff entity

So I can see that the Bio Intent was matched and that Eric was correctly identified as the Staff entity. So why was the response empty? Because I need to complete Eric's bio in his user account. So let's add some text (apologies Eric, you can refine this later).

Screenshot: adding a biography to Eric's account

Now I ask the bot again (note I've not reloaded or anything, this is all in real time).

Screenshot: Eric's bio in the bot response - a working response!

And just like that, the bot can answer questions about Eric.

What's next?

Well, API.ai provides integrations with Google Assistant and Facebook Messenger, so we plan to roll out those too. In our early testing we can use this to power an Actions on Google app with the flick of a switch in API.ai. Our next step is to expand on the intents to provide rich content tailored to those platforms instead of just the plain text that is required for chatbot and voice responses.

Credits

Thanks go to @gambry for the Chatbot API module and for being open to the feature addition to allow Drupal to push entities to the remote services.

And credit to the amazing Rikki Bochow for building the JavaScript and front-end components to incorporate this into our site so seamlessly.

Further Reading

Photo of lee.rowlands

Posted by lee.rowlands
Senior Drupal Developer

Dated 18 September 2017


Sep 15 2017
Sep 15

In this post, we will show the pain points of running Xdebug in a Docker local development environment and how we overcame them.

Xdebug is essential when it comes to local development.

Normally the hardest part about configuring Xdebug is setting the IP address to which it should send its debugging data (eg. PHPStorm).

Configuring this with Vagrant was very simple since we were able to use the following setting for it to "Just Work":

xdebug.remote_connect_back = 1

Remote Connect Back is awesome: it allows Xdebug to send its debugger information back to the IP address making the web request, which is where PHPStorm is running.

However, running Xdebug with "Docker for Mac" (D4M) is hard. D4M runs over multiple networks:

  • OSX host
  • Linux VM

This means that the IP address Xdebug ends up sending data to is the IP address of the Linux VM.

A diagram showing where xdebug traffic stops

Existing solutions in the Docker community usually leave the developer with additional configuration that they have to manage:

eg. https://forums.docker.com/t/ip-address-for-xdebug/10460/21

To solve this we wrote a tool called "D4M TCP Forwarder", which receives requests sent to a port on the D4M host and forwards them to the OSX user's host IP.

https://github.com/nickschuch/d4m-tcp-forwarder

Diagram showing Xdebug traffic being forwarded

To add this to your project, you simply include this service in your Docker Compose file:

xdebug:
  # Forwards traffic hitting the Xdebug port on the Docker VM back to the OSX host.
  image: nickschuch/d4m-tcp-forwarder
  # Host networking so the forwarder can listen on the VM's network directly.
  network_mode: host

The solution results in:

  • No configuration required for individual developers
  • A reusable solution for the community

Photo of nick.schuch

Posted by nick.schuch
Sys Ops Lead

Dated 15 September 2017


Sep 05 2017
Sep 05

We're starting up our Lightning talks again during our weekly developer meetings here at PreviousNext. This week was about wiring up a straightforward plugin.js and extending CKEditorPluginBase to create a custom CKEditor widget in Drupal 8.

Watch the video for a run through of how this is done in Drupal 8.
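
If you prefer text to video, here's a rough sketch of the plugin.js side of such a widget - the 'highlight' plugin name, command and markup are entirely hypothetical, and the matching Drupal plugin class that extends CKEditorPluginBase isn't shown:

// js/plugins/highlight/plugin.js
CKEDITOR.plugins.add('highlight', {
  icons: 'highlight',
  init: function (editor) {
    // Register a command that wraps the current selection in a <mark> element.
    // (Simplified: a real widget would handle empty selections and re-editing.)
    editor.addCommand('applyHighlight', {
      exec: function (editor) {
        var text = editor.getSelection().getSelectedText();
        editor.insertHtml('<mark>' + text + '</mark>');
      }
    });
    // Expose the command as a toolbar button that text formats can enable.
    editor.ui.addButton('Highlight', {
      label: 'Highlight text',
      command: 'applyHighlight',
      toolbar: 'insert'
    });
  }
});

The Drupal side is then, roughly speaking, an annotated plugin class extending CKEditorPluginBase that points getFile() at this file and returns the button metadata from getButtons().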

Aug 22 2017
Aug 22

Browsing through the interweb I happened across this bold statement a few weeks ago. A statement so bold, it inspired me to write a blog post in response.

Sure, Scrum Masters being co-located with their teams is the best and most favourable scenario for teams working on complex projects, but to go as far as to say that Scrum Masters are ONLY effective in this instance - nope. Sorry, I have to graciously disagree.

Obviously there are different challenges that come with facilitating Agile ceremonies and interacting with the team remotely as opposed to face-to-face. A completely different approach needs to be taken on my part to keep the team engine purring away.

Personally, the “different approach” I take with managing remote teams, as opposed to co-located teams, is to ensure uber transparency and over-communication regarding all of the work that the team currently has in flight. On my part this includes:

  • Ensuring that work in flight includes “Acceptance Criteria” and a “Definition of Done” agreed to by both the team and the client. This ensures that both the client and the team have an agreed vision of the product we are building. More importantly, it removes the need to make assumptions about a solution on both sides

  • The use of an online and up-to-date Kanban board that both the client and the team can freely access

  • Complete honesty with the client and the team in regards to all aspects of the project, especially during the trickier and more stressful moments of project delivery. If something is starting to go pear-shaped, call it out early - don’t hide it!

There are a plethora of tools now available that help enable remote collaboration. I thought it might be worthwhile sharing some of the tools that the teams at PNX use to make remote collaboration simpler.

Slack / GoToMeeting / Google Hangouts

With a large percentage of our internal staff located across Australia, these are PNX's go-to tools for remote collaboration. We utilise both GoToMeeting and Google Hangouts (depending on individual client preferences) to enable our daily stand-ups with our clients. Daily stand-ups and the ability to quickly ask a question via a hangout or GoToMeeting have drastically reduced the amount of email correspondence between PNX and our clients. The result? A reduction in idle time, as questions can be answered relatively quickly instead of waiting for a reply via email.

Access to an online Kanban board

The ultimate in uber transparency. There is nothing more satisfying for an Agile Delivery Manager than to see tickets move to the right of the board. Likewise for our clients! Each ticket on the board details who the work is assigned to and the status of the task. At a glance, anyone with access to the project kanban board can see the status of work for a given sprint.

The most common question I'm asked about working with remote teams is "how do you facilitate an Agile ceremony like a Retrospective with a remote team?" My favourite go-to tool for this is Google Sheets. Before each retro, I spend half an hour putting the retro board together on a Sheet. I try to mix it up every retro as well, using different retro techniques to keep things interesting. I mark defined spaces on the sheet where comments are to go, and I share the sheet with the team. Facilitating the Retrospective via a video conference (if possible), I timebox the retro using a timer app shared on my desktop. The team then fill in the Google Sheet in real time - the virtual equivalent of walking up to a physical board and placing a post-it up there! I have replaced all of the original text captured during the retro with lorem ipsum text. What's said in retro - stays in retro! We had a little fun with the below retro as you can see!

For sensitive conversations - A video conference (or the phone)

The tools above are handy for enabling remote collaboration, but for sensitive conversations with a colleague or client in a remote location, a video conference (where you can see each other) is a must. Sensitive conversations are fraught with danger via chat or email, and a neutral tone is difficult to convey when we’re in the thick of things. If a video conference is not possible, though, simply pick up the phone.

I’d love to hear about some of the tools you use with your team to enable remote working. What are your recommended tools of choice?

Photo of irma.kelly

Posted by irma.kelly
Agile Delivery Manager

Dated 22 August 2017


