Feb 11 2017

In any decoupled architecture, people tend to focus on the pieces that will fit together. But what nobody ever tells you is: watch out for the cracks!

The cracks are the integration points between the different components. It’s not GraphQL as a communication layer; it’s that no one thinks to log GraphQL inconsistencies when they occur. It’s not “what’s my development environment”, it’s “how do these three development environments work on my localhost at the same time?”. It’s the thousand little complexities that you don’t think about, basically because they aren’t directly associated with a noun. We’ve discovered “crack” problems like this in technical architecture and devops, communication, and even project management. They add up to a lot of unplanned time, and they have presented some serious project risks.

A bit more about my recent project with Amazee Labs. It’s quite a cool stack: several data sources feed into Drupal 8, which offers an editorial experience and GraphQL endpoints. Four React/Relay sites sit in front, consuming the data and even offering an authenticated user experience (Auth0). I’ve been working with brilliant people: Sebastian Siemssen, Moshe Weitzman, Philipp Melab, and others. It has taken all of us to deal with the crack complexity.

The first crack appeared as we were setting up environments for our development teams. How do you segment repositories? They get deployed to different servers, and run in very different environments. But they are critically connected to each other. We decided to have a separate “back end” repo, and separate repos for each “front end” site. Since Relay needs to compile the entire data schema on startup, this means that every time the back end is redeployed with a data model change, we have to automatically redeploy the front end(s). For local development, we ended up building a mock data backend in MongoDB running in Docker. Add one more technology to your support list, with the normal attendant support and maintenance issues.

DevOps in general is more complicated and expensive in a decoupled environment. It’s all easy at first, but at some point you have to start connecting the front- and back-ends on people’s local development environments. Cue obvious problems like port conflicts, but also less obvious ones. The React developers don’t know anything about Drupal, Drush, or PHP development environments. This means your environment setup needs to be VERY streamlined, even idiot-proof. Your devops team has to support a much wider variety of users than normal. Two of our front-enders had setups that made spinning up the back-end take more than 30 minutes. 30 minutes! We didn’t even know that was possible with our stack. The project coordinator has to budget significant time for this kind of support and maintenance.

Some of the cracks just mean you have to code very carefully. At one point we discovered that certain kinds of invalid schema are perfectly tolerable to the GraphQL module. We could query everything just fine - but React couldn’t compile the schema, and gave cryptic errors that were hard to track down. Or what about the issues where there are no error messages to work with? CORS problems were notoriously easy to miss, until everything broke without clear errors. Some of these are impossible to avoid. The best you can do is be thorough about your test coverage, add integration tests which consider all environments, and document all the things.

Not all the cracks are technological; some are purely communication. In order to use a shared data service, we need a shared data model and API. So how do you communicate and coordinate that between 5 teams and 5 applications? We found this bottleneck extremely difficult. At first, it simply took a long time to get API components built. We had to coordinate so many stakeholders that the back-end data arch and GraphQL endpoints got way behind the front-end sites. At another point, one backender organically became the go-to for everything GraphQL. He was a bottleneck within weeks, and was stuck with all the information siloed in his head. This is still an active problem area for us. We’re working on thorough and well-maintained documentation as a reference point, but this costs time as well.

Even project managers and scrum masters found new complexities. We had more than 30 people working on this project, and everyone had to be well coordinated and informed. You certainly can’t do scrum with 30 people together - the sprint review would take days! But split it out into many smaller teams and your information and coordination problems just get much harder. Eventually we found our solution: we have 3 teams, each with their own PO, frontender(s) and backender(s), who take responsibility for whole features at a time. Each team does its own, quite vanilla, scrum process. Layered on top of this, developers are in groups which cut across the scrum teams, which have coordination meetings and maintain documentation and code standards. All the back-enders meet weekly and work with the same standards, but the tightest coordination is internal to a feature. So far this is working well, but ask me again in a few months. :)

Working in a fully decoupled architecture and team structure has been amazing. It really is possible, and it really does provide a lot more flexibility. But it demands a harder focus on standards, communication, coordination, and architecture. Sometimes it’s not about the bricks; it’s about the mortar between them. So the next time you start work on a decoupled architecture, watch out for the cracks!

Feb 05 2017

Drupal Case Studies Available

Michael J. Ross 2017-02-05

This article was published in the print magazine Drupal Watchdog, Volume 7 Issue 1, 2017-03-25, on page 8, by Linux New Media. The magazine was distributed at DrupalCon Baltimore, 2017-04-24.

Businesses and other types of organizations that plan on building substantial new Drupal-based websites can learn from others that have already traveled along that path, by reading and learning from case studies, each of which is a report that documents one or more aspects of a completed website development project. This typically includes the project's context and goals, an overview of the project stakeholders, known technical constraints, the reasons for choosing Drupal over competing solutions, the website's design, technical challenges and how they were overcome, and pertinent notes about contributed and custom modules used. Drupal.org offers more than 200 such case studies, organized by category and core code version.

Copyright © 2017 Michael J. Ross. All rights reserved.
Feb 05 2017

Monitoring Multiple Drupal Websites

Michael J. Ross 2017-02-05

This article was published in the print magazine Drupal Watchdog, Volume 7 Issue 1, 2017-03-25, on page 9, by Linux New Media. The magazine was distributed at DrupalCon Baltimore, 2017-04-24.

Organizations that must continuously oversee a large number of websites — even if they are owned by others, as in the case of web development shops that offer maintenance contracts — have a variety of solutions at hand to automate the process as much as possible, to reduce the time spent by staff members manually checking those sites, and to reduce the chances of a problem escaping notice, such as a malware infection not yet reported by site visitors. Some of the solutions are commercial services (e.g., Drupal Monitor and Lumturio), whereas others are contributed modules that would necessitate you or your staff implementing them on your own servers (e.g., the Monitoring module).

Copyright © 2017 Michael J. Ross. All rights reserved.
Feb 05 2017

Drupal Sponsorship by Organizations

Michael J. Ross 2017-02-05

This article was published in the print magazine Drupal Watchdog, Volume 7 Issue 1, 2017-03-25, on page 9, by Linux New Media. The magazine was distributed at DrupalCon Baltimore, 2017-04-24.

As with all successful open source projects, Drupal's sustainability and growth depend upon the participation of contributors — including web agencies, corporations, universities, and other organizations that build websites using Drupal and then share their work with the rest of the community. Such sponsorship is becoming increasingly visible and rewarded, and can confer various substantial benefits to businesses that are willing to make such an investment — one that may be as simple as asking your team's developers to prepare and submit custom modules to Drupal.org. Each contribution increases the long-term strength of the foundation for your business's website, helps to attract quality designers and developers to your team, and much more.

Copyright © 2017 Michael J. Ross. All rights reserved.
Feb 04 2017

Drupal Distributions for Nonprofits

Michael J. Ross 2017-02-04

This article was published in the print magazine Drupal Watchdog, Volume 7 Issue 1, 2017-03-25, on page 8, by Linux New Media. The magazine was distributed at DrupalCon Baltimore, 2017-04-24.

Instead of building a new website by starting with a vanilla installation of Drupal, a not-for-profit organization could save much time and money by starting with a Drupal "distribution", which is a single installable package that contains the core Drupal code along with contributed and custom modules, appropriate configuration settings, and a theme, or some combination thereof — basically a generic prebuilt website for a particular purpose or category. There are more than a thousand such distributions available, and many of them are intended for nonprofit organizations, including Open Outreach (for community organizations), OpenChurch (religious institutions), OpenPublic (governments), and Drupal Commons (collaborative communities, such as Drupal Camps).

Copyright © 2017 Michael J. Ross. All rights reserved.
Feb 04 2017

Drupal Distributions for E-commerce

Michael J. Ross 2017-02-04

This article was published in the print magazine Drupal Watchdog, Volume 7 Issue 1, 2017-03-25, on page 9, by Linux New Media. The magazine was distributed at DrupalCon Baltimore, 2017-04-24.

Of all the best-known Drupal distributions, most are developed for the use of nonprofit organizations, and yet some are clearly intended for online businesses. The most popular of all, Commerce Kickstart, is arguably also the most generic — as one would expect, because it has been designed from the ground up for building just about any type of Internet-based store. In contrast, some distributions are ideal for a specific industry. One example is Restaurant (Drupal 7) / Open Restaurant (Drupal 8). For the entrepreneur with a hotel, B&B, or other type of vacation rental business that requires an online booking engine, potential solutions include Roomify Casa and Easy Booking, both of which use the essential Rooms module.

Copyright © 2017 Michael J. Ross. All rights reserved.
Feb 03 2017

In my previous blog post on managing microsites with Drupal 8 I promised to write something further and fuller about designing web APIs. This is less directly about Drupal 8, but I will comment on how to implement the recommendations here in Drupal 8.

These are the things that I take time to think about when building a web API.

As a developer, it’s all too easy, and too tempting, to just jump right into coding something. It’s certainly a weakness I suffer from and that I have to cope with.

Before putting the proverbial pen to paper, though, it’s really important to understand why we’re building an API in the first place. What are the problems we’re trying to solve? What do the users need or want?

With regard to building an API, that means thinking about the consumers of the data provided by your API. If you’re building a decoupled CMS, the main user is the frontend system. In other circumstances it may also mean other websites, embedded widgets, apps on mobile devices, and so on. Whatever it is, due consideration needs to be given to the needs of those consumers.

That means understanding your users’ needs, examining the patterns of behaviour of those users, and ultimately translating those into a design.

Sound like familiar language? Yes, that’s the language of visual designers and user experience specialists. In my book, I’d suggest that means you would do well to work closely with specialist design colleagues when designing and building an API.

Your web API needs to be designed: needs; behaviours; analysis; patterns; traits; design; feedback; improve.

Be an artisan with your API

Take time. Research. Think. Plan. Design.

When you’re working with Drupal, it is too easy to jump over the design step. Drupal does so much out of the box that it’s too easy to start coding without thinking properly about what we’re coding.

The availability bias of being a specialist Drupal developer, with Drupal as the go-to toolkit, is that we think about the solutions to the problems (if we’ve even got as far as articulating the problems) in a Drupally way. For instance, since Drupal has a menu system it’s easy to think about navigation in a decoupled CMS system in terms of the way Drupal handles the menu system, which prevents you from thinking about other ways of handling navigation.

The same is true with Drupal 8’s support for REST. Drupal 8 core includes REST resources for most entities in a Drupal installation. That’s very useful. But it can also make you lazy: just using these core RESTful API endpoints for nodes or comments or whatever, without even thinking about whether they’re appropriate, whether all the guff they include is useful, or whether it’s formatted appropriately.

That goes also for REST exports from Views. They can be useful, giving you a quick way of creating a RESTful API endpoint. The problem is, though, that it also confines you to working with the way Views works and what it can produce. You may find that a problem if you want to support optionally requesting additional objects to be embedded in the response, for instance (see below).

Resist the temptation! Instead, take the time to think from the other end first.

I’ll return to the question of designing your API below, but first we need to talk about documentation, since designing and documenting your API can be part of the same process.

Yeah, I know. Most devs find this just the dullest thing in the world to write. With a web API, though, it’s incredibly important. If you want people to actually be able to use your API, they need to know how to work with it. It’s horrible trying to work with an undocumented or under-documented API.

So, what should go into the documentation for a web API? Here are some pointers.

Yeah, this is probably what everyone thinks of when they think of documentation for a web API, but it is in fact only part of the documentation—maybe the most important part, but only part.

There are plenty of good blog posts and descriptions of what your API reference should include, so there’s no need for me to reiterate that here.

The most important thing to say, though, is that, beyond identifying resource paths, actions and parameters, your reference should describe in full both what the request should look like and what the response will look like.

It is incredibly helpful to include a mock server with your API documentation. Preferably, your mock server will handle the documented requests and responses of each resource.

This will help those building apps and tools that will consume your API to get up-and-running quickly.

If your API gets to be any substantial scale then the developers who use your API will find it incredibly useful to have some tutorials and guides included in your documentation.

These should cover common tasks, or how to work with specific sections of your API. A guide to ‘best practices’ with your API may be appropriate to help people make the most out of your API.

Check out the guides in MailChimp’s API documentation for a good example. Twitter’s API docs ‘best practice’ section is great as well.

One invaluable guide is the ‘getting started’ or ‘quick start’ guide. This can often be just a single page, with a succinct summary of the things you need to do to get going.

The YouTube API ‘getting started’ page is a useful example.

There are lots of useful tools out there to help you get started when you document your API design. Here are some suggestions.

API Blueprint is an open-source high-level API design language that is very useful for writing your documentation. The language is similar to Markdown, so it’s easy to work with. There are a number of SaaS tools offering services based on API Blueprint. One that I really like is Apiary.io (though they’ve recently been bought by Oracle so who knows where that’ll take them), but there are others, like Gelato.

You might also consider Read the Docs and daux.io amongst others. There’s also the Open API Initiative, which is ‘focused on creating, evolving and promoting a vendor neutral API Description Format,’ though the initiative is ‘based on the Swagger Specification.’ Open API is an initiative of Swagger.io, and they have a list of tools and frameworks using the specification. The OpenAPI specification is on GitHub.

Whatever you use, your documentation should (probably) end up in a public location so that other developers can use it. (An exception might be for an API used in a secure decoupled system.)

So, let’s return more directly to the question of designing your web API.

An important rule of thumb for me is to ‘keep it simple, stupid.’ There is no need to include anything more in the resources of your API than is necessary.

I say this as a long-time Drupal developer, knowing full well that we have a superpower in overcomplicating things, all those extra divs and classes all over the markup, all those huge arrays.

This is still true in the core REST resources of Drupal 8. For example, when GETting the core Content resource for node 10, /node/10?_format=json, the response gives us …

{
  "nid": [
    {
      "value": "10"
    }
  ],
  "uuid": [
    {
      "value": "6bfe02da-b1d7-4f9b-a77a-c346b23fd0b3"
    }
  ],
  "vid": [
    {
      "value": "11"
    }
  ],
  …
}

Each of those fields is an array that contains an array that contains a name:value pair as the only entry. Whew! Exhausting. An array within an array, when there’s only one level-1 array? Really? Maybe we could render that a little more simply as …

{
  "nid": "10",
  "uuid": "6bfe02da-b1d7-4f9b-a77a-c346b23fd0b3",
  "vid": "11",
  …
}

… which might help our API’s consuming applications to parse and use the JSON data more easily. Like I said above, I’d suggest that just using the core entity REST resources isn’t often the place to start.
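
As a rough illustration, here is a minimal sketch of collapsing those single-value field arrays in plain PHP — an assumption of one way to do it, not any module’s actual code:

/**
 * Collapses Drupal-style single-value field arrays into plain values.
 *
 * Turns ["nid" => [["value" => "10"]]] into ["nid" => "10"], leaving
 * multi-value or differently shaped fields untouched.
 */
function flatten_entity_fields(array $data) {
  return array_map(function ($field) {
    if (is_array($field) && count($field) === 1 && isset($field[0]['value'])) {
      return $field[0]['value'];
    }
    return $field;
  }, $data);
}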

The simplicity mantra should pervade your API design. Include only the data that is needed for the consuming apps. Pare it down, so it’s as easy to read as possible.

As a result, when you come to build that API in your Drupal 8 backend system, it will demand of you the discipline of putting into your API resource responses not just what’s easiest but what’s best.

This is true in particular when it comes to your naming conventions and API resource paths.

Don’t just add root-level endpoints ad infinitum. Use well-structured paths for your resources, where the depth of the path elements makes sense together. The result should be that your resources are explorable via a browser address bar. E.g.

GET /articles/5/comments/19

… makes intuitive sense as a path: get comment 19 on article 5.

On the other hand, don’t just add depth to your resource paths unnecessarily. Separating things out with some logic will help make things intelligible for developers using your API. E.g.

GET /articles/comments

Umm? What’s that? The comments on articles — why would I want that? However …

GET /comments?contenttypes=articles

… is more obvious — a path to get comments, with a content types filter. Obvious. It also suggests we might be able to filter content types with a comma-separated list of types—nice!

Find a straightforward naming convention. Make the names of resource endpoints and data fields obvious and sensible at first glance.

Overall, make the way you name things simple, intuitive and consistent. If the title field of a data object in your resources is called ‘title’ in one place, ‘name’ in others and ‘label’ in still others, for instance, then it adds unnecessary complexity for writing reusable code.

When designing your web API, it needs to be simple to use and work with. Help users to get just what they want from your API.

You’ll make developers smile if you provide a way of limiting the fields that are returned in a response. You don’t always want to get everything from a resource. Being able to choose exactly what you want can help speed up usage of an API.

For example, consider supporting a fields parameter, that could be used like this:

GET /articles/5?fields=id,title,created
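
As a sketch of how such a parameter might be honoured server-side, in the same PHP/Symfony terms as the Drupal examples elsewhere in this post (the function name and the plain $response array are assumptions for illustration):

use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\Request;

/**
 * Filters a response array down to the fields the client asked for.
 *
 * With ?fields=id,title,created only those keys survive; with no
 * fields parameter the full response is returned unchanged.
 */
function filter_response_fields(Request $request, array $response) {
  $fields = $request->query->get('fields');
  if (empty($fields)) {
    return new JsonResponse($response);
  }
  $wanted = array_map('trim', explode(',', $fields));
  return new JsonResponse(array_intersect_key($response, array_flip($wanted)));
}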

The opposite might also be important, being able to load extra resources in the same request. If a request can combine related resources then fewer requests will need to be made, which again will help speed up using an API.

Supporting an embed query parameter could give you this. For example:

GET /articles/5?embed=author.name,author.picture,author.created

… would enable users to load also the article author’s name, their picture and the date their account was created. Note the dot syntax, which might be useful.

Another way of making it easy for users is to support flexibility in the format of the data in the response. JSON is usually what people want to handle, but some do still prefer to use XML.

There’s also the problem that JSON has no support for hyperlinks, the building blocks of the web, which is a curiosity as the W3C admit. There are JSON protocol variants that attempt to address this, like HAL and JSON-LD, but I refer you to a fuller discussion of JSON and hypermedia and some useful resources on hypermedia and APIs from Javier Cervantes at this point.

When designing your API, you should expect it to have a certain lifetime. In fact, it’s bound to last long enough to need changing and improving. But what do you do about rolling out those changes?

Your devs will need the flexibility to change things, especially if they find bugs, and they’ll get frustrated if they can’t adapt the API to make improvements.

Your users need reliability and stability, though, and they’ll get frustrated if the API keeps changing and their consumer app dies without warning.

So, from the start, include versioning.

A pretty sensible thing is use a path element to specify the version number. E.g.

GET /api/v1/articles/5

You could use a query parameter instead, of course, though since query parameters are optional that would mean that without the version parameter your API would return the latest. Consumers who’d inadvertently missed including the version in their requests would be vulnerable to changes making their app die, which might result in some flame support emails.

Make sure there’s a way for your users to let you know when they have problems, find a bug, or whatever.

If it’s an internal API, like with a decoupled CMS and frontend, then that is probably your bug tracker.

If it’s a public API, then you’ll need some public way for people to contact you. If you host your repository on e.g. GitHub then there’s support for issues baked in.

Giant lists of bugs that never get addressed are soul-crushing.

You’ll probably want to add some authentication to your API. You shouldn’t rely on cookies or sessions, as your API should be stateless. Instead, by using SSL (you’re using SSL, right? Yes, you’re using SSL.), you can implement a token-based authentication approach.

However, where a token approach is inappropriate, OAuth 2 (with SSL) is probably the best way to go. Here’s some further discussion on API security and authentication, if you’d like to read in more depth.
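
To make the token idea concrete, here is a minimal sketch; load_user_by_token() is a hypothetical stand-in for whatever token storage you actually use:

use Symfony\Component\HttpFoundation\Request;

/**
 * Checks a bearer token on an incoming request.
 *
 * Returns the authenticated account, or NULL if the token is missing
 * or unknown. load_user_by_token() is a stand-in, not a real API.
 */
function authenticate_request(Request $request) {
  $header = $request->headers->get('Authorization', '');
  if (strpos($header, 'Bearer ') !== 0) {
    return NULL;
  }
  return load_user_by_token(substr($header, strlen('Bearer ')));
}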

HTTP has a caching mechanism built in — woot! Just add some response headers and do some validation on request headers and it’s there.

I’ll point you elsewhere to read more about the 2 key approaches, ETag and Last-Modified.
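
For flavour, a minimal ETag sketch in PHP/Symfony terms, assuming $payload is the array you would otherwise return directly:

use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\Request;
use Symfony\Component\HttpFoundation\Response;

/**
 * Returns 304 Not Modified when the client already has this payload.
 */
function cached_json_response(Request $request, array $payload) {
  $etag = '"' . md5(json_encode($payload)) . '"';
  // A matching If-None-Match header means the client's copy is current.
  if ($request->headers->get('If-None-Match') === $etag) {
    return new Response('', 304, ['ETag' => $etag]);
  }
  $response = new JsonResponse($payload);
  $response->headers->set('ETag', $etag);
  return $response;
}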

HTTP defines lots of meaningful status codes that can be returned in your API responses. By using them appropriately, your API consumers can respond accordingly.

If a request has an error, don’t just return an error code. Your API should provide a useful error message in a format with which the consumer can work. You should use fields in your error message in the same way that a valid response does.
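
Something like this sketch, where the error shape is illustrative rather than any standard:

use Symfony\Component\HttpFoundation\JsonResponse;

/**
 * Builds a structured error response the consumer can parse.
 */
function error_response($status_code, $message, array $details = []) {
  return new JsonResponse([
    'error' => [
      'code' => $status_code,
      'message' => $message,
      // Per-field validation messages, links to docs, and so on.
      'details' => $details,
    ],
  ], $status_code);
}

// E.g. a 422 for failed validation:
// return error_response(422, 'Validation failed', ['title' => 'Title is required.']);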

In summary, when building an API it’s not healthy to just jump in and start writing the code for the API from a specification. Neither is it healthy to just rely on the default resources of CMS tools like Drupal 8. APIs always need to be tailor-made for the task.

APIs need to be designed.

If you can make your web API simple to understand and adopt, easy to work with, incorporating plenty of flexibility, if it’s stable and reliable and well-supported, then you’re well on your way to being the proud owner of a healthy API.

Jan 29 2017

You can now easily test your Drupal projects on AppVeyor. Currently, AppVeyor is the major player in CI for Windows servers. On other CI systems (Travis, Bitbucket Pipelines) you are limited to Docker containers on the *nix platform. (This will soon change, as some CI providers will throw Windows containers into the mix.)

Until then, the only tool to CI your Drupal (or any PHP project) on a Windows based environment using IIS is AppVeyor.

In this post I'll show you how we set up free CI (because AppVeyor is free for open source/public projects) for the MS SQL Server driver for Drupal.

After opening an AppVeyor account, go to the + New Project menu.

AppVeyor automatically integrates with major code repository providers such as GitHub, BitBucket, Gitlab and Visual Studio Online. In our case, we will use the "raw" Git option, to connect to any Git repository (in this case the drupal.org one):

All the configuration for the project is done through an appveyor.yml file that you commit to the root of your repository. The reference for this file can be found here:

https://www.appveyor.com/docs/appveyor-yml/

In this file you configure the build script, services and any other behaviour required for your CI process.

We prepared a completely automated build and test script to download Drupal, install it on a local IIS site and run the complete core test suite:

version: 1.0.{build}
skip_tags: true
init:
- ps: 
services:
- mssql2014
- iis
install:
- cmd: >-
    net start MSSQL$SQL2014
    powershell -command "Set-Service 'SQLAgent$SQL2014' -StartupType Manual"
    net start W3SVC
    powershell -command "new-item c:\tools\php\ext -itemtype directory"
    powershell -command "new-item c:\tmp -itemtype directory"
    rmdir c:\tmp /s /q
    powershell -command "new-item c:\tmp -itemtype directory"
    powershell -command "(New-Object Net.WebClient).DownloadFile('https://github.com/Microsoft/msphpsql/releases/download/4.1.5-Windows/7.0.zip','C:\tmp\sqlsrv.zip')"
    powershell -command "(new-object -com shell.application).namespace('C:\tmp').CopyHere((new-object -com shell.application).namespace('C:\tmp\sqlsrv.zip').Items(),16)"
    copy /Y "c:\tmp\7.0\x64\php_pdo_sqlsrv_7_nts.dll" "c:\tools\php\ext\php_pdo_sqlsrv.dll"
    powershell -command "(Get-Item c:\tools\php\ext\php_pdo_sqlsrv.dll).VersionInfo"
    rmdir c:\tmp /s /q
    powershell -command "new-item c:\tmp -itemtype directory"
    powershell -command "(New-Object Net.WebClient).DownloadFile('http://windows.php.net/downloads/pecl/releases/wincache/2.0.0.8/php_wincache-2.0.0.8-7.0-nts-vc14-x64.zip','C:\tmp\wincache.zip')"
    powershell -command "(new-object -com shell.application).namespace('C:\tmp').CopyHere((new-object -com shell.application).namespace('C:\tmp\wincache.zip').Items(),16)"
    copy /-Y "c:\tmp\php_wincache.dll" "c:\tools\php\ext\php_wincache.dll"
    powershell -command "(Get-Item c:\tools\php\ext\php_wincache.dll).VersionInfo"
    cinst -y OpenSSL.Light
    SET PATH=C:\Program Files\OpenSSL;%PATH%
    sc config wuauserv start= auto
    net start wuauserv
    cinst -y --allow-empty-checksums php -version 7.0.9
    cd c:\tools\php
    copy php.ini-production php.ini
    echo date.timezone="UTC" >> php.ini
    echo extension_dir=ext >> php.ini
    echo extension=php_openssl.dll >> php.ini
    echo extension=php_wincache.dll >> php.ini
    echo extension=php_pdo_sqlsrv.dll >> php.ini
    echo extension=php_com_dotnet.dll >> php.ini
    echo extension=php_sockets.dll >> php.ini
    echo extension=php_mbstring.dll >> php.ini
    echo extension=php_soap.dll >> php.ini
    echo extension=php_curl.dll >> php.ini
    echo extension=php_gd2.dll >> php.ini
    echo extension=php_gettext.dll >> php.ini
    echo zend_extension=php_opcache.dll >> php.ini
    echo opcache.enable=1 >> php.ini
    echo opcache.enable_cli=1 >> php.ini
    echo opcache.memory_consumption=128 >> php.ini
    echo opcache.revalidate_freq=1500 >> php.ini
    echo opcache.max_accelerated_files=8000 >> php.ini
    echo wincache.ucenabled=1 >> php.ini
    echo wincache.ucachesize=128 >> php.ini
    echo wincache.fcenabled=0 >> php.ini
    echo realpath_cache_size=5M >> php.ini
    echo realpath_cache_ttl=1800 >> php.ini
    echo pdo_sqlsrv.client_buffer_max_kb_size=24480 >> php.ini
    echo Setup certificate store
    powershell -command "(New-Object Net.WebClient).DownloadFile('https://curl.haxx.se/ca/cacert.pem','C:\cacert.pem')"
    echo curl.cainfo="C:\cacert.pem" >> php.ini
    SET PATH=C:\tools\php;%PATH%
    powershell -Command ($env:Path)
    powershell -command "new-item c:\composer -itemtype directory"
    cd /d C:\composer
    php -r "readfile('http://getcomposer.org/installer');" | php
    powershell -command "(Get-Item C:\composer\composer.phar).length"
    powershell -command "'@php C:\composer\composer.phar ' + $([char]37) + '*' | Out-File C:\composer\composer.bat -Encoding ASCII"
    SET PATH=C:\composer;%PATH%
    cd /d C:\projects\
    composer create-project -n drupal-composer/drupal-project:8.x-dev
    cd /d C:\projects\drupal-project
    composer config repositories.drupal composer https://packages.drupal.org/8
    composer require drupal/sqlsrv:~2.0
    xcopy /S /I /E %cd%\web\modules\contrib\sqlsrv\drivers %cd%\web\drivers
    composer config repositories.1 git https://%GITLABUSERNAME%:%GITLABPASSWORD%@gitlab.com/david-garcia-garcia/mssql.git
    composer require david/mssql
    powershell -command "'@php %cd%\vendor\drupal\console\bin\drupal ' + $([char]37) + '*' | Out-File %cd%/web/drupal.bat -Encoding ASCII"
    cd /d C:\projects\drupal-project\web
    drupal about
    drupal site:install standard --langcode="en" --db-type="sqlsrv" --db-host="localhost\SQL2014" --db-name="mydrupalsite" --db-user="sa" --db-pass="Password12!" --db-port="1433" --site-name="SQL Server Drupal Site" --site-mail="[email protected]" --account-name="admin" --account-mail="[email protected]" --account-pass="admin" --no-interaction
    drupal about
    cd /d C:\projects\drupal-project
    powershell -command "(New-Object Net.WebClient).DownloadFile('https://www.drupal.org/files/issues/2294731-39-phpunit-windows.patch','%cd%\patch.patch')"
    git apply patch.patch --directory=web
    powershell -command "(New-Object Net.WebClient).DownloadFile('https://www.drupal.org/files/issues/use_the_php_binary-2748883-15.patch','%cd%\patch.patch')"
    git apply patch.patch --directory=web
    powershell -command "(New-Object Net.WebClient).DownloadFile('https://www.drupal.org/files/issues/simpletest_is_broken_on-2605284-61.patch','%cd%\patch.patch')"
    git apply patch.patch --directory=web
    powershell -command "(New-Object Net.WebClient).DownloadFile('https://patch-diff.githubusercontent.com/raw/hechoendrupal/drupal-console/pull/3134.patch','%cd%\patch.patch')"
    git apply patch.patch --directory=vendor/drupal/console
    cd /d C:\projects\drupal-project\web
    drupal module:install simpletest
    choco install -y urlrewrite
    powershell -command "New-WebSite -Name 'MyWebsite' -PhysicalPath 'c:\projects\drupal-project\web' -Force"
    echo 127.0.0.1 www.mysite.com >> %WINDIR%\System32\Drivers\Etc\Hosts
    powershell -command "Import-Module WebAdministration; Set-ItemProperty 'IIS:\Sites\MyWebsite' -name Bindings -value @{protocol='http';bindingInformation='*:80:www.mysite.com'}"
    SET PATH=%systemroot%\system32\inetsrv\;%PATH%
    echo Change default anonymous user AUTH to ApplicationPool
    appcmd set config -section:anonymousAuthentication /username:"" --password
    echo Setup FAST-CGI configuration
    appcmd set config /section:system.webServer/fastCGI /+[fullPath='C:\tools\php\php-cgi.exe']
    echo Setup FAST-CGI handler
    appcmd set config /section:system.webServer/handlers /+[name='PHP-FastCGI',path='*.php',verb='*',modules='FastCgiModule',scriptProcessor='C:\tools\php\php-cgi.exe',resourceType='Either']
    iisreset
    NET START W3SVC
    powershell -command "wget http://www.mysite.com/"
test_script:
- cmd: >-
    cd /d C:\projects\drupal-project
    mkdir c:\testresults\
    php web/core/scripts/run-tests.sh --php php --all --verbose --url "http://www.mysite.com/" --xml c:\testresults\
artifacts:
- path: c:\testresults
  name: Results

You can see the CI process here with build results:

https://ci.appveyor.com/project/david-garcia-garcia/sqlsrv

If you host your projects on Bitbucket or GitLab, AppVeyor has out-of-the-box integrations that will update build statuses for pull requests, branches, commits, etc.

You can use the above testing script as a base template to implement CI and testing in your Windows-based Drupal projects (private or public) against an MS SQL Server.

In order for JavaScript-based tests (full browser integration tests) to work with the Drupal test suite, you need to install and set up PhantomJS on your testing machine. Use these commands:

- cmd: choco install phantomjs
- ps: Start-Job -Name "test" -ScriptBlock {cmd /c "phantomjs --ssl-protocol=any --ignore-ssl-errors=true vendor/jcalderonzumba/gastonjs/src/Client/main.js 8510 1024 768 false 2>&1 >> c:\gastonjs.log"}
Jan 25 2017

There are lots of situations in which you need to run a series of microsites for your business or organisation — running a marketing campaign; launching a new product or service; promoting an event; and so on. When you’re working with Drupal, though, what options do you have for running your microsites? In this article I review and evaluate the options in Drupal 8, make a recommendation and build a proof of concept.

Joe Baker

Why use multisites

Caveat: is the end of multisites support on the horizon?

Classic problems with Drupal multisites …

… and how to mitigate them

Domain Access

Organic Groups

Best practice: with Git

RESTful web services and Drupal 8

Design your own web services API

Decoupled frontend

Jan 06 2017

In this article, we will present how we built a simple Twitter feed in Drupal 8 with a custom block and without any custom module. This block will display a list of tweets pulled from a custom list, as in the example shown in the sidebar.

As a prerequisite, you need to have a Twitter account and a custom list in your feed.

1) Get code from "Twitter Publish"

First, we need to get the code that is generated by the Twitter publishing API.

On this page, enter the URL of the list of tweets you want to display, as in the example below:

[Screenshot: embed]

The code will be generated from the URL, and you can apply some custom formatting:

[Screenshot: format]

Once you have updated your options, you can just copy the custom code:

[Screenshot: code]

We will use this code to create the block.

2) Custom block

In the custom block library (/admin/structure/block/block-content), click on the "+ Add custom block" button.

In the custom block body (Full HTML), paste the code generated previously in "<> Source" mode and click Save:

[Screenshot: edit block]

You now have a custom block that you can place anywhere on your site. To do so, go to /admin/structure/block and click on the "Place block" button where you want to display your block.

The result:

[Screenshot: Twitter block]

For more advanced block creation, see also this article.

Dec 26 2016

Description

The PHPMailer and SMTP modules (and maybe others) add support for sending e-mails using the 3rd party PHPMailer library.

In general the Drupal project does not create advisories for 3rd party libraries. Drupal site maintainers should pay attention to the notifications provided by those 3rd party libraries as outlined in PSA-2011-002 - External libraries and plugins. However, given the extreme criticality of this issue and the timing of its release we are issuing a Public Service Announcement to alert potentially affected Drupal site maintainers.

CVE identifier(s) issued

  • CVE-2016-10033

Versions affected

All versions of the external PHPMailer library < 5.2.18.

Drupal core is not affected. If you do not use the contributed PHPMailer third party library, there is nothing you need to do.

Solution

Upgrade to the newest version of the PHPMailer library: https://github.com/PHPMailer/PHPMailer

The SMTP module has a modified third-party PHPMailer library in its codebase. The modified version of the library is not affected.
The SMTP module had an update marked as a security update; this was incorrect, as the previous version of SMTP was not vulnerable to the PHPMailer issue. The update just removed some older code that was not being used.

A special thanks to Fabiano Sant'Ana, SMTP module maintainer, for working on this with short notice.

Reported by

  • Dawid Golunski

Updates

Updated on 2016-12-27 to clarify that the SMTP module does not need a security update.

Contact and More Information

The Drupal security team can be reached at security at drupal.org or via the contact form at https://www.drupal.org/contact.

Learn more about the Drupal Security team and their policies, writing secure code for Drupal, and securing your site.

Follow the Drupal Security Team on Twitter at https://twitter.com/drupalsecurity

Dec 22 2016

In this post I will share a quick tip on how to upgrade the jquery.once plugin in Drupal 7 without breaking current code.

Drupal 7 ships with version 1.2 of the jquery.once plugin, which dates from 2013. For a project we needed to make code targeting the old 1.x and the new 2.x work at the same time. In our case this was core JavaScript written for 1.x working with some code of ours designed for jquery.once 2.x.

The solution is easy: upgrade the core jquery.once to 2.x and add a polyfill to make it backwards compatible.

As a reference implementation, create a module with two files:

js/jquery.once.js: the new 2.x jquery.once library

js/jquery.once.polyfill.js: the backwards compatibility layer (https://github.com/david-garcia-garcia/jquery.once.pollyfill)

Then use a hook to replace the core library and add the polyfill:

/**
 * Implements hook_js_alter().
 */
function {MYMODULE}_js_alter(&$javascript) {
  // Replace core jquery.once with a newer version...
  if (isset($javascript['misc/jquery.once.js'])) {
    $javascript['misc/jquery.once.js']['data'] = drupal_get_path('module', '{MYMODULE}') . '/js/jquery.once.js';
    // ...and add a polyfill to make it backwards compatible.
    $polyfill_path = drupal_get_path('module', '{MYMODULE}') . '/js/jquery.once.polyfill.js';
    // Clone the core entry so the polyfill inherits its settings, then
    // point the clone's data at the polyfill file.
    $javascript[$polyfill_path] = $javascript['misc/jquery.once.js'];
    $javascript[$polyfill_path]['data'] = $polyfill_path;
  }
}

This is a quick workaround. You might prefer to use the Libraries module for a cleaner approach to the issue.

Dec 14 2016
Have a shiny new Mac? :)
Botched a Mac OS upgrade and it's time for a fresh install? ;(

Use our checklist to get your Drupal development environment up and running quickly, including setting up front-end tools like Node & Gulp. It's also fun to learn what tools different folks like to use; if you have any suggestions, let us know!


Dec 09 2016

I'm working on a site where the editorial staff may occasionally produce animated GIFs and place them in an article. Image styles and animated GIFs in Drupal don't play nice out of the box. Drupal's standard image processing library, GD, does not preserve GIF animation when it processes them, so any image styles applied to the image will remove the animation. The ImageMagick image processing library is capable of preserving animation, but I believe the only way is to first coalesce the GIF, which dramatically increases the output size and is unacceptable for this project (my sample 200 KB GIF ballooned to nearly 2 MB). For anyone interested in this approach anyway, the Drupal ImageMagick contrib module has a seemingly stable alpha release, but it would require a minor patch to get it to retain animation.

I'm mostly interested in somehow getting Drupal to just display the original image when it's a GIF to prevent this problem. On this site, images are stored in an image field that's part of an Image Media Bundle. This media bundle supports JPEGs and PNGs as well, and those are typically uploaded in high resolution and need to have image styles applied to them. So the challenge is to use the same media bundle and display mode for GIFs, JPEGs, and PNGs, but always display the original image when rendering a GIF.

After some digging and xdebugging, I created an implementation of hook_entity_display_build_alter(), which lets you alter the render array used for displaying an entity in all view modes. I use this hook to remove the image style of the image being rendered.

/**
 * Implements hook_entity_display_build_alter().
 */
function my_module_entity_display_build_alter(&$build, $context) {
  $entity = $context['entity'];

  // Checks if the entity being displayed is a image media entity in the "full" display mode.
  // For other display modes it's OK for us to process the GIF and lose the animation.
  if ($entity->getEntityTypeId() == 'media' && $entity->bundle() == 'image' && $context['view_mode'] == 'full') {
    /** @var \Drupal\media_entity\Entity\Media $entity */
    if (isset($build['image'][0])) {
      $mimetype = $build['image'][0]['#item']->entity->filemime->value;
      $image_style = $build['image'][0]['#image_style'];
      if ($mimetype == 'image/gif' && !empty($image_style)) {
        $build['image'][0]['#image_style'] = '';
      }
    }
  }
}

So now whatever image style I have configured for this display mode will still be applied to JPEGs and PNGs but will not be applied for GIFs.

However, as a commenter pointed out, this would be better served as an image field formatter so you can configure it to be applied to any image field and display mode. I've created a sandbox module that does just that. The code is even simpler than what I've added above.
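
For reference, here is a hedged sketch of what such a formatter might look like, assuming Drupal 8's core Image module APIs; the class name and annotation values are illustrative, not the sandbox module's actual code:

<?php

namespace Drupal\my_module\Plugin\Field\FieldFormatter;

use Drupal\Core\Field\FieldItemListInterface;
use Drupal\image\Plugin\Field\FieldFormatter\ImageFormatter;

/**
 * Image formatter that skips image styles for GIFs.
 *
 * @FieldFormatter(
 *   id = "image_gif_passthrough",
 *   label = @Translation("Image (GIF passthrough)"),
 *   field_types = {"image"}
 * )
 */
class GifPassthroughFormatter extends ImageFormatter {

  /**
   * {@inheritdoc}
   */
  public function viewElements(FieldItemListInterface $items, $langcode) {
    $elements = parent::viewElements($items, $langcode);
    foreach ($elements as $delta => $element) {
      // Render GIFs unstyled so GD doesn't strip their animation.
      if ($items[$delta]->entity->getMimeType() === 'image/gif') {
        $elements[$delta]['#image_style'] = '';
      }
    }
    return $elements;
  }

}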

Nov 28 2016

This article assumes you are familiar with what RESTful is and what we mean when we use the term REST API. Some of you might have already worked with the RESTful Web Services module in D7, which exposes all entity types as web services using the REST architecture. Drupal 8 out of the box is RESTful with core support. All entities (provided by core, plus ones created using the Entity API) are RESTful resources.

To explore the RESTful nature of Drupal 8, we will need to enable the following modules:

In Core

  • HAL - Serializes entities using Hypertext Application Language.
  • HTTP Basic Authentication - Provides the HTTP Basic authentication provider.
  • RESTful Web Services - Exposes entities and other resources as a RESTful web API.
  • Serialization - Provides a service for (de)serializing data to/from formats such as JSON and XML.

Contributed

  • REST UI - Provides a user interface to manage REST resources.

RESTful Resources

Every entity in D8 is a resource, which has an endpoint. Since it's RESTful, the same endpoint is used for CRUD (Create, Read, Update, Delete) operations with different HTTP verbs. Postman is an excellent tool to explore and test RESTful services. Drupal 8 allows you to selectively choose and enable REST resources; e.g., we can choose to expose only nodes via a REST API and not other entities like users, taxonomy terms, comments, etc.

After enabling the REST UI module, we can see the list of all RESTful resources at /admin/config/services/rest. In addition to choosing which entities to enable, we can also choose the authentication method per resource and enable specific CRUD operations per resource.

[Screenshot: Resource settings]

Let us take a look at what the REST APIs for the User entity would be after we save the configuration in the above screenshot.

User

POST

http:
{
 "_links": {
   "type": {
     "href": "http://domain.com/rest/type/user/user"
   }
 },
 "name": {
   "value":"testuser"
 },
 "mail":{
   "value":"[email protected]"
 },
 "pass":{
   "value":"testpass"
 },
 "status": {
   "value": 1
 }
}

Header

X-CSRF-Token: Get from http://domain.com/rest/session/token
Content-Type: application/hal+json
Accept: application/hal+json
Authorization: Basic (hashed username and password)

Note: Drupal 8 doesn't allow an anonymous user to send a POST to the user resource. This is already fixed in the 8.3.x branch, but for now we can pass the credentials of a user who has permission to create users. If you are interested in taking a deeper look at the issue, you can follow https://www.drupal.org/node/2291055.

Response: You will get a user object with a "200 OK" response code.
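
As an illustration, here is a hedged sketch of the same POST made with Guzzle (which ships in Drupal 8's vendor directory); /entity/user is the core default path for the user resource, but verify it against your enabled resource configuration:

use GuzzleHttp\Client;

$client = new Client(['base_uri' => 'http://domain.com']);

// Fetch the CSRF token first (see the Header section above).
$token = (string) $client->get('/rest/session/token')->getBody();

$payload = [
  '_links' => ['type' => ['href' => 'http://domain.com/rest/type/user/user']],
  'name' => ['value' => 'testuser'],
  'mail' => ['value' => '[email protected]'],
  'pass' => ['value' => 'testpass'],
  'status' => ['value' => 1],
];

$response = $client->post('/entity/user?_format=hal_json', [
  'headers' => [
    'X-CSRF-Token' => $token,
    'Content-Type' => 'application/hal+json',
    'Accept' => 'application/hal+json',
  ],
  // Credentials of a user allowed to create users (see the note above).
  'auth' => ['admin', 'admin'],
  'body' => json_encode($payload),
]);

echo $response->getStatusCode();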


PATCH

http:
{
 "_links": {
   "type": {
     "href": "http://domain.com/rest/type/user/user"
   }
 },
 "pass":[{"existing":"testpass"}],
 "mail":{
   "value":"[email protected]"
 }
}

Note: Since a user has permission to update their own profile, we can pass the current user's credentials in the authentication header.

Response: You will get a "204 No Content" response code.


GET

http:

Response: You will get a user object with a "200 OK" response code.


DELETE

http:

Response: You will get a "204 No Content" response code.

RESTful Views and Authentication

Drupal 8 also allows us to export views as a REST service. It allows you to use all the available authentication mechanisms in Views itself.

JSON API Module

The JSON API module provides a new format called "api_json", which is fast becoming the de facto standard for JavaScript frontend frameworks. If you plan to use a completely decoupled Drupal with a frontend framework like Angular, React, or Ember, then it's worth a look. To read more about JSON API, you can visit the site.

Nov 25 2016

Drupal's Robots.txt

How to (Somewhat) Control Search Engine Indexing Bots

Michael J. Ross 2016-11-25

This article was published as a cover story in the print magazine Drupal Watchdog, Volume 6 Issue 3, 2016-12-24, on pages 40-43, by Linux New Media.

Every new Drupal installation, by default, contains a text file named robots.txt as one of the files in its root directory. The majority of site builders pay little or no attention to this particular file, especially if they are put off by the somewhat arcane commands contained within it. It is understandable that the typical Drupal site builder could easily conclude that the robots.txt mysteries are best left to hard-core techies who enjoy the dark arts of web server configuration and optimization and who are happy to run the risk of inadvertently breaking their websites, making them invisible to search engines or even human visitors.

Yet if you were to willfully ignore the robots.txt file, then you would be missing out on the ability to exert better control over the search engine indexing programs that periodically visit your site. Fortunately, as you will learn in this article, the syntax of the file is not difficult to comprehend, and once learned, you should feel comfortable examining the current versions of that file that are part of Drupal 7 and Drupal 8.

Robot, Obey!

The primary purpose of the robots.txt file is to provide information to online programs developed and operated by search engine companies that automatically visit one web page after another, following links to pages not yet examined or whose contents may have changed since the last time they were checked. This process of "crawling" from one page to another is a critical part of how search engines index the Web and later determine what pages to display given any particular search query.

Considering that these indexing programs are written by engineers who know far more about search engines than most of us will ever know, it is safe to assume that the programs used by the top-tier companies (e.g., Google) are well behaved and do not intentionally cause problems, including excessive consumption of the valuable bandwidth allotted to your site by your web hosting company. (However, there is a limit to any web bot's understanding of what havoc it might cause if a web programmer does something as foolish as the one fellow who unwisely added a "delete this file" link for each one of a collection of valuable files he had stored on his site, along with instructions to all visitors to only use those links selectively. Naturally, the first indexing bot to come along did not understand or comply with those instructions to humans, and it dutifully visited each link, thereby nuking his entire collection. Don't assume that all of your site's visitors are human and humane!)

The consumption of bandwidth can become excessive if your site contains a huge number of pages, the indexing bots are visiting those pages at a faster pace than you had budgeted, or both. Fortunately, you can ask those bots to slow down their rate of page consumption, shifting from the level of hyperactive Pac-Man to something more leisurely. Those first two lines of code in the Drupal 7 robots.txt show how to do it:

User-agent: *
Crawl-delay: 10

The first line of code declares that the subsequent adjacent lines are intended for those user agents (i.e., search engine bots) whose names match the wildcard pattern "*" (i.e., all of them). The second line asks the bots to slow down their crawl rate to one page every 10 seconds. Note that the Drupal 8 robots.txt is missing this command.

You can further help out these bots by telling them ahead of time which resources they do not need to bother indexing. For instance, consider the third line of code from the Drupal 7 file:

Disallow: /includes/

It tells the bots to ignore all pages found in Drupal's includes directory — or, more precisely, all those pages whose URLs have file paths beginning with the string /includes/. That is not to say that a human or robotic visitor to a well-built Drupal site would ever encounter such a URL, but it is still safe to exclude all of the core .inc files in that directory from any sort of indexing, just in case any such URLs are ever exposed to the public. The default Drupal 7 robots.txt file then does the same exclusion for the misc, modules, profiles, scripts, and themes directories. The Drupal 8 version needs to do the same for only the core and profiles directories.

Such exclusion commands can be applied not only to directories, as shown above, but also to individual files. The Drupal 7 robots.txt file includes Disallow commands for three PHP files (cron.php, update.php, and xmlrpc.php), along with eight text files. Drupal 8 does the same for README.txt and web.config.

The aforementioned statement about the Disallow commands applying to URLs and not necessarily directories is well illustrated by the robots.txt commands that Drupal 7 uses to warn the indexing bots away from such generated paths as /admin/, /node/add/, and several others — in both their clean URL forms and otherwise (/?q=admin/, /?q=node/add/, etc.). The same is true for Drupal 8, whose non-clean URLs employ an intriguingly different and non-query format (e.g., /index.php/admin/ and /index.php/node/add/).

Another difference with the Drupal 8 version of the file is that it also includes explicit Allow commands for CSS, JavaScript, and image files found at /core/ and /profiles/ URLs. The comments in the robots.txt file do not indicate why any site builder would want those sorts of files to be indexed by search engines.

Last but not least, you can even use the robots.txt file to inform search engines of any XML sitemap that you have handcrafted or that your site generates automatically. For instance, simply include the command

Sitemap: http://www.example.com/sitemap.xml

if you use the XML Sitemap module.

Suspicious Spiders

Perhaps most if not all of your website's human visitors use only the better-known search engines, such as Google and Bing, and never use — or at least, never find your website via — any of the lesser-known search engines. Some bots that index sites are not intended to provide search results to the public, but instead have their own nefarious purposes. In all such cases, do you even want those search engine spiders to be indexing your site and consuming resources? If not, you should consider using your robots.txt file to block them. Here is a short list of some of these bots that website owners will often exclude:

User-agent: Baiduspider
User-agent: Baiduspider-video
User-agent: Baiduspider-image
Disallow: /

User-agent: moget
User-agent: ichiro
Disallow: /

User-agent: NaverBot
User-agent: Yeti
Disallow: /

User-agent: sogou spider
Disallow: /

User-agent: Yandex
Disallow: /

User-agent: YoudaoBot
Disallow: /

User-agent: aipbot
Disallow: /

User-agent: BecomeBot
Disallow: /

User-agent: MJ12bot
Disallow: /

User-agent: psbot
Disallow: /

The first six groups are for Asian and Russian bots, and the remaining four are for various questionable companies, none of whom provide any benefits to your website. Be sure to include at least one blank line between groups, otherwise the commands will be considered invalid.

This next option is debatable, depending upon whether or not you want your website's older versions to be archived for future access. If you don't, then consider also blocking the Internet Archive:

User-agent: ia_archiver
Disallow: /

Bad Robot!

Unfortunately, not all search index bots are polite and respectful of your resources. In fact, you have no guarantee that any particular web bot will even bother to read the rules that you have posted in your robots.txt, much less comply with them. One well-known bad actor is Copyscape, which searches your web pages looking for any content that could be utilized by your competition to hit you with copyright infringement suits. Its bot ignores robots.txt entries, HTML meta-tags, and nofollow attributes in hyperlinks.

If their bot won't abide by your instructions, do you have a way to stop it? Fortunately, yes, by blocking the IP addresses they use, which are listed at http://www.copyscape.com/faqs.php#password. Simply add the following commands to your hypertext access file (.htaccess, also found in the root directory of your Drupal site):

Deny from 162.13.83.46
Deny from 212.100.254.105
Deny from 212.100.239.219
Deny from 212.100.243.112-116

Robotic Resources

If you want to learn more about the robots.txt file, a limited number of resources are available online. Perhaps start with the Web Robots Pages, which offer both general information about these files and specific commands and techniques. Google offers a detailed Robots.txt Specifications document, as well.

To test a specific robots.txt file and see if it contains any errors that might prevent the desired search engines from properly indexing your site, log in to your Google Webmaster Tools account, and on the "Search Console" page, go to the "Crawl" section, and click on the link for "robots.txt Tester". This section will list all the content of the chosen site's robots.txt, including comment lines. Any errors and warnings will be flagged. To get a terse explanation of each error and warning, mouse over the corresponding red or yellow symbol, and a tooltip will pop up, showing the explanation. In the case of the Drupal 7 robots.txt file, no errors are indicated, but Google does warn that it ignores the command Crawl-delay: 10. Just below the large text area showing your robots.txt file is a field where you can enter a specific URL and have Google test whether it will be indexed on the basis of the rules you have specified.

If you are building a new website without Drupal (and why in heaven's name would you do that?!), then you might need to create a new robots.txt file that incorporates all of the search engine bot blocking that you specifically desire for the new site. In that case, consider using the Robot Control Code Generation Tool, which does exactly what the name implies. For each of the major search engines, you can use the default value or specify that the particular indexing bot be allowed or refused, and those choices will be reflected in the robots.txt file generated by the website.

This is probably more information than you ever wanted or even needed regarding the relentless indexers that will periodically visit your site, but at least now you can gain more control over what they are doing — especially those that may be overstaying their welcome and abusing your site's resources.

Copyright © 2016 Michael J. Ross. All rights reserved.
Nov 24 2016
Nov 24

Autocomplete on textfields like tags or user/node reference fields helps improve the UX and interactivity for your site visitors. In this blog post I'd like to cover how to implement autocomplete functionality in Drupal 8, including how to implement a custom callback.

Step 1: Assign autocomplete properties to textfield

As per the Drupal change records, #autocomplete_path has been replaced by #autocomplete_route_name and #autocomplete_route_parameters for autocomplete fields (more details: https://www.drupal.org/node/2070985).

The very first step is to assign appropriate properties to the textfield:

  1. '#autocomplete_route_name':
    for passing the route name of the callback URL to be used by the autocomplete JavaScript library.
  2. '#autocomplete_route_parameters':
    for passing an array of arguments to the autocomplete handler.
$form['name'] = array(
    '#type' => 'textfield',
    '#autocomplete_route_name' => 'my_module.autocomplete',
    '#autocomplete_route_parameters' => array('field_name' => 'name', 'count' => 10),
);

That's all it takes to add an autocomplete callback to a textfield.

However, there might be cases where the routes provided by core do not suffice, because we need a different JSON response or additional data. Let's take a look at how to write an autocomplete callback. We will be using the my_module.autocomplete route and will pass the arguments 'name' as field_name and 10 as count.

Step 2: Define autocomplete route

Now, add the 'my_module.autocomplete' route in my_module.routing.yml file as:

my_module.autocomplete:
  path: '/my-module-autocomplete/{field_name}/{count}'
  defaults:
    _controller: '\Drupal\my_module\Controller\AutocompleteController::handleAutocomplete'
    _format: json
  requirements:
    _access: 'TRUE'

When passing parameters to the controller, use the same names in curly braces that were used when defining #autocomplete_route_parameters. Defining _format as json is good practice.

Step 3: Add Controller and return JSON response

Finally, we need to generate the JSON response for our field element. Proceeding further, we create the AutocompleteController class file at my_module/src/Controller/AutocompleteController.php:

<?php

namespace Drupal\my_module\Controller;

use Drupal\Component\Utility\Tags;
use Drupal\Component\Utility\Unicode;
use Drupal\Core\Controller\ControllerBase;
use Symfony\Component\HttpFoundation\JsonResponse;
use Symfony\Component\HttpFoundation\Request;

/**
 * Controller for the my_module.autocomplete route.
 */
class AutocompleteController extends ControllerBase {

  /**
   * Handles the autocomplete request.
   */
  public function handleAutocomplete(Request $request, $field_name, $count) {
    $results = [];
    if ($input = $request->query->get('q')) {
      $typed_string = Tags::explode($input);
      $typed_string = Unicode::strtolower(array_pop($typed_string));
      // @todo Apply logic for generating results based on $typed_string and
      // the other arguments passed.
      for ($i = 0; $i < $count; $i++) {
        $results[] = [
          'value' => $field_name . '_' . $i . '(' . $i . ')',
          'label' => $field_name . ' ' . $i,
        ];
      }
    }
    return new JsonResponse($results);
  }

}

We extend the ControllerBase class and then define our handler method, which returns the results. The parameters of the handler are the Request object and the arguments (field_name and count) passed in the routing.yml file. From the Request object we get the typed string from the URL. Besides that, we have the other route parameters (field_name and count), on the basis of which we can generate the results array.

An important point to note here is that the results array must contain 'value' and 'label' key-value pairs, as we have done above. Finally, we generate the response by creating a new JsonResponse object and passing it $results.

That's all we need to make the autocomplete field work. Rebuild the cache and load the form page to see the results.

Nov 22 2016
Nov 22
Drupal 8 Logo in Clouds

Automated testing is an important part of any active project workflow. When a new version of Drupal 8 comes out, we want to smoke test it on the platform to ensure that everything is working well before we expose it to our customers. We have confidence that Drupal itself is going to work well; we confirm this by running the unit tests that Drupal provides. Most importantly, we also need to ensure that nothing has changed to affect any of the integration points between Drupal and the Pantheon platform. In particular, the behaviors we need to test include:

  • Site install, to ensure that the Pantheon database is made available to the new site without interfering with the installation process.

  • Configuration import and export, to ensure that the Pantheon configuration sync directory is set up correctly.

  • Module installation, upgrade, and database updates, to ensure that the authorize.php and update.php scripts are working correctly with our nginx configuration.

This level of testing requires functional tests; to ensure that everything is working as it should on the platform, we use Behat. The thing that is most challenging about the repository we are testing, though, is that it serves as the upstream for a large number of Drupal sites.  Each one of these sites may want to do testing of their own; therefore, we do not want to add a .travis.yml or a circle.yml to our repository, as this would cause conflicts with the downstream version of the same file whenever we made changes to our tests—something we definitely want to avoid.

Fortunately, Circle CI has a feature that fits this need perfectly. Most of the directives that can be placed in a circle.yml file can now be filled in to the project settings in the Circle CI web interface.  The relevant settings are in the “Test Commands” section.  A screenshot of the dependency commands below shows what we do to initialize the Circle container for our tests:

Drops 8 Screenshot

In the pre-dependency commands, we need to set the default timezone to avoid PHP warnings in our tests. When using a circle.yml file, we would usually specify the version of PHP that we wanted to use; then, we could adjust the php.ini file directly using a well-known path. If we do not have a circle.yml file, then we must use whatever version of PHP Circle wants to give us, as the settings in the web interface have no affordance for this. We find the appropriate ini file like so:

echo "date.timezone = 'US/Central'"  >  
  /opt/circleci/php/$(php -r 'print PHP_VERSION;')/etc/conf.d/xdebug.ini

There are a couple of things to note about this line. This is just an ordinary bash expression, as is every field in these panels, so we can evaluate expressions and use output redirection here. The path to the PHP ini files contains the PHP version; we use a short bit of PHP to emit this so that we do not need to parse the output of php --version, or anything of that nature. Every line is executed in a separate subshell, so it does not work to set a shell variable on one line and then try to reference it on the next. If persistent variables are needed, they can be set in the environment variables section, as usual. Finally, we write our setting into the xdebug.ini file in order to turn off xdebug, which speeds up composer considerably. Xdebug is necessary if you plan on generating code coverage reports, though, so you can change the output redirection operator > (overwrite) to >> (append) if you want to keep xdebug enabled.

After that, we install a few projects with Composer, including:

  • hirak/prestissimo is installed to speed up composer operations.

  • consolidation/cgr is used to ensure that other tools we install do not encounter dependency conflicts.

  • pantheon-systems/terminus is installed so that we can do operations directly on a remote Pantheon Drupal 8 site.

  • drush/drush is also installed on Circle, just in case we want to install the Drupal site to do functional tests on Circle prior to running the tests against the Pantheon site. At the moment, though, Drush is not used, and would not necessarily need to be installed.

Note that we specify the full path to the tools we install, rather than adjusting the $PATH variable, because it seems that the $PATH variable is initialized by Circle sometime after the environment variables section is evaluated, making setting it up there ineffective.

The final step of the dependency commands is to clone the project that contains our test scripts.

We keep our tests in a separate tests repository in order to keep our repository-under-test as similar to the base Drupal 8 repository as possible. The tests repository is cloned into the folder specified by the TESTING_DIR environment variable. This is defined in the environment variables section of the Circle CI project settings. There are a number of variables necessary for the scripts in the ci-drops-8 project to work. They include:

  • ADMIN_PASSWORD: Used to set the password for the uid 1 user during site installation.

  • GIT_EMAIL: Used to configure the email address for the git user for commits we make.

  • TERMINUS_ENV: Defines the name of the Pantheon multidev environment that will be created to run this test. Set to ci-${CIRCLE_BUILD_NUM}

  • TERMINUS_SITE: Defines the remote Pantheon site that will be used to run this test.

  • TERMINUS_TOKEN: A Terminus OAuth token that has write access to the terminus site specified by TERMINUS_SITE.

  • TESTING_DIR: Points to a directory on Circle CI that will hold the local clone of our test repository. Set to /tmp/ci-drops-8.

In addition to these environment variables, it is also necessary to add an ssh key to Circle, to allow the terminus drush commands to run.  You should create a new ssh key for use only by this test script. Add the ssh public key to the site identified by TERMINUS_SITE (create a user specifically for this purpose, attach the public key in the SSH Keys section of the Account tab, and add the user as a team member of the site), and place the private key in the Circle “SSH permissions” settings page:

SSH Key Screenshot

Note that the domain drush.in is used for any ssh connection to any Pantheon site. Initially, this was only used for Drush commands; now, WP-CLI and Composer commands are also run this way.

The test scripts are executed via the commands in the Test Commands section of the Circle settings pages:

Drops 8 Screenshot

This part is pretty simple; it first runs the Drupal phpunit tests on Circle CI, and then runs the create-pantheon-multidev script, followed by the run-behat script, which tests against the Pantheon site on the prepared multidev environment.

The create-pantheon-multidev script prepares a new multidev environment to run our tests in; the new environment is named according to the TERMINUS_ENV environment variable. The TERMINUS_SITE variable indicates which site should be used to run the test; the TERMINUS_TOKEN variable must contain a machine token that has write access to the specified site. The script then uses the local copy of the drops-8 repository that Circle prepared for us, and adds the behat-drush-endpoint to it. See Better Behavior-Driven Development for Remote Servers for more information on what this component is for. Once that is done, the resulting repository is force-pushed to a new branch on the specified Pantheon site. The multidev environment is created after the branch is pushed to avoid the need for the platform to synchronize the code twice. The multidev sites created for testing purposes are left active after the tests complete. This provides an opportunity to inspect a site after a test run completes, or perhaps re-run some of the test steps manually after a failure to help diagnose a problem. A script, delete-old-multidevs, is run in the post-dependencies phase, just prior to the creation of the new multidev environment for the latest tests. This script deletes all of the test environments except for the newest two.

The run-behat script is just a couple of lines, as its only job is to fire off the Behat tool with the appropriate configuration file and path. Note that we define BEHAT_PARAMS in the script to supply dynamic parameters, so that we do not need to rewrite any portion of our configuration file.

In typical usage, most projects that use Behat to test Drupal sites will first install a fresh site to test against using Drush. In our case, however, we want to test the Drupal installer, so we will instead wipe the site with Terminus, and use Behat to step through the installer dialogs.  The Gherkin for one of our installer steps looks like this:

  Scenario: Profile selection
    Given I am on "/core/install.php?langcode=en"
    And I press "Save and continue"
    And I wait for the progress bar to finish
    Then I should see "Site name"

The interesting step here is the step “And I wait for the progress bar to finish”. There are a number of places in the Drupal user interface where an action will result in a progress dialog. Getting Behat past these steps is a simple matter of following the meta refresh tags in the response. A custom step function in our Features context provides this capability for us.
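
A minimal sketch of such a step, assuming a context class based on the Drupal Behat Extension (this illustrates the technique, not the actual Pantheon implementation):

<?php

use Drupal\DrupalExtension\Context\RawDrupalContext;

class FeatureContext extends RawDrupalContext {

  /**
   * @Then I wait for the progress bar to finish
   */
  public function iWaitForTheProgressBarToFinish() {
    $session = $this->getSession();
    // Batch operations emit a meta refresh tag on each page while the
    // progress bar is running; keep following them until none remain.
    $limit = 100;
    while ($limit-- && ($meta = $session->getPage()->find('css', 'meta[http-equiv="Refresh"], meta[http-equiv="refresh"]'))) {
      // The content attribute looks like "0; URL=/batch?id=1&op=do".
      $content = $meta->getAttribute('content');
      $url = trim(substr($content, strpos($content, '=') + 1));
      $session->visit($url);
    }
  }

}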

Sometimes you might want to use a secret value in a Behat test—to provide the password to the admin account in the installer, for example. To do this, we use a custom step function that will fill in a field from an environment variable. We use this step function to reference the previously-mentioned ADMIN_PASSWORD environment variable.  Also, there are a number of other useful step functions in the context for running terminus and drush commands during your tests; you might find these useful in your own test suites.
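
A hedged sketch of that second step function, reading a named environment variable and typing it into a field (all names are illustrative):

<?php

use Drupal\DrupalExtension\Context\RawDrupalContext;

class SecretsContext extends RawDrupalContext {

  /**
   * @When I fill in :field from the environment variable :name
   */
  public function fillFieldFromEnvironmentVariable($field, $name) {
    $value = getenv($name);
    if ($value === FALSE) {
      throw new \RuntimeException("The environment variable '$name' is not defined.");
    }
    // The secret is typed into the form without ever appearing in the
    // feature file or the test output.
    $this->getSession()->getPage()->fillField($field, $value);
  }

}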

Finally, the ci-drops-8 project contains a circle.yml.dist file that shows the equivalent content for testing a drops-8-based site using a circle.yml file rather than placing the test scripts in the Circle admin settings. While the techniques used in this post are somewhat atypical compared to most Drupal Behat setups, it would be possible to adapt them to work on a standalone Drupal site. Perhaps we’ll do that in a future blog post. In the meantime, the circle.yml.dist file can be used as a starting point to re-use these tests for your Drupal site.

Topics Development, Drupal Planet, Drupal
Nov 21 2016
Nov 21

The UX Ireland conference took place at the Trinity Biomedical Sciences Institute, a very modern facility beside the classic grounds of Trinity College. The conference featured a great line-up of keynote and session speakers, such as Jon Kolko (author of acclaimed books including “Well-Designed” and “Exposing the Magic of Design”) and Brenda Laurel.

As usual with a conference of this nature, Annertech attended in force, with about 50% of our frontend/design team attending one or both days. I got a lot of takeaways from the talks and workshops: here's a synopsis of them:

Creativity (by involving the whole team in the design process)

The first day started with a great session. Kolko’s “Be the lion tamer: Manage the chaos of creativity” was a joy to watch.

He described how getting the whole team involved in the design process increases creativity. Self-critique is common among designers during the iteration process. Constructively applying that concept to group critique will not only increase creativity but also will make us designers feel better and the rest of the team feel good too. This, in turn, helps to focus and increase trust.

For this to work, the designer needs to do certain things: acknowledge feelings, manage ambiguity, let the team run amok, and set a vision.

He made an interesting point about the importance of making the thing first (it doesn’t matter if it is good initially), as it simply starts the process and helps the team to articulate constraints.

Another takeaway was Jon’s emphasis on rules and how they destroy creativity (unlike the constraints). I really enjoyed the talk, very uplifting.

Design for Scale and Impact

My main takeaway from “Service design at scale: designing for impact” by Oli Shaw was the importance of starting small to lead into the final product. It was very interesting to see how starting with atomic design (and our curiosity to understand the problem) leads into features (flows, use cases), the final product, and so on.

When (re)designing an interface this is very clear. He gave the example of redesigning a button and how such a seemingly small change can affect loads of different things, such as customer experience, employee experience, technology systems, business processes, or even 3rd-party/partner business.

As designers we want to prove the value of design, we want to create impact, in one word, we want “change”, because, let’s face it, for us a design “is everything”. He explained the importance of measuring this impact, for us designers to prove our point. Looking at impact has an extra benefit as he said  “looking at how to measure the impact (of a solution) can actually help in focusing on the real problem”.

Understand + Find Patterns + Don't Hi-Fi Everything

I found Denise Burton’s “Design language systems: beware the hobgoblins” one of the highlights of the first day. Starting again from atomic design, I liked her definition of a Design Language as the DNA of your brand (what I normally call identity), and her recommendation, when creating a Design Language System, to understand that you shouldn’t do it all: start with the top-level nav (for example) and apply it to other parts of the design.

Efficiencies and consistency, which are what we want from a good design language system, can be achieved by understanding the user, finding patterns, and not hi-fi-ing everything. Keep in mind, of course, that banality may be an issue if we just “lego” things together; there is a risk that we stop thinking.

Intent for the Good

Day two started with Brenda Laurel and her “Staying grounded in a sea of new 'realities'" keynote, which was a history lesson on Virtual Reality that went beyond the present day and into the future. I liked her idea of doing design research to prove the users' point to the client. It was a very interesting talk.

Yes We Can

Next, I decided to explore the workshops rather than the theoretical sessions, and I went to see Matthew Lee running a session about “Research for startups... yes, you can!”, which started with an overview of typical research methods (that is, the first half of the first diamond of the Double Diamond design process - not sure I could squeeze the word 'diamond' in there any more!). I really enjoyed how he adapted these to small budgets, in what he called research for startups, by making it cheap and fast.

He described the excuses clients usually have, such as “we don’t have enough money” (the only cost is time), “we don’t have enough time” (one day to one week), “we can follow our gut” (you are not the user) or “my idea is the best” (humility).

Then Matthew continued with some suggestions on how a guerrilla approach could apply to the startup environment, with ethnographic research/interviews becoming “Field Studies”, stakeholder interviews “Executive Interview”, Focus Groups turning into “Round Table discussions” and Usability Testing becoming “Street Testing”.

A Couple of Workshops to Finish it off

My hands-on experience continued with “ExperienceOps: continuous design in agile teams”, led by Simon Bostock, who highlighted that designers have only a limited amount of control over the various elements of the process, ranging from almost 100% control to almost none. I guess we have to accept that we have no control over certain aspects.

It was followed by another workshop, “Introduction to structured content” by Bonny Colville-Hyde. The first thing I realised as soon as the session started was how much her division of content into taxonomy, content types, fields, paragraphs, etc. matches the Drupal world. It was a good session; designers who are not involved in site planning, migration, or site building tasks would probably have found it more revealing.

And that was it.

Overall I think all of the annertechies enjoyed it; I certainly did anyway. As with most conferences, I couldn’t go to everything I wanted to. There were really interesting talks by fellow Drupalists, like Daniel Alb’s “Content is king: the DNA of designing a citizen-centred local authority website for dlrcoco.ie” or Conor Cahill’s “Researching the experiences of people cycling”.

These Drupal-related talks can possibly be seen again at http://drupalcampcork.com/. If you haven’t registered yet, please do: it is a free event bringing together Drupal developers, themers, end users, and those interested in learning more about Drupal for two days of talks, sessions and collaborative discussions. Taking place at Cork Institute of Technology (CIT), see you there.

Nov 18 2016
Nov 18

Man, it's gonna be great!

But then a little voice pipes up: 'It's too complicated! We're trying to do too much! Can't we simplify this?' Nobody wants to listen to the nay-sayer, and the project proceeds apace. In due course, the complicated and extensive nature of the project begins to take its toll. Budgets run dry. Completion dates make a faint whizzing noise as they fly by. And yet the project isn't finished. Cracks appear, bugs sneak through and by the end, you just can't wait until it's over. The love of your life has turned into a horror-show that is slowly leeching the joy from existence.

The little voice, long forgotten, can no longer even be heard.

Let's do things differently! On time, on budget, in scope and on point. Wouldn't that be lovely? One important strategy on any project is the championing of simplicity: for any given item, be it design, feature or content, ask "Can this be simplified? Is it currently over-complicated?"

Simple does not mean Stupid

A simple site need not be one that is devoid of functionality. Nor is it one with an overly simplistic data model or information architecture. It is one which has had the fat trimmed from it; it only includes the elements that are actually needed. Often it is in the identification of the actual needs and the elimination of flights of fancy that the greatest challenge and real rewards lie.

Simple is not Ugly

A simple design will capture the elegance of form, forgoing the unnecessary in the pursuit of perfection. In this era of responsive design, simplicity shines. Single column, full width designs are far more readily made responsive than more complex designs. Accessibility also benefits from simplicity. Naturally, the fewer tricks, hacks and workarounds used to bring a design to light, the more likely it is to be accessible by default. Also, with less thinking needed for the actual design implementation, it leaves more room to build the site in such a way as to benefit the most people.

Drupal is rather opinionated in the way it expects you to build your website's theme. That can be a frustrating experience, if you have not learned how it works. However, imperfect as it is, the theme system is very, very powerful and can actually help a themer to realise that design dream. The simplicity champion says: work with the system, don't fight it. Figure out the Drupal way and make it work for your design. Simplicity lends itself to theme harmony.

Lastly, minimalism as a design school is a beautiful thing, albeit sometimes difficult to achieve. Simplicity strips away the noise from a design until you are left with just the signal.

Simple will not be Useless

Simplicity includes the functionality that people need to get things done. It eliminates the things that people never use. You might look at eye tracking or click tracking data to figure out what people use and iterate your design to improve it over time. Real data from real users is invaluable for this process.

A simplicity champion will also rein in the wilder ideas of functionality: for example, maybe you don't need full, continuous, synchronous communication between your CRM and your website. Maybe one-way communication (i.e. web-to-lead style communication) would actually be sufficient. Or maybe periodic data imports from the site meet all the requirements, in which case the site only needs to be able to export data.

A simplicity champion will not be blinded by a request that comes tightly coupled with a suggested solution, but will reach beyond to figure out the real core requirements and design solutions to meet those.


Simple will be Beautifully Functional

On a massive scale, Google is simple. In effect, it's a one-page website with only an input field and then some results returned. But at heart it is beautifully functional. You type in your request; it gives you back suggestions. We love this approach and try to make it work on all projects that we design and build. Take, for example, the www.housing.gov.ie website of the Department of Housing, Planning, Community and Local Government. A limited colour palette, simple fonts, a simple layout... and a great-looking site that works across devices and transforms what was once a highly complex maze of documents into a very easy to use, information-rich asset for the department and all its customers.

Overly Complex is always Expensive & Difficult to Maintain

Complex sites are not only more difficult and costly to build, but this trend continues throughout the lifecycle of the project. With many moving parts, changes need to be planned with greater care and tested far more extensively in order to avoid unintended consequences. Even supposedly simple changes can become large enterprises. Sometimes complexity is unavoidable and that is fine: all these hurdles can be overcome, but it is worth considering the long term effect of your design & requirements choices at the beginning of your project. Your site is not just for Christmas.

Websites Are Like Whiskey

Minimalism is the art of stripping back everything unneeded until you are just left with the core of necessity. In this way, a minimalist site can be thought of like a good whiskey. On the surface, it's simple to look at and made of only a handful of ingredients. But its minimalist appearance belies the depth of complexity present in the process through which it is distilled into being. Skilled craftspeople with decades of knowledge put their love of their craft to use to build you the ultimate product.

Just like excellent whiskey, excellent websites are the product of a process honed over thousands of hours of experience, resulting in beautiful, simple sites that are a joy to use.


Would you like to benefit from our crafting process? Contact us to chat about how we can bring the beauty of simplicity to your project.

Nov 17 2016
Nov 17

This latest release of RNG brings two major features: the ability for anonymous users to register for events, and the ability to create non-users and associate them with events.

RNG is an event management module for Drupal 8 created in the spirit of Entity Registration (Drupal 7) and Signup (Drupal 6). Users can create registrations for events, and event managers can manage these registrations.

Note: This post discusses updates to the RNG project which are available in a beta release. See this issue for how to get RNG 1.3 beta.

The event registration form has been reworked into a re-usable Drupal element, whilst making heavy use of AJAX. The registrant selector now accepts multiple registrants. Registrants can also be modified after the registration is created.

  • Associate multiple registrants with a registration.
  • Create new registrants within the registration form.
  • Modify the meta registrant form within the registration form.

Access control has been reworked to permit anonymous users to register for events.

RNG requires that all registrants for a registration are Drupal entities. Since anonymous users do not correspond to a user entity, the RNG Contact project provides a way to create non-user registrants, in a similar fashion to how contacts work on your phone.

See main RNG Contact article: RNG Contact: Anonymous registrants for RNG.

  • Registrant entities now have bundles
  • Added registrant type configuration entity
  • Added control over which identity types can be referenced or created within each event type.
  • Added ability to specify minimum and maximum registrants per registration.
  • Added an interface to view and add RNG related fields.
  • Event settings pages now use the admin theme.
  • And many other behind the scene changes.

Cover photo: B&W Crowd by whoohoo120. License CC BY 2.0

Nov 15 2016
Nov 15

Drupal User Login Path Customization

Thwarting Attack Web Bots

Michael J. Ross 2016-11-15

This article was published in the print magazine Drupal Watchdog, Volume 6 Issue 3, 2016-12-24, on pages 22-24, by Linux New Media.

Drupal websites, like those based upon other CMSs, are frequently hammered by web bots that use various automated techniques for trying to break into those sites. One of the most common methods is for the bot to go to the default page where users login, which varies depending on the CMS chosen by the site builders. A bot has ways to try to determine the particular CMS of any given target, often by examining the HTML of the target site's home page or searching for certain CMS-specific files or directories on the server. However, because there is a limited set of CMSs used to build the majority of such websites, a bot can simply try their various user login paths until landing on the right one.

The problem of misbehaving web bots goes beyond security concerns, because they are consuming your web account's allocated bandwidth, program executions, and allowed time, which might be quite limited depending on your budget and the chosen hosting plan. Simply put, this web traffic is doing your site absolutely no good.

In the case of a Drupal website, the login page is by default found at /user/login (e.g., http://www.example.com/user/login). Unfortunately, having a known login page is like presenting hackers with a fixed target. Thus, it is wise to remove that target from automated attacks, specifically by changing the path of the login page, so that when any of these hacker bots eventually reach your Drupal site, it will appear to the bot that your site doesn't even have a login page, even though it does and is easily accessible by human users through a clearly labeled navigation link. Such a tactic can be an effective measure in making your site more secure.

You can easily confirm that such bots are constantly on the prowl, looking for login pages on Drupal sites: simply create and publish a new site — or modify an existing and indexed site — so that within the user account settings (/admin/config/people/accounts), visitors can create accounts without administrative approval. It won't take long before those tireless bots are signing up for new accounts, with junk email addresses, and attempting to log in to your site using those new bogus accounts.

HTTP Trickery Possible?

Before diving into solutions that are known to work, I briefly consider whether it is possible to achieve this functionality simply by making one or more changes to the website's HTTP access file (.htaccess), because any Drupal developer who feels comfortable making changes to that file (or any others in the root directory of the site) will likely wonder if such a solution is possible.

At least one Drupal developer has asserted that this simple method works: Assuming your website is at http://www.example.com/, add the following lines of code to that HTTP access file:

Redirect 301 /user http://www.example.com/404
Redirect 301 /user/login http://www.example.com/404

These instructions to the web server tell it to send any visitor — whether human or robotic — to your 404 error ("Page not found") page for any attempt to go to the user or user login page. How, then, do you send legitimate visitors, including yourself, to the correct pages? The claim is that one merely adds one or more dummy directory names to the URLs in your site's navigation menu(s). For instance, a URL such as http://www.example.com/user/legit supposedly bypasses the redirect. However, testing indicates this is not true. Specifically, the server redirects that URL to http://www.example.com/404/legit.

Research has not turned up any other strategies that rely on the web server itself and not Drupal's handling of paths, but that does not imply that they do not exist and would not be appropriate for your situation if they are feasible.

Module Magic

If you would prefer not to make changes to critical files in your root directory or if your web hosting plan doesn't even allow it, then you could instead utilize a module. The simplest such solution is to use the Path core module. After enabling it, create a "Page not found" page (e.g., at /content/page-not-found). Then create a path alias with the "Existing system path" set to "user" and its corresponding "Path alias" set to "member" (or whatever value you want to use as the substitute path). This addition defines the path for the user login link in your navigation. Then create two more path aliases with the "Existing system path" set to "content/page-not-found" and the "Path alias" field set to "user" for the first one and "user/login" for the second. These additions will send the bad bots off in the wrong direction.
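
If you prefer to create those aliases in code (from an install hook, say), a minimal Drupal 7 sketch might look like the following; node/123 is a placeholder for your own "Page not found" page:

// The real login form becomes reachable at /member.
$aliases = array(
  array('source' => 'user', 'alias' => 'member'),
  // Decoys: the well-known login paths now resolve to the dead end.
  array('source' => 'node/123', 'alias' => 'user'),
  array('source' => 'node/123', 'alias' => 'user/login'),
);
foreach ($aliases as $path) {
  path_save($path);
}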

If you elect to customize both paths and stay with the default values suggested by the module (/backend and /member, respectively), then you will find, using a clean install of Drupal 7 as an example, that going to the conventional login path produces the expected "Page not found" message, whereas going to http://drupal_7_test/member produces the usual login page.

Figure 1. Login page with module

Unfortunately, this strategy does not work when the website is put into maintenance mode (/admin/config/development/maintenance). Under that scenario, a bot that tries /user/login is sent to the (attack-proof) "Site under maintenance" message page, but if the bot tries /user, it is sent to the actual login page (/member), at which point the bot can perform whatever nastiness was planned.

The second solution works nicely when a site is put into maintenance mode and, thus, would be a much better choice for any site that may receive extensive maintenance in the future. It makes use of the Rename Admin Paths module, which allows you to change the admin path (typically /admin), the user login path, or both — all in a simple settings page at /admin/config/user-interface/rename-admin-paths.

Figure 2. Rename Admin Paths settings

The two customized paths work even when the site is put into maintenance mode.

The module has stable versions for both Drupal 7 and 8. Note that some Drupal developers recommend against renaming the admin path because doing so can apparently cause problems with other contrib modules, such as Display Suite.

Regardless of which approach you choose, modifying the user login path can help harden your website against such automated attacks. After all, if you know that your castle is going to be attacked by a multitude of fearsome and soulless dragons, it is best not to leave your most vulnerable opening at the exact same location as that of every other castle. Move your gate and drawbridge, and while you're at it, top up the moat with wildfire. Winter is coming.

Copyright © 2016 Michael J. Ross. All rights reserved.
Nov 13 2016
Nov 13

Drupal and ClamAV Integration for Scanning Upload Files

Michael J. Ross 2016-11-13

This article was published in the print magazine Drupal Watchdog, Volume 6 Issue 3, 2016-12-24, on page 8, by Linux New Media.

If your website needs to allow visitors to upload files that may later be downloaded by other visitors, then your site should definitely be checking those files to verify that they do not contain any viruses or other malevolent code. Fortunately, using the ClamAV module, you can add the open source ClamAV antivirus software to your site. The module documentation briefly explains how to install the virus scanner if you have control over the server. (For shared hosting, you would need to request the web hosting company to install it for you.) The module will examine every file that users try to upload, and if the ClamAV scanner detects malicious code, the file will not be saved on the server.
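
For the curious, the general mechanism in Drupal 7 looks like the sketch below: a module can reject an upload from hook_file_validate(). This is an illustration of the hook, not the ClamAV module's actual code; it shells out to the clamscan binary, which exits non-zero when a file is infected (or when scanning fails):

/**
 * Implements hook_file_validate().
 */
function my_module_file_validate($file) {
  $errors = array();
  $path = drupal_realpath($file->uri);
  // Exit status 0 means the file is clean; treat anything else as a failure.
  exec('clamscan --no-summary ' . escapeshellarg($path), $output, $status);
  if ($status != 0) {
    $errors[] = t('The uploaded file %name failed the virus scan.', array('%name' => $file->filename));
  }
  return $errors;
}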

Copyright © 2016 Michael J. Ross. All rights reserved.
Nov 10 2016
Nov 10

Clients sign off on designs. You build a website for them based on these designs. It looks quite like the designs, but not exactly like them. It's not your fault. It's not the client's fault. But wouldn't it be nice if you could build what the client signed off?

Why are the websites we build not exactly like what the client signs off, and why is it nobody’s fault? Here are three (good) reasons:

  1. Websites in the real world use real content – not all titles have 5 words, images have different dimensions, etc.
  2. Designs are in static (image) format so can’t be tested on real devices and screen widths such as phones, tablets, desktops, and smart TVs. So, even though you've got “mobile” designs, they were designed for a specific mobile screen size, but mobile screen sizes can be anything from 3.5 inches to 11 inches.
  3. The designs were completed in my most hated design tool – Photoshop, which renders design elements (especially fonts) differently from how browsers do. For example, a thin font in Photoshop might be much fatter in Firefox. Why not just see what it’s really going to look like?

Photoshop is for editing photos (the clue is in the name) not designing websites. If your designer comes to you in 2016+ with designs created in Photoshop, you’ve hired the wrong designer.

Surely there’s an interface designer that is better than Photoshop? There is: SketchApp - built especially for designing user interfaces, but it still falls waaaay short when you want to give your clients designs that they can touch and feel and smell and see exactly what they are going to get. SketchApp is great for rapid prototyping and early stage mockups. It’s great for quickly designing ‘patterns’ or ‘elements’ but not for full designs – again, you can’t expect clients to get a true feeling for how their website works rather than looks by giving them static images of it.

Right, Mark, is there a solution to this conundrum? Yes. It’s called “Design in the Browser” - use the tool that the design will be accessed in to create the design. Give your clients a coded-up prototype. Get your design ideas into code, send your client a link to the website. Let them test it on their phone, on their tablets, on their teenagers’ PlayStations, on their desktops. Let the CEO scream when it doesn’t work on his Windows XP with IE8 as the browser he refuses to let go of. And then explain to him that he wouldn’t have known that if we had sent him a Photoshop document and if he wants it to work on his dinosaur of a machine, it’s going to cost him 30% more. Let him make his decision based on real world interactions.

Do you have a magic workflow that can slay all the dragons?

Here’s my 10 Step Plan for Losing Weight (or at least reducing technical (frontend) debt):

  1. Discovery: see what the client wants.
  2. Research: find out what the client actually needs.
  3. Rapid prototyping 1: use pen and paper, post-it notes, anything to come up with quick ideas about what a design might encapsulate, what a workflow might look like, how an interaction might function.
  4. Rapid prototyping 2: use SketchApp to create quick outlines of what elements of the design might look like (from here on called ‘components’). For example, a search box (input and submit button with bounding border), a news teaser (teaser image, title, post date, snippet, read more link), etc.
  5. Create each design component as an actual coded object. Write the HTML for the structure, the CSS for the layout and styles, and the JavaScript for any interactivity.
  6. Use these design components to create fully-fledged mockups of sample pages of the new website – homepage, listing page, full article – complete with real sample content and images from the client's website.
  7. Send a link to the prototype to the client. This is their designs delivered. Ask for feedback.
  8. Make changes based on client feedback.
  9. Get sign off for the designs from the client.
  10. Use the HTML, CSS, JS from the prototype in the real world implementation of the designs. In short, create a website exactly like what the client was expecting. Not an approximation of it, the thing itself – so the product they get is the product they sign off.

10 Reasons Why Your Client Needs to Insist You Design in the Browser

  1. We use real world content to test that the designs work with the same type of content our clients create.
  2. We can test these designs on the devices they are ultimately going to be consumed on – phones, tablets, desktops, etc.
  3. QA for the frontend begins very early: as the client is signing off the designs, they are signing off the frontend of the website.
  4. QA becomes an on-going item throughout the website build, not something tacked on at the end.
  5. If the client wants an updated design – for example, she would like all text on buttons to be uppercase – we can simply edit the .button class in our CSS, and not have to go through 40 PSDs to change each instance of it, saving you time and effort and the client money.
  6. Because we have an interactive prototype of the website, we can use this for regression testing. So, if you add a new feature to the website in Phase 2, you can easily check that the new feature doesn’t break any of the present features.
  7. The client always has the most up-to-date copy of the designs. All they need to do is click on the link you have sent them to see what has changed.
  8. You are providing your client with a styleguide. They can mark this against their print brand guidelines to make sure both are in sync.
  9. When a new feature is requested your client will already have a list of pre-defined design components to choose from. This means you may not need to invent new ones – again, a money saver for the client.
  10. There are no surprises or hidden charges. The client gets what the client is paying for.

I know, I know. This sounds difficult. It sounds like a new way of working. It’s going to take time and effort to implement this workflow. You build websites with Drupal, does this mean you will have to maintain two versions of the frontend?

I come with solutions, not problems. Our tool of choice for this approach is an “Atomic Design” system called PatternLab. This lets us do everything listed above. Using Version 2 of this allows us to integrate the templates that we create for PatternLab directly into our Drupal workflow. What does this mean? Well, without blowing your mind too much, it means that the design that the client signs off is the actual code that is going to be used in the Drupal theme. We do not copy/paste the CSS and JS from one place to another, we do not have anything magic to try to keep two systems in sync. We provide the client with a URL such as styleguide.example.com and they can refer to that as the canonical design as a static HTML prototype while example.com will be the Drupal implementation of it – pulling the templates, CSS, and JS into its system.

Thanks to the great work from the folks behind PatternLab and with some very generous help from the great team at Phase2 who first created a Drupal version of it, we are able to design in the browser, get sign-off from our clients, and then focus on developing the CMS with the frontend work already complete.

Ooooh. That sounds nice doesn’t it? Tune in for part 2 of this series where I’ll detail how to use PatternLab with Drupal. Or, even better, come to Drupal Camp Cork on November 25 and 26 where I’ll be giving a presentation about all of this.

Nov 09 2016
Nov 09

The composer.json file schema contains a script section that allows projects to define actions that are executed under certain circumstances. The most recognizable use for this section is to define events that happen at defined times during the execution of Composer commands; for example, the post-update-cmd runs at the end of every composer update command.

However, Composer also allows custom commands to be defined in this same section; an example of this is shown in the composer.json schema documentation:
{
    "scripts": {
        "test": "phpunit"
    }
}

Once this custom command definition has been added to your composer.json file, then composer test will do the same thing as ./vendor/bin/phpunit or, equivalently, composer exec phpunit. When Composer runs a script, it first places vendor/bin on the head of the $PATH environment variable, so that any tools provided by any dependency of the current project will be given precedence over tools in the global search path. Providing all the dependencies a project needs in its require-dev section enhances build reproducibility, and makes it possible to simply “check out and go” when picking up a new project. If a project advertises all of the common build steps, such as test and perhaps deploy as Composer scripts, it will make it easier for new contributors to learn how to build and test it.

Whatever mechanism your project uses for these sorts of tasks should be clearly described in your README or CONTRIBUTING document. Providing a wrapper script is a nice service, as it insulates the user from changes to the default parameters required to run the tests. In the Linux world, the sequence of commands make, make test, make install has become a familiar pattern in many projects. Make has a rich history, and is often the best choice for managing project scripts. For Composer-based projects that are pure PHP, and include all of their test and build tools in the require-dev section of their composer.json file, it makes a lot of sense to use Composer scripts as a lightweight replacement for Make. Doing this allows contributors to your project to get started by simply running git clone and composer install. This works even on systems that do not have make installed.

An example of a scripts section from such a project is shown below:

{
    "scripts": {
        "phar": "box build .",
        "cs": "phpcs --standard=PSR2 -n src",
        "cbf": "phpcbf --standard=PSR2 -n src",
        "unit": "phpunit",
        "test": [
            "@unit",
            "@cs"
        ]
    }
}

The snippet above offers the following actions:

  • phar: build a phar using the box2 application.
  • cs: run the PHP code sniffer using PSR2 standards.
  • cbf: run the PHP code beautifier to correct code to PSR2 standards, where possible.
  • unit: run the PHP unit tests via phpunit.
  • test: run the unit tests and the code sniffer.

One thing to note about Composer scripts, however, is that you might notice some changes in behavior when you run certain commands this way. For example, PHPUnit output will always come out in plain unstyled text. Colored output makes it a lot easier to find the error output when tests fail, so let’s fix this problem. We can adjust our definition of the test command to instruct phpunit to always use colored output text, like so:
{
    "scripts": {
        "test": "phpunit --colors=always"
    }
}

There is another difference that may affect some commands: standard input will be attached to a TTY when the script is run directly from the shell, but will be a redirected input stream when run through Composer. This can have various effects; for example, Symfony Console will not call the interact() method if there is no attached TTY. This could get in your way if, for example, you were trying to write functional tests that test interaction, as the consolidation/annotated-command project does. There are a couple of options for fixing this situation. The first is to explicitly specify that standard input should come from a TTY when running the command:

{
    "scripts": {
        "test": "phpunit --colors=always < /dev/tty"
    }
}

This effectively gets Symfony Console to call the interact() method again; however, the downside to this option is that it is not as portable: composer test will no longer work in environments where /dev/tty is not available. We can instead consider a domain-specific solution tailored for Symfony Console. The code that calls the interactive method looks like this:

if (!@posix_isatty($inputStream) && false === getenv('SHELL_INTERACTIVE')) {
    $input->setInteractive(false);
}

We can therefore see that, if we do not want to provide a TTY directly, but we still wish our Symfony Console command to support interactivity, we can simply define the SHELL_INTERACTIVE environment variable, like so:

{
    "scripts": {
        "test": "SHELL_INTERACTIVE=1 phpunit --colors=always"
    }
}

That technique will work for other Symfony Console applications as well. Note that the SHELL_INTERACTIVE environment variable has no influence on PHPUnit itself; the example above is used in an instance where PHPUnit is being used to run functional tests on a Symfony Console application. It would be equally valid to use putenv() in the test functions themselves.
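
For example, a minimal sketch of the putenv() variant inside a PHPUnit test (the class name and the elided setup are illustrative):

<?php

class InteractiveCommandTest extends \PHPUnit_Framework_TestCase {

  public function testCommandStaysInteractive() {
    // Tell Symfony Console to stay interactive even though stdin is
    // not a TTY when running under PHPUnit.
    putenv('SHELL_INTERACTIVE=true');
    // ... set up the console application and exercise the command here ...
    $this->assertSame('true', getenv('SHELL_INTERACTIVE'));
  }

}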

That is all there is to Composer scripts. This simple concept is easy to implement, and beneficial to new and returning contributors alike. Go ahead and give it a try in your open-source projects—who knows, it might even increase contribution.


Topics Development, Drupal, Drupal Planet
Nov 08 2016
Nov 08

Drupal sites with events functionality often have to allow their users to export events to their personal calendars. On a recent Drupal 8 project we were asked to integrate the 3rd-party service Add to Calendar with the site's events, and having found no formal integration of the widget with Drupal, we developed and contributed this module. The widget provided by Add to Calendar supports exporting dates/events to iCalendar, Google Calendar, Outlook, Outlook Online and Yahoo Calendar.

add-to-calendar-blue

 

Why use Add To Calendar Service?

  • The Add to Calendar module provides a widget to export events.
  • With the Add to Calendar module, you can create an event button on a page and allow guests to add the event to their calendars.

How Does the Add to Calendar Module Work?

The Add to Calendar module provides third-party field formatter settings for DateTime fields. The module internally uses the services provided by http://addtocalendar.com to load a free "Add to Calendar" button for event pages on your website and in email. Clicking this button exports the event to the corresponding calendar service, with the proper information, in the next tab, where the user can add the event to their calendar. Besides that, it provides a handful of configuration options for a really flexible experience, allowing you to use your own datetime format along with the Add to Calendar button.
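
Under the hood this relies on Drupal 8's third-party settings mechanism for field formatters. As a rough sketch of that technique (the module and setting names below are illustrative, not the Add to Calendar module's actual code), a module can graft its own checkbox onto every datetime formatter like so:

<?php

use Drupal\Core\Field\FieldDefinitionInterface;
use Drupal\Core\Field\FormatterInterface;
use Drupal\Core\Form\FormStateInterface;

/**
 * Implements hook_field_formatter_third_party_settings_form().
 */
function my_module_field_formatter_third_party_settings_form(FormatterInterface $plugin, FieldDefinitionInterface $field_definition, $view_mode, $form, FormStateInterface $form_state) {
  $element = array();
  // Only decorate formatters on datetime fields.
  if ($field_definition->getType() == 'datetime') {
    $element['show_button'] = array(
      '#type' => 'checkbox',
      '#title' => t('Show Add to Calendar'),
      '#default_value' => $plugin->getThirdPartySetting('my_module', 'show_button', FALSE),
    );
  }
  return $element;
}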

Using Add to Calendar

  1. Download and enable Add to Calendar module (https://www.drupal.org/project/addtocalendar)

  2. To add an Add to Calendar button to any datetime field, enable the “Show Add to Calendar” checkbox found in the format configuration on the Manage Display page of the desired content type.

add-to-calendar-manage-display

 

  3. The following configuration options are available:

  • Style: three basic styles are available: Basic, Blue and Glow Orange.
  • Display Text: text for the display button.
  • Event Details: the module provides three options here; you may opt for static data, a tokenized value, or any field value specific to the current entity.
  • Privacy: use "public" for free access to event information, or "private" if the event is closed to public access.
  • Security Level: specifies whether the button link should use http or https.
  • Calendars to show: select the calendars to be enabled for the display.

4. Save the settings and visit the content display page.

Developer Support

Devs have the option to add an "Add to Calendar" button anywhere on the website by following the steps below:

1. Include the base library ('addtocalendar/base') for basic Add to Calendar functionality. Optionally, you may also include one of the following style libraries for styling the display button:

  • 'addtocalendar/blue'
  • 'addtocalendar/glow_orange'
$variables['#attached']['library'][] = 'addtocalendar/base';

2. Place event data on the page as:



<div class="addtocalendar">
  <var class="atc_event">
    <var class="atc_date_start">2016-05-04 12:00:00</var>
    <var class="atc_date_end">2016-05-04 18:00:00</var>
    <var class="atc_timezone">Europe/London</var>
    <var class="atc_title">Star Wars Day Party</var>
    <var class="atc_description">May the force be with you</var>
    <var class="atc_location">Tatooine</var>
    <var class="atc_organizer">Luke Skywalker</var>
    <var class="atc_organizer_email">[email protected]</var>
  </var>
</div>

For further customization of this custom button, see the Event Data Options section at http://addtocalendar.com/.

3. This creates an "Add to Calendar" button for your website.

 

Nov 05 2016
Nov 05

A year ago I proposed a session for Drupalcon Mumbai and Drupalcon New Orleans, called “The best of both worlds”. It promised to show attendees how to write Drupal 8 code for Drupal 7 sites. I never ended up giving the session, but this week I got an email asking for more information. So in case it ever comes up again, here’s my own collection of resources on the subject.

The big improvement that’s hard for D7 developers to get used to is injected services. The service container module makes that possible in D7. The brilliant FabianX wrote it to make his life easier in writing render cache, and his is always a good example to follow! This module creates a service container for D7, which you use just like the container in D8. You can write independent, OO code that is unit testable, with service dependencies declared in a YAML file. Note that you will also need the registry autoload module to get PSR-4 namespaced autoloading!
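
To give a flavour of what this enables, here is a minimal sketch: an ordinary injected-service class, with the corresponding (illustrative) services YAML shown in the comment. Check the service container module's documentation for the exact registration mechanics:

<?php
// Assumed services definition, D8-style (all names illustrative):
//
//   services:
//     my_module.greeter:
//       class: Drupal\my_module\Greeter
//       arguments: ['@database']

namespace Drupal\my_module;

class Greeter {

  protected $database;

  public function __construct($database) {
    $this->database = $database;
  }

  public function greet($uid) {
    $name = $this->database
      ->query('SELECT name FROM {users} WHERE uid = :uid', array(':uid' => $uid))
      ->fetchField();
    return 'Hello ' . $name . '!';
  }

}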

I just mentioned unit testable code as a benefit of the service container. To be honest, this is a little tricksy in Drupal 7. For my own custom work I tend to isolate the test environment from the rest of Drupal, so I don’t have to deal with everything else. Again, I followed Fabian’s example there by looking at how render cache does its tests. If you do want better integration, there is a good Lullabot post that talks about (more) proper PHPUnit integration: https://www.lullabot.com/articles/write-unit-tests-for-your-drupal-7-code-part-1.
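
To make that concrete, here is a minimal sketch of an isolated test for the hypothetical Greeter class from the previous example; the hand-rolled fakes stand in for Drupal's database connection, so nothing needs to be bootstrapped:

<?php

use Drupal\my_module\Greeter;

// Hand-rolled fakes: just enough surface for what Greeter calls.
class FakeStatement {
  public function fetchField() {
    return 'Alice';
  }
}

class FakeConnection {
  public function query($query, array $args = array()) {
    return new FakeStatement();
  }
}

class GreeterTest extends \PHPUnit_Framework_TestCase {

  public function testGreetUsesTheInjectedConnection() {
    $greeter = new Greeter(new FakeConnection());
    $this->assertSame('Hello Alice!', $greeter->greet(1));
  }

}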

Next on my list is Composer-managed dependencies. The Acquia developer blog has a great post about using Composer Manager for this in D7. This is a huge win for a lot of custom modules, and very easy.

Last is plugins. The rest of this list is in no particular order, but I left plugins for last because I think this isn’t actually necessary in D7. Personally I use modules' own hooks and just autoload independent classes. You might consider using plugins instead if you’re going to write several plugins for the same module. In any case, Lee Rowlands has the go-to blog post about this.
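
As a rough sketch of that hooks-plus-autoloaded-classes pattern (all names illustrative):

<?php
// With registry_autoload enabled, this class lives at
// my_module/src/Processor/TitleShortener.php and is found via its
// PSR-4 namespace. A plain D7 hook in my_module.module dispatches to it:
//
//   function my_module_preprocess_node(&$variables) {
//     $shortener = new \Drupal\my_module\Processor\TitleShortener();
//     $variables['title'] = $shortener->shorten($variables['title']);
//   }

namespace Drupal\my_module\Processor;

class TitleShortener {

  /**
   * Truncates a title on a word boundary and appends an ellipsis.
   */
  public function shorten($title, $length = 40) {
    return truncate_utf8($title, $length, TRUE, TRUE);
  }

}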

All together, you can combine these approaches to write code for D7 with the biggest Dx features of D8: service injection, phpunit testing, composer libraries, and plugins. Note that each of these blog posts assumes different workarounds for all the other functionalities… but they should help you get an understanding of how to use that particular Dx improvement in 7.

When I wrote that session proposal, I thought of this as a good way for D7 developers to learn D8 practices gradually, one at a time. I no longer think that’s true. Mostly, there are so few working examples of D7 code using these practices, that it’s quite hard to get your stuff working. This is particularly hard when you’re just learning about the concept in the first place! Personally, I could mess around with this stuff and make my life harder with it in D7. But I couldn’t really get the best advantage out of them until I had better examples. My best learning aids were the examples in D8 core, and the code scaffolding available through Drush and Drupal console.

But now that I’m comfortable with the concepts… I would absolutely use these approaches in D7 work. You know, if I’m FORCED to work in the old system. :)

One last aside here: it is easy to fall into the mindset that Drupal 8 practices are better just because they’re newer. This is simply not true. These practices are not handed down from heaven, after all! When you have the rest of the D8 architecture in place, certain kinds of code tasks are much easier. That’s why we like developing for it so much more. But other (less common, IMO) tasks are harder. And doing any of this in D7 means you have to put the architecture in place, too. That’s a lot of time, and it’s only worthwhile if you’re going to use the particular strengths of these practices.

So if it looks like one of these D8 practices will make your life easier for a particular task in D7, then by all means use these approaches to get there. Composer manager has a particularly low bar - it’s so easy to use, and makes so many tasks easier, it’s a good approach to many tasks. But if I ever catch you implementing service container to get two lines of code into a form_alter, I will come to where you work and slap your hands off the keyboard.

Happy coding!

Nov 03 2016
Nov 03

Drupal configuration is the all-important glue that instructs the Drupal core and contrib code how to operate in the context of the current web application. In Drupal 7, there was no formal configuration API in core. The ctools contrib module provided an exportables API that was widely implemented, but was not universally supported. Drupal 8 has greatly improved on this state of affairs by providing the Configuration Management API in core. Now, configuration can be handled in a uniform and predictable way. During runtime, configuration items exist in the database, as always, but may be exported to and imported from the filesystem as needed.

These synchronization operations by default happen in the CONFIG_SYNC_DIRECTORY. The location of this directory is defined in the settings.php file. If a config sync directory is not defined when the Drupal installer runs, it will create one inside of the files directory. Because configuration files may contain sensitive information, Drupal takes measures to protect the location where the configuration files are placed, to prevent a situation where an outside party might be able to read one of these files with a regular web request. There are two primary techniques employed:

  1. The name of the configuration folder is randomly generated, making the path to the configuration files impractical to guess.

  2. A .htaccess file is written to the directory, so that sites that use Apache, at least, will not serve files stored inside it.

While these measures provide a reasonable level of protection, an even better solution is to place the configuration files entirely outside of the web server’s document root, so that there is absolutely no way that the configuration files can be addressed. It is easy to change the location of the sync directory; this process is described in the drupal.org documentation page.

Your configuration files should be committed to your git repository, so, before you move your configuration files, you should ensure that you are working with a site that is utilizing a relocated document root. An example project to do this is presented in the blog post Using Composer with a Relocated Document Root on Pantheon.

To specify a different location for your configuration files, you can redefine the $config_directories variable to place your configuration above your Drupal root by adding the following code to your settings.php file:
 
/**
 * Place the config directory outside of the Drupal root.
 */
$config_directories = array(
  CONFIG_SYNC_DIRECTORY => dirname(DRUPAL_ROOT) . '/config',
);

On a Pantheon site, you should make sure that you add this code after the settings.pantheon.php is included; otherwise, the CONFIG_SYNC_DIRECTORY will be overwritten with the Pantheon default value. Also, you need to ensure that the configuration directory exists before you change this variable in your settings file. If you already have an extant configuration directory, you can simply git mv it to its new location.
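In other words, the order within settings.php matters. A sketch of the intended layout (your settings.php on Pantheon will already contain the Pantheon include):

/**
 * Pantheon's defaults load first...
 */
include __DIR__ . "/settings.pantheon.php";

/**
 * ...so that our config location, defined afterwards, wins.
 */
$config_directories = array(
  CONFIG_SYNC_DIRECTORY => dirname(DRUPAL_ROOT) . '/config',
);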

$ git mv web/sites/default/files/config .

That’s really all there is to it. Once your configuration directory has been relocated, all configuration management operations will continue to work the same way that they always have. If you are using Drupal 8 with a relocated document root, relocating your configuration files is something that you should be doing.


Oct 30 2016
Oct 30

Being able to analyze and monitor - with ease - the performance of your application is a key part of the success of any web based project. Not only because a slow site might negatively affect conversions, but because, as Drupal 8 shifts away from websites and more into complex web based applications, specific business transactions are becoming more important. Considering Drupal's historically poor performance when it comes to "backend" operations, having proper profiling tools is a must.

The RPS (requests per second) metric becomes meaningless in applications where user interaction is the norm, and is only relevant now for brochureware.

 

In this post I will be discussing the Tideways profiler: a free and open source drop-in replacement for XHProf and UProfiler that supports PHP 7 on both Windows and Linux, and is actively maintained. But it is more than just a drop-in replacement: to keep the business running, Tideways - the company behind the free and open source PHP extension - provides a cloud service (very much like the proprietary Blackfire.io) where you can collect, monitor and analyze your profiling sessions. If you want to stay on the cheap side, you can still use any of the free XHProf- and UProfiler-compatible tools such as XHGUI.

Why is Tideways important? Because without it there would be no actively maintained, free and open source profiling tool that you can use with PHP 7. Worth mentioning here is that you can still profile with the XDebug extension (also free and open source), but it is not suitable for continuous performance monitoring nor for production use. And to be honest, it is a pain to use even in development environments, as the application becomes orders of magnitude slower when profiling is enabled.

At some point a big PHP "customer" called Facebook realized that they needed a tool to profile PHP software that could be deployed to production environments. Facebook developed such a tool in-house and open sourced it somewhere between the end of 2008 and the start of 2009. They worked on it for a couple of years until the project was abandoned in 2013. The original PECL extension can be found here.

Then a fork of XHProf called UProfiler took the lead, maintained by Friends of PHP, a small group of individuals that includes Fabien Potencier and Pierre Joye (among others).

During those years, integrations with XHProf/UProfiler popped up for almost every platform (such as the Drupal XHProf module), as well as external tools to collect, analyze and monitor profiling data, such as XHGUI.

But as projects mature, and the individuals and companies behind them start to seek feasible business models to keep the business running (because just giving everything away simply won't work), things start to change. And so it happened with UProfiler. With Fabien being the main maintainer and seeking to monetize the hard-built PHP ecosystem, they released the proprietary Blackfire.io, knowing that the arrival of PHP 7 would mean that - unless someone big took the lead and upgraded the current profiler - people would rather pay a handful of bucks for a proprietary solution (as PHP had now found its way into more serious business, where being cheap/free is not important anymore). It was obvious that they had no interest in anyone keeping up the work on UProfiler or forking it out, otherwise this thread would not have been locked.

Then Tideways was born. A small company with talented and dedicated individuals that forked the UProfiler extension, made it PHP7 compatible, added full Linux and Windows support and built a cloud based business model while keeping the PHP Extension free and fully backwards compatible with XHprof and UProfiler tools.

If you deploy the free PHP extension with the Tideways daemon, profiling data can be collected directly on your production environments (without affecting performance) and sent to the Tideways cloud service to be analyzed.

The Tideways website already contains detailed instructions on how to set up the Tideways extension and daemon on Linux. They also give instructions for Windows, but I feel those are a bit lacking, so I will expand on them here.

The first thing you need is the Tideways PHP extension, which you can download from the CI service AppVeyor:

https://ci.appveyor.com/project/tideways/php-profiler-extension/history

Look for the latest successful build (the most recent one with the "green light"):

Then choose which build you need for your specific platform (PHP Version, Thread Safety and x86/x64):

Then navigate to the artifacts tab, where you will find the download link for the compiled PHP extension:

As usual, download php_tideways.dll into your extension folder (/ext) and enable it in your php.ini file:

extension=php_tideways.dll

After enabling the PHP extension in PHP.ini you should see it in a phpinfo() dump:

Download the Tideways.php PHP library from here:

https://github.com/tideways/profiler/blob/master/Tideways.php

And store it somewhere in your system, such as d:\_Webs\xxx\Tideways.php

Now configure the extension in your PHP.ini file:

extension=php_tideways.dll
tideways.auto_prepend_library=0
auto_prepend_file = D:\_Webs\xxx\Tideways.php
tideways.connection=tcp://127.0.0.1:8136
tideways.api_key=MYAPIKEY
tideways.sample_rate=50
tideways.collect=profiling
tideways.monitor=full

The tideways.api_key value here should be the API key for an application that you have set up through the Tideways cloud service:

https://app.tideways.io

They offer 30-day trials, so you can easily test this without commitment.

To get your environment to send data to the Tideways cloud service you need to deploy the Tideways daemon, which you can download from here:

s3-eu-west-1.amazonaws.com/qafoo-profiler/downloads/testing/tideways-windows-4.0.1p3.zip

If you are just trying out the profiler, you can start the daemon manually from the console using:

D:\your_path\daemon_windows_amd64.exe --address="127.0.0.1:8136"

 

But if you are going to deploy this on production environments you need to set up the daemon as a Windows service.

Unfortunately, as the Tideways daemon is not a native Windows service, the recommended way to deploy it as a service is to use the Non-Sucking Service Manager (NSSM):

NSSM will ensure that the daemon is up and running, and will restart it in case it crashes or gets stuck.
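Registering the daemon with NSSM looks roughly like this (the service name is arbitrary; the path and address are taken from the manual example above):

C:\> nssm install TidewaysDaemon "D:\your_path\daemon_windows_amd64.exe" --address="127.0.0.1:8136"
C:\> nssm start TidewaysDaemon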

Remember to let the daemon through your firewall, otherwise it won't be able to report data to the Tideways cloud service.

The rest of the Tideways setup instructions are much the same as the ones for Linux.

You should also get ahold of the Tideways Chrome extension. Usually in production environments you don't capture full call traces (for performance reasons) and you only profile a percentage of the requests (see the Tideways official setup guide for more details). Using the Chrome extension you can tell the profiler to capture full traces during a period of time, directly from your browser.

After deploying the Tideways extension and daemon, you should install the Tideways Drupal module.

Once enabled, you will see a small report in your status page:

There is nothing more to do from Drupal. Under the hood, the module passes meaningful information about each request to the Tideways extension, so that you can find useful detail in the Tideways cloud reports.

Launch your website in Chrome and, using the Tideways Chrome extension, start a full trace capturing session:

 

Then the most recent captured traces from the manual session will start showing up (right above the regular continuous monitoring data):

Choose any specific business transaction and access the full data:

Out of the box you get full traces, callgraph, timelines and much more. If you need anything custom, you can always add custom instrumentation.

To get a deeper look into the possibilities of Tideways, use the official documentation.

Oct 25 2016
Oct 25

The CSSGram module supplements the Drupal image styling experience by making Instagram-like filters available for your Drupal 8 site's images. It does this with the help of the CSSGram library.

The beauty of this module is that it uses plain CSS to beautify your images.


A few sample CSSGram filters applied to an image.

How does the CSSGram module work?

The CSSGram module uses the CSSGram library to add filter effects via CSS to image fields. It extends the field formatter settings to add an image filter option to a particular field, which means the filters can be applied on top of the existing image formatters and image style presets. This lets you use your desired image preset together with a CSSGram filter.
 

Using CSSGram

  1. Download and enable the CSSGram module (https://www.drupal.org/project/cssgram).

  2. Visit the Manage Display page of your content type and, for the desired image field, click on the settings link in the format column.

  3. The Select Filter option lets us choose from the available image filters. Just select the desired image filter and hit the update button.

  4. Save the settings and visit the content display page.

Developer Support

Devs can use these filters anywhere on the site by simply attaching the 'cssgram/cssgram' library and then applying any of the available CSS filter classes to the wrapper element.


function mymodule_preprocess_field(&$variables) {
  // Add the desired filter class (append it, so existing classes are kept).
  $variables['attributes']['class'][] = 'kelvin';
  // Attach the CSSGram library.
  $variables['#attached']['library'][] = 'cssgram/cssgram';
}
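The same idea works for any render array you build yourself: attach the library and put one of the filter classes on a wrapper element. A minimal sketch ('walden' is just another of the library's filter classes, and $image is assumed to be an existing image render array):

$build = [
  '#type' => 'container',
  // Any CSSGram filter class on the wrapper element.
  '#attributes' => ['class' => ['walden']],
  // Attach the CSSGram library so the filter CSS is loaded.
  '#attached' => ['library' => ['cssgram/cssgram']],
  'image' => $image,
];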
Oct 24 2016
Oct 24


You know the old saying: “This is how the world ends: not with a bang, but with a misplaced DROP TABLE.” Working directly with Drupal 7’s database is an arduous task at best.  It’s a sprawling relational system and it uses many space- and memory-saving tricks to be as speedy as possible.  Thankfully, there is a robust system of functions built into Drupal to help you change almost any setting from code–perfect if you want to automate changes upstream and Features doesn’t do it for you.  Let’s go over a situation in which you may have been utilizing some of these functions.

Let’s say you finished your product (congratulations!), launched, and are onto fixing bugs and planning exciting new features for the future.  You’re knocking out bugs left and right like some high-flying Drupal ninja when you discover that using a field collection with conditional fields causes the field collection data not to save, and all of your metadata gets erased when certain conditions fire.  With Cthulhu’s hot breath on your neck, you talk to the client and discover a ray of hope: you don’t actually need a field collection there; a normal set of Drupal fields will do.  How do we go about creating the new fields, copying existing data, and deleting the old fields?

The first thing we do is create the new fields and attach them.  For this, we’ll need two functions: ‘field_create_field()’ and ‘field_create_instance()’.  Both of these take an array of settings: field_name and type are what we need for creating the field (plus cardinality if you want the field to hold multiple values); field_name, entity_type, and bundle are required for creating the instance, though you will likely also want label, which otherwise defaults to the machine name.  So, we should have something that looks like this:

$name = array(
  'field_name' => 'photographer_name',
  'type' => 'text',
);
field_create_field($name);

$instance = array(
  'field_name' => $name['field_name'],
  'entity_type' => 'node',
  'bundle' => 'article',
  'label' => 'Name',
);
field_create_instance($instance);

If you go check out node/add/article, you should see your new text field there.  Congrats!  Next, we need to get the data from the old fields and copy it into our new field.  For this, we’ll rely on the nifty function ‘entity_load()’.  This takes two arguments: the entity type and an array of IDs.  Since we are getting field collection items, we know the entity type is ‘field_collection_item’.  We’ll need the IDs, but we’ll also need the field collection value that references the fields in each collection for later, so we’ll get them both at once.  It might be tempting to use ‘entity_load()’ to get them, but in this case you are quite safe using straight SQL, which also happens to be significantly faster.  That looks like this:

$entity_ids = array();
$field_collection_ids = array();
// Select the field collection id and the attached entity id from the db.
$query = db_query('SELECT field_scald_producer_value, entity_id FROM {field_data_field_scald_producer}');
$results = $query->fetchAll();
// Separate the ids.
foreach ($results as $result) {
  $field_collection_ids[] = $result->field_scald_producer_value;
  // We need to reference the entity ID by the field collection value for simplicity later.
  $entity_ids[$result->field_scald_producer_value] = $result->entity_id;
}
// It’s possible that you might get duplicate field collection IDs, so we make sure they are all unique.
$field_collection_ids = array_unique($field_collection_ids);
// Load all of the field collection entities.
$field_collection_results = entity_load('field_collection_item', $field_collection_ids);

Now that we have all of the entity IDs and field collection IDs, we can get to the fun part: copying data! (You know you have been doing this too long when that is exciting.) What we want to do is loop through the field collection IDs, load the entity (that has the new field on it) by the ID associated with the collection, copy the data from the collection to the new field, and save.  It seems like a lot, but it’s fairly simple:

foreach ($field_collection_ids as $field_collection_id) {
  // Load the entity the field collection is attached to.
  $entities = entity_load('node', array($entity_ids[$field_collection_id]));
  $entity = $entities[$entity_ids[$field_collection_id]];
  // Copy the data from the collection field to the new field.
  $entity->photographer_name['und'][0]['value'] =
    $field_collection_results[$field_collection_id]->field_producer_name['und'][0]['value'];
  // Save!
  entity_save('node', $entity);
}

A word of warning: depending on how many entities you are processing, this could take a long time.  As of Drupal 7.34, there is a memory leak in entity_save()–this means that each save will take slightly longer than the last. This is not a problem if you have only a few hundred fields, but when you get up into five and six digits, this script will take many hours. At that point, unless you have the time (and/or can run the script as a process in the background), you might want to consider investigating other options.

Okay, so the data is copied, the nodes are saved, and the elder gods have hit the snooze button.  The last thing you have to do is delete the old field.  Except we’re not going to do that, at least not yet. Instead, we’re going to delete the instances of the fields.  This preserves the old field collection data, but removes the fields from the edit forms. This way, if something goes wrong, you don’t lose the data in the old fields and can try again if needed. You can go back at a later time, if you wish, after you have confirmed that everything is correct, and delete the fields. Luckily, this is the easy part:

// Load the full instance definition and delete just the instance.
// Passing FALSE keeps the underlying field (and its data) around.
$instance = field_info_instance('node', 'field_scald_producer', 'article');
field_delete_instance($instance, FALSE);

And that’s it, crisis averted!  You no longer lose data and no longer have to worry about supernatural madness and death!  All you need to do now is run your script upstream with ‘drush php-script’ and watch the magic.

This sort of scripting can be daunting at first glance, but Drupal’s rich entity API can keep you from pulling out your hair, or from inadvertently causing an otherworldly alien intelligence to rise from the deep.  There are many more functions to take advantage of, and just about anything you can set with a click in the interface you can set in code–perfect for automation or locked-down production environments.

Happy Drupaling!

Oct 23 2016
Oct 23


In this blog post I will present how, in a recent e-commerce project built on top of Drupal 7, we made Drupal 7, SearchAPI and Commerce play together to efficiently retrieve grouped results from Solr in SearchAPI, with no duplication of indexed data.

We used the SearchAPI and FacetAPI modules to build a search index for products - so far so good: available products and product variations can be searched and filtered, including by a set of pre-defined facets. Then a new need arose from our project owner: provide a list of products where the results should include, in addition to the product details, a picture of one of the available product variations, while keeping the ability to apply facets on products for the listing. Furthermore, the product variation picture displayed in the list must also match the filter applied by the user: this with the aim of not confusing users, and of providing a better user experience.

An example use case here is simple: allow users to get the list of available products and be able to filter them by the color/size/etc field of the available product variations, while displaying a picture of the available variations, and not a sample picture.

For the sake of simplicity and consistency with Drupal's Commerce module terminology, I will use the term “Product” to refer to any product-variation, while the term “Model” will be used to refer to a product.

Solr Result Grouping

We decided to use Solr (the well-known, fast and efficient search engine built on top of the Apache Lucene library) as the backend of the e-commerce platform: the reason lies not only in its full-text search features, but also in the possibility of building a fast retrieval system for the huge number of products we were expecting to be available online.

To solve the request about the display of product models, facets and available products, I intended to use the Solr feature called Result Grouping, as it seemed suitable for our case: Solr is able to return just a subset of results by grouping them on a single-valued field (previously indexed, of course). The facets can then be configured to be computed from the grouped set of results, from the ungrouped items, or just from the first result of each group.

This handy Solr feature can be used in combination with the SearchAPI module by installing the SearchAPI Grouping module. The module allows results to be returned grouped by a single-valued field, while still building the facets on all the results matched by the query; this behavior is configurable.
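For reference, what gets sent to Solr boils down to a handful of result grouping parameters, roughly like the following (the field name is illustrative):

group=true
group.field=ss_field_model   # the single-valued field to collapse on
group.limit=1                # return one document per group
group.facet=true             # compute the facets on the grouped result
# (or group.truncate=true to facet only on the first result of each group)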

That allowed us to:

  • group the available products by the referenced model and return just one model;
  • compute the attribute's facets on the entire collection of available products;
  • reuse the data in the product index for multiple views based on different grouping settings.

Result Grouping in SearchAPI

Due to some limitations of the SearchAPI module and its query building components, this plan was not feasible with the current configuration, as it would have required us to create a copy of the product index just to apply a specific Result Grouping configuration for each view.

The reason is that the features of SearchAPI Grouping are implemented on top of the “Alterations and Processors” functions of SearchAPI. These are a set of specific functions that can be configured and invoked both at indexing time and at querying time by the SearchAPI module. In particular, Alterations allow you to programmatically alter the content sent to the underlying index, while Processor code is executed when a search query is built and executed and the results are returned.

Those functions can be defined and configured only per-index.

As visible in the following picture, the SearchAPI Grouping module can be configured only in the index configuration, not per query.

SearchAPI: processor settings

Image 1: SearchAPI configuration for the Grouping Processor.

As the SearchAPI Grouping module is implemented as a SearchAPI Processor (it needs to alter the query sent to Solr and to handle the returned results), it would have forced us to create a new index for each different result grouping configuration.

Such a limitation would introduce a lot of useless data duplication in the index, with a consequent decrease in performance when products are saved and later indexed into multiple indexes.

In particular, the duplication is more evident as the changes performed by the Processor are merely an alteration of:

  1. the query sent to Solr;
  2. the handling of the raw data returned by Solr.

This shows that there would be no need to index the same data multiple times.

Since the possibility of defining per-query processors sounded really promising, and such a feature could be used extensively in the same project, a new module has been implemented and published on Drupal.org: the SearchAPI Extended Processors module (thanks to SearchAPI's maintainer, DrunkenMonkey, for the help and review :) ).

The Drupal SearchAPI Extended Processor

The new module extends the standard SearchAPI behavior for Processors, letting admins configure the execution of SearchAPI Processors per query and not only per index.

By using the new module, any index can now be used with multiple, different Processor configurations; no new indexes are needed, thus avoiding data duplication.

The new configuration is exposed, as visible in the following picture, while editing a SearchAPI view under “Advanced > Query options”.

The SearchAPI Processors can be altered and redefined for the given view; a checkbox allows you to completely override the current index settings rather than just providing additional processors.

Drupal SearchAPI: view's extended processor settings

Image 2: View's “Query options” with the SearchAPI Extended Processors module.

Conclusion: the new SearchAPI Extended Processors module has now been used for a few months in a complex e-commerce project at Liip, and allowed us to easily implement new search features without the need to create multiple, separate indexes.

We are able to index product data in one single (and compact) Solr index, and use it with different grouping strategies to build product listings, model listings and model-category navigation pages without duplicating any data.

Since all those listings leverage the Solr filter query (fq) parameter to select the correct set of products to be displayed, Solr can make use of its internal caches - specifically the filterCache - to speed up subsequent searches and facets. This, in addition to the usage of only one index, allows caches to be shared among multiple listings, which would not be possible if separate indexes were used.

For further information, questions or curiosity, drop me a line: I will be happy to help you configure Drupal SearchAPI and Solr for your needs.

Oct 20 2016
Oct 20

We understand the importance of getting involved by contributing back and supporting the Drupal community. Drupal Contrib Days have in fact become a common practice in our agency. It's where we all split up into teams and each team collaborates on contributing back to Drupal core or to various projects or modules.

Drupal contrib day at the agency


Websolutions team in the middle of a Contrib Day session.

For example, during the last Drupal contrib session one of our teams worked on porting the Locker module to Drupal 8. Locker is a site authentication module that uses sessions to forbid access and serves as an additional layer to hide a Drupal site from the public.

Another team contributed to a project called Video Embed Screencast, a submodule of the Video Embed Field module, which offers a quick way to embed Screencast.com videos in Drupal. It creates a simple field type that enables you to embed videos from Screencast.com simply by entering a video URL.

Another team worked on a project called Commerce Neteller, a Neteller integration for the Drupal Commerce payment and checkout system that supports online payments using the Neteller API.

The next project we contributed to was the AudioField module. AudioField adds a new CCK field that allows you to upload audio files and automatically displays them in a selected audio player.

The last project we contributed to was JSON Editor, a tool based on the JSON editor library for viewing, editing, formatting, and validating JSON.

Actively contributing to the projects we use on a daily basis has become an essential part of what we do. Contributing is the foundation of open source, and contrib days are something that all of us look forward to. It helps projects move forward and stay competitive, it helps the entire Drupal ecosystem, and in the end it helps our customers.

Oct 17 2016
Oct 17

The information in this blog post has been superseded by: 

[DOC] Drupal 8 and Composer on Pantheon Without Continuous Integration

[DOC] Build Tools

Composer is the de facto dependency manager for PHP; it is therefore no surprise that it is becoming more common for Drupal modules to use Composer to include the external libraries they need to function. This trend has rather strong implications for site builders: once a site uses at least one module that uses Composer, it becomes necessary to also use Composer to manage the site.
 
To make managing sites with Composer easier, Pantheon now supports relocated document roots. This feature allows you to move your Drupal root to a subdirectory named web, rather than serving it from the repository root. To relocate the document root, create a pantheon.yml file at the root of your repository. It should contain the following:
 
api_version: 1
web_docroot: true

 
With the web_docroot directive set to true, your site will be served from the web subdirectory. Using this configuration, you will be able to use the preferred Drupal 8 project layout for Composer-managed sites established by the drupal-composer/drupal-project project. Pantheon requires a couple of changes to this project, though, so you will need to use the modified fork for Pantheon-hosted sites.

Installing a Composer-Managed Site

Pantheon has created an example repository derived from drupal-composer/drupal-project for use on Pantheon with a relocated document root. The URL of this project is:

https://github.com/pantheon-systems/example-drops-8-composer

There are two options for installing this repository: you may create a custom upstream, or you may manually push the code up to your Pantheon site.

Installing with a Custom Upstream

The best way to make use of this repository is to make a custom upstream for it, and create your Drupal sites from your upstream. The example-drops-8-composer project contains a couple of Quicksilver “deploy product” scripts that will automatically run composer install and composer drupal-scaffold each time you create a site. When you first visit your site dashboard after creating the site, you will see that the files created by Composer—the contents of the web and vendor directories—are ready to be committed to the repository. Pantheon requires that code be committed to the repository in order to be deployed to the test and live environments.

We’ll cover the workings of the Quicksilver scripts in a future blog post. In the meantime, you may either use the example-drops-8-composer project directly, or fork it and add customizations, if you are planning on creating several sites that share a common initial state.

Installing by Manually Pushing Up Code

If you don’t want to create an upstream yet, or if you are not a Pantheon partner agency, you can use the following Git instructions instead. Start off by creating a new Drupal 8 site; then, before installing Drupal, set your site to Git mode and do the following from your local machine:

$ composer create-project pantheon-systems/example-drops-8-composer my-site
$ cd my-site
$ composer prepare-for-pantheon

The “deploy product” Quicksilver scripts run during site create, so you will need to run composer install and composer drupal-scaffold yourself after you clone your site. Then, use the commands below to push your code up to the site you just created:

$ git init
$ git add -A .
$ git commit -m "web and vendor directory from composer install"
$ git remote add origin ssh://[email protected]:2222/~/repository.git
$ git push --force origin master

Replace my-site with the name that you gave your Pantheon site, and replace ssh://[email protected]:2222/~/repository.git with the URL from the middle of the SSH clone URL from the Connection Info popup dialog on your dashboard.

Copy everything from the ssh:// through the part ending in repository.git, removing the text that comes before and after. When you run git push --force origin master, you will completely replace all of the commits in your site with the contents of the repository you just created.

Updating a Composer-Managed Site

Once your site has been installed from this repository, you will no longer use the Pantheon dashboard to update your Drupal version. Instead, you will manage your updates using Composer. Updates can be applied either directly on Pantheon, by using Terminus, or on your local machine.

Updating with Terminus

To use Terminus to update your site, install the Terminus Composer plugin, placing it in your ~/terminus/plugins directory if you are using Terminus 0.x, or in your ~/.terminus/plugins directory if you are using Terminus 1.x. Using the newer version of Terminus is recommended.

Once you have the plugin installed, you will be able to run composer commands directly on your Pantheon site:
 
$ terminus composer my_site.dev -- update
 
Be sure that your site is in SFTP mode first, of course. Note that it is also possible to run other composer commands using the Terminus Composer plugin. For example, you could use terminus composer my_site.dev require drupal/modulename to install new modules directly on Pantheon.

Updating on Your Local Machine

If you have already cloned your site to your local machine, you may also run Composer commands directly on your site’s local working copy, and then commit and push your files up as usual.

Either way, you will find that managing your Drupal sites with Composer is a convenient option - one that, sooner or later, you will need to adopt. Give it a spin today, and see how you like the new way to manage Drupal code.


Oct 17 2016
Oct 17

The Locker module is a Drupal authentication tool originally developed by our team. It uses sessions to forbid access to visitors and to hide a Drupal website; a user is required to log in to gain access. While Locker isn’t a replacement for Drupal authentication, it serves as an additional layer to hide your Drupal website from the public. It’s an alternative to the HTTP Auth standard, recommended when your server doesn’t support HTTP Auth, when you don’t have permission to set it up, or when you want the additional features that Locker provides.

Drupal authentication use cases

There are multiple cases when you could use Locker:

  • An alternative to HTTP Auth
    Use Locker in case your server doesn’t support HTTP Auth or you don’t have permission to set it up.
  • Drupal maintenance mode replacement
    Use Locker instead of using Drupal maintenance mode.
  • Hiding a Drupal site from public on development and staging server
    Take proactive measures to keep your dev and staging sites from showing up to public and in Google search results.
  • White label development
    Customise Locker with your own brand, logo and identity.
  • Hiding a site from Google Analytics
    Use Locker to hide your Drupal site from Google Analytics.
  • Post-update site verification as an anonymous user
    Verify whether cache works correctly after an update.

How to use Locker module

After installing, access the Locker settings at admin/config/development/locker.

To lock your Drupal site:

  • Choose radio button "Yes"
  • Choose login option
  • Click "Submit"

WARNING: This will lock your Drupal site immediately! Use your credentials to gain access. If you forget your credentials you need to use drush unlock. Files will still be accessible via direct links!

To unlock your Drupal site:

  • Choose radio button "No"
  • Click "Submit"

(Screenshots: unlocking with username and password, unlocking with a passphrase, the sign-in form, and a locked site.)

Visit the official Locker module project page - https://www.drupal.org/project/locker

Oct 12 2016
Oct 12

Ron Huber: Proprietary software does a really good job of being everything to everybody. When somebody goes and pitches something for a proprietary side, they'll say yes to everything, where in open source, we’ll say, well we do this really well, and it'll integrate and that's no problem, but we don't sell it as the end-all. We sell it as it's a solid player and this is what we can do with it and we feel comfortable because that's just our way of approaching things. We're a community of good people. Where on the other side, when they sell the internet of things, there are platforms out there that are, "Oh, we're going to control the world," and they won't let you do anything but control the world. Everything has to go through their system. They can do everything and they get you on it and it's too late once you realize that it only gets you about 75% of the way there. We have to sell it differently now.

Tom Friedhof: This is where the argument of the open web and silos comes in, right? And obviously, Drupal's pushing for the open web, because with these silos - say your marketing platform is on Facebook, right? If you want to do anything above and beyond what Facebook allows you to do, you can't. You're stuck within their platform.

Jordan Ryan: That’s their audience.

Tom Friedhof: Exactly. And if you want to reach that audience, you have to pay for it.

Chris Stauffer: I believe that quote was, "If you didn't pay for it, you are the product."

Jordan Ryan: Right. That's very true. You know, all those Facebook followers that you have can go away, and it's a reality that a lot of small business owners and medium-sized enterprises don't think about - they don't realize that if that page goes away, the whole platform you spent all that money on is gone.

Chris Stauffer: I liked what both of you two just said a second ago, which was kind of that that's one of the main value adds of open-source. I had a conversation on Friday with a particular client - er, gentleman - that's been my client three times in the past, but his new start-up company is not my client. So he went with a quote/unquote open platform, and I'm putting air quotes around it, that would get him on all of these different devices. He's basically doing an MCM. So you have video platform - they did all of these different things, and he was really, really stoked, and really excited when he first told me about it. And now he's ready to go, and he's ready to make some changes, and they told him no, and he can't do anything. Whereas the initial platform that I was originally pitching him was, we'll start off with responsive web, like normal, we could throw an Android or iOS layer on top. It'll be nice and simple, and then you'll be able to do whatever you want to.

Ron Huber: You pivot when you need to pivot.

Chris Stauffer: Right, you pivot when you need to. And now he's sitting there going, "I can't get them to do the things I want them to do." And I'm like, "Dude, I would've told you you could do whatever you want." You know? I got no limitations - you want it? We’ll build it.

Tom Friedhof: But there's a cost to that as well.

Chris Stauffer: There is! There is.

Ron Huber: Because you don't benefit from the other hundred clients that are also asking for something.

Chris Stauffer: I mean, yeah. The initial bid for him was going to end up being about a hundred large to just kind of do a very simple CMS with a simple video object and couple simple video apps layered on top of that. And it did save him a hundred grand upfront, but now he's to a place where he wants to actually start monetizing his assets and actually start doing a lot of those different things that he's unable to do, and he's probably going to end up paying a hundred grand anyways. That kind of makes sense. Because he can't take it where he needs to.

Jordan Ryan: I'm not knocking anyone, but if you're building a business, you shouldn't be building on someone else's platform. You build your own.

Ron Huber: Very good point, right? Here you are, all your technology is owned by somebody else, and you're assuming - of course - that company's going to be around forever. You're also assuming that you're a large enough client that you can drive them to do what you need them to do. And, I don't know, most of the clients are not that big. We are actually building a Java application right now because, for one of our large media companies, the third-party system that they paid for went out of business.

Chris Stauffer: That sucks.

Ron Huber: Went out of business, they gave them the software, so they're running the software, but they have to replicate it, and they have to replicate it soon, because if it fails, there's no way they can touch it. It just goes down. So we're busy replicating the whole thing, so that they don't have this point of failure. And it's great, because we'll build it, it'll integrate into Drupal without a problem, which is a hundred of their other websites, and it'll be able to sit in there and integrate without any issue. But it's on a different technology, and very few other proprietary systems will allow another technology to come in and play nice. It's a very powerful tool, but you gotta pay the hundred thousand dollars upfront, which is killing us, because that's a big, big investment. If you can just sign on for twenty-five hundred dollars, and - boom, here I am - that's great.

Chris Stauffer: And that's basically what the guy did, by the way. I don't think it was twenty-five hundred, I think he paid ten for the whole platform.

Ron Huber: Ten, but maybe that's the business side of it. He should go ten, cause if you're starting a business, you put in a little bit of money - it's basically your MVP, and you get it tested, and then you move it over.

Chris Stauffer: For the time, I think he probably actually did make the right decision, but since he was successful, now it's the wrong decision.

Jordan Ryan: But he has to know that going into it, though.

Chris Stauffer: He did.

Jordan Ryan: The context of having a conversation with those kinds of people, and when we have those conversations - "Look, hosting service, build your own - this will probably give you runway for six months if you don't want to build it right now. And then prove that it works, and then come back to this later. But prove that it works, right?"

Chris Stauffer: He did get a second round of funding.

Ron Huber: There you go. People have a hard time with just the minimal viable product concept. And it's really not the developers, or the engineers; it's a CFO. The CFO wants it all done, cause he wants to write one check, and be done with it. As much as we tell them, "How do you know you're gonna need this six months from now? Cause you can't even tell us your requirements today. We're building you a platform for something that we're guessing at, you're asking us to guess at. We're building it, it's working, you get it running, and chances are six months, or eight months from now, you're going to realize - hey, this piece that's sitting over here is making me a lot of money, and I didn't put any effort at all into it. Okay, now I'm going to pivot and go that way." Well, you can't do that. If you build the whole dream - the two-year plan - upfront, then you've built the two-year plan and 50% of it's not being used. But sometimes, these executives - marketing, CFO, etc. - get so hung up on the overall, "We want to do it once.”

Chris Stauffer: "We want to do it once and it has to be done right the first time."

Jordan Ryan: This is the experience that we're supposed to have, right?

Ron Huber: Well, we do.

Jordan Ryan: That's a sales point for CFOs.

Ron Huber: Right. But getting people to buy into that is the hard part. And I don't know - this is where I think proprietary software really kills it, is because they already have a package. It might be twice as much, or a three-year commitment - which is just silly. They're selling ten-thousand dollars a month or a hundred-thousand dollars a month on something that's already built, and that's what we want, whether they use it or not. I've been looking at this, trying to figure this out for years, and I just don't have an answer for open source. But I do think it's still - that's a part of our big challenge, and where we're going with that.

Tom Friedhof: The value proposition is freedom, right? It's the open web. It's the freedom to do what we want to do with the applications we built. When obviously there's the cost.

Ron Huber: And ownership, right? They own it.

Chris Stauffer: I think one of the other things, too, that I've started to sell as a value proposition of open source is that there's no vendor lock-in either. I can honestly look a client in the eye and tell them, "Look, my boys follow the rules, and if, at the end of the project, you don't like me, hire Achieve. Because I guarantee his team could pick up my code, and just be like 'alright'." And they would just keep running, as long as we're following the Drupal rules and Drupal standards, there's no vendor lock-in.

Ron Huber: Better yet, you could hire internally. Anybody you feel like. We don't really want to do your maintenance, right? We want you to do your maintenance. We want to build your next ambitious goal, cause that's what we're really good at. But to do your maintenance, you should hire internally or a sub-contractor, or get India to do it.

Chris Stauffer: My point, though, was more that you're not locked in. I have this other client that hired an engineer to develop a 100% custom system, and the engineer was, on a scale of one to ten, about a two and a half or a three. And so the whole thing is completely a worthless platform, and now my team is going in and reverse-engineering the worthless platform, to then move them out of the worthless platform, into something that's solid. And they're literally having to pay, I'm going to call it a $30,000 tax, if you will, on just my boys figuring out what that last guy was thinking. And with Drupal, as long as my team followed the rules, I can look a client in the eye and guarantee them that will never happen. You could, to Ron's point, hire internal staff. You could hire a competitor, you could do anything you want to, and you're not locked in. Whereas the client that I was referencing that wrote the proprietary software, that only worked for them - that, realistically, was a couple hundred-thousand dollar mistake. Pretty much. Because by the time she's done, the cost of her business, the cost of my bills, the cost of the bills she had previously - all of those costs are just ridiculous, compared to if she would've just hired us to do it in Drupal to begin with.

Oct 10 2016
Oct 10

Description

Recently the Drupal Security Team has seen a trend of attacks utilizing a site misconfiguration.
This issue only affects sites that allow file uploads by non-trusted or anonymous visitors, and store those uploads in a public file system. Such files are publicly accessible, allowing attackers to point search engines and people directly to them on the site. The majority of the reports are based around the webform module; however, other modules are vulnerable to this misconfiguration as well.

For example, if a webform is configured to allow anonymous visitors to upload an image into the public file system, that image will then be accessible by anyone on the internet. The site could be used by an attacker to host images and other files that the legitimate site maintainers would not want made publicly available through their site.

To resolve this issue:

  1. Configure upload fields that non-trusted visitors, including anonymous visitors, can upload files with, to use the private file system.
  2. Ensure cron is properly running on the site. Read about setting up cron for Drupal 7 or Drupal 8.
  3. Consider forcing users to create accounts before submitting content.
  4. Audit your public file space to make sure that files that are uploaded there are valid.

Awareness acknowledgment

The Drupal Security Team became aware of the existence and exploits of this issue because the community reported this issue to the security team. As always, if your site has been exploited, even if the cause is a mistake in configuration, the security team is interested in hearing about the nature of the issue. We use these reports to look for trends and broader solutions.


This post may be updated as more information is learned.

Contact and More Information

The Drupal security team can be reached at security at drupal.org or via the contact form at https://www.drupal.org/contact.

Learn more about the Drupal Security team and their policies, writing secure code for Drupal, and securing your site.

Oct 06 2016
Oct 06

I think I was pretty well prepared and knew what to expect. A couple of blog posts from fellow Annertechies had helped me plan for it, especially Mark's Get the Most out of DrupalCon Dublin.

Having become a father for the second time only two weeks before the event and spending the previous fortnight on paternity leave, I really enjoyed sharing a full week with my otherwise distributed work colleagues and, shall I say, friends. We even had a headquarters: booth 901, which was only a few steps away from the Drupal Ireland one.

Monday

Most of Monday was spent doing the pre-note rehearsal. I must say I really enjoyed being part of it. I had a pretty small part: I was one of the "O'Drupals", the trad band that would be playing a couple of tunes. As the conference took place in Ireland and I knew how to play the Irish drum called the bodhrán, I thought it would be a good way to break the ice.

I also found myself collecting stickers from previous DrupalCons. I have no clue why I did that; I suppose I wanted to go back in time and somehow compensate for my absence from previous events. I guess I wanted my laptop completely covered with these stickers like all the committed Drupalists you see at the conference - they seem to have been everywhere! On second thoughts I decided to just keep the stickers, without trying to showcase something that didn't actually happen, so my laptop currently has only two stickers: DrupalCon Dublin and one for Drupal Ireland (which I happen to have designed myself).

Tuesday

Tuesday started early. I was there at 6 in the morning for a pre-note rehearsal. The last-minute rehearsal went well, as did the real performance. I think the O'Drupals played as well as we could have. A trad band consisting of 3 guitars (one of them electric), two bodhráns and an occasional flute here and whistle there didn't feel very traditional to me from the purist point of view, but hey, we did very very well. I think everybody enjoyed the music, and we were even sharing the stage with the one and only Dries Buytaert - not a bad place to be at the beginning of the second day of your first DrupalCon.

The highlight of the day for me was probably the keynote by Dries, right after the prenote; I really enjoyed it. I'm really looking forward to starting to use the new D8 block placement and menu settings tray - I think both are really going to improve site builders' experience. I also enjoyed his meaningful moments and how Drupal and its community actually improve people's lives. My lowlight would be "Streamlined Front-end Development with Pattern Lab and Twig", a session that I had really been looking forward to and that left me feeling like maybe I had too many expectations for this one.

I didn't make it to the welcome party that we, Drupal Ireland, had organised on a boat just outside the Convention Centre, but I heard the following morning that it was a great success and a lot of people went to have a pint of the black stuff with the local Drupal community.

Wednesday

A mistake (I think) I made was not attending the BoFs, going to the sessions instead. "21 things I learned with Twig & Drupal" by MortenDK was the most enjoyable so far. I really enjoyed how he explained that designers and developers think very differently, and how the themers kind of fall in the middle. By the end of his presentation I felt that I should really have been looking more often into the DrupalTwig Slack channel, and I have already decided that I have to go to Frontend United in Athens next May.

It is amazing how much I took in of almost everything he said, even though his non-stop presentation style reminded me of a Ramones concert, where songs are played at twice the original speed, each one linked to the next with a "one two three four". I have already watched this talk on YouTube again. This is actually one of the good things I found at this DrupalCon. Sometimes two sessions I wanted to attend overlapped: no problem, watch one later on YouTube. Sometimes I want to refresh something in particular, as during DrupalCon there is a lot to take in and energy and concentration levels reduce significantly as the day goes by: again YouTube saves the day.

As happened on Tuesday, I missed the evening side of the conference. The Realex Web Awards 2016 took place that evening, and ireland.ie - a collaborative project between Annertech, Big O Media, the Department of Arts, Heritage and the Gaeltacht, and others, and the first site I worked on at Annertech around nine months ago - was nominated for "Best Arts and Culture" website and "Most Beautiful Website in Ireland". And you know what? It won them both!

And then it was announced that ireland.ie had won one more award, the icing on the cake: the overall "Best Website in Ireland" award. I felt so proud of my fellow Annertechies and the rest of the collaborators when I heard the news; it had been a really enjoyable project to work on and it was great to see it rewarded.

And finally, just when it seemed things couldn't get any better, there was one more trophy for Alan, Anthony, Mark, Karen and Tom, my colleagues representing us at the event, to collect: the "Web Agency of the Year" award. I truly believe days like these don't come very often; it is such an honour for me to be part of such an amazing team.

Thursday

On Thursday I really enjoyed "Creating Layouts and Landing Pages for Drupal 8" by Suzanne Dergacheva, and how she explores different theming approaches for landing pages, such as using Paragraphs to define the calls to action, or creating a new content type for the calls to action and referencing it with an entity reference field using the Inline Entity Form.

I also learnt something about DrupalCons the hard way: if you really want to go to something, don't arrive just one minute before it starts, and don't stop to talk to everybody you know and meet on the way to the second floor. "Drupal 8 hidden power: Entity Reference as a component-based site builder", a session I had been highly anticipating as one not to be missed, already had a full room when I arrived, and I was forced to go somewhere else. I know, I can watch it on YouTube afterwards, but being there in person would have been A1.

As with the rest of the night events, I couldn't make the Trivia Night, but I really wish I had been at this historic occasion: Trivia Night taking place in the country where it was born, Ireland. And it was happening in a truly astounding venue, the Round Room at the Mansion House. This is something you can't really watch on YouTube. I will have to wait a whole year and catch it at DrupalCon Vienna.

Overall

In general it was a truly enjoyable and memorable experience, much more than I had anticipated. I know I didn't go to all the sessions I wanted to and that I missed the BoFs and most of the social side of the DrupalCon, but meeting great people at the stands, learning lots at the sessions, celebrating the awards and making new friends was more than enough to make it an extremely beneficial week.

I am already looking forward to the next DrupalCon in September 2017; it might be even better.

Oct 05 2016
Oct 05

What made this year's DrupalCon different was not the happy coincidence that saw Annertech scoop a raft of awards at the Realex Web Awards on the Wednesday (including "Web Agency of the Year"), but rather the engagement and involvement.

Before the 'Con even occurred, the local team was preparing content, answering questions and, in my case, writing the 24 Hour Guide to Dublin. Seeing my work laid out beautifully in print in the DrupalCon programme was an unexpectedly great pleasure!

As a local, I had the opportunity to MC a keynote Q&A session. That was enormous fun, made all the more enjoyable by a fantastic speaker, Emer Coleman. Coming off stage was a rush, lengthened by the deluge of interaction on Twitter as I was mentioned and our conversation bounced around the DrupalCon Twittersphere.

There was also a personal interest in the other two keynotes, as Annertechies took to the stage to chat to their speakers - Mark Conroy speaking to Dries at the "DriesNote" and Alan Burke chatting with Eduardo Garcia during the "Community Keynote". Suddenly it was not merely about entertainment any more - I cared. It had become relevant to me. I met Emer on Tuesday, in advance of the keynote, which meant that I was excited about it for most of a day of DrupalCon before even taking to the stage!

The DrupalCon prenote is always good fun - and I was several years in before I even discovered it! This year, taking part in it was very rewarding: getting to know Jam, Cam, Adam and the crew a bit better, rocking out with the O'Drupals band, playing endless 12-bar blues between rehearsals, and finally being part of a very entertaining half-hour show. I really felt that I was now part of the community. And I definitely made new friends who I'll be looking out for next time!

It was my first time to give a session at DrupalCon, speaking on the topic of "Happiness is ... Remote Working". I had spoken at camps and at Dev Days in Dublin, so public speaking was nothing new. But at DrupalCon, surrounded by my peers, talking to people at the top of their game, in a room full of people far cleverer than I am, it was a brand new experience. Andrew put it best: "Level unlocked!"

Although it was my first DrupalCon speaking slot, I had submitted talks for several years before and it made me think about what I had been doing wrong. Firstly, one must be able to prove that one can speak, so camps, Dev Days, Front End, other conferences and speaking opportunities are all good ways to beef up your speaker CV. Evidently I'd managed to convince the program team that I had cobbled together enough experience. Secondly, I read the track descriptions, and submitted sessions that attempted to deliver the things they were asking for. This is something I had neglected in years past: I would decide on a talk, write it up, and only then read the track descriptions (or even just the name!). Obviously, delivering what the program team wants is the most important hurdle. Here's the video of my talk:

Oct 01 2016
Oct 01

Jordan Ryan: Are any of you selling, in particular with Drupal, the power of integrations or integrating with other systems? Kind of like the microservices decoupled…

Chris Stauffer: To a certain extent, but for me it's more about selling them on the power of Drupal as an enterprise platform, then in that initial requirements-gathering process, talking to them about what other ancillary applications and legacy systems they have to tie into. Then once you've identified that, selling them on the fact that you've already tied it in with Salesforce five times, and that's not really that big of a deal anymore because you kind of know how to do the Salesforce thing. I'm just using his example, but I've found that when I'm able to speak to the fact that you've already done that integration four times, then it becomes not necessarily a risk. I remember back in the old days, I would always think that every time I integrated with another system, that was my largest point of risk on the project: I'm going to plug into something else. Now, if someone tells me I'm going to take Drupal, and I'm going to plug that into Salesforce, I go, "Eh. I don't know. It's probably only ten grand, maybe; maybe 15. Depends on how complicated it is." But my blood pressure didn't rise at all. Whereas back in the old days, with all custom systems, since a lot of that integration wasn't already there, and I had to do it from scratch, and there weren't modules that did it, it was way scarier. Or integrating with Facebook Connect. The first time I did that, it scared the $#@! out of me.

Ron Huber: Especially a week later, when they change the API.

Chris Stauffer: When they changed the API and it blew up in my face before it was going to go live. You mean that time?

Ron Huber: And it was your fault, of course!

Chris Stauffer: Of course! That did literally happen. We did a project for Unilever, and we were launching a Facebook app, and it blew up like a week before the demo.

Ron Huber: That's right.

Chris Stauffer: Yeah, it was horrible. But nowadays, a lot of those prewritten integrations, I've already done them so many times, and they're so mature, that it's like, "Oh, Facebook integration. Yeah, whatever, dude. Sure, no problem."

Jordan Ryan: One click. Not quite.

Chris Stauffer: Well, I don't know if I’d go that far. Does that make sense? During that requirements gathering process, one of the things I tell clients a lot about Drupal, too, is when you hear me say, "There's a module for that," you should smile. When you hear me go, "Ooh, I don't know if there's a module for that," that means you should frown because that's what I just did to your budget. When I say there's a module for that, I'm going to get it done, and it's going to get done quick and cheap and efficient, and everything's cool. But the minute I go, "I don't know if there's a module for that," then that means it might take me 100 hours or 200 hours to pull off what you just asked me to do. Whereas, I might have done ten requirements that were all out of the box for the same price as that one requirement, which is going to be custom.

Ron Huber: I hate that term, out of the box. It drives me nuts.

Chris Stauffer: But you get my point.

Ron Huber: I totally get your point, and you live it, et cetera. We consider ourselves an integration company. I feel like the reason people come to us is because we have so much experience in integration. There are other shops that do Drupal. That's not the problem. Frankly, you can get Drupal done in eastern Europe or wherever. It's all the integration and the API, and then of course, the management side of it. It's: what should you be integrating? What should you be doing? Those are the real questions, and why I think you hire a US-based firm, as opposed to somebody that's just building off of your requirements.

Chris Stauffer: Well, I think, Ron, for me, the difference, kind of building exactly on what you're saying, is that the US-based developers have the ability to get thrown a curve ball and still hit it. Whereas, the overseas developers, when you throw them a curve ball, they don't know what a curve ball is or what they're supposed to do with it. They just know, "I was told to do this, and you gave me that, and now I'm completely lost, and I don't know how to handle it."

Ron Huber: I want it to work. I've spent hundreds of thousands of dollars…

Chris Stauffer: Wasted…

Ron Huber: On ever country possible to be able to supplement our team, and it hasn't worked for what we do. I think it works excellent for somebody that's got a three-year roadmap, and you got a product, and you want to ... That works perfectly. But if you don't know what your requirements are and you need it by November 1st, you got to do it here in the US, and you should probably do it pretty local.

Jordan Ryan: Or you need great communication.

Ron Huber: Well, yeah. Just because everybody should hire you, they don't.

Chris Stauffer: Well, that's the thing about systems integration, though, is systems integration never actually goes the way it's supposed to.

Ron Huber: No.

Chris Stauffer: That's what I meant by hitting a curve ball.

Ron Huber: No, you're absolutely right.

Chris Stauffer: That systems integration, it's always like we planned to have that hook up to that, and then you find out that, oh $#@!, it's not going to work like that.

Ron Huber: It's a lot of moving parts.

Chris Stauffer: And uh-oh. And the US guy can hang, and the other guy doesn't.

Ron Huber: Well, it's not their fault, either. We just have a better communication process. We've seen it a little bit more. I don't think this is a US versus offshore conversation. There's just a certain element of what it is that we do best and why we're up here talking about it. I think that as we look at where we're headed and where Drupal is headed, this move into Drupal 8 was really ... Not that it's surprising, from Dries ... a visionary move, because of where we could go, that none of us have even ... Well, not really thought out yet. He's probably ten steps in front of us, right? He knows where we're going to go. We all just need to catch up. This move to a Symfony base and a more object-oriented approach is just going to be able to get us there. I think the face of Drupal's going to change. I think how we maybe sell it or how we pitch it or how we use it is going to change, and there's nothing wrong with that. It's still a powerful tool. It might not be our only tool.

Tom Friedhof: One of the scenarios that comes to my mind when we're talking about all this integration was the example that Dries gave at DrupalCon of ordering food through Alexa, basically asking Alexa if something was on sale at Trader Joe's, and having APIs talk to each other. Then when the gal at the supermarket updated the produce listing and marked it as on sale, that automatically sent a text. It's crazy how the world we live in is no longer just websites, right? Drupal is no longer just a website. It's got to work on your phone.

Ron Huber: Well, yeah. Where else are we going to ... It's going into everything, and that's the other big thing, is we do a lot of medical device work. Then where do you interact with the Internet of things? Where is it that we have to go? Okay, maybe Drupal doesn't actually show up on a device, but it is the aggregator. Then when you're trying to get in the backend and figure out, through your portal, where your customers are, where your employees are, where the new products coming, that's another powerful tool or another version of Drupal that I think is under-promoted and underused, at this point.

Tom Friedhof: But it doesn't have to be just Drupal. It can be anything. One of the things ... We've always been a web shop, but we're building a native app right now. We're building it with React Native. It's amazing how we're tying these services that started off on the web, still using web technologies, to build native experiences on a mobile device, tying it back into a Symfony application. It's just amazing, as developers. This is one of the powers and benefits of Drupal: it can act as that content store or as that integration piece that these different systems can interact with.

Jordan Ryan: I think there's something to be said about how, for a while now, Drupal's community has wanted to get off the island. I think that's led a lot by how agencies have needed to get off the island in order to start integrating all of these different systems. One of the, I think, opportunities Drupal has is that with all these integrated systems, there really isn't a leading technology that you could consider a decision engine. When you're talking about a unified customer experience across many different disparate platforms, Alexa, your iOS apps, there really isn't a central hub. You either have to build one, or you have to start thinking about your digital strategy with Drupal as that hub that's going to make that happen. There's some things that I think will need to happen, as far as Drupal's infrastructure, in order to make that more accessible, talking to all of these different IoT apps. That has some performance implications if you have a lot of traffic. It's no longer just page views. You've got a lot of personalized content. I think that there's going to be opportunity there as Drupal continues to evolve.

Chris Stauffer: In my mind, I think that that evolution towards using Drupal as a central hub ... I actually think that started happening a while back. In Drupal 7, we've been building, for probably a good two or three years, the concept of having a Drupal website, and then having all of your content available via web services that then get ingested into an iOS app. We haven't done a React Native one yet, but we have done a couple systems where we used like Swift on the front-end and just normal Android development, where we were basically hitting a lot of those Drupal web services. I think the movement towards 8 is making those services more of the focus, but I think that that's kind of been there for a while now. I think that corporate executives are just starting now to understand, as you put it earlier, that it actually is a central hub, that I have a content management system that's going to manage my content, but everything else is just a display medium, whether it is a mobile app, Facebook ... You know what I mean? There's a million different ways of consuming content.
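To make the "hitting those Drupal web services" idea concrete, here is a minimal sketch of what such a client might look like, assuming a Drupal 8 site with the core RESTful Web Services module enabled and anonymous GET access to nodes; the base URL, node ID, and field shapes below are illustrative assumptions, not details from the panel.

    // Minimal sketch (TypeScript): fetch a node from a Drupal 8 site over core REST.
    // Assumes /node/{nid}?_format=json is enabled; the field shapes are hypothetical.
    interface DrupalNode {
      title: Array<{ value: string }>;
      body: Array<{ value: string; format: string }>;
    }

    async function fetchArticle(baseUrl: string, nid: number): Promise<DrupalNode> {
      const response = await fetch(`${baseUrl}/node/${nid}?_format=json`);
      if (!response.ok) {
        throw new Error(`Drupal returned HTTP ${response.status} for node ${nid}`);
      }
      return (await response.json()) as DrupalNode;
    }

    // The same call works from a browser app or from React Native.
    fetchArticle('https://example.com', 1)
      .then((node) => console.log(node.title[0].value))
      .catch((err) => console.error(err));

The thinness of this client is the panel's point: the CMS owns the content model, and each display medium, web, mobile app, or otherwise, just consumes the same JSON.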

Jordan Ryan: It's the octopus controller controlling all the knobs.

Chris Stauffer: Right. Look at the Hollywood Reporter. The Hollywood Reporter has millions of content objects, but you can look at it through normal web; you can look at it through mobile web; you can look at it through an Android device. You can look at it through anything. If there's a device, I'm sure the Hollywood Reporter's got a new way of looking at it that way. You kind of see what I mean? I think that-

Jordan Ryan: Oculus Rift?

Chris Stauffer: I don't think we have that one yet.
