Jul 07 2015
Jul 07

Get ready to rumble! Watch how Drop Guard won against Drupalgeddon on 15.10.2014 at 6:00 PM in this live webinar! We're going to run a live demo re-enacting the whole epic match, and you'll learn about the techniques and strategies that Drop Guard uses to sucker-punch any future Drupal security threats. Don't expect a second round: it'll be a technical KO within minutes!

This free, 45-minute webinar takes place via Google Hangout on 27.07.2015 at 4 PM GMT+2. You'll learn the following:

  1. How to set up an automated workflow to keep your Drupal site updated and secure
  2. Three simple prerequisites for starting with Drop Guard
  3. How Drop Guard integrates with your individual development and deployment workflows

We'll use a Drupalgeddon-infected Drupal installation including several other modules with security issues. This exclusive live demo shows the current status of Drop Guard. All attendees also get free Drop Guard access until the end of September 2015.

Sign up for free!

Jul 06 2015
Jul 06

Looking at my blog posts, you would be forgiven for thinking that I work with Drupal all day but actually I don't. I've used it for odd private and business projects over the years, built a startup (1) with it and run my sailing club's website on it.

Actually, my working day is spent on a mix of legacy and new Symfony 2 applications. In years gone by that included applications written in frameworks such as Zend Framework, SugarCRM and plenty of home-grown ones from the early PHP days.

Sure, there are things I don’t like about Drupal 7, such as configuration management, deployment, the lack of modern OOP and practices, and the fairly blunt caching mechanism (2), but I think it is precisely because I have seen so much legacy code and so much internally written software that I still like Drupal.

Starting from the assumption that you aren’t going to touch your main application which could be running on another framework or technology, I’m going to write about using Drupal in your organization in ways that you might not have considered.

We’re just not going to switch to Drupal!

I think the reality regarding PHP applications is that most companies use their own legacy frameworks and/or something like Zend Framework or Symfony, or they are on a totally different stack, and they certainly aren’t about to switch to something else any time soon. Which is fair enough.

However there are ways to get value from Drupal with low effort and risk alongside - and in some cases replacing - existing applications and other technologies.

1. Drupal for building intranet tools

In my current company we have a myriad of intranet tools built on different technologies, mostly home-grown legacy PHP apps, but also ones built on other frameworks and some in .NET. Nothing is standard, each one is special and each demands developer ramp-up time. One is even stuck on a dead framework on an early PHP version.

Each generation of developers seems to have wanted to try out a new framework, a different stack, or just to write something new. I think that is fairly common in old internet companies.

Drupal can be pretty good for building intranet tools. It has some nice APIs, you certainly aren’t limited to the Drupal data model, you get a lot for free (no custom coding needed) plus there are many good modules out there that expose APIs as well. It is a bit of a revelation to define your own DB table structures and then manipulate or expose them using standard Drupal modules like Views.
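As a sketch of that "revelation": in Drupal 7 you can expose a custom table to Views by implementing hook_views_data(). The module name, table and columns below are assumptions purely for illustration.

```php
/**
 * Implements hook_views_data().
 *
 * Illustrative sketch only: "mymodule" and the orders table are hypothetical.
 */
function mymodule_views_data() {
  // Declare the custom table as a base table Views can query directly.
  $data['mymodule_orders']['table']['group'] = t('Orders');
  $data['mymodule_orders']['table']['base'] = array(
    'field' => 'order_id',
    'title' => t('Orders'),
    'help' => t('Orders stored in a custom table.'),
  );

  // Expose individual columns with standard Views handlers.
  $data['mymodule_orders']['order_id'] = array(
    'title' => t('Order ID'),
    'help' => t('Primary key.'),
    'field' => array('handler' => 'views_handler_field_numeric'),
    'sort' => array('handler' => 'views_handler_sort'),
    'filter' => array('handler' => 'views_handler_filter_numeric'),
  );
  $data['mymodule_orders']['status'] = array(
    'title' => t('Status'),
    'help' => t('Order status.'),
    'field' => array('handler' => 'views_handler_field'),
    'filter' => array('handler' => 'views_handler_filter_string'),
  );

  return $data;
}
```

With something like this in place, site builders can create listings, filters and exports over the custom table through the Views UI, no further custom code required.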

Also, if you have good developers to set things up correctly initially, management of the application can then be moved to non-technical staff. For example, it won’t then take a developer to alter a permission, set up a new data view, trigger an alert, turn off a functionality, set up a new workflow or add a new language and so on.

This is one of the things I really like about Drupal: done properly, it pushes more control to non-technical staff. Developers are expensive. And just using Drupal on an intranet simplifies things a lot in terms of deployment, maintenance and any custom coding / theming.

I think this “myriad of unique applications” scenario should just end. It’s too expensive, complicated and risky; companies should choose standard solutions and stick with them. A number of open source PHP projects are now very established, not going away and 100% enterprise ready. In the PHP world, Symfony 2 with its connected ecosystem is one; Drupal is another.

Take a look at intranet distributions like Open Atrium, and also the other distributions on the Acquia downloads page.

2. Drupal as a data source for other applications

In 2010, at one company we had a large Zend Framework application with a Java-powered auction system in the back end, and we wanted to add multilingual news and help sections. I chose Drupal as the CMS because it had great multilingual functionality, rock-solid content management and the possibility to build nice workflows for editors and managers.

There was, however, just no way that we were going to build a new Drupal application for news and FAQs, theme it, deploy it and manage it in production alongside the existing frontend.

The solution was simple: use Drupal as an intranet CMS only and pull the data where needed. I set up a multilingual content management application in Drupal to enable staff to add news and help pages, and we pulled that data into the front-facing Zend Framework application through the caching layer via some simple database views.
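For the curious, such a database view can be very small. The sketch below uses stock Drupal 7 table names (the 2010 setup predates them); "news" is an assumed content type and "app_news" a hypothetical view name.

```sql
-- Hypothetical MySQL view exposing published news nodes to another app.
CREATE VIEW app_news AS
SELECT n.nid, n.title, n.language, n.created, b.body_value AS body
FROM node n
JOIN field_data_body b
  ON b.entity_type = 'node' AND b.entity_id = n.nid
WHERE n.type = 'news' AND n.status = 1;
```

The consuming application then simply reads from app_news like any other table, with no knowledge of Drupal's schema.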

drupal_in_org.png

Note: this was how it looked in 2010, nowadays you would most likely use the services module to surface the data.

The Drupal application used only out-of-the-box modules, needed no theming and had no special deployment; it was simple, with a short development time. Everyone was happy.

I’ve seen plenty of home built CRUD applications for all sorts of things like FAQs, news and help pages, community announcements, landing pages, email templates, translation strings, configuration management. Why write and maintain that stuff yourself? Just use a CMS like Drupal.

On translation strings specifically: it would be very easy to build a Drupal application to manage translation strings consumed by other applications. Build the translations app in Drupal, then write a script to export the strings to whatever format the external application needs, e.g. see here for Symfony translations.
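Such an export script could be run via drush php-script; a sketch, where the "translation_string" content type and the use of title/body as key/value are assumptions.

```php
// Hypothetical drush php-script: dump "translation string" nodes into a
// Symfony-style messages YAML file. Content type and field names are
// assumptions for illustration; escaping here is deliberately crude.
$query = new EntityFieldQuery();
$result = $query->entityCondition('entity_type', 'node')
  ->entityCondition('bundle', 'translation_string')
  ->propertyCondition('status', 1)
  ->execute();

$lines = array();
if (!empty($result['node'])) {
  foreach (node_load_multiple(array_keys($result['node'])) as $node) {
    // The node title is the message key, the body holds the translated text.
    $body = field_get_items('node', $node, 'body');
    $lines[] = $node->title . ': "' . addslashes($body[0]['value']) . '"';
  }
}

file_put_contents('messages.en.yml', implode("\n", $lines) . "\n");
```

A cron job or deployment hook could then push the generated file to the consuming application.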

And as regards exposing Drupal data: Drupal does have an XML-RPC API and RSS, but check out the Services module. See the links below.

3. Drupal as a frontend for manipulating data on other applications

Let’s say you have a public site where users can add content. Rather than add more code to your public facing application, you could make a management frontend for that data in Drupal. Drupal has nice APIs for forms and data access, but you could even hook that data into Drupal APIs more fully, allowing you to use native drupal modules to manipulate the data.
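One low-effort way for Drupal 7 to reach that external data is a second database connection. A sketch; the "legacy_app" connection key and the table name are assumptions.

```php
// In settings.php, a second connection is declared alongside the default
// one, e.g. $databases['legacy_app']['default'] = array(...);

// In a custom module, switch to it, query, and switch back.
db_set_active('legacy_app');
$rows = db_query('SELECT id, title FROM user_content WHERE flagged = 1')->fetchAll();
db_set_active();  // Always restore the default connection.
```

From there you can render the rows with Drupal's form and theme APIs, or go further and wrap them in proper entities so native modules can work with them.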

4. Drupal for microsites

Drupal can be perfect for microsites such as News and FAQ sections or forums that work alongside your main application(s). Theming isn’t that difficult and if you don’t need to authenticate users, this is a very easy option.

5. Drupal for your corporate website

The corporate website is usually separate from the main application(s) and a good use case for Drupal. A corp site is usually content heavy, needs frequent updates and might need publishing workflow.

6. Drupal for rapid prototyping

Rocket Internet, the giant European incubator, gets new minimum marketable websites out in under 100 days. I think this is a very good plan, and Drupal is good for the rapid prototyping of ideas even if the data comes from other sources like search services.

Caveat

If you have to make long-term decisions now, then with Drupal 8 just around the corner, the choice between Drupal 7 and 8 needs careful research.

Further reading, listening, watching

(1) I ceased work on it in 2012
(2) Drupal 8, which is built using Symfony 2 components, fixes these issues

Jul 06 2015
Jul 06

When I kick off a client project, two of my first questions are:

  • What are your project goals?
  • What are your project tactics?

Most of the time, my clients define their goals as tactics. For example, “I want a beautiful site with a great user experience.” While this is a useful thing to identify, it’s not a goal. It’s a tactic.

What purpose does a beautiful site serve? Does beauty drive revenue? Why improve the user experience? Does UX cultivate positive emotions towards your business?

Asking the “why” behind the tactics can help reveal the true goals of the redesign. It’s easy to define the purpose of a project as “I need my site to look better” because it’s the most obvious thing to be improved. However, there is a missed opportunity in looking only skin deep.

Defining the right goals for a project keeps things on track. It helps to rein in ideas that risk taking the project off course. It also frees up the team to imagine numerous tactics which may achieve the goal. For instance, if a project goal is to “build brand trust,” many tactics beyond making the site beautiful may work to achieve this goal. For example:

  • Conducting user testing to identify and fix pain points
  • Reworking the content
  • Changing the tone of the copy to sound more friendly

Notice that some of these tactics transcend the web. "Changing the tone of the copy" may affect print materials, social media or other communication channels.

What this begins to illustrate is the need for the business strategy and goals to be in place before embarking on a website redesign. The goals should be shared across teams. Each team may then use their localized expertise to achieve the goals, leveraging whatever tactics they deem necessary. Tactics may evolve, but goals remain consistent.

Some of the most successful organizations I've seen maintain a solid vision and strategy paired with evolving tactics to maintain interest, cultivate new customers and grow their brand.

So, before starting your next web project, define your strategic business goals and share them across your teams.

Here are some helpful resources:

Jul 06 2015
Jul 06

If you are using a site deployment module, and running simpletests against it in your continuous integration server using drush test-run, you might come across Simpletest output like this in your Jenkins console output:

Starting test MyModuleTestCase.                                         [ok]
...
WD rules: Unable to get variable event_of_proposal, it is not           [error]
defined.
...
MyModuleTestCase 9 passes, 0 fails, 0 exceptions, and 7 debug messages  [ok]
No leftover tables to remove.                                           [status]
No temporary directories to remove.                                     [status]
Removed 1 test result.                                                  [status]
 Group  Class  Name

In the above example, the Rules module is complaining that it is misconfigured. You will probably be able to confirm this by installing a local version of your site along with rules_ui and visiting the rules admin page.

Here, it is Rules which is logging a watchdog error, but it could be any module.

However, this will not necessarily cause your test to fail (see 0 fails), and more importantly, your continuous integration script will not fail either.

At first you might find it strange that your console output shows [error], but that your script is still passing. Your script probably looks something like this:

set -e
drush test-run MyModuleTestCase

So: drush test-run outputs an [error] message, but is still exiting with the normal exit code of 0. How can that be?

Well, your test is doing exactly what you are asking of it: it is asserting that certain conditions are met, but you have never explicitly asked it to fail when a watchdog error is logged within the temporary testing environment. This is normal: consider a case where you want to assert that a given piece of code logs an error. In your test, you will create the necessary conditions for the error to be logged, and then you will assert that the error has in fact been logged. In this case your test will fail if the error has not been logged, but will succeed if the error has been logged. This is why the test script should not fail every time there is an error.
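The exit-code behaviour can be seen with a plain shell sketch, no drush needed: fake_test_run below is an invented stand-in for drush test-run that prints an [error] line yet exits 0, and set -e only reacts to non-zero exit codes.

```shell
#!/bin/sh
set -e

# Stand-in for `drush test-run`: prints an [error] line but exits 0,
# just like a passing test run that logged a watchdog error.
fake_test_run() {
  echo 'WD rules: Unable to get variable event_of_proposal [error]'
  return 0
}

fake_test_run
# We get here: set -e did not trip, because the exit code was 0.
echo "script finished, last exit code: $?"
```

The word "[error]" in the output is just text as far as the shell is concerned; only the exit code matters to set -e and to your CI server.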

But in our above example, we have no way of knowing when such an error is introduced; to ensure more robust testing, let's add a teardown function to our test which asserts that no errors were logged during any of our tests. To make sure that the tests don't fail when errors are expected, we will allow for that as well.

Add the following code to your Simpletest (if you have several tests, consider creating a base test for all of them to avoid reusing code):

/**
 * {@inheritdoc}
 */
function tearDown() {
  // See http://dcycleproject.org/blog/96/catching-watchdog-errors-your-simpletests
  $num_errors = $this->getNumWatchdogEntries(WATCHDOG_ERROR);
  $expected_errors = isset($this->expected_errors) ? $this->expected_errors : 0;
  $this->assertTrue($num_errors == $expected_errors, 'Expected ' . $expected_errors . ' watchdog errors and got ' . $num_errors . '.');

  parent::tearDown();
}

/**
 * Get the number of watchdog entries for a given severity or worse
 *
 * See http://dcycleproject.org/blog/96/catching-watchdog-errors-your-simpletests
 *
 * @param $severity = WATCHDOG_ERROR
 *   Severity codes are listed at https://api.drupal.org/api/drupal/includes%21bootstrap.inc/group/logging_severity_levels/7
 *   Lower numbers are worse severity messages, for example an emergency is 0, and an
 *   error is 3.
 *   Specify a threshold here, for example for the default WATCHDOG_ERROR, this function
 *   will return the number of watchdog entries which are 0, 1, 2, or 3.
 *
 * @return
 *   The number of watchdog errors logged during this test.
 */
function getNumWatchdogEntries($severity = WATCHDOG_ERROR) {
  $results = db_select('watchdog')
      ->fields(NULL, array('wid'))
      ->condition('severity', $severity, '<=')
      ->execute()
      ->fetchAll();
  return count($results);
}

Now, all your tests which include this code will fail if any watchdog errors were logged during the test. If you are actually expecting errors, then at some point in your test you can use this code:

$this->expected_errors = 1; // for example
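For instance, a test method that deliberately triggers an error could look like this; a sketch, where the 'demo' watchdog type and the message are arbitrary.

```php
/**
 * Sketch: deliberately log one error and declare it as expected.
 */
public function testErrorIsLogged() {
  // Log an error with Drupal 7's watchdog().
  watchdog('demo', 'Something went wrong.', array(), WATCHDOG_ERROR);

  // Tell tearDown() to expect exactly one watchdog error.
  $this->expected_errors = 1;
}
```

Any mismatch, whether more or fewer errors than declared, will now fail the test and, through the non-zero exit code, your CI script.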

Jul 06 2015
Jul 06

Data-driven content management

Although creating, searching for, updating, and publishing content in Drupal is a snap, understanding and making decisions based on that content can be challenging. Questions like, "What are the most viewed, untranslated case studies?" or "Does an accelerated blogging cadence increase page views?" are difficult or impossible to answer within Drupal alone.

Though the data that could help answer those questions may live in Drupal, content editors or administrators are unlikely to find answers on their own because the information is made available, if at all, through complex UIs only understood by Drupal site builders, developers, or power users.

This problem, where those most knowledgeable about some dataset can only ask questions of that data by proxy through a specialist, extends far beyond just content management in Drupal.

Tableau (my employer) takes pride in helping solve this type of problem for organizations the world over, and because Drupal runs on pervasive database technologies like MySQL and PostgreSQL, we also happen to work well with Drupal: just add database credentials, connect, and go.

Connecting Tableau to a Drupal MySQL database

Visual analysis

Container cloud complications

If you run Drupal on Pantheon, you may be familiar with (and likely benefit from) their container-based architecture. The efficiency and agility that containers provide are what allow Pantheon to offer development and test environments at scale. Containers also enable Pantheon to offer you highly available (distributed) and horizontally elastic applications by default.

Although these features are a Drupal developer's dream, the underlying technology complicates things for data-driven content managers. When Pantheon updates servers, migrates endpoints, or does other maintenance work transparent to end-users, database connection details change, breaking Tableau's connection to the Drupal database.

There are a few options for working around this problem, each with its drawbacks:

  • Send out updated credentials whenever they break: just instruct every Tableau user to e-mail you when the dashboard they built stops updating; you can find the new credentials and send them back. Rinse and repeat for every user with access and every site: welcome to your new full-time job.
  • Give content editors access to the Pantheon dashboard: train them to navigate to the specific environment you want them to connect to, suss out the MySQL details, and be sure not to hit that "delete live site" button you just gave them access to...
  • Use Pantheon's CLI to replicate the DB locally on a schedule: on the whole, not a bad option, but what happens when the DB server goes offline or your replication script starts failing? Didn't you go with Pantheon to get out of the infrastructure management and monitoring game in the first place?

Introducing the Pantheon Switchboard

Pantheon switchboard

We know the struggle well: we are a data-driven marketing organization with a large Tableau installation base, and we happen to host many Drupal sites on Pantheon.

To that end, we've developed and open sourced the Pantheon Switchboard, a Docker image that mashes up the Pantheon command line interface and MySQL proxy, allowing Tableau users (both those connecting ad-hoc using the desktop client as well as those scheduling extracts with our cloud or on-premise servers) to reliably and seamlessly connect to MySQL databases hosted on Pantheon, despite those periodic database connection detail changes.

The Switchboard's container approach attempts to strike the right balance between infrastructure requirements and the type of self-service simplicity that Tableau users expect.

Complete details on installation and usage are available on the project's README.

Deploying to Google Compute Engine

For production use, we're enamored with the simplicity of deploying containers on GCE using their Container-Optimized VM images; feel free to use this as a recipe to get started:

# switchboard-manifest.yml
version: v1
kind: Pod
spec:
  containers:
    - name: my-drupal-site-proxy
      image: tableaumkt/pantheon-mysql-proxy
      imagePullPolicy: Always
      ports:
        - name: mysql
          containerPort: 3306
          hostPort: 11337
          protocol: TCP
      env:
        - name: PROXY_DB_UN
          value: un_to_connect_to_drupal_proxy
        - name: PROXY_DB_PW
          value: pw_to_connect_to_proxy_here
        - name: PANTHEON_SITE
          value: my-drupal-site
        - name: PANTHEON_ENV
          value: test
        - name: PANTHEON_EMAIL
          value: [email protected]
        - name: PANTHEON_PASS
          value: password_for_email_here
    - name: my-wp-site-proxy
      image: tableaumkt/pantheon-mysql-proxy
      imagePullPolicy: Always
      ports:
        - name: mysql
          containerPort: 3306
          hostPort: 11338
          protocol: TCP
      env:
        - name: PROXY_DB_UN
          value: un_to_connect_to_wp_proxy_here
        - name: PROXY_DB_PW
          value: pw_to_connect_to_proxy_here
        - name: PANTHEON_SITE
          value: my-wp-site
        - name: PANTHEON_ENV
          value: dev
        - name: PANTHEON_EMAIL
          value: [email protected]
        - name: PANTHEON_PASS
          value: password_for_email_here
    # - Additional containers/proxies here.
  restartPolicy: Always
  dnsPolicy: Default

Then spin up a VM in Google's Cloud with their CLI, using the above manifest as a template:

gcloud compute instances create pantheon-switchboard-test \
    --image container-vm \
    --metadata-from-file google-container-manifest=switchboard-manifest.yml \
    --zone us-central1-a \
    --machine-type f1-micro

After which you should:

  1. Provision a permanent IP and assign it to the VM (or use the VM's ephemeral IP for testing)
  2. Set up network rules to only allow connections from a specific range of IPs (such as your corporate network or, if you use Tableau Online, its IP), and only to the ports you specified in your manifest, and optionally,
  3. Route a domain to the VM's IP.

Once wired up, you should be able to connect to your Pantheon databases using the PROXY_DB_UN, PROXY_DB_PW, and host port specified in your manifest, along with the IP (or domain) you configured in your Google Cloud console.
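A quick way to verify the wiring from a workstation is the stock mysql client. Everything below is a placeholder: the IP is a documentation address, and the credentials echo the sample values from the manifest above.

```shell
# Substitute your own IP/domain, host port and proxy credentials.
mysql --host=203.0.113.10 --port=11337 \
      --user=un_to_connect_to_drupal_proxy \
      --password=pw_to_connect_to_proxy_here \
      --execute="SELECT 1;"
```

If that returns a result, Tableau should be able to connect with the same details.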

Get Started

Jul 06 2015
Jul 06
Mark Pavlitski, Technical Director, Jul 6th 2015

After nearly a year in the planning, Bristol's inaugural DrupalCamp has finally come and gone!

There have been murmurs about a Bristol camp or event for a number of years, and it's so rewarding to see the whole South West Drupal community coming together to make it a reality.

A few of my personal highlights were: Paul Johnson's Business day GOSH showcase; Matt Jukes recounting his numerous and often hilarious escapades trying to bring modern digital tools and techniques to ONS; and a very thought-provoking accessibility talk and demonstration by Léonie Watson.

There were also two very polished case studies to round off the event, given by DrupalCamp Bristol committee chair Rick Donohoe and Ringo Moss.

We at Microserve have really relished the opportunity to help make DC Bristol a reality, and are very much looking forward to starting work on DrupalCamp Bristol 2016!

We will be following up with a detailed perspective on DrupalCamp Bristol from an organiser's point of view, as well as a camp wash-up from the whole committee.

Jul 06 2015
Jul 06

In this article, we are going to look at building a multistep form in Drupal 8. For brevity, the form will have only two steps in the shape of two completely separate forms. To persist values across these steps, we will use functionality provided by Drupal’s core for storing temporary and private data across multiple requests.

Drupal 8 logo

In Drupal 7, a similar approach can be achieved using the cTools object cache. Alternatively, there is the option of persisting data through the $form_state array as illustrated in this tutorial.

The code we write in this article can be found in this repository alongside much of the Drupal 8 work we’ve been doing so far. We will be dealing with forms quite a lot so I do recommend checking out one of the previous articles on Drupal 8 in which we talk about forms.

The plan

As I mentioned above, our multistep form will consist of two independent forms with two simple elements each. Users will be able to fill in the first one and move to the second form where they can either go back to the previous step or fill it in and press submit. While navigating between the different steps, the previously submitted values are stored and used to pre-populate the form fields. If the last form is submitted, however, the data gets processed (not covered in this article) and cleared from the temporary storage.

Technically, both of these forms will inherit common functionality from an abstract form class we will call MultistepFormBase. This class will be in charge of injecting the necessary dependencies, scaffolding the form, processing the end result and anything else that is needed and is common to both.

We will group all the form classes together and place them inside a new folder called Multistep located within the Form plugin directory of our demo module (next to the old DemoForm). This is purely for having a clean structure and being able to quickly tell which forms are part of our multistep form process.

The code

We will start with the form base class. I will explain what is going on here after we see the code.

MultistepFormBase.php:

/**
 * @file
 * Contains \Drupal\demo\Form\Multistep\MultistepFormBase.
 */

namespace Drupal\demo\Form\Multistep;

use Drupal\Core\Form\FormBase;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Session\AccountInterface;
use Drupal\Core\Session\SessionManagerInterface;
use Drupal\user\PrivateTempStoreFactory;
use Symfony\Component\DependencyInjection\ContainerInterface;

abstract class MultistepFormBase extends FormBase {

  /**
   * @var \Drupal\user\PrivateTempStoreFactory
   */
  protected $tempStoreFactory;

  /**
   * @var \Drupal\Core\Session\SessionManagerInterface
   */
  private $sessionManager;

  /**
   * @var \Drupal\Core\Session\AccountInterface
   */
  private $currentUser;

  /**
   * @var \Drupal\user\PrivateTempStore
   */
  protected $store;

  /**
   * Constructs a \Drupal\demo\Form\Multistep\MultistepFormBase.
   *
   * @param \Drupal\user\PrivateTempStoreFactory $temp_store_factory
   * @param \Drupal\Core\Session\SessionManagerInterface $session_manager
   * @param \Drupal\Core\Session\AccountInterface $current_user
   */
  public function __construct(PrivateTempStoreFactory $temp_store_factory, SessionManagerInterface $session_manager, AccountInterface $current_user) {
    $this->tempStoreFactory = $temp_store_factory;
    $this->sessionManager = $session_manager;
    $this->currentUser = $current_user;

    $this->store = $this->tempStoreFactory->get('multistep_data');
  }

  /**
   * {@inheritdoc}
   */
  public static function create(ContainerInterface $container) {
    return new static(
      $container->get('user.private_tempstore'),
      $container->get('session_manager'),
      $container->get('current_user')
    );
  }

  /**
   * {@inheritdoc}.
   */
  public function buildForm(array $form, FormStateInterface $form_state) {
    // Start a manual session for anonymous users.
    if ($this->currentUser->isAnonymous() && !isset($_SESSION['multistep_form_holds_session'])) {
      $_SESSION['multistep_form_holds_session'] = true;
      $this->sessionManager->start();
    }

    $form = array();
    $form['actions']['#type'] = 'actions';
    $form['actions']['submit'] = array(
      '#type' => 'submit',
      '#value' => $this->t('Submit'),
      '#button_type' => 'primary',
      '#weight' => 10,
    );

    return $form;
  }

  /**
   * Saves the data from the multistep form.
   */
  protected function saveData() {
    // Logic for saving data goes here...
    $this->deleteStore();
    drupal_set_message($this->t('The form has been saved.'));
  }

  /**
   * Helper method that removes all the keys from the store collection used for
   * the multistep form.
   */
  protected function deleteStore() {
    $keys = ['name', 'email', 'age', 'location'];
    foreach ($keys as $key) {
      $this->store->delete($key);
    }
  }
}

Our abstract form class extends from the default Drupal FormBase class so that we can use some of the functionality made available by it and the traits it uses. We are using dependency injection to inject some of the needed services:

  • PrivateTempStoreFactory gives us a temporary store that is private to the current user (PrivateTempStore). We will keep all the submitted data from the form steps in this store. In the constructor, we are also immediately saving the store attribute which contains a reference to the multistep_data key/value collection we will use for this process. The get() method on the factory either creates the store if it doesn’t exist or retrieves it from the storage.
  • The SessionManager allows us to start a session for anonymous users.
  • The CurrentUser allows us to check if the current user is anonymous.

Inside the buildForm() method we do two main things. First, we start a session for anonymous users if one doesn't already exist, because without a session we cannot pass temporary data across multiple requests; we use the session manager for this. Second, we create a base submit action button that will be present on all the implementing forms.

The saveData() method is going to be called from one or more of the implementing forms and is responsible for persisting the data from the temporary storage once the multistep process is completed. We won't go into the details of this implementation because it depends entirely on your use case (e.g. you can create a configuration entity from each submission). We do, however, handle the removal of all the items in the store once the data has been persisted. Keep in mind, though, that this kind of processing logic should not live in the base form class; as usual, defer to a dedicated service class or a similar approach.

Now it’s time for the actual forms that will represent steps in the process. We start with the first class inside a file called MultistepOneForm.php:

/**
 * @file
 * Contains \Drupal\demo\Form\Multistep\MultistepOneForm.
 */

namespace Drupal\demo\Form\Multistep;

use Drupal\Core\Form\FormStateInterface;

class MultistepOneForm extends MultistepFormBase {

  /**
   * {@inheritdoc}.
   */
  public function getFormId() {
    return 'multistep_form_one';
  }

  /**
   * {@inheritdoc}.
   */
  public function buildForm(array $form, FormStateInterface $form_state) {

    $form = parent::buildForm($form, $form_state);

    $form['name'] = array(
      '#type' => 'textfield',
      '#title' => $this->t('Your name'),
      '#default_value' => $this->store->get('name') ? $this->store->get('name') : '',
    );

    $form['email'] = array(
      '#type' => 'email',
      '#title' => $this->t('Your email address'),
      '#default_value' => $this->store->get('email') ? $this->store->get('email') : '',
    );

    $form['actions']['submit']['#value'] = $this->t('Next');
    return $form;
  }

  /**
   * {@inheritdoc}
   */
  public function submitForm(array &$form, FormStateInterface $form_state) {
    $this->store->set('email', $form_state->getValue('email'));
    $this->store->set('name', $form_state->getValue('name'));
    $form_state->setRedirect('demo.multistep_two');
  }
}

This form will look something like this:

Drupal 8 Multistep Forms 1

In the buildForm() method we are defining our two dummy form elements. Do notice that we are retrieving the existing form definition from the parent class first. The default values for these fields are set as the values found in the store for those keys (so that users can see the values they filled in at this step if they come back to it). Finally, we are changing the value of the action button to Next (to indicate that this form is not the final one).

In the submitForm() method we save the submitted values to the store and then redirect to the second form (which can be found at the route demo.multistep_two). Keep in mind that we are not doing any sort of validation here to keep the code light. But most use cases will call for some input validation.

Since we’ve touched upon the issue of routes, let’s update the route file in our demo module and create two new routes for our forms:

demo.routing.yml:

demo.multistep_one:
  path: '/demo/multistep-one'
  defaults:
    _form: '\Drupal\demo\Form\Multistep\MultistepOneForm'
    _title: 'First form'
  requirements:
    _permission: 'access content'
demo.multistep_two:
  path: '/demo/multistep-two'
  defaults:
    _form: '\Drupal\demo\Form\Multistep\MultistepTwoForm'
    _title: 'Second form'
  requirements:
    _permission: 'access content'

For more information about what is going on in this file you can read one of the previous Drupal 8 articles which explain routes as well.

Finally, we can create our second form (inside a file called MultistepTwoForm):

/**
 * @file
 * Contains \Drupal\demo\Form\Multistep\MultistepTwoForm.
 */

namespace Drupal\demo\Form\Multistep;

use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Url;

class MultistepTwoForm extends MultistepFormBase {

  /**
   * {@inheritdoc}
   */
  public function getFormId() {
    return 'multistep_form_two';
  }

  /**
   * {@inheritdoc}
   */
  public function buildForm(array $form, FormStateInterface $form_state) {

    $form = parent::buildForm($form, $form_state);

    $form['age'] = array(
      '#type' => 'textfield',
      '#title' => $this->t('Your age'),
      '#default_value' => $this->store->get('age') ? $this->store->get('age') : '',
    );

    $form['location'] = array(
      '#type' => 'textfield',
      '#title' => $this->t('Your location'),
      '#default_value' => $this->store->get('location') ? $this->store->get('location') : '',
    );

    $form['actions']['previous'] = array(
      '#type' => 'link',
      '#title' => $this->t('Previous'),
      '#attributes' => array(
        'class' => array('button'),
      ),
      '#weight' => 0,
      '#url' => Url::fromRoute('demo.multistep_one'),
    );

    return $form;
  }

  /**
   * {@inheritdoc}
   */
  public function submitForm(array &$form, FormStateInterface $form_state) {
    $this->store->set('age', $form_state->getValue('age'));
    $this->store->set('location', $form_state->getValue('location'));

    // Save the data
    parent::saveData();
    $form_state->setRedirect('some_route');
  }
}

This one will look like this, again very simple:

Drupal 8 Multistep Forms 1

Again, we are extending from our base class like we did with the first form. This time, however, we have different form elements and we are adding a new action link next to the submit button. This will allow users to navigate back to the first step of the form process.

Inside the submitForm() method we again save the values to the store and defer to the parent class to persist this data in any way it sees fit. We then redirect to whatever page we want (the route we use here is a dummy one).

And that is pretty much it. We should now have a working multistep form that uses the PrivateTempStore to keep data available across multiple requests. If we need more steps, all we have to do is create some more forms, add them in between the existing ones and make a couple of adjustments. Of course you can make this much more flexible by not hardcoding the route names in the links and redirects, but I leave that part up to you.

Conclusion

In this article, we looked at a simple way to create a multistep form in Drupal 8. You can, of course, build on this approach and create highly complex and dynamic processes that involve not just forms but also other kinds of steps that leverage cross-request data. Thus, the purpose of this article has been as much about multistep forms as it has been about illustrating the power of the PrivateTempStore. And if you, like me, think that the cTools object cache is very powerful in Drupal 7, you’ll be very invested in its Drupal 8 counterpart.
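To make the store's role concrete, here is a minimal sketch of using the PrivateTempStore directly, outside of a form class. It assumes the Drupal 8 factory service name of the time ('user.private_tempstore'); the 'demo' collection name is just an example:

```php
// Sketch: using the PrivateTempStore directly (collection name assumed).
$store = \Drupal::service('user.private_tempstore')->get('demo');

// Values persist across requests for the current user.
$store->set('email', 'jane@example.com');
$email = $store->get('email');

// Clean up once the multistep process is finished.
$store->delete('email');
```

This is the same API our MultistepFormBase wraps via its injected factory.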

Jul 06 2015
Jul 06

Modern websites talk. They talk through great content to the visitors who come to read them, but they also talk through APIs (Application Programming Interfaces) to other systems. Does yours? Should it? Could integrating your site with other systems bring you greater return on investment?

Your website can potentially talk to a plethora of other systems, such as your CRM (and here's why it should), your social media presence, your finance applications, other websites, forums, 3rd party apps and mobile apps... the list goes on.

Chatty Systems

You might be like any of these examples:

  • John captures leads via forms on his website, and sends the results through to Salesforce where he's got sales automation business rules which fuel his whole sales pipeline.
  • Mary has both follow and share buttons for her social media presences. Her embedded social media timelines draw fresh content to her site and she can update her social profiles with content directly from her site.
  • Joe has a webshop and a separate finance application for doing the accounts. He has a feed which exports data from his shop and goes straight into his accounting package, saving tons of time.
  • Síle wants her content to appear in multiple places, so she has exposed it via an API so that mobile applications and newsreaders can consume it. Now her adoring public can consume her content any way they choose, increasing her readership.
  • Bob has an active forum community. Users of his website are logged in automatically to the forum, and can pass from site to forum seamlessly. Bob's real goal, however, is tighter integration, so he is considering Drupal's Harmony as a pure Drupal solution for his forum.
  • Judy wants a richer content experience, so she tags her content with taxonomy terms powered by Wikidata - a database of well-known people, places and things. This allows her not only to control her vocabulary without editorial overhead, but also to access rich data within the context of her own content, enhancing it.

Joining The Dots

There are many ways to connect sites to other applications. Some sites, e.g. social ones, often supply ready-made embed code which you simply copy and paste onto a page. Simple, yet effective, this is very common. You can get tighter integration by employing more specialist modules, e.g. Drupal's Twitter module, which 'provides API integration with the Twitter microblogging service'.

Some other third-party applications provide a public API you can talk to, e.g. Trello, FreeAgent or Wikidata.

APIs have often been accessible via a protocol known as SOAP, but it has become more common for them to be offered as RESTful interfaces, which simply attach functionality to normal URLs, like a web address.
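For instance, consuming a RESTful API from a Drupal 8 site can be a single HTTP GET against one of those URLs. A hedged sketch using the Guzzle client that ships with Drupal 8 (the endpoint is Wikidata's public API; which fields you read back depends on your query):

```php
// Sketch: calling a RESTful API over plain HTTP from Drupal 8.
$client = \Drupal::httpClient();
$response = $client->get('https://www.wikidata.org/w/api.php', [
  'query' => [
    'action' => 'wbgetentities',
    'ids' => 'Q42',
    'format' => 'json',
  ],
]);
// Decode the JSON body into an array for use in your own content.
$data = json_decode((string) $response->getBody(), TRUE);
```

The same pattern applies to any service exposing a RESTful endpoint.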

Why Bother?

When visitors come to your website, no longer are they simply thinking 'What can I read here?', but rather they are thinking: 'What can I do here?'. Read? View? Download? Comment? Interact? Create? Buy?

The same should apply for site-owners. 'What can my website do for me to make my life easier, or otherwise bring me benefit?'

Integrations with other systems can bring that to life. If your website can feed your CRM up to the second live lead data without any copy and paste, that is simply immense. Equally, if you can get your financial transaction data from your web shop into your accounts package simply and reliably, not only are you saving countless person-hours of tedium, but the overall quality of your data will be far greater than that achieved by a laborious, error-prone manual process.

Gotchas & Things to Consider

In any information exchange process it is important to consider the workflow before you begin. You need to answer questions like:

  • Is this a master-slave relationship, where one system holds the canonical data for both systems?
  • Is the data-transfer one-way?
  • Can one system write to the other (push)? Or must a system request data and do its own writing (pull)?
  • What happens if the data on the systems diverge?
  • How does validation work? Do both systems have separate validation rules for data? Do they run over an API? Do they even agree? (they really should!)

For example, say you have a CRM and a website, and they both contain contact data. Which is the one true source of contact data? Does the website merely capture contact data (e.g. in a shop)? Do users have access to update it thereafter? What happens in the CRM when the website gets new data?

Equally, if CRM data changes? What happens on the website? Nothing? Or is it updated based upon CRM?

Lastly, what happens if communications fail? Is there a Plan B? Is there a log? Can the system re-try the transaction in the event of failure?

All these questions need answers, or else, your Plan B might end up more like this.

Finally

One very important point to close on is this:

Make sure you have expert support for your chosen CRM on hand before you begin any integration project. I can't stress this enough. Your project will go more smoothly, it will be faster, and you will end up with a better product that better fits your needs once you engage with experts at both sides of the equation.

We've been doing integrations with other systems for years.

Would you like to bring your website to the next level?

Yes, wire up my website!

Jul 06 2015
Jul 06

If you have a standard RSS feed that you'd like to display in a block on your Drupal website, you've come to the right place. For this example, we will be using a sample feed at http://www.feedforall.com/sample.xml (excerpted here):

<item>
    <title>RSS Solutions for Restaurants</title>
    <description>
        <b>FeedForAll </b>helps Restaurant's communicate with customers. Let your customers know the latest specials or events.<br> <br> RSS feed uses include:<br> <i><font color="#FF0000">Daily Specials <br> Entertainment <br> Calendar of Events </i></font>
    </description>
    <link>http://www.feedforall.com/restaurant.htm</link>
    <category domain="www.dmoz.com">
        Computers/Software/Internet/Site Management/Content Management
    </category>
    <comments>http://www.feedforall.com/forum</comments>
    <pubDate>Tue, 19 Oct 2004 11:09:11 -0400</pubDate>
</item>

Let's say we want to display two of the titles, linked to the appropriate URLs, along with their descriptions, in a block like this:

drupal feeds block with views

General Setup

We'll need the following modules:

  • Feeds (and its Feeds Admin UI sub-module)
  • Views

And here is our general approach. We'll dive into the details next.

  • We will create 2 content types:
    • RSS Feed - we'll use this to tell Feeds about the XML feed we're importing
    • RSS Item - this will be the nodes for displaying the content from the feed
  • We will also create a Feeds Importer - this will tell Feeds what type of data we're importing, where we're importing it to, and how this RSS feed relates to our custom content types. 
  • We will set this up so the site checks for updates to the RSS feeds periodically using cron.
  • Last we'll create a block using views that display the most recent items in this feed. 

Feed Content Types

Why do we need two content types, you might be thinking? Our RSS Feed content type will be very simple; we will just use it to save the URL of our feed, in this case http://www.feedforall.com/sample.xml. The RSS Item content type will be used to save the individual items in the feed; in this case, it will contain the title, description, and link from the RSS feed. So go to admin/structure/types/add and create your new content type for your RSS feed. To keep things simple, I turned off authoring information, unchecked 'Promote to front page', and turned off comments.

xml feed content type

Next create your RSS Item content type

xml item content type

This one requires the addition of fields, which will match the items in the XML feed that you want to pull into each feed item. I just renamed the Body field to Description and added a Link field.

rss item fields

This matches the information in the RSS feed.

<description>
        <b>FeedForAll </b>helps Restaurant's communicate with customers. Let your customers know the latest specials or events.<br> <br> RSS feed uses include:<br> <i><font color="#FF0000">Daily Specials <br> Entertainment <br> Calendar of Events </i></font>
    </description>
    <link>http://www.feedforall.com/restaurant.htm</link>

Set up Feeds module to import

If you haven't already, download the Feeds module and enable Feeds and Feeds Admin UI. 

drupal feeds module

We'll now configure Feeds to import at admin/structure/feeds by clicking on Add Importer, giving our importer a name, and clicking Create.

set up drupal feeds importer

After creating our importer, we'll be presented with a bunch of configuration options. First, under Basic Settings, we will attach our importer to the RSS Feed content type. I've unchecked 'Import on Submission' so we can confirm that this will work when we run cron (select As often as possible for Periodic Import so it will import as soon as you run cron). If you want your Feeds to be imported as soon as you save a new RSS Feed node, you can leave that checked. Save.

drupal feed basic settings

Under Fetcher, we'll select HTTP Fetcher, and you shouldn't need to do any configuration for the HTTP Fetcher Settings. 

Under Parser, if this is a standard RSS feed, you can select Common Syndication Parser, and there won't be any settings for this.

Lastly, under Processor, select Node processor, as we will be importing the feed items as nodes. We'll need to do some quick configuration of the Node processor settings. Some of these settings are up to you, such as whether to update existing nodes, which text format to use, and which author will be set on the created nodes. The main thing is the Bundle: select the RSS Item content type, which tells Feeds what kind of nodes to create when importing feed items. If there is any HTML in your feed item content (like in the description), select Full HTML as the input format. Save.

 drupal feeds node processor settings

Then click on Mapping under the Node Processor - here is where we'll tell Feeds which fields to populate on our Feed Item content type with the information in the feed. The GUID is the unique identifier for each feed item, and we need to have at least one unique ID. 

Run your first import

Now, if you go to create a new RSS Feed node (node/add/rss-feed), you'll see the Feeds module has added a required Feed URL field. Enter the URL of the RSS feed and hit Publish. If, under your Basic Settings for Feeds, you had checked the 'Import on Submission' option, Feeds would have imported the new RSS Items when you saved this RSS Feed node; but since we left it unchecked, saving this node won't do anything except create a new node.

drupal feeds import feed

Now, when we run cron, we'll see that Feeds will import new nodes.

drupal feeds successful

If we go to the content overview page, we'll see that 9 new RSS Item nodes have been created. 

drupal feeds new imported nodes

Display Feeds Items in Block

Lastly we'll create our block - I'll skip through the details of Views, but basically set up a view block that displays content of type RSS Item:

drupal feeds views block

And then display the title, which I linked to the 'link' field for these RSS item nodes, as well as the description.

drupal feeds set up views block

Display this block on your desired page and this is what you should see:

drupal feeds block with views

Hope this was helpful. Any questions or comments, please leave a note below!

Jul 05 2015
Jul 05

I've given a "Constructive Conflict Resolution" talk twice now. First at DrupalCon Amsterdam, and again at DrupalCon Los Angeles. It's something I've been thinking about since joining the Drupal community working group a couple of years ago. I'm giving the talk again at OSCON in a couple of weeks. But this time, it will be different. Very different. Here's why.

After seeing tweets about Gina Likins' keynote at ApacheCon earlier this year, I reached out to her to ask if she'd be willing to collaborate with me on conflict resolution in open source, and ended up inviting her to co-present with me at OSCON. We've been working together over the past couple of weeks. It's been a joy, and a learning experience! I'm really excited about where the talk is heading now. If you're going to be at OSCON, please come along. If you're interested, please follow our tweets tagged #osconCCR.

Jen Krieger from Opensource.com interviewed Gina and me about our talk - here's the article: Teaching open source communities about conflict resolution

In the meantime, do you have stories of conflict in Open Source Communities to share?

  • How were they resolved?
  • Were they intractable?
  • Do the wounds still fester?
  • Was positive change an end result?
  • Do you have resources for dealing with conflict?

Tweet your thoughts to me @kattekrab

Jul 04 2015
Jul 04

Recently, I was working with a team on the project for Country Music Television (CMT) at Corus Entertainment. We needed to synchronize videos from two different sources, one of them being YouTube. There are over 200 YouTube channels on CMT, and we needed to pull videos from all of them regularly. That way, videos published on YouTube become available on CMT automatically, and in-house editors do not need to spend time uploading them.

We store videos in file entities. The videos come from different sources, but all imported videos act the same across the site; only the MIME type differs between two videos from different sources. Each imported video is a file entity on CMT. For the front end, we built Views pages and blocks based on those video file entities.

We already had some videos imported from another source, MPX thePlatform. We used "Media: thePlatform mpx" as the main module to import and update those videos, and to deal with customized video fields we contributed a module, "Media: thePlatform MPX entity fields sync", that works with the main module.

Having finished with the MPX thePlatform videos, we had experience with handling video imports. Now, how would we import YouTube videos from over 200 channels?

At first, we planned to use the Feeds module. Since Google had just deprecated YouTube API v2.0, the old RSS channel feeds no longer worked. Thanks to the community, we found a module called feeds_youtube, whose 7.x-3.x branch works with the latest YouTube API v3.0, giving us a Feeds parser. We still needed a Feeds processor; thanks to the community again, we found the feeds_files module. We installed those modules and their dependencies, spent a couple of days configuring Feeds, and it did work. In the end, though, we gave up on Feeds and decided to build a lightweight custom module that does everything from downloading the YouTube JSON data to creating local file entities. Each video imported from a channel is linked up to an artist or a hosting show.

What do we want from the module? We want it to check multiple YouTube channels. If a new video is uploaded to a channel, we create an associated file entity on CMT. In the created entity, we save some of the metadata from YouTube into entity fields, along with local metadata such as show and artist information. We want the module to handle over 200, and maybe thousands of, channels in the future, and to handle the tasks gracefully without overloading the system when importing thousands of videos. It sounds quite intimidating, but it really isn't.

We ended up building a module and contributing it to Drupal.org: YouTube Channel Videos Sync V3. Here is how we came up with it.

First of all, we gather a list of channel names, along with relevant local metadata like artist or show node IDs. Then we send a request to YouTube and get a list of videos for each channel. For each video, we create a system queue task containing the video's data and the local metadata. Finally, a queue processor works through the queue and creates the video file entities one by one. See the diagram below for the complete process.

drupal module cyoutube video sync
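In Drupal 7 terms, the queue-task and queue-processor steps above can be sketched like this. This is only an illustration: the module name 'mymodule', the queue name and the callback names are hypothetical, but hook_cron_queue_info() and DrupalQueue are the real Drupal 7 queue API:

```php
/**
 * Implements hook_cron_queue_info(). (Sketch; names are hypothetical.)
 */
function mymodule_cron_queue_info() {
  $queues['mymodule_youtube_videos'] = array(
    'worker callback' => 'mymodule_import_video',
    // Spend at most 60 seconds per cron run, so importing
    // thousands of videos cannot overload the system.
    'time' => 60,
  );
  return $queues;
}

/**
 * Queue worker: creates one file entity from one queued video item.
 */
function mymodule_import_video($item) {
  // $item holds the YouTube metadata plus the local metadata
  // (artist/show node IDs) collected when the queue was filled.
}

// Filling the queue, one task per video found on a channel:
$queue = DrupalQueue::get('mymodule_youtube_videos');
$queue->createItem($video_data);
```

Cron then drains the queue a little at a time, which is what keeps the import graceful.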

A little more technical detail below:

How do we get the list of YouTube channel names and the local metadata for the channels? On the module configuration page, we set the fields used to store the channel name. On CMT, we have this field on both show and artist nodes. The module goes through those nodes and builds an array of channel names mapped to node IDs and their content types. The array looks like this:

  array(
    'channel name' => array(
      'field_artist_referenced' => array('nid', ...),
    ),
  );

How do we map YouTube metadata fields to entity fields? On the module configuration page, we can set up a mapping between each YouTube metadata field and a local entity field.

How do we set the local artist and show information on the imported videos? The module has a hook that a custom module can implement to provide that information. In the array above, 'field_artist_referenced' is the machine name of a field on the video entity, and array('nid') is the value for that field. This gives each imported video an entity reference pointing back to the artist or show node; an entity reference field on the video entity is one of the best ways to set up this relationship.
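An implementation of such a hook might look roughly like this. This is a hypothetical sketch: the hook name and function name are illustrative only, not the module's actual API; the channel name and node ID are made up:

```php
/**
 * Hypothetical hook implementation supplying local metadata per channel.
 *
 * Hook name and return structure are illustrative; consult the module's
 * API documentation for the real hook.
 */
function mymodule_youtube_channel_metadata() {
  return array(
    'some_channel_name' => array(
      // Machine name of a field on the video entity => its value.
      'field_artist_referenced' => array(123),
    ),
  );
}
```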

That is the overall process the module follows to import thousands of videos from YouTube.

Jul 04 2015
Jul 04

A friend of mine is a DUI lawyer who uses a Drupal site for content marketing and lead generation, managed by my friends at EverConvert. They quickly identified the weekend as the best time for the website to directly appeal to visitors who found the site after a Friday or Saturday night arrest. In addition to live chat, they decided to alter the site's appearance by publishing content only during these times that contains large, direct calls to action.

Fortunately, Drupal provides the tools to build this with nary a line of code.

Introducing the Rules Scheduler

The primary module you'll use to create something similar is Rules with its Rules Scheduler sub-module. Most purpose built content scheduling modules that I've seen allow you to set absolute dates for pieces of content to be published or unpublished. However, you wouldn't want to have to enter in every single weekend date to get that content published at the right times. Fortunately, Rules Scheduler allows you to schedule arbitrary actions using relative date strings (in addition to any other format supported by strtotime()).
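Those relative strings are resolved by PHP itself, so you can check what Rules Scheduler will do with them in a quick standalone snippet (valid on any day of the week):

```php
<?php
// Rules Scheduler hands these strings straight to PHP's strtotime(),
// which resolves them relative to "now".
$publish = strtotime('next friday');
$unpublish = strtotime('next monday');

echo date('l, Y-m-d', $publish), "\n";   // always a Friday
echo date('l, Y-m-d', $unpublish), "\n"; // always a Monday
```

Any other format strtotime() understands ('+2 weeks', '2015-12-01 09:00') works the same way in the scheduler's date field.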

Before we dive into the configuration I proposed to them, you should understand that the Rules module basically adds a GUI based scripting language to the back end of Drupal. In addition to configuring actions to be performed after certain events when a set of conditions are met, you can create Rules components that are essentially subroutines that can be directly invoked by Rules (or code) without being triggered by events.

To setup a Rules Scheduler based content publishing schedule, you have to create two Rules components: one will publish the piece of content and schedule it to be unpublished on a relative date (i.e. "next Monday"), while the other will unpublish the piece of content and schedule it to be published on a relative date (i.e. "next Friday"). Another Rule will need to react to content being created or updated to initiate the publishing schedule.

Using Fields to Avoid "Magic" Behavior

One of the keys to building a maintainable Drupal site (or module) is to ensure that every "automated" action is explicitly enabled. In the early days of Drupal Commerce development, I adopted an approach to some module behaviors where a feature just automatically worked if certain conditions were met (e.g. the representation of attribute fields on Add to Cart forms). "Neat!" I thought, until I realized that it was difficult to document and even more difficult to ensure users knew to look for said documentation. Much better to include explicit user interface components to enable functionality.

In the case of a scheduling system, you wouldn't want to build the site to just automatically enter every piece of content, or even every piece of content of a certain type, into the publishing pattern. Your client, a.k.a. the end user, really expects (and needs) to see an indicator on a form that submitting it will lead to a desired outcome.

Checkbox enabling the publishing pattern for calls to action.

For my friend's site, my recommendation was simply to add a Boolean field using a checkbox widget to the relevant content type reading, "Schedule this content to be published on Fridays and unpublished on Mondays." If the site required more publishing patterns than just the weekend pattern, I would've used a List (text) field with radio buttons or a select list identifying the different available publishing patterns.

Building the Scheduling System in Rules

Working with Rules is fairly simple if you can write out long hand what it is you're trying to accomplish. Considering all of our constraints here, we need a set of rules that accomplishes the following:

  1. When a call to action is created or saved with the scheduling checkbox checked, delete any scheduled tasks related to the content. (This is possible because Rules Scheduler lets you assign an identifier to scheduled rules, so identifying our scheduled tasks with the node ID will allow us to delete the tasks when needed in the future.)
  2. If the call to action that was just saved isn't published, schedule it to be published next Friday.
  3. If the call to action that was just saved is published, schedule it to be unpublished next Monday.
  4. When a call to action is automatically published on Friday, schedule it to be unpublished next Monday.
  5. When a call to action is automatically unpublished on Monday, schedule it to be published next Friday.
Actions that queue up a publishing schedule.

The first three items will be accomplished through a single rule reacting to two events, "After saving new content of type Call to action" and "After updating existing content of type Call to action." The rule will first delete any scheduled task for the piece of content and then it will invoke two rules components that will schedule the appropriate task based on whether or not the call to action was published.

Components that manage the publishing schedule.

The final two items will be accomplished through rules components that perform the necessary action and then schedule the appropriate task. As mentioned above, we'll use relative time strings ("next monday" and "next friday") and choose task identifiers that include the call to action node IDs ("unpublish-call-to-action-[node:nid]" and "publish-call-to-action-[node:nid]" respectively).

Give it a whirl!

It only took me about 10 minutes to create and test the rules based on the specification above, but if you aren't familiar with the Rules UI, it could take much longer. I believe Rules is worth learning (we built Drupal Commerce around it, after all), but there's something to be said for ready made examples.

I've attached to this post a Features export of the content type and related rules configurations for you to try on your own site. Give Scheduled Calls to Action a whirl and let me know how it works for you in the comments!

(Note: to see the rules configurations and scheduled tasks, enable the Rules UI, Views, and Views UI modules if they aren't already enabled on your site.)

Jul 03 2015
Jul 03


Published Fri, 2015/07/03 - 11:25, Updated Fri, 2015/07/03 - 12:10

Last week, at the amazing Drupal North regional conference, I gave a talk on Backdrop: an alternative fork of Drupal. The slides from the talk are attached below, in PDF format.

Attachment: drupalnorth-2015-backdrop.pdf (387.62 KB)

Jul 03 2015
Jul 03

This is the third in a series of posts about Drupal 8's configuration management system. The Configuration Management Initiative (CMI) was the first Drupal 8 initiative to be announced in 2011, and we've learned a lot during thousands of hours of work since then. These posts share what we've learned and provide background on the why and how. In case you missed them: here is part one and part two.

Configuration management helps to manage complexity

Drupal offers a rich user interface that allows us to build complex websites. With a few clicks we can create views and content entities with custom fields. We can limit a view to only listing a particular content type, and then we can display the view's output in a block that only certain roles can see. By the time we've finished building our site, we've created numerous configurations that have relationships to each other and the system that built them. Drupal 8’s solution to managing such complexity is configuration dependencies.

What is a configuration dependency?

In Drupal you can create a node type, add fields to it and configure how they are shown using an entity display. For example, you can create a 'news' node type and attach an image field called 'main photo'. By default, Drupal comes with different view modes for content entities, such as 'teaser', 'rss', 'search result' and 'full content', and you can add custom view modes as well. Each view mode can be configured per node type to have a different entity display. The entity display in turn configures which fields are displayed for that view mode of that node type and how they are displayed. For example, the 'main photo' field can be configured to be displayed in the 'full content' view mode for the 'news' node type and hidden in the 'search result' view mode.

Creating fields, node types and entity displays through the UI creates configuration entities representing each thing. Fields are a bit more complex than that. Each field consists of two configuration entities. Firstly, it has a field storage, which is unique to an entity type, defines the field’s storage requirements and is shared across all the entity type's bundles. (A node type is a bundle for the node entity type.) Secondly, it has a field, which defines a field's settings for a particular bundle, such as the default value. For example, the 'news' node type could have one default image in the 'main photo' field, while another node type could reuse the same field storage with a different default image.

In order for the entity display to exist, the fields, field storages and node type must exist. In order for a field to exist, its storage and the node type it is attached to must exist. These are descriptions of configuration dependencies.

The configuration dependencies between node types, field storages, fields and entity displays
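In exported configuration, these relationships live in the entity's 'dependencies' key. An illustrative sketch of what the 'news' node type's 'full content' entity display might carry (field and file names here are assumptions based on the example above, not output copied from a real site):

```yaml
# core.entity_view_display.node.news.full.yml (illustrative sketch)
dependencies:
  config:
    - field.field.node.news.field_main_photo
    - node.type.news
  module:
    - image
```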

What happens when a configuration entity is deleted?

Dependencies allow us to work out what will be affected if we delete a field. When a field is deleted, we list the affected configuration entities that will be deleted or updated on the confirmation form. For example, if you delete the 'Tags' field on the article node type, the configuration system will warn you that all of the article's entity displays are going to be updated.

Deleting a field in Drupal 8 displays configuration that will be updated

We also do this on module uninstallation. The following example is particularly interesting because it shows that blocks that depend on a view will be deleted when uninstalling the Views module.

Uninstalling the Views module in Drupal 8 tells you that it's going to delete a block that depends on a view

This ability to manage dependencies provides a much better picture of the potential consequences of an action. It also makes Drupal 8 significantly more robust because orphaned configuration can't exist.

Drupal decides which configuration entities to update and which to delete based on how and if the configuration entity's code implements the onDependencyRemoval() method. Implementations can update the configuration to remove any dependency passed to it. If the dependency is not removed, then the configuration entity will be deleted instead. In the 'Tags' field deletion example above, by default, the implementation in EntityDisplayBase retains the entity display and removes any configuration it contains for the deleted fields.
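The shape of that pattern can be sketched as follows. This is a simplified illustration of the idea, not EntityDisplayBase's literal code, and the removeFieldSettings() helper is invented for the example:

```php
/**
 * Sketch: reacting to a removed dependency instead of being deleted.
 *
 * Simplified illustration; removeFieldSettings() is a hypothetical helper.
 */
public function onDependencyRemoval(array $dependencies) {
  $changed = parent::onDependencyRemoval($dependencies);
  foreach ($dependencies['config'] as $entity) {
    // If the removed dependency is one of our fields, drop just that
    // field's display settings and keep this configuration entity.
    if ($this->removeFieldSettings($entity)) {
      $changed = TRUE;
    }
  }
  // Returning TRUE tells the configuration system this entity was
  // updated, and therefore does not need to be deleted.
  return $changed;
}
```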

How do configuration entities get dependencies?

Configuration entities can depend on four types of things: modules, themes, other configuration entities and content entities. Every configuration entity has an implicit dependency on the module that provides its entity type (that is, the module that contains the actual code for the entity implementing ConfigEntityInterface). For example, roles depend on the User module. Additional dependencies are added based on the logic implemented by the configuration entity. For example, block configuration entities depend on the theme they are placed in.

All configuration entities come with logic (implemented in ConfigEntityBase) that integrates with two systems: third-party dependencies and plugins.

Third-party settings

Third-party settings are settings that modules can add to configuration entities that it does not provide. For example, the Content Translation module includes third-party settings to add per-bundle translation settings to 'Content language settings' configuration entities.

For example if you make the 'news' node type translatable, the corresponding 'Content language settings' configuration entity's 'dependencies' key will look like this:

dependencies:
  config:
    - node.type.news
  module:
    - content_translation

The configuration entity ties the Content Translation and Language modules together with the 'news' node type. The dependency on the Language module is implicit because the configuration entity is called 'language.content_settings.node.news'. The first part of any configuration entity's ID, in this case 'language', always indicates which module provides the code for the configuration entity definition.

Every configuration entity has the methods to set and get third-party settings. If a module adds a third-party setting to a configuration entity, then the entity is updated to depend on the module. Dependencies due to third-party settings can always be removed when a module is uninstalled, as opposed to requiring that those entities be deleted or leaving orphaned data on them. This happens automatically as part of Drupal 8's configuration system and represents a major improvement. In Drupal 7, modules that added functionality to the field system would often add values to a serialised PHP array stored in the field database table that proved very difficult to clean up.

Plugins

Plugins are Drupal 8's API for combining configurable settings with reusable code. Many core plugins' configurable settings are stored in configuration entities, including block plugins, filter plugins and view display plugins. For more about plugins see Joe Shindelar's excellent blog post.

The interaction of plugins and configuration entities creates a more elaborate chain of dependencies than third-party settings. For example, the Filter module provides both the filter format configuration entity and filter plugins. The filter format configuration entity is used to configure filter plugins together in one text format. Drupal's Filter module ships with a default filter format called 'Plain text'. This configures three filter plugins which are provided by the Filter module: filter_html_escape, filter_url, and filter_autop. See the screenshot below for how this appears on the "Plain text" filter format configuration screen (admin/config/content/formats/manage/plain_text).

Configuring the 'Plain text' filter format

Since the Filter module provides the code for filter formats, their entity IDs begin with 'filter', creating an implicit dependency. Since all three plugins are also provided by the Filter module, there is no need to specify the dependency a second time, so the 'Plain text' filter's 'dependencies' key is empty by default:

dependencies: { }

Filter plugins can also be provided by other modules. For example, the Editor module provides an 'editor_file_reference' plugin. If you tick the box labelled "Track images uploaded via a Text Editor" you add this plugin to the 'Plain text' filter format. The configuration system is able to determine that the filter format has a new dependency on the Editor module. This works because FilterFormat implements EntityWithPluginCollectionInterface. After saving, the 'Plain text' filter format's 'dependencies' key will look like:

dependencies:
  module:
    - editor

Plugins can add dynamic dependencies as well by implementing DependentPluginInterface. This is how Views adds a dependency on a role configuration entity when you allow only selected roles to use a view.

Configuration depending on content

The Content Block module provides a 'block_content' content entity. The module has a plugin deriver that creates a block plugin for every 'block_content' content entity that a user creates. When you place one of these blocks, the block system creates a block configuration entity that details which template region the block instance appears in and the instance's visibility settings. Each definition provided by the deriver also contains the content dependency information. This allows the configuration entity to add the content entity to its dependencies.

For example, here are the dependencies of a content block placed in Bartik:

dependencies:
  content:
    - 'block_content:basic:925ba628-dfe7-478a-be3e-6285f89751f0'
  module:
    - block_content
  theme:
    - bartik

Content dependencies are soft dependencies. This means that if the piece of content is missing, the configuration entity can exist without it and it should not break the system. For example, blocks created by the Content Block module will simply display a message to the user if the content for the block does not exist yet on the site.

The ability to determine if configuration depends on content is an exciting new feature. This is used during import to fire an event that allows modules to create or stub the missing content entities.

Enforced dependencies

All the dependencies described above always reflect the current code base and configuration. We re-calculate configuration entity dependencies on save, so you don't need to be concerned with adding and removing dependencies when something changes. However, there are situations in which you need to ensure that a configuration entity has a particular dependency that cannot be calculated.

An example is the 'book' node type. The code for node types generally is provided by the Node module, so all node types need the Node module in order to function. However, other modules can also create node types, and those other modules are not required for the node types they create to function. So, the Book module creates the 'book' node type which does not have any calculable dependency on the Book module; only the Node module is actually needed. In this case, however, we still want to enforce that the 'book' node type be removed from the active configuration when Book is uninstalled, so we have to add an enforced dependency. This is one of the few times where a module developer would have to edit an exported configuration entity directly.

The module author adds the enforced dependency to the module's default configuration file like so:

dependencies:
  module:
    - book
  enforced:
    module:
      - book

'book' is added both as an enforced dependency and as the initial value for the calculated dependency, since dependencies are not calculated on install (this behaviour might change). Note again that the implicit dependency on the Node module is not saved here, because it is automatically included at the beginning of the entity ID itself, 'node.type.book'.

You can depend on configuration entities

Configuration entity dependencies allow us to manage our configuration more effectively. It's not uncommon to forget about an old view, make what you think are unrelated changes to another part of the site and experience unintended side effects. This can lead to broken block placements, missing handlers in views, or even disclosing information that was intended to be private. With Drupal 8 we can build tools to manage complexity more easily and with more flexibility and integrity.

This post has had input from: catch, mtift, and susanmccormick. And thanks to xjm for extensive comments, questions, and help without which this post would be much harder to understand.

Jul 03 2015
Jul 03

Drupal VM - Vagrant and Ansible Virtual Machine for Drupal Development

For the past couple years, I've been building Drupal VM to be an extremely-tunable, highly-performant, super-simple development environment. Since MidCamp earlier this year, the project has really taken off, with almost 200 stars on GitHub and a ton of great contributions and ideas for improvement (some implemented, others rejected).

In the time since I wrote Developing for Drupal with Vagrant and VMs, I've focused on meeting all my defined criteria for the perfect local development environment. And now, I'm able to say that I use Drupal VM when developing all my projects—as it is now flexible and fast enough to emulate any production environment I use for various Drupal projects.

Easy PHP 7 testing with CentOS 7 and MariaDB

After a few weeks of work, Drupal VM now officially supports running PHP 7 (currently, 7.0.0 alpha 2) on CentOS 7 with MariaDB, or you can even tweak the settings to compile PHP from source yourself (following the PHP role's documentation).

Doing this allows you to see how your own projects will fare when run with the latest (and fastest) version of PHP. Drupal 8 performance improves dramatically under PHP 7, and most other PHP applications will have similar gains.

Read PHP 7 on Drupal VM for more information.

Other major improvements and features

Here are some of the other main features that have recently been added or improved:

  • Flexible database support: MySQL, MariaDB, or (soon) Percona are all supported out of the box. See the guide Use MariaDB instead of MySQL.
  • Flexible OS support: Drupal VM officially supports Ubuntu 14.04, Ubuntu 12.04, CentOS 7, or CentOS 6 out of the box; other OSes like RHEL, Fedora, Arch and Debian may also work, but are not supported. See: Using different base OSes.
  • Use with any Drupal deployment methodology — works with any dev workflow, including Drush make files, local Drupal codebases, and multisite installations.
  • Automatic local drush alias configuration
  • 'Batteries included' — developer utilities and essentials like Varnish, Solr, MailHog, XHProf are easy to enable or disable.
  • Production-ready, security-hardened configuration you can install on DigitalOcean
  • Thoroughly-documented — check out the Drupal VM Wiki on GitHub
  • First class support for any host OS — Mac, Linux or Windows
  • Drupal version agnostic — works great with 6, 7, or 8.
  • Easy configuration of thousands of parameters (powered by a few dozen component-specific Ansible roles) through the config.yml file.
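As an illustration of that last point, reshaping the VM is usually just a matter of overriding a few values in config.yml. The keys below are indicative only; check them against the project's default config.yml before using them:

```yaml
# config.yml (excerpt) — indicative keys; consult the project's defaults.
vagrant_hostname: drupalvm.dev
php_version: "7.0"        # e.g. for PHP 7 testing on CentOS 7
installed_extras:
  - varnish
  - solr
  - mailhog
```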

I'd especially like to thank the dozens of people who have filed issues against the project to add needed functionality or fix bugs (especially for multi-platform, multi-database support!), and have helped improve Drupal VM through over 130 issues and 17 pull requests.

There are dozens of other VM-based or Docker/container-based local development solutions out there, and Drupal VM is one of many, but I think that—even if you don't end up using it for your own work—you will find sound ideas and best practices in environment configuration in the project.

Jul 03 2015
Jul 03

This was the 6th of our critical issues discussion meetings in a row to be publicly recorded. (See all prior recordings). Here is the recording of the meeting video and chat from today, in the hope that it helps more than just those who were on the meeting:

[embedded content]

If you also have significant time to work on critical issues in Drupal 8 and we did not include you, let me know as soon as possible.

The meeting log is as follows (all times are CEST real time at the meeting):


[11:06am] alexpott: https://www.drupal.org/node/2280965
[11:06am] Druplicon: https://www.drupal.org/node/2280965 => [meta] Remove or document every SafeMarkup::set() call [#2280965] => 90 comments, 13 IRC mentions
[11:06am] alexpott: https://www.drupal.org/node/2506581
[11:06am] Druplicon: https://www.drupal.org/node/2506581 => Remove SafeMarkup::set() from Renderer::doRender [#2506581] => 49 comments, 9 IRC mentions
[11:07am] alexpott: https://www.drupal.org/node/2506195
[11:07am] Druplicon: https://www.drupal.org/node/2506195 => Remove SafeMarkup::set() from XSS::filter() [#2506195] => 40 comments, 3 IRC mentions
[11:09am] dawehner: https://www.drupal.org/node/2502785
[11:09am] Druplicon: https://www.drupal.org/node/2502785 => Remove support for #ajax['url'] and $form_state->setCached() for GET requests [#2502785] => 51 comments, 18 IRC mentions
[11:09am] Druplicon: dawehner: 5 hours 36 sec ago tell dawehner you might already be following https://www.drupal.org/node/1412090 but I figure you would be in favor.
[11:10am] alexpott: GaborHojtsy: do you what the you link is for the live hangout?
[11:11am] GaborHojtsy: live hangout page: http://youtu.be/rz_EissgU7Q
[11:11am] GaborHojtsy: alexpott: ^^^
[11:11am] GaborHojtsy: to watch that is
[11:11am] alexpott: GaborHojtsy: thanks!
[11:12am] plach: https://www.drupal.org/node/2453153
[11:12am] Druplicon: https://www.drupal.org/node/2453153 => Node revisions cannot be reverted per translation [#2453153] => 159 comments, 58 IRC mentions
[11:12am] plach: https://www.drupal.org/node/2453175
[11:12am] Druplicon: https://www.drupal.org/node/2453175 => Remove EntityFormInterface::validate() and stop using button-level validation by default in entity forms [#2453175] => 79 comments, 9 IRC mentions
[11:14am] plach: https://www.drupal.org/node/2478459
[11:14am] Druplicon: https://www.drupal.org/node/2478459 => FieldItemInterface methods are only invoked for SQL storage and are inconsistent with hooks [#2478459] => 112 comments, 29 IRC mentions
[11:14am] larowlan: https://www.drupal.org/node/2354889
[11:14am] Druplicon: https://www.drupal.org/node/2354889 => Make block context faster by removing onBlock event and replace it with loading from a BlockContextManager [#2354889] => 100 comments, 19 IRC mentions
[11:15am] larowlan: https://www.drupal.org/node/2421503
[11:16am] Druplicon: https://www.drupal.org/node/2421503 => SA-CORE-2014-002 forward port only checks internal cache [#2421503] => 46 comments, 11 IRC mentions
[11:16am] larowlan: https://www.drupal.org/node/2512460
[11:16am] Druplicon: https://www.drupal.org/node/2512460 => "Translate user edited configuration" permission needs to be marked as restricted [#2512460] => 20 comments, 8 IRC mentions
[11:17am] WimLeers: catch: pfrenssen is likely gonna be working on the "numerous paramconverters" issue
[11:17am] WimLeers: the issue catch talked about: https://www.drupal.org/node/2512718
[11:17am] Druplicon: https://www.drupal.org/node/2512718 => Numerous ParamConverters in core break the route / url cache context [#2512718] => 22 comments, 12 IRC mentions
[11:17am] catch: WimLeers: did you discuss last night's discussion with him already?
[11:18am] WimLeers: https://www.drupal.org/project/issues/search/drupal?project_issue_follow...
[11:19am] WimLeers: https://www.drupal.org/node/2450993
[11:19am] Druplicon: https://www.drupal.org/node/2450993 => Rendered Cache Metadata created during the main controller request gets lost [#2450993] => 132 comments, 27 IRC mentions
[11:20am] pfrenssen: catch: WimLeers: yes I'm working on that, for the moment just studying how it works
[11:20am] berdir: WimLeers: I think we can also demote it if all the other must issues are critical on their own
[11:20am] GaborHojtsy: https://www.drupal.org/node/2512460
[11:20am] Druplicon: https://www.drupal.org/node/2512460 => "Translate user edited configuration" permission needs to be marked as restricted [#2512460] => 20 comments, 9 IRC mentions
[11:20am] catch: pfrenssen: cool. I'll be around most of today so just ping if you want to discuss.
[11:21am] GaborHojtsy: https://www.drupal.org/node/2512466
[11:21am] Druplicon: https://www.drupal.org/node/2512466 => Config translation needs to be validated on input for XSS (like other t string input) [#2512466] => 34 comments, 3 IRC mentions
[11:21am] WimLeers: pfrenssen++
[11:21am] WimLeers: berdir: assuming you're talking about https://www.drupal.org/node/2429287 — then yes, that's exactly the plan :)
[11:21am] Druplicon: https://www.drupal.org/node/2429287 => [meta] Finalize the cache contexts API & DX/usage, enable a leap forward in performance [#2429287] => 112 comments, 10 IRC mentions
[11:21am] GaborHojtsy: https://www.drupal.org/node/2489024
[11:21am] Druplicon: https://www.drupal.org/node/2489024 => Arbitrary code execution via 'trans' extension for dynamic twig templates (when debug output is on) [#2489024] => 43 comments, 12 IRC mentions
[11:22am] GaborHojtsy: https://www.drupal.org/node/2512718
[11:22am] Druplicon: https://www.drupal.org/node/2512718 => Numerous ParamConverters in core break the route / url cache context [#2512718] => 22 comments, 13 IRC mentions
[11:22am] xjm: GaborHojtsy: I took care of updating the security polict with mlhess
[11:22am] xjm: GaborHojtsy: that's done
[11:23am] • xjm listening to the meeting but cannot join because of 10 person limit
[11:23am] GaborHojtsy: xjm: yay
[11:23am] GaborHojtsy: xjm++
[11:24am] WimLeers: Yes
[11:24am] WimLeers: I DO HAVE A MICROPHONE!!!
[11:24am] WimLeers: :P
[11:24am] xjm: GaborHojtsy: https://www.drupal.org/node/475848/revisions/view/7267195/8630716 now restrict access is mentioned explicitly, so any perm that has it is covered
[11:24am] Druplicon: https://www.drupal.org/node/475848 => Security advisories process and permissions policy => 0 comments, 1 IRC mention
[11:25am] larowlan: https://www.drupal.org/node/2509898
[11:25am] Druplicon: https://www.drupal.org/node/2509898 => Additional uncaught exception thrown while handling exception after service changes [#2509898] => 22 comments, 5 IRC mentions
[11:25am] alexpott: WimLeers: a working microphone :)
[11:28am] WimLeers: alexpott: all of you guys are breaking up for me from time to time. It looks like it's something with Chrome/Hangouts :(
[11:28am] xjm: WimLeers: yeah I had to use FF
[11:28am] WimLeers: My mic *works* if I test it locally.
[11:28am] WimLeers: xjm: lol, the beautiful irony
[11:30am] WimLeers: berdir: alexpott: I checked and I agree with the change you guys just asked me feedback on: https://www.drupal.org/node/2375695#comment-10082188
[11:30am] Druplicon: https://www.drupal.org/node/2375695 => Condition plugins should provide cache contexts AND cacheability metadata needs to be exposed [#2375695] => 117 comments, 40 IRC mentions
[11:32am] alexpott: https://www.drupal.org/node/2497243
[11:32am] Druplicon: https://www.drupal.org/node/2497243 => Rebuilding service container results in endless stampede [#2497243] => 97 comments, 22 IRC mentions
[11:33am] • larowlan using FF too, doesn't trust Google
[11:33am] alexpott: dawehner, catch: anyone got the nid of the chx container issue?
[11:33am] dawehner: https://www.drupal.org/node/2513326
[11:33am] catch: alexpott: https://www.drupal.org/node/2513326
[11:33am] Druplicon: https://www.drupal.org/node/2513326 => Performance: create a PHP storage backend directly backed by cache [#2513326] => 43 comments, 13 IRC mentions
[11:33am] Druplicon: https://www.drupal.org/node/2513326 => Performance: create a PHP storage backend directly backed by cache [#2513326] => 43 comments, 14 IRC mentions
[11:34am] catch: beat me.
[11:36am] larowlan: https://www.drupal.org/node/2511568 for the ctools issue I mentioned
[11:36am] Druplicon: https://www.drupal.org/node/2511568 => Create "context stack" service where available contexts can be registered [#2511568] => 0 comments, 3 IRC mentions
[11:41am] xjm: dropping off now to walk to the venue
[11:41am] alexpott: pfrenssen++
[11:50am] GaborHojtsy: for browser we are working on https://www.drupal.org/node/2430335
[11:50am] Druplicon: https://www.drupal.org/node/2430335 => Browser language detection is not cache aware [#2430335] => 47 comments, 21 IRC mentions
[11:50am] GaborHojtsy: Fabianx-screen: see above
[11:51am] berdir: and it's not a blocker
[11:51am] berdir: because right now it just kills page/smart cache
[11:51am] berdir: so if you use that, you are not seeing a cached page anyway
[11:51am] berdir: Fabianx-screen: we have a cache context for the content language
[11:52am] xjm: the youtube video has like a 20 second delay
[11:53am] GaborHojtsy: xjm: we get what we pay for :D
[11:54am] xjm: now I am talking
[11:55am] plach: https://www.drupal.org/node/2453175#comment-10080242
[11:55am] Druplicon: https://www.drupal.org/node/2453175 => Remove EntityFormInterface::validate() and stop using button-level validation by default in entity forms [#2453175] => 79 comments, 10 IRC mentions
[11:55am] xjm: more like 60s
[11:55am] GaborHojtsy: xjm: well, you’ll be in the recording forver now :D
[11:55am] xjm: :P
[11:56am] xjm: GaborHojtsy: is there any way to increase the number of participants or to include the chat sidebar in the video?
[11:56am] WimLeers1: catch: Can you leave a comment on https://www.drupal.org/node/2512718 with your POV/conclusion — sounds like you have the clearest grasp on this/completest view on the likely solution.
[11:56am] Druplicon: https://www.drupal.org/node/2512718 => Numerous ParamConverters in core break the route / url cache context [#2512718] => 23 comments, 14 IRC mentions
[11:56am] WimLeers1: xjm: chat is here, we don't use Google Hangout's chat sidebar
[11:56am] GaborHojtsy: xjm: the chat window is THIS ONE
[11:57am] larowlan: xjm: the chat one is lost soon as the call ends
[11:57am] xjm: WimLeers: GaborHojtsy: so the thing is that listening to the video it's very hard to figure out which issues people are discussing, even with Gábor's notes
[11:57am] larowlan: xjm: you can get more than 10 in the call if you have a corporate account I think
[11:57am] WimLeers: larowlan: Gábor copy/pastes this chat to the g.d.o post with the link of the recording
[11:57am] • larowlan nods
[11:57am] WimLeers: that was meant for xjm, oops
[11:58am] GaborHojtsy: xjm: the trick is if we would start on time, then my chat log shows timestamps which are sync with the video
[11:58am] GaborHojtsy: xjm: we have a few minute delay between the video start and the top of the hour
[11:58am] catch: WimLeers: yep will do.
[11:59am] GaborHojtsy: xjm: because people arrive later basically :)
[11:59am] WimLeers: catch++
[12:01pm] GaborHojtsy: xjm: its also hard to get Druplicon in google hangouts — as in impossible ÉD
[12:01pm] GaborHojtsy: xjm: and there was some interest for people watching to be able to follow links WHILE the meeting was on
[12:02pm] xjm: GaborHojtsy: but it doesn't work -- I'm trying that right now -- the delay is too big
[12:02pm] WimLeers: alexpott: regarding in title: https://api.drupal.org/comment/26#comment-26
[12:03pm] GaborHojtsy: xjm: well, then at least the Druplicon advantage is there :)
[12:03pm] GaborHojtsy: xjm: I am happy to use some way to get the comments in if there is one
[12:04pm] GaborHojtsy: xjm: I am not sure it helps if eg. someone shares their screen with an IRC client throughout the video
[12:04pm] GaborHojtsy: xjm: the links are not clickable either
[12:04pm] xjm: GaborHojtsy: shannon did that for the meeting we had 2y ago -- though the same problem with links not being clickable
[12:04pm] GaborHojtsy: xjm: but that may be a workaround

Jul 03 2015
Jul 03

Organization and Individual Member badges

If you have been to a DrupalCon before, you will notice something new in the registration process. We've made it really easy for you to sign up for Drupal Association membership and bundle it into your overall DrupalCon purchase. Whether you're joining or renewing for yourself or for your organization, you can select the amount you wish to pay, and then you'll be on your way to seeing the member badge on your Drupal.org profile and your conference name badge.

DrupalCon is brought to you by the Drupal Association and an amazing global team of volunteers. By giving back to Drupal through membership in the Association, you are helping our worldwide community thrive. Your funds help to give grants and scholarships that help on the local level, and your membership shows that you care about Drupal. Through your generosity, you continue the momentum of Drupal and inspire all of us to keep pushing the project forward.

Jul 03 2015
Jul 03

Mobile application development is a relatively new area of Drupal development services, but it is advancing rapidly as mobile devices are used more and more.

In terms of adapting websites for mobile platforms, PhoneGap and technologies built on it remain very popular. So let’s see what it is and how it works.

Installing everything necessary

First you need to install all the necessary software. We will demonstrate this on Ubuntu Linux.

Installing NodeJS

sudo apt-get install nodejs

That is simple, with one caveat. When Ubuntu installs the package, it names the executable file “nodejs”, although many applications, including PhoneGap, expect it to be called “node”. To fix this discrepancy, create a link to the file under the “node” name:

sudo ln -s /usr/bin/nodejs /usr/bin/node

Installing PhoneGap and Cordova

npm install -g phonegap
npm install -g cordova

Cordova is the underlying platform for building mobile applications with HTML, CSS and JavaScript.

Installing Ant

sudo apt-get install ant

Apache Ant is a build automation tool. You may not need it, though, because newer versions of Cordova use a bundled Gradle that does not require a separate installation.

Installing Android SDK

We will use Android as our example platform for creating mobile applications. We need to install the Android SDK (software development kit); it can be found here. Extract the files and place them in /usr/local/android-sdk-linux.

Installing Java (JRE & JDK)

sudo apt-get install openjdk-7-jre
sudo apt-get install openjdk-7-jdk

The JRE (Java Runtime Environment) includes the Java virtual machine, class libraries and the Java application launcher needed to run programs written in Java. The JDK (Java Development Kit) is the development environment for building applications, applets and components in the Java programming language.

Configuring paths

Open the .profile file for editing:

gedit ~/.profile

Add the following lines to the end of the file (take care not to place them inside an “if” block):

export ANDROID_HOME="/usr/local/android-sdk-linux"
export ANDROID_PLATFORM_TOOLS="/usr/local/android-sdk-linux/platform-tools"
export PATH="$PATH:$ANDROID_HOME:$ANDROID_PLATFORM_TOOLS"

Now save the file and log out of the system for the changes to take effect. Be careful with the ANDROID_HOME path: many older guides on the web point it at the tools folder (such as /usr/local/android-sdk-linux/tools), which leads to annoying errors and a failed application build.
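One way to sanity-check the variables is to derive the platform-tools path from ANDROID_HOME, so the two can never get out of sync. This is just a sketch using the same paths as above; adjust them if your SDK lives elsewhere:

```shell
# Same values as in ~/.profile; deriving one path from the other avoids typos.
export ANDROID_HOME="/usr/local/android-sdk-linux"
export ANDROID_PLATFORM_TOOLS="$ANDROID_HOME/platform-tools"
export PATH="$PATH:$ANDROID_HOME:$ANDROID_PLATFORM_TOOLS"
echo "$ANDROID_HOME"
echo "$ANDROID_PLATFORM_TOOLS"
```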

Configuring Android SDK

Enter the following command in the terminal:

android

This opens the Android SDK Manager window:

Drupal 7 application with PhoneGap

If you see this window, everything is installed correctly; if not, you have made a mistake somewhere, so go back through the steps above and check that the paths are declared correctly. In the package manager you can install additional Android components. Some will be selected by default; in addition to those, select the packages for the latest API version, as well as Android SDK Tools and Android SDK Build-tools, possibly in several versions at once. Then download and install everything.

Updating C/C++

Installing packages through the following commands:

sudo apt-get install lib32stdc++6
sudo apt-get install lib32z1

Note that this is only needed on 64-bit systems; on 32-bit Ubuntu it does not appear to be necessary (though that is not a sure bet).

Creating a device for emulation

android avd

This command opens the Android Virtual Device Manager, which manages the devices your application will be emulated on. No such devices exist by default, so you will need to create one yourself. It is easy: just click “create” and fill in the form. The emulator's appearance depends on the device you have chosen, for example:

Drupal 7 application with PhoneGap

Creating and running the app

cordova create my-app

The command creates a simple "Hello world" project in the my-app folder. Learn more about the parameters for this command here. You can clear out the www folder and add your own static HTML pages there. Go to the my-app directory and add a platform:

cordova platform add android

It will create a folder with the appropriate platform. Now build the application:

cordova build android

The result of this procedure should be the .apk file in the /platforms/android/build/outputs/apk/ directory, or you can run the emulate command which immediately builds and runs the application on the emulator:

cordova emulate android

If you want to emulate on a specific virtual device, use the same command:

cordova emulate --target=emulator-5554 android

If everything is configured correctly, this builds and runs our application in the emulator (the process can take a long time). This is also the stage at which most errors occur.

Possible problems with Emulation

Gradle: Execution failed for task ':AppName:compileDebugAidl'.
> failed to find target android-22

You must go to the Android SDK Manager and make sure all packages from the Android M (API 22) group are installed.

FAILURE: Build failed with an exception.
* Where:
Script
'/var/www/hello/platforms/android/CordovaLib/cordova.gradle' line: 64
* What went wrong:
A problem occurred evaluating root project 'android'.
> No installed build tools found. Please install the Android build tools version 19.1.0 or higher.

Here you need to check that the appropriate version of the build tools (19.1.0 or higher) is available. A folder named after this version should be present in the /usr/local/android-sdk-linux/build-tools folder. If the build-tools folder is empty, the paths in the ~/.profile file are written incorrectly. You can also debug the script mentioned in the error (if you know how to work with the language it is written in).

Android studio

Android Studio is a program for creating Android mobile apps. Instructions for installing it are available here. The program has many options and capabilities, so it deserves a separate article; if you are interested, information can be found on the official site.

Drupal 7 application with PhoneGap

Drupal mobile application development

A mobile application is a program that runs on a mobile device, while a Drupal website is a rather complex structure that includes a large number of PHP source files, a database and more. To “turn” your Drupal site into a mobile application, you first need to “simplify” it. You can use the Mobile App Generator module for that; it is already considered obsolete, but we will take a look at it anyway.

This module is very easy to use. Go to the admin/mag settings page, adjust a few simple settings and click Generate mobile app. There may be problems with some views, and you may have to delete them. As a result, we get a mobile version of the website that contains only static HTML pages, JavaScript and CSS, stored by default in the sites/default/files/ directory. This is easy if the website consists of a single page (a Drupal single-page application); if it has more pages, you will need to write them separately in pure HTML (plus, of course, CSS and JavaScript). You can then place this mobile version of the website in PhoneGap and build a mobile application from it.

DrupalGap

DrupalGap is an open-source tool for application development for Drupal-based websites. It can be used to easily develop multi-platform mobile apps that communicate with a Drupal website, as well as web applications for Drupal sites.

To create your application, you first need to have a Drupal website, or install Drupal from scratch. Next you need to install the DrupalGap module, which in turn requires a number of other modules that also need to be installed.

Here are these modules:

Services

Chaos tools

REST server

Libraries

Views

Views JSON (installed together with Views)

You can download everything with a single command:

drush en drupalgap -y

Now you will need to enable all the required modules, because they may not be enabled automatically. Clear the cache. Then go to the admin/config/services/drupalgap page, enter the name of the directory where you want to create your application, and click Install the SDK. A folder with the name you entered will now appear in your website folder. The Test connection button allows you to check whether everything is installed correctly. In case of successful installation, it will display the message:

The system connect test was successful, DrupalGap is configured properly!

If there is an error, then something has been configured incorrectly. A solution can be found on one of these pages:

http://drupalgap.org/troubleshoot

https://www.drupal.org/node/2230031

https://www.drupal.org/node/1820552

If everything is OK, we can now see our application in the browser, or open it from a mobile device by following its link. If we are working locally, the address will look like this: my-site/my-app/index.html. If we have not changed the start page settings, it will look like this:

Drupal 7 application with PhoneGap

Now let's try to create our own custom module to display the home page. Go to the application directory, then to app (if the website is called my-site, and the application is called my-app, the path will look like my-site/my-app/app), and create a modules/custom directory; that's exactly where our module (e.g. my_module) will live. Here we create a JavaScript file that will be the main file of the module; give it the name of the module and the .js extension. Its full path will look like my-site/my-app/app/modules/custom/my_module/my_module.js: this file serves the same role as a .module file in Drupal.

Now you need to tell DrupalGap about our module. Edit the app/settings.js file (if you don’t have it, create it by copying the default.settings.js file from the same directory and renaming the copy to settings.js), find this line:

/** Custom Modules - www/app/modules/custom **/

and write after it:

Drupal.modules.custom['my_module'] = {};

If you want to add more modules, append similar lines, just replacing 'my_module' with the name of the new module (e.g. 'my_additional_module').
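For example, registering two custom modules would look like this (the second module name is made up; the stub on the first line only exists so the snippet runs on its own, since in a real settings.js the Drupal object is already defined):

```javascript
// Stub so this snippet is self-contained; settings.js already has Drupal.
var Drupal = Drupal || { modules: { custom: {} } };

/** Custom Modules - www/app/modules/custom **/
Drupal.modules.custom['my_module'] = {};
Drupal.modules.custom['my_additional_module'] = {};
```

Each registered name must match the module's directory and main .js file name.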

Open the module's main file, my_module.js, and write the following JavaScript there:

/**
* Implements hook_menu().
*/
function my_module_menu() {
  var items = {};
  items['hello_world'] = {
    title: 'DrupalGap',
    page_callback: 'my_module_hello_world_page'
  };
  return items;
}

/**
* The callback for the "Hello World" page.
*/
function my_module_hello_world_page() {
  var content = {};
  content['my_button'] = {
    theme: 'button',
    text: 'Hello World',
    attributes: {
      onclick: "drupalgap_alert('Hi!')"
    }
  };
  return content;
}
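Because DrupalGap hooks are plain JavaScript functions, you can sanity-check what they return in a browser console or Node before loading the app. This sketch simply re-declares the two functions above and inspects their output:

```javascript
// The same two hook implementations as in my_module.js above.
function my_module_menu() {
  var items = {};
  items['hello_world'] = {
    title: 'DrupalGap',
    page_callback: 'my_module_hello_world_page'
  };
  return items;
}

function my_module_hello_world_page() {
  var content = {};
  content['my_button'] = {
    theme: 'button',
    text: 'Hello World',
    attributes: { onclick: "drupalgap_alert('Hi!')" }
  };
  return content;
}

// Inspect what DrupalGap will receive from each hook.
var items = my_module_menu();
console.log(items['hello_world'].page_callback); // "my_module_hello_world_page"
console.log(my_module_hello_world_page()['my_button'].text); // "Hello World"
```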

As we can see, the code is similar to the usual hook_menu implementations and their callbacks. Open the app/settings.js file, find the line with the homepage settings and replace it with your own:

drupalgap.settings.front = 'hello_world';

Now, if we open our application, we will see that the homepage is our custom one:

Drupal 7 application with PhoneGap

In such an application, we can create and edit content on the site, leave comments, and view users and taxonomy (provided, of course, that we have all the necessary permissions).

DrupalGap has many more features that you can read about on the official site.

Technology forges ahead. Today, making a mobile version of a site is no longer enough: it is becoming more popular and convenient to use mobile applications instead of opening the site in a browser. Drupal should not lag behind in this respect; it keeps developing, and so must we, to keep pace with progress. In terms of integration with mobile platforms, DrupalGap is the newest invention so far, and we should know it and be able to use it, because we want to move forward, right?

Jul 03 2015
Jul 03
Last week, at the amazing Drupal North regional conference, I gave a talk on Backdrop: an alternative fork of Drupal. The slides from the talk are attached below, in PDF format.
Jul 03 2015
Jul 03

In early June a Drupal 8 theme system critical issues sprint was held in Portsmouth, New Hampshire as part of the D8 Accelerate program.

The sprint started the afternoon of June 5 and continued until midday June 7.

Sprint goals

We set out to move forward the two (at the time) theme system criticals, #2273925: Ensure #markup is XSS escaped in Renderer::doRender (created May 24, 2014) and #2280965: [meta] Remove or document every SafeMarkup::set() call (created June 6, 2014).

Sponsors

The Drupal Association provided the D8 Accelerate grant which covered travel costs for joelpittet and Cottser.

Bowst provided the sprint space.

As part of its NHDevDays series of contribution sprints, the New Hampshire Drupal Group provided snacks and refreshments during the sprint, lunch and even dinner on Saturday.

Digital Echidna provided time off for Cottser.

Summary

xjm committed #2273925: Ensure #markup is XSS escaped in Renderer::doRender Sunday afternoon! xjm’s tweet sums things up nicely.

As for the meta (which consists of about 50 sub-issues), by the end of the sprint we had patches on over 30 of them, 3 had been committed, and 7 were in the RTBC queue.

Thanks to the continued momentum provided by the New Jersey sprint, as of this writing approximately 20 issues from the meta issue have been resolved.

Friday afternoon

peezy kicked things off with a brief welcome and acknowledgements. joelpittet and Cottser gave an informal introduction to the concepts and tasks at hand for the sprinters attending.

After that, leslieg on-boarded our Friday sprinters (mostly new contributors), getting them set up with Drupal 8, IRC, Dreditor, and so on. leslieg and a few others then went to work reviewing documentation around #2494297: [no patch] Consolidate change records relating to safe markup and filtering/escaping to ensure cross references exist.

Meanwhile in "critical central" (what we called the meeting room where the work on the critical issues was happening)…

lokapujya and joelpittet got to work on the remaining tasks of #2273925: Ensure #markup is XSS escaped in Renderer::doRender.

cwells and Cottser started the work on removing calls to SafeMarkup::set() by working on #2501319: Remove SafeMarkup::set in _drupal_log_error, DefaultExceptionSubscriber::onHtml, Error::renderExceptionSafe.

Thai food was ordered in, and many of us continued working on issues late into the evening.

Saturday

joelpittet and Cottser gave another brief introduction to keep new arrivals on the same page and reassert concepts from the day before.

leslieg did some more great on-boarding Saturday and worked with a handful of new contributors on implementing #2494297: [no patch] Consolidate change records relating to safe markup and filtering/escaping to ensure cross references exist. The idea was that by reviewing and working on this documentation the contributors would be better equipped to work directly on the issues in the SafeMarkup::set() meta.

Mid-morning Cottser led a participatory demo with the whole group of a dozen or so sprinters, going through one of the child issues of the meta and ending up with a patch. This allowed us to walk through the whole process and think out loud the whole time.

Sprinters gathered around a table
The Benjamin Melançon XSS attack in action. Having some fun while working on our demo issue.

By this time we had identified some common patterns after working on enough of these issues.

By the end of Saturday all of the sprinters including brand new contributors were collaborating on issues from the critical meta and the issue stickies were flying around the room with fervor (a photo of said issue stickies is below).

15 Drupalists posing outside the sprint space after a long day of sprinting

10 Drupalists having dinner
Then we had dinner :)

Sunday morning

drupal.org was down for a while.

We largely picked up where we left off Saturday, cranked out more patches, and joelpittet and Cottser started to review the work that had been done the day before that was in the “Needs Human” column.

A whiteboard with many sticky notes representing drupal.org issues
Our sprint board looked something like this on the last day of the sprint.

Thank you

Thanks to the organizing committee (peezy, leslieg, cwells, and kbaringer), xjm, effulgentsia, New Hampshire DUG, Seacoast DUG, Bowst, Drupal Association, Digital Echidna, and all of our sprinters: cdulude, Cottser, cwells, Daniel_Rose, dtraft, jbradley428, joelpittet, kay_v, kbaringer, kfriend, leslieg, lokapujya, mlncn, peezy, sclapp, tetranz.

Jul 02 2015
Jul 02

The quest for improved page-load speed and website performance is constant. And it should be. The speed and responsiveness of a website have a significant impact on conversion, search engine optimization, and the digital experience in general.

In part one of this series, we established the importance of front-end optimization on performance, and discussed how properly-handled images can provide a significant boost toward that goal. In this second installment, we’ll continue our enhancements, this time by tackling CSS optimization.

We’ll consider general best practices from both a front-end developer’s and a themer’s point of view. Remember, as architects and developers, it’s up to us to inform stakeholders of the impacts of their choices, offer compromises where we can, and implement in smart and responsible ways.

Styles

Before we dive into optimizing our CSS, we need to understand how Drupal’s performance settings for aggregation work. We see developers treating this feature like a black box, turning it on without fully grokking its voodoo. Doing so misses two important strategic opportunities: 1. Controlling where styles are added in the head of our document, and 2. Regulating how many different aggregates are created.

Styles can belong to one of three groups:

  • System - Drupal core
  • Default - Styles added by modules
  • Theme - Styles added in your theme

Drupal aggregates styles from each group into a single sheet for that group, meaning you’ll see at minimum three CSS files being used for your page. Style sheets added by inclusion in a theme’s ‘.info’ file or a module’s ‘.info’ file automatically receive a ‘true’ value for the every_page flag in the options array, which wraps them into our big three aggregates.

Styles added using drupal_add_css automatically have the every_page flag set to ‘false.’ These style sheets are then combined separately, by group, forming special one-off aggregate style sheets for each page.

When using drupal_add_css, you can use the optional ‘options’ array to explicitly set the every_page flag to ‘true.’ You can also set the group it belongs to and give it a weight to move it up or down within a group.

<?php
drupal_add_css(drupal_get_path('module', 'custom-module') . '/css/custom-module.css', array('group' => CSS_DEFAULT, 'every_page' => TRUE));
?>

Style sheets added using Drupal’s attached property aren’t aggregated unless they have the every_page flag set to ‘true.’

Favoring every_page: true

Styles added with the every_page flag set to ‘false’ (CSS added via drupal_add_css without the flag set to ‘true’, or without the option set at all, or added using the attached property) will only load on the pages that call the function that adds or attaches that style sheet, so those pages have a smaller payload. However, on pages that do load such a style sheet, an additional HTTP request is required to build the page: one for each one-off aggregate, per group, per page.

More often than not, I prefer loading an additional 3kb of styling in my main aggregates, which will be downloaded once and then cached locally, rather than creating a new aggregate that triggers a separate HTTP request to make up for what’s not in the main aggregates.

Additionally, turning on aggregation causes Drupal to compress our style sheets, serving them Gzipped to the browser. Gzipped assets are 70%–90% smaller than their uncompressed counterparts. That’s a 500kb CSS file being transferred in a 100kb package to the browser.

Preprocessing isn’t a license for inefficiency

I love preprocessing my CSS. I use SASS, written in SCSS syntax, and often utilize Compass for its set of mixins and for compiling. But widespread adoption of preprocessing has led to compiled CSS files that tip the 1Mb mark (way over the average), or break IE’s limit of 4095 selectors per style sheet. Some of this can be attributed to Drupal’s notorious nesting of divs (ugly markup often leads to ugly CSS), but a lot of it is just really poor coding habits.

The number one culprit I’ve come across is over-nesting of selectors in SASS files. People traverse the over-nested DOM created by Drupal with SASS and spit out compiled CSS rules using descendant selectors that go five (sometimes even more) levels deep.

Screenshot: an over-nested SCSS example

I took the above example from the SASS inception page, linked above, and cleaned it up to what it should probably be:

Screenshot: the cleaned-up version

The result? It went from 755 bytes to 297 bytes, a 60% reduction in size. That came from just getting rid of the extra characters added by the excess selectors. Multiply the savings by the 30 partials or so, and it’s a pretty substantial savings in compiled CSS size.
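The compiled output of over-nested SCSS, and its flattened equivalent, look roughly like this (selectors invented for illustration):

```css
/* Over-nested SASS compiles to long descendant selectors like this: */
body .page .main-content .region .block ul.menu li a {
  color: #0074bd;
}

/* Flattening the nesting yields a shorter, faster-matching rule: */
.block .menu a {
  color: #0074bd;
}
```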

Besides the size savings, the number and type of selectors directly impact the amount of time a browser takes to process your style rules. Browsers read styles from right to left, matching the rightmost “key” selector first, then working left, disqualifying items as it goes. Mozilla wrote a great efficient CSS reference years ago that is still relevant.

Conclusion

Once again we’ve demonstrated how sloppy front-end implementations can seriously hamper Drupal’s back-end magic. By favoring style sheet aggregation and reining in exuberant preprocessing, we can save the browser a lot of work.

In our next, and final, installment in this series, we’ll expand our front-end optimization strategies even further, to include scripts.

Jul 02 2015
Jul 02

Backdrop CMS is a fork of the Drupal project. Although Backdrop has a different name, the 1.0 version is very similar to Drupal 7. Backdrop 1.0 provides an upgrade path from Drupal 7, and most modules or themes can be quickly ported to work on Backdrop.

Backdrop is focused on the needs of the small- to medium-sized businesses, non-profits, and those who may not be able to afford the jump from Drupal 7 to Drupal 8. Backdrop values backwards compatibility, and recognizes that the easier it is for developers to get things working, the less it will cost to build, maintain, and update comprehensive websites and web applications. By iterating code in incremental steps, innovation can happen more quickly in the areas that are most important.

The initial version of Backdrop provides major improvements over Drupal 7, including configuration management, a powerful new Layout system, and Views built-in.

What's Different From Drupal 7?

Backdrop CMS is, simply put: Drupal 7 plus built-in Configuration Management, Panels, and Views.

Solving the Deployment Dilemma

Database-driven systems have long suffered from the deployment dilemma: Once a site is live, how do you develop and deploy a new feature?

Sometimes there is only one copy of a site – the live site – and all development must be done on this live environment. But any mistakes made during the development of the new feature will be immediately experienced by visitors to the site.

It’s wiser to also have a separate development environment, which allows for creation of the new feature without putting the live site at risk. But what happens when that new feature has been completed, and needs to be deployed?

We already have a few great tools for solving this problem. When it comes to the code needed for the new feature we have great version control systems like Git. Commit all your work, merge if necessary, push when you’re happy, and then just pull from the live environment (and maybe clear some caches).

Yet often, especially with complex content management systems like Drupal, there are a handful of configuration changes stored in the database that also need to be deployed. Configuration that is stored in the database is often tricky to separate from content and other information that should not be deployed.

The obvious solution is to get all the configuration out of the database and into something more like code, which can be changed, committed, merged, pushed, and pulled, in the same way. Unfortunately, getting configuration out of the database and into code has proven tricky in the Drupal world.

The most popular Drupal solution to the deployment dilemma is the Features module. But it has shortcomings. Trying to untangle or manage configuration packages deployed using Features can drive you mad. Features may be the best tool Drupal has today, but we can do better.

In Backdrop the deployment dilemma has been solved from the ground up: configuration is easily exportable, and never saved in the database.

Configuration Management in Backdrop CMS

Backdrop CMS has a built-in user interface for managing configuration: Administration » Configuration » Development » Configuration management. From this page you can export, import, and compare changes made to configuration for a site.

If you want to sync your local development environment with the live site, navigate to the live site's Configuration management page and download an export.

Then navigate to your local site's Configuration management page and upload the config.tar.gz file you just exported.

Once the config files have been imported, Backdrop will show you a list of all files that have been changed, and allow you the chance to review them.

Simply reverse the process to deploy from local to live.

In Backdrop, configuration is stored in JSON format. These JSON files are stored in a special config directory that's located inside the files directory by default.

Best practices dictate that you move the config directory completely outside the web root so your config files are not accessible over HTTP, or accidentally copied along with your files.

Within the config directory, you will notice both an active and a staging directory. Files in the active directory are being read and updated by your Backdrop site and it is generally unsafe to change these files directly. The staging directory is where you can place config files that you are ready to import. Most of the time this staging directory will be empty.

To stage changed config files without using the export/import tools on the Configuration management page, you simply copy all the config files from the active directory for the source site into the staging directory for the destination site. When you visit the Configuration management page on the destination site it will automatically compare the files in staging to those in active and let you review the changes.
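The copy step above might look something like this (all paths are hypothetical stand-ins for your sites' config directories; the sample file content is invented for illustration):

```shell
# Sketch: manually staging config from a "live" site to a "local" one.
LIVE=/tmp/example-live/config
LOCAL=/tmp/example-local/config
mkdir -p "$LIVE/active" "$LOCAL/staging"

# Pretend the live site has one active config file.
echo '{ "_config_name": "system.core" }' > "$LIVE/active/system.core.json"

# Copy every active config file from the source site into the
# destination site's staging directory.
cp "$LIVE"/active/*.json "$LOCAL/staging/"

ls "$LOCAL/staging"
```

After the copy, the review and import still happen on the destination site's Configuration management page.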

When using the import/export tool, or when moving the config files around manually, the actual import should always be handled through the user interface on the Configuration management page. This last step is necessary because Backdrop may need to perform database updates based on changes in the config files.

Config files can also be versioned and moved from one environment to another with your favorite version control system. When adding config to version control, first move the config directory outside of files. It’s also important that active config files from the source site always get pulled into the staging directory on the destination site so that the necessary database updates can be performed. The following is a recommended setup for a two-site deployment strategy:

Having a solid framework for configuration management that works for every system in core (and every system in contrib, too) will help solve the deployment dilemma once and for all.

Variable Handling in Backdrop CMS

Backdrop CMS still has a variables table in the database, but it exists only for backwards-compatibility with Drupal 7. Modules that have been fully ported to Backdrop will not use the variables table, and instead will save their variables into config files. Let's look at the Administration Menu module as an example.

In Drupal 7, the settings form provided by the Administration Menu module is defined in admin_menu.inc. If you look at the last line of the function admin_menu_theme_settings(), you can see that the function system_settings_form() is called. This is a helper function in Drupal 7 that added submit buttons as well as a submit handler. The submit handler automatically saved any value collected by the form into the variables table, using the key of the form element as the variable name.

  return system_settings_form($form);

In Backdrop, the Administration Menu module has been included in core, and has been renamed Administration Bar. The function admin_bar_theme_settings() can be found in the file admin_bar.inc. In Backdrop, the function system_settings_form() has been deprecated and is only included for backwards compatibility with Drupal 7.

For a module to be ported fully to Backdrop, it must add its own submit buttons to configuration forms, and provide its own submit handler to save the values collected with that form, so that each module can define the name of the config file containing its settings:

  $form['actions'] = array(
    '#type' => 'actions',
  );
  $form['actions']['submit'] = array(
    '#type' => 'submit',
    '#value' => t('Save configuration'),
  );

  return $form;

function admin_bar_theme_settings_submit($form, &$form_state) {
  $config = config('admin_bar.settings');
  $config->set('margin_top', $form_state['values']['margin_top']);
  $config->set('position_fixed', $form_state['values']['position_fixed']);
  $config->set('components', array_values(array_filter($form_state['values']['components'])));
  $config->save();
}

Retrieving and setting these values is no longer done with variable_get() and variable_set(). To retrieve values from config, you load the whole config file and then retrieve the value you need from it. Setting values follows the same pattern:

  // Load the config file admin_bar.settings.json.
  $config = config('admin_bar.settings');
  // Get a value.
  $margin = $config->get('margin_top');
  // Set a value.
  $config->set('margin_top', $margin);
  // Save the config.
  $config->save();
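The admin_bar.settings file behind these calls is a small JSON document, roughly along these lines (the keys mirror the code above; the values shown, and the _config_name convention identifying the file, are illustrative):

```json
{
    "_config_name": "admin_bar.settings",
    "margin_top": true,
    "position_fixed": false,
    "components": [
        "admin_bar.icon",
        "admin_bar.menu"
    ]
}
```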

For developer convenience, Backdrop also includes procedural wrappers around the handling of config objects:

  // Get a value.
  $margin = config_get('admin_bar.settings', 'margin_top');
  // Set a value.
  config_set('admin_bar.settings', 'margin_top', $margin);

If you need only a single value from the config file, it is best to use the procedural wrapper function. If you need many values (for example, to set default values for the settings form), use the full object.

The Layout Problem

Though some websites use the same layout for every page of the site, there are many others that need more variety. Using multiple layouts is another pain point in Drupal development today, since Drupal core expects each site to use only a single layout.

Many different contrib solutions are trying to tackle the layout problem. Some work with Drupal's existing block system, showing and hiding blocks on different pages to create the illusion of different layouts. Others manipulate Drupal's template system, swapping out one file for another. And some replace the whole existing system with a more comprehensive solution.

Most Drupal users have had the experience of changing their theme, and then realizing that all the blocks they’ve placed have disappeared, and need to be placed again. This is because throughout Drupal's history, it’s been assumed that the layout for a site should be tied to the theme. But the layout of a site is more closely tied to the content that site chooses to display, where it’s displayed, and how it’s displayed, than which color or font is in use.

Layouts in Backdrop CMS

Layouts have been completely separated from themes in Backdrop, which means that users will be able to switch themes without needing to place blocks again. All the layouts provided by core are responsive, so Backdrop sites can remain responsive regardless of which theme is used.

It may take some mind-bending to fully grok the separation of layout and theme. But once you understand that the two are not tied together, you'll start to see how the whole system begins to make a lot more sense.

By default, Backdrop ships with two layouts already configured and in use: one for the visitor-facing pages, and one for administrator-facing pages. Either one of these default layouts can be modified to suit your site’s needs. Visit Administer » Structure » Layouts to see the provided layouts.

Adding a new layout will clone the appropriate default layout for you, allowing blocks to be rearranged quickly. A simple form will ask for the name, page layout, and path where the layout should be used (wildcards are supported). Optional conditions, such as switching based on user roles or content types, can also be added.

When adding any block to a layout, you can still define special criteria that will regulate when that block should (or should not) be displayed. Additionally, you can now choose between two block styles. The “Default” block style will render using the familiar block.tpl.php file. The “Dynamic” block style will allow you to add additional CSS classes and select which HTML elements should be used when the block is rendered.

The site header is now provided as a block. The header block provides its own settings and can be configured to show or hide certain elements within. A theme can also override the template for the site header to change the markup, or individual blocks for the logo and navigation can be added to a layout instead of using the header block at all.

Layouts in Backdrop are aware of their context. If you create a new layout and enter the path node/%, you’ll notice that Backdrop recognizes node/% as a page with a node context, and will automatically make that context available to the layout as well as the blocks within. If you add a wildcard into the path that Backdrop does not recognize, you can manually select a context from a list of available contexts.

Blocks now have per-instance settings. This means that you can add the same block to a layout as many times as you like. Each instance of that block will have its own configuration, and will be saved in the layout’s configuration file.

The Anatomy of a Backdrop Layout

A layout in Backdrop is a top-level concept, equivalent to a module or theme. Core layouts are located in the core/layouts directory and additional layouts can be added in the layouts directory in your site root, just as modules can be added into the modules directory and themes can be added into themes.

A layout usually consists of just four files: the info file, a template file, a style sheet, and a preview image. The preview image is displayed in the layouts user interface when a layout is being selected. Its purpose is to help administrators tell the difference between layouts that may have similar names.

The layout’s info file will contain a name, version number, and Backdrop core version, just like the info file for a module or theme. A layout’s info file will also contain a list of all available regions, and which of those regions is the default region. The default region is the place on the page where “the content” of any previously existing page will be placed (by default) when it’s using this layout.

name = 2 columns
version = BACKDROP_VERSION
backdrop = 1.x

; Specify regions for this layout.
regions[header] = Header
regions[top] = Top
regions[content] = Content
regions[sidebar] = Sidebar
regions[footer] = Footer

The layout's template file is roughly equivalent to a Drupal 7 theme’s page.tpl.php file. It contains the markup for all the regions listed in the info file. Best practices dictate that the HTML element that is your layout wrapper have the class layout--layout-name, and that all the elements within have class names prefixed with l- to indicate that these are layout-specific elements.

A layout’s style sheet is responsible for only the page layout, but should contain information about how this layout appears at all screen sizes. Styles for colors, borders, and other decorations should not be included in the layout’s style sheet, so that each layout will work equally well with all themes. The CSS in this style sheet should target only elements whose classes begin with the l- prefix.

Theme Code Changes

Since the new layout system in Backdrop handles the placement of content into pages, the .info file of a theme no longer contains a list of regions, and the theme will not contain a template file that places these regions. The file named page.tpl.php in Backdrop CMS is roughly equivalent to the html.tpl.php file in Drupal 7. It contains only the outermost HTML, HEAD, and BODY tags.

  
  <!DOCTYPE html>
  <html<?php print backdrop_attributes($html_attributes); ?>>
    <head>
      <?php print backdrop_get_html_head(); ?>
      <title><?php print $head_title; ?></title>
      <?php print backdrop_get_css(); ?>
      <?php print backdrop_get_js(); ?>
    </head>
    <body<?php print backdrop_attributes($body_attributes); ?>>
      <?php print $page; ?>
      <?php print $page_bottom; ?>
      <?php print backdrop_get_js('footer'); ?>
    </body>
  </html>

The rest of the theme system in Backdrop remains mostly unchanged from Drupal 7, though the process phase has been removed to reduce complexity (Note how functions are now called directly from template files.)

Block Changes

Blocks have also been completely refactored in Backdrop CMS, and are now objects. If you love Object-Oriented Programming and want to write your blocks as classes, now you can. (See the base block class in core/modules/layout/includes/block.class.inc.) If you are not a fan of OOP and love your block hooks, don't worry, Backdrop has kept those around for you too.

There are two small changes you'll need to note when comparing block hooks in Backdrop versus Drupal 7. Backdrop’s blocks now have two additional arguments: settings and contexts.
Drupal 7:

  function hook_block_view($delta = '')

Backdrop, with added settings and contexts:

  function hook_block_view($delta = '', $settings = array(), $contexts = array())

The settings for each instance of the block are provided for both the configuration form and the display of the block. The contexts for the layout will be available for the display of the block as well.

Views in Backdrop CMS

The only significant difference between the Views module for Drupal 7 and the Views module in Backdrop is configuration: each view saves into a configuration file instead of into the database. If you've used Views in Drupal 7 you can expect the experience to be very similar in Backdrop.

It's Here: Backdrop CMS 1.0

Backdrop CMS is very similar to Drupal 7 by design. The goal is for the existing Drupal community to be able to move their Drupal 7 sites, modules, and themes to Backdrop quickly and easily.

The differences that are in Backdrop were carefully considered to solve the biggest problems in Drupal 7. By solving these problems while maintaining the highest amount of compatibility possible, Backdrop hopes to be a product that is both easier to use and less expensive to maintain.

Build your next project with Backdrop CMS.

Backdrop FAQ

Answers to Common Questions

How many modules are available for Backdrop?
When Backdrop was released in January of this year, we had three contributed modules listed under the Backdrop Contrib GitHub Project. Within the first month, 45 projects had been ported. By the time Drupal 8 is released, Backdrop may have all of the top 50 Drupal modules converted, or an equivalent included in core.

How does Backdrop work without Drupal.org?
The Backdrop community has created tools to integrate with GitHub to match key features provided by Drupal.org, including testing, packaging, and issues management. New Backdrop-driven sites provide documentation, project listings, and the update server.

Can I use Drush with Backdrop?
The Drush team has been actively working to abstract the project so that it isn't dependent on a Drupal bootstrap. Once that abstraction is in place, we'll bundle Drush support into Backdrop core or make it a contrib project.

How does Backdrop handle security updates?
The Drupal security team has kindly allowed members of the Backdrop security team access to all issues that affect Drupal and Backdrop in their private tracker. For issues that affect both Backdrop and Drupal, the projects will be doing joint releases. For issues that affect Backdrop alone, the Backdrop security team will fix them in a private repository on GitHub. (A feed of all security issues can be found on the Backdrop website.) Backdrop includes an Update module, so users will be notified of security updates in their sites in the same way they would expect from a Drupal site.

How does Backdrop compare to Drupal 8?
The biggest difference isn't in the code, it's in the target audience. Drupal's historical success in big business has skewed its development resources in that direction: Drupal 8 is heavily marketed as an enterprise platform and it will be successful in that market. Backdrop is trying to accommodate a massive market of Drupal users who may find that upgrading their sites to Drupal 8 is not an option because of its significantly different architecture.

Why upgrade a site to Backdrop from Drupal 7?
Backdrop is very similar to Drupal 7, but there are some things you can't do without breaking some APIs – such as the Backdrop configuration management system that stores all configuration in JSON files. Giving developers an opportunity to avoid the Features module is enough reason, for a lot of people, to upgrade to Backdrop.

How can Backdrop be less expensive to maintain than Drupal if it is so similar?
Backdrop values backwards-compatibility, which may slow down development but shifts the responsibility of maintenance from end-users and site-builders onto core and contrib development teams. By putting an emphasis on compatibility, Backdrop hopes to reduce maintenance costs and (especially) reduce the cost of upgrading sites between versions. Backdrop provides an upgrade path from Drupal 7 via update.php. Because it is so similar, this upgrade only takes about 30 seconds for most sites. If Backdrop can get enough momentum, it could start saving Drupal sites a lot of time while delivering next-generation functionality without an expensive upgrade.

Image: "Photograph by Bethany Legg"

Jul 02 2015
Jul 02

Part 1 of 2. Drupal user number 5622, John Faber, has been involved with Drupal since late 2003. He is a Managing Partner with Chapter Three, a San Francisco-based digital agency. Their slogan sums up well what a lot of us think about what we do: "We build a better internet with Drupal." John and I got on a Google Hangout on March 17th, 2015, to talk about the business advantages of contribution and sustainability when basing your business on open source software. We also touch on Drupal 8's potential power as a toolset and for attracting new developers, doing business in an open source context, and more!

This conversation was recorded via Google Hangout and hotel WiFi. I apologize for the occasionally poor audio quality.

Contribution: Pay it forward or just common sense?

With a gruelling, 5-year release cycle nearing the finish line, Drupal 8 has been a challenge for the Drupal community in many ways. It has raised many questions about the contribution models and their sustainability, from "amateur" contributor burnout to professionalization, to how to credit clients and employers for contributions, and more.

John and Chapter Three have taken an approach that only a few companies have taken so far: hiring a full time contributor to do nothing but work on Drupal itself. Alex Pott is on staff at Chapter Three with the title "Drupal Research Engineer". Alex doesn't do any billable or client work; his specific responsibility is to be a Drupal (core) contributor.

Running a business in the context of an open source toolset, according to John, "has the tendency to straddle the double yellow line ... There's making money and there's contributing back. And we've tried to do a good job of that."

Drupal 8, big and live

The Fortune 50 Drupal 8 early adopter that John mentions ("We want to be innovative and we are willing to roll the dice on Drupal 8.") is CH2M Hill. Their site was built by Chapter Three and runs on Acquia Cloud. John points out that having Alex Pott working for Chapter Three gives them legitimacy to offer Drupal 8 services and an emergency "let's ask Alex!" channel if they were to get stuck anywhere.

"I have to tell you, it really excites me. I feel like as soon as we have this adoption rolling with somebody and they see some success on this thing, Drupal 8 is really going to be the future for a lot of organizations who've invested their time in it--Chapter Three being one of them."

Drupal 8, Drupal restart

I put it to John that some of the new features of Drupal 8--everything is an entity, everything is fieldable, combined with a powerful, flexible, Drupal-Views powered admin back end--mean that we're entering a new era of sitebuilding. We don't even know what best practices are going to look like, how much we'll need modules, how much will be configuration "recipes", and of course how much will be Drupal wrappers around other, external PHP libraries. He got very excited: "It's kind of like the beginning of Drupal again. When Drupal 4 came out, we knew that this was a platform that had extensibility, legs, and this extremely cool modular system that allowed you to do anything. And I feel as though Drupal 8 with CMI and other tools built into it ... We're right at the beginning of ... Now we have a new platform that can do ... We already know what the old platform can do and it's great! But this is great times two! Or great times unknown!"

Guest dossier

  • Name: John Faber
  • Twitter: @flavoflav2000
  • Drupal.org: flavor
  • Work affiliation: Managing Partner, Chapter Three.
  • 1st Drupal version: 4
  • How John found Drupal: "Some random guy in the [offshore fishing] club who had done very well in the beginning of the Internet era sent an email: 'You should check out Drupal. Cool project.' I installed it and I was like, 'This solves all of my immediate problems right now.' And from that point on, I began to get the idea that Drupal was actually a pretty good way to make some money, so I got myself a little cluster of clients. It's evolved from there."
  • The Fortune 50 Drupal 8 early adopter that John mentions is CH2M Hill. Their site was built by Chapter Three and runs on Acquia Cloud

Interview video

[embedded content]

Jul 02 2015
Jul 02

28 pages of unmarred perfection. This book is pure unadulterated genius

- Chris Arlidge

Never Be Shocked Again! - Budgeting your Web Project

Are you having trouble figuring out an appropriate budget for your next web project? Our whitepaper can help!

Download your FREE COPY, to learn the different types of website projects, understand the factors that play a role in the budgeting process, and determine where your web plans will fit when it comes to costs!

Don’t ever be shocked by web costs again! A clear guide to help you plan the budget for your next web project.

If you are looking to create a rich and dynamic web app with little to no latency, Drupal and AngularJS are the answer, as they go hand in hand. In this article I will give you a brief intro to what AngularJS is. There is a bit of a learning curve in this awesome architecture, but if you know some basic JavaScript you will be OK.

AngularJS is a front-end JavaScript framework for making web apps, created by our good pal Google.

A couple of cool facts about AngularJS:

  • Open Source
  • MVC pattern (ASP.NET developer soft spot)
  • Handles tasks like DOM manipulation, updating the UI based on data or input, and registering callbacks
  • Has a CRUD application library for: data-binding, basic templating directives, form validation, routing, deep-linking, reusable components, and dependency injection
  • Declarative programming

Perks of AngularJS:

1. Maximizes Performance

We all know that Drupal is a heavy server-side PHP framework. Adding AngularJS to your Drupal application is really beneficial because it moves some of that business logic to the client side. Drupal can be used as the backend data source, and AngularJS for presentation/theming in some instances.

Your own coding performance will also improve, since less code is needed to do certain things. For example, data-binding is a breeze.

2. Easy to Troubleshoot

Angular uses declarative programming, so when looking at an HTML document you can get a quick sense of:

  • What is static and what is dynamic.
  • What the variable names are.
  • What code is executing, and where.

Here is a comparison of AngularJS and jQuery:
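
The gist of the comparison, sketched in code (the controller name, element IDs, and variable names here are illustrative, not taken from the original image):

```html
<!-- AngularJS: declarative. The controller and the bound variable
     are visible right in the markup. -->
<div ng-controller="GreetingController">
  <input ng-model="name">
  <p>Hello {{name}}!</p>
</div>

<!-- jQuery: imperative. The behavior lives in a separate script that
     you have to read to know what the page does. -->
<input id="name">
<p id="greeting"></p>
<script>
  jQuery('#name').on('keyup', function () {
    jQuery('#greeting').text('Hello ' + jQuery(this).val() + '!');
  });
</script>
```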

As we can see in the AngularJS example, the controllers/functions and variables are displayed right in our faces.

With jQuery, though, you have to do a bit more research inside the JS files to decipher what is going on in the page.


3. Plenty of Resources

Angular has good backing from the web development community. It has over 100,000 tagged questions on stackoverflow.com, videos on YouTube, and excellent documentation on Angular's own website.

Now that you have seen a bit of what Angular has to offer, the best way to learn is to get your hands dirty. It's impossible to teach you everything in one article, so here is a good list of resources that could help you learn:

Jul 02 2015
Jul 02

Yes, you read that right: we're going to release 200 free Drupal 8 videos.

We want Drupal 8 to be a success. One way we can help make that happen is to make reliable training available to as wide an audience as possible.

Back in April, we launched our Drupal 8 training Kickstarter project. The project closed in mid-May with enough sponsorship for 100 videos, but over the last 6 weeks we've been talking with more sponsors and now have backing for 200 videos!

Here's an overview of the all the free Drupal training, plus an introduction to the organizations who made it possible.

50 Beginner Videos: Acquia

Acquia is one of the most impressive open source companies. Started by Dries Buytaert, the founder of Drupal, Acquia provides an open cloud platform, developer tools and world-class support for Drupal.

Acquia positions Drupal as the central technology for many organizations, allowing them to unify their content, community and commerce on the Drupal platform.

We're pleased to announce that Acquia has chosen to sponsor our Drupal 8 Beginner classes. These 50 videos will cover an introduction to Drupal, installation of Drupal 8, taxonomy, modules, themes, managing people and reports, and site management.

50 Intermediate Videos: Glowhost

Glowhost is based near us in Florida and has been providing rock-solid hosting since 2002. Glowhost customers include Snopes.com, United States Postal Service, Keller Williams Realty and PBS (Public Broadcasting Service).

Glowhost offers datacenters all over the world and has an impressive dedication to environmentally-friendly hosting.

We're delighted that Glowhost is sponsoring our Intermediate and site-building classes. These videos will follow the Beginner class and will really show you how to build Drupal 8 sites. The Intermediate classes will cover Panels, Display Suite, Pathauto and other important modules you'll need to launch real-life sites.

50 Upgrading Videos: Commercial Progression

Commercial Progression builds and supports Drupal sites. Based in Michigan, they have worked with the National Geographic Channel and the University of Michigan amongst many others.

Commercial Progression offers a service called DrupalCare: their team will monitor, protect and maintain your Drupal site. They also record the Hooked on Drupal podcast.

We're grateful that Commercial Progression will sponsor 50 upgrading videos:

  • Upgrading a site from Drupal 7 to 8
  • Upgrading a module from Drupal 7 to 8
  • Upgrading a theme from Drupal 7 to 8

50 Performance Videos: InMotion Hosting

Based in Virginia and California, InMotion Hosting has been an excellent hosting company since 2001. Do you ever get the feeling that your host's staff doesn't know what they're doing? If so, you should try InMotion. No one is allowed to help InMotion customers unless they have a full month of internal training on everything to do with their hosting. InMotion has several team members dedicated to filling their documentation area with helpful resources.

We're pleased to announce that InMotion is sponsoring the Drupal 8 site performance and maintenance videos. The 50 videos supported by InMotion will cover these topics:

  • Drupal installation in different environments
  • Drupal core, module and theme updates
  • Speeding up your Drupal site
  • Drupal backups
  • Moving a Drupal site
  • Keeping your Drupal site secure

Other Thank Yous

A big thank you goes to the Drupal Association which supports the Drupal project in 1001 ways. Their staff are just recovering from running an excellent DrupalCon LA and they are still raising money to help finish Drupal 8: Crowdrise.com/d8accelerate.

Thanks also to the individual backers of the Kickstarter project. We'll start the process of shipping t-shirts and rewards to you next week.

Specifics of the 200 free Drupal videos

  • Where will the videos be released? Primarily on YouTube. You'll also find the videos here on our site and on the sponsors' websites.
  • When will the videos be released? The Beginner videos will be published before Drupal 8 is released. Because the other video series rely more heavily on contributed modules, the time frame for releasing these videos will coincide with the release of the contributed modules.
  • What's the benefit for me if I'm an OSTraining member? You get these videos much more quickly than you would have otherwise. Plus, you'll get to see a non-sponsored version of the videos.
Jul 02 2015
Jul 02

I wrote this as a comment in response to Dries' post about the Acquia certification program - I thought I'd share it here too. I've commented there before.

I've also been conflicted about certifications. I still am. And this is because I fully appreciate the pros and cons. The more I've followed the issue, the more conflicted I've become about it.

My current stand is this: certifications are a necessary evil. Let me say a little on why that is.
I know many in the Drupal community are not in favour of certification, mostly because it can't possibly adequately validate their experience.

It also feels like an insult to be expected to submit to external assessment after years of service contributing to the code-base, and to the broader landscape of documentation, training, and professional service delivery.

Those in the know, know how to evaluate a fellow Drupalist. We know what to look for, and more importantly where to look. We know how to decode the secret signs. We can mutter the right incantations. We can ask people smart questions that uncover their deeper knowledge, and reveal their relevant experience.

That's our massive head start. Or privilege. 

Drupal is now a mature platform for web and digital communications. The new challenge that comes with that maturity is that non-Drupalists are using Drupal, and non-specialists are tasked with ensuring sites are built by competent people. These people don't have time to learn what we know. The best way we can help them is to support some form of certification.

But there's a flip side. We've all laughed at the learning curve cartoon about Drupal. Because it's true. It is hard. And many people don't know where to start. Whilst a certification isn't going to solve this completely, it will help to solve it, because it begins to codify the knowledge many of us take for granted.

Once that knowledge is codified, it can be studied. Formally in classes, or informally through self-directed exploration and discovery.

It's a starting point.

I empathise with the nay-sayers. I really do. I feel it too. But on balance, I think we have to do this. But even more, I hope we can embrace it with more enthusiasm.

I really wish the Drupal Association had the resources to run and champion the certification system, but the truth is, as Dries outlines above, it's a very time-consuming and expensive proposition to do this work.

So, Acquia - you have my deep, albeit somewhat reluctant, gratitude!

:-)

Thanks Dries - great post.

cheers,
Donna
(Drupal Association board member)

Jul 02 2015
Jul 02

Becca Goodman (onelittlebecca), IT Specialist for Fish and Wildlife Refuges and Brad MacDonald (bjmac), Senior Project Manager for Mediacurrent join Mike to talk about Drupal GovCon, DA revenue, new verbs, and two new exotic animals!

Interview

Three Stories

Sponsors

Picks of the Week

Upcoming Events

Follow us on Twitter

Five Questions (answers only - Becca; Brad)

  1. Run marathons; Tae-kwon-do
  2. Skype, D8 development environment; Text Expander
  3. 100 miler; a production launch of a Drupal 8 site
  4. Serval; Macaque monkey
  5. DrupalCon Portland; 2008 Drupal project

Intro Music

"When you Install Drupal 8" - from the DrupalCon Los Angeles pre-note performed by Campbell Vertesi and Dries Buytaert (starts at 15:35).

Subscribe

Subscribe to our podcast on iTunes or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

Jul 01 2015
Jul 01

We hear it all the time:

Why do you recommend a 6 hour budget for a simple integration? Here's an embed widget right here -- if this were WordPress I could do it myself!

Well, in Drupal you can do it yourself, exactly the same way you might in WordPress. Add a block, use an input format that doesn't strip out Javascript, paste in your code, put the block where you want it on your page, and away you go.

It's pretty spectacular what you can do by mashing up a bunch of widgets on a webpage these days. There are tens of thousands of "software as a service" providers that provide embeddable widgets you can put on your site, and have all kinds of cool things: Twitter feeds, Google Ads, Facebook Likes, analytics (a.k.a spying on your visitors), maps, upcoming event widgets, data collection forms, CRM capture forms, live cam feeds, games -- the list is endless.

But... this entire approach has a dark side. Several of them, in fact!

It hurts your page load speed, and potentially loses visitors

Google takes page load speed into account when indexing your page. Very slow-to-load pages are penalized in search engines, largely because they've learned that people don't stick around waiting for pages to load.

Image: "Sample page load from already-primed cache"

We check the page load times for our maintenance customers each month, checking times for both an "empty cache" (e.g. the first time a visitor loads a page from your site) and a "primed cache" (e.g. subsequent page views). We aim to get the empty cache load time under 6 seconds, and a primed cache load time under 1 second if possible, through a combination of tuning the server, minimizing the number of requests done on the page before it renders, and other optimizations. In the page loads of this particular site, you can see we're a very long way from those targets!

Image: "Empty cache page load"

In this case, the page is full of images, and the server could use some tuning to deliver those faster. But a big part of the primed cache problem is how many different external services are getting loaded on the page. This is a substantial e-commerce site doing a healthy business, and it has scripts from 9 external services embedded on the page. Many of these load additional pages inside the main page, which then load more scripts and images... and usually all of this needs to load before the visitor sees the page as it's designed to look!

2 minute load times? I wonder how much business has been lost simply because people don't want to wait that long.

This problem gets far worse on mobile devices, with less powerful browsers, less RAM, sometimes spottier and slower connections...

Sometimes it breaks your site completely

We've had at least 4 or 5 calls in the past few months from customers saying "My site is down! Please get us back up right away!" And it turns out their site is just fine -- but some embedded widget they've added to their page is not loading, and it's blocking the rendering of the page entirely. And these aren't even unusual services -- 3 cases in particular come to mind from big, well-known services: Disqus, Google AdWords, and a mobile app vendor.

It might not even be the remote service's fault, it could just be a network issue. But the simple fact is, by using a bunch of different embedded apps on your site, you're making your site dependent on all of them! If any one of them has a problem on a particular day, it can bog down your site entirely, and affect your business!

Do you trust all the sources of these widgets?

Anytime you use code from other sources, you're putting the security of your site at risk. Even reputable, widely used services can introduce malware onto your site. For example, last September there was malware loaded by Google ads that may have been shown to millions of computers, via sites that used AdWords, and that's not the first time. And these are major services run by major companies with a strong security record -- if they can't keep their embeds secure, can you really trust all those other services that don't have those resources?

When you load any script from a server that's not yours, you're trusting that that script is not going to hijack data sent through your page. If you run an e-commerce site, this means they could potentially sniff data like your customer's credit card numbers being typed on the checkout page, or their passwords entered into a login form. Even your password!

You can download and review exactly what the script is doing, but if you don't control the source of that script, it could get changed at some point and you'll never know.

It may not "degrade gracefully"

More and more people are becoming sensitive to their online privacy, and running ad-blockers/script blockers. Have you checked out what happens if you visit your site in a browser configured to block all 3rd party scripts? If you just embed a bunch of widgets on the page, it's quite likely that the experience is very bad... so you're forcing your visitors to give up their privacy to use your service. Do you want to be sensitive to your customers' privacy, or lose that business?

Argh! Enough already! What can we do instead?

First of all, you should probably have a reason for each one of the widgets you want to use on the page. Here is how I would go through, analyze, and decide upon a course of action for each one...

Do I really need this widget?

If you don't need it, ditch it. There's enough noise out there already. Think about what the goals are for each type of user on your site -- prospective customer, prospective employee, current customer, current employee, content editor, business owner, administrator, etc. If a widget does not support the goals of any of your users, get rid of it.

Can I provide a behind-the-scenes integration, instead of using the widget?

If the service in question has a straightforward API, it's often much better to have Drupal send information through the API, rather than having the user's browser load a bunch of uncontrolled Javascript.

For many data collection forms, this kind of approach can solve all of the issues identified above -- the entire interface gets aggregated and can use Drupal's caching to load quickly, nothing visibly breaks if the service is unavailable, you control all the data flow and don't expose anything unnecessary to the 3rd party, and done right it degrades gracefully.
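For instance, in Drupal 7 a data collection form can relay its values server-side with drupal_http_request(); in this sketch, the endpoint URL, module name, and field names are all hypothetical:

```php
<?php
/**
 * Hypothetical submit handler that relays form values to a 3rd-party API
 * from the server, so no external JavaScript ever reaches the visitor.
 */
function mymodule_signup_form_submit($form, &$form_state) {
  $response = drupal_http_request('https://api.example.com/subscribers', array(
    'method' => 'POST',
    'headers' => array('Content-Type' => 'application/json'),
    'data' => drupal_json_encode(array('email' => $form_state['values']['email'])),
    'timeout' => 5,
  ));
  if (isset($response->error)) {
    // Fail soft: log the problem instead of breaking the page for the visitor.
    watchdog('mymodule', 'Subscriber API error: @error', array('@error' => $response->error), WATCHDOG_WARNING);
  }
}
```

If the remote service is down, the visitor still gets a working page; the failure shows up in your logs instead of in their browser.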

This is why our first response when asked to integrate something is to go check to see if we already have a Drupal module available that we can just turn on to do the integration, or whether we need a few hours to create something new or custom.

This tends to be much more "the Drupal Way", compared to pasting in an embed code!

Can I "do it in Drupal?"

One step further than doing a server-side integration, is to not use the 3rd party service at all, and simply roll your own solution in Drupal.

For really complex integrations (like SalesForce) this can cost less than doing the integration, if you only need the other service for a couple of scenarios -- Drupal is a very powerful platform that can be easily configured to do a lot of CRM-kinds of activities. And there's a huge ecosystem of sometimes pretty complete solutions for event management, mapping, e-commerce, and much more, which put you in more of an "owner" role than a "renter" one.

If your needs are complex, this will cost more than using another already-existing service -- but you gain the benefit of complete control over the resulting solution.

How to embed "The Drupal Way"

Sometimes it does make sense to embed scripts and code from other sites. You don't necessarily want to run your own ad server, if Google can fill all your ad slots for you, and that's a big source of your revenue. Lots of specific services for loading fonts, analytics, and more really end up necessitating 3rd party scripts.

Even in those cases, manually embedding code snippets leads to all the issues we identified. By using a Drupal module (either one publicly available on Drupal.org, or a custom one we create for you) you can at least minimize some of those:

  • With many of these services, you can download the script to your server where it can be aggregated with the rest of your Javascript and not impact page load speed
  • If it's on your server, it's not necessarily subject to the same outages
  • If it's on your server you can detect changes to the script and trigger new audits
  • If you manage where and how the script is loaded, you can make it run after the page is rendered, and degrade gracefully
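
As a sketch of those last two points, a tiny Drupal 7 module (the module name and vendor URL are made up) could load a third-party script in the footer, deferred, so the page renders even if the vendor is slow:

```php
<?php
/**
 * Implements hook_page_alter().
 *
 * Loads a hypothetical vendor widget script after the main content,
 * instead of pasting its embed snippet into a block.
 */
function mymodule_page_alter(&$page) {
  drupal_add_js('https://widgets.example.com/widget.js', array(
    'type' => 'external',
    'scope' => 'footer', // Render the page first, widget second.
    'defer' => TRUE,     // Don't block rendering while the script downloads.
  ));
}
```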

These changes aren't always possible, but by bundling the embed into a Drupal module you can almost always improve the amount of control you have over the user experience if something goes awry.

So... Yes, you can just paste in an embed code, really, really quickly. But do you really want to?

Jul 01 2015
Jul 01

Identity theft and site compromises are all-too-common occurrences -- it seems a day rarely goes by without a news story detailing the latest batch of user passwords which have been compromised and publicly posted.

Passwords were first used in the 1960s when computers were shared among multiple people via time-sharing methods. Allowing multiple users on a single system required a way for users to prove they were who they claimed to be. As computer systems have grown more complex over the last 50 years, the same password concept has been baked into them as the core authentication method. That includes today’s Web sites, including those built using Drupal.

Why Better Protection Is Needed

Today, great lengths are taken to both protect a user’s password, and to protect users from choosing poor passwords. Drupal 7, for example, provided a major security upgrade in how passwords are stored. It also provides a visual indicator as to how complex a password is, in an attempt to describe how secure it might be.

The password_policy module is also available to enforce a matrix of requirements when a user chooses a password.

Both of these methods have helped to increase password security, but there’s still a fundamental problem. A password is simply asking a user to provide a value they know. To make this easier, the vast majority of people reuse the same password across multiple sites.

That means that no matter how well a site protects a user's password, any other site that does a poor job could provide an attacker all they need to compromise your users' accounts.

How Multi-Factor Authentication Works

Multi-Factor authentication is one way of solving this problem. There are three ways for a person to prove he is who he claims to be. These are known as factors and are the following:

  • Something you know - typically a password or passphrase
  • Something you have - a physical device such as a phone, ID, or keyfob
  • Something you are - biometric characteristics of the individual

Multi-factor authentication requests two or more of these factors. It makes it much more difficult for someone to impersonate a valid user. With it, if a user’s username and password were to be compromised, the attacker still wouldn’t be able to provide the user’s second form of authentication.

Here, we’ll be focusing on the most common multi-factor solution, something you know (existing password) and something you have (a cell phone).

This is the easiest method of multi-factor authentication, and is also known as two-factor authentication (TFA) because it uses two out of three factors. It's already used by a large number of financial institutions, as well as large social websites such as Google, Facebook, and Twitter. Acquia implemented TFA for its users in 2014, as discussed in this blog post: Secure Acquia accounts with two-step verification and strong passwords.

How To Do It

A suite of modules are available for Drupal for multi-factor authentication. The Two-Factor Authentication module provides integration into Drupal’s existing user- and password-based authentication system. The module is built to be the base framework of any TFA solution, and so does not provide a second factor method itself. Instead the TFA Basic plugins module provides three plugins which are Google Authenticator, Trusted Device, and SMS using Twilio. TFA Basic can also be used as a guide to follow when creating custom plugins. You can find the documentation here: TFA plugin development.

You’ll find that it’s easy to protect your site and its users with TFA, and given its benefits, it’s a good idea to start now.

But before you do, check out the TFA documentation. Acquia Cloud Site Factory users can find documentation for configuring two-factor authentication for their accounts here.

For more advice, check out TFA tips from Drupal.org.

Jul 01 2015
Jul 01

You need to show a list of content on a page. You know that you need to use Views for it. But you are wondering whether to create a Page View or a Block View. Here is a simple question you need to ask yourself to decide.

What is the purpose of the webpage where you will be showing the view?

If the purpose of the page is to show a list of content, i.e. the view itself, then create a page view. If it is to show something else, and the view is merely something in addition, e.g. in the sidebar or below the main content, then make it a block view and place it on that page. The reason is that the metadata of the page, e.g. page title, metatag keywords/description, etc., should be related to the main content of the page for SEO purposes. Creating a page view ensures that the page title and metatags (if you are using the Metatag module) on the webpage will be derived from the view itself. If you are embedding a block view on a page, the page title and metatags will come from the main content, e.g. the node or user profile.

This is assuming that you are not using Panels. If you are using Panels, then create a block view and embed it in a Panel page. 

Jul 01 2015
Jul 01

As professionals in the web tech sector, we know that there is no one set answer to that question. And yet, we really don't want to be spending our time having to explain over and over to prospects why we can't just answer the question.

We could provide an analogy and respond by saying, "That's like asking a random custom home builder how much it's going to cost to build a house." We could continue by explaining that the builder will need to know the location of the house, the square footage of the house, the desired finishes on the house, etc.

Even then, we might still meet with resistance about why it's so expensive to have a website built. It's so much easier for a prospect to understand the expense if s/he understands just how much work it really takes to build a good website.

Well, I've got great news for you lucky readers of this post. I've created a Udemy course called Start Smart! Clearly Define Your Website. The course walks students step-by-step through the process of answering all the necessary questions, and culminates with the student creating a website definition document that allows you to give an accurate estimate. By completing the course, the student will get firsthand knowledge of just how much work goes into even creating the document. S/he will see how much time and care s/he has invested. Your job of closing the sale will become tremendously easier.

Here's a coupon to access the course for just $10.

Do you think this will be a useful tool for you in your sales cycle?

What strategies have you been using to this point?

How have those strategies worked?

Jul 01 2015
Jul 01

Earlier this year, we launched a new site for the ACLU. The project required a migration from Drupal 6, building a library of interchangeable page components, complex responsive theming, and serious attention to accessibility, security and privacy. In this post, I’ll highlight some of the security and privacy-related features we implemented.

Privacy

As an organization, the ACLU has a strong commitment to protecting individual privacy, and we needed to translate their passion for this issue to their own website. This meant meeting their high standards at a technical level.

The ACLU ensures that technical innovation doesn’t compromise individual privacy, and most of our work around user privacy involved ensuring that third-party services like Facebook and YouTube weren’t accessing user data without their consent.

Social sharing

Facebook “Like” buttons on the site offer a quick way for visitors to follow ACLU content on their Facebook feed. But even if you don’t click to “Like” a page, the tracking scripts behind them – coming straight from Facebook – log your behavior and make note of your visit there. That data can be used for targeted advertising, and the data itself is also a sellable product. Because these buttons are all over the web, Facebook can piece together quite a bit of your browsing history, without your knowledge.

To prevent this from happening to ACLU site visitors, we hooked up a jQuery plugin called Social Share Privacy to make sure that visitors who want to “Like” ACLU can do so while others can avoid Facebook’s data tracking. The plugin initially loads a disabled (greyed-out) button for the Facebook “Like”. If the visitor wants to use the button, they click once to enable it, which then loads Facebook’s iframe, and then a second time to “Like” the page.

ACLU.org Facebook button

The Social Share Privacy jQuery plugin keeps Facebook from tracking you without your consent.

A nice side effect is a faster page load for everyone since we don’t have to load content from Facebook on the initial page render.

Video playback

YouTube and other video sharing sites present a similar privacy problem – the scripts used to embed videos can also dig into visitor data without their consent.

MyTube video embed

A MyTube video embed on ACLU.org.

MyTube is a script that was written by the Electronic Frontier Foundation as a Drupal 5 module, then updated as a Drupal 6 module by students in the Ohio State University Open Source Club. For the ACLU site, we worked on porting the module to Drupal 7 and updating various parts of the code to work with updated APIs (both on the Drupal end and the video host end).

Similar to the Social Share Privacy plugin, MyTube initially loads a static thumbnail image from the embedded video, and to play the video, the visitor gives it an extra click, opting-in to run code from the third-party site.

If you’re interested in helping complete the Drupal 7 version of the module, take a look at some of the patches our team submitted, and test them out!

Security

Due to the ACLU’s high profile and involvement with controversial topics, security was a major concern throughout the development process. While we can’t go into too much detail here (because, well, security), here are a few steps we took to keep the site safe.

  1. Scrubbed database back-ups. When developers copy down database instances for local development, sensitive information – such as user emails and passwords – is removed. This is made pretty easy using Drush sql-sync-pipe.
  2. Secure hosting on Pantheon with controlled development environments. In addition, though our development team can make code changes, only a handful of authorized ACLU web managers can actually deploy changes to their live site.
  3. The site runs entirely over SSL, enforced via HSTS headers.
  4. We only load assets (CSS/JS/images/fonts/embeds/iframes/etc.) from approved sources (via Content Security Policy headers). This reduces XSS risks in modern browsers. 
  5. The previous version of the site had various “action” tools where visitors could log in to sign petitions and send letters to their elected representatives. During the redesign this functionality was moved to a separate site, on a separate codebase, running on separate infrastructure. Any site that allows anonymous visitors to create accounts is, by definition, more susceptible to attack. By splitting into two sites, a successful exploit of one web property will not affect the other.
  6. All custom code was reviewed by multiple members of the Advomatic team, and Tag1 Consulting performed an additional security review, in which our codebase received excellent marks and a glowing recommendation.
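As an illustration of points 3 and 4 above, HSTS and Content Security Policy are both plain response headers, and can be set with Apache's mod_headers. The policy values below are placeholders for illustration, not the ACLU's actual configuration:

```apache
# Enforce HTTPS for a year, including subdomains (HSTS)
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains"

# Only load assets from approved sources (illustrative values)
Header set Content-Security-Policy "default-src 'self'; script-src 'self' https://www.youtube.com; img-src 'self' data:"
```

Browsers that honor these headers will then refuse plain-HTTP connections to the site and block assets from unapproved origins.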

When your organization champions a cause, all your activities should embody it, including your website. The ACLU takes these standards very seriously, and so do we. Take a look at some of our other projects to see how we worked with nonprofits on sites that reflect their values. 

You should also check out

Jul 01 2015
Jul 01

Yesterday something significant occurred, http://brandywine.psu.edu launched on the new Polaris 2 Drupal platform. And soon the Abington Campus web site will move to the same platform. And perhaps many more.

Ten years ago this would not have been possible. Not because of the technology but because of the landscape and attitude here at Penn State. Words like 'portal' and 'content management system' were perceived as negatives, as things to avoid, as poorly implemented technologies.

That has changed.

One could argue that moving the Penn State home page and new site to Drupal was the significant event, but I was not convinced. That change could have been an anomaly, a lack of other, better options, or just pure luck. That's not to say that a number of people in the Penn State Drupal community didn't put a great deal of time and effort into presenting Drupal as a viable option; but once that argument was presented and accepted, the process to actually create the site was... let's say byzantine. So in my mind moving www.psu.edu to Drupal, while radical and important, did not 'count'.

So yesterday's launch of the Brandywine web site confirms not only the success of Drupal at Penn State, but also a change in mindset and attitudes at a much higher and broader level at the University. Additional possibilities for the use of the Polaris 2 platform may be in the works; hopefully we will learn more about those soon.

Perhaps there will also be a Polaris 3....

Jul 01 2015
Jul 01

One of the options in Nittany Vagrant is to build a local, development version of an existing Drupal site - copying the files and database, then downloading them to the Vagrant VM.  It is pretty straightforward, but there is the occasional trouble spot.

Here is a short video of how to do it.

Jul 01 2015
Jul 01

When continuing development of a web site, big changes occur every so often. One such change that may occur, frequently as a result of another change, is a bulk update of URLs. When this is necessary, you can greatly improve the response time experienced by your users—as they are redirected from the old path to the new path—by using a handy directive offered in Apache's mod_rewrite called RewriteMap.

At Agaric we regularly turn to Drupal for its power and flexibility, so one might question why we didn't leverage Drupal's support for handling redirects. When we see an opportunity for our system to respond as early as it can, it is worth taking: Apache handles redirects itself, making it entirely unnecessary to hand off to PHP, never mind bootstrapping Drupal and looking up a redirect record in a database, just to tell a browser (or a search engine) to look somewhere else.

There were two conditions that made RewriteMap a great candidate here. For one, there will be no changes to the list of redirects once they are set: these are for historical purposes only (the old URLs are no longer exposed anywhere else on the site). Also, thanks to the substitution capability afforded by RewriteMap, we could handle the full set of hundreds of redirects with a single RewriteRule, which made for a fitting and concise solution.

So, what did we do, and how did we do it?

We started with an existing set of URLs that followed the pattern: http://example.com/user-info/ID[/tab-name]. Subsequently we implemented a module on the site that produced aliases for our user page URLs. The new pattern for the page was then (given exceptions for multiple J Smiths, etc. via the suffix): http://example.com/user-info/firstname-lastname[-suffix#][/tab-name]. The mapping of ID to firstname-lastname[-suffix#] was readily available within Drupal, so we used an update hook to write out the existing mappings to a file (in the Drupal public files folder, since we know that's writable by Drupal). This file (which I called 'staffmapping.txt') is what we used for a simple text-based rewrite map. Sample output of the update hook looked like this:

# User ID to Name mapping:
1 admin-admin
2 john-smith
3 john-smith-2
4 jane-smith

The format of this file is pretty straightforward: comments can be started on any line with a #, and the mapping lines themselves are composed of {lookupValue}{whitespace}{replacementValue}.

To actually consume this mapping somewhere in our rules, we must let Apache know about the mapping file itself. This is done with a RewriteMap directive, which can be placed in the server config or else inside a VirtualHost directive. The format of the RewriteMap directive looks like this: RewriteMap MapName MapType:MapSource. In our case, the file is a simple text file mapping, so the MapType is 'txt'. The resulting string added to our VirtualHost section is then: RewriteMap staffremap txt:/path/to/staffmapping.txt. This directive makes the rewrite mapping file available under the name "staffremap" in our RewriteRules. There are other MapTypes as well, including ones that use random selection for the replacement values from a text file, a hash map rather than a plain text file, an internal function, or even an external program or script to generate replacement values.

Now it's time to actually change incoming URLs using this mapping file, providing the 301 redirect we need. The rewrite rule we used, looks like this:

RewriteRule ^user-detail/([0-9]+)(.*) /user-detail/${staffremap:$1}$2 [R=301,L]

The initial argument to the rewrite rule identifies which incoming URLs this rule applies to. This is the string: "^user-detail/([0-9]+)(.*)". This particular rule looks for URLs starting with (signified by the special character ^) the string "user-detail/", followed by one or more digits: ([0-9]+), and finally anything else that might appear at the end of the string: "(.*)". There's a particular feature of regex being used here as well: each of the search terms in parentheses is captured (or tagged) by the regex processor, which then provides references that can be used in the replacement string portion. These are available as $<captured position>—so the first value captured by parentheses is available in "$1"—this would be the user ID, and the second in "$2"—which for this expression would be anything else appearing after the user ID.

Following the whitespace is our new target URL expression: "/user-detail/${staffremap:$1}$2". We're keeping the beginning of the URL the same, and then following the expression syntax "${rewritemap:lookupvalue}", which in our case is: "${staffremap:$1}" we find the new user-name URL. This section could be read as: take the value from the rewrite map called "staffremap", where the lookup value is $1 (the first tagged expression in the search: the numeric value) and return the substitution value from that map in place of this expression. So, if we were attempting to visit the old URL /user-detail/1/about, the staffremap provides the value "admin-admin" from our table. The final portion of the replacement URL (which is just $2) copies everything else that was passed on the URL through to the redirected URL. So, for example, /user-detail/1/about includes the /about portion of the URL in the ultimate redirect URL: /user-detail/admin-admin/about

The final section of the sample RewriteRule is for applying additional flags. In this case, we are specifying the response status of 301, and the L indicates to mod_rewrite that this is the last rule it should process.
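If you want to sanity-check a rule like this before touching the server config, the lookup-and-substitute behaviour is easy to model in a few lines of Python. This is a rough simulation of the rule and map above, not what mod_rewrite actually executes:

```python
import re

# The same mapping Apache reads from staffmapping.txt
staffremap = {"1": "admin-admin", "2": "john-smith",
              "3": "john-smith-2", "4": "jane-smith"}

def rewrite(path):
    """Model: RewriteRule ^user-detail/([0-9]+)(.*) /user-detail/${staffremap:$1}$2"""
    match = re.match(r"^user-detail/([0-9]+)(.*)", path)
    if not match:
        return path  # rule does not apply; pass the path through unchanged
    user_id, rest = match.group(1), match.group(2)
    return "/user-detail/" + staffremap[user_id] + rest

print(rewrite("user-detail/1/about"))  # /user-detail/admin-admin/about
```

Here $1 corresponds to match.group(1) (the numeric user ID used as the lookup value) and $2 to match.group(2) (everything after it, carried through to the redirect target).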
That's basically it! We've gone from an old URL pattern to a new one, with a redirect mapping file and only two directives. For an added performance perk, especially if your list of lookup and replacement values is rather lengthy, you can easily replace your text table file (type txt) with a hash map (type dbm) that Apache's mod_rewrite also understands, using a quick command and a directive adjustment. Following our example, we'll first run:

$> httxt2dbm -i staffremap.txt -o staffremap.map

Now that we have a hash map file, we can adjust our RewriteMap directive accordingly, changing the type to dbm, and of course updating the file name, which becomes:

RewriteMap staffremap dbm:/path/to/staffremap.map

RewriteMap substitutions provide a straightforward, high-performance method for pretty extensive enhancement of RewriteRules. If you are not familiar with RewriteRules generally, at some point you should consider reviewing the Apache documentation on mod_rewrite—it's worthwhile knowledge to have.

Jul 01 2015
Jul 01

A little over a year ago we launched the Acquia Certification Program for Drupal. We ended up the first year with close to 1,000 exams taken, which exceeded our goal of 300-600. Today, I'm pleased to announce that the Acquia Certification Program passed another major milestone with over 1,000 exams passed (not just taken).

People have debated the pros and cons of software certifications for years (including myself) so I want to give an update on our certification program and some of the lessons learned.

Acquia's certification program has been a big success. A lot of Drupal users, from the Australian government to Johnson & Johnson, require Acquia Certification. We also see many of our agency partners use the program as a tool in the hiring process. While a certification exam cannot guarantee someone will be great at their job (e.g. we only test for technical expertise, not for attitude), it does give a frame of reference to work from. The feedback we have heard time and again is that the Acquia Certification Program is tough, but fair; validating skills and knowledge that are important to both customers and partners.

We were also recognized in the Certification Magazine Salary Survey as having one of the most desired credentials to obtain. For a first-year program to be identified among certification leaders like Cisco and Red Hat speaks volumes about the respect our program has established.

Creating a global certification program is resource intensive. We've learned that it requires the commitment of a team of Drupal experts to work on each and every exam. We now have four different exams: developer, front-end specialist, back-end specialist, and site builder. It roughly takes 40 work days for the initial development of one exam, and about 12 to 18 work days for each exam update. We update all four of our exams several times per year. In addition to creating and maintaining the certification programs, there is also the day-to-day operations for running the program, which includes providing support to participants and ensuring the exams are in place for testing around the globe, both online and at test centers. However, we believe that effort is worth it, given the overall positive effect on our community.

We also learned that benefits are an important part to participants and that we need to raise the profile of someone who achieves these credentials, especially those with the new Acquia Certified Grand Master credential (those who passed all three developer exams). We have a special Grand Master Registry and look to create a platform for these Grand Masters to help share their expertise and thoughts. We do believe that if you have a Grand Master working on a project, you have a tremendous asset working in your favor.

At DrupalCon LA, the Acquia Certification Program offered a test center at the event, and we ended up with 12 new Grand Masters by the end of the conference. We saw several companies step up to challenge their best people to achieve Grand Master status. We plan to offer testing at DrupalCon Barcelona as well, so take advantage of the convenience of the on-site test center and the opportunity to meet and talk with Peter Manijak, who developed and leads our certification efforts, with me, or with an Acquia Certified Grand Master or two, about Acquia Certification and how it can help you in your career!

Jul 01 2015
Jul 01

We can easily check out code from our git repositories for our local, development, and staging servers.  We can get a database from the live site through Backup and Migrate, drush, or a number of other ways.  But getting the files of the site, the images, PDFs, and everything else in /sites/default/files, is not at the top of most developers' lists.  In recent versions of Backup and Migrate, you can export the files, but often this can be a huge archive file.  There is an easier way.

The Stage File Proxy module saves the day by sending requests for files to the live server if they do not exist yet in your local environment.  This saves you space in your non-production environment, since it only grabs files from the pages you visit.  Great for those of us who have dozens of sites on our local machines.

As simple as can be, it gets the files you need on your local server, as you need them.  No more navigating broken-looking dev sites.  This will get your environment looking as it should so you can concentrate on the task at hand.
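The idea behind the module boils down to a simple fallback: serve the file from disk if it is there, otherwise go to the production origin for it. Here is a rough sketch of that logic in Python; the function and paths are illustrative, not the module's actual code:

```python
import os

ORIGIN = "http://www.yoursitename.com"  # hypothetical production origin

def file_source(local_root, relative_path):
    """Return where a requested file should come from: local disk if present,
    otherwise the equivalent URL on the production origin."""
    local_path = os.path.join(local_root, relative_path)
    if os.path.exists(local_path):
        return local_path
    # Not downloaded yet: Stage File Proxy would fetch (or redirect to) this URL.
    return ORIGIN + "/sites/default/files/" + relative_path

print(file_source("/nonexistent-local-root", "images/logo.png"))
```

Because the fallback only fires for files that are actually requested, your local environment fills up lazily with just the assets from pages you visit.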

Installing and Configuring Stage File Proxy


// Download Stage File Proxy
drush dl -y stage_file_proxy
// Enable Stage File Proxy
drush en -y stage_file_proxy
// Set the origin, or where the files live, the production site
drush variable-set stage_file_proxy_origin "http://www.yoursitename.com"

You can also set it in your settings.php file, helpful for those of us who use different settings files per environment.


$conf['stage_file_proxy_origin'] = 'http://www.yoursitename.com'; // no trailing slash

Stage File Proxy has great documentation, and if Drush or settings files aren't your thing, configuration can also be set in the admin UI at /config/system/stage_file_proxy, or Admin Menu > Configuration > System > Stage File Proxy.

Stage File Proxy Settings

This module is for use on your local, development, and staging servers, and should be used by anyone that works in multiple environments.  It should be disabled on your production/live site.

Jul 01 2015
Jul 01

After I posted a case study last week I had a number of readers ask me if they could try a demo and see how it works. There is no try-out demo yet but in the meanwhile I produced a video that demonstrates the basic controls:

[embedded content]

If you have any questions about integration and the open source library that powers it, feel free to contact me or comment!

Jul 01 2015
Jul 01
Wednesday, July 1, 2015 - 11:53

I did another video the other day. This time I've got a D7 and D8 install open side by side, and compare the process of adding an article.
