Mar 24 2018

Let's see how to update your Drupal site between 8.x.x minor and patch versions: for example, from 8.1.2 to 8.1.3, or from 8.3.5 to 8.4.0. I hope this helps you.

  • If you are upgrading to Drupal version x.y.z

           x -> is known as the major version number

           y -> is known as the minor version number

           z -> is known as the patch version number.

  • Make sure you have a backup of all files, directories and the database, because even a minor change can crash your site.

  • If you have any issues while updating Drupal core, you can check the Drupal support page.

          Minor and Patch version updates:

  • Log in as an administrator and check whether any core update is available under Administration -> Reports -> Available updates (admin/reports/updates).

  • If you need to update Drupal core, download the latest Drupal version from https://www.drupal.org/project/drupal and extract it.

  • Sometimes we may want to update a live site directly (though this is not recommended). In that case, the first thing to do is put the site into maintenance mode: go to Administration -> Configuration -> Development -> Maintenance mode, enable the "Put site into maintenance mode" checkbox and save the configuration.

  • Remove the core and vendor directories from the site's root directory and replace them with the ones from the release you downloaded.

  • Copy and replace autoload.php, composer.json, composer.lock, index.php, LICENSE.txt, README.txt, robots.txt, update.php and web.config with the latest files.

  • settings.php may also have changed between releases, so compare it with the new default.settings.php and carry over any new settings before deploying your site.

  • Run update.php by visiting http://www.example.com/update.php (replace www.example.com with your domain name). This applies any pending database updates.

    • If you are unable to access update.php do the following:

      • Find the line $settings['update_free_access'] = FALSE; in settings.php file and change it to $settings['update_free_access'] = TRUE;

      • Once the upgrade is done, $settings['update_free_access'] must be reverted to FALSE.

  • Go to Administration -> Reports -> Status report (admin/reports/status). Verify that everything is working as expected.

  • To take the site live, go to Administration -> Configuration -> Development -> Maintenance mode. Disable the "Put site into maintenance mode" checkbox and save the configuration.

  • If you need any clarification, please refer to the link.
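The backup and file-replacement steps above can be sketched in shell. This is a hedged sketch: the site root is simulated with a temporary directory so the commands can run anywhere, and the database and Drush commands are shown only as comments because they depend on your environment.

```shell
# Simulate a Drupal site root for illustration; on a real site,
# point SITE at your actual Drupal root instead.
SITE=$(mktemp -d)
mkdir -p "$SITE/core" "$SITE/vendor" "$SITE/sites/default"
echo "<?php // db credentials" > "$SITE/sites/default/settings.php"

# 1. Back up the code before touching anything.
BACKUP=$(mktemp -d)
tar -czf "$BACKUP/code.tar.gz" -C "$SITE" .
# A database backup would be something like:
#   mysqldump -u dbuser -p dbname | gzip > "$BACKUP/db.sql.gz"
# or, with Drush:  drush sql-dump --gzip > "$BACKUP/db.sql.gz"

# 2. Replace core/ and vendor/ (and the root files listed above) with the
#    versions from the new release, then run the database updates via
#    update.php or:  drush updatedb -y && drush cache-rebuild

# Verify the backup actually contains the site settings.
tar -tzf "$BACKUP/code.tar.gz" | grep -q 'sites/default/settings.php' && echo "settings.php backed up"
```

If anything goes wrong after the replacement, restoring is just extracting the tarball back over the site root and re-importing the database dump.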

Comments


Hi,
This looks quite hard; I think we could do the update with Drush:
> drush pm-update // update Drupal core and contributed projects
> drush pm-update {module-name} // update a single module

Dec 19 2016

[{"title":"Drupal 8: Custom Block Creation programmatically ","body":"

In Drupal 8 Blocks are made up of two separate API. Block Plugin API, is a reusable API and Block Entity API, use for block placement and visibility control.<\/p>\n\n

Before we begin with custom Block module development. We assume that you\u2019ve better understanding of Plugin API<\/a> & Annotation-based plugins<\/a>.<\/p>\n\n

Creating a custom block require following steps:<\/strong><\/p>\n\n

  1. \n\t

    Create a block plugin using Annotations<\/p>\n\t<\/li>\n\t

  2. \n\t

    Extend the Drupal\\Core\\Block\\BlockBase class.<\/p>\n\t<\/li>\n<\/ol>

    In Drupal 8, We need to keep the keep our custom, \u00a0contributed module in root directory<\/p>\n\n

    modules\/custom<\/p>\n\n

    module\/contrib<\/p>\n\n

    Step 1:<\/strong> \u00a0An essential part of a Drupal 8 module, theme, or install profile is the .info.yml file (aka, \"info yaml file\") to store metadata about the project.<\/p>\n\n

    In Drupal 8, .info file changes to .info.yml. Old .info files have been converted to YAML.<\/p>\n\n

    Added name, description, core, package, dependencies, type (The type key, which is new in Drupal 8, is required and indicates the type of extension, e.g. module, theme or profile.<\/p>\n\n

    \"Image<\/p>\n\n

    Step 2:<\/strong> \u00a0We should follow the PSR-4 standard code for custom block(s) & that has to be placed into article\/src\/Plugin\/Block\/ and named based on the class it contains. If we're going to define the class ArticleBlock this file would be article\/src\/Plugin\/Block\/ArticleBlock.php<\/p>\n\n

    \"Image<\/p>\n\n

    Create a file ArticleBlock.php under modules\/custom\/article\/src\/Plugin\/Block folder structure<\/p>\n\n

    Annotation contains just the id and label:<\/p>\n\n

    1. \n\t

      The 'id' property in the annotation defines the unique, machine readable ID of your block.<\/p>\n\t<\/li>\n\t

    2. \n\t

      The 'admin_label' annotation defines the human readable name of the block that will be used when displaying your block in the admin interface.<\/p>\n\t<\/li>\n\t

    3. \n\t

      The 'Category' defines which section belongs to under block listing page.<\/p>\n\t<\/li>\n<\/ol>

      \"Image<\/p>\n\n

      The ArticleBlock extends BlockBase class. This class provides generic block configuration form, block settings and handling of user defined block visibility settings.<\/p>\n\n

      \"Image<\/p>\n\n

      Where are we displaying markup in our block ?<\/p>\n\n

      Save the file and enable the module. To enable a block visit \/admin\/structure\/block and click on \u201cplace block\u201d under one of the region. i\u2019m selecting \u201cSidebar Second\u201d for my visibility or search for your block \u201cArticle block\u201d click on \u201cplace block\u201d and configure it.<\/p>\n\n

      \"Image<\/p>\n\n

      \"Image<\/p>\n\n

      So, we have done with configuration. it\u2019s time to visit our page. as we set the block to\u00a0<front> page.<\/p>\n\n

      \"Image<\/p>\n\n

      \u00a0<\/p>\n\n

      Source code:<\/strong> https:\/\/github.com\/xaiwant\/drupal8-block<\/a><\/p>","meta_tags":"{\"description\":\"In Drupal 8 Blocks are made up of two separate API. Block Plugin API, is a reusable API and Block Entity API, use for block placement and visibility control. Before we begin with custom Block module development.\",\"keywords\":\"Drupal 8, Module Development, Block, Custom, Plugin API, Annotation-based plugins\"}","author":"Jaywant.Topno","image":" http:\/\/cms.valuebound.com\/sites\/default\/files\/default_images\/bg-header-small-5.jpg\n","module_link":"","created":"19 December, 2016"}]
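A minimal sketch of the plugin class the post describes. The block ID and markup here are illustrative assumptions; the real implementation is in the linked GitHub repository.

```php
<?php

namespace Drupal\article\Plugin\Block;

use Drupal\Core\Block\BlockBase;

/**
 * Provides an 'Article' block.
 *
 * @Block(
 *   id = "article_block",
 *   admin_label = @Translation("Article block"),
 *   category = @Translation("Custom")
 * )
 */
class ArticleBlock extends BlockBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    // The render array returned here is what the block displays.
    return [
      '#markup' => $this->t('Hello from the Article block!'),
    ];
  }

}
```

After enabling the module, the block shows up on the block listing page under the category given in the annotation.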

May 02 2016
Tags: Drupal, Technical

A walk through my first Platform experience with steps on how to avoid problems if you need to upload your local version into it. Lots of hints to get it right the first time!

If you are in a hurry and only need a recipe, please head to the technical part of the article, but I would like to start sharing a bit of my experience first because you might be still deciding if Platform is for you.

I decided to try Platform because a friend of mine needed a site. Due to several reasons I didn't want to host it on my personal server, but I didn't want to run a server for him either. I wanted to forget about maintaining the server, keeping it secure or doing upgrades to it.

So I started thinking about options for small sites:

The list is not a fair comparison in terms of what's being offered; the offerings are quite different. But I had used all of these before for different projects. The only one left to try was Platform.

So going over the above list with a fine-tooth comb:

  • I discarded RH OpenShift because I wanted a full solution, not just a way to run an application. The other options offer caching servers too, which speeds things up significantly.
  • Aberdeen: Used it before, works, but focused on the European market.
  • Acquia is easy to use and there was a comparable option, but the price was still higher.
  • Pantheon: I've used it and like it, but the pricing was a bit risky. The 25 USD option had traffic limits, and the risk of needing the 100 USD option was real.

Since I had been hearing very good things about Platform, and given the price/specs comparison and my appetite for new things, I decided to give it a try.

The Experience

Platform does not have a free option; the only way to try it is to pay for a site. So it was a bit of a jump from a cliff, and I even had a couple of rough moments while fighting to get the site online when I thought it wasn't the right choice. I must say I'm more used to sysadmin-for-dummies interfaces, and this is a programmer-oriented environment. In any case, as with the others, once you learn its quirks the offer balances out. Not having to deal with sysadmin stuff is great: you avoid charging your clients for support and OS upgrades and can focus on Drupal alone. They provide lots of documentation, which is quite useful, but somehow I failed to get things running using just that, so I had to lean a bit on their online support. The time to respond was reasonable, and support was really good.

I would compare their approach to RH OpenShift's platform offering. The price is a bit higher, but the final service is better. Along with hosting Drupal (or any other PHP app) they provide Solr search, a Redis cache and a CDN caching network. So it is a more complete PaaS solution. I haven't run a production Drupal on RH OpenShift, but given my experience I wouldn't risk running Drupal without a proper caching mechanism.

Any side effects? Well... Given that I'm more familiar with the Acquia and Pantheon services, I found the time you have to wait after every git push a bit annoying. Each time you push, your application needs to be packaged, configured and delivered, so there is a bit of delay between pushing something and being able to see it online. This happens with other platforms too, but I found it more noticeable here. I guess Platform targets programmers as its main customers, so having a local environment would be a plus.

Overall I found the experience positive, but a bit involved. If you want an easy solution I must recommend against it, but hopefully this tutorial and all their documentation will help you cross the bridge. And keep in mind that's only the first step; after that it is just a new way of launching a Drupal site.

I haven't launched the site yet, so I have yet to see how it performs. But given the caching approach and the different services provided, I think it will be OK.

The How

I guess things are easier if you don't have an existing process and follow their workflow 100%. Since I had prior experience I decided to build the site locally first, and then publish it online. How hard could it be? Well... not hard, but it has its quirks.

So here are the steps if you are pushing your local Drupal 7 site to Platform:

  1. First, add your SSH key and start a Git repo (or upload your current Git repo). More on the repo in the last steps, but getting your SSH key into Platform.sh is the first step to gain access using Git, SSH, etc.
  2. Then you should write your own ".platform.app.yaml" file, here is my example:
    
    name: drupal
    type: php:5.6
    build:
        flavor: drupal
    relationships:
        database: "database:mysql"
        solr: "search:solr"
        redis: "cache:redis"
    web:
        document_root: "/"
        passthru: "/index.php"
        whitelist:
          # robots.txt.
          - /robots\.txt$    
          # CSS and Javascript.
          - \.css$
          - \.js$
          # image/* types.
          - \.gif$
          - \.jpe?g$
          - \.png$
          - \.ico$
          - \.bmp$
          # fonts types.
          - \.ttf$  
    disk: 2048
    mounts:
        "/public/sites/default/files": "shared:files/files"
        "/tmp": "shared:files/tmp"
        "/private": "shared:files/private"
    hooks:
        # We run deploy hook after your application has been deployed and started.
        deploy: |
            cd public
            drush -y updatedb
    crons:
        drupal:
            spec: "*/20 * * * *"
            cmd: "cd public ; drush core-cron"
        
    
  3. We also need to add the MySQL service (among others) to our application, so we need to add another file to the repository, ".platform/services.yaml":
    
    database:
      type: mysql:5.5
      disk: 512
    
    search:
        type: solr:4.10
        disk: 512
    
    cache:
        type: redis:2.8
        
    

    This is actually one of the good things about using Platform.sh: you can have your Drupal with Solr + Redis at the same cost. When using search on your site Solr really helps, and when configured in Drupal, Redis can speed up your site's logged-in response time (it is similar to Memcache).

  4. We also need to add the domain, this is done by adding the file ".platform/routes.yaml":
    
    "http://www.{default}/":
        type: upstream
        upstream: "drupal:php"
    
    "http://{default}/":
        type: redirect
        to: "http://www.{default}/"
        
    
  5. Now you need to configure Drupal to use the Platform.sh DB service. You do this by customizing your settings.php: Platform will automatically create a "settings.local.php" file which will connect you to the DB. So replace your "settings.php" with:
    
    <?php
    $update_free_access = FALSE;
    
    $local_settings = dirname(__FILE__) . '/settings.local.php';
    if (file_exists($local_settings)) {
      require_once($local_settings);
    }
        
    
  6. And we are ready to start copying things across. I will begin with the DB copy. There are other options, but I will explain using an SSH connection:
    
    scp mysitesdb.dump.gz [PROJECT-ID]-[ENV]@ssh.[REGION].platform.sh:/app/tmp
    ssh [PROJECT-ID]-[ENV]@ssh.[REGION].platform.sh
    zcat tmp/mysitesdb.dump.gz | mysql -h database.internal main
    rm tmp/mysitesdb.dump.gz
        
    
  7. To get your code into the platform git repo you'll need to create/push it:
    
    cd mysite/code
    git init
    git remote add platform [PROJECT-ID]@git.[REGION].platform.sh:[PROJECT-ID].git
    git add --all
    git commit -m "Initial commit of My Project"
    git push
        
    

    It is important to know that each git push triggers an application build on Platform. So the push will take longer than you are used to: it won't only push, but also build the whole application for you on each push.

  8. To import the files you can use rsync:
    
    rsync -r my/local/files/. [PROJECT-ID]-[ENV]@ssh.[REGION].platform.sh:public/sites/default/files/
        
    
  9. Finally we will need to rebuild the site registry:
    
    ssh [PROJECT-ID]-[ENV]@ssh.[REGION].platform.sh
    drush dl registry_rebuild --destination=/app/tmp
    sed -i 's/, define_drupal_root()/, '"'"'\/app\/public'"'"'/' /app/tmp/registry_rebuild/registry_rebuild.php
    cd /app/public
    php ../tmp/registry_rebuild/registry_rebuild.php
        
    

Following these steps I ended up with my local version of the site running on Platform. The process for Drupal 8 is quite similar; you definitely don't need the final step. Also, I recall a simpler setup for settings.php, but it doesn't differ much from the above.

The main gotchas for me were adding normal files (CSS, JS & images) to the ".platform.app.yaml" file within the "whitelist" section, and it took me a while to be able to run from the proper DB (by creating the custom settings.php shared above). In both situations support took a bit of time, but got me onto the right track.

There is a lot of documentation, but I would like to flag the "Getting started for the impatient" guide because it is in line with this blog post. It didn't solve all my issues, but it is a nice summary of the whole thing.

Hope this helps someone out there. I know I looked for prior experiences and found little information beyond the Platform.sh docs.

May 02 2016

DrupalCon in New Orleans, Louisiana, May 9-13, 2016

DrupalCon is almost here and it’s time to start filling out your schedule. There’s a lot to do and see (not to mention eating lots of great New Orleans food!), so we definitely recommend having at least a rough game plan for how to use your time. Here’s a look at things you should be considering, especially if you are looking to take away a lot of Drupal 8 knowledge.

My Schedule

Sessions and BoFs

One of the great things about DrupalCon sessions is that they are recorded and uploaded to YouTube pretty quickly. This means that you can skip the live session (unless you think you’ll have questions you want to ask) and catch up later. Why would you do that? Well, the Birds of a Feather (BoF) sessions are not recorded. These are informal sessions that get organized and happen during the 'Con. If you want to be part of these interesting conversations, then you need to be there at the time. So, our first bit of advice is to check out the BoF schedule each day and skip any formal sessions that conflict for you.

That said, it also makes sense to have a good outline of which sessions you do want to see ahead of time and that makes checking for conflicts much easier. On the DrupalCon site, you can click the “Add to my schedule” link in the sidebar of any session, and then you’ll have a nice, handy reference when you go to “My Schedule”. Of course, there are a lot of sessions about Drupal 8 in the schedule. In particular, you can find many of them in these main tracks: Coding and Development, Front End, and Site Building. Make sure you check out Joe’s session, Altering, Extending, and Enhancing Drupal 8. Last year’s DrupalCons introduced a new track just for Symfony, and this one is no different. The Symfony track definitely has some goodies for digging under the hood of Drupal 8 as well. For a different perspective on what you can do with Drupal 8, you should see Amber’s session Beyond the Blink: Add Drupal to Your IoT Playground where you’ll see how to use Drupal with "Internet of Things" projects.

If you really want to dive into Drupal 8, there are also day-long workshops available on the Monday before the main conference starts. In particular, for developers, we’re happy that our partners over at KnpLabs have a great workshop on D8 & Symfony: Dive Into the Core Concepts that Make Each Fly where you’ll spend the day really getting to understand the underlying structure by working with things like routes, controllers, events, and services.

For those who are not necessarily building, developing, or theming Drupal 8, there are a number of sessions that take a different look at Drupal 8:

Attend the Friday Sprints

In addition to the regular sessions at DrupalCon, Friday presents a lot of opportunities for people who want to dive in and get some real work done. There’s no better way to understand what is going on with Drupal 8 than to actually work on it! Friday is the classic sprint day, where everyone comes together to work on Drupal. There is a huge range of work going on, from documentation to code, plus good people and fun. Blake, Will, Amber, and Joe from our team will be there taking part in the fun. Blake and Will are mentoring as part of the Mentored Core Sprint which also has a morning workshop to get you all up and running, even if you’ve never contributed before. It’s a great place to get oriented to contribution with lots of helpful people there to answer questions and help you find good projects to work on. Joe is going to help lead up a sprint on the new Drupal 8 User Guide for Drupal.org. You can learn more about that project, and the kinds of things that need work at the sprint, by attending Joe’s session earlier in the week in the Drupal.org track, Documentation Is Getting An Overhaul.

We’re excited that we’ll be at DrupalCon, and this is shaping up to be an amazing event as always. We’d love to hear from you, so please don’t feel shy about walking up to us (find pictures of us on our About page) and introducing yourself. You’re also sure to run into us at the Lullabot party on Wednesday. See you there!

May 02 2016

How do you insert a contact form inside content in Drupal 8? Or on a specific page, in a specific location? By default, contact forms get a dedicated page. But what if we want to use them elsewhere? After some research, I almost thought I would have to write a few lines of code to create a specific plugin.

But contact forms, like almost everything in Drupal 8, are entities. In the end, placing a contact form, or any other form for that matter, is simpler than I had realized until now.

No need for Panels, Display Suite or custom preprocess functions. We will just need the Contact storage module, whose primary purpose is to extend contact forms so that their submissions are stored in the database, and which, as we will see a little further on, adds a little icing on the cake.

Let's discover how to inject a contact form inside content. This method can be used on any fieldable entity, such as blocks, for example.

Step 1: We add an Entity Reference type field, and select the Other option.

Step1 : add reference field other

And we give it a label.

Step2: give a label to the field

Step 2: After saving, we can select the entity we want to reference. We select the entity Contact form.

Step3: select the contact form entity to be referenced

Step 3: We then get the field configuration page. We can keep the default options. 

Step4 : default configuration field

Step 4: After adding the field, we set up the display formatter for our field (on the content type's Manage display page). We select the Rendered entity formatter for our contact field. This is where the Contact storage module steps in, by adding the Rendered entity option to the field formatter so the contact form can be displayed directly. This option is not available by default.

Step5: configure field formatter

Step 5: And that's it!

We just need to create our content now and select, in the autocomplete field, the form we want to display.

Step6: create the content and select the contact form

And we have our content with the form available to our visitors. We can of course select a different form for each piece of content.

Step7: the content with the form injected

If you want to place your form on some other pages, simply do the same with a block, which you can then place wherever you want. And all this without a single line of code. Effective, no?
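For completeness, if you ever do want to render a contact form from code (say, inside a custom block or controller), the form can be built from a contact_message entity. This is a hedged sketch: the form machine name "feedback" is an assumption, and the snippet is meant to run inside a Drupal 8 request, not standalone.

```php
<?php

// Build a render array for the contact form with machine name 'feedback'
// (the machine name is an assumption; use one that exists on your site).
$message = \Drupal::entityTypeManager()
  ->getStorage('contact_message')
  ->create([
    'contact_form' => 'feedback',
  ]);

// The entity form builder turns the message entity into a renderable form.
$form = \Drupal::service('entity.form_builder')->getForm($message);
// Return $form from a block plugin's build() method or a controller.
```

The field-based approach in the post is still the no-code option; this snippet is only for cases where a site builder UI is not available.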

May 02 2016

Ryan Szrama (rszrama), President and CEO of Commerce Guys, project leader of Drupal Commerce, and proud ex-Best Buy Geek Squad member joins Ryan, Ted, and Mike for a comprehensive discussion of Commerce Guys' recent relaunch as a standalone company and the current development progress of Drupal Commerce for Drupal 8. We also discussed Drupal 8.1, a potential future for the theme layer, the absolutely correct pronunciation of "Szrama", and a big announcement from Ted.

Interview

DrupalEasy News

Three Stories

Sponsors

Picks of the Week

Upcoming Events

Follow us on Twitter

Five Questions (answers only)

  1. Kayaking
  2. Clash Royale
  3. Become a beverage professional
  4. Llamas
  5. DrupalCon Barcelona 2007

Intro Music

The Dean Scream.

Subscribe

Subscribe to our podcast on iTunes, Google Play or Miro. Listen to our podcast on Stitcher.

If you'd like to leave us a voicemail, call 321-396-2340. Please keep in mind that we might play your voicemail during one of our future podcasts. Feel free to call in with suggestions, rants, questions, or corrections. If you'd rather just send us an email, please use our contact page.

May 02 2016

One of the most popular and valuable features in Drupal is multisite configuration, and Drupal 8 provides a simple way to create a multisite setup that removes a lot of the work. The following steps show how to configure a multisite in Drupal 8:

  • You should have more than one domain and database. I am going to use the domains (www.domain1.com, www.domain2.com) and databases (domain1, domain2).
  • Create two folders named domain1 and domain2 in the drupal-8/sites/ folder, so the paths are drupal-8/sites/domain1/ and drupal-8/sites/domain2/
  • Create a files/ folder in both folders (drupal-8/sites/domain1/files)
  • Copy the default.settings.php file into both folders and rename it settings.php (drupal-8/sites/domain1/settings.php, drupal-8/sites/domain2/settings.php)
  • Edit the settings.php file for each domain to add its database:
$databases['default']['default'] = array (
  'database' => 'domain1', // Use domain1 for www.domain1.com and domain2 for www.domain2.com.
  'username' => 'root',
  'password' => 'root',
  'prefix' => '',
  'host' => 'localhost',
  'port' => '3306',
  'namespace' => 'Drupal\\Core\\Database\\Driver\\mysql',
  'driver' => 'mysql',
);
  • Copy the drupal-8/sites/example.sites.php file, paste it in the same location, and rename it to sites.php (drupal-8/sites/sites.php)
  • Add the following lines at the bottom of the drupal-8/sites/sites.php file:
$sites = array(
 'domain1' => 'domain1.com', // Folder name => Domain name.
 'domain2' => 'domain2.com',
);

That's all. Both domains will work well with different databases on a single Drupal instance.
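The folder and file steps above can be sketched in shell. This is a hedged sketch: the Drupal tree is simulated with a temporary directory so it is self-contained; on a real site you would cd into your actual drupal-8/sites directory and then edit the copied files as described above.

```shell
# Simulate the Drupal tree so the sketch can run anywhere.
ROOT=$(mktemp -d)/drupal-8
mkdir -p "$ROOT/sites"
touch "$ROOT/sites/default.settings.php" "$ROOT/sites/example.sites.php"

cd "$ROOT/sites"
for site in domain1 domain2; do
  mkdir -p "$site/files"                        # per-site files/ directory
  cp default.settings.php "$site/settings.php"  # then edit each $databases entry
done
cp example.sites.php sites.php                  # then append the $sites mapping

ls */settings.php
```

After running the equivalent commands on a real site, only the two manual edits remain: the $databases array in each settings.php and the $sites array in sites.php.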

May 02 2016

I needed a way to check whether the current user has permission to view the current (or a particular) page. I searched a lot and finally found the exact way, and I am going to show you the trick in this blog.

Drupal has an API function called "drupal_valid_path". Normally it is used to test whether a URL is valid or not, but the trick is that it also checks whether the user has permission to view the page.

It will return TRUE if the path is valid and the user has permission to view the page; otherwise it will return FALSE.

For example,

$path = current_path();
if (drupal_valid_path($path)) {
  // Your code here.
}
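
Note that drupal_valid_path() and current_path() are Drupal 7 APIs. In Drupal 8 the equivalent check goes through the path.validator service; a minimal sketch, meant to run inside a Drupal 8 request:

```php
<?php

// Drupal 8 equivalent: isValid() returns TRUE only if the path exists
// and the current user has access to it.
$path = \Drupal::service('path.current')->getPath();
if (\Drupal::service('path.validator')->isValid($path)) {
  // Your code here.
}
```

As in Drupal 7, the access check is done for the currently logged-in user, so the same path can validate for an administrator and fail for an anonymous visitor.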
May 02 2016

Most of the time, developers don't like the GUI; it feels tedious. Drupal has a tool, Drush, to do management work from the command line.

Installing a Drupal site through the browser is also tedious, and Drush has an option to install the full site with a single command.

The following command installs Drupal using the standard profile:

drush site-install standard --account-name=admin --account-pass=[user_pass] --db-url=mysql://[db_user]:[db_pass]@localhost/[db_name]
May 02 2016

In a recent post we talked about how introducing outside-in experiences could improve the Drupal site-building experience by letting you immediately edit simple configuration without leaving the page. In a follow-up blog post, we provided concrete examples of how we can apply outside-in to Drupal.

The feedback was overwhelmingly positive. However, there were also some really important questions raised. The most common concern was the idea that the mockups ignored "context".

When we showed how to place a block "outside-in", we placed it on a single page. However, in Drupal a block can also be made visible for specific pages, types, roles, languages, or any number of other contexts. The flexibility this provides is one place where Drupal shines.

Why context matters

For the sake of simplicity and focus we intentionally did not address how to handle context in outside-in in the last post. However, incorporating context into "outside-in" thinking is fundamentally important for at least two reasons:

  1. Managing context is essential to site building. Site builders commonly want to place a block or menu item that will be visible on not just one but several pages or to not all but some users. A key principle of outside-in is previewing as you edit. The challenge is that you want to preview what site visitors will see, not what you see as a site builder or site administrator.
  2. Managing context is a big usability problem on its own. Even without outside-in patterns, making context simple and usable is an unsolved problem. Modules like Context and Panels have added lots of useful functionality, but all of it happens away from the rendered page.

The ingredients: user groups and page groups

To begin to incorporate context into outside-in, Kevin Oleary, with input from yoroy, Bojhan, Angie Byron, Gábor Hojtsy and others, has iterated on the block placement examples that we presented in the last post, to incorporate some ideas for how we can make context outside-in. We're excited to share our ideas and we'd love your feedback so we can keep iterating.

To solve the problem, we recommend introducing 3 new concepts:

  1. Page groups: re-usable collections of URLs, wildcards, content types, etc.
  2. User groups: reusable collections of roles, user languages, or other user attributes.
  3. Impersonation: the ability to view the page as a user group.

Page groups

Most sites have some concept of a "section" or "type" of page that may or may not equate to a content type. A commerce store, for example, may have a "kids" section with several product types that share navigation or other blocks. Page groups adapt to this by creating reusable "bundles" of content consisting either of a certain type (e.g. all research reports), of manually curated lists of pages (e.g. a group that includes /home, /contact us, and /about us), or a combination of the two (similar to the Context module, but Context never provided an in-place UI).

User groups

User groups would combine multiple user contexts like role, language, location, etc. Example user groups could be "Authenticated users logged in from the United States", or "Anonymous users that signed up to our newsletter". The goal is to combine the massive number of potential contexts into understandable "bundles" that can be used for context and impersonation.

Impersonation

As mentioned earlier, a challenge is that you want to preview what site visitors will see, not what you see as a site builder or site administrator. Impersonation allows site builders to switch between different user groups, so that a page can be previewed as that type of user.

Using page groups, user groups and impersonation

Let's take a look at how we use these 3 ingredients in an example. For the purpose of this blog post, we want to focus on two use cases:

  1. I'm a site builder working on a life sciences journal with a paywall and I want to place a block called "Download report" next to all entities of type "Research summary" (content type), but only to users with the role "Subscriber" (user role).
  2. I want to place a block called "Access reports" on the main page, the "About us" page, and the "Contact us" page (URL based), and all research summary pages, but only to users who are anonymous users.

Things can get more complex but these two use cases are a good starting point and realistic examples of what people do with Drupal.

Step #1: place a block for anonymous users

Let's assume the user is a content editor, and the user groups "Anonymous" and "Subscriber" as well as the page groups "Subscriber pages" and "Public pages" have already been created for her by a site builder. Her first task is to place the "Access reports" block and make it visible only for anonymous users.

Place a block for anonymous users

First the editor changes the impersonation to "Anonymous" then she places the block. She is informed about the impact of the change.

Step #2: place a block for subscribers

Our editor's next task is to place the "Download reports" block and make it visible only for subscribers. To do that she is going to want to view the page as a subscriber. Here it's important that this interaction happens smoothly, and with animation, so that changes that occur on the page are not missed.

Place a block for subscribers

The editor changes the impersonation to "Subscribers". When she does the "Access reports" block is hidden as it is not visible for subscribers. When she places the "Download report" block and chooses the "Subscriber pages" page group, she is notified about the impact of the change.

Step #3: see if you did it right

Once our editor has finished step one and two she will want to go back and make sure that step two did not undo or complicate what was done in step one, for example by making the "Download report" block visible for Anonymous users or vice versa. This is where impersonation comes in.

Confirm you did it right

The anonymous users need to see the "Access reports" block and subscribers need to see the "Download report" block. Impersonation lets you see what that looks like for each user group.

Summary

The idea of combining a number of contexts into a single object is not new; both Context and Panels do this. What is new here is that when you bring this to the front end with impersonation, you can make a change that has broad impact while seeing it exactly as your user will.

May 02 2016
May 02

More than two years ago I gave a session about the future of media at DrupalCon Prague. The outcome of that session was a planning sprint that happened two days after it. One of the ideas born at that sprint was Media entity, a storage layer for media-related information built with simplicity and support for remotely hosted media in mind. Its development started shortly after that and accelerated significantly in the spring of the next year, when the core of the media initiative met at NYC Camp and agreed on a common battle plan for Drupal 8.

Media entity and its plugins have been pretty stable for the last few months. It seemed to be almost ready for its first release, but there were a few tickets in the issue queue which I wanted to resolve first. In the last few days I found some time to look at those. Together with Tadej Baša (@paranojik) we managed to finish all of the most important patches, which allowed me to tag 8.x-1.0 yesterday. I am thrilled and extremely proud. A lot of individuals and organizations invested many hours to make this possible and I would like to thank every single one of them. Special thanks go to the NYC Camp organizers, who organized two sprints and have been supporting us from the beginning; Examiner.com, my ex-employer, who allowed me to spend a significant amount of my time working on many media-related modules; and MD Systems, who organized two media sprints and let part of their team work on Drupal 8 media for 3 months.

Along with the main module I released some of its plugins too: Image, Slideshow, Twitter and Instagram. There are also plugins that handle Video, Audio and Documents, which are also quite ready to be used.

Media entity and its plugins offer many interesting features:

  • simple and lean storage for local and remote media,
  • out of the box integration with standard Drupal's tools,
  • pluggable architecture that allows easy implementation of additional plugins,
  • 100% automatic delivery of thumbnails,
  • delivery of remote metadata and
  • mapping of remote metadata with entity fields.

I encourage you to try it and let us know what you think. We are looking for co-maintainers too. If you'd like to spend some time in contrib and have ideas for new features let me know.

In the next few weeks we're planning releases of the other media modules. Stay tuned!

May 02 2016
May 02

Today’s the day to reconsider your hosting. We are launching amazee.io, a state-of-the-art hosting service with an integrated development and hosting environment. Think of a battle-proven system, automated deployments, full congruence between your development and production environments, and very competitive pricing.

“Why another Drupal hosting provider?” you may ask. Read why: stories.amazee.io

And if you have not yet seen our website or factsheet, let me introduce the team behind the system: Michael Schmid (Schnitzel), CTO; Tyler Ward and Bastian Widmer for DevOps; and myself, who after three great years at the Drupal Association accepted the opportunity to lead the new venture as CEO. We are excited!

Hope to see you at the upcoming DrupalCon in New Orleans.

May 01 2016
May 01

I got started with task runners a while ago using Grunt, thanks to an excellent 24 ways article by Chris Coyier. Lately I've been using Gulp more, and all the cool kids seem to be going that way, so for the Drupal 8 version of The Gallery Guide, I've decided to use Gulp.

Since hearing Chris Coyier talk about SVG icon systems, I've been meaning to implement them in my own workflow. I've written about a similar setup for Jekyll on the Capgemini engineering blog, and wanted to apply something similar to this site.

The Gulp setup for this project is almost identical to the one described in that post, so I won't go into too much detail here, but in the spirit of openness that's guiding this project, the gulpfile is on Github.

In short, there's a directory of SVG icons within the theme, and there's a Gulp task to combine them into a single file at images/icons.svg. Then the contents of that file are injected into the page using a preprocess function. There's a slight gotcha here - if the value is added directly, without going through the t() function, it automatically gets escaped to block cross-site scripting. It doesn't seem to make sense according to the documentation, but I needed to pass the value in without any prefix character on the placeholder:


function gall_preprocess_page(&$variables) {
  // Read in the combined SVG sprite generated by the Gulp task.
  $svg = file_get_contents(drupal_get_path('theme', 'gall') . '/images/icons.svg');
  // A placeholder with no prefix character passes the markup through unescaped.
  $variables['svg_icons'] = t('svg', array('svg' => $svg));
}

If we were working with user-supplied content, this would be opening up a dangerous attack vector, but given that it's content that I've created myself in the theme, it's safe to trust it.

Having done that, in the page.html.twig template, the variable is available for output:


{{ svg_icons }}

Then these files can be referenced - here's a snippet from region--header.html.twig:


<a href="https://www.twitter.com/thegalleryguide" title="Follow us on Twitter">
  <svg class="icon">
    <use xlink:href="#twitter"></use>
  </svg>
</a>

Part of me feels like there should be a more Drupal-ish way of doing this, so that the links are part of a menu. But given that this is my own project, and changing the icons would require a code change, it doesn't feel so bad to have them hard-coded in the template.

May 01 2016
May 01

This post is on how we implemented a simple (yet effective) BigPipe "like" rendering strategy for Drupal 7.

Why is big pipe so important?

Big pipe is a render strategy that assumes that not all the parts of your page are equally important, and that a loading delay on some of them is acceptable as long as the "core" of the page is delivered ASAP. Furthermore, instead of delivering those delayed pieces with subsequent requests (AJAX), it optimizes network load by using a streamed HTTP response so that you get all those delayed pieces in a single HTTP request/response.

Big pipe does not reduce server load, but dramatically improves your website load time if properly integrated with your application.

Sounds great, and it will work very well in some scenarios.

Take for example this landing page (apologies for the poor UX, that's a long story...):

This page has about 20 views and blocks. All of those views and blocks are cached, but can you imagine what a cold cache render of that page looks like? A nightmare....

What if we decided that only 4 of those views were critical to the page, and that the rest of the content could be streamed to the user after the page has loaded? It will load roughly 70% faster.

UPDATE: Adding support for content streaming has opened the door to successful business strategies - without penalizing initial page load times - such as geolocating (or even customizing per user) blocks, advertising and others. All of that while keeping page cache turned on and being able to handle similar amounts of traffic on the same hardware, and without resorting to custom Ajax loading (and coding).

We decided to take a shot at implementing a BigPipe-like render strategy for Drupal 7. We are NOT trying to properly reimplement BigPipe, just something EASY and CHEAP to implement, with little disruption to current core - that's why this is going to be dubbed Cheap Pipe instead of Big Pipe.

Furthermore, it was a requirement that this could be leveraged on any current Drupal application without modifying any existing code. It should be as easy as going to a block's or view's settings and telling the system to stream its contents. It should also provide programmatic means of defining content callbacks (placeholders) whose output is streamed after the page is served.

We made it, and it worked quite well!

Now every block has a "Cheap Pipe" rendering strategy option:

Where:

  • None: Block is rendered as usual.
  • Only Get: Cheap pipe is used only on GET requests
  • Always: Cheap pipe is used on GET/POST and other HTTP methods.

Cheap pipe is never used on AJAX requests no matter what you choose here.

Why these options? Because some blocks might contain logic that could misbehave depending on the circumstances, and we don't want to break anything. So you choose which blocks should be cheap-piped, how, and in what order.

What happens after you tell a block (the example is for blocks but there is an API to leverage this on any rendered thing) to be cheap-piped?

  • The block->view() callback is never triggered; the block is not rendered but replaced with a placeholder.
  • The page is served (flushed to the user) and becomes fully functional by artificially triggering the $(document).ready() event. The </body></html> tags are removed before serving so that the rest of the streamed content is properly formed.
  • After the page has been served to the user, all deferred content is streamed by flushing the PHP buffer as content gets created and rendered.
  • This content is sent to the user in a way that leverages the Drupal AJAX framework (although this is not AJAX) so that every piece of content that reaches the page gets properly loaded (Drupal behaviors attached, etc.).
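The steps above could be sketched in plain PHP along these lines (function names and markup are hypothetical illustrations of the idea, not the module's actual API):

```php
// 1. Instead of rendering the block, emit a placeholder element.
function cheap_pipe_placeholder($id) {
  return '<div id="cheap-pipe-' . $id . '"></div>';
}

// 2. Serve the page early: strip the closing tags and flush the buffer
// so the browser can start rendering while we keep the connection open.
function cheap_pipe_serve_page($html) {
  print str_replace(array('</body>', '</html>'), '', $html);
  flush();
}

// 3. Stream each deferred element as it is rendered, injecting it into
// its placeholder and re-attaching Drupal behaviors on the client.
function cheap_pipe_stream_element($id, $markup) {
  print '<script>jQuery("#cheap-pipe-' . $id . '").html('
    . drupal_json_encode($markup) . '); Drupal.attachBehaviors();</script>';
  flush();
}
```

The real implementation also has to cope with output buffering layers, gzip, and reverse proxies that buffer responses before forwarding them.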

Take a look at this end-of-page sample:

The output markup even gives you some stats to see what time it took to render each Cheap Piped piece of content.

Because cheap piped elements are generated sequentially, if an element is slow, it will delay the rendering of the rest of the elements. That's why we implemented a "weight" property so that you can choose in what order elements are cheap-piped.

What kind of problems did we find?

  • Deferred content that called drupal_set_message() was, obviously, not working because the messages had already been processed and rendered. Solved by converting the messages rendering into Cheap Pipe and making it the last one to be processed (thanks to the weight property).
  • Deferred content that relied on drupal_goto() (such as forms in blocks) would not work because the page had already been served to the user. drupal_goto() had to be modified so that if cheap pipe rendering had already started, the redirection was done client side with javascript.
  • When fatals are thrown after the main content has been served, your page gets stuck in a weird visual state. There is nothing we can do about this, because after a fatal you lose control of PHP output.
  • Server load skyrocketed. What used to be anonymous pages served from cache now requires a full Drupal bootstrap to serve the streamed content.
hw
May 01 2016
May 01

Things have gotten busy after DrupalCon Asia which meant that the Drupal meetup we hold in Bangalore every month was a little difficult to organize. Srijan Technologies stepped up and offered their office space in Whitefield, Bangalore. They also took care of snacks and even lunch for all the attendees. Kudos to Srijan for organizing the meetup. Thank you!

This is actually the second meetup since DrupalCon Asia. The first one was held soon after the con and we had the honour of hosting Danese Cooper, who was one of the keynote speakers at DrupalCon Asia in Mumbai. That meetup was held at 91Springboard in Koramangala on 5th March, 2016 and saw an attendance of about 40-50 people. Danese Cooper repeated the essentials of her DrupalCon Asia keynote, which received a great response. This was followed by a session on mobile features in Drupal 8 by Ram Singh. Photos from this meetup are at the end of this post.

Drupal Meetup Attendance - March 2016

Attendance Breakup in Drupal Meetup in March, 2016

April’s meetup started at around 10:45 AM with Soumyajit Basu explaining how to use Protractor JS for front-end testing. After an insightful session and an in-depth discussion on testing frameworks and methodologies, we continued with the session on Migrating from Drupal 7 to Drupal 8 by Harish Goud. After that, I explained some of the more basic migration concepts, and followed with a discussion on drush make and composer.

Even though the attendance was lower than our usual meetups, we had a great and fruitful discussion. We ended with pizzas and soft drinks courtesy of Srijan and left around 2 PM.

Photos from Drupal Meetup in March 2016

Open Source and Drupal Meetup - March 2016

Photos from Drupal Meetup in April 2016

Drupal Meetup Bangalore - April 2016

Apr 30 2016
Apr 30

Note: This blog post is based on Drupal 8.1.x. It is an updated version of a previous tutorial based on Drupal 8.0.x. While the concepts are largely the same as 8.0.x, a refactoring of the core migrate modules took place in Drupal 8.1.x (migrations will become plugins in 8.1.x). This updated tutorial updates the previous example to work with Drupal 8.1.x, as well as demonstrates how to specify a migration group and run the migration with Drush. If you're familiar with the previous tutorial, you may want to skip to the "Rolling up our sleeves" section below.

Even if you're only casually acquainted with Drupal 8, you probably know that the core upgrade path to Drupal 8 has been completely rewritten from the ground-up, using many of the concepts of the Migrate and Drupal-to-Drupal migration modules. Using the Migrate upgrade module, it is possible to migrate much of a Drupal 6 (or Drupal 7) site to Drupal 8 with a minimum of fuss (DrupalEasy.com is a prime example of this). "Migrate upgrade" is similar to previous Drupal core upgrade paths - there are no options to pick-and-choose what is to be migrated - it's all-or-nothing. This blog post provides an example of how to migrate content from only a single, simple content type in a Drupal 6 site to a Drupal 8.1.x site, without writing any PHP code at all.

Setting the table

First, some background information on how the Drupal 8 Migrate module is architected. The Migrate module revolves around three main concepts:

  • Source plugins - these are plugins that know how to get the particular data to be migrated. Drupal's core "Migrate" module only contains base-level source plugins, often extended by other modules. Most Drupal core modules provide their own source plugins that know how to query Drupal 6 and Drupal 7 databases for data they're responsible for. For example, the Drupal 8 core "Node" module contains source plugins for Drupal 6 and Drupal 7 nodes, node revisions, node types, etc… Additionally, contributed and custom modules can provide additional source plugins for other CMSes (WordPress, Joomla, etc…), database types (Oracle, MSSQL, etc…), and data formats (CSV, XML, JSON, etc.)
  • Process plugins - these are plugins designed to receive data from source plugins, then massage it into the proper form for the destination on a per-field basis. Multiple process plugins can be applied to a single piece of data. Drupal core provides various useful process plugins, but custom and contributed modules can easily implement their own.
  • Destination plugins - these are plugins that know how to receive data from the process plugins and create the appropriate Drupal 8 "thing". The Drupal 8 core "Migrate" module contains general-purpose destination plugins for configuration and content entities, while individual modules can extend that support where their data requires specialized processing.

Together, the Source -> Process -> Destination structure is often called the "pipeline".

It is important to understand that for basic Drupal 6 to Drupal 8 migrations (like this example), all of the code is already present - all the developer needs to do is configure the migration. It is much like preparing a meal where you already have a kitchen full of tools and food - the chef only needs to assemble what is already there.

The configuration of the migration for this example will take place completely in two custom .yml files that will live inside of a custom module. In the end, the custom module will be quite simple - just a .info.yml file for the module itself, and two .yml files for configuring the migration.

Reviewing the recipe

For this example, the source Drupal 6 site is a large site, with more than 10 different content types, thousands of nodes, and many associated vocabularies, users, profiles, views, and everything else that goes along with an average Drupal site that has been around for 5+ years. The client has decided to rewrite the entire site in Drupal 8, rebuilding virtually the entire site from the ground-up - but they wanted to migrate a few thousand nodes from two particular content types. This example will demonstrate how to write a custom migration for the simpler of the two content types.

The "external article" content type to be migrated contains several fields, but only a few of consequence:

  • Title - the node's title
  • Publication source - a single line, unformatted text field
  • Location - a single line, unformatted text field
  • External link - a field of type "link"

Some additional notes:

  • The "Body" field is unused, and does not need to be migrated.
  • The existing data in the "Author" field is unimportant, and can be populated with UID=1 on the Drupal 8 site.
  • The node will be migrated from type "ext_article" to "article".

Several factors make this a particularly straight-forward migration:

  • There are no reference fields at all (not even the author!)
  • All of the field types to be migrated are included with Drupal 8 core.
  • The Drupal 6 source plugin for nodes allows a "type" parameter, which is super-handy for migrating only nodes of a certain type from the source site.

Rolling up our sleeves

With all of this knowledge, it's time to write our custom migration. First, create a custom module with only an .info.yml file (Drupal Console's generate:module command can do this in a flash.) List the Migrate Drupal (migrate_drupal) and Migrate Plus modules as dependencies. The Migrate Drupal module dependency is necessary for some of its classes that contain functionality to query Drupal 6 databases, while the Migrate Plus module dependency is required because custom migrations are now plugins that utilize the MigrationConfigEntityPluginManager provided by Migrate Plus (full details in a blog post by Mike Ryan).

Next, create a "migration group" by creating a migrate_plus.migration_group.mygroup.yml file. The purpose of a migration group is to be able to group related migrations together, for the benefit of running them all at once as well as providing information common to all the group migrations (like the source database credentials) in one place.
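A minimal sketch of what migrate_plus.migration_group.mygroup.yml might look like (the label and description here are illustrative; the key names follow Migrate Plus conventions):

```yaml
id: mygroup
label: My migration group
description: Migrations of content from the legacy Drupal 6 site.
source_type: Drupal 6
shared_configuration:
  source:
    key: legacy
```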

The "shared_configuration -> source -> key" value of "legacy" corresponds to a database specified in the Drupal 8 site's settings.php file. For example:
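A sketch of that settings.php entry follows; the credentials are placeholders, and the array key ("legacy") is what the migration group's source key refers to:

```php
// Additional database connection for the legacy Drupal 6 site.
$databases['legacy']['default'] = array(
  'driver' => 'mysql',
  'database' => 'drupal6_legacy',
  'username' => 'dbuser',
  'password' => 'dbpass',
  'host' => 'localhost',
  'prefix' => '',
);
```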

Next, create a new "migrate_plus.migration.external_articles.yml" file in /config/install/. Copy/paste the contents of Drupal core's /core/modules/node/migration_templates/d6_node.yml file into it. This "migration template" is what all node migrations are based on when running the Drupal core upgrade path. So, it's a great place to start for our custom migration. Note that the file name begins with "migrate_plus.migration" - this is what allows our custom migration to utilize the Migrate Plus module's MigrationConfigEntityPluginManager.

There are a few customizations that need to be made in order to meet our requirements:

  • Change the "id" and "label" of the migration to something unique for the project.
  • Add the "migration_group: mygroup" to add this migration to the group we created above. This allows this migration access to the Drupal 6 source database credentials.
  • For the "source" plugin, the "d6_node" migration is fine - this source knows how to query a Drupal 6 database for nodes. But, by itself, it will query the database for nodes, regardless of their type. Luckily, the "d6_node" plugin takes an (optional) "node_type" parameter. So, we add "ext_article" as the "node_type".
  • We can remove the "nid" and "vid" field mappings in the "process" section. The Drupal core upgrade path preserves source entity ids, but as long as we're careful with reference fields (in our case, we have none), we can remove the field mappings and let Drupal 8 assign new node and version ids for incoming nodes. Note that we're not migrating previous node revisions, only the current revision.
  • Change the "type" field mapping from a straight mapping to a static value using the "default_value" process plugin. This is what allows us to change the type of the incoming nodes from "ext_article" to just "article".
  • Similarly, change the "uid" field mapping from a straight mapping to a static_value of "1", which assigns the author of all incoming nodes to the UID=1 user on the Drupal 8 site.
  • Since we don't have any "body" field data to migrate, we can remove all the "body" field mappings.
  • Add a mapping for the "Publication source". On the Drupal 6 site, this field's machine name is "field_source"; on the Drupal 8 site, the field's machine name is field_publication_source. Since it is a simple text field, we can use a direct mapping.
  • Add a direct mapping for "field_location". This one is even easier than the previous because the field name is the same on both the source and destination site.
  • Add a mapping for the source "External link" field. On the Drupal 6 site, the machine name is "field_externallinktarget", while on the Drupal 8 site, it has been changed to "field_external_link". Because this is a field of type "link", we must use the "d6_cck_link" process plugin (provided by the Drupal 8 core "Link" module). This process plugin knows how to take Drupal 6 link field data and massage it into the proper form for Drupal 8 link field data.
  • Finally, we can remove all the migration dependencies, as none of them are necessary for this simple migration.

The resulting file is:

Note that .yml files are super-sensitive to indentation. Each indentation must be two spaces (no tab characters).
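Assembled from the customizations above, the migration file would look roughly like this (a sketch based on core's d6_node.yml migration template; some of the straight field mappings carried over from the template are omitted for brevity):

```yaml
id: external_articles
label: Drupal 6 external article nodes
migration_group: mygroup
source:
  plugin: d6_node
  node_type: ext_article
process:
  type:
    plugin: default_value
    default_value: article
  uid:
    plugin: default_value
    default_value: 1
  title: title
  status: status
  created: created
  changed: changed
  field_publication_source: field_source
  field_location: field_location
  field_external_link:
    plugin: d6_cck_link
    source: field_externallinktarget
destination:
  plugin: entity:node
```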

Serving the meal

To run the migration, first enable the custom module. The act of enabling the module and Drupal core's reading in of the migration configuration could trigger an error if the configuration isn't formatted properly. For example, if you misspelled the "d6_node" source plugin as "db_node", you'll see the following error:

[Error] The "db_node" plugin does not exist.

If the module installs properly, the Drush commands provided by the Migrate Tools (8.x-2.x-dev - 2016-Apr-12 or later) module can be used to manage the migration. First, the Drush "migrate-status" command (alias: ms) can be run to confirm that the migration configuration exists. This is similar to functionality in Drupal 7's Migrate module.

~/Sites/drupal8migrate $ drush ms
 Group: mygroup          Status  Total  Imported  Unprocessed  Last imported       
 external_articles       Idle    602    602       0            2016-04-29 16:35:53 

Finally, using Drush, the migration can be run using the "migrate-import" (alias: mi) command:

~/Sites/drupal8migrate $ drush mi external_articles
Processed 602 items (602 created, 0 updated, 0 failed, 0 ignored) - done with 'external_articles' 

Similarly, the migration can be rolled back using the drush "migrate-rollback" (alias: rm) command:

~/Sites/drupal8migrate $ drush migrate-rollback external_articles
Rolled back 602 items - done with 'external_articles' 

Once the migration is complete, navigate over to your Drupal 8 site, confirm that all the content has been migrated properly, then uninstall the custom module as well as the other migrate-related modules.

Note that the Migrate module doesn't properly dispose of its tables (yet) when it is uninstalled, so you may have to manually remove the "migrate_map" and "migrate_message" tables from your destination database.

Odds and ends

  • One of the trickier aspects about writing custom migrations is updating the migration configuration on an installed module. There are several options:
    • The Configuration development module provides a config-devel-import-one (cdi1) drush command that will read a configuration file directly into the active store. For example: drush cdi1 modules/custom/mymodule/config/install/migrate.migration.external_articles.yml
    • Drush core provides a config-edit command that allows a developer to directly edit an active configuration.
    • Finally, if you're a bit old-school, you can uninstall the module, then use the "drush php" command to run Drupal::configFactory()->getEditable('migrate.migration.external_articles')->delete();, then reinstall the module.
  • Sometimes, while developing a custom migration, if things on the destination get really "dirty", I've found that starting with a fresh DB helps immensely (be sure to remove those "migrate_" tables as well!)

Additional resources

Thanks to Mike Ryan and Jeffrey Phillips for reviewing this post prior to publication.

hw
Apr 30 2016
Apr 30

Discovery

I have contributed to Drupal 8 over the course of last couple of years. I have also written custom modules for Drupal 8, but I never really tried building a Drupal 8 site. After Drupal 8’s release last November, I thought it was time to change that.

That is how I set about upgrading one of my Drupal 7 sites to Drupal 8. As soon as I started building a list of modules, I stopped. Pathauto wasn’t ready yet. It was not even close to being ready. Sure, there is an 8.x branch, but also a warning that the architecture may change significantly. Okay, it was time to roll up my sleeves and get to work. I cloned the github repo where the port was happening and started.

And then I stopped again. It needed token, which was also just a dev release. Now, token is a module everyone needs and no one knows, especially since most of the token system has been in core since Drupal 7. How could it still be at a dev release two months after Drupal 8's final release?! Okay, let's start with token. And that is the story of how I landed in token's issue queue one fine Monday.

Requirements

I started by reviewing the token module code. It seemed to work with Drupal 8, but it was still essentially a Drupal 7-based architecture. I am not saying there is anything wrong with a Drupal 7-based architecture, but we were missing out on the clean code, discrete responsibilities, and easy testing that come with code architected for modern PHP.

Refactoring

I found an issue that covered moving helper functions to services, but the response wasn't great. I see the point: the code worked, and refactoring was not a priority against getting it to work; however, to me, getting it to work cleanly along with the rest of Drupal would need a refactored code base. It was not easy getting in either, what with over 3000 combined lines of code in the module and inc files. I started looking for low-hanging fruit and found one - the token browser not being shown on the help page.

Theme Hooks

After that and other simple fixes, I looked into converting theme hooks to Twig. Theme hooks still work in Drupal 8 but are deprecated. Further, there were three theme hooks in the token module, and they were interdependent and confusing. I started by eliminating one of them and also submitted patches or pull requests to other modules that used this behaviour.

Token tree screenshot

Essentially, there was a theme hook called token_tree_link which was never used directly. Instead, if a module wanted to show a link to a token tree, it had to use the token_tree hook with the '#dialog' option set to TRUE. This was quite counter-intuitive in my opinion, and it also did not allow you to show links without a dialog (we subsequently removed the dialog option entirely, as there was no use case for showing the tree without a dialog). In other words, the token_tree hook was doing too much, and I submitted a patch that removed this functionality. Now, you use token_tree_link directly if you want to show a link. Since this would break many other projects (like pathauto, metatag, etc…), I submitted patches to many of them.

Eventually, the token_tree_link theme hook was converted to a template_preprocess along with a twig file, as per current Drupal 8 standards.

As it turned out, the token_tree theme hook did little in itself and we later refactored it into a method on the tree builder service. There was another hook called tree_table, which was converted to a render element. Now, to show a token tree, all you need to do is:


$element = [
  '#type' => 'token_tree_table',
  '#token_tree' => $token_tree, // Generated token tree.
  ...
];

The token_tree theme hook, which was a convenience function to generate the token tree (and hence not really just a theme hook), was moved to the buildRenderable() method on the token.tree_builder service.
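For example, a module that wants to render a browsable token tree can now call the service directly (a sketch; check the service's interface in the current release before relying on it):

```php
// Build a render array for a token tree covering node and user tokens.
$build = \Drupal::service('token.tree_builder')->buildRenderable(['node', 'user']);
```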

Services

The first helper function to move to a service was token_get_info(). We didn't just move it; we also simplified it greatly. token_get_info() would return different results depending on its parameters, and that is not a good idea. We split the function into three different methods - getInfo(), getTypeInfo() and getTokenInfo(). Similarly, we deprecated other functions and moved them to methods. Here is a list of functions now on the Token service.

  • token_get_info() → getInfo()
  • token_get_info($token_type) → getTypeInfo()
  • token_get_info($token_type, $token) → getTokenInfo()
  • token_get_global_token_types() → getGlobalTokenTypes()
  • token_get_invalid_tokens_by_context() → getInvalidTokensByContext()
  • token_get_invalid_tokens() → getInvalidTokens()

The cool thing here was that we replaced core's Token service with the one provided by the token module, and it made a lot of sense. This also means that if you have the token module enabled, all you need to do is call \Drupal::token() to get the enhanced token service.
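In practice that looks like this (method names as listed above; exact signatures may differ between releases):

```php
// Core's "token" service is swapped for the token module's implementation,
// so \Drupal::token() returns the enhanced service.
$token_service = \Drupal::token();
// All token info, or info scoped to a type or a single token.
$all_info = $token_service->getInfo();
$node_type_info = $token_service->getTypeInfo('node');
$title_info = $token_service->getTokenInfo('node', 'title');
```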

The next set of functions to be removed were related to building token trees. These functions were converted to corresponding methods on token.tree_builder service.

  • token_build_tree() → buildTree()
  • token_flatten_tree() → flattenTree()
  • _token_build_tree() → getTokenData() [internal use only]

Later, we also moved the token_tree theme hook functionality to a method on this service - buildRenderable(). This was done because the theme hook did a lot of processing to generate the token tree and pass it on to the tree_table theme hook, which was really out of scope for a theme hook.

Next, we moved the token entity mapping functions to their own service.

  • token_get_entity_mapping() → getEntityTypeMappings()
  • token_get_entity_mapping('token') → getEntityTypeForTokenType()
  • token_get_entity_mapping('entity') → getTokenTypeForEntityType()

This was another function that would give different kinds of results depending on its first parameter. For cleanliness, we split it into different methods.

The above functions were deprecated for a while and subsequently removed.

Devel Integration

Token had stopped working with Devel during its development as well. We used devel's ideas of a local task derivative and a route subscriber to add dynamic routes that show tokens for each of the entity types. There was already a controller responsible for showing the page with all tokens, which was refactored to work with all entity types.

Tests

Apart from writing tests for the new services that we introduced, we also fixed a lot of old tests. Some of the tests were integration tests using simpletest's KernelTestBase. We changed all such tests to use the modern Drupal\KernelTests\KernelTestBase, which brought in some speed improvements as well. These can be run directly using PHPUnit.

Other Changes

There were a lot of other small changes that went a long way in improving the module, such as replacing define with class constants, removing deprecated functions, and cleaning up a lot of old code. At the time of this writing, we are at alpha2 and there is still some work left, particularly around render caching of token trees. Please help get token to a stable release.

I hope this helps you refactor your own modules for Drupal 8, which is largely a matter of applying modern PHP principles. Please let me know your feedback, questions, or suggestions.

Apr 29 2016
Apr 29

Drupal 8 lays the foundation for building robust and flexible RESTful APIs quickly and efficiently. Combine this with Drupal’s best-in-class fieldable entity model and it becomes incredibly easy to construct systems that solve many different problems well.

Out of the box, Drupal 8 comes with core modules for all of the standard RESTful HTTP methods, GET, POST, PATCH, and DELETE. These endpoints are entity specific. Collection endpoints - endpoints that deal with entities in aggregate - are another story. The solution offered is the Views module.

In a headless or API-intensive site, however, core Drupal 8 and Views are limited by a major shortcoming. Support for querying your entity content over an API is limited to only the custom views that you create. This means that you must first create a view for any content that you want to expose. Filtering is limited to only the filters you explicitly enable, and there's no clear solution for fine-grained control over sorting and paging your results via query strings - the common practice for RESTful APIs. This creates a lot of development overhead for headless and API-intensive sites, which will inevitably end up with innumerable views.

Creating publicly available APIs would be worse yet. Typically, you would like a public API to allow your consumers to discover and access your data as they see fit. Managing each view for all your entity types becomes increasingly difficult with every new field or entity type added. This issue makes sense: the Views module's original intent was to provide prescribed aggregations of your content, possibly modified by a few contextual items like the current path or the current user. Views were never intended to be an all-purpose query tool for the end user.

Enter Entity Query API. Entity Query API allows API consumers to make queries against any entity in Drupal. From users, to nodes, to configuration entities, this is an incredibly powerful tool. By creating a standardized set of parameters for crafting these queries, Entity Query API allows developers to create generalized tooling not tied to particular views or entities. Moreover, they need not worry about creating matching views for every collection of content. This ability to let API consumers craft their own result-set further reinforces the separation of concerns between the client and the server.

Entity Query API does all this by taking advantage of the excellent QueryInterface provided by Drupal Core. The module simply translates your request URI and associated query string into an entity query on the backend, executes it, and returns the results as JSON. By using this, we also get the built in access control that Drupal entity queries provide.
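Under the hood, that means a request gets turned into the kind of entity query you would otherwise write in code (a sketch using core's entity query API; the query-string mappings in the comments are illustrative, not the module's documented syntax):

```php
<?php

use Drupal\node\Entity\Node;

// The kind of entity query the module would build on the backend from a
// request URI and query string, then execute and serialize as JSON.
$ids = \Drupal::entityQuery('node')
  ->condition('status', 1)   // e.g. a "published only" condition parameter
  ->sort('created', 'DESC')  // e.g. a sort parameter
  ->range(0, 10)             // e.g. paging parameters
  ->execute();

$nodes = Node::loadMultiple($ids);
```

Because this goes through QueryInterface, entity access checking applies just as it would for any entity query written by a developer.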

Entity Query API is still in alpha (as of April 2016), but it fully supports everything that you can do with an entity query in code, i.e., conditions, condition groups, sorting, ranges, etc. Like the REST UI module, we have a similar configuration page for enabling queryable entities. We support all core authentication methods as well as JSON Web Token Authentication (another module we've built). In the future, we'd like to dynamically collect any available authentication providers, just like the REST UI module.

I’m going to be sprinting on Entity Query API at DrupalCon New Orleans on Monday, May 9th 2016 and during the after-DrupalCon sprints on Friday, May 13th 2016. We’d like to add support for other encodings like XML and HAL+JSON (currently the module just supports JSON). Finally, we’d like to add the option to retrieve just entity IDs instead of fully loaded entities.

As always, there’s plenty of work to be done in open source. If you’re interested in Entity Query API, come find me during the sprints or send me a tweet anytime during DrupalCon, my handle is @gabesullice. Of course, the easiest way to help is just to download the module and report any bugs you find. Finally, if you're going to be at DrupalCon New Orleans, stop by the Aten booth, I'd love to hear your ideas and feedback!


Apr 29 2016
Apr 29

How to Host Drupal 8 on DigitalOcean

These are instructions on how to set up a DigitalOcean droplet to host your personal website. DigitalOcean is very affordable cloud hosting for developers (starting from $5 for a very simple droplet with 512MB memory, 1 CPU and a 20GB disk).

DigitalOcean provides great documentation with step by step instruction about how to configure your servers to do what you need.

Certainly, by building your own server you won't have the tools that Drupal-specific hosts such as Acquia or Pantheon provide, and I do recommend using Drupal-specific hosting for your clients, because they provide much better support on various levels, from server issues all the way to Drupal-specific issues.

While they have great support and pricing for businesses, they don't have affordable plans for personal websites, and that is the reason I am writing this blog post. By following these instructions and configuring the servers yourself, you will better understand how web servers work.

I won't rewrite all the instructions, instead I will include links to DigitalOcean's manuals with all the steps.

Note: When you create a droplet (server) you may choose a preconfigured droplet with Drupal 8 already installed. However, I prefer installing everything myself; by following the steps yourself you will learn new things.

We will be using Ubuntu 14.04, Nginx, PHP 7 and MySQL 5.6 (the LEMP stack). Please follow the instructions in the following order:

  1. Initial Server Setup with Ubuntu 14.04
  2. Once you complete the initial server configuration, make sure your server is secured. Please follow the firewall configuration instructions: How To Set Up a Firewall with UFW on Ubuntu 14.04. Basically, only allow ports 80 (HTTP), 443 (HTTPS) and 22 (SSH). Also read UFW Essentials: Common Firewall Rules and Commands
  3. How To Install Linux, nginx, MySQL, PHP (LEMP) stack on Ubuntu 14.04
  4. The link above will only install PHP 5. To upgrade it to PHP 7, use this: How To Upgrade to PHP 7 on Ubuntu 14.04
  5. If you would like to use Memcached with PHP 7, follow these instructions: Installing PHP-7 with Memcached
  6. Once you have your server configured, begin installing Drupal 8. Please see Nginx configuration settings for Drupal 8
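The firewall step in item 2 boils down to a handful of ufw commands (a sketch of the rules described above; run with sudo, and allow SSH before enabling the firewall so you don't lock yourself out):

```shell
# Allow SSH first so your session survives enabling the firewall.
sudo ufw allow 22/tcp

# Web traffic.
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp

# Turn the firewall on and confirm the rules.
sudo ufw enable
sudo ufw status verbose
```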

If you would like to use Apache (LAMP stack) for your webserver please use this:

  1. Follow the same server configuration from above.
  2. How To Install Linux, Apache, MySQL, PHP (LAMP) stack on Ubuntu

Once you complete all the steps above you should be able to see your site by accessing http://Your_Server_Public_IP.

There are a great number of instructions on how to build high-availability servers. Of course this will cost more, since you have to create multiple droplets, but for a personal site it is not needed. Make sure your servers are in the same datacenter region:

  1. How To Create a High Availability Setup with Corosync, Pacemaker, and Floating IPs on Ubuntu 14.04 read also about Floating IPs: How To Use Floating IPs on DigitalOcean
  2. How To Create a High Availability HAProxy Setup with Corosync, Pacemaker, and Floating IPs on Ubuntu 14.04 (complete the first instruction before starting this)
  3. How To Set Up MySQL Master-Master Replication

More interesting instructions:

  1. How To Set Up Automatic Deployment with Git with a VPS
  2. How To Set Up a Host Name with DigitalOcean
  3. How To Use the DigitalOcean Docker Application
  4. And many more tutorials.

Other similar cloud hosting providers that I would recommend:

  1. Linode
  2. Amazon EC2
Apr 29 2016
Apr 29

I’ll never forget the day that I talked to Angie at DrupalCon London and asked her who the community Project Managers were. There were none. I was floored. How could something so essential, useful & critical to success be overlooked? That was the state of our community as I saw it then: nonexistent. Who were the Project Managers in the Drupal Community? I didn’t know a single one in 2011.

Guess what? I do now. It’s so amazing to see so many names come to mind when I think about our PM community niche and to have a mailing list of hundreds of people to reach out to. Know what else is amazing? Our track has had the 2nd largest number of submitted sessions for this Drupalcon. BOOM! If that’s not arriving, then I don’t know what is.

Another fun fact for you: session selection has been extremely difficult. I see that as a very positive thing. When we get so much interest that our stomachs are in knots about having to say no to some really awesome sessions, we know something is being done right.

We hope you will love the track as much as we do. We’ve organized it in such a way that it speaks to PM’s of varying levels of experience. Some sessions are more “PM 101”, others are advanced, and the difficulty of topic progresses along with the con so you can (hopefully) walk in with little more than awareness, and walk out with an incredible amount of basic, intermediate and advanced knowledge.

To keep this momentum going post-conference, we are holding a BoF on Thursday May 12th at 1pm in Room 286 (thanks to Justin Rhodes, our Local Track Chair for suggesting this), as a closing ceremony for this budding community and to promote PM mentorship.

Add the BoF to your schedule

Who should come to the BoF?

Newbies: Join us! Veterans: Come share your know-how! Either way, come to exchange on a variety of Project Management issues.

We want to share our vision with you, and hopefully, inspire you to help us continue to grow the numbers and activities of this group. It would also be very useful for us as chairs to get your immediate feedback about the track so we can, ourselves, iterate on our selection process and give you more of what you want to see.

We hope you’ll join us for the sessions and BoF, and encourage you to hold your own BoF’s! We want to thank you, PM’s, for stepping up to the plate and getting involved. You are all wonderful to me, and I love being right about us needing a “voice” at the conference of the year for many of us. Most of all, we want to thank the DA for making this track possible, and for hearing our pleas for sessions.

In typical Shan fashion, I’ll end this blog post with a bunch of things I wish I could see from the community so you can go hold BoF’s, or write sessions and share your personal project intelligence with the masses. Steal these ideas, go do great things with them!

  • Is Project Manager a dirty word now that Scrum Master is popular?

  • What is the next PM disruption? Are we on the verge of replacing Agile with a new way of thinking? (My gut says yes.)

  • Freelance Project Management: a guide, pro’s & con’s.

  • PM tricks up your sleeve: what’s in your “must have” list of tools & techniques?

  • Share your story: Best & Worst clients you’ve ever had & why, what did you learn from them?

  • Never, ever do this: your warning war stories & how they made you a better PM

  • Community Project Management is tough & how to win at it

  • Whatever you think will help PM’s everywhere in this community & beyond!


Thank you all, and have a great conference!

Shannon Vettes, global project management track co-chair along with Ashleigh Thevenet, and Local Track chair, Justin Rhodes

Apr 29 2016
Apr 29

Today I was tracking down a strange issue with form submission and validation, which finally turned out to be the consequence of a non-obvious incorrect function call inside a class constructor.

While I was trying to fix a minor cosmetic bug in the Drupal Commerce checkout flow configuration UI, I experienced strange problems on form validation and submission, like:

Fatal error:  Call to a member function submitConfigurationForm() on integer in /var/www/drupalvm/web/modules/commerce/modules/checkout/src/Plugin/Commerce/CheckoutFlow/CheckoutFlowWithPanesBase.php on line 457

I knew that it had worked without problems the day before, and Bojan from Commerce Guys confirmed that he couldn't reproduce the bug at all on a clean install. The only clear difference was the custom checkout panes that I had developed so far. So I removed them all first, verifying that the form still worked. Then I started to re-add them one by one until I found the erroneous one. I debugged through different stages of form building, validation and submission, seeing strange things like broken form validation callback declarations and wrong properties in the form state, which carried arrays of my checkout pane's properties instead of the instantiated pane object itself.

After I removed several functions of my pane and completely gutted the internals of the only abstract function a pane has to implement, the error still occurred. I checked the class annotations for the umpteenth time - everything OK. The only remaining code was the constructor and the static factory method that injects the needed services into my plugin. So I removed them, and BOOM! No more error.

So I started to re-add the services one by one: the entity type manager, the renderer and the logger factory. And then I saw what I had done wrong. As I didn't want to write $this->loggerChannelFactory->get('channel_name')->warning('message'), but preferred to use $this->logger->warning('message'), I did the following:

As I found it overkill to declare another custom service to directly inject a specific channel as described in the official documentation, I decided to inject the logger factory, but called the getter function on it to retrieve a specific channel during instantiation - calling $container->get('logger.factory')->get('channel_name') inside the factory function.

The really mean thing here is that it basically works. There's absolutely no problem during checkout at all. But it turned out that it becomes a problem inside the configuration form, where the pane objects are also instantiated. And the part where the settings are changed is called via Ajax. I guess the Ajax call was the reason why the problem arose here and not during checkout.

Finally, I changed the plugin to store the channel factory as a class property and call $this->loggerChannelFactory->get('channel_name') instead, and the errors were gone.

Lessons learned

  1. Never call functions on injected services during instantiation of a plugin when the plugin is loaded during an Ajax form callback.
  2. Better: never call functions on injected services during instantiation of a plugin at all. Try to find a better way. Only do it when you really know that the underlying call is unproblematic - the example call itself calls other services, which is (or can be) problematic.
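In code, the difference between the problematic and the safe constructor looks roughly like this (a sketch; the class and channel names are illustrative, and the base class and remaining constructor arguments are omitted):

```php
<?php

use Drupal\Core\Logger\LoggerChannelFactoryInterface;
use Symfony\Component\DependencyInjection\ContainerInterface;

class ExamplePane /* extends CheckoutPaneBase */ {

  /**
   * The logger channel factory.
   *
   * @var \Drupal\Core\Logger\LoggerChannelFactoryInterface
   */
  protected $loggerChannelFactory;

  // Problematic: calling a method on the service while building the plugin:
  //   new static(..., $container->get('logger.factory')->get('my_channel'));

  // Safe: inject the factory itself, resolve the channel lazily.
  public static function create(ContainerInterface $container /* , ... */) {
    return new static($container->get('logger.factory'));
  }

  public function __construct(LoggerChannelFactoryInterface $logger_channel_factory) {
    $this->loggerChannelFactory = $logger_channel_factory;
  }

  protected function logWarning($message) {
    // The channel is fetched only when actually logging.
    $this->loggerChannelFactory->get('my_channel')->warning($message);
  }

}
```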
jam
Apr 29 2016
Apr 29
Wim Leers, Senior Software Engineer in the Acquia Office of the CTO (aka "OCTO"), has been busy in the last few years making Drupal 8 amazing! His contributions include working with Fabian Franz on aspects of Drupal's new caching and rendering systems to make Drupal 8 performant. Today's podcast is a conversation he and I had about who he is and what he's been up to, following our collaboration on my post about BigPipe.
Apr 29 2016
Apr 29

The monthly Pune Drupal Group meetup for April was hosted by QED42. It was the second PDG meetup to take place in April. You would assume meeting this often would get tiring for other people, but not us! We Drupalers love a good catch-up session.

The first session was kicked off by Prashant and Rahul, interns at QED42, who spoke on "Our experience with Drupal." They explained their journey as newcomers to Drupal, through the lenses of both the CMS and the community: their confusion at the beginning, the new tech and software they have learned, their experience at DrupalCon Asia and their love for the community. A really enjoyable session peppered with earnest observations and cute cat pictures, and a brilliant first-time attempt. Bravo, boys!

Rahul and Prashant

The second session was presented by Arjun Kumar of QED42 on "Introduction to CMI." After a brief overview of CMI and how it differs from the Features-based workflow, he concluded with a demo.

Arjun CMI

After a short discussion on the probable date and location for Pune Drupal Camp, we broke off for BoF sessions, with Navneet leading the discussion on Acquia certifications and further discussions on CMI.

BOF

With 20 people in attendance, we concluded the PDG April meetup with delicious Pahadi sandwiches in our tummies. Have a great weekend and see you soon!

Apr 29 2016
Apr 29

Nowadays, lots of companies can benefit from having their own ecommerce sites. It allows brands to sell anything from physical products up to consultations and appointments. In one of our previous blogs, we outlined the main points as to why Drupal is the one stupendous solution for your ecommerce website. Today, we’ll take you on one of the most enthralling journeys and show you a variety of outstanding examples of ecommerce websites that incorporate Drupal. So come aboard!

Tuannguyen Photography

http://www.tuannguyen.com.au/

When you check out this website, you will be mesmerized by its captivating photography. But that is not the only thing that will astonish you! You'll also encounter an intuitive navigation system and can thoroughly look over each product, with Drupal being the platform that makes it all possible. To implement this online store, the developers used Commerce Kickstart and 46 additional contributed modules.

Artellite

https://artellite.co.uk/

The next one is a well-known platform for art lovers. It allows artists and organizations to do business together. With the help of Drupal, all products get synchronized to remote microsites. Drupal, along with several other efficient modules (Checkout Login, Commerce AutoSKU, Commerce Billy, Commerce Coupon Fixed Amount, Commerce Coupon Percentage Amount, PayPal WPS, Physical Product), adds up to an incredible online platform experience!

Workout

workout.be

This website utilizes Drupal as a content management system to power its ecommerce store. Workout contains over 600 product entities. Furthermore, it includes an online advice section. Just ask away and the store owners will reply!

QDOS

http://www.qdossound.com/

Eye-catching, mobile-friendly and multilingual, this ecommerce site sells phone accessories, captivating users with its user-friendly interface. With the help of Drupal, the developers were able to utilize the device switcher tool and create the "carousel".

Commerce Kickstart has also proven to be an excellent modular foundation for websites like the next one:

Lulishop

http://www.luli-shop.com/

Drupal also appealed to Lulishop in terms of the module ecosystem it had to offer to bring about the functionality they were envisioning. This webstore was built to facilitate purchasing from different online stores and completing all the purchases within a single transaction, all thanks to PayPal's Adaptive (parallel) Payments.

Make Up For Ever

http://www.makeupforever.com/

Even one of the best cosmetics brands in the world, "Make Up For Ever", chose Drupal Commerce! Drupal is a powerful CMS that can handle a content-rich site and third-party integrations. Breathtaking design and smooth navigation combine to create an unforgettable user experience!

Sport Master

http://sportmaster.dk/

Sport Master is a massive Drupal Commerce shop. It virtualizes all the physical stores within the ecommerce realm, so that customers can have a splendid online experience. It includes modules such as Commerce Free Shipping, Commerce Migrate, Commerce Price by Components, Customer Profile Type, Invoice and Commerce DIBS.

If you like any of these online stores, and you’re dreaming about starting your own ecommerce website, our Drupal team is ready to implement your ideas!

Apr 29 2016
Apr 29

We hope you've had a great week!

Thanks for joining us for Episode 7 of the Mediacurrent Friday. Planning on attending DrupalCon next month in New Orleans? If so, you won't want to miss Marketing Director Shellie Hutchens give you 5 Ways to Connect with Mediacurrent at DrupalCon.

Watch the video below to learn more about our team's sessions, BOFs and sprints, where to pick up your swag and enter to win an Apple Watch at our booth #315, more details on snagging your mardi gras bead invitation to our after-party with Lingotek, and how to stay connected with us via our social media accounts.

[embedded content]

Mediacurrent's Sessions

Understanding the Critical Metrics for Your Drupal Business with Dave Terry
Next-level Drupal: Applied Progressive Decoupling with Javascript with Matt Davis
Connecting the Silos: Site Building Tools to Solve Common University Needs with Jeff Diecks
Theme-Driven Development Launches Travelport onto Drupal with Jeff Diecks, Allan Paquette

Mediacurrent's BOFs

Drupal & Marketing Automation Discussion with Jason Want, Shellie Hutchens
Decoupled Blocks Brainstorm Session with Matt Davis

Don't Miss Our Booth


Find us at booth #315 to connect with our team, grab some swag and enter to win an Apple Watch!

Meet Our Team


This year, we'll be taking the largest section of our team yet. We look forward to connecting with you!

Join Our After-Party with Lingotek


Mediacurrent and Lingotek have joined forces for the third consecutive year. Stop by either of our booths for your mardi gras bead invitation to our after-party at The Rusty Nail from 7-11 pm!

Have a topic that you'd like to learn more about? Email [email protected] with any suggestions and stay tuned for Episode 8 in two weeks. Have a great weekend!

Additional Resources
DrupalCon New Orleans | Events
Need More Reasons to Come to DrupalCon New Orleans? | Blog Post

Apr 29 2016
Apr 29

One of our clients recently became interested in taking their inventory control system to the next level. They were offered a solution based on an off-the-shelf ERP system. However, it would have required the client to change their business operations in order to meet the workflow and implicit technical requirements of the ERP system. Moreover, the solution lacked integration with their existing websites and with very popular 3rd-party mobile platforms like WeChat in China (WeChat had 697 million MAUs at the end of December 2015).

We were brought into the project to offer an alternative solution, so we started by reviewing the scale of the business first:

  • One young fashion brand
  • 4 physical retail stores with the 5th one opening within 2 months
  • 1 Magento powered eCommerce site
  • Retail stores, warehouses and a manufacturing factory in different cities and countries, such as Shanghai and Hong Kong
  • One 4-year-old inventory control system which required manual labor to enter the orders placed through Magento
  • 60 products with 8000 SKUs
  • 500 product attributes among Style, Color and Size

We set our goals below for the first phase of the project:

  • Make data complete and rich; this is the foundation for any future analytics and predictions of manufacturing and sales. In short: full stock movement tracking across the whole business.
  • Centralized stock management for physical stores as well as the online Magento store. In other words, we needed an integration between the inventory control system and the Magento stock
  • Offline-friendly Point of Sale system for physical stores
  • Bulk operations for managing stock at warehouses
  • Easy but dynamic reporting for different operational roles to see stock as well as movements
  • High availability or redundancy

We quickly decided to use Drupal 8 (at the time, D8.0.2 had just been released) with the understanding that we might need to do a lot of development within Commerce 2.x. After 3 months, we have met all of our goals, and now we are launching the project with Commerce 8.x-2.0-alpha4 on Drupal 8.1. In the past 3 months, we have contributed the following back to the open source communities:

Below are some screenshots from the project:


(Offline-friendly dedicated page to bulk-manage inventory)


(Stock movement page built with Views)

Special thanks to bojanz at CommerceGuys for the great work on Commerce on Drupal 8

Apr 28 2016
Apr 28

Almost two months and seven thousand lines of code later, here's Commerce 2.0-alpha4. This release brings checkout and revamped product attributes. We've also added upgrade path APIs in preparation for beta1. Meanwhile, we helped push Drupal 8.1 out the door, and fixed many Inline Entity Form bugs.

Reminder: Commerce 2.x alphas are NOT production ready. No upgrade path is provided between alphas. A full reinstall is needed each time.

Drupal 8.1

Six months ago we helped create the Entity API module using the generic entity code from Drupal Commerce. For Drupal 8.1 we moved a lot of that functionality into core. The list includes:

  • add-page and add-form route generation (+ controllers)
  • Rendered entity views handler
  • Base entity property generation
  • RevisionableContentEntityBase (+ RevisionLogInterface)
  • Param converter and route enhancer for revisions

The Entity API module still exists, but is now two thousand lines of code lighter. We'll continue to expand it and then move those APIs into Drupal 8.2.

We also continued to improve the Composer integration in core. Thanks to recent bug fixes, adding Commerce 2.0 to an existing Drupal 8.1 site is now as easy as:

# Add the Drupal Packagist repository before downloading your first module.
composer config repositories.drupal composer https://packagist.drupal-composer.org
# Download Commerce.
composer require "drupal/commerce 8.2.x-dev"

This downloads all dependencies, including both Drupal modules and PHP libraries. Goodbye, Composer Manager! Read more about it in our installation documentation.

Checkout

To date we have done three major iterations of the 2.x checkout API, addressing user and developer feedback going back to the earliest 1.x releases.

The checkout form is now rendered by a checkout flow. Checkout flows are plugins providing multi-step forms that can be configured by store administrators. This configuration is stored in matching commerce_checkout_flow config entities. Each order type can have its own checkout flow. Developers who want to implement a completely custom checkout flow can now easily do so by providing a checkout flow plugin. Others can rely on our default checkout flow implementation which uses checkout panes. The checkout pane configuration UI is AJAX powered and resembles the "Manage display" field UI:


The new checkout flow configuration form.

We've spent a lot of time researching checkout UX. You can find our initial conclusions in this issue. The current implementation is still unstyled and unfinished, but it already contains some interesting improvements:

  1. "Login or continue as guest" checkout pane:
  2. Redesigned review pane (including edit links leading to the original step, better markup):
  3. Contextual action buttons ("Continue to review", "Pay and complete purchase")

Upcoming work includes a checkout progress block, the cart summary, and reusing addresses. See this meta issue for more information.

Product attributes

In our products blog post we talked about simplifying product attributes (such as Color and Size) by removing support for option fields and using only taxonomy. This was a move in the right direction, but it still left us with the problem of mixing attribute-related taxonomy vocabularies (such as Color) with content vocabularies (such as Tags). The taxonomy term creation screen also allows creating only one term at a time, which clients criticized back in the D7 days.

We fixed this by creating entity types for attributes and their values (commerce_product_attribute and commerce_product_attribute_value). This allowed us to build a more streamlined UI that supports creating and reordering multiple values at the same time:


Adding values to a product attribute.

Naturally, these entities are still fieldable. Any additional fields on the attribute type (e.g. image or price fields) will show up for editing beneath each value's Name field.

The next step was to create an API for managing attribute fields. Attribute fields are entity reference fields on product variations which point to a specific attribute value assigned to the variation. Thanks to this API, users can now select attributes on the product variation type form, and the entity reference fields are created automatically as needed:


Adding product attributes to a product variation type.

These changes significantly improve the user experience for merchants and site builders.

Next steps

Our focus for the upcoming week is merging an initial version of the Payment API. You can follow along in this issue to be notified when that happens.

Apr 28 2016
Apr 28

We're all super excited to be heading to the Big Easy soon for DrupalCon!

We wanted to share some of the sessions and attractions we are looking forward to the most. Hope we see all of you there to share ideas, have a drink, and laissez les bon temps rouler!

We polled our team (and some friends) to find out what everyone is the most excited about - the general theme was food, but you can read all the silly and serious details below.

First the "serious" business question -

What session, BoF, training or speaker are you most excited about?

Cemetery of Previous Drupal Versions Doodle

Aimee:

I am excited to hear from Dave Reid and how he is handling module ownership of so many widely used modules. Also, this year has a dedicated Project Management track. The Drupal community needs a healthy influx of the business arts to help strengthen its enterprise-grade use. Drupal isn't just the technology, but the people and process used to implement and support it!

Kristen:

There are so many scheduled activities that look great, it's hard to pick just one. One that I'm hoping to get a lot out of is the "Teaching Drupal to Kids and Teens" BoF on Tuesday at 1pm. Having 10 and 13 year old kids, I plan to come home and use them as guinea pigs for the tips we share in that BoF.

Amyjune:

The session I am most excited about is either "Documentation is getting an Overhaul" or "Using Paragraphs to Weave a Beautiful Content Tapestry".

Chris:

It's my first time there, so I am going in with an open mind and no expectations. Generally just looking forward to a great experience.

Darryl:

Drupal Commerce

Genevieve:

I'm not sure if I have a single session or training I am most excited about, I've got a lot of UX sessions on my schedule at the moment. I am excited to know a lot more about Drupal this year compared to my first trip to DrupalCon last year - so all of it will probably make a bit more sense. Can I be excited about the closing ceremony and finding out where it will be next year?

Kevin:

I still have about three sessions chosen per hour, so I'll have to narrow it down. But I'm looking forward to Sara Wachter-Boettcher's keynote and sprinting on Friday.

Kristin (K2):

I'm most excited about mentoring on Friday - come join us! I'm also excited about the in between times when Lindsay and I get to say "Wow that was great!" or "What the heck were they on about?"

Jason:

I'm most excited about the "Hallway track".

Jeff:

The sessions I am most excited about are "Drupal 8's multilingual APIs -- integrate with all the things" by Gábor Hojtsy, "Automated javascript testing: where we are and what we actually want" by dawehner, and "Watch the Hacker Hack" by mlhess and greggles.

Lindsay:

All the front end sessions. I'm excited to learn as much as possible and put that knowledge to good use.

Patrick:

I'm excited for the "Get off the island! But build bridges back" session! Its focus is building an independent PHP library with the intent of using it in Drupal 7 or 8 or any PHP project! I think it's important to keep the bridges open to outside ideas that we can merge into Drupal so it stays fresh with the latest web development!

Tom:

I'm looking forward to “Must be Intuitive and Easy to Use”: How to Solidify Vague Requirements and Establish Unknown User Needs.

Now, on to the "fun" part -

What are you most excited to see or do in New Orleans?

Drupal Drop Having A Party With A Gator

Aimee:

Wow, so many! I'm excited about our team dinner on a steamboat, the ghost tour, and walking around the French Quarter to discover what makes "les bons temps rouler"!

Kristen:

All of the food! I've heard amazing things about the food in New Orleans so I'm looking forward to beignets, jambalaya, gumbo, and po-boys. I normally try to eat pretty healthfully but I've given myself a pass to eat whatever I'd like that week. :)

Amyjune:

I'm very excited about geocaching in the Lower Garden District. I also like to drink beer...anywhere.

Chris:

Looking forward to some American food, and getting to spend time with the rest of the team!

Darryl:

Preservation Hall Jazz Band!

Genevieve:

I think I am most excited about the Steamboat Natchez Dinner Cruise! Maybe getting to explore the French Quarter and Garden District, or eating too much food, or visiting New Orleans in general, because I've always wanted to go!

Kevin:

Crawfish! And a beignet or more...

Kristin (K2):

I am most excited about beignets from the Cafe Du Monde! I'm also excited about just walking around enjoying the beautiful architecture and music. And beignets. And po-boys. And food. All the food.

Jason:

Hallway track

Jeff:

Hanging out with people I haven't seen in awhile :)

Lindsay:

EVERYTHING. IT IS THE MOST HAUNTED CITY IN AMERICA. THE HISTORY IS AMAZING. THE FOOD IS GOING TO BE SPECTACULAR. ARE MY ANSWERS IN ALL CAPS DEMONSTRATING HOW EXCITED I AM?!?!?!

Patrick:

I don't know much about New Orleans, I'm just excited to travel. It would be fun to see a Pelican so I can figure out why they named their NBA team after them, and mostly I just want to eat!

Tom:

I have some family roots there, so there is a lot of personal excitement for me.

Can't wait to share a beignet, a conversation or maybe even a ghost sighting with you all!

Apr 28 2016
Apr 28

Before you engage with a new agency to build or redesign your site, there are some key data points you should know about your existing site.

In this post, I’ll cover what you need to know about your website traffic, site speed, SEO and hosting and highlight some helpful tools. Knowing these baseline metrics prior to engagement will help you establish a benchmark with which to evaluate your new site.

Traffic Behavior points to know 

1. What are your users’ geographic locations? Determining where your traffic comes from is a very important point that your agency will want to know. Why is this important? One reason is that if you are currently hosting your website on a web server in the basement of your facility and serving web pages globally, you may want to consider employing a CDN to speed up the page load time. From a digital strategy perspective, creating a location specific, personalized user experience can help maximize ROI on your website redesign.

2. Which devices do your visitors use? Thanks to Google’s new search results algorithm, any website that is not mobile-friendly is surely losing traffic, since Google now favors mobile-friendly websites. You’ll want to know what percentage of visitors are using mobile, tablets and desktops to view your content.

3. Are your readers engaged? What percentage of your visitors are new versus returning? Do your readers view multiple pages per visit, or do you have a high bounce rate? What is the average time spent on a page?

4. What are your goal metrics? Are website actions and goals tracked and measured? If so, what actionable items (Sales Inquiries, Newsletter Subscriptions, etc.) are monitored and measured?

5. Do you know what your traffic sources are? Traffic sources are categorized by these terms:

  • Direct traffic is traffic that goes directly to a page on your site from a bookmark, from typing in the URL directly or from clicking on a link in an email.
  • Organic search is traffic coming to your website from a search engine such as Google, Bing or Yahoo.
  • Referral traffic is traffic to your site from other sites that aren’t search engines.

6. Anonymous and Authenticated Users. Who is using your site? Typically users can be organized into two main categories. Anonymous users are users who have no login credentials and typically have little interactivity with the website. Authenticated users typically do have login credentials and have permissions to create other users, create blog posts or perhaps even generate a report on website activity. Are your authenticated users logging into the website with a single sign on service like OpenID or OAuth? Your future vendor will need to know this to ensure that your new site can work with such a service.

7. Does your website currently have a publication workflow? Depending on the purpose of your site, you may have a workflow to create and publish content for your website. For example, if you are a marketing content strategist, you might write blog posts about your widget but require peer editing or a marketing director to approve your post.

Site Speed points to know 

8. How fast does your website load currently? There are many factors that can impact page load times such as page size, web server geographic location, web server capabilities and internet connection speeds. When testing the speed of your site, it is important to consider the initial page load time in addition to the page load time for returning visitors. 

There are several site speed tools available to establish these benchmarks such as:
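Whichever tools you settle on, a quick, vendor-neutral baseline can also be taken from the command line with curl; the URL below is a placeholder for your own site:

```shell
# Report DNS lookup time, time to first byte, and total page load time.
# Replace https://www.example.com/ with your own site's URL.
curl -s -o /dev/null \
  -w 'DNS: %{time_namelookup}s  TTFB: %{time_starttransfer}s  total: %{time_total}s\n' \
  https://www.example.com/
```

Run it a few times (and from more than one location) before treating the numbers as a benchmark, since network conditions vary between requests.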

SEO points to know 

9. Top Keywords. What are the current top keywords being used to find your site? Google Analytics reporting provides an easy way to monitor your SEO keywords.

10. Meta Descriptions and Page Titles. Do your pages currently have unique and appropriate titles and meaningful meta descriptions? Screaming Frog SEO Spider Tool is a great tool to analyze your page titles and meta data. Google Webmaster Tools also has a report to help you sort out duplicate page titles.

11. Sitemap. Does your website currently have a sitemap that both users and search engine bots can use to navigate and crawl your site?

12. 404 Page not found and 403 Forbidden. Are there pages on your current site that have broken links or lead to protected pages? URI Valet can help scan your site and find these 4xx errors.

13. W3C Validation and W3C CSS Validation. Does your current site pass the W3C validation tests, and if not, why? Mediacurrent’s Accessibility Lead, Michelle Williams, says it best in her recent blog post: "Building features into a project from the start is almost always cleaner, easier, faster, and less expensive than retrofitting after the fact. It's no different for building accessibility into a website."

Hosting

14. Where is the site hosted currently? You’ll want to create a list of protocols and credentials that are used in accessing the current host, like FTP or SSH. If you’re not sure where your site is hosted you can determine this by using a tool like Who is Hosting This.

15. Who is the domain registrar? You’ll also want to know what company your domain name is registered with. If you’re not sure what registrar you are using, you can determine this by using a tool like ICANN Whois.

16. Who handles DNS changes? If you end up moving your website to a different host, you’ll need to know who to ask to make these changes and where DNS changes need to occur.

17. Do you have all the login credentials for the site, server, code repository, DNS, domain services, analytics, third party services, etc.? You’ll want to compile a list of all the login credentials that you currently have and obtain the ones you are missing prior to working with an agency.

18. Are you currently using a CDN? Many websites use a Content Delivery Network (CDN) to serve content efficiently, usually static content like images, documents and script files. You’ll want to know whether you are currently using a CDN and, if not, whether the agency will provide you with such a service.

19. What type of disk space requirements do you currently have/need? Before contacting an agency, you’ll want to assess what disk space your current site is using and whether or not that meets your business needs. If you are hosting many large files like video or audio files, you’ll have much greater disk space needs and you’ll want to evaluate that with the agency.

20. What software do you require to run your existing site? (Apache, MySQL, PHP, etc.) Most modern websites require software such as a database, a web server and a programming language that the site is built with. You’ll want to profile what software is used to run your website.

Conclusion

Engaging with an agency without having these key data points can be a costly decision. Take ownership and the necessary steps to gain insight into your website property. In the process, you may discover additional opportunities for growth. Know your website traffic, site speed, SEO and hosting information prior to engagement to make the best decisions for your website.

Additional Resources
How to Budget a Drupal Project | Blog
What it Really Means to Run An “Agile Scrum” Project | Blog
A Discovery Phase: Starting Your Drupal Project Off Right | Blog 
Planning a Rational Redesign: Part 1, Part 2, Part 3 | Blog Series 

xjm
Apr 28 2016
Apr 28

Start: 

2016-05-03 12:00 - 2016-05-05 12:00 UTC

Organizers: 

Event type: 

Online meeting (e.g. IRC meeting)

The monthly core patch (bug fix) release window is this Wednesday, May 04. Drupal 8.1.1 will be released with dozens of fixes for Drupal 8. There will be no Drupal 7 bugfix release this month.

To ensure a reliable release window for the patch release, there will be a Drupal 8.1.x commit freeze from 12:00 UTC Tuesday to 12:00 UTC Thursday. Now is a good time to update your development/staging servers to the latest 8.1.x-dev code and help us catch any regressions in advance. If you do find any regressions, please report them in the issue queue. Thanks!

To see all of the latest changes that will be included in the release, see the 8.1.x commit log.

Other upcoming core release windows after this week include:

  • Wednesday, May 18 (security release window)
  • Wednesday, June 01 (patch release window)
  • Wednesday, October 5 (scheduled minor release)

Drupal 6 is end-of-life and will not receive further releases.

For more information on Drupal core release windows, see the documentation on release timing and security releases, as well as the Drupal core release cycle overview.

Apr 28 2016
Apr 28

The one big question I get asked over and over these days is: "How is Drupal 8 doing?". It's understandable. Drupal 8 is the first new version of Drupal in five years and represents a significant rethinking of Drupal.

So how is Drupal 8 doing? With less than half a year since Drupal 8 was released, I'm happy to answer: outstanding!

As of late March, Drupal.org counted over 60,000 Drupal 8 sites. Looking back at the first four months of Drupal 7, about 30,000 sites had been counted. In other words, Drupal 8 is being adopted twice as fast as Drupal 7 had been in its first four months following the release.

As we near the six-month mark since releasing Drupal 8, the question "How is Drupal 8 doing?" takes on more urgency for the Drupal community, which has a stake in its success. For the answer, I can turn to years of experience: while the number of new Drupal projects typically slows down in the year leading up to the release of a new version, adoption of the newest version takes up to a full year before we see the number of new projects really take off.

Drupal 8 is in the middle of an interesting point in its adoption cycle. This is the phase where customers are looking for budgets to pay for migrations. This is the time when people focus on learning Drupal 8 and its new features. This is when the modules that extend and enhance Drupal need to be ported to Drupal 8; and this is the time when Drupal shops and builders are deep in the three to six month sales cycle it takes to sell Drupal 8 projects. This is often a phase of uncertainty, but all of this is happening now, and every day there is less and less uncertainty. Based on my past experience, I am confident that Drupal 8 will be adopted at "full-force" by the end of 2016.

A few weeks ago I launched the Drupal 2016 product survey to take the pulse of the Drupal community. I plan to talk about the survey results in my DrupalCon keynote in New Orleans on May 10th, but in light of this blog post I felt the results for one of the questions were worth sharing and commenting on sooner:

Survey drupal adoption

Over 1,800 people have answered that question so far. People were allowed to pick up to 3 answers from the list for that single question. As you can see in the graph, the top two reasons people say they haven't upgraded to Drupal 8 yet are (1) they are waiting for contributed modules to become available and (2) they are still learning Drupal 8. The results from the survey confirm what we see with every release of Drupal; it takes time for the ecosystem, both the technology and the people, to come along.

Fortunately, many of the most important modules, such as Rules, Pathauto, Metatag, Field Collection, Token, Panels, Services, and Workbench Moderation, have already been ported and tested for Drupal 8. Combined with the fact that many important modules, like Views and CKEditor, moved to core, I believe we are getting really close to being able to build most websites with Drupal 8.

The second reason people cited for not jumping onto Drupal 8 yet was that they are still learning Drupal 8. One of the great strengths of Drupal has long been the willingness of the community to share its knowledge and teach others how to work with Drupal. We need to stay committed to educating builders and developers who are new to Drupal 8, and DrupalCon New Orleans is an excellent opportunity to share expertise and learn about Drupal 8.

What is most exciting to me is that less than 3% answered that they plan to move off Drupal altogether, and therefore won't upgrade at all. Non-response bias aside, that is an incredible number as it means the vast majority of Drupal users plan to eventually upgrade.

Yes, Drupal 8 is a significant rethinking of Drupal from the version we all knew and loved for so long. It will take time for the Drupal community to understand Drupal's new design and capabilities and how to harness that power, but I am confident Drupal 8 is the right technology at the right time, and the adoption numbers so far back that up. Expect Drupal 8 adoption to start accelerating.

Apr 28 2016
Apr 28

One of Bluespark's key discussion points in planning for DrupalCon this year has been how we can bring more value. One of the things we did in the past was to host 5-minute mini-talks at our booth during the coffee breaks to run through specific experience we have gained through the work we do. Based on the feedback we received from these talks, we decided to do the same thing again this year, and we're excited about the topics we've selected. They are especially relevant for universities and libraries, or any organization focused on education, with a larger number of stakeholders and a more difficult decision-making process.

Bluespark mini-talks at our booth at DrupalCon 2016

We're excited to see you at DrupalCon!

Apr 28 2016
Apr 28

ImageX’s front-end development lead, Trent Stromkins, brings a unique background to his role. As a former designer, he uses his love for good design to develop with aesthetics and user experience in mind by marrying form and function. We spoke with Trent to discuss his experience and his thoughts on where design and development intersect.

  Tell us a little about your background and the path you took to becoming a front-end developer?

I first started in development in 1997, working on static HTML pages, then moving on to PHP and database-backed sites. At the same time, I was working on a career path in architecture but realized that it wasn’t for me, so I used those skills to get into design. I worked professionally in packaging design, all the while building my skills as a web developer on the side. I was getting tired of reinventing the wheel on every project instead of using components that could be efficiently repurposed, which led me to join the Drupal community in 2007, just after Drupal 6 was released. In 2008, when the recession hit, I was “economized” from my design job. This gave me the opportunity to change careers.

I merged those two experiences to fill what I thought was a weak spot in the Drupal community -- that you could often tell when a site was built on Drupal. It was there that I decided that front-end development was going to be my focus. Real-world web development situations, for the most part, are not easily defined as front or back-end work; we jump between both. I saw this as somewhere that I could add value, being that I had worked on many custom hand-built solutions along with graphic design in my past. 

  How does your experience as a graphic designer inform your approach to theming a website?

In packaging design, you can design in Illustrator to precise specifications. But in the actual execution, there are variances -- press gain, skidding on the press, shape of the container, etc. that aren’t exact -- so you learn to adjust your designs to match the output. 

Similarly, for the web, your designs should adjust for how actual content will flow which means you can’t always be pixel perfect. You have to understand the output and make adjustments based on what’s realistic. You need an understanding of the CMS’ output, limitations that may be imposed by browsers, screen sizes, etc.

As a designer, I’m very familiar with design tools so I can speak the same language with other designers. I understand that tools have quirks -- like how fonts render in design files versus on the web, for example. I’m also able to fill any visual gaps in cases where there is no design for a piece, and being a designer, I can make those decisions informed by both disciplines. 

  What are some common mistakes that people make when translating design to a theme?

It’s always best to work with the designer and the client to help them understand where things can and can’t be exactly accurate. An example is colour space in Photoshop versus browsers. 

When you’re looking at the design, everything looks controlled and ideal. But when you’re building it responsively, by definition it adjusts. Designers sometimes tend to be too uniform in their designs -- content isn’t always the same, so it will appear and render differently than in the designs. We can’t make all things on the web exactly accurate to the file. The flip side is that not all developers can notice the details in things like line spacing, white space, or visual flow. 

A lot of developers won’t push back on designers because they don’t speak the language, even when the designs aren’t built within the best practices for management, or within the limitations of the CMS.

  Given your experience as a designer and a developer, what advice would you give to a team trying to bridge the creative and technical disciplines?

The thing that’s important is getting the design team to communicate with the lead developer ahead of time, especially ahead of the client. Have someone from every discipline at the table early on -- don’t work in silos, as it’s always best to collaborate. To work best, teams should iterate through each discipline. This also helps keep teams from chewing through a project’s budget by the time it moves down the chain to development or QA. Work with your team and clients to help them understand best practices on the web. Our role should always include education.

Narrow down the elements of the page, almost like a style guide; show the variations of the elements; and build them as components that can be reused elsewhere on the site (similar to application development) following a universal standard. Also, build tools that enable the layouts and components to be repurposable, and deployed more efficiently throughout the site.
 

  What best practices would you recommend to help improve the workflow between the two disciplines?

Practice componentized design. Design almost like a style guide where you create one design for the overall page and the rest show the variances of any different states. Make sure the components are cohesive and modular and then they become like template options. Working into this trend saves time for both design and development and ideally gives a realistic approach on how it gets assembled.

It also helps when the designer can show examples of how they envision an interaction effect to work on the web, so always give references wherever possible. Using them, developers can determine the specific execution as well as if it’s within budget. 
 

  What gets you excited about the future of development?

React.js has brought to light some interesting concepts, such as using small components, one direction for data flow, and other techniques that make sites and web and mobile apps more modular. When you build websites in a way that allows the growth, evolution, and development of a site to take a componentized form, updating a component doesn’t break the site; it enhances it.

The concept of the web being an application interface and having your site interact like an application -- fast, responsive, and even offline -- has really got my imagination going. People don’t need to install applications any more; the web as an application is becoming more acceptable (Google Docs, Slack, etc.). You can design and build what used to have to be a “native” application, but now have it cross-platform, for the browser, or in a web-interfacing native shell like Electron.

Time will tell, but I see great things for the future of the web!

Apr 28 2016
Apr 28

5/25/16 – 10:00PST ~ 13:00EST ~ 17:00UTC

Register Now!

You know how to get things done with git: pull, add, commit, push; but have you mastered it like a jedi does the force? Nothing is a more lasting record of our work than our git commits. In a galaxy where companies ask you for your GitHub account in lieu of, or in addition to, a resume, we have one more reason to make sure that our commit history is as readable as our code itself.

In this one hour session, we will cover:

  • Rewriting commits
  • Reordering commits
  • Combining commits
  • The perfect commit message
  • Finding bugs using git
  • Avoiding common pitfalls

Join us for this session and you will leave as a jedi-level git master!

These Are Not the Commits You're Looking For

Register Now!

Apr 28 2016
Apr 28

Vagrant Overview, Tips and Resources

As developers, oftentimes we want or need our working environment to be an exact match of the production environment. This is especially true when your project is running a complex or specific infrastructure. Lucky for us, there’s Vagrant!
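To make the idea concrete, a Vagrantfile like the following describes a reproducible VM that every developer can boot with `vagrant up`; the box name, IP address and package list are illustrative assumptions, not a recommended stack:

```ruby
# Vagrantfile -- minimal sketch; box, IP and packages are example choices.
Vagrant.configure("2") do |config|
  # Base image to build the VM from.
  config.vm.box = "ubuntu/trusty64"
  # Reach the VM on a host-only network at this address.
  config.vm.network "private_network", ip: "192.168.33.10"
  # Share the project directory into the guest.
  config.vm.synced_folder ".", "/var/www/site"
  # Provision the same stack the production server runs.
  config.vm.provision "shell", inline: <<-SHELL
    apt-get update
    apt-get install -y apache2 php5 mysql-server
  SHELL
end
```

Because the whole environment is declared in one file checked into the repository, "works on my machine" problems largely disappear.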

Apr 28 2016
Apr 28
This blog describes how to create a new node programmatically in Drupal 7. By default, you can add a new node at node/add. In Drupal, you can also create a node programmatically. Let's see the code below.

<?php
// Create the node object.
$node = new stdClass();
// Set the node type.
$node->type = 'article';
// Fill in the default values for a node of this type.
node_object_prepare($node);
// Set the title.
$node->title = t('Created node programmatically');
// Set the node language.
$node->language = LANGUAGE_NONE;
// Set the body value.
$node->body[LANGUAGE_NONE][0]['value'] = t('This node has been created programmatically in Drupal 7');
// Optionally set the body summary.
//$node->body[LANGUAGE_NONE][0]['summary'] = text_summary(t('This node has been created programmatically in Drupal 7'));
// Set the body text format, e.g. plain_text, filtered_html, full_html.
$node->body[LANGUAGE_NONE][0]['format'] = 'filtered_html';
// Set the author of the node.
$node->uid = 1;
// Status of the node: 0 - unpublished, 1 - published.
$node->status = 1;
// Promoted to front page or not.
$node->promote = 0;
// Sticky at the top of lists or not.
$node->sticky = 0;
// Comments: 0 - hidden, 1 - closed, 2 - open.
$node->comment = 1;

// Add a taxonomy term by its tid.
$node->field_tags[$node->language][]['tid'] = 1;

// Get the path of the file to attach.
$file_path = drupal_get_path('module', 'phponwebsites') . '/Desert.jpg';
// Create the file object.
$file = (object) array(
  'uid' => 1,
  'uri' => $file_path,
  'filemime' => file_get_mimetype($file_path),
  'status' => 1,
);
// Save the file to the public directory. You can specify a subdirectory, for example 'public://images'.
$file = file_copy($file, 'public://');
// Assign the file object to the image field.
$node->field_image[LANGUAGE_NONE][0] = (array) $file;
// Prepare the node for saving.
$node = node_submit($node);
// Save the node.
node_save($node);


After running this code, you can see the newly created node at admin/content. When you view that node, it looks like the image below:

Create a new node programmatially in Drupal 7 at Phponwebsites

I hope you now know how to create a new node programmatically in Drupal 7.
Related articles:
Add new menu item into already created menu in Drupal 7
Add class into menu item in Drupal 7
Create menu tab programmatically in Drupal 7
Add custom fields to search api index in Drupal 7
Clear views cache when insert, update and delete a node in Drupal 7
Create a page without header and footer in Drupal 7
Login using both email and username in Drupal 7
Redirect users into any page after logged into a site in Drupal 7
Apr 28 2016
Apr 28

Not long ago we were talking about the value of testing your updates in feature branch instances. It's the most efficient way of ensuring the quality of applied updates, but it's very time-consuming.

To use this process, you have to maintain your own infrastructure to spin up QA servers quickly, run automated tests and share the testing instance between team members. And preferably, you do it every time an update is applied to any of the modules across your websites.

Sound scary? In fact, it is (at some point). Drop Guard alone is not capable of creating virtual servers or containers of any sort. Its only mission (and where it really shines) is to provide a blazingly fast method of Drupal update detection and application while being flexible enough to integrate into any workflow and with any 3rd party tools.

The last point is particularly handy, as we can delegate the heavy lifting work to another service, and let Drop Guard talk to it so that we're able to benefit from the extra functionality without overcomplicating the product itself.

Speaking of feature branch instances creation, our main requirements to such a service would be:

  • To be able to quickly spin up isolated "preview" instances of our website out of a feature branch;
  • To have an option to execute arbitrary SSH commands and run tests inside the environment;
  • To integrate with major Git hosting providers;
  • To know Drupal specifics and reduce the time needed to onboard Drupal- or MySQL-driven projects;
  • To handle everything in a secure way, so that no sensitive data is exposed.

There are not many solutions like this on the market, but luckily enough there are some really great ones we can rely on in our daily routine.  

Today we'll be talking about Probo.CI - a product of Zivtech. In this practical guide, we'll go step by step to configure Probo.CI and Drop Guard to ensure the best continuous security update and QA process for our test website.

Our goal is to configure Drop Guard to create feature branches for the security updates and pass the remaining work to Probo.CI to launch testing instances for each of the updates.

Ideally, we should end up with a full-blown quality assurance machine for Drupal updates. Excited already?

What Probo is

Probo.CI was initially created to fulfill an internal need for a proper testing workflow. It focuses equally on automated and human QA processes and gives a very granular real-time view into the state of development before the work is merged into a git branch. In simple terms: you can preview and test the website built from a feature branch before that branch is even merged.

While not entirely Drupal focused, it was designed with Drupal in mind and provides excellent integration capabilities. Exactly what we need for our guide. 

Let's break our build first

Probo integrates with GitHub and Bitbucket, so to use it, all you need to do is create an account and enable access to the repository where the codebase is located. Travis CI or Circle CI users will find the onboarding process very familiar. For the purpose of this guide, we have connected Probo to a test repository on GitHub containing the Drupal core and a couple of outdated modules.

Probo.CI project

No builds so far, as we haven't pushed anything to the repository.

The first task is to ensure Probo actually works and is able to work with our repository.

You should remember that Probo mostly operates "inside" a pull request (PR), so let's go ahead and create a separate branch in our test repository containing a configuration file named .probo.yaml with the following contents:

steps:
  - name: Test Connection
    plugin: Shell
    command: 'echo "Erroneous command'

Note the .yaml file syntax and always check it before pushing.
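One quick way to check the syntax locally before pushing (a sketch assuming python3 with the PyYAML package is available; any YAML linter will do the same job) is to parse the file and see whether it loads:

```shell
# Write a known-good .probo.yaml and check that it parses as YAML.
cat > .probo.yaml <<'EOF'
steps:
  - name: Test Connection
    plugin: Shell
    command: 'echo "Erroneous command"'
EOF
python3 -c "import yaml; yaml.safe_load(open('.probo.yaml')); print('YAML OK')"
```

With the broken file from above (the missing closing quote), the same parse call would raise an error instead of printing "YAML OK".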

The next step is to create a PR. After a couple of moments, we can see that our first Probo build has failed!

Wrong command

Let's click on the "Details" link to check what went wrong exactly.

Command explanation

If you were following along patiently, you've probably noticed a syntax error in the .probo.yaml file - the SSH command line is missing the closing quotation mark. How dare we!

Visit Probo.CI documentation to check on the available plugins, integrations and extra options not covered by this guide.

Let's fix the command, commit, push and check the same PR page.

steps:
  - name: Test Connection
    plugin: Shell
    command: 'echo "Erroneous command"'

This time, we've got all greens, so we can be sure that Probo works for us.

As we can see, Probo proves handy from the very beginning. You create a pull request for the feature branch, and if any of the steps defined in the Probo.CI build file fail, you get a notification immediately in the pull request interface. How cool is that!

On the screenshot above, we can see that Probo created a test environment for us, but if we follow the "Details" link and try to access the environment, we will see nothing. Why? 

Simple! Our build consists of Drupal source code only, and it's definitely missing the database.

Working with assets in Probo

For Probo to be able to spin up a website instance, the database is a must, so our next step is to create a compressed and sanitized dump of our database.

Once it's done (with Drush or your favorite tool), go to the Build Assets page for your Probo project and upload the dump. You can do the same with the command line, but for the sake of simplicity we are doing it via the UI.
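As a sketch, producing the dump might look like the following (the Drush commands are shown as comments because they need a live site; the gzip check at the end is worth running before every upload, simulated here with a dummy dump):

```shell
# On the site's server (Drush 8 syntax, shown for reference only):
#   drush sql-sanitize -y
#   drush sql-dump --gzip --result-file=/tmp/test.sql
# Probo expects a gzipped asset; verify the archive is valid gzip
# before uploading it. Simulated locally with a tiny placeholder dump:
printf 'CREATE TABLE node (nid INT);' | gzip > test.sql.gz
gzip -t test.sql.gz && echo "gzip OK"
```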

Probo assets

Harness the power of Drupal plugin

Remember the Shell plugin we used to test the connection? You may think our next step is to manually script the database import, settings.php modifications, and the other things necessary to spin up a Drupal site, but in fact all we need is the Drupal plugin, with a very simple syntax. It will do everything for us.

So let's go ahead and modify our .probo.yaml with the following contents (adjust the dump name if it's different from ours).

assets:
  - test.sql.gz
steps:
  - name: Echo out something
    plugin: Shell
    command: 'echo "Hoooray!"'
  - name: Probo CI site setup
    plugin: Drupal
    database: test.sql.gz
    databaseGzipped: true
    databaseUpdates: true
    clearCaches: true

That's it. What remains now is to commit the file change, push a feature branch and open a pull request.

PR Success

All checks passed and the environment was created. Let's follow the "Details" link and see if we managed to create a working instance.

Localhost

Here it is - the actual working website created just in time, directly from the feature branch. Like a blast!

Testing Drupal updates

While we're very excited about Probo, let's not forget our primary goal here - to spin up a feature branch instance for the security updates detected by Drop Guard.

You should be familiar with basic Drop Guard concepts. If not, please check out recent blog posts and videos on the topic, and don't forget to create an account - it's free and takes just a few moments of your time.

Let's connect Drop Guard to our test project, open up a project overview page and check for updates. You will see a very simple page with all the security issues and available updates listed. The project will be paused initially, as we haven't created any update behaviors yet.

Drop Guard security updates

And wow - we have two Critical security issues! 

Let's configure the update behaviors for Drop Guard to create a feature branch for each of those updates.

Update behaviours

Here we are telling Drop Guard to do the following for the group of security related updates:

  • Create a feature branch for the update task;
  • Skip all manual tests;
  • Create and process the update task immediately after update detection, without waiting for user input;
  • Watch for code modifications and stop the update process in case there are any conflicts.

After playing with update behaviors a bit, let's unpause the project and check for updates again. This time, Drop Guard will not only check for updates but also process them according to the configuration we've just set.

Live site deployment config

Although Drop Guard is not a full featured CI solution, it can be used for simple deployments to production. Let's visit the project edit screen and configure the "Task is closed" event to trigger our super simple deployment action. Once the update task is closed, Drop Guard will log in to the live site server and execute a set of SSH commands.
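For illustration, the SSH command set for such a deployment action could boil down to something like this (the path, branch name, and Drush calls are assumptions; adjust them to your own server layout):

```shell
# Hypothetical commands Drop Guard would run over SSH once the update
# task is closed. The real work is commented out; the echo marks the
# end of the sequence so it can be picked up in logs or monitoring.
#   cd /var/www/example.com
#   git pull origin master
#   drush updatedb -y
#   drush cache-clear all
echo "deployment commands finished"
```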

Deploy task

Drop Guard did the job

Going back to our Git repository, let's look for commits made by Drop Guard. Here they are!

Update

So, our critical security updates are in the feature branch. Let's create a pull request and let Probo.CI spin up a test instance for us.

Update

Drop Guard created two commits - one for each module. Let's scroll to the bottom a bit and check the Probo.CI output.

Probo checks

Probo.CI did its part as well

All clean. Following the "Details" link next to the "Environment ready" message takes us back to the Probo.CI interface, where we can see the details of the build creation process and the link to the test instance.

Probo UI

Let's click on the "View site" link, enter login and password and check the Modules page in Drupal. As we can see, both modules, Views and Services, were successfully updated by Drop Guard, and we are now ready to do our manual and automated tests.

Update successful

Merge and deploy

Once we are sure that our little test update works without issues, we are ready to merge the pull request into the production branch (although you may want to test it in a staging branch first).

Git merge

And finally - go back to Drop Guard and close the update task, which will trigger the deployment action we've created before.

Probo task

In the end, it took just a couple of clicks, the work was done in two browser tabs, and there was no hassle at all! This process could be automated even further, but that's out of the scope of this article.

This practical guide is just an example of how you might use Drop Guard and Probo.CI together to create feature branch instances for your updates. Have an idea for a more creative use? Used Drop Guard or Probo.CI and have feedback to share? Let us know in the comments!

Apr 28 2016

Recently I needed to migrate a small set of content into a Drupal 8 site from a JSON feed, and since documentation for this particular scenario is slightly thin, I decided I'd post the entire process here.

I was given a JSON feed available over the public URL http://www.example.com/api/products.json which looked something like:

{
  "upcs" : [ "11111", "22222" ],
  "products" : [ {
    "upc" : "11111",
    "name" : "Widget",
    "description" : "Helpful for many things.",
    "price" : "14.99"
  }, {
    "upc" : "22222",
    "name" : "Sprocket",
    "description" : "Helpful for things needing sprockets.",
    "price" : "8.99"
  } ]
}

I first created a new 'Product' content type inside Drupal, with the Title field label changed to 'Name', and with additional fields for UPC, Description, and Price. Then I needed to migrate all the data in the JSON feed into Drupal, in the product content type.

Note: at the time of this writing, Drupal 8.1.0 had just been released, and many of the migrate ecosystem of modules (still labeled experimental in Drupal core) require specific or dev versions to work correctly with Drupal 8.1.x's version of the Migrate module.

Required modules

Drupal core includes the base 'Migrate' module, but you'll need to download and enable the following contributed modules to create JSON migrations: Migrate Plus, Migrate Tools, and Migrate Source JSON.

After enabling those modules, you should be able to use the standard Drush commands provided by Migrate Tools to view information about migrations (migrate-status), run a migration (migrate-import [migration]), rollback a migration (migrate-rollback [migration]), etc.

The next step is creating your own custom migration by adding custom migration configuration via a module:

Create a Custom Migration Module

In Drupal 8, instead of creating a special migration class for each migration, registering the migrations in an info hook, etc., you can just create a migration configuration YAML file inside config/install (or, technically, config/optional if you're including the migration config inside a module that does a bunch of other things and may or may not be used with the Migrate module enabled). When your module is installed, the migration configuration is read into the active configuration store.

The first step in creating a custom migration module in Drupal 8 is to create a folder (in this case, migrate_custom_product), and then create an info file with the module information, named migrate_custom_product.info.yml, with the following contents:

type: module
name: Migrate Custom Product
description: 'Custom product migration.'
package: Migration
core: 8.x
dependencies:
  - migrate_plus
  - migrate_source_json

Next, we need to create a migration configuration file, so inside migrate_custom_product/config/install, add a file titled migrate_plus.migration.product.yml (I'm going to call the migration product to keep things simple). Inside this file, define the entire JSON migration (don't worry, I'll go through each part of this configuration in detail later!):

# Migration configuration for products.
id: product
label: Product
migration_group: Products
migration_dependencies: {}

source:
  plugin: json_source
  path: http://www.example.com/api/products.json
  headers:
    Accept: 'application/json'
  identifier: upc
  identifierDepth: 1
  fields:
    - upc
    - name
    - description
    - price

destination:
  plugin: entity:node

process:
  type:
    plugin: default_value
    default_value: product

  title: name
  field_upc: upc
  field_description: description
  field_price: price

  sticky:
    plugin: default_value
    default_value: 0
  uid:
    plugin: default_value
    default_value: 0

The first section defines the migration machine name (id), human-readable label, group, and dependencies. You don't need to separately define the group outside of the migration_group defined here, though you might want to if you have many related migrations that need the same general configuration (see the migrate_example module included in Migrate Plus for more).

source:
  plugin: json_source
  path: http://www.example.com/api/products.json
  headers:
    Accept: 'application/json'
  identifier: upc
  identifierDepth: 1
  fields:
    - upc
    - name
    - description
    - price

The source section defines the migration source and provides extra data to help the source plugin know what information to retrieve, how it's formatted, etc. In this case, it's a very simple feed, and we don't need to do any special transformation to the data, so we can just give a list of fields to bring across into the Drupal Product content type.

The most important parts here are the path (which tells the JSON source plugin where to go to get the data), the identifier (the unique ID that should be used to match content in Drupal to content in the feed), and the identifierDepth (the level in the feed's hierarchy where the identifier is located).
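To make identifierDepth concrete, here's a quick throwaway check (plain Python run from a shell heredoc, not part of the migration itself) against the sample feed at the top of this post. Depth 0 is the feed root; each product record sits one level below it, so its upc identifier lives at depth 1:

```shell
python3 - <<'EOF'
import json

# A trimmed copy of the sample products.json feed from above.
feed = json.loads('''{
  "upcs": ["11111", "22222"],
  "products": [
    {"upc": "11111", "name": "Widget", "price": "14.99"},
    {"upc": "22222", "name": "Sprocket", "price": "8.99"}
  ]
}''')

# Each record in "products" is one level down from the root,
# which is why the upc identifier sits at depth 1.
for record in feed["products"]:
    print(record["upc"])
EOF
```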

destination:
  plugin: entity:node

Next we tell Migrate the data should be saved to a node entity (you could also define a destination of entity:taxonomy_term, entity:user, etc.).

process:
  type:
    plugin: default_value
    default_value: product

  title: name
  field_upc: upc
  field_description: description
  field_price: price

  sticky:
    plugin: default_value
    default_value: 0
  uid:
    plugin: default_value
    default_value: 0

Inside the process configuration, we'll tell Migrate which specific node type to migrate content into (in this case, product), then we'll give a simple field mapping between the Drupal field name (e.g. title) and the name of the field in the JSON feed's individual record (name). For certain properties, like a node's sticky status, or the uid, you can provide a default using the default_value plugin.

Enabling the module, running a migration

Once the module is ready, go to the module page or use Drush to enable it, then use migrate-status to make sure the Product migration configuration was picked up by Migrate:

$ drush migrate-status
Group: Products  Status  Total  Imported  Unprocessed  Last imported
product          Idle    2      0         2

Use migrate-import to kick off the product migration:

$ drush migrate-import product
Processed 2 items (2 created, 0 updated, 0 failed, 0 ignored) - done with 'product'           [status]

You can then check under the content administration page to see if the products were migrated successfully:

Drupal 8 content admin - successful product JSON migration

If the products appear here, you're done! But you'll probably need to do some extra data transformation using a custom JSONReader to transform the data from the JSON feed into your custom content type. That's another topic for another day!

Note: Currently, the Migrate UI at /admin/structure/migrate is broken in Drupal 8.1.x, so using Drush is the only way to inspect and interact with migrations; even with a working UI, it's generally best to use Drush to inspect, run, roll back, and otherwise interact with migrations.

Reinstalling the configuration for testing

Since the configuration you define inside your module's config/install directory is only read into the active configuration store when you enable the module, you will need to re-import this configuration frequently while developing the migration. There are two ways you can do this. You could use code like the following in your custom product migration's migrate_custom_product.install file:

<?php

/**
 * Implements hook_uninstall().
 */
function migrate_custom_product_uninstall() {
  db_query("DELETE FROM {config} WHERE name LIKE 'migrate_plus.migration.product%'");
  drupal_flush_all_caches();
}

...or you can use the Configuration Development module to easily re-import the configuration continuously or on-demand. The latter option is recommended, and is also the most efficient when dealing with more than just a single migration's configuration. I have a feeling config_devel will be a common module in a Drupal 8 developer's tool belt.

Further Reading

Some of the inspiration for this post was found in this more fully-featured example JSON migration module, which was referenced in the issue Include JSON example in the module on Drupal.org. You should also make sure to read through the Migrate API in Drupal 8 documentation.

Apr 27 2016

About the Client

In a distributed team, a kickoff usually happens with a phone call. While pre-sales communication will have already happened, the kickoff call is usually the first time when everyone working on a team will be together at once. As a team member from the vendor, this is your chance to ask questions of the business stakeholders who might not be available day to day. I like to find out:

  • Why are we all here? Are the business, technology, or creative concerns the primary driver?
  • What is the business looking for their team to learn and accomplish?
  • What are the external constraints on the project? Are there timelines and due dates, or other projects dependent on our work? What are the upcoming decisions and turning points in the business that could have a big effect on the project?

About Me

We all have ideas about how we want to work and be utilized on a project. Making sure they align with the client is very important to work out during a kickoff. Sometimes, a client has specific priorities of work to get done. Other times, they might not have realized you have skills in a specific subject area that they really need. It’s really important to understand your role on a project, especially if you have multiple skill sets. Perhaps you’re a great Drupal site builder, but what the client really needs is to use your skills to organize and clean up their content model. Figuring all of that out is a great kickoff topic.

About Us

Once we understand each other, then we can start to figure out how we work together. It’s kind of like moving in with someone. You might know each other very well, but how are you going to handle talking with your landlord? How are each person’s work schedules going to integrate?

For a distributed team, communication tools are at the core of this discussion. We all have email, chat rooms, instant messaging, video, and more. What tools are best used when? Are there specific tools the client prefers, or tools that they can’t use because of their company’s network setup? Finding the middle ground between “all mediums, all the time” and “it’s all in person until you ask” is key.

Recurring meetings are another good topic to cover. Some companies will take new team members, add them to every recurring meeting, and use up a 10 hour-per-week consulting engagement with nothing but agile ceremony. Perhaps that’s what you’re needed for—or perhaps they’ve just operated out of habit. Finding a good balance will go a long way towards building a sustainable relationship.

Sharing each person’s timezones and availability also helps to keep expectations reasonable. Some companies have recurring meetings (like Lullabot’s Monday / Friday Team Calls) which will always be booked. Sometimes individuals have days their hours are different due to personal or family commitments. Identify the stakeholders who have the “worst” availability and give them extra flexibility in scheduling. Knowing all of this ahead of time will help prevent lots of back-and-forth over meeting times.

Finally, find out who you should go to if work is blocked. That might be a stakeholder or project manager on the client’s side, but it could also be one of your coworkers. Having someone identified to the team as the “unblocker of work” helps keep the project running smoothly and personal tensions low.

About Tech

For development projects, the first question I ask is “will we need any sort of VPN access?”. VPN access is almost always a pain to get set up—many companies aren’t able to smoothly set up contractors who are entirely remote. It’s not unheard of for VPN access to take days or weeks to set up. If critical resources are behind a VPN, it’s a good idea to start setting that up before an official kickoff.

Barring the VPN-monster, figuring out where code repositories are, where tickets are managed, and how development and QA servers work are all good kickoff topics. Get your accounts created and make sure they all work. If a client is missing anything (like a good QA environment or ticket system), this is when you can make some recommendations.

About Onsites

Some projects will have a kickoff colocated somewhere, either at a client’s office or at a location central to everyone. In distributed teams, an in-person meeting can be incredibly useful in understanding each person. The subtle, dry humour of your video expert becomes apparent in-person, but could have been misunderstood online. Most of the above can be handled in the first hour of an onsite visit, leaving much more time to fill given the travel time!

We like to focus onsites on the topics that are significant unknowns, require a significant number of people across many teams, and are likely to require whiteboards, diagrams, and group brainstorming. Project discoveries are a classic fit; it’s common to meet with many different people from different departments, and doing first meetings in person can be a significant time saver. The goal of an onsite shouldn’t be to “kick off” the project—it should be to build the shared understanding a team needs so they can be effective.

But what about sales engineering?

I’m sure some readers are now thinking “Wait a minute! Aren’t these all things you should know before a contract is signed?”. It’s true! Going into a kickoff without any of this information would be a serious risk.

It’s important to remember that the team on a kickoff isn’t going to be identical to the team who did the sales engineering work. Both the client and the vendor will have new people just getting started. As well, it’s useful to hear the project parameters one more time. Discrepancies in the discussions can alert the team to any misunderstandings, or more likely changes in the business environment running up to the signing of the contract. Especially on projects where a team is already working, hearing about progress or changes made in the week between signing an SOW and kickoff can be invaluable.

What did you learn the last time you helped to kick off a project? Let us know in the comments!
