Jul 02 2019
Jul 02

In the last article we managed to set up all commerce types and additional modules to import the data from our csv files. Now we need to do this regularly in order to provide users with the latest updates from our remote Hotellinx server.

Importing data from the server and writing it into the CSV files is done by hook_cron() in our custom module from the first article. We want to do this once every hour, so we use the Ultimate Cron module to set up different execution times for different cron jobs.

Feeds can also be called via cron, which imports the data every hour. As mentioned in the second article, the feeds have to be imported in the correct order for the Commerce variations to actually become visible. Unfortunately, we cannot specify the order in the Feeds module, but because we repeat this process every hour, it takes at most two hours for the variations to be imported correctly.

Jun 19 2019
Jun 19

In the last article, we programmed a new module which created CSV files from data on a different server. Now, we need to migrate this data into our live system. As a first step we need to create a Commerce store to which our products and product variations belong. Otherwise, they cannot be imported.

Commerce product and product variations

Afterwards we need a commerce product type and a commerce product variation type. A product type stores a general product, while a product variation type stores different variations of said product type. E.g. "Pair of cotton pants" would be defined as a product type, while "Pair of cotton pants in red/blue/yellow, size s/m/l" would be defined as a product variation type.

Our product type is a 'room' and as fields, we want to store an id, a title and an image carousel showcasing the room.

Our product variation type needs to display different variations of a 'room'. Here we also need the id and additional fields for a variation title and a description of the room variation.

Feed settings

After these types are finished, we need to define two corresponding feed types. Both feed types need almost the same settings.

Feeds Tamper basic settings

The fetcher is 'Directory' and as parser we choose 'CSV'. As processor we choose 'product' for our rooms and 'product variation' for our room variations. As product type/product variation type we choose the Commerce types we just created in the first step.

The other settings can be changed to personal liking, except the default delimiter in the parser settings, which needs to be a comma (,) because that is what we chose for our CSV files.

Now, the feed type mapping has to be set up. The feeds will only work if all mandatory fields are mapped correctly. We need to select 'Feed: Authored by' and 'Feed: Authored on' as source fields. Additionally we need 'Status' and 'SKU', which have to be set for Commerce products. These are the minimum settings we need for the feeds to run. Our product type room additionally needs 'Store', otherwise Commerce cannot import the product. Store and status need to be imported from the CSV files; authored by and authored on do not. The store has to be the one for which the product and product variation types were created.

feeds mapping

We select all other fields from the CSV file we'd like to map to our product type/product variation type and save.

If we were to import now, the variations would not appear. They are linked to their corresponding products by the SKU defined in the CSV file, but the Feeds module needs help to understand the link. The Feeds Tamper module adds another tab, 'Tamper', to the feed type settings. Here we create a plugin for our feed type room. We select 'Explode', as field we choose the SKU and as delimiter we select '|', because that is what we chose during the CSV file creation.

feeds tamper
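To illustrate what the Explode plugin effectively does to the mapped SKU source, here is a minimal PHP sketch (not part of the actual module code):

// "105|106" from the products CSV becomes two separate SKU values,
// so Feeds can link both variations to the same product.
$skus = explode('|', '105|106');
// $skus is now ['105', '106'].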

Creating Feeds and importing

After defining the feed types, we have to create actual feeds from them in Content > Feeds. All we need is the path to our CSV file and the correct delimiter ','. After creating one feed for the products and one for the product variations, all we have to do is activate them. Note that feeds are content, not configuration: they cannot be exported to config files, so they either have to be migrated like other content or created again on each environment.

feeds

When actually importing, the order is important. We need to import the product variations before the products, otherwise they will not appear. The Feeds module creates a temporary table with the newly imported product variations and compares their SKUs with the SKUs of newly imported products, connecting them accordingly. It does not work the other way around.

Jan 23 2019
Jan 23

The task was to integrate an ERP system (more specifically, a hotel management system called Hotellinx) which is (and should always stay) the single source of truth (SSOT) for the products it stores and administers. If you want to sell these products from another system, like a webshop, you need to transfer the data there to showcase your goods to customers. Another way of putting it: the webshop is only there to make a purchase, but all the data is stored in the ERP system, the SSOT.

Importing old content to a new website (usually only once) is called migration, which is a part of every site creation. If you do the importing regularly by scheduling it automatically, the process is called integration. Integration usually goes both ways, so the sequel of this article will talk about that. Here we are going to get data from the Hotellinx API, write the result into a CSV file and import the data as Commerce products (more specifically, product variations) into our Drupal 8 project. Below is an image of the integration; in this article we implement the part that brings the data into a CSV file.

Full Hotellinx ERP integration architecture

The modules we are going to need are Commerce, Commerce Feeds, Feeds and Feeds Tamper. We use Feeds for migrating the data because it is easy and appropriate for this case. Of course there are other options, like custom PHP scripts, the Drupal Migrate module/framework or doing it by hand.

As a first step we need a custom module which downloads data from the Hotellinx API and writes the result into a CSV file.

We use a MODULE.install file with a MODULE_install() function to set the default configuration, so it can later be changed from the Drupal UI.

function MODULE_install() {
  \Drupal::configFactory()
    ->getEditable('MODULE.settings')
    ->set('Username', '')
    ->set('Password', '')
    ->set('url', 'link/to/hotellinx.api')
    ->set('filepath', 'public://MODULE/')
    ->set('productsFileName', 'MODULE_hotellinx_rooms.csv')
    ->set('VariationsFileName', 'MODULE_hotellinx_variations.csv')
    ->save();
}

Then we create an admin UI in MODULE/src/Form which enables the user to edit the Hotellinx credentials or the path of the CSV files. We need a MODULE.routing.yml to make the form accessible from the UI.

MODULE.settings: 
  path: '/admin/config/services/MODULE' 
  defaults: 
    _form: '\Drupal\MODULE\Form\MigrationAdminUI' 
    _title: 'Hotellinx Settings'
  requirements: 
    _permission: 'access administration pages'
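The MigrationAdminUI form class referenced by the route is not shown here. A minimal sketch of what it could look like, assuming a standard ConfigFormBase subclass that edits the keys set in MODULE_install() (field names and labels are illustrative):

<?php

namespace Drupal\MODULE\Form;

use Drupal\Core\Form\ConfigFormBase;
use Drupal\Core\Form\FormStateInterface;

/**
 * Admin form for the Hotellinx settings (hypothetical sketch).
 */
class MigrationAdminUI extends ConfigFormBase {

  public function getFormId() {
    return 'MODULE_settings_form';
  }

  protected function getEditableConfigNames() {
    return ['MODULE.settings'];
  }

  public function buildForm(array $form, FormStateInterface $form_state) {
    $config = $this->config('MODULE.settings');
    // One text field per setting created in MODULE_install().
    foreach (['Username', 'Password', 'url', 'filepath'] as $key) {
      $form[$key] = [
        '#type' => 'textfield',
        '#title' => $this->t($key),
        '#default_value' => $config->get($key),
      ];
    }
    return parent::buildForm($form, $form_state);
  }

  public function submitForm(array &$form, FormStateInterface $form_state) {
    $config = $this->config('MODULE.settings');
    foreach (['Username', 'Password', 'url', 'filepath'] as $key) {
      $config->set($key, $form_state->getValue($key));
    }
    $config->save();
    parent::submitForm($form, $form_state);
  }

}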

Next we add the route to the MODULE.info.yml file by adding

configure: MODULE.settings

as the last line. This way a link to the settings page is accessible from the module list page.

You can also add a link to the admin menu. Create the file MODULE.links.menu.yml and add

MODULE.adminUI: 
  title: 'Hotellinx Settings'
  parent: system.admin_config_services
  route_name: MODULE.settings 
  weight: 10

This enables a user to find the admin form under Configuration > Web services and change module settings like the API endpoint, tokens or passwords.

Now we can finally create the actual functionality. We are using hook_cron() to activate the download process.
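A minimal sketch of that hook, assuming the request and CSV writing shown below are wrapped in a hypothetical helper function:

/**
 * Implements hook_cron().
 */
function MODULE_cron() {
  // Hypothetical helper wrapping the cURL request and CSV writing below.
  MODULE_fetch_hotellinx_data();
}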

We use PHP cURL to connect to the external Hotellinx API.

$url = \Drupal::configFactory()
  ->getEditable('MODULE.settings')
  ->get('url');
$ch = curl_init();
curl_setopt($ch, CURLOPT_URL, $url);
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: text/xml'));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
// $request() builds the XML request body, see below.
curl_setopt($ch, CURLOPT_POSTFIELDS, $request());
$xmlstring = curl_exec($ch);
curl_close($ch);

The general $request() looks like this:

$request = function () {
  $config = \Drupal::configFactory()->getEditable('MODULE.settings');
  $Username = $config->get('Username');
  $Password = $config->get('Password');
  // Build the XML request body here, using the credentials above.
  return "<HOTELLINX POST REQUEST>";
};

This returns the request body as it is defined in the Hotellinx API, after adding the necessary and optional parameters. So replace the <HOTELLINX POST REQUEST> part with the desired API request.

The result is written into two different CSV files like this:

// $roomlist holds one CSV line per room; $productMap holds the value that
// is appended to each line, keyed by the line's first column.
$file = fopen($productsPath, "w");
foreach ($roomlist as $line) {
  $line .= $productMap[explode(',', $line)[0]];
  fwrite($file, $line . "\n");
}
fclose($file);

According to the Feeds module documentation, all strings have to be surrounded by quotes like "string".
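A minimal sketch of how that quoting could be handled while building each CSV value (the helper name is hypothetical and not part of the original module):

function MODULE_csv_quote($value) {
  // Numeric values such as SKUs and ids stay bare; strings are wrapped in
  // quotes, with inner quotes doubled as is usual in CSV.
  return is_numeric($value) ? $value : '"' . str_replace('"', '""', $value) . '"';
}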

Hotellinx provides us with rooms, which are sold at different rates. In order to translate that into Commerce "terms" (so that it is technically practical) we need two files. The rates will be migrated as Commerce variations and the rooms will be migrated as products. Each product can be sold in different variations (e.g. price, offer, number of people, etc.).

The first variation file contains the rates and some important variables which are needed later in the integration:

"SKU","RoomTypeId",            "Title"
  104,          67,   "Single package"
  105,          68,   "Family package"
  106,          68,"Newly wed package"

The different variations are identified by the SKU. The RoomTypeId defines to which room these rates belong. Because the Feeds module cannot make this connection without external help, we need a second file for the products.

"RoomTypeId",      "Title",   "Store",  "SKU"
          67,"Single Room","my_store",    104
          68,"Double Room","my_store",105|106

This file lists the products, in this case hotel rooms, and defines, with a pipe as delimiter, which variations belong to each product (see SKU 105|106; the pipe is handled by the Feeds Tamper module, which extends Feeds).
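As a hedged sketch, the SKU column of the products file could be built roughly like this (the variable names are assumptions based on the CSV-writing snippet above):

// Join the SKUs of a room's variations with a pipe and prepend the column
// separator, so the result can be appended to that room's product line
// (compare $productMap in the fwrite() loop above).
$productMap[$roomTypeId] = ',' . implode('|', $variationSkus);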

After running the program (executing cron) we have two different files, one containing future Commerce products and the other containing future Commerce variations of those products.

The next step is migrating the data from the CSV files into usable Commerce entities - products and variations - by using the Feeds and Commerce Feeds modules.

Sep 06 2018
Sep 06

Image by @grohsfabian

There is a lot going on in the Drupal community:

1. The biggest news is that version 8.6 was just released with many improvements. You can read more about the improvements here.

What we have been waiting for is the ability to create files via REST requests. Now, when building mobile apps, we can easily interact with the backend when dealing with files like images.

2. Next week, on 10-14 September, DrupalCon Europe will be held in Darmstadt, Germany. See you there!

3. Drupal.org announced a partnership with GitLab, which is also the project management tool we use.

Exciting times, as the project and the tools to build great web apps just keep getting better!

Apr 26 2018
Apr 26

As already noted in the last post, the previously found Drupal vulnerability has been weaponized and automated attacks have now been running for a couple of weeks against Drupal websites all around the internet.

Yesterday there was another release from the Drupal security team. This means: update your site AGAIN and IMMEDIATELY if you have not yet done it.

Because all these security fixes are linked, we probably have very little time until the newly found security hole is weaponized again. That is why the release window for the update was only a few days this time.

Usually when someone finds a "thing" in the code, which in this case is a security hole, others (the security team and the bad guys) start looking at the code more carefully and might find other interesting "things" (linked vulnerabilities). Because of the new trend of making money by using your server to mine cryptocurrencies, the bad guys are really motivated: there is money directly involved!

IMPORTANT!
If you have not yet updated your site for DrupalGeddon2, it is probably too late. Your server has been hacked and you should start planning to restore the site and checking your server for cryptomining software.

Apr 14 2018
Apr 14

UPDATE on 14 April 2018: Automated attacks are now active, after Checkpoint released a post two days ago explaining how the vulnerability works.

Apr 04 2018
Apr 04

First (before the problem)

If you have a Drupal site and this is the first time you hear about the critical vulnerability published on March 28 2018, read the last two chapters immediately.

During the last week there has been a lot of commotion in the Drupal community around the world about the security hole named DrupalGeddon2 [1] [2]. This vulnerability was "highly critical" and got many people scared - unnecessarily. This post tries to explain when the vulnerability becomes a problem, when it is actually not a problem, and how to handle the situation right.

The Drupal project has its own dedicated security team [3] which takes care of security issues: patching found issues right and dealing with the public announcements about them. A week before the publication [4] there was an announcement that a vulnerability had been found and that a patch would be released on March 28th between 18:00-19:30 UTC.

In other words, that announcement tells all the site owners and people responsible for updates: "Be ready to patch your site when we make the announcement. IMMEDIATELY!"

TIP Solution and many other companies who do things right reserved time in their calendars for 28.-29.3.2018 to patch the sites.

A day or a couple of days before the announcement

When the official announcement about the vulnerability was made, it was known that the patch would be for core and that sites could be patched fairly easily, so there would not be a lot of downtime.

We told the site owners about the update and that the sites would be patched within 0-48 hours after the announcement.

All our clients have a maintenance contract, which makes it our responsibility to keep an eye on the announcements and keep the sites updated without any additional costs.

Day of publication

The cores were easy to patch (thanks to the Composer workflow [5] and doing the development right), so all the sites were safely patched after a few hours. The process took a little longer than expected because we also brought the sites' modules up to date.

When the sites were patched, the site owners were informed again.

Present moment (five days after the announcement)

Before the publication there was some talk that there might be attacks after a few hours. At the moment there is still no information about any sites having been compromised. We'll see...

Sometimes hackers share their knowledge of how to exploit a vulnerability (a PoC), and sometimes they keep it to themselves so they can crack sites without anyone noticing. Therefore we can never be sure whether attacks are available or ongoing right now, so the sites should be patched anyway.

How do you know if your site is patched?

Go to https://yourwebsite.com/admin/reports/status and check that your site's core is at least 7.58 or 8.5.1, or alternatively that someone has patched your site manually.

If you are not sure, contact your site's administrator immediately!

Summary

You don't need to worry about security updates if you are ready to patch the site as soon as there is a release. This is why a security hole found (and patched) by the white hats is a good thing. Finding vulnerabilities and patching them is a natural part of every software project.

Thank you, security team, for handling the case right!

[1] https://www.drupal.org/sa-core-2018-002

[2] https://www.drupal.org/PSA-2014-003

[3] https://www.drupal.org/drupal-security-team

[4] https://www.drupal.org/psa-2018-001

[5] https://github.com/drupal-composer/drupal-project

Jan 26 2018
Jan 26

When adding a social media (SoMe) feed or "SoMe wall", like the one here in the bottom right corner, to your webpage, the big question is: why would you put it on your site? Why is it there?

[embedded content]

If the answer is something like "to get some content onto the site", think again. The SoMe feed might benefit you or harm you, depending on how you manage it.

Here are some points to consider:

PROS

  • Gives your users relevant content, if it is news, events or some other data about your business
  • Shows that you are active in SoMe and your business is alive
  • Impacts positively on your page rank (if done right)

CONS

  • The embed code (see Installing below) has to fetch the content from a third-party service like Facebook or Twitter, which can be slow. That again affects your page rank, and your users will go away because nobody likes slow sites
  • It takes up space on your site and distracts the user from relevant content. Just put a link (with an icon) to your Facebook page instead
  • If your visitors actually click something in the feed (which usually is the purpose of putting it there), you lead them away from your site

If you still decide to put the feed on your site, the next step should be thinking about what you want to show to your visitors and how to embed the code into your site. We use Facebook as an example, but all the major players offer pretty much the same possibilities.

INSTALLING (Setting up the feed)

  1. Use a widget (plugin) or embed the code into your site.
  2. If you have Drupal, WordPress or some other CMS, you can create an app and then use a module like Social Media Channel Feed to do the job (how is shown in the video above)
  3. Embed an iframe directly, which we are not even going to discuss. :)

SEO DISCUSSION (OR SPECULATION)

The widget or embedded JS fetches the content when the user (or Googlebot, or a spider) arrives, so the markup is not actually there yet. Even if the spider reads JS, it does not really help. Even if we used caching or got the content there before the spider crawls, it would still not help, because the feed is only an iframe (and not much respected).

The second option is to use an "app", so we can fetch the content of the feed beforehand and render it into the page. Then, when the user comes, it is already there and has been analyzed by Googlebot / the page rank algorithm. This might actually help you, or it might not, depending on what is there and where it is placed.

If you get the feed there in time, the next questions are what and where to show it:

  • All your actions in SoMe?
  • Inbound and outbound links (and their ratio) in the feed?
  • On every single page?
  • In which area, considering the HTML5 semantic tags <article>, <section>, <aside>, or inside some unimportant <div>?
  • Is it then duplicate content on your site?
  • What happens when the user has an ad blocker?

If you then update the feed (post stuff in SoMe) regularly, your site seems to be active and search engines like it more. So, if done right, it might help your page rank. Nobody knows exactly how much the spiders like (or hate) social media feed markup on your page. But if you are going to have one, do it right.

DISCUSSION

I think having a SoMe feed or not should not be a question of SEO, but rather of the pros and cons listed above.

What do you think? Which relevant aspects did I forget to mention about having a SoMe feed?

Jan 22 2018
Jan 22

Happy anniversary!

Another year of work is behind us, which means thousands of patches, new initiatives and modules. Simply put: an ever more modern and robust platform for creating web applications and sites.

In an article [1] published by the Drupal Association, they gave some recent figures about the development of the framework over the last year, some of the more interesting of which were:

  • There are already 190,000 sites running Drupal 8 on the web.
  • There are at least 1,597 stable modules for version 8.
  • 7,240 developers around the world contributed code back to the project.

How are the stats from your CMS project?

You can read more about the benefits and how the community works here. In the video below you can see which organizations already use the Drupal platform to grow and flourish in their businesses.

[embedded content]

Original article and more info:

[1] https://www.drupal.org/blog/happy-seventeenth-birthday-drupal

Jun 23 2017
Jun 23

Our goal is to create a block with a 3D image effect backed by a view, so we can easily filter which images are displayed. I chose cloud9carousel because it had the style we were looking for. You can check the default look here.

The first step is creating the view with the name carousel3d. There is no need to change a lot, just the fields: I remove everything and add the field 'Image'. In the field settings, I choose 'medium' as the image style. This is optional and can be changed later on. Now I add a block display and give it the machine name 'block3d'.

Done. I can now place this block, which displays all kinds of images, anywhere on the website. Lastly, I use Features to export the view carousel3d.

The next step is creating the custom module. I name it slideshow and it contains a slideshow.module, slideshow.libraries.yml, slideshow.info.yml, a folder 'js' and a folder 'config'. The config folder contains a folder 'install', and there I put the views.view.carousel3d.yml that I exported via Features. When my module is installed, Drupal will scan the config/install folder and install everything in it automatically.
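The resulting module layout looks roughly like this (a sketch listing only the files mentioned in this article):

slideshow/
  slideshow.info.yml
  slideshow.libraries.yml
  slideshow.module
  config/
    install/
      views.view.carousel3d.yml
  js/
    jquery.cloud9carousel.js
    reflection.js
    slideshow.js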

I need to add the JS sources to the js folder. It contains jquery.cloud9carousel.js from cloud9carousel and another JS file, slideshow.js.

slideshow.libraries.yml must define the JS files that are included in this module. (I also included reflection.js because it adds reflections to the slideshow, but this is totally optional and is explained on the cloud9carousel page.)

slideshow:
  version: VERSION
  js:
    js/slideshow.js: {}
    js/jquery.cloud9carousel.js: {}
    js/reflection.js: {}
  dependencies:
    - core/jquery
    - core/jquery.once
    - core/drupal

slideshow.module must attach the library to the page. I use hook_page_attachments_alter(). The library name is 'modulename/libraryname', which are the same in this case.

function slideshow_page_attachments_alter(array &$attachments) {
  $attachments['#attached']['library'][] = 'slideshow/slideshow';
}

I've created the view, declared the JS and added it to the website. Now I have to connect the two. This happens in slideshow.js. I need to select, with jQuery, the block that is created by Drupal; its id begins with block-views-block-VIEWNAME-BLOCKNAME. Then I add a few cloud9carousel options, including the itemClass. There I have to use 'field-content>img' to get the images regardless of the resolution defined in the carousel3d view. (The key 'mirror' belongs to reflection.js and is optional.)

jQuery(document).ready(function ($) {
    var height = 150;
    var width = 600;
    var $block = $("[id^=block-views-block-carousel3d-block3d]");
    $block.css('visibility', 'hidden').Cloud9Carousel({
        autoPlay: 1,
        // bringToFront: true,
        itemClass: "field-content>img",
        farScale: 0.5,
        xRadius: width,
        yRadius: height,
        mirror: {
            gap: 12, /* 12 pixel gap between item and reflection */
            height: 0.2, /* 20% of item height */
            opacity: 0.4 /* 40% opacity at the top */
        },
        onLoaded: function () {
            // Show carousel
            $block.css('visibility', 'visible');
            $block.height(3 * height);
            $block.width(2 * width);
        }

    });
});

Now everything is done. After installing the module, I can go to the block layout and place my block carousel3d wherever I like.

May 31 2017
May 31

We needed to show several locations on a map, grouping them together, which is called clustering. We knew you can do this using Google Maps. In the quick video you can see what I mean.

[embedded content]

On default it shows the Hamburg, Germany area where most of the Locations (Pins) are. You can also choose what the Zoom level is.

We found the Styled Google Map module, which has been in alpha since last Christmas but works just great. It uses Views to generate the map, so you can quickly build very flexible and different clusters. It even works together with Display Suite, which makes it even better.

Of course you can also change the marker (pin image).

May 24 2017
May 24

Update (Jan 2018): Definitely use the config_split module.

This post was inspired by a situation I found myself in today, so it could also be named "solving a configuration workflow problem in Drupal 8 manually, which you should never do because your team has a fast, reliable and automated deployment process which never fails".

There are already some different configuration management workflow or pipeline strategies for Drupal 8, but there is no undisputed winner yet (an undisputed winner meaning that someone has solved the problem with a perfect solution which always works and has become a standard in the community). Why? Because there are also different workflows, teams, tools, deployment processes, environments and situations where one needs to export and import configuration to another project or environment. If you watch the brilliant presentation [1] from the creators of the config_split module you get an idea of how complex the issue is.

Because "every" setting or configuration (depends where you draw the line between content and configuration) is in a .yml file you should be able to export it and import it easily to another project or environment. What makes this complicated is that you sometimes want to export all configuration or just some of them.

Usually the challenges which occur are:

  • You need different modules or configuration enabled in different environments. E.g. the development and production servers have different modules enabled.
  • You don't want to override or reset settings on production. E.g. a running ID number for every registration.
  • You must not mess up (change or override) anything which was updated after the last deployment. E.g. the end user has made some configuration changes in the production environment.

Therefore the standard, trivial drush config-export (cex) and config-import (cim) do not work in practice, even if you just transport (e.g. using git) the needed .yml files. The problem is a little more complicated than that, and there are already strategies and tools to tackle it. Here are some of them.

1. Drush cim --partial

Drush help cim says: "Allows for partial config imports from the source directory. Only updates and new configs will be processed with this flag (missing configs will not be deleted)."
Meaning that you can commit only the files you need to transfer to a new environment (and e.g. put the ones you don't need into .gitignore), and you won't override or remove the missing settings, which is the default otherwise. This solves the problem of not overriding the already mentioned running ID number configuration. The problem here is that if you have "old" config, or the system does not realize that the configuration has been updated (e.g. you need to change something like a translation file manually), this won't do the trick. See section 4.


2. Nimbus module

The module, like config_split, uses config_filter [2] to separate configuration files into different folders and therefore "is a configuration management tool aimed at extending the configuration import functionality of Drupal core to support the import from multiple concurrent configuration directories for sophisticated configuration deployment workflows using dependency management tools." [3]

With Nimbus you can override configuration and only import the changes you need. The problem is that the import is still done using drush cex and drush cim, which doesn't solve the question of whether the import should be done using --partial or not. So it would, for example, reset the ID if the build is done manually.

3. config_split module

The tool which we are actually using now and which is gaining popularity. It lets you predefine rules for how the export should be done and magically splits the configuration based on those rules, which are called blacklisting and/or graylisting. Unfortunately we weren't using this tool in this project and I'm still not sure if it would have solved the problem. All I want to say here is: check out the module [4] and test its capabilities. It makes magic.

4. Manually using Drush

If you find yourself in a situation where you only need to transfer one configuration which is not new and is not detected as an "update" by Drupal, you have a challenge. drush cim would import all of the configuration (which we don't want, because I only want that one specific config), and drush cim --partial won't do anything, because there is nothing new (the .yml file was already there and registered) or updated; something has changed, but it is not counted as updated.

The solution was found in the comments on drushcommands.com [5]:

  1. Create a new folder ($ mkdir config/language/fi-new) and copy the configuration file you want to import there ($ cp fi/some_address_id_creation_stuff.yml config/language/fi-new)
  2. Import all changes from that folder
    $ drush cim --partial --source=../config/staging/language/fi-new --preview=diff
    The flag --preview=diff is not needed here, but it is highly recommended every time you do imports manually.
  3. Remove the folder, because this was a one-time import. The original file stays where it was.

Furthermore

Importing only one config should also be possible using Drupal Console [6]. But as said, there are different tools and deployment processes, and we are not using Drupal Console - yet.

If you have a better solution for importing only one config file, or a tool to build a perfectly working automated deployment process, let me know.

[1] https://www.youtube.com/watch?v=57t_CS2wbHI&list
[2] https://www.drupal.org/project/config_filter
[3] https://www.drupal.org/project/nimbus
[4] https://www.drupal.org/project/config_split
[5] https://drushcommands.com/drush-8x/config/config-import/
[6] https://hechoendrupal.gitbooks.io/drupal-console/content/en/commands/con...
