Nov 02 2018

We are in the process of transforming the way we host our applications to a Docker-based workflow. One of the challenges we face is file storage. At the heart of our business are open source technologies and tools, therefore we have looked into using Minio (more or less the same as Amazon S3, for file storage) instead of the local filesystem (or Amazon S3).

We are going to use the Drupal module Flysystem S3, which works with both Amazon S3 and Minio (which is compatible with the Amazon S3 API).

Flysystem is a filesystem abstraction library for PHP which allows you to easily swap out a local filesystem for a remote one - or from one remote to another.

For a new site it is pretty straightforward; for a legacy site you need to migrate your files from one storage to another – that I am going to look into in the next blog post.

Minio container

First we need Minio up and running. For that I am using Docker; here is an example docker-compose.yml:

services:
  minio:
    image: minio/minio:edge
    container_name: minio
    hostname: minio
    ports:
      - "8001:9000"
    volumes:
      - "./data:/data"
    environment:
      - "MINIO_ACCESS_KEY=AFGEG578KL"
      - "MINIO_SECRET_KEY=klertyuiopgrtasjukli"
      - "MINIO_REGION=us-east-1"
    command: server /data
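To check that the container came up, something like this should work (the unauthenticated liveness endpoint is available in recent MinIO builds; note that port 8001 on the host maps to 9000 in the container):

```shell
docker-compose up -d
# MinIO exposes an unauthenticated liveness check.
curl -i http://localhost:8001/minio/health/live
```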


Settings

When you have installed the Flysystem S3 module (and its dependency, the Flysystem module), we need to add the settings for Minio to our settings.php file (there is no UI for these settings in Drupal. Yet.):

$schemes = [
    's3' => [
        'driver' => 's3',
        'config' => [
            'key'    => 'AFGEG578KL',
            'secret' => 'klertyuiopgrtasjukli',
            'region' => 'us-east-1',
            'bucket' => 'my-site',
            'endpoint' => 'http://minio.mysite.com:9000',
            'protocol' => 'http',
            'cname_is_bucket' => FALSE,
            'cname' => 'minio.mysite.com:8001',
            'use_path_style_endpoint' => TRUE,
            'public' => TRUE,
            'prefix' => 'publicfiles',
        ],
        'cache' => TRUE,
        'serve_js' => TRUE,
        'serve_css' => TRUE,
    ],
];
$settings['flysystem'] = $schemes;

The endpoint is used for communicating with Minio; cname is the base URL that files will get on the site. serve_js and serve_css make Minio store and serve the aggregated CSS and JS.

Create a field

You now need to define which fields are going to use the S3 storage. For this, I create a new image reference field and use “Flysystem: s3” as the upload destination.

Surf over to Minio – in our example on http://minio.mysite.com:8001 – add the defined bucket, my-site, and make sure that Drupal can write to it (edit the policy in Minio and make sure it has read and write on the prefix – or use the wildcard prefix, *).
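The bucket and policy can also be created from the command line with the MinIO client, mc (the alias name myminio is made up here, and mc's policy syntax has changed between releases, so check mc --help for your version):

```shell
# Register the server under a local alias, using the keys from docker-compose.yml.
mc config host add myminio http://minio.mysite.com:8001 AFGEG578KL klertyuiopgrtasjukli
# Create the bucket Drupal will use.
mc mb myminio/my-site
# Allow anonymous read (download) on the bucket; adjust to your needs.
mc policy set download myminio/my-site
```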

And you are done

And that is it – now we are using Minio for storing the images. Try to upload a file in the field you created, and you should see the file in Minio. On the site you should of course see the image too – but now with the URL from the cname setting, in our case minio.mysite.com:8001.

We have put some time and effort into the Flysystem S3 module together with other contributors, and we hope you will test it out and report any feedback. Have fun!

Mar 28 2018

One week ago, we received a warning that a critical security update for Drupal, affecting Drupal 7 and 8 (and even 6, which is not supported anymore), was going to be released today. And we braced ourselves for updates.


A couple of years ago, it was hard work for us to update a site when a security update was released. Nowadays our hosting and our processes are much better and simpler – and thanks to a team effort by our Live team at Digitalist, we got our most vulnerable sites patched minutes after the security fix was released.

Digitalist Live Team patched in total around 1700 sites on our own hosting in less than 2 hours!

If you read the FAQ for the security issue, it is really critical to update – if the vulnerability is exploited, all non-public data is accessible, and all data can be modified or deleted. Simply put, your site could be immediately hacked and taken over by someone else.

It is good to remember that the vulnerability has not been exploited anywhere that we know of. But after disclosure of a vulnerability, "black hat" hackers will immediately try to exploit Drupal sites. That is why it is so important to act quickly and apply security updates once they become public.

Drupal is one of the most secure CMS systems available - and it stays that way due to its robust vulnerability-handling process.

Dec 07 2017

Some modules require that you download external JavaScript libraries, and in Drupal 8 that should be done with Composer.

The module Masonry requires the JavaScript library with the same name. So we need to include the package in composer.json, like:
 


"repositories":[
    {
        "type":"composer",
        "url":"https://packages.drupal.org/8"
    },
    {
        "type": "package",
        "package": {
            "name": "desandro/masonry",
            "version": "master",
            "type": "drupal-library",
            "dist": {
                "url": "https://unpkg.com/[email protected]/dist/masonry.pkgd.min.js",
                "type": "file"
            }
        }
    },


And in the require part:


"require":{
    ...
    "desandro/masonry":"master",
    ...
},

Then we need to add the libraries path under installer-paths in the extra part, if we do not have that already:


"extra":{
    ...
    "installer-paths":{
        "web/libraries/{$name}":[
            "type:drupal-library"
        ],
        ...
    }
},
...

So to install the library, you just need to run


composer require desandro/masonry

Oct 04 2017

Group photo from Drupalcon Vienna

So last week we went with a big crowd (16 of us) from Digitalist to Drupalcon in Vienna to join the about 2000 other attendees. We went to sessions and BOFs about caching in Drupal 8, Symfony components in Drupal core, Docker, Config Split, decoupled Drupal, multi-sites in Drupal 8 and a lot of other things.

Driesnote


A standard thing at Drupalcon is Dries talking about the state of Drupal – called the Driesnote – covering all the good things we have in Drupal and in the community, what is working, and what we need to work on to make Drupal and the community even better. One great thing to hear was the result of the survey that the Drupal Association sent out to companies working with Drupal – a lot of the companies around the world are doing very well – over 48.5% of the Drupal companies have growing sales – and Drupal deal sizes are also growing – for 47% of them, deals are getting bigger.

Dries also talked about the Workspace project – where you can work on a part of the site with a bunch of content and see how it looks without publishing it – a big step forward for Drupal. There was also some talk about adding a JS framework to get a better UI experience in the admin parts of the site.
 

Contenta


Decoupling Drupal (also known as headless Drupal) has been done for years now – and one really nice initiative is Contenta CMS, which tries to build a best-practice setup for decoupled Drupal. In their session they presented some of the work they have done – an easy setup that helps frontend developers who have never worked with Drupal get started, showing which endpoints to use etc. If you are starting a project with decoupled Drupal, I recommend you check out Contenta CMS, with ready examples using JS frameworks like Angular, React etc.

Elm


There were a bunch of sessions discussing different frameworks that could be used as a frontend for Drupal – one I find very interesting (being mostly a backender myself) is using Elm, presented at Drupalcon by Amitai Burstein – who for the last four years has built different solutions for decoupled Drupal.

Caching, a guru-guide


Wim Leers is one of the core contributors who knows the most about the caching layer in Drupal – and his session about caching in Drupal 8 was one of the most interesting in Vienna. There is a lot of stuff to think about in caching – and his session went through most of it.

Moving configuration around


The Drupal 8 configuration system is a hard nut to crack for some projects – and one of the solutions that is getting more and more attention is Config Split – used for handling configuration differences between different environments (dev → stage → prod). Fabian Bircher did a crowded presentation about his brainchild and was also part of a BOF initiated by the Swedish university Lunds Universitet, talking about the problems and solutions using Drupal 8 for multi-site setups.

Humanized Internet



External keynote speakers have been a long tradition – and this year we had Monique J. Morrow talking about the Humanized Internet – with a lot of focus on personal security and the Internet. One of the examples she brought up was the Swedish data breach inside Transportstyrelsen, and the upcoming GDPR regulation that helps protect personal data inside the EU.

And there was a lot more

Of course there was a lot more – and most of the sessions can now be watched on Drupalcon's site; just go to the schedule and pick which session you would like to watch.

Next Drupalcon in Europe will be in 2019 – and I am looking forward to it!
 

Apr 04 2017

Before Drupal 8, if you just wanted your content cached, there were almost no problems: just turn on caching for anonymous users and you were all set. Muhahaha! Who am I kidding...

If you wanted to interact with users, with different content depending on the user, role etc., you had problems. If you wanted to invalidate the cache, you had problems. If you wanted to show a View of nodes with caching of that view turned off, but had caching for anonymous users, you had problems. And so on. Most caching issues were solved by clearing all the cache, which could bring down the site if you were unlucky.

So the real problem we wanted to solve was not caching per se, it was cache invalidation. And as Phil Karlton supposedly said in the early nineties: “There are only two hard things in Computer Science: cache invalidation and naming things.”

Some sites solved the caching issue before Drupal 8 by just turning off the cache completely, scaling up the environments instead, and spending a lot of money in doing so. I have seen some high-traffic sites with almost no cache logic in place, because it was too hard to get the cache invalidation to work. Those who worked hard on getting the cache and the cache invalidation to work smarter used modules like Expire and Purge, integrated with Rules to solve complex cache invalidation. But it was almost impossible for any Drupal module to know where any content was used on a site. And that is what we want from smart cache invalidation.

Almost the only case where the default caching worked with no issues before D8 was if your site was only one node and some static blocks. And if you updated that node, the cache of that node would be invalidated (hopefully). A normal-sized site has hundreds or thousands of pieces of content, relations to other content, listings of nodes etc. So it was really hard before D8.

So what we needed for caching in Drupal 8 was for Drupal to be aware of what is cached, where it is used, and in which context. Cache all the things (aka fast by default), and make cache invalidation easier (aka cache tags). And we got it. Let's dive into what we got in the next blog post.
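As a small taste of what is coming: in Drupal 8 every render array can carry its own cacheability metadata, and tags are invalidated centrally. A minimal sketch using the core Cache API (the node ID and the $node_teaser_html variable are hypothetical):

```php
<?php

use Drupal\Core\Cache\Cache;

// A render array declaring its own cacheability metadata.
$build = [
  '#markup' => $node_teaser_html,
  '#cache' => [
    'keys' => ['node_teaser', 5],   // identifies this cache item
    'contexts' => ['user.roles'],   // varies per role
    'tags' => ['node:5'],           // invalidated when node 5 changes
    'max-age' => Cache::PERMANENT,  // valid until a tag is invalidated
  ],
];

// When node 5 is saved, core does roughly this, and every cache item
// tagged 'node:5' is invalidated everywhere it was used:
Cache::invalidateTags(['node:5']);
```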

This is the second part of our ongoing series, Caching in Drupal 8; the first part you can find here (with links to blog posts published so far).

Photo by Sera Tü. License

Mar 31 2017


In a series of blog posts we will go through how caching works in Drupal 8, to try to demystify the caching layer in Drupal 8 for developers.

Planned blog posts are (this will be linked from here when they are up)

  • Introduction
  • How it worked, and what we wanted to be solved in D8
  • Cache metadata - overview
  • Cache keys
  • Cache context
  • Cache max-age
  • Cache tags
  • Twig cache
  • Core cache services
  • Internal Page Cache
  • Internal Dynamic Page Cache
  • Creating your own cache tags
  • Creating your own cache context
  • Define and use your own cache bins
  • Solutions for working with good caching of Views
  • Disable caching – why, when and how
  • Cache tags together with Varnish
  • Cache bins in Memcache/Redis/whatever
  • Wrapping it up

The plan is to finish this series in a month's time, with a couple or more blog posts per week. Many parts of the series are loosely based on a session I did at DrupalCamp Northern Lights this February. But here I will have time to go into more detail, and with more code examples.

We started early to work with Drupal 8; our first project was this site, and after that we started to deliver Drupal 8 sites to our clients. We learned a lot during all the projects, and hopefully I will be able to share with you all the stuff that we learned about the caching layer in an understandable way. So this knowledge is based on trial and error on real projects.

Target audience for this blog series is both backend and frontend developers.



Mar 04 2017


Next Wednesday you are all welcome to our Drupal meetup at Wunderkraut, where we will talk about caching in Drupal 8 and drink some beer.

It has been a while, but now it is time again for a Drupal meetup in Stockholm, and it will be at the Wunderkraut office in Stockholm; sign up for it here. There will be beer, mingle, a talk about caching in Drupal 8, and more. Bring a friend or two!



Nov 17 2016


Out of the box Drupal 8 sets the X-Frame-Options: SAMEORIGIN header on page responses, which means that many modern web browsers do not allow the site to be framed from another domain, mostly for security reasons. This is good in many cases, but some web browsers have problems with it, and X-Frame-Options is deprecated in favor of Content-Security-Policy.

So why do you need a header like that? It is mainly for protecting a site against what is called clickjacking.

Also, in some cases you want your site to be framed into another, and doing that out of the box with Drupal 8 is not possible in most modern web browsers unless you alter the site's headers in Apache, Nginx, Varnish or in some other way. We are now going to look into doing it in “some other way”, in this case with Drupal. I prefer using Drupal to control site headers because the site's headers are part of the application.

This solution is based on this post on drupal.org.

Create a structure like this:

mycustom 
  ├── mycustom.info.yml 
  ├── mycustom.services.yml 
  └── src 
    └── EventSubscriber 
      └── MyCustomEventSubscriber.php 

And here is the code

mycustom.info.yml



name: My custom
type: module
description: Replacing X-Frame-Options with Content-Security-Policy
core: 8.x
package: custom

mycustom.services.yml


services:
  shp_xframe_remove_event_subscriber:
    class: Drupal\mycustom\EventSubscriber\MyCustomEventSubscriber
    tags:
      - {name: event_subscriber}

src/EventSubscriber/MyCustomEventSubscriber.php


<?php
namespace Drupal\mycustom\EventSubscriber;

use Symfony\Component\EventDispatcher\EventSubscriberInterface;
use Symfony\Component\HttpKernel\KernelEvents;
use Symfony\Component\HttpKernel\Event\FilterResponseEvent;

/**
 * Subscriber for changing header.
 */
class MyCustomEventSubscriber implements EventSubscriberInterface {

  /**
   * Remove X-Frame-Options, adding Content-Security-Policy.
   */
  public function setHeaderContentSecurityPolicy(FilterResponseEvent $event) {
    $response = $event->getResponse();
    $response->headers->remove('X-Frame-Options');
    // Set the header, use FALSE to not replace it if it's set already.
    $response->headers->set('Content-Security-Policy', "frame-ancestors 'self' mysite.com *.mysite.com", FALSE);
  }

  /**
   * {@inheritdoc}
   */
  static public function getSubscribedEvents() {
    // Response: set header content for security policy.
    $events[KernelEvents::RESPONSE][] = ['setHeaderContentSecurityPolicy', -10];
    return $events;
  }

}

That is it.
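With the module enabled (and caches rebuilt), you can verify the headers from the command line; the domain here is the hypothetical mysite.com from the code above:

```shell
# Inspect response headers; X-Frame-Options should be gone, and
# Content-Security-Policy should list the frame-ancestors.
curl -sI https://mysite.com/ | grep -iE 'x-frame-options|content-security-policy'
```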


Oct 05 2016
In our developer workflow we install the site continuously during development, as a health check of the code base. One of the problems we have in this workflow is when we declare a service that belongs to a module that is not activated yet during the install, like the Memcache module. This is what a site normally needs in settings.php:

$settings['memcache']['servers'] = ['localhost:11211' => 'default'];
$settings['memcache']['bins'] = ['default' => 'default'];
$settings['memcache']['key_prefix'] = 'foo_bar';
$settings['cache']['default'] = 'cache.backend.memcache';

This works perfectly to add on a running site when the module is activated… Read More
Sep 12 2016
As of Purge module 8.x-3.0-beta5, creation of cache tags is removed from the Purge module itself, and should now be handled by purgers instead – so from Varnish Purge 8.x-1.4 we now have a sub-module, Varnish Purger Tags, that handles the cache tags. To use Varnish Purge with cache tags, from version 8.x-1.4, you need to activate the new sub-module. You also need to reconfigure the purger – we renamed the header for the cache tags to Cache-Tags, because we saw no point calling it something else – so you also need to update your VCL file if you are using the older setting – Purge-Cache-Tags – for… Read More
Jul 09 2016
In this blog post I am going to go through a step by step setup for using the Varnish Purge module together with Purge and Drupal 8. Please read this about updates done since the blog post was written. We have started to work on the Varnish Purge module and we are using it in some of our projects. The Varnish Purge module is a fork of the Generic HTTP Purger with some minor changes. To use Purge and Varnish Purge you need a working setup with Varnish already; for an example VCL, see the end of the blog post. First install the Purge and Varnish Purge modules. For Purge to work normally you also have to install… Read More
May 18 2016

Jenkins setup

We are using Jenkins to clone the project, and the only task we do after that is executing a shell, like this:

Jenkins execute shell

At the end of this series I am doing a post about our composer workflow, so I will not go into that now.

First we export some variables, to use with dropcat later.

PATH

Local path to composer

DROPCAT_ENV

Which environment we are going to deploy to, in this case stage, so dropcat will use the dropcat.stage.yml file for settings.

ENV

This is a variable we are using for composer.

BUILD_DATE

Used to name our deployed folder.

And then the dropcat tasks

dropcat prepare

This command checks if the db used for the site exists; if it does not, it tries to create it. It also creates the drush alias used for the site etc.

dropcat backup

This backs up the db; if you want to back up the whole web folder, add the option --backup_site.

dropcat tar

Packs the site in a tar-file. The options here could be set in dropcat.stage.yml, but I think it is more useful to use Jenkins variables here.

dropcat upload

Uploads the tar to the remote server, and removes it from the local server.

dropcat move

This unpacks the tar file and moves it into place, and creates a symlink to the deployed folder, like mysite_latest_stage. It also deletes the uploaded tar-file.

dropcat symlink

We use this to create the files folder, which in our setup is outside the web folder.

dropcat config-import

Imports the configuration.

dropcat reset-login

Gets us a login-link to the site so we could check our deploy.
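Put together, the Jenkins execute-shell step could look something like this sketch (paths and values are hypothetical, and the exact exports depend on your setup):

```shell
#!/bin/bash
export PATH="$PATH:$HOME/bin"             # local path to composer
export DROPCAT_ENV=stage                  # picks dropcat.stage.yml
export ENV=stage                          # used by our composer workflow
export BUILD_DATE=$(date +%Y%m%d%H%M%S)   # names the deployed folder

dropcat prepare
dropcat backup
dropcat tar
dropcat upload
dropcat move
dropcat symlink
dropcat config-import
dropcat reset-login
```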

In the next blog post we will start to look in detail at what happens in each step.


May 12 2016

The idea with dropcat is that you use it with options and/or with configuration files. I would recommend using it with config files, and with minor settings as options.

You could just use a default settings file, which should be dropcat.yml, or, as in most cases, have one config file for each environment – dev, stage, prod etc.

You could use an environment variable to set which environment to use; this variable is called DROPCAT_ENV. To use the prod environment you could set that variable in the terminal with:
export DROPCAT_ENV=prod

Normally we set this environment variable in our Jenkins build, but you could also set it as a parameter with dropcat, like:
dropcat backup --env=prod

That will use the dropcat.prod.yml file.

By default dropcat uses dropcat.yml if you don't set an environment.

There will be more in the next blog posts, but first let's look at a minimal config file; in our root dir we could have a dropcat.yml file with this config:


app_name: mysite
local:
  environment:
    tmp_path: /tmp
    seperator: _
    drush_folder: /home/myuser/.drush

remote:
  environment:
    server: mytarget.server.com
    ssh_user: myuser
    ssh_port: 22
    identity_file: /home/myuser/.ssh/id_rsa
    web_root: /var/www/webroot
    temp_folder: /tmp
    alias: mysite_latest_stage

site:
  environment:
    drush_alias: mysitestage
    backup_path: /backup
    original_path: /srv/www/shared/mysite_stage/files
    symlink: /srv/www/mysite_latest_stage/web/sites/default/files
    url: http://mysite.com
    name: mysitestage

mysql:
  environment:
    host: mymysql.host.com
    database: my_db
    user: my_db_user
    password: my_db_password
    port: 3306

The settings are grouped in a way that should explain what they are used for – local.environment is where we deploy from, remote.environment is where we deploy to, site.environment is for drush and symlinks (we use them for the files folder), and mysql.environment is for… yeah, you guessed correctly – mysql/mariadb.

app_name

This is the application name, used for creating a tar-file with that name (with some more information, like build date and build number).

local

These are the settings for where we deploy from; it could be locally, or a build server such as Jenkins.

tmp_path

Where we temporarily store stuff.

seperator

Used in the name of the folder to deploy, as a separator, like myapp_DATE.


drush_folder

Where drush settings are stored on the machine you deploy from, normally in your home folder (for Jenkins normally /var/lib/jenkins/.drush); this is also the path where the drush alias is saved by dropcat prepare.

Remote

server

The server you deploy your code to.

ssh_user

User to use with ssh to your remote server

ssh_port

Port used for ssh to your server.

identity_file

Which private ssh key to use to log in to your remote server.

web_root

Path to which your site is going to be deployed.

temp_folder

Temp folder on remote server, used for unpacking tar file.

alias

Symlink alias for your site.


Site

drush_alias

Name of your drush alias, used from the 'local' server. The drush alias is created as part of dropcat prepare.

backup_path

Backup path on the 'local' server. Used by dropcat backup.

original_path

Existing path to point a symlink to – we use it for the files folder.

symlink

Symlink path that points to original_path

url

URL for your site, used in the drush alias.

name

Name of site in drush alias.


Mysql

host

Name of the db host.

database

Database to use

user

Database user

password

Password for the db user.

port

Port to use with mysql

We are still on a very abstract level; next time we will go through what is needed in a normal Jenkins build.


Apr 20 2016

In a series of blog posts I am going to present our new tool for doing Drupal deploys. It was developed internally by the ops team at Wunderkraut Sweden; when we started doing Drupal 8 deploys we tried to rethink how we had mostly done Drupal deploys before, because we had some issues with what we already had.

What we had - Jenkins and Aegir

For some years we have been using a combination of Jenkins and Aegir to deploy our sites.
That workflow worked, sort of, well for us. And because it was not a perfect match, we tried to rethink how we should do deploys with Drupal 8 in mind.

Research phase

We looked in many directions: Capistrano and Appistrano, OpenDevShop, platform.sh, Aegir 3, etc. But none of them fit our current needs – we wanted to simplify things, and most of the tools just added another layer that was not a perfect fit for us. Also, it was important to us that the solution should be open source.

We went old school and built our own solution – almost.

Re-use and invent

With Drupal 8 we got to know Symfony better, and Symfony has a console component, which is also used by the Drupal Console project. The advantages of using Symfony Console as a base for our deploy flow were big: it follows Symfony best practice and builds on open source projects. Also, drush does a lot of the stuff we need in the deploy process, so that is an important part too. We did not want to re-invent things that already worked well.

Enter Dropcat

So we started to build Dropcat (Drop as in Drupal, and cat because… because of cats), and we slowly added more and more stuff to it. We now have most of the commands we need to do a normal deploy. We are still working on one important bit – the rollback – and hopefully, when this series of blog posts about Dropcat is finished, we will have that in place too.

In the next blog post we will take a look at how to install Dropcat and how the configuration files work. You can check out the Dropcat project on our GitLab server.

Mar 17 2015

Tuesday, March 17, 2015 - 11:11

Ok, so now we have a wysiwyg editor in Drupal 8 core, but what if you want another editor, like the one used on medium.com?

I have done some initial work to get the Medium clone inside Drupal 8, and have now set up a sandbox on d.o and committed that to the medium module. Please test it out if you are interested. The further plan for the module is to get a working media solution with it, and if you are skilled in js (I am not :-)) and you feel you want to contribute...

Sandbox is over here: https://www.drupal.org/sandbox/mikkex/2453725

Update: Drupal 8 version is now merged into medium project on d.o: https://www.drupal.org/project/medium

Blog post author

mikke_mini.jpg

Solution facilitator

Stockholm

Dec 20 2014
Dec 20
markering_403.png

Saturday, December 20, 2014 - 10:19

The editor used to edit posts at medium.com is a real slick, and I find it interesting and intuitive. Davi Ferreira have made an open source clone of it, so it could easily be used in other places.

@cweagans have done great work to get the Medium editor in it's own module, but I would rather myself have it inside the WYSIWYG API. so I took some parts of his work and did a patch, so if somebody else finds it interesting to get this editor to work with WYSIWYG API, please try it out, test, review, throw stuff at it...

As a first step I just added the text editing part, with further plans on try it to get it to work with Asset for images, videos etc.


Nov 28 2014

I looked for a simple debugger to use in the frontend, and discovered PHP-ref.

I like to keep things simple and clean. I like devel for simple debugging on the frontend (for more advanced debugging I use xdebug), but devel adds a lot of overhead - like many Drupal modules it tries to cover a lot of issues, and therefore adds a lot of things you may not need. So I tested some of the debuggers you can use with PHP in the frontend, and I really like PHP-ref - simple and clean, just the way I like it.

So now I have wrapped it up in a module - Ref debug. With it you can debug entities, and directly in code it works as a replacement for devel's dpm() - instead you use r(), to which you can pass functions, variables, classes, etc.

Apr 23 2014

Wednesday, April 23, 2014 - 15:34

When you work locally on development, or test on stage/dev or whatever, you sometimes need the files from production. Our old way of solving that was downloading the whole file catalogue and keeping it locally. Sometimes the file catalogues were several gigabytes large, so that is not a good workflow at all.

To solve that problem we are now using Stage file proxy. We have been using it for some time to get files to stage or to local environments, and it works really well (we have some issues on D6 sites, but it works almost flawlessly on D7 sites). Stage file proxy downloads the files that are requested from the production (or whatever) site to the running environment.

So you just get the files you need, and you can easily delete the files locally and get them back when called for. A time and space saver.
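
The module is configured via variables in settings.php. A minimal Drupal 7-style sketch (the variable name is the one the D7 module reads; verify it against the module version you run):

```php
<?php
// Point Stage file proxy at the production site from settings.php on a
// Drupal 7 site, so missing local files are fetched from there on request.
$conf['stage_file_proxy_origin'] = 'http://www.example.com';
```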


A nice patch to the module adds an admin interface - so that you don't need to add settings to settings.php or create variables. EDIT: Greggles just committed that patch to dev :-)

Photo by James Butler


Apr 11 2013

On April 20th I am going to give my session about Drupal 8 and plugins for the third time - and it will be the third version of the session - Drupal 8 is still a work in progress and some stuff keeps changing.

Feb 21 2013

Semantic panels is for Panels what Semantic Views is for Views - better control of the markup.

Feb 12 2013

So I did some work on the Drupal module Semantic fields to make it exportable. I made some patches, uploaded them, and everything seemed OK; there was not much activity in the issue queue, so nobody else tested my stuff. So I deleted my folder with Semantic fields so I could test the patches myself. Then I realised I had forgotten to add important files to my patches - the new files with the exportability.

Feb 02 2013

Hi,

Thanks for this latest screencast. I am using field collection on nearly all of my projects. The place where I get stuck is theming the form of nested field collections. The default layout of the field collection fields on a form contains similar css classes, which is hard to handle when the field collection has unlimited values and is nested.

For example: A candidate creates a CV on a website using the Profile 2 registration form.

-------------- Field collection: Profession -----------
select profession: dropdown menu
Select experience no. of years: drop down
-------------- Nested field collection: Skills for above profession --------
Select: skill from dropdown list related to above profession
Select: Skill level (basic, average, expert, etc.)
Select: Last used (month / year)
***** Remove button for Skills f. collection**
**** Add another skill ***
***Remove button for profession f. collection****
*** Add another Profession f. collection**

My requirement is to theme all elements on one line, and they should keep their position when a new instance is added. I tried using CSS but can only theme one instance of the f. collection. When a new instance is added they fall back to the default position. Not a good solution.

Please let me know if you have worked with renderable elements with Display Suite on forms? How can that help in this situation? I have seen a screencast by the author of DS, but that exemplifies only the CCK node edit form, not Profile 2 or f. collections.

Thanks, and waiting for more good stuff. :)

Jan 31 2013

Using xdebug is almost a must when doing development with Drupal, but sometimes things just break because you are using it.

Jan 25 2013

There are a lot of improvements you could make on the admin side of Drupal; one of them could be a better workflow for adding references to entities.

Jan 24 2013

A quick way to debug which panels are slow is to install the contrib module Panels, Why so slow?

A little bit of a misleading name, though - you don't get information about why the panels are slow, just how fast each panel is rendered, but it is a good start for debugging. It seems like the developer, drweish, is not planning any further development of the module, since it is marked as Unsupported. But hey, it seems to work.

A note though, my experience with Panels is not that they are slower than any other way to build sites with Drupal, just for the record.
