Feb 23 2021

Join me for a walkthrough of the steps I had to perform to keep track of Drupal 9 compatibility, upgrade underlying tools like Composer 2 and BLT 12, and adapt contributed as well as custom modules to be compatible with Drupal 9.

Discover more about the CMS service that our digital agency has to offer.

Recently, I had the opportunity to upgrade one of our projects from Drupal 8 to Drupal 9. In this blog post, I would like to share some of the lessons I learned while completing the upgrade. As you might expect, updating from Drupal 8 to Drupal 9 involves very few steps on the application layer. Most contributed modules are Drupal 9 ready, and only a few exotic modules required me to work on a reroll of a Drupal 9 compatibility patch.

1. Keep track of Drupal 9 compatibility using Upgrade Status

To get started, I used Upgrade Status to analyse and keep track of the Drupal 9 readiness of the site.

It takes a while to scan all modules, but the UI is really helpful in identifying what is left for you to do. Follow these steps:

Run a full index from the command line:

drush us-a --all

Index individual projects:

drush us-a project_a project_b

You can access your upgrade report at yoursite.dev/admin/reports/upgrade-status.

2. Update to Composer 2

One fundamental step was updating to Composer 2; refer to the official upgrade documentation. First, we update Composer itself:

composer selfupdate --2

If you have the Composer version specified in your Docker container, you might need to set it there as well. In our case, we are using Lando, so let's refer to the documentation on how to choose a Composer version in Lando. In our .lando.yml, we can explicitly specify the Composer version as follows:

services:
  appserver:
    composer_version: 2

Updating to composer 2 may result in errors depending on the packages that you are using. When you run composer install, you might get an error like the following:

Your requirements could not be resolved to an installable set of packages.

Problem 1
    - Root composer.json requires wikimedia/composer-merge-plugin 1.4.1 -> satisfiable by wikimedia/composer-merge-plugin[v1.4.1].
    - wikimedia/composer-merge-plugin v1.4.1 requires composer-plugin-api ^1.0 -> found composer-plugin-api[2.0.0] but it does not match your constraint.

The corresponding issue has been merged recently, but at the time of the upgrade, Composer 2 support was only available via a fork of the original repository. In such a case, you can include the forked repository using the following approach. Add the following to your composer.json:

    "require": {
        "wikimedia/composer-merge-plugin": "dev-feature/composer-v2 as 1.5.0"
    }

    "repositories": {
        "wikimedia/composer-merge-plugin": {
            "type": "vcs",
            "url": "https://github.com/mcaskill/composer-merge-plugin"
        }
    }

3. Update to Drupal 9 and BLT 12 using Composer

We are using Acquia BLT to automate building and testing our Drupal sites.

Updating to Drupal 9 requires updating BLT to version 12. Make sure to follow the BLT 12 upgrade notes. Most importantly, some dependencies like PHPCS have been moved into their own plugins such as acquia/blt-phpcs. The following adaptations should be performed in composer.json:

{
    ...
    "require": {
        "acquia/blt": "^12",
        "cweagans/composer-patches": "~1.0",
        "drupal/core-composer-scaffold": "^9.1",
        "drupal/core-recommended": "^9.1"
    },
    "require-dev": {
        "acquia/blt-behat": "^1.1",
        "acquia/blt-drupal-test": "^1.0",
        "acquia/blt-phpcs": "^1.0",
        "drupal/core-dev": "^9"
    }
}

With the BLT update, some commands have changed. The BLT 11 versions of the commands, i.e.

blt validate:all
blt tests:all

are now replaced with the BLT 12 versions:

blt validate
blt tests

To perform the necessary updates, you need to run the following:

composer update -w

Depending on your module dependencies, this might result in update errors. Follow the next sections for tips on how to update your module dependencies for Drupal 9 compatibility.

4. Update contributed modules for Drupal 9

Because of the switch to semantic versioning, modules might have changed their major release. For example, Devel has abandoned the 8.x-3.x series and now uses 4.x. You can always check the module page and verify that there is a version that supports Drupal ^9. Adapt the version in composer.json as follows:

{
    "require": {
        "drupal/devel": "^4.0"
    }
}

5. Notes on applying patches for module compatibility

Since drupal.org now supports issue forks & merge requests based on GitLab, .diff patch files might no longer be available within issues. You can still apply patches using the following approach: add ".diff" at the end of the merge request URL. The following example illustrates how a merge-request-based patch can be applied to a module in composer.json:

{
    "extra": {
        "patches": {
            "drupal/config_ignore": {
                "Support for export filtering via Drush (https://www.drupal.org/i/2857247)": "https://git.drupalcode.org/project/config_ignore/-/merge_requests/3.diff"
            }
        }
    }
}

When a module doesn't declare Drupal 9 in its core_version_requirement, or its composer.json still needs to be added, you can use the following approach to include such a module via the Composer workflow: require the module at the version provided by the Git branch that contains the fixes.

{
    "require": {
        "drupal/term_reference_tree": "dev-3123389-drupal-9-compatibility as 1.3-alpha3"
    },
    "repositories": {
        "drupal/term_reference_tree": {
            "type": "git",
            "url": "https://git.drupalcode.org/issue/term_reference_tree-3123389.git"
        }
    }
}

6. Update your custom code for Drupal 9 using Rector

Drupal 9 compatibility issues should be outlined by the Upgrade Status module mentioned previously. We are using drupal-check to automatically detect issues in the code base, and it threw significantly more errors after the upgrade, as code style requirements were increased. I used Rector to apply some automatic code style fixes to our custom modules. Rector wasn't able to fix all of them, so plan for some additional work here.
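If you want to reproduce this step, the commands below sketch a typical drupal-rector run; the exact config file name depends on the drupal-rector release you install, and the module path is an assumption you should adapt to your project:

# add drupal-rector as a dev dependency
composer require --dev palantirnet/drupal-rector
# copy the example config shipped with the package into the project root
cp vendor/palantirnet/drupal-rector/rector.php .
# preview the suggested changes for the custom modules first
vendor/bin/rector process web/modules/custom --dry-run
# apply the changes once the diff looks reasonable
vendor/bin/rector process web/modules/custom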

7. Working in multiple Lando instances of the same site

Because the Drupal 9 upgrade branch has a lot of dependencies that are different from Drupal 8, switching back and forth between branches might be cumbersome. I decided to run two instances in parallel, so that I don’t have to do full lando rebuilds.

Check out the same repository twice in two separate folders. Add and adapt the following .lando.local.yml within your second instance, so that you can run lando separately for both folders.

name: project_name_2

Use the following configuration to adapt url mappings, so that they don’t overlap with the original project.

proxy:
  appserver:
    - project_url_2.lndo.site
    - project_domain_2.lndo.site
  solr_index:
    - admin.solr.solr_index.project_2.lndo.site:8983

services:
  appserver:
    overrides:
      environment:
        DRUSH_OPTIONS_URI: "https://project_2.lndo.site"

In case you have specified a portforward for the database, you should define a custom port for your second project instance:

  database:
    portforward: 32145

Now you will be able to use lando start and respective commands within both project folders and access both site instances independently.

8. Conclusions

Thanks to semantic versioning, updating from Drupal 8 to Drupal 9 involves very few steps on the application layer. Most contributed modules are Drupal 9 ready, and only a few exotic modules required me to work on a reroll of a Drupal 9 compatibility patch.

As you can see from the topics mentioned, the effort to update the infrastructure certainly adds up: upgrading from Composer 1 to 2, updating PHPUnit, and making sure that other toolchain components are up to date.

Thank you Karine Chor & Hendrik Grahl for providing inputs to this post.

Jan 27 2021

You are facilitating a group process. Post-its gather and start overlapping each other. Time to make sense of all that. You just launched the group into some clustering exercise when someone drops the bomb: "Wait! It all relates!".

Discover more about the UX Design and Agile teams and processes services that our digital agency has to offer.

"But… it all relates!" A reaction so often heard while facilitating (or participating) to group reflexion processes (brainstorming, agile retrospectives, …).

"You ask us to group things … but everything is connected!"

It often comes with a contrived smile ("things are complex, you know!"). Sometimes also with a counterproposal "let us make a single group around the central thing here which is X, since obviously all things relate to X."

A very human reaction which, if you're unprepared as a facilitator, can take you aback. Keeping the following arguments in your mind can help.

  1. That it all relates does not mean that it all ought to conflate. It makes sense to distinguish the different aspects of a situation or a problem, the different knots of its web of complexity. Some seem to think that seeing the big picture implies refusing to distinguish the whole from its parts. Yet if we can see the links, the relationships, it is because we have identified the parts.

  2. Although a holistic view provides a definite advantage when facing a complex situation, it is good to remind ourselves that action cannot be holistic. You cannot act on the system as a whole. You may only act on precise points of the system.

Two simple arguments to help us facilitate these "everything is connected" moments and realize that in a (group) reflection process, taking things apart is the first step towards deciding on meaningful action.

Photo: Ruvande fjällripa

Benoît Pointet

Holacracy Coach

Jan 26 2021

In this walk-through, I show my preferred setup for SPAs with Svelte, TypeScript and Tailwind.

TL;DR

For the very impatient among us:

npx degit munxar/svelte-template my-svelte-project
cd my-svelte-project
npm i
npm run dev

Enjoy!

Overview

In this article I'll give you some insights into how I set up Svelte with TypeScript and style components with Tailwind. There are plenty of articles around, but I found a lot of them overcomplicate things or don't fit my requirements.

So here are my goals for the setup:

  • stay as close to the default template as possible, to make updates easy
  • production build should only generate css that is used
  • use typescript wherever possible

What Do I Need?

You'll need at least a recent Node.js version with npm on your machine. At the time of writing, I have Node version 15.6.0 and npm version 7.4.0 installed.

node -v && npm -v
v15.6.0
7.4.0

Install the Svelte Default Template

To set up Svelte, I open a terminal and use the command from the official Svelte homepage. TypeScript support has already been added to this template, so nothing special here.

npx degit sveltejs/template my-svelte-project
# or download and extract 
cd my-svelte-project

Enable TypeScript

# enable typescript support
node scripts/setupTypeScript.js

At this point, I check whether the setup works by installing all dependencies and starting the development server.

# install npm dependencies
npm i
# run dev server
npm run dev

If everything worked so far, pointing my browser at http://localhost:5000 displays a friendly HELLO WORLD. Let's stop the development server by hitting ctrl-c in the terminal.

Install Tailwind

Back in the Terminal I add Tailwind as described in their documentation.

npm install -D tailwindcss postcss

After this step I generate a default tailwind.config.js file with

npx tailwindcss init

If you prefer a full Tailwind config, use the --full argument:
npx tailwindcss init --full
See the Tailwind documentation for more info about this topic.

Configure Rollup to use Postcss

The default Svelte template uses Rollup as a bundler. When I ran setupTypeScript.js in the first setup step, the famous svelte-preprocess plugin was already integrated into the Rollup setup. The only thing left is to add the PostCSS config as options to the svelte-preprocess plugin. Here are the changes that I make in rollup.config.js:

// rollup.config.js (partial)
...
export default {
  ...
  plugins: [
    svelte({      
       preprocess: sveltePreprocess({
         postcss: {
           plugins: [require("tailwindcss")],
         },
       }),
    }),
    ...
  ],
  ...
};

At this point Rollup should trigger postcss and therefore the Tailwind plugin. To enable it in my application, I still need one important step.

Adding a Tailwind Component to the App

Now it's time to create a Svelte component that contains the postcss to generate all the classes. I call mine Tailwind.svelte, but the name doesn't really matter.

// src/Tailwind.svelte
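The component boils down to a single global style block. A minimal sketch, assuming the three standard Tailwind directives, looks like this:

<style global lang="postcss">
  @tailwind base;
  @tailwind components;
  @tailwind utilities;
</style>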

Some things to note here:

  • The component only has a single style element with no markup.
  • The global attribute tells the svelte-preprocess plugin not to scope the CSS to this component. Remember, by default Svelte scopes all CSS to the component it was declared in; in this case I don't want that.
  • The lang="postcss" attribute tells svelte-preprocess to use PostCSS for the content. As a bonus, some IDE extensions now display the content with the correct syntax highlighting for PostCSS.

Now use the Tailwind component in src/App.svelte

// src/App.svelte


Hello Tailwind!
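A minimal src/App.svelte along these lines could look as follows; the import and the utility classes on the div are assumptions:

<script lang="ts">
  // pull the global Tailwind styles into the bundle once
  import Tailwind from "./Tailwind.svelte";
</script>

<Tailwind />

<div class="bg-blue-500 text-white text-2xl p-4">Hello Tailwind!</div>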

Now my browser displays a Tailwind-styled div. Very nice!
Let's clean up public/index.html: remove the global.css link tag and delete the corresponding public/global.css file, since I don't use it.


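After the cleanup, public/index.html is essentially the stock Svelte template without the global.css link; a sketch, assuming the default template markup, looks like this:

<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8" />
  <meta name="viewport" content="width=device-width,initial-scale=1" />
  <title>Svelte app</title>
  <link rel="icon" type="image/png" href="/favicon.png" />
  <!-- the global.css link is gone; bundle.css now carries the Tailwind output -->
  <link rel="stylesheet" href="/build/bundle.css" />
  <script defer src="/build/bundle.js"></script>
</head>
<body>
</body>
</html>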
Let's finish the setup for production builds. Right now it's perfect for development: I can use any Tailwind class, and except for the first start of the development server, where all the Tailwind classes get generated, rebuilds feel very snappy.

Production Builds

Purge

When it comes to production builds, I have not configured anything yet, so I'd get a bundle.css with all Tailwind classes. I don't want that for a production build, so I modify tailwind.config.js to use its integrated PurgeCSS for that purpose.

// tailwind.config.js
module.exports = {
  purge: ["src/**/*.svelte", "public/index.html"],
  darkMode: false, // or 'media' or 'class'
  theme: {
    extend: {},
  },
  variants: {
    extend: {},
  },
  plugins: [],
};

With this modification, Tailwind removes all classes that are not used in .svelte files or in the public/index.html file. I added public/index.html because sometimes I add containers or some responsive design utilities directly on the <body> tag. If you don't need this, you can remove the index.html file from the purge list, or add additional files I don't have listed here. For example: if I use some plugins that contain .js, .ts, .html, ... files that use Tailwind classes, I would add them to this purge array too.

There is one little detail about the Tailwind purge: it is only executed if NODE_ENV=production, which makes sense. I set this environment variable directly in my package.json scripts:

// package.json (partial)
{
  ...
  "scripts": {
      "build": "NODE_ENV=production rollup -c",
      ...
  },
  ...
}

With these settings my bundle.css only contains the Tailwind classes I really use, plus the mandatory CSS reset that Tailwind provides.

Autoprefixer

One last thing to add for production is vendor prefixes. I usually go with the defaults and just add autoprefixer as a PostCSS plugin. If you need more control, add configuration as you please.

Install autoprefixer with npm:

npm i -D autoprefixer

Add it as postcss plugin in rollup.config.js:

// rollup.config.js (partial)
{
  ...        
  preprocess: sveltePreprocess({
    postcss: {
      plugins: [
        require("tailwindcss"), 
        require("autoprefixer"),
      ],
    },
  })
  ...      
}

That's it.

Features of this Setup

Tailwind Classes

I can apply every Tailwind class to any HTML element, even in the index.html template.

Tailwind @apply

Additionally, I can use @apply inside a style tag of a Svelte component like this:



  

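A sketch of such a component; the .btn class name and the utilities applied here are assumptions:

<button class="btn">Save</button>

<style lang="postcss">
  .btn {
    /* any Tailwind utilities work here */
    @apply px-4 py-2 rounded bg-blue-500 text-white;
  }
</style>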
This will generate a class scoped to the button of this component. The important part here is the lang="postcss" attribute; without it, PostCSS would not process the content of the style tag.

Typesafe Components

Let's implement a simple logo component with an attribute name of type string and a default value of "Logo".

  
  
  

{name}
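A sketch of such a component, where everything around the {name} expression is an assumption:

<script lang="ts">
  // typed prop with a default value
  export let name: string = "Logo";
</script>

<span>{name}</span>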

When I use this component, the Svelte language service of my IDE (Visual Studio Code) will yell at me if I try to pass something that is not of type string as the name attribute.

  
  
  
  
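Using the component could look like this; the value passed is just an example:

<script lang="ts">
  import Logo from "./Logo.svelte";
</script>

<Logo name="My fancy app" />
<!-- <Logo name={42} /> would be flagged, because 42 is not a string -->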

If you have an IDE that supports the Svelte language service, you get all the IntelliSense features you would expect inside your editor. I use Visual Studio Code with the very good svelte.svelte-vscode extension.

Recap

I demonstrated how easy it is to set up a Svelte project with the default template, enable TypeScript, and add production-ready Tailwind support.

I hope you find some helpful information and write some amazing apps!

The source code is available at: https://github.com/munxar/svelte-template


Jan 13 2021

We all know that decisions bring us forward, but deciding often feels like too big of a step. Discover how to take decisions with a lighter heart.

Discover more about the Agile Organisation and Teal culture and agile minds services that our digital agency has to offer.

The concept of decision entails a sense of finality. Often decisions feel like a Rodin sculpture: once for all perfectly cut. How terrible and scary is that? No wonder that many refrain from taking (major) decisions.

Can't we remove this sense of fate and rigidity from decisions and turn decision-making into a lighter thing?

Take smaller decisions

Does that decision feel too big? What could be a smaller decision in the same direction that is safe enough to take? Find it and take it. Breaking up a big decision into a series of small decisions often helps to move forward. "One Fear at a Time", as John Whitmore writes in Coaching for Performance.

Be it fear or decision, breaking it up in smaller pieces also allows you to adapt the course of action.

Embrace the imperfection of a decision

Make explicit the fact that the decision has been taken based on finite knowledge of a situation and thus corresponds to a local optimum. Finish any decision statement with: "… until we know better".

Shouldn't we wait, then, to take better decisions? Sometimes yes. Gathering more info and giving it more thought is always an option. However, there always comes a time when Pareto's law kicks in: a point beyond which an imperfect decision will show greater ROI than a more perfect one.

Make it a pilot

A great question I use to ease my clients into taking virtuous yet still uncertain steps: "Is it safe enough to try?" Often it is. Often, this question eases the "fear of final decision".

So decide to try before finally deciding, if you still believe that you will have to decide once and for all.

Give it a revision date

Since a decision is made at a certain point in time, in a certain context and based on finite knowledge, it seems only fair to review it later down the road, doesn't it? Fair and definitely smart. Even more so in the case of a decision declared as a temporary one, like a pilot.

Define a revision date or install the license and/or duty to revise a decision when the need or new knowledge arises.

This works particularly well for any structural or strategic decision. Imagine how fit your organization would be if every agreement in it was due to be revised! Well, the distributed governance scheme of Holacracy makes it possible for anyone to trigger revision of the governance and Sociocracy 3.0 also advocates regularly reviewing agreements.

To go one step further down the road, I dream of an organizational system where decisions that are not revised get dropped, like an expiry date for anything decided, in order to keep organizational mass as low as possible.

Embrace exceptions to the decision

Just as a local optimum will make sense for most cases around it, there will be exceptions. Let them be, and let them shine in the light of the decision. No exception should be hidden, for hiding exceptions only rigidifies the decision even more.

On the contrary, collecting exceptions to any decision seems to me like a good practice, though I have yet to find a domain where this happens. Every exception enriches the understanding of the decision, sharpens its scope and effects, and brings material for further revision.

That's all (for now) folks!

This list is not exhaustive; it simply exhausts my current thoughts on the topic. Yet I decide here and now to share it with you as such. Definitely safe enough. And the digital medium gives me the license to revise it later down the road ;)

I hope this gives you a few concrete ways to take the next decision with a bit more joy and serenity.

Artwork: Stereophotography of the Grindelwald Glacier

Jan 11 2021

As I am frequently coaching individuals who start in the Scrum Master role, I realized there was one aspect that was rarely written about: how to begin.

Discover more about the Agile teams and processes service that our digital agency has to offer.

Yes, it's a terrifying role

So that's it, you have passed the certification for Scrum Master and it is time for you to join the team and play the role. You are terrified, and I understand why. After all, you're lucky we've spared some budget for this role and you'd better make the team perform better right away so your position is maintained. All eyes on you. If you don't make it, we give it all back to the almighty Project Manager. In the spotlight of all these expectations, here is an invitation to take a step back and relativize.

It's not about you, it's about the team

Most importantly, it is not about you, and will never be. It is about the team. So do not carry its destiny upon your shoulders. All you will ever do is serve them and hold a mirror to them. That's it. You have to walk with them, not ahead of them. Your angle is the one of curiosity: "Oh, have you noticed that? What do you think about it?". Naive is powerful, because it blows away all preconceptions. You can, over and over, invite your team to look at the status quo with a fresh angle, which may inspire them to take action, or try new things (and follow up on them). If you've managed that, your job is done.

Start from where the team is

It is also a bad idea to go in with an upfront plan of how you want to "change how things are run". Chances are there are many assumptions in your head, which may be completely off. Instead of bulldozing your way into the team, blasting and criticizing whatever is present, I urge you to think that whatever is in place has been put in place by professionals. The way the team functions today is how it has best overcome problems so far, so respect that. I'll quote the Kanban principle: "Start from where you are". And from there, lead the team to experiment, little by little. It may go a long way.

Don't wait to be ready

The polar opposite of this attitude is also very tempting. It is to remain paralyzed. "I don't feel ready". Who does? While it is certainly a good thing to attend a course and obtain a certification, there are enough books, articles and conferences on Scrum and Agile to fill several lifetimes. For the benefit of your team, don't wait until you've read them all. Practice is going to be your teacher. The best there is. Just like the team, you are going to do the best you can, day after day, and for sure it's not going to be perfect...

Look for criticism

... So there will be criticism. That is great news. If nobody says anything, that means everybody thinks you're beyond the point of recovery and it's not even worth it anymore to give feedback. Constructive criticism is your ally in doing a better job for your team. I even advise you to actively seek feedback. There are retrospective activities tailored just for that, such as "Build Your Own Scrum Master". Make it a game for the team. That way, you show that though you take the role seriously, you certainly do not take yourself seriously.

About today

So, what about today? Day One? Well, two postures I've previously written about are always available: The Servant and The Mechanic. As a servant, there's probably a hand you can lend to the team right now. Ask around, and remember, a chore is not a chore. It's a chance to lead by example. If you pull your finger out for your teammates, you'll not only shine but you'll also inspire them to do it more as well. As a process mechanic, have a look at the team's Scrum. How is the next sprint coming along? Is the backlog prioritized? If you have chosen User Stories to express needs, are there enough of them in a ready state? What does "Ready" mean for your team? Those are great conversation starters. Dive in. And if anything's off, investigate, don't blame.

Get accompanied on the journey

Sure, all of this is still a handful. But you don't have to go it alone. There is a tremendous global community of practice and many local ones too. Don't be afraid to check out scrum.org forums, browse meetup.com for groups near you – or far away from you, as remote work has made the world even flatter than before. If there are several Scrum Masters in your organization, hook up with them and set up weekly coffees to exchange your war stories. And if you feel like getting accompaniment on your journey, don't hesitate to reach out. Whether it is me or one of my colleagues from the Liip coaching team, it would be a pleasure to walk along with you.

Jan 05 2021

How can we leverage Open Source contribution (in particular to Drupal) to maximize value for our customers? In this article, I would like to share the results of a recent workshop we held on this question as part of our internal gathering LiipConf.

Discover more about the CMS service that our digital agency has to offer.

Together with a few colleagues we met for a brainstorming session. The goals set for this session were:

  • Share experiences about open source contribution at Liip and together with customers
  • Reflect on added value we can generate when contributing to Open Source
  • Mention any blockers, uncertainties or difficulties that you encounter when it comes to Open Source contribution
  • Come up with ways of including Open Source contribution into our workflows
  • Brainstorm what our customers would find valuable to know about Open Source contribution

Check-in

In our check-in, we asked which topics attracted people to come to the workshop. We had a good mix of engineers, product owners and UX folks from Drupal and Symfony in our meeting. The topics of interest spanned from “motivating clients to pay to create reusable solutions”, “sharing experiences in the context of contributions”, “getting started with contributions in 2021”, “listening in”, “finding ways to giving back”.

Method

Thanks to Emilie’s suggestion and facilitation, we used the Customer Forces Canvas to structure the discussion.

Open Source contribution board based on Miro.com and the Customer Forces Canvas

The canvas allowed us to capture different aspects of adopting contribution practices by asking structured questions:

  1. Triggering Event - What were those events that led to your decision to contribute back to Open Source?
  2. Desired Outcome - What outcome were you looking for?
  3. Old Solution - What solution were you using that was already in place?
  4. Consideration Set - What were alternative solutions that were considered?
  5. New Solution - What solution was selected? Why?
  6. Inertia - What were some concerns/anxieties you had before starting to contribute?
  7. Friction - What were some concerns after you started contributing?
  8. Actual Outcome - What was the actual outcome after starting to contribute? Did it meet your expectations?
  9. Next Summit - What would you like to see next for contribution? Why?

Discussion points

Examples of triggering events mentioned were finding issues in existing Open Source solutions. Another key triggering event was that once the client understood how Open Source works, they were much more motivated to fund contributions. Often it is the motivation of an individual or the team, striving to create better solutions without the need to maintain custom code individually for a customer project.

Goals we are striving for when contributing to Open Source include externalizing maintenance efforts to the community at large as well as doing good. By contributing back we are fueling the ecosystem that keeps our software up to date and innovative. We create more sustainable solutions when we are able to use standardized building blocks and follow community best practices.

When facing contribution opportunities, we are often presented with various ways to solve an issue: fix it in custom code (and miss the chance to contribute), fix it in a contributed module, or fix it in Drupal core. Depending on the layer of abstraction, we can shoot for quick solutions or spend more time working on a generic one. Alternatives to fixing the issues ourselves include sponsoring other maintainers to work on a sustainable solution that resolves the current issue.

We have also encountered cases where relying on too much abstract code created a risk for the project over time. Especially when you deviate from the standard components, it might become easier to internalize the functionality into the custom project's code base so that it can be adapted without context switching, but at the cost of maintaining the functionality without community support.

Even non-perfect code or work in progress can be released as Open Source so that others are able to build on it, and eventually these building blocks will be evolved further. Sandbox projects or alpha releases can serve well as incubators for contributed code. Over time, when the project gets more mature, the semantic versioning approach with alpha & beta releases allows you to specify clearly what users of the module can expect.

When discussing what was holding us back from contributing, many reasons came up. Contributing to Drupal core takes more time than writing custom code. Sometimes it is just that the folks involved don't understand how Open Source works or what it is good for. When we create quick & dirty solutions, we sometimes don't feel quite ready to open-source them. Sometimes we just don't feel a need to contribute back because we can achieve our short-term goals without doing so. Folks with families mentioned that they can't commit private time and need to focus on getting the job done during work time.

When discussing what was holding us back while making a contribution, we found that sometimes the effort invested doesn't match the outcome: we need to invest more time than we think the problem is worth. This can especially be driven by the fact that contributed code may imply higher quality standards, enforced by peer review from the community. There is also the feeling that once a solution is Open Source, we need to maintain it and invest more time continuously. If a generic solution is more expensive than a custom one, why should the client pay for it when they cannot reuse it themselves? Sometimes we are not certain whether anyone else will be willing to make use of our custom code.

We talked about the benefits that folks experienced when contribution was adopted as a practice. Getting good community feedback on their solutions, and having their solutions improved and evolved further to match new use cases, were mentioned. Giving talks at conferences is also something that was found to be valuable. As a next step for contribution, folks mentioned that they would like to get help promoting their contributed modules so that they get adopted by a wider audience.

We also identified some USPs (Unique Selling Propositions) for contribution during the discussion. Clients would not need to pay for improvements contributed by the community. The maintenance of solutions based on contribution becomes more reliable. Contribution elevated self-esteem for clients and teams and helped increase visibility. It helps as a sales argument for agencies towards clients and also helps engineers get hired by a Drupal agency like Liip. Some folks even manage to make money on platforms like GitHub Sponsors or Open Collective.

Takeaways

We closed our meeting to collect some takeaways and what’s next for folks in contribution. Here’s a list of the key takeaways:

  • A “contrib-first approach” that incorporates the contribution mindset
  • Adding contribution checkpoints into the definition of ready/done
  • Inviting for cross-community contribution between Symfony and Drupal
  • Raising contribution in daily meetings, motivating each other to speak at conferences
  • Making sure that our contributions are used by others
  • Helping to find areas of contribution for non-developers
  • Balancing being a taker vs. a maker
  • Evolving a plan to communicate our efforts around contribution

What’s next for you in contribution? Have you experimented with the Customer Forces Canvas? Thank you for improving Open Source & let us know in the comments.

Image credit: Customer Forces Canvas (c) LEANSTACK
https://leanstack.com/customer-forces-canvas

Dec 20 2020

A mobile application can take many forms and have many functionalities.
If I say "mobile application", what kind of app do you think of?

Discover more about the UX Design and Mobile Apps services that our digital agency has to offer.

Maybe you will think of an application like Facebook, Instagram or TikTok? Or maybe an app like the SBB, Twint or something to check the weather with.

At Liip, we particularly like the ones which are really useful to our users and solve a specific issue.

The issue

When our clients from the Lausanne-Echallens-Bercher railway line (LEB) contacted us, their problem was: the real-time timetable they were offering to their users was incomplete, because old trains cannot share their position. Without this essential information, the LEB Company could not provide its passengers accurate information, in case of delays for example.

They needed a simple and effective solution to bridge the time until these trains get replaced in a few years.

After the analysis of the situation, we agreed on designing a mobile application for the drivers in the cabin. It enables them to retrieve the GPS location of the train and push it every 30 seconds to the server to calculate the real-time timetable. Easy, right?

Unfortunately not at all. In fact, at certain points along the route covered by these old trains, there are tunnels, so the calculation of the exact position is not guaranteed. Therefore, it was necessary for the driver to be able to manually indicate the position of the train to the application.

That said, when a driver is working (whether they are driving a LEB train, a bus or a car), they have more important tasks to do than checking the position of the train: concentrating on speed, turns, passengers, schedules, etc.

Therefore, the application to be developed had to solve the problem of the position, without interfering with the drivers' work or distracting them.

Field trip

To solve this problem, we went on site, and thanks to the project manager, we were able to take a seat in one of the LEB trains. We made the trip in the cabin to understand their work environment. This is what we identified:

  • the device should be a tablet to have a big enough display;
  • the luminosity of the device would have to be low not to dazzle;
  • the colours of the interface should be contrasted for good readability;
  • an anti-reflection screen would be necessary, because at the end of the day, when the sun hits the train window, the screen is barely readable;
  • the interface elements should be large with easily clickable areas.

The app

We came back rich with information from our field trip.

Listening to our client and its constraints, Nicolas, one of our mobile developers, started by testing the GPS locations. Once this worked well, he developed the application in just a few days. I worked on the colours and their brightness. The interface had to be so simple that the driver wouldn't have to learn how it worked.

A list of stops automatically scrolls through, according to the GPS locations. If the device does not receive any more locations, at its next stop a visual and audible alert is triggered to attract the driver's attention. They only have to click on the name of the stop for the error message to disappear and for the position of the stop to be sent to the server which collects the data.

3 screenshots of Realtime LEB application

A simple but useful product

It took about 20 days of work only - from field observations to the implementation of the app on tablets - to get the application up and running. The collaboration with the project manager Pierre-Yves was excellent. In addition, Nicolas and I worked hand in hand to ensure that design and code got along and stayed within the budget. An application is not always the result of a huge and complex project costing tens of thousands of francs.

But even more than that, I believe that the greatest pride we take in our job is the satisfaction of solving a real problem of users with an application.

Dec 17 2020

Secret Santa came to Liip this year. In the gift bag, there was a virtual conference with 120 participants, a lot of business topics, fun, party and Christmas carols.

What we’ve done

This year is different for all of us. Even though we got used to working remotely after that many months, Liipers would still prefer to meet in person. Nonetheless, we switched our yearly LiipConf, where we focus on ourselves and internal topics, to an online conference. A digital agency can struggle with digital set-ups too: the first video conferencing tool didn't work out the way it was planned, so we switched. Within 10 minutes a different tool to host the conference was found, and we nevertheless started the sessions on time. Every Liiper can propose sessions, which we tackle in 45-minute dialogues, workshops, discussion rounds, lean coffees or whatever format the Liiper hosting it likes. Yesterday's sessions covered a huge spectrum, from OKRs, leveraging open source contributions and Alpine.js to employee representatives and what we want to be as a company. 17 sessions happened in a little over three hours.

Why all of this

As a company spread all over Switzerland, with 6 locations, it is amazing to get all of us together for one day - normally in one place. It helps us to spend quality time together, work on topics there is no time for in our daily business, and have fun. Here are some of the highlights Liipers reported:

Our highlights

  • The experience of "walking around" in wonder.me and chatting with people
  • Last minute change of the conference tool and still starting all the talks on time
  • Tech Talks
  • OKRs Workshop
  • Secret Santa gifts
  • Christmas carols karaoke in a Google meet chat with 90 Liipers
  • Hanging around until 2am - like it’s 2019 ;)
  • Seeing people we haven't seen in ages
  • The fact that despite being virtual, we shared a strong connection to each other — outside our usual business roles, but just as charming group of people
  • Evelinn Trouble

The fun part

Parties are what we do too! This year is a little different. First there was Secret Santa, where Liipers gifted each other in the most digital way possible… the solution to that is a Slack bot ;) And once all the Liipers got their Secret Santa, the hunt for free, creative presents in physical or digital form started. In the evening we "unwrapped" our presents together. Liipers were so creative in crafting and sharing their digital gifts. But of course, there are lazy Secret Santas at Liip too; that's when we all had to sing a Christmas song together. Yes, 90 people singing Jingle Bells was one of the highlights too.
And that's not all... there was a concert for us as well. Evelinn Trouble played a live Christmas concert for us and everyone else who wanted to listen. Are you curious? Watch it on YouTube!

The different sessions were recorded. Most of them are for internal use. But there is one particular session that we enjoyed a lot and which we are happy to share with you, as it's pure infotainment:

[embedded content]
Dec 16 2020

The corona situation forced DrupalCon Barcelona 2020 to be hosted fully online, and it was therefore renamed DrupalCon Europe 2020. How was the experience? And how did I end up here?

You will find my first impressions about the conference in this article, as well as a bit of a background story, and some tips. Enjoy.

Background

I have been working as a backend developer on PHP projects for more than 15 years now. I joined Liip in Lausanne a bit more than two years ago, and at first, I was mostly involved in Moodle projects.

About one year ago, in late 2019, we founded a team (we call it Circle ®) to craft digital solutions based on Drupal in Lausanne. The Drupal knowledge has been within Liip for many years, as we use and contribute to it in many of our locations, including Fribourg, Bern and Zurich. I was onboarded and coached by other Liipers; I grew my skills and got in touch with the Swiss Drupal community. Everything looked promising! After a couple of months, we nevertheless decided to stop the adventure and continue with other projects. That being said, I had the opportunity to work with Drupal 8 for a couple of months, and it was far more evolved than the somewhat difficult memories I had of earlier versions of it.

So I decided to keep my ticket for DrupalCon Barcelona, even if it meant spending a few days at home watching talks instead of being at a great venue full of people in the beautiful city of Barcelona. Let's be clear: it is the first DrupalCon I have ever attended, though I did attend some conferences for other projects (Symfony and Moodle).

The conference format

Well, it all started with an email telling us how to get familiar with the online platform, and how to use it or seek help. I was surprised to see that the online event did not abandon the "networking part" of a conference. A "virtual exhibition" was available where you could find the different sponsors and meet them. A "meeting hub" was available to connect with other attendees. You could even ask for a buddy to catch up with you and help you through the conference. DrupalCon Europe even planned social events in the evenings, but I wasn't in the mood to attend them (yet).

The rest was as usual: you had different tracks you could subscribe to and watch. A chat and live Q&A area were available for each talk, and it's all quite straightforward to use. The platform uses a Zoom integration. Unfortunately, it did not work on my Linux distribution on the first day. It's quite an unpleasant experience to miss a few minutes of the first talk because of technical issues. Fortunately, a workaround was available, and the issue got more or less fixed on day 2.
Furthermore, all the sessions were recorded and are available to watch later. I guess that this can be expected for a first fully online experience, and overall the platform was great. I can't imagine how much work it has been to turn this event from an in-person to an all-virtual one. I was quite impressed by the result!

The talks

I attended a few talks; they all focused on specific topics, but some were more "developer"-oriented than others. I did a bit of everything, including "business"-oriented talks. I still can't quite figure out what to say about them: some were more than excellent, others felt basic or too simplified. There was something for every kind of profile, but overall I felt disappointed by most of them. (To be honest, it's something that has happened in the past. I probably enjoy the social part more at these events, or I don't choose the right talks.) However, there were very good talks that I personally enjoyed.

The feeling

Having mastermind speakers is quite a thing. You can listen to talks by people that have been doing Drupal for years, sharing their overall experience with Drupal, and no matter the topic, it's a pleasure to listen!
It makes me realise how huge the community is and how difficult it is to drive it in an embracing, contributive and constructive way. Drupal has evolved a lot, specifically since the switch to Drupal 8. But managing the technical aspect is not all there is to a community. Finding ways for people to have a safe place to discuss, interact and contribute is something too. A strategy to center humans and their rights in Open Source design is one aspect they tackle, but there are many more that are worth the effort. I can say that I like the direction that Drupal is taking, and it's a pleasure to see that everything is built together to provide one of the best CMSs out there! Even if the learning curve is still pretty steep and should not be neglected.

I was worried about having a fully remote conference, but I shouldn't have been! The experience was great, I had very few issues, and the number of talks was impressive.
I recommend having a look at the talks in advance, booking them, and not hesitating to switch to another one if your gut feeling tells you to. I also recommend keeping some space in your schedule for your daily business and ongoing projects, in case you have to answer some emails or attend a few meetings here and there. Last but not least, I recommend connecting with the community: there are amazing people out there, and it's always great to share and build connections.

Congratulations and a big thank you to DrupalCon Europe 2020 and everybody involved, making this event a great online experience!


Oct 06 2020

This regional bank located in eastern Switzerland sets great store by forward-looking concepts. The UX and Development team at Liip St. Gallen helped acrevis relaunch their website.

Discover more about the UX Design, Content, Custom Development and SEO services that our digital agency has to offer.

Do you just offer development or are there other strings to your bow?

It all began with a preliminary project. The partnership between acrevis and Liip started in the early summer of 2019 with some technical implementation. It quickly became clear to all that this was a set-up that worked, both professionally and on a human level – so we soon received a request to relaunch their entire website www.acrevis.ch. The mission: not just a simple redesign, but rather a complete rebuild that would also provide scope for innovative further development in terms of both design and technology.

These high requirements did not just spur on our Development team, they also fired up the UX researchers, visual designers, content strategists and SEO & analytics experts on our UX team. They ultimately won the management team over with three design concepts, and in doing so laid the foundation for the project.

*Hands-on: we used joint workshops with the acrevis project team to develop personas, their requirements, and specific user journeys.*

What customers want

It was clear from the outset that it was the external perspective that was important, rather than the internal one. In short: acrevis's customers took centre stage. We used joint workshops to hone acrevis's vision and mission. We developed and refined personas, and tracked and scrutinised the user journeys of acrevis's website customers. This was combined with stakeholder interviews to ensure that acrevis's expectations would also be incorporated. All of this provided an initial foundation for the design and structure – from wireframes through to the information architecture.

Of course, an innovative website also needs a new visual design. This was no sooner said than done, with new web-optimised typography, further development of the icon library, and fresh use of the corporate colours, moving away from severe petrol blue as a secondary colour and towards the subtle application of red as a targeted primary colour. The generous use of images and a mix of static and dynamic content bring the website to life. The website also has a clean and tidy look thanks to new element layouts and subtle micro-interactions.

*Large-scale images, lots of white space, a clear structure: initial screens for the acrevis website.*

Storytelling and thematic areas rather than just financial products

In terms of content, there was one key question: how do you captivate acrevis’s customers in the rather dry world of finance? Our response: with storytelling! Staying true to the slogan ‘My bank for life’, acrevis’s products and services needed more context, more emotion, and stronger links to customers’ everyday lives. This was clear to both our content strategists and the acrevis project team.

Four (or rather eight) thematic areas focussing on acrevis’s main business segments for both private and professional customers were developed: accounts and cards, financing home ownership (or a company), investing money, and retirement planning (pensions and succession). The thematic areas were presented in a colourful range of formats – from true-to-life stories to clear product overviews to personal contacts. The story protagonists were selected to closely match the personas previously developed. The final touches came from the ‘microcopy’, with our UX writers coming up with the perfect wording for buttons, forms, error messages and the cookie banner.

State-of-the-art technology for a flexible future

‘After the go-live is before the go-live’ was the technological thrust of the website relaunch. In other words, the platform needed to remain capable of development in future years, even if that would involve meeting demanding requirements. This meant that the obvious choice for a content management system was Directus, a headless CMS that keeps the back-end and front-end separate. This is based on a service-oriented architecture hosted on acrevis’s own Openshift cluster.

Now it gets technical: the headless content is linked with the page structure via a routing service developed specifically for this purpose. An Elasticsearch service offering full text indexing for content and PDFs via GraphQL ensures optimum search results. In addition, the website uses a VueJS front-end that also supports server-side rendering. The content is supplied via a Django application that offers GraphQL and REST endpoints. The images are hosted on Rokka, ensuring that the website offers high performance even with such high levels of visual content.

A nose for what is needed

Transparency, openness and regular exchange were also the building blocks of this project. Close collaboration with the acrevis project team as well as other partners ensured that any challenges were rapidly identified and could be solved quickly and easily. We even ran an internal collaboration day to enable us all to work together in a focused way as an interdepartmental, inter-site team. This meant that feedback and findings from reviews were quickly incorporated, and the website increasingly began to take shape. But what would acrevis's customers say?

*The new website concept was put through its paces in usability tests.*

The biggest challenge was just before the go-live: usability tests. Potential customers put the new website through its paces by performing specific searches on laptops and smartphones. The whole project team was delighted to see that other than a few details, no changes were required – it seemed that we had hit the nail on the head for customers with the storytelling, a clear structure, a fresh design and high technological performance.

The new acrevis.ch website has been live since July 2020.
Thank you to the entire acrevis project team for the wonderful collaboration! To be continued...

Thank yous
We would like to take this opportunity to thank all of the (almost exclusively) local people involved in the project: JOSHMARTIN for making valuable contributions to the web page design, AMMARKT for developing the branding and image concept for acrevis, and Arcmedia for assisting with the online forms. Thank you very much!

‘Liip understood us right from the start and supported us with innovative proposals for the concept, design, content and technology.’

Mona Brühlmann, Overall project manager for the acrevis website relaunch

‘Excellent work! The stories were fantastic and implemented in a very appealing way.’

Andrea Straessle, Marketing & Communication acrevis Bank AG

‘At every turn, it was incredible to see the high quality with which the individual requirements were implemented and how stable and sustainable the ultimate solution was.’

Michael Weder, Technical project manager for the acrevis website relaunch

Aug 17 2020

A different way to discover Zurich. Are you a tourist in your own country? Are you visiting Switzerland? The Zürich Card makes Zurich affordable.

Discover more about the UX Design and CMS services that our digital agency has to offer.

The Zürich Card has been around for a long time. People landing at Zurich airport or arriving at the main station will see posters and leaflets advertising it. Since May, the Zürich Card is now also available online via zuerich.com. With the Zürich Card, you can travel around the city for 24 or 72 hours and visit museums, restaurants and boat tours with a significant discount.

Go-live despite Corona and without much noise

Visitors could already buy the Zürich Card from the SBB and ZVV ticket shop, but it was not yet available for purchase via the Zurich Tourism website. Zurich Tourism therefore asked us to develop the online shop’s frontend as part of the discover.swiss project. We love projects like this, as we also call Zurich our home.

Discover.swiss is a platform supported by the Swiss State Secretariat for Economic Affairs and was designed to digitalise tourism. We therefore set about creating the minimum viable product (MVP). For us, this was a good example of interdepartmental and cross-company collaboration, as the agency Ubique Innovation AG contributed the app, discover.swiss made the API, and we provided the frontend. The plan was for the project to go live at the beginning of the year. However, the arrival of the Covid-19 crisis meant that advertising tourist attractions made little sense, as museums and restaurants across the whole of Switzerland were closed from mid-March onwards.

Nevertheless, we continued to work on the project with Zurich Tourism. A ‘silent go live’ was implemented in May and the MVP was launched. The Zürich Card has been available from zuerich.com ever since. Now, three months later, things are returning to the new normal, and this opportunity has been widely used given the circumstances. This is also a very attractive offer for Swiss tourists in Switzerland. For all adventurers – order the Zurich Card now!

High technological standards

The demands on the frontend were challenging. An iframe integrated into the zuerich.com website that allowed users to purchase the Zürich Card quickly and efficiently was the key to success. We used proven technologies such as Nuxt.js (Vue.js) to achieve this. The discover.swiss API was used as the interface, and payment is made via Stripe. Users can also add multiple people.

The price of a card varies according to age and is paid directly once all the information about the relevant travellers has been provided. Once the price has been calculated, it is displayed in different currencies – in accordance with the users’ requirements. After the purchase, a confirmation email is sent that also serves as a sales receipt, and contains a deep link that imports the Zürich Card into the new Zürich City Guide App. Ordering really can be this easy.

Flexibility over strength

The collaboration with discover.swiss and Zurich Tourism was fantastic. Times of crisis call for flexibility, and an ability to draw the best out of what is on offer – which we managed to achieve.

However, projects like these are not always easy, especially at the moment. Coordination took time, as there were various groups involved in the project.

When working with MVPs (the API was already an MVP), documentation is a constant challenge. However, as soon as we defined clear roles and tackled all parts of the end product, flexibility became our strength. To ensure a successful project, it is essential to incorporate everyone at an early stage. We were able to do just this thanks to the flexible individuals behind the project.

Our collaboration with Liip worked very well right from the start, and the work was tackled with great focus. We also valued direct communication with the development team.
Matthias Drabe
Product Owner and Team Lead Online, Zurich Tourism

Jul 29 2020
Jul 29

Have you ever dreamt of a lightning-fast developer experience, where loading a page is as fast as on the production system? With Lando and Docker on WSL 2 on Windows, that dream has come true.

For years, I used DrupalVM and Vagrant on my Windows machine, combined with Vagrant WinNFSd. It worked, but it was painfully slow: composer update took minutes on large projects and every page load dragged.

I also used WSL 1, but accessing files from an NTFS drive under /mnt/c/docs was slow.

By the end of May 2020, Microsoft started distributing the Windows 10 2004 update. This was the first release where WSL 2 (Windows Subsystem for Linux) was officially available as part of Windows 10. WSL 2 is a new architecture that completely changes how Linux distributions interact with Windows: it essentially runs a native Linux kernel inside Windows 10. The goal is to increase file system performance and to add full system call compatibility.

The best feature of WSL 2 is that it is the new de facto standard backend for Docker Desktop on Windows. Docker Desktop uses the dynamic memory allocation feature in WSL 2 to greatly improve resource consumption. This means Docker Desktop only uses the CPU and memory it actually needs, while CPU- and memory-intensive tasks such as building a container run much faster.

Additionally, with WSL 2, the time required to start the Docker daemon after a cold start is significantly shorter: it takes less than 10 seconds, compared to almost a minute in the previous version of Docker Desktop.

I combined these new technologies with Lando and created a perfect development setup for any PHP / Symfony / Drupal stack. The missing piece to make it fly was file synchronization with mutagen.io, described later in this article.

Install WSL 2 on Windows 10

Follow the official documentation to install and enable WSL 2:
https://docs.microsoft.com/en-us/windows/wsl/install-win10

At some point, you will have to enable Hyper-V and set WSL 2 as your default version:
wsl --set-default-version 2

Install the Distro Ubuntu from the Microsoft Store

Open the Microsoft Store and install Ubuntu.

*Fig. 1: Screenshot from the Microsoft Store*

The first time you launch a newly installed Linux distribution, a console window will open and you'll be asked to wait for a minute or two for files to decompress and be stored on your PC. All future launches should take less than a second.

If you already have a WSL 1 distro, you can upgrade it:
wsl --list --verbose
wsl --set-version <distro name> 2

Install Docker Desktop Edge on Windows

Next, install Docker Desktop Edge on Windows! At the time of writing, the current version was 2.3.3.2. Download it here.

Be careful!

There are a few tutorials online claiming that you have to install Docker inside your Linux distribution. This is wrong: you have to install Docker Desktop on Windows!

Install Lando > 3.0.9 inside your Ubuntu distro

Now we will install Lando inside our brand new WSL 2 distro "Ubuntu". This is a bit tricky, because docker-ce is a hard dependency of the package. But the official documentation has a solution for that.
Use at least Lando 3.0.9, which you can find on GitHub.
wget https://github.com/lando/lando/releases/download/v3.0.10/lando-v3.0.10.deb
dpkg -i --ignore-depends=docker-ce lando-v3.0.10.deb

To fix the package manager / apt-get, you have to remove the Lando package entry from /var/lib/dpkg/status.
Open the file with nano /var/lib/dpkg/status, search for lando and remove its entry. Done.
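
If you prefer a one-liner, something along these lines should do the same; this is just a sketch, so back up the file first and check the exact package name in the file:

sudo cp /var/lib/dpkg/status /var/lib/dpkg/status.bak
sudo sed -i '/^Package: lando/,/^$/d' /var/lib/dpkg/status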

Integrate Lando in an existing Drupal project

I assume that you already have a running project and want to integrate it with Lando and WSL 2. The most important thing is that your files have to live inside your Ubuntu distro, e.g. /home/username/projects, and not somewhere under /mnt/c/. Inside your Ubuntu distro, you benefit from an EXT4 file system and native file system speed. We will sync the files back to Windows for editing in your favourite IDE later.
Go ahead and checkout your project from Git inside your home folder:
/home/username/projects/yourproject

Add a .lando.yml file.

Please refer to the official Lando documentation. Below you will find my optimized recipe for Drupal 8 with Solr. I also added a lando/php.ini with some optimized PHP settings; a sketch of such a file follows the recipe.

name: demo
recipe: drupal8
config:
  webroot: web
  php: '7.3'
  xdebug: 'false'
  config:
    php: lando/php.ini

proxy:
  fulltext:
    - admin.solr.fulltext.lndo.site:8983

services:
  appserver:
    build:
      - composer install
    xdebug: false
    overrides:
      environment:
        # support debugging Drush with XDEBUG.
        PHP_IDE_CONFIG: "serverName=appserver"
        LANDO_HOST_IP: "host.docker.internal"
        XDEBUG_CONFIG: "remote_enable=1 remote_host=host.docker.internal"
        DRUSH_OPTIONS_URI: "https://demo.lndo.site"

  database:
    # You can connect externally via "external_connection" info from `lando info`.
    portforward: true
    creds:
      # These credentials are used only for this specific instance.
      # You can use the same credentials for each Lando site.
      user: drupal
      password: drupal
      database: drupal

  fulltext:
    type: solr:8.4
    portforward: true
    core: fulltext_index
    config:
      dir: solr/conf

  memcached:
    type: memcached
    portforward: false
    mem: 256

tooling:
  phpunit-local:
    service: appserver
    description: Runs phpunit with config at web/sites/default/local.phpunit.xml
    cmd: /app/vendor/bin/phpunit -v -c /app/web/sites/default/local.phpunit.xml
  xdebug-on:
    service: appserver
    description: Enable xdebug for apache.
    cmd: docker-php-ext-enable xdebug && /etc/init.d/apache2 reload
    user: root
  xdebug-off:
    service: appserver
    description: Disable xdebug for apache.
    cmd: rm /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && /etc/init.d/apache2 reload
    user: root
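
For reference, the lando/php.ini mentioned above could look something like this; an illustrative sketch with common development settings, not the exact file from the project:

; lando/php.ini
memory_limit = 512M
max_execution_time = 300
max_input_vars = 3000
upload_max_filesize = 64M
post_max_size = 64M
realpath_cache_size = 4096K
realpath_cache_ttl = 3600
opcache.enable = 1
opcache.memory_consumption = 256
opcache.validate_timestamps = 1
opcache.revalidate_freq = 0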

Build your Lando project and start the docker container

Run lando start to spin up your containers and see if everything goes green.

Install mutagen.io on Windows to sync the files on a Windows drive.

Go to the mutagen.io download page and download the Windows x64 binary.

Copy the binary to a folder like C:\Program Files\Mutagen and add this folder to your PATH variable. Press Windows Key + Break, then select "Advanced system settings" → "Environment Variables".

Fig. 2: Screenshot of my System Settings

Synchronize files back to Windows for editing with PHPStorm / VisualStudio Code

Maybe you asked yourself: How can I edit my files, if they are inside my distro?

Microsoft added the \\wsl$\ file share, but as soon as your project has more than 100 files, it's unusable with PHPStorm or any other IDE. The file system performance of mounted volumes on WSL 2 is even 10x slower than on WSL 1. There have been a lot of discussions and rants about this topic on github.com. At some point the issue was closed, so I couldn't post my solution to it.

Mutagen resolves this issue by syncing the files between a docker container and Windows in both directions. And it's blazing fast.
Mutagen allows you to do a "two-way-resolved" sync between a local folder on Windows and a docker container. Lando creates a yourname_appserver_1 docker container with a mount to /app. All you have to do is start mutagen and sync the files back to Windows. After that, you can edit them in PHPStorm; they get instantly synced back to your container, and you get native speed on both sides: inside the docker container and inside your IDE. It also works well with files generated on the server, like drush config-export in Drupal 8.

For my setup, I removed the .git folder inside the Ubuntu distro and excluded VCS files from syncing, as proposed by mutagen. I use a Git client on the Windows side. But you can change that.

Important:

  • mutagen has to be run on the Windows side in PowerShell, because we want to sync back to Windows!
  • The Lando appserver docker container has to be running before you can start the synchronization

Starting a sync with mutagen is dead simple:
mutagen sync @source @target. See here for the full documentation.
Example:
mutagen sync . docker://[email protected]_appserver_1/app

The first sync takes a while (5-8 minutes) with > 40k files. From then on it's basically instant.

You can see the status / errors of the sync progress by running in a separate PowerShell:
mutagen sync list or mutagen sync monitor

Unfortunately, there is no easy way to exclude certain folders from being synced, except via a global .mutagen.yml file. Therefore, I added a mutagen.yml file to my project folder and use mutagen project start and mutagen project terminate to run a predefined configuration including excluded folders:

mutagen.yml

# Synchronize code to the shared Docker volume via the Mutagen service.
sync:
  defaults:
    flushOnCreate: true
    ignore:
      vcs: true
  demo:
    alpha: "."
    beta: "docker://[email protected]_appserver_1/app"
    mode: "two-way-resolved"
    ignore:
      paths:
        - "mutagen.yml"
        - "mutagen.yml.lock"
        - ".vagrant"
        - ".git"
        - ".idea"
        - "deploy"
        - "example_frontend/node_modules"
        - "example_frontend/.nuxt"

In this example, I excluded a few folders which didn't have to be synced. Adapt it to your needs.

Final thoughts

Enjoy your blazing fast development environment, edit your files in your favourite IDE on Windows and thank all the maintainers of the involved Open Source projects.

At some point in the future, we might be able to integrate mutagen into Lando and hook it into the lando start and lando stop events. So far, I haven't found an easy way to integrate or call it from inside the Ubuntu distro.
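
As a rough workaround from the Windows side, a small PowerShell script can chain the two steps. This is only a sketch and assumes the project lives at ~/projects/demo inside the Ubuntu distro, its synced copy at C:\projects\demo, and that lando is available in a login shell:

# start-dev.ps1 (sketch)
wsl -d Ubuntu -- bash -lc "cd ~/projects/demo && lando start"
Set-Location C:\projects\demo
mutagen project start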

Jun 07 2020
Jun 07

Teams in Liip Zürich, Bern and Fribourg have been crafting digital solutions with Drupal for years. Now we also do it at our office in Lausanne.

Discover more about the services E-commerce and CMS our digital agency has to offer for you.

How much do you like Drupal? We still have some healthy disagreements among the Liipers when it comes to Drupal, one of the most well-known frameworks running on PHP. Nonetheless, we witness a high demand for digital solutions developed with Drupal, and we love that.

Because geographical proximity to our clients is very important to us, we now offer Drupal solutions in Lausanne too. Thanks to our fluid structure, we are very close to the market and can adapt to the needs of our clients. The pace in doing that is super fast. In the past months, a couple of Liipers in Lausanne at ease with PHP developed their Drupal skills. Internal training, peer programming and knowledge sharing with experienced Drupalistas within Liip were on the menu.

But wait, what is Drupal in fact? We put together the most frequently asked questions about this framework. And here are our answers!

What is Drupal?

Drupal is an open-source content management system (CMS). In its base installation, it includes a built-in frontend and backend. As a web administrator, you manage content in the backend and see it displayed in the frontend interface.

What can Drupal do?

Everything that is possible on the web. With Drupal, you can build a simple website for your content, which can be maintained in the backend. And there is way more to it. You can develop complex and powerful e-commerce solutions as well as headless projects too.

How come? Drupal comes along with a lot of basic features. They allow you to create and update your website or blog in an intuitive manner. Yet, Drupal is made to be altered. Tons of additional modules are developed by the community to extend the basic features. This is where the magic happens.

How does Drupal work?

Teams in Zürich, Bern, Fribourg and now Lausanne work on tailor-made Drupal websites. Each client project is unique and requires a dedicated solution. We develop custom made modules according to what the end-users and our clients need.

Drupal’s base installation is not designed for the end-users to have a lot of interactions with the raw data in the backend. That is why we add a client specific layer to our Drupal distribution – be it by adding contributed modules or custom solutions – to allow a lot of interactions on the end-user side.

While Drupal can be used with the built-in front end, you can employ it as a powerful headless CMS too. In this case, Drupal allows you to manage the content in the backend and exposes content through an application programming interface (API). A mobile app or a website built with a frontend framework such as VueJS can then use the exposed content.
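
As a small illustration, with the JSON:API module that ships with Drupal core enabled, any frontend can fetch content with a plain HTTP request (host and content type "article" are placeholders):

curl -H "Accept: application/vnd.api+json" https://example.com/jsonapi/node/article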

Wingo is one of the headless CMS projects we recently did using Drupal. The content is stored in the Drupal backend. We developed an API to give access to the content to various websites.

Why Drupal?

Drupal is incredible for content-heavy sites. No matter if it’s video or copy, this CMS handles a lot of content in a useful and flexible way. As a web administrator, you can easily manage the content of your website to be displayed to the end-users. Drupal is probably the way to go when numerous products, services or pieces of information have to be shown.

Both Die Mobiliar and the Banque Cantonale de Fribourg (BCF), have a lot of products and different types of content to showcase. In addition, their website is in multiple languages. The Freiburger Nachrichten website is another example – gigabytes of written content are managed via the CMS.

Who do we recommend Drupal to?

Companies and organizations that need to provide the end-users with a lot of information, such as products, services and other useful data are likely to use Drupal.

That being said, a lot more can be done with Drupal. The moment you understand how Drupal works, it becomes incredibly easy to alter and extend it – parole de développeur! Within a week, we were able to develop a custom made solution for a shop. Initially, the client had a showcase website. Due to the coronavirus, it became urgent to transform the website into an online shop. That’s how we helped our client to address their customers’ needs.

While the developer learning curve – due to Drupal's idiosyncrasies – is usually considered quite steep with a more difficult start, once you get into Drupal everything is simple. Our Drupal team in Lausanne has been developing and strengthening its expertise while being trained by more experienced Liipers, to support you with your digital challenges. We have the skills to build tailor-made solutions that match your end-users needs.

What are Drupal modules?

Drupal modules mostly deal with providing backend functionality and underlying services, while the frontend is usually client-specific. Due to the large and active community, as well as the overall good governance, a module for nearly every use case already exists.

For example, two well-known modules – which are nearly applications on their own – from the community are:

  • Commerce. Provides a full e-commerce suite right in your CMS. Neo is the e-commerce and content platform we developed for Freitag.
  • Webform. A complete form builder allowing CMS editors to create their own highly complex forms. For instance on the Die Mobiliar website more than 270 forms are managed with webforms.

And two modules Liip developed:

  • Rokka. This module connects Drupal to Rokka, our digital image processing tool. The Rokka image converter supports web admins in storing digital images.
  • Site Search 360. We developed the connector to link Site Search 360 to Drupal. Site Search 360 is a service which aggregates all the content of a website and allows users to search quickly and easily.

See here our contributions to other modules.

These modules help extend Drupal core's capabilities and are available to the community. That's the beauty of open source: developed by the community, for the community. Each client project is unique and requires a different set of modules; by combining the right ones, a tailor-made solution is created.

Can Drupal be used for mobile apps?

The short answer is: No, you can't create mobile apps with Drupal. But, it depends. ;-)

The long answer is: Yes. As mentioned earlier, everything that is possible on the web can be done with Drupal, because Drupal can be used as a headless CMS too. In this case, only the Drupal backend is used to handle the content. A dedicated API is developed to let a distinguished frontend access the content stored in the backend. This means that you need a different frontend solution to build an app with Drupal.

This is the case for progressive web apps (PWA). The app – which represents the frontend – fetches the content in the backend, stores it locally or caches it if needed.

Drupal 8 made it easy to create endpoints and APIs. One of the strengths of this Drupal version is to support headless CMS projects.

Apr 30 2020
Apr 30

More than just a form – something we can all agree on. All of the discussions about the new Mobiliar product were intense, exciting and fun.

Discover more about the service CMS our digital agency has to offer for you.

Dear Liip,

Although the premium calculator superficially looks like a form, there is a lot more going on in the background. There are countless permutations to be taken into account in the calculation: the number of people whose bicycles are being insured, their ages, the combined total of the sum insured and the excess – and whether or not to include cover for the theft of your bicycle from in front of the restaurant where you have just treated yourself to a Quattro Stagioni pizza. I also want to make the calculator look attractive, rather than it just looking like a boring, standard cookie-cutter form.
Best wishes,
Mobiliar

Dear Mobiliar,

So, this is something where the results need to look ultra-simple, even though there is a lot going on under the hood. That’s something we can do. So let’s get to work. Our suggestion: we take the Webform module for Drupal 8, which enables powerful forms to be built without the need for any technical expertise, and pimp it up with two or three little upgrades to ensure the calculations are spot on. This much you know already. However, to ensure a perfect user experience on the website, instead of the normal Drupal front end, we need to build a module that offers a Vue.js handler for the web forms. This will enable us to combine the endless possibilities of web forms with a state-of-the-art front end. As a result, we will never have to reload the page and will have precise control over exactly what visitors to the website can see.
Best wishes,
Liip

Dear Liip,

Sounds great. Let’s do that. It means that we can keep using the web forms that we are already familiar with, and create new versions of the calculator without needing any programming skills – for A/B testing, for example, or to make a calculator for bicycle retailers. And if we ever need to create a new calculator for one of our other products, we can reuse the same framework and save a whole load of money. You can get started.
Best wishes,
Mobiliar

Dear Mobiliar,

We have now finished the premium calculator and everything is working well all round. One particular headache was building something that was generic enough to enable you to create other versions and further calculators in the future, as you requested. We also had to incorporate the many different options offered by web forms. For example, there are conditional fields – parts of the form that are only visible if you tick a particular field that comes before it. Say you tick to say that you are over 26 years old – in that case, your date of birth must also be more than 26 years ago. The birthday field check must take this into account. This was a bit of a challenge, but as you can see, we succeeded.
Best wishes,
Liip

Dear Liip,

Fantastic! Thanks to the prototyping at the beginning of the project and the iterative process that followed, we have managed to develop a thoroughly impressive application, despite us not initially being 100% clear on our requirements. My favourite feature is its extremely generic implementation, meaning that we can re-use the solution to create additional premium calculators for other products, or even some less complicated forms.
Best wishes,
Mobiliar

Dear customers,

Whatever you're looking for, we can help you find it quickly and easily.
Best wishes,
Mobiliar and Liip, your digital agency

Apr 28 2020
Apr 28

As a Drupal newbie, I started working on migrating data from a Drupal 7 project to a Drupal 8 project. I’ve put together a how-to for migrating your data, especially the complex fields. Our project uses the Migrate API, which is part of Drupal core.

Discover more about the service CMS our digital agency has to offer for you.

You can generate migrations for your Drupal 7 content and configuration using the Migrate Drupal module. The Migrate Drupal module is based on the Migrate API.

The generated content migrations work fine for the standard fields from the field module, but the Migrate API doesn’t provide out-of-the-box migrate plugins for custom fields. A way to migrate them is by customizing the migration.

I had to write a custom migration for the domain access records. The domain access module manages access for nodes and users on multiple domains.

The challenge with those records is that they are stored completely differently in Drupal 7; the Migrate API can’t fetch their data into a row out of the box. For less complex fields, you could implement a custom process plugin, which hooks into the processing of a single field during the migration. There you can transform your data as needed, for example by filtering, mapping or generally transforming values.

Matching data structures

To migrate complex data from source to destination it is often helpful to model their relationship in an entity relation diagram.

As you can see in the following picture Drupal 8 splits the fields for the domain access records into separate tables while Drupal 7 stores multiple fields in one table.

Fig.: Domain access records, Drupal 7 vs. Drupal 8 (entity relation diagram)

Extending a migration plugin

Existing plugins can be extended with additional data through protected and public methods. The Drupal\migrate\Row class has a method setSourceProperty() which allows you to add properties to the source row. Afterwards, you can access the new property in your migration YAML file in the same format as standard fields from the field module.

field_YOUR_CUSTOM_FIELD:
 -
   plugin: sub_process
   source: ADDED_SOURCE_PROPERTY
   process:
     ID_XY: PROPERTY_ATTRIBUTE
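
As an illustration, such a source property can be added by overriding prepareRow() in a source plugin. The following is only a sketch: the plugin ID, the module namespace and the exact Drupal 7 table and columns (domain_access with nid, gid, realm) are assumptions made for this example.

<?php

namespace Drupal\my_module\Plugin\migrate\source;

use Drupal\migrate\Row;
use Drupal\node\Plugin\migrate\source\d7\Node;

/**
 * Example source plugin exposing D7 domain access records per node.
 *
 * @MigrateSource(
 *   id = "d7_node_with_domain_access",
 *   source_module = "node"
 * )
 */
class NodeWithDomainAccess extends Node {

  public function prepareRow(Row $row) {
    // Let the parent populate the standard node properties first.
    if (!parent::prepareRow($row)) {
      return FALSE;
    }

    // Fetch the Drupal 7 domain access records for this node from the
    // source database.
    $records = $this->select('domain_access', 'da')
      ->fields('da', ['gid', 'realm'])
      ->condition('da.nid', $row->getSourceProperty('nid'))
      ->execute()
      ->fetchAll(\PDO::FETCH_ASSOC);

    // Expose them as an additional source property; the migration YAML can
    // then process them, e.g. with the sub_process plugin shown above.
    $row->setSourceProperty('domain_access_records', $records);

    return TRUE;
  }

}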

To shorten the feedback loop while developing, use this drush command:
drush mim migration_id --limit=1 --idlist=YOUR_ID --update
With the combination of --limit and --idlist, only the rows listed in --idlist are migrated, and the import stops once the limit given by --limit is reached.

Do you need to migrate data into Drupal? I learned that going through the following steps is helpful:

  1. Check if there’s a migration template for your case. These exist for nearly all sources with relevant plugins. Generate them automatically with Migrate Drupal for Drupal-to-Drupal migrations.
  2. Get yourself an overview of the data structure for the more complex fields. If Migrate API doesn’t provide a plugin that contains the field data in its source row, implement a custom migration, or extend an existing one.
  3. Do you have to deal with multiple field values? Check out the SubProcess plugin.
  4. In a custom process plugin, the properties are accessible from the fetched row. That’s how you’re able to filter, map, or transform those properties as you like, as sketched below.
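
A minimal sketch of such a process plugin; the plugin ID, the module namespace and the realm mapping are made up for illustration:

<?php

namespace Drupal\my_module\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Example process plugin that rewrites a single value during migration.
 *
 * @MigrateProcessPlugin(
 *   id = "example_realm_map"
 * )
 */
class ExampleRealmMap extends ProcessPluginBase {

  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // Other source properties of the current row are available here as well,
    // e.g. $row->getSourceProperty('nid'), so you can filter, map or
    // transform based on them.
    return $value === 'domain_site' ? 'all_affiliates' : $value;
  }

}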
Nov 17 2019
Nov 17

What is a CMS? What is the value of it? We answer your questions about Content Management Systems and help you choose the appropriate CMS for your digital solution.

Discover more about the service CMS our digital agency has to offer for you.

What is a CMS?

A CMS is the abbreviation of Content Management System. When working with us, a CMS is usually a web-based software application that allows for collaborative editing of content. Content Management Systems are specifically designed to allow non-developers to work with them.

Content Management Systems drive the front-end of a website but can also be drivers for mobile apps, voice assistants, third-party data pools or any other expression of content in the front-end. Some CMS incorporate online shops or complex business processes. A CMS can even be designed to be a company’s control center of all the online activities.

Often, CMS are categorized into proprietary and open source solutions. With proprietary CMS – like AEM or Sitecore – the source code is neither public nor free to use; that is only the case with open source CMS. Open source CMS are often accompanied by large communities that provide a significant feature set completely free of charge. We strongly believe in open source and only work with open source CMS such as Drupal, Wordpress, Wagtail, Django CMS or October. If you’d like to learn more, this video gives you a great and playful explanation of open source.

What is the best CMS for us as a company?

There is no right answer to this question. The best CMS for your digital product(s) depends on your business goals in the short, mid and long-term.

Are you planning on selling products online? Would you like to connect other systems with relevant data to the CMS? What are your requirements regarding the front-end? What kind of devices would you like to play out content to? What are your resources (time and money)?

If this all sounds a bit overwhelming to you, don’t worry. The described situation is actually a great opportunity for you to ask yourself those questions early in a project and talk to experts, if needed. The best CMS for your business case might not be the one with the best marketing appearance. The ideal CMS fits your technical, procedural and structural needs. Each CMS has strengths and weaknesses. We recommend you to do a professional review. If done right and early in the process, it will save you a lot of money and time to market in the long run.

What are smart questions to ask when choosing a CMS?

  • What are the project and business goals? Will the CMS help reach these goals?
  • Which functionalities should the CMS cover?
  • Which channels should be fed via the CMS? Which marketing channels are relevant?
  • Which systems should be integrated into the digital platform?
  • What size would the platform be?
  • What is the CMS’s Total Cost of Ownership (TCO)?
  • How many active developers are in the open source community? Is it easy to find qualified developers for the technology/CMS?
  • Who will be editing the content of the new web platform? Are the content editors experienced with CMS? Do they have good visual awareness or should the system provide support in that area?
  • Does the CMS support the latest SEO requirements?
  • Does the CMS support multilanguage well?
Feb 25 2019
Feb 25

Using a smart SQL query with subqueries, we reduced the time to build a data export by a factor of (at least) 50. On the development system with little data, the response time went from 11 seconds down to about 0.25 seconds.

To make this possible, we had to use a generated column to be able to create an index on a field inside a JSON column. Doctrine does not know about generated columns, so we had to hide the column and its index from the Doctrine schema generator.

Our use case is that we have a table with orders. We have to build a report that shows sums of the orders by region (zip codes in our case). The address on an order is not allowed to change; the goal is to record to which address an order has actually been shipped. Rather than linking to a separate table with a foreign key, we decided to denormalize the address into a JSON MySQL field on the order.

The first approach queried the zip codes table and then looped over the zip codes, querying the orders table for each of the 3 different sums the report contains. This of course leads to 3*n queries. Add to this that each query is highly inefficient, because it needs to do a full table scan: one criterion involves accessing the zip code in the JSON field with MySQL JSON functions. At some point we started hitting timeout limits for the web request that downloads the export...

Using Subqueries

This is one place where using the ORM for reading is a trap. Writing direct SQL is a lot easier. (You can achieve the same with DQL or the Doctrine Query Builder and hydrating to an array.)

We converted the query into one single query with subqueries for the fields. Instead of looping over the result of one query and having a query for each row in that result, we unified those into one query:

SELECT 
    a.zip,
    (
        SELECT COUNT(o.id) 
        FROM orders AS o
        WHERE o.state = 'confirmed'
          AND JSON_CONTAINS(a.zip, JSON_UNQUOTE(JSON_EXTRACT(o.delivery_address, '$.zip'))
          ) = 1
    ) AS confirmed,
    (
        SELECT COUNT(o.id) 
        FROM orders AS o
        WHERE o.state = 'delivered'
          AND JSON_CONTAINS(a.zip, JSON_UNQUOTE(JSON_EXTRACT(o.delivery_address, '$.zip'))
          ) = 1
    ) AS delivered,
    ...
FROM areas AS a
ORDER BY a.zip ASC

Each subquery still needs to do a table scan for each row to determine which orders belong to which region. We found no fundamentally easier way to avoid having to select over all orders for each row in the areas table. If you have any inputs, please use the comments at the bottom of this page. What we did improve was having an index for those subqueries.

MySQL Generated Columns

Since version 5.7, MySQL supports “Generated Columns”: A column that represents the result of an operation on the current row. Among other things, generated columns are a neat workaround for creating an index on a value stored inside a JSON data field. The MySQL configuration is nicely explained in this article. For our use case, we have something along the following lines:

ALTER TABLE orders
     ADD COLUMN generated_zip CHAR(4) GENERATED ALWAYS AS
        (JSON_UNQUOTE(JSON_EXTRACT(delivery_address, '$.zip')));

CREATE INDEX index_zip ON orders (generated_zip);

With that, our query can be simplified to be both more readable and use a field where we can use an index:

SELECT 
    a.zip,
    (
        SELECT COUNT(o.id) 
        FROM orders AS o
        WHERE o.state = 'confirmed'
          AND o.generated_zip = a.zip
    ) AS confirmed,
    (
        SELECT COUNT(o.id) 
        FROM orders AS o
        WHERE o.state = 'delivered'
          AND o.generated_zip = a.zip
    ) AS delivered,
    ...
FROM areas AS a
ORDER BY a.zip ASC

So far so good, this makes the query much more efficient. The rest of this blog post does not add further improvements, but explains how to make this solution work when using the Doctrine Schema tool / Doctrine Migrations.

Working around Doctrine

While Doctrine is an awesome tool that helps us a lot in this application, it does not want to support generated columns by design. This is a fair decision and is no impediment for us using them for such queries as the one above.

However, we use Doctrine Migrations to manage our database changes. The migrations do a diff between the current database and the models, and produce the code to delete columns and indices that do not exist on the models.

It would help us if this issue got implemented. Meanwhile, we got inspired by stackoverflow to use a Doctrine schema listener to hide the column and index from Doctrine.

Our listener looks as follows:

<?php

use Doctrine\Common\EventSubscriber;
use Doctrine\DBAL\Event\SchemaColumnDefinitionEventArgs;
use Doctrine\DBAL\Event\SchemaIndexDefinitionEventArgs;
use Doctrine\DBAL\Events;

// Example class name; register it as a Doctrine event subscriber.
class GeneratedColumnsSchemaListener implements EventSubscriber
{
    public function onSchemaColumnDefinition(SchemaColumnDefinitionEventArgs $eventArgs)
    {
        if ('orders' === $eventArgs->getTable()) {
            if ('generated_zip' === $eventArgs->getTableColumn()['Field']) {
                $eventArgs->preventDefault();
            }
        }
    }

    public function onSchemaIndexDefinition(SchemaIndexDefinitionEventArgs $eventArgs)
    {
        if ('orders' === $eventArgs->getTable() 
            && 'index_zip' === $eventArgs->getTableIndex()['name']
        ) {
            $eventArgs->preventDefault();
        }
    }

    /**
     * Returns an array of events this subscriber wants to listen to.
     *
     * @return string[]
     */
    public function getSubscribedEvents()
    {
        return [
            Events::onSchemaColumnDefinition,
            Events::onSchemaIndexDefinition,
        ];
    }
}
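
In a Symfony application with the Doctrine bundle, registering the subscriber is just a matter of tagging the service. A minimal sketch, assuming the listener class above lives in the App\Doctrine namespace:

# config/services.yaml
App\Doctrine\GeneratedColumnsSchemaListener:
    tags:
        - { name: doctrine.event_subscriber }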
Feb 12 2019
Feb 12

We built a fast serializer in PHP with an overall performance gain of 55% over JMS for our use-case, and it’s awesome. We open sourced it and here it is: Liip Serializer. Let's look more at how it works, and how we made it so much faster!

For serialization (from PHP objects to JSON) and deserialization (the other way around), we have been using JMS Serializer for a long time in one of our big Symfony PHP projects, and we are still using it for parts of it. We were and still are very happy with the features of JMS Serializer and would highly recommend it for the majority of use cases.

Some of the functionality we would find difficult to cope without:

  • Different JSON output based on version. So that we can have “this field is here until version 3” etc.
  • Different Serializer groups so we can output different JSON based on whether this is a “detail view” or a “list view”.

JMS Serializer works with “visitors” and a lot of method calls, which in PHP is generally fine. But when you have big and complicated JSON documents, it has a huge performance impact. This was a bottleneck in our application for years before we built our own solution.

Blackfire helped us a lot in finding the bottleneck. This is a screenshot from Blackfire when we were still using JMS Serializer; you can see that we called visitProperty over 60,000 times!

Our solution removed this and made our application a LOT faster, with an overall performance gain of 55% (390 ms => 175 ms) and both CPU and I/O wait down by ~50%.

Memory gain: 21%, 6.5 MB => 5.15 MB

Let’s look at how we did this!

GOing fast outside of PHP

Having tried a lot of PHP serializer libraries, we started giving up and began to think that this was simply a bottleneck we had to live with. Then Michael Weibel (a Liiper working in the same team at the time) came up with the brilliant idea of using Go to solve the problem. And we did. And it was fast!

We were using php-to-go and Liip/sheriff.

How this worked:

  • Use php-to-go to parse the JMS annotations and generate go-structs (basically models, but in go) for all of our PHP models.
  • Use sheriff for serialization.
  • Use goridge to interface with our existing PHP application.

This was A LOT faster than PHP with JMS Serializer, and we were very happy with the speed. However, the integration between PHP and the Go binary was a bit cumbersome. Looking at it, we also thought it was a bit of an unfair comparison to pit generated Go code against the highly dynamic JMS code. So we decided to try the same approach we took with Go in plain PHP as well. Enter our serializer in PHP.

Generating PHP code to serialize - Liip Serializer

Liip Serializer generates code based on the PHP models you specify, parsing the JMS annotations with a parser we built for this purpose.

The generated code uses no objects and minimal function calls. For our largest model tree, it’s close to 250k lines of code. It is some of the ugliest PHP code I’ve been near in years! Luckily, we don’t need to look at it; we just use it.

For every version and every group, it generates one file for serialization and one for deserialization. Each file contains one single generated function, serialize or deserialize.

When serializing or deserializing, it uses those generated functions, picking the right file based on which groups and version we have specified. This way we got rid of all the visitors and method calls that JMS Serializer goes through to handle these complex use cases. Enter advanced serialization in PHP, the fast way.
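
To give an idea of the style (purely illustrative, not actual Liip Serializer output): one flat function per version/group combination that builds plain arrays without any visitor objects.

<?php

// Hypothetical model, only here to keep the sketch self-contained.
class Product
{
    /** @var string */
    public $name;
    /** @var string|null */
    public $sku;
}

// The kind of function the generator emits, e.g. for version 2 / group "list".
function serialize_Product_2_list(Product $object): array
{
    $jsonData = [];
    $jsonData['name'] = $object->name;
    if (null !== $object->sku) {
        $jsonData['sku'] = $object->sku;
    }

    return $jsonData;
}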

If you use the JMS event system or handlers they won't be supported by the generated code. We managed to handle all our use cases with accessor methods or virtual properties.

One challenge was to make the generated code expose exactly the same behaviour as JMS serializer. Some of the edge cases are neither documented nor explicitly handled in code, like when your models have version annotation and you serialize without a version. We covered all our cases, except for having to do a custom annotation to pick the right property when there are several candidates. (It would have been better design for JMS serializer too, if it would allow for an explicit selection in that case.)

In a majority of cases you will not need to do this, but sometimes when your JSON data starts looking as complicated as ours, you will be very happy there’s an option to go faster.

Feel free to play around! We open sourced our solutions under Liip/Serializer on Github.

These are the developers, besides me, in the Lego team who contributed to this project, with code, architecture decisions and code reviews: David Buchmann, Martin Janser, Emanuele Panzeri, Rae Knowler, Tobias Schultze, and Christian Riesen. Thanks everyone! This was a lot of fun to do with you all, as working in this team always is.

You can read more about the Serializer on the repository on GitHub: Liip/Serializer

And the parser we built to be able to serialize here: Liip/Metadata-Parser

Note: The Serializer and the Parser are Open Sourced as-is. We are definitely missing documentation, and if you have trouble using it, or would like something specific documented, please open an issue on the GitHub issue tracker and we would happily document it better. We are in the process of adding Symfony bundles for the serializer and the parser, and putting the Serializer on packagist, and making it easier to use. Further ideas and contributions are of course always very welcome.

Flash image from: https://www.flickr.com/photos/questlog/16347909278

Nov 15 2018
Nov 15

A couple of days ago, the PHP Framework Interoperability Group (PHP-FIG) approved the PSR-18 "HTTP Client" standard. This standard was the last missing piece to build applications that need to send HTTP requests to a server in an HTTP client agnostic way.

First, PSR-7 "HTTP message interfaces" defined how HTTP requests and responses are represented. For server applications that need to handle incoming requests and send a response, this was generally enough. The application bootstrap creates the request instance with a PSR-7 implementation and passes it into the application, which in turn can return any instance of a PSR-7 response. Middleware and other libraries can be reused as long as they rely on the PSR-7 interfaces.

However, sometimes an application needs to send a request to another server, be that a backend that communicates over HTTP like Elasticsearch, or some third-party service like Twitter, Instagram or a weather API. Public third-party services often provide their own client libraries. Since PSR-17 "HTTP Factories", such code does not need to bind itself to a specific implementation of PSR-7 but can use a factory to create requests.

Even with the request factory, libraries still had to depend on a concrete HTTP client implementation like Guzzle to actually send the request. (They could also work at a very low level with curl calls directly, but that basically means implementing their own HTTP client.) Using a specific implementation of an HTTP client is not ideal. It becomes a problem when your application uses a client as well, or when you combine several libraries that use different clients, or even different major versions of the same client. For example, Guzzle had to change its namespace from Guzzle to GuzzleHttp when switching from version 3 to 4 to allow both versions to be installed in parallel.

Libraries should not care about the implementation of the HTTP client, as long as they are able to send requests and receive responses. A group of people around Márk Sági-Kazár started defining an interface for the HTTP client, branded HTTPlug. Various libraries like Mailgun, Geocoder or Payum adapted their HTTP request handling to HTTPlug. Tobias Nyholm, Márk and myself proposed the HTTPlug interface to the PHP-FIG and it has been adopted as PSR-18 "HTTP Client" in October 2018. The interfaces are compatible from a consumer perspective. HTTPlug 2 implements PSR-18, while staying compatible with HTTPlug 1 for consumers. Consumers can upgrade from HTTPlug 1 to 2 seamlessly and then start transforming their code to the PSR interfaces. Eventually, HTTPlug should become obsolete and be replaced by the PSR-18 interfaces and HTTP clients directly implementing those interfaces.

PSR-18 defines a very small interface for sending an HTTP request and receiving the response. It also defines how the HTTP client implementation has to behave in regard to error handling, exceptions, redirections and similar things, so that consumers can rely on reproducible behaviour. Bootstrapping the client with the necessary setup parameters is done in the application, which then injects the client into the consumer:

use Psr\Http\Client\ClientInterface;
use Psr\Http\Client\ClientExceptionInterface;
use Psr\Http\Message\RequestFactoryInterface;

class WebConsumer
{
    /**
     * @var ClientInterface
     */
    private $httpClient;

    /**
     * @var RequestFactoryInterface
     */
    private $httpRequestFactory;

    public function __construct(
        ClientInterface $httpClient,
        RequestFactoryInterface $httpRequestFactory
    ) {
        $this->httpClient = $httpClient;
        $this->httpRequestFactory = $httpRequestFactory;
    }

    public function fetchInfo()
    {
        $request = $this->httpRequestFactory->createRequest('GET', 'https://www.liip.ch/');
        try {
            $response = $this->httpClient->sendRequest($request);
        } catch (ClientExceptionInterface $e) {
            throw new DomainException($e->getMessage(), 0, $e);
        }

        $response->...
    }
}

The dependencies of this class in the "use" statements are only the PSR interfaces, no need for specific implementations anymore.
Already, there is a release of php-http/guzzle-adapter that makes Guzzle available as PSR-18 client.
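
Wiring it up could look roughly like this; a sketch assuming php-http/guzzle6-adapter 2.x (which implements PSR-18) and nyholm/psr7 (which provides a PSR-17 request factory) are installed:

use Http\Adapter\Guzzle6\Client as GuzzleAdapter;
use Nyholm\Psr7\Factory\Psr17Factory;

// The application picks the concrete implementations...
$httpClient = GuzzleAdapter::createWithConfig(['timeout' => 5]);
$requestFactory = new Psr17Factory();

// ...and injects them into the consumer, which only knows the PSR interfaces.
$consumer = new WebConsumer($httpClient, $requestFactory);
$consumer->fetchInfo();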

Outlook

PSR-18 does not cover asynchronous requests. Sending requests asynchronously allows you to send several HTTP requests in parallel or to continue with other work and wait for the result later. This can be more efficient and helps to reduce response times. Asynchronous requests return a "promise" that can be checked to see whether the response has been received, or waited on to block until the response has arrived. The main reason PSR-18 does not cover asynchronous requests is that there is no PSR for promises. It would be wrong for an HTTP PSR to define the much broader concept of promises.

If you want to send asynchronous requests, you can use the HTTPlug Promise component together with the HTTPlug HttpAsyncClient. The guzzle adapter mentioned above also provides this interface. When a PSR for promises has been ratified, we hope to do an additional PSR for asynchronous HTTP requests.
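
A minimal sketch of the HTTPlug approach, assuming an HttpAsyncClient implementation (such as the Guzzle adapter) and a PSR-17 request factory are injected:

use Http\Client\HttpAsyncClient;
use Psr\Http\Message\RequestFactoryInterface;

function fetchTwoPages(HttpAsyncClient $client, RequestFactoryInterface $factory): array
{
    // Both requests are sent without blocking...
    $first = $client->sendAsyncRequest($factory->createRequest('GET', 'https://www.liip.ch/'));
    $second = $client->sendAsyncRequest($factory->createRequest('GET', 'https://rokka.io/'));

    // ...and wait() blocks until the respective response has arrived (or throws).
    return [$first->wait(), $second->wait()];
}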

Dec 10 2017
Dec 10

Some of our applications are deployed to Amazon Elastic Beanstalk. They are based on PHP and Symfony and of course use Composer to download their dependencies. This can take a while, approx. 2 minutes for our application when starting on a fresh instance. That can be annoyingly long, especially when you're scaling up to more instances due to, for example, a traffic spike.

You could include the vendor directory when you run eb deploy, but then Beanstalk doesn't run composer install at all anymore, so you have to make sure the local vendor directory has the right dependencies. There are other caveats with that approach, so it was not a real solution for us.

Composer cache to the rescue. Sharing the composer cache between instances (with a simple upload to and download from an S3 bucket) brought the deployment time for composer install down from about 2 minutes to 10 seconds.

For that to work, we have this in a file called .ebextensions/composer.config:

commands:
  01updateComposer:
    command: export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update
  02extractComposerCache:
    command: ". /opt/elasticbeanstalk/support/envvars && rm -rf /root/cache && aws s3 cp s3://rokka-support-files/composer-cache.tgz /tmp/composer-cache.tgz &&  tar -C / -xf /tmp/composer-cache.tgz && rm -f /tmp/composer-cache.tgz"
    ignoreErrors: true

container_commands:
  upload_composer_cache:
    command: ". /opt/elasticbeanstalk/support/envvars && tar -C / -czf composer-cache.tgz /root/cache && aws s3 cp composer-cache.tgz s3://your-bucket/ && rm -f composer-cache.tgz"
    leader_only: true
    ignoreErrors: true

option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: COMPOSER_HOME
    value: /root

It downloads composer-cache.tgz on every instance before running composer install and extracts it to /root/cache. After a new deployment has gone through, it creates a new tar file from that directory (on the "deployment leader" only) and uploads it to S3 again, ready for the next deployment or new instances.

One caveat we haven't solved yet: that .tgz file will grow over time (since it also keeps old dependencies around). Some process should clear it from time to time, or simply delete it on S3 when it gets too big. The ignoreErrors options above make sure that the deployment doesn't fail when the tgz file doesn't exist or is corrupted.
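
One low-tech option is an S3 lifecycle rule that expires the file after a while, so it gets rebuilt from scratch on the next deployment. A sketch, with the bucket name and expiry period as placeholders:

# lifecycle.json
{
  "Rules": [
    {
      "ID": "expire-composer-cache",
      "Filter": { "Prefix": "composer-cache" },
      "Status": "Enabled",
      "Expiration": { "Days": 30 }
    }
  ]
}

# Apply it to the bucket:
aws s3api put-bucket-lifecycle-configuration --bucket your-bucket --lifecycle-configuration file://lifecycle.json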

Nov 18 2017
Nov 18

The VIPS image processing system is a very fast, multi-threaded image processing library with low memory needs. And it really is pretty fast, the perfect thing for rokka and we'll be transitioning to using it soon.

Fortunately, there's a PHP extension for VIPS and a set of classes for easier access to the VIPS methods. So I started to write a VIPS adapter for Imagine and got quite far in the last few days. Big thanks to the maintainer of VIPS, John Cupitt, who helped me with some obstacles I encountered and even fixed some issues I found, all in a very short time.

So, without much further ado I present imagine-vips, a VIPS adapter for Imagine. I won't bore you with how to install and use it, it's all described on the GitHub repo.

There is still some functionality missing (see the README for details), but the most important operations (at least for us) are implemented. One thing that will be hard to implement correctly is layers. Currently, the library just loads the first image of, for example, an animated GIF. We're not sure we will ever add that functionality, since libvips can't write those GIFs anyway. But with a fallback to Imagick or GD, it would nevertheless be possible.

The other thing not yet really well tested (but we're on it) is images with ICC colour profiles. Proper support is coming.

As VIPS is not something installed on many servers, I don't expect a huge demand for this package, but it may be of use to someone, so we open sourced it with joy. Did I mention that it's really fast? And maybe someone will find some well-hidden bugs or extend it to make it even more useful. Patches and reports are of course always welcome.

Sep 28 2017
Sep 28

Do you remember that I recently wrote about the implementation of a small but handy extension for config search in Magento 1? I have become so used to it that I had to do the same for Magento 2. And since I had heard many rumours about the improved contribution process for M2, I decided to do it as a contribution and get my hands “dirty”.

Since the architecture of the framework has changed drastically, I expected a lot of trouble. But in fact, it was even a little bit easier than for M1. From a development point of view, it was definitely more pleasant to work with the code, but I also wanted to test the complete path to a fully merged pull request.

Step #0 (Local dev setup)

For the local setup, I decided to use the Magento 2 docker devbox, and since it was still in beta, I ran the first command without any hope of smooth execution. But surprisingly, I had no issues with the whole setup. A few commands in the terminal and a cup of coffee later, Magento 2 was successfully installed and ready to use. A totally positive experience.

Step #1 (Configuration)

All I had to do was declare my search model in di.xml, not too hard, right? ;)

app/code/Magento/Backend/etc/adminhtml/di.xml

Step #2 (Implementation)

The implementation of the search itself was trivial: we just look for matches for the given keyword in the ConfigStructure object using mb_stripos().

app/code/Magento/Backend/Model/Search/Config.php

Step #3 (View)

As with M1, the result of the search is a list of URLs pointing to the matched configuration labels. When the user clicks a URL, they are redirected to the config page and the matched field is highlighted.

That would be it regarding the implementation :)

Step #4 (Afterparty)

Too simple to believe? You are right. I thought that this was enough for submitting the PR, but I completely forgot about tests :) This is one of the main requirements for getting a pull request accepted by the Magento team.

Since all the implemented code was well isolated (it had no strict dependencies), it was pretty easy to write tests. I covered most of the code with unit tests, and for the main search method I wrote an integration test.

Conclusion

I would like to point out that during the whole lifecycle of the pull request, I received fast and high-quality support from the Magento team. They gave useful recommendations and sometimes consulted with me even during their vacations. This is what I call outstanding interaction with the community!

My special thanks to Eugene Tulika and Igor Miniailo, and of course Dmitrii Vilchinskii for the idea behind this handy feature.

Mar 08 2017
Mar 08

Discover more about the service CMS our digital agency has to offer for you.

This year saw the 6th edition of DrupalDay Italy, the main event to attend for Italian-speaking Drupalists.

Previous editions took place in other main Italian cities like Milan, Bologna and Naples.

This time, Rome had the privilege of hosting this challenging event, ideally located on the Sapienza University campus.

The non-profit event was free of charge.

A 2-days event

Like most development-related events these days, it spanned two days, March 3rd and 4th.

The first day was the conference day, with more than 20 talks split in 3 different “tracks” or, better, rooms.

In fact, there was no clear separation of scope and the same room hosted both biz and tech talks, probably (but this is just my guess) in an attempt to mix different interests and invite people to get out of their comfort zone.

The second day was mainly aimed at developers of all levels with the “Drupal School” track providing courses ranging from site-building to theming.

The “Drupal Hackaton” track was dedicated to developers willing to contribute (in several ways) to the Drupal Core, community modules or documentation.

The best and the worst

As expected, I found the quality of the talks a bit uneven.

Among the most interesting ones, I would definitely mention Luca Lusso's “Devel – D8 release party” and Adriano Cori's talks about the HTTP Client Manager module.

I was also positively surprised by (and greatly enjoyed) the presentation “Venice and Drupal” by Paolo Cometti and Francesco Trabacchin, where I discovered that the City of Venice has an in-house web development agency using Drupal for its main public websites and services.

On the other hand, I didn't like Edoardo Garcia's Keynote “Saving the world one Open Source project at a time”.

It seemed to me mostly an excuse to advertise his candidature as Director of the Drupal Association.

I had the privilege to talk about “Decoupled frontend with Drupal 8 and OpenUI 5”.

The audience, initially surprised by the unusual Drupal-SAP association (SAP being the company behind OpenUI), showed real interest and curiosity.

After the presentation, I had the chance to go into the details and discuss my ideas with a few other people.

I also received some criticism, which I really appreciated and which will definitely help me improve as a presenter.

Next one?

In the end, I really enjoyed the conference, both the contents and the ambiance, and will definitely join next year.

Feb 28 2017
Feb 28

I started hearing about Drupal 8 back in 2014, how this CMS would start using Symfony components, an idea I as a PHP and Symfony developer found very cool.

That is when I got involved with Drupal, not the CMS, but the community.

I got invited to my first DrupalCon back in 2015. That was the biggest conference I have ever been to; thousands of people were there. When I entered the conference building I noticed several things, one of them being that the code of conduct was very visible and printed out. I also got a t-shirt that fit me really well – a rarity at most tech conferences I go to. The gender and racial diversity also seemed fairly high. I immediately felt comfortable and like I belonged – super cool first impression.

I, like many other geeks, have social anxiety, so I was still overwhelmed by all these people, and I did not know who to talk to. Luckily Larry was there, so I had someone to hug.

I went to many great talks as there were a lot of tracks – Including the Symfony one where I was speaking. A conference well worth going to for EVERYONE, this is also something that I like: They try to make every DrupalCon affordable for everyone.

That evening I felt a bit shy again and stood somewhere, all on my own, and couldn't see the two people out of thousands, that I knew. Then someone walked up to me and just started talking to me, making me feel welcome. I said I don't do Drupal at all and they said that that's nice! We talked about what I do and they were very interested.

This year I went to a local DrupalCamp here in Switzerland, Drupal Mountain Camp. It was an event much more focused on Drupal, as you would expect, so I did not attend as many talks as I did at DrupalCon. But again, the inclusiveness and the atmosphere were in the air – I felt very, very welcome and safe (except maybe when sledging down a mountain…).

They mentioned the code of conduct in the beginning of the conference and then proceeded to organise an awesome event with winter sports around it.

I spoke at Drupal Mountain Camp, giving an introduction to Neo4j, a talk I have given many times with varying results. People were extremely interested in graph databases, the concepts and how they work, and they asked a lot of questions. Again, when I told them I don't do Drupal, no one even tried to convince me to start; that is where our communities differ a bit.

I think that we can learn from Drupal, embrace our differences, and each other, and accept that we do different things and we are different people and it doesn't matter because that is what makes community work, that is what makes us awesome. Diversity matters, Drupal got this.

Thank you to the Drupal community for showing how to be inclusive the right way and how to not try to convince someone to try or be someone they are not, but rather support that person and try to learn from them, this is the best behaviour a community could ever have.

And hugs! So much hugs.

Oct 23 2016
Oct 23

Discover more about the services E-commerce and CMS our digital agency has to offer for you.

In this blog post I will present how, in a recent e-commerce project built on top of Drupal 7 (the previous major version of the Drupal CMS), we made Drupal 7, SearchAPI and Commerce play together to efficiently retrieve grouped results from Solr in SearchAPI, with no indexed data duplication.

We used the SearchAPI and FacetAPI modules to build a search index for products; so far so good: available products and product variations can be searched and filtered, also by using a set of pre-defined facets. Later, a new requirement came from our project owner: provide a list of products where the results include, in addition to the product details, a picture of one of the available product variations, while keeping the ability to apply facets on products for the listing. Furthermore, the product variation picture displayed in the list must also match the filter applied by the user, so as not to confuse users and to provide a better user experience.

A simple example use case: allow users to get the list of available products and filter them by the color/size/etc. fields of the available product variations, while displaying a picture of a matching variation rather than a generic sample picture.

For the sake of simplicity and consistency with Drupal's Commerce module terminology, I will use the term “Product” to refer to any product-variation, while the term “Model” will be used to refer to a product.

Solr Result Grouping

We decided to use Solr (the well-known, fast and efficient search engine built on top of the Apache Lucene library) as the backend of the eCommerce platform: the reason lies not only in its full-text search features, but also in the possibility of building a fast retrieval system for the huge number of products we expected to be available online.

To solve the requirement about the display of product models, facets and available products, I intended to use the Solr feature called Result Grouping, as it seemed suitable for our case: Solr is able to return just a subset of results by grouping them on a single-valued field (previously indexed, of course). The facets can then be configured to be computed from the grouped set of results, from the ungrouped items, or just from the first result of each group.

This handy Solr feature can be used in combination with the SearchAPI module by installing the SearchAPI Grouping module. The module allows results to be returned grouped by a single-valued field, while still building the facets on all the results matched by the query; this behavior is configurable.

That allowed us to:

  • group the available products by the referenced model and return just a single entry per model;
  • compute the attribute facets on the entire collection of available products;
  • reuse the data in the product index for multiple views based on different grouping settings.
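
To make the grouping behavior more concrete, here is a minimal sketch of the kind of request parameters that end up being sent to Solr. The field names (is_field_model, im_field_color) are hypothetical placeholders, and the exact parameters generated by the SearchAPI Grouping module may differ:

q=*:*
fq=status:1
group=true
group.field=is_field_model
group.limit=1
group.ngroups=true
facet=true
facet.field=im_field_color

With group.limit=1, Solr returns only one product per model group, while by default the facets are still computed on all documents matching the query, which is exactly the combination described above.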

Result Grouping in SearchAPI

Due to some limitations of the SearchAPI module and its query-building components, this plan was not feasible with the existing configuration, as it would have required us to create a copy of the product index just to apply a specific Result Grouping configuration for each view.

The reason is that the features of the SearchAPI Grouping module are implemented on top of the “Alterations and Processors” functions of SearchAPI. These are a set of specific functions that can be configured and invoked by the SearchAPI module both at indexing time and at query time. In particular, Alterations allow the content sent to the underlying index to be altered programmatically, while Processor code is executed when a search query is built and executed and when the results are returned.

Those functions can be defined and configured only per-index.
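
For illustration, this is roughly what a custom Drupal 7 Search API processor looks like. The module, class, option and field names below are hypothetical, and the hook and method names reflect the Drupal 7 Search API processor interface as documented; treat this as a sketch rather than the actual SearchAPI Grouping code:

<?php

/**
 * Implements hook_search_api_processor_info().
 *
 * "mymodule" and the processor class below are hypothetical examples.
 */
function mymodule_search_api_processor_info() {
  $processors['mymodule_grouping'] = array(
    'name' => t('Example grouping processor'),
    'description' => t('Adds result grouping settings to the search query.'),
    'class' => 'MyModuleGroupingProcessor',
  );
  return $processors;
}

/**
 * Example processor: alters the query before it is sent to the backend and
 * post-processes the raw results returned by it.
 */
class MyModuleGroupingProcessor extends SearchApiAbstractProcessor {

  public function preprocessSearchQuery(SearchApiQuery $query) {
    // Attach grouping settings as a query option; backend-specific code is
    // expected to translate them into group/group.field Solr parameters.
    $query->setOption('mymodule_grouping', array('field' => 'field_model'));
  }

  public function postprocessSearchResults(array &$response, SearchApiQuery $query) {
    // Flatten the grouped raw results back into the structure SearchAPI
    // expects, keeping one item per group.
  }

}

Because such a processor is enabled and configured on the index edit form, every variation of its settings would normally require its own index, which is precisely the limitation discussed next.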

As shown in the following picture, the SearchAPI Grouping module could only be configured in the index configuration, not per query.

SearchAPI: processor settings

Image 1: SearchAPI configuration for the Grouping Processor.

Because the SearchAPI Grouping module is implemented as a SearchAPI Processor (it needs to alter the query sent to Solr and to handle the returned results), it would force us to create a new index for each different result grouping configuration.

This limitation would introduce a lot of (useless) data duplication in the index, with a consequent decrease in performance when products are saved and then indexed in multiple indexes.

In particular, the duplication is all the more unnecessary because the changes performed by the Processor merely consist of altering:

  1. the query sent to Solr;
  2. the handling of the raw data returned by Solr.

This shows that there is no need to index the same data multiple times.

Since the possibility of defining per-query processors sounded really promising and such a feature could be used extensively in the same project, a new module was implemented and published on Drupal.org: the SearchAPI Extended Processors module (thanks to SearchAPI's maintainer, DrunkenMonkey, for the help and review :) ).

The Drupal SearchAPI Extended Processors module

The new module extends the standard SearchAPI Processor behavior and lets administrators configure the execution of SearchAPI Processors per query, not only per index.

With the new module, any index can now be used with multiple, different Processor configurations; no new indexes are needed, thus avoiding data duplication.

The new configuration is exposed, as shown in the following picture, when editing a SearchAPI view under “Advanced > Query options”.

The SearchAPI processors can be altered and redefined for the given view; a checkbox allows the current index settings to be completely overridden rather than just adding extra processors.

Drupal SearchAPI: view's extended processor settings

Image 2: View's “Query options” with the SearchAPI Extended Processors module.

Conclusion: the new SearchAPI Extended Processors module has now been used for a few months in a complex eCommerce project at Liip and has allowed us to easily implement new search features without the need to create multiple, separate indexes.

We are able to index product data in one single (and compact) Solr index and use it with different grouping strategies to build product listings, model listings and model-category navigation pages without duplicating any data.

Since all those listings leverage the Solr filter query (fq) parameter to select the correct set of products to display, Solr can make use of its internal caches, specifically the filterCache, to speed up subsequent searches and facet computations. This, in addition to the use of a single index, allows caches to be shared among multiple listings, which would not be possible if separate indexes were used.
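
As an illustration, a facet-filtered product listing and a model-category navigation page can reuse the same index simply by varying the filter query. The field names here are again hypothetical placeholders:

q=*:*&fq=ss_color:red&group=true&group.field=is_field_model&group.limit=1
q=*:*&fq=im_field_category:42&group=true&group.field=is_field_model&group.limit=1

Because Solr caches each distinct fq value in its filterCache, repeated requests for the same facet value or category can reuse the cached document set instead of re-evaluating the filter.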

For further information or questions, drop me a line; I will be happy to help you configure Drupal SearchAPI and Solr for your needs.
