Sep 01 2021
Anca Verniceanu

DrupalCon Europe 2021 is less than two months away – and we can already feel the enthusiasm rising. The conference is packed with value from start to finish: innovative topics, inspiring speakers from all corners of the tech world, and in-depth workshops.

On top of a diverse range of sessions at DrupalCon Europe, there are two workshops on creating great user experiences with Drupal and building a website with low code. A single DrupalCon ticket gives you access to all of this, alongside other developers, designers, editors, content strategists, marketers, end users and leaders from the Drupal community.

Apply for offered tickets

This year, Liip wants to make DrupalCon even more widely available and accessible by offering free tickets. At Liip we always strive to help level the playing field for women and minority groups, who have been underrepresented in the tech industry for far too long. That is why we are thrilled to announce that we are sponsoring Grants and Scholarships that will help cover costs for many to attend this event.

Eligibility criteria

Participants must meet the following background requirements:

  • be a member of an underrepresented group in tech — this includes, but is not limited to, people of colour, LGBTQIA+ people, women and disabled people — or be unable to attend without financial assistance
  • be unable to get funding from the companies they work for
  • be 18 years of age or older
  • agree to follow the DrupalCon code of conduct

First-time attendees might be given priority, and you can even apply if you have already bought a ticket but feel you qualify for a scholarship.

What you will learn

If you’re chosen for a scholarship and get a free ticket for DrupalCon 2021, you will have the chance to:

  • gain knowledge and skills
  • expand your network
  • learn from companies that leverage technology transformation opportunities within their industries
  • share your thoughts, opinions and perspectives with other people from the Drupal community
  • help drive Drupal’s continued evolution and success by actively engaging in the program at DrupalCon

How to apply

Submit your application by 20th of September 2021.
We hope to see you there and remember - when you win, we all win!

Anca Verniceanu

Frontend Developer

Apr 28 2021

Welcome on board! We are happy and thrilled (!) to have you as a Liiper! Our onboarding program starts the day you sign your work contract. This is what you can expect.

Starting at a new company is a huge challenge. We feel you – been there, done that! ;-) That is why we have an onboarding program in place, which will hopefully make you feel part of the Liip family from day 1. Your recruitment process was a success – let's make your onboarding great too. So let’s jump right in!


Meet your onboarding buddy

Céline, Hendrik, Janina, Stefan... these are four of the almost twenty onboarding buddies at Liip, spread across all six locations. And you will get a buddy too! Your onboarding buddy is your go-to person, your sparring partner for social and cultural aspects at Liip.

They are usually based in the office you are joining, while not being part of your work team. Additionally, you get a work buddy for work-related topics, who tells you all about the projects you will work on. Responsibility and autonomy are key at Liip. We will support you in finding your way and embracing our values.

Your buddies will guide you through the onboarding program, providing you with an overview of the company, our processes and tools. Together, you will define your trial period goals and regularly meet to reflect and share impressions and feedback.

Last but not least, social gatherings! They are part of the #liipway, and your onboarding buddy aims to make you feel welcome at the (online) apéro, game session or yoga class – if you wish to take part. We strive for a successful and pleasant integration into Liip for you.

At the end of your three-month trial period, you will be invited to the so-called “trial period meeting”. This is a dedicated time to identify what has been moving you forward, what has been holding you back and what opportunities and challenges might arise. You will self-reflect and get feedback from peers as well as your onboarding buddy. In other words, this is our ritual to finalize the onboarding process and confirm your hiring.

Go with the flow

Everything is organized so that you can “just” surf the wave. Technically speaking, you will get what you need to work, such as a brand-new laptop, an ergonomic desk and chair, a monitor, a keyboard, a mouse, etc. All of it is organized on-site or sent to your place (maybe not the desk nor the chair ;-) ) in case you start your onboarding program remotely.

Thanks to a clear and easy-to-follow checklist (aka your onboarding kanban board), you have an overview of your progress within the onboarding program at any time. You will learn where to get support, how to contribute to Liip's external communication, how our salary system works, and much more.

Trainings are part of the program too. Holacracy, Agility and Feedback trainings are on the menu. And you will learn more about your personal education budget.

Take part in the #liipway

All you need is curiosity and enthusiasm. Let yourself be guided, and enjoy the ride! We can only advise you to take part in social and health activities to get a taste of the #liipway, either on-site (as soon as this is possible again) or online – coffee breaks, game sessions, yoga or boot camp courses, ...and apéros! Our tool meet-a-liiper-with-donut makes meeting new work colleagues from all over Liip, even during remote times, a walk in the park.

Don’t be shy and dare to ask questions (all of them!). How you experience the onboarding program is of the utmost importance to us. Please give us your honest feedback, so that we can keep on improving it.

Apr 22 2021

We used PowerPoint slides as a prototype (#nokidding) and tested an early draft of our content strategy with stakeholders. Their input and feedback improved the quality of our final deliverable and reduced risk and uncertainty early and cheaply.

Discover more about the services UX Design and Content our digital agency has to offer for you.

Content Strategies are not typically tested with users

And we didn’t plan to do so in the beginning. The project started out relatively conventionally. My colleague Caroline Pieracci was asked to do the Content Strategy for a Swiss retailer and invited me to join the team. As a UX Designer, I was asked to lead the stakeholder interviews for the client.
The first workshop was dedicated to gaining insights into the client’s strategy, their business goals and customer needs. To understand how we could help reach these goals with content, we proposed a core message. Of course, the main topics the client wants to talk about and what users are interested in were part of that too.

But why stakeholder interviews?

I mostly do stakeholder interviews at the very beginning of a project, to understand the stakeholders’ goals, needs and pain points. Doing them at this stage seemed a bit late in the process to me. When asked this question, the client explained that they wanted to inform the stakeholders, get their feedback, understand whether the content strategy was valuable for them and get their buy-in. During the reflection with Caroline about everything we had heard from the client, I felt that what they actually wanted was to communicate and to test their Content Strategy.

A prototype to communicate important aspects of your project

Prototypes serve various purposes: to explore different solutions and opportunities; to evaluate a solution, reduce the number of options and decide what to focus on; or to communicate important aspects of your project. A communicative prototype can ignite meaningful discussions with your stakeholders and reduce friction and misunderstandings right from the start. It can be a valuable strategic tool to present, convince and inspire your management or stakeholders. That is why our client was all in, and the prototyping began.

We used a PowerPoint presentation as a prototype #nokidding

Caroline had a poster in mind as the final deliverable: a nicely designed visualisation of the core messages for the team to hang up in their office. In the end, our prototype was a PowerPoint presentation, as this is the main medium of communication used by our stakeholders. It explained each part of the strategy and how it was created. The presentation was not polished at all, nor was it designed, and it had some blanks and question marks in it. We even put a “draft” sticker on each slide to make sure that the presentation was not judged by its bad looks.

Five interviews in one day

The testing was done remotely, all in one day and with the whole project team present. From the stakeholder map that we prepared in the kick-off meeting, our client picked the five most important people they wanted to talk to. We had 10 min to present the prototype and 20 min to ask our questions.

  • After seeing our content strategy prototype, what questions do you have? What is unclear for you?
  • What is your first impression of the content strategy, what goes through your mind?
  • What is still missing in our content strategy?
  • What is superfluous?
  • What opportunities do you see when we implement this content strategy?
  • What risks do you see?
  • What does it take to make this content strategy a success?
  • Is there anything else you would like to say on the subject?
The whole project team took notes. After each interview, all the details were collected on our Miro board. This took about 20-25 minutes, and after a short break the next interview took place. The technique and the questions come from Jake Knapp's book Sprint and felt great to use in this context.

Confidence to release the Content Strategy

The feedback was very positive and gave the client a great boost and a lot of confidence that they were on track. The stakeholders were pleased to see the Content Strategy at an early stage. Their feedback also pointed out a couple of things that were missing or not clear enough.
The next day we met again, put all the feedback from the interviews in a huge “pile” on the Miro board and prioritized it to decide what we would implement for the first release of the Content Strategy. We used the MoSCoW method and organised the items into “must-haves”, “should-haves” and “won't-haves”.

Continuous adaptation

The team that would mainly work with the Content Strategy on a daily basis was invited to give their feedback on the prototype too. We are planning to invite them again after they have worked with the Content Strategy for a while, to understand what works well for them and what doesn’t. It is a living document that needs to be adjusted and improved regularly.

Reduce risk and uncertainty

Experimentation, prototyping and testing are not always easy, but this project went down really smoothly, and the collaboration with the client was great. The only challenge was to keep the prototype basic enough and to avoid putting too much effort into it or trying to make it perfect. Something we’re not used to in our industry, where every typo is a sign of incompetence and a bad image can damage our customers' trust in us. The whole process was done in a few weeks and without a huge budget.
But what was the value for the client in the end? I find these words from Service Design Doing sum it up perfectly: “Prototyping is an essential activity to reduce risk and uncertainty as early and as cheaply as possible, to improve the quality of your final deliverable and eventually implement your project successfully.”

Apr 20 2021

Our circle recently revisited our definition of done. One point: code reviews. They did happen, but often couldn't unfold their full potential. Question: How could we improve them?

Answer: It's complicated. As always. But bear with me!

A few words about code reviews

From my point of view, code reviews should be an integral part of agile software development. There's an IEEE standard for code reviews that defines all kinds of reviews, but I won't go into much detail here.

For me, the main goal of code reviews should be to improve collaboration within a development team and to improve the quality of the product. Each and every review should be seen as a learning opportunity for both the person reviewing and the author of the change.

Our approach to improvement

It's generally a good idea to know and analyze the status quo before implementing any change. And what better way to measure how code reviews are done than by doing actual code reviews? No sooner said than done!

So we created a repository with a small application. Nothing too fancy, a simple calculator for the CLI written in Python, no libraries, no frameworks. Next, we created a bunch of merge requests and let all the devs review them in pairs in a workshop. We gave them around 30 minutes to review the code as they usually would.
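
To give a sense of the scale, the application under review looked roughly like the sketch below. This is an illustrative stand-in written for this post, not our actual repository, and it deliberately leaves room for findings (for instance, there is no guard against division by zero).

# calculator.py – an illustrative stand-in for the small CLI app we reviewed
import sys

# the supported operations; note that division has no guard against zero yet
OPERATIONS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b,
}

def calculate(a: float, op: str, b: float) -> float:
    """Apply the operator to the two operands."""
    if op not in OPERATIONS:
        raise ValueError(f"Unknown operator: {op}")
    return OPERATIONS[op](a, b)

if __name__ == "__main__":
    # usage: python calculator.py 3 + 4
    left, operator, right = sys.argv[1:4]
    print(calculate(float(left), operator, float(right)))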

And then came the fun part:

Reviewing the reviews

Each team explained how they did the code review, what they focussed on and what kind of problems they found. The results were rather interesting: There was a set of problems every team found, but also some issues that were unique per team.

And there's a good reason behind that: The approaches were vastly different. One or two teams started by cloning the repo and checked if they could get it running. Others started to analyze the code to find places it could crash. Yet another team read the documentation and the tests first, checking whether they were up to date.

They were all doing a code review, and they were all doing it very differently. There's nothing wrong with that. Every approach is good and has its advantages, as long as it's not the only thing that's looked at.

The goal of this workshop was not only to analyze how code reviews had been done so far. We also wanted to establish a common understanding of how code should be reviewed and why: a set of best practices and guidelines. We also added some well-established guidelines into the mix.

Some of our Dos and Don'ts

This is a structured, boiled-down version of the best practices and guidelines we established during the workshop.

A review has three phases: preparation, the actual review and post-processing. There are two parties involved: the author and the reviewer.

In general, for everyone

  • Every change is an opportunity to do a review
  • Avoid reviewing too much code at once. Going beyond 200-400 LOC per hour might have a negative impact on the quality of the reviews
  • Don’t assume malice
  • Strive to create learning opportunities for both the author and the reviewer
  • Be kind
  • Let different people review your code
  • Review code of others on a regular basis
  • Seniors are encouraged to let juniors review their code, too
  • Juniors are encouraged to ask to perform reviews
  • Code style and conventions can be checked automatically: Consider introducing automated sniffs/checks to eliminate any inconsistencies beforehand

Before a review, for the author

These are some steps that can be taken in order to prepare the code for the review. They aim to streamline the process and catch some points upfront.

  • Review the code yourself
  • Ask for a review proactively
  • For larger or more fundamental changes, several people may be asked to do reviews. They can do a review together or independently. This has the added benefit of more people knowing about a change and might induce further discussion about the architecture
  • Make sure it’s understood what your code tries to achieve
  • Add a comment to the ticket for handover
  • If a solution is complex: Explain why, ideally directly in the code
  • Comments on the change can also be added by the author before a review happens. Those can be used to explain things where comments in the code are not possible (JSON files etc.)
  • Alternatively, do pair programming or even group programming

During the review, for the reviewer

These tips aim to optimize the quality of the review and its results. They also make sure that the review is exhaustive: covering edge cases, checking documentation, trying different approaches in your head and identifying opportunities.

  • Don’t neglect positive feedback: If something is well-structured, especially elegant etc., point these things out, too
  • Ask yourself how you would have solved the problem at hand
  • Ask for information if it is not present. Rule of thumb: If the author needs to explain something, the code should be adapted to either explain itself or contain an adequate level of documentation
  • If a solution doesn’t appear “right”, try to come up with alternative solutions
  • Keep a balance between showing how to do it right and pointing the author in the right direction
  • State whether your proposal is necessary (has to be fixed) or a suggestion (might be fixed for better understanding etc.)
  • Ask open-ended questions
  • Check for edge cases (such as unexpected input, NULL, MAX_INT, MIN_INT, “false”, FALSE, etc.) and try to force the code to fail – see the sketch after this list
  • Consider under which conditions the code would not behave correctly anymore (loading too much data, too many users, etc.).
  • Encourage simplification where possible
  • Look for shortcuts that should not have been taken
  • If you deem it necessary, ask someone or multiple people to assist with the review or do the review together. This has the added benefit of inducing discussions about the solution and more people knowing about a certain change
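
To make the edge-case guideline concrete, here is the kind of quick probe a reviewer can write against the calculator sketch from earlier in this post. It is a hypothetical example rather than part of our actual checklist, and it assumes the sketch is saved as calculator.py.

# test_edge_cases.py – hypothetical reviewer probes against the calculator sketch
import pytest

from calculator import calculate

def test_division_by_zero():
    # currently bubbles up as ZeroDivisionError – a reviewer might ask whether
    # the CLI should catch this and print a friendly message instead
    with pytest.raises(ZeroDivisionError):
        calculate(1, "/", 0)

def test_unknown_operator_is_rejected():
    # unexpected input should fail loudly, not silently
    with pytest.raises(ValueError):
        calculate(1, "%", 2)

def test_huge_operands_overflow_silently():
    # floats overflow to infinity without raising – is that acceptable here?
    assert calculate(1e308, "*", 10) == float("inf")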

After the review, for both

These three points target post-processing.

  • Walk through the review if necessary or helpful
  • Do pair programming if necessary or helpful
  • Review the following changes as well, if possible. Depending on the size or complexity of the code change, a pragmatic process of solving issues together can be preferable over several disconnected feedback loops

Takeaway thoughts

The feedback for the workshop was amazing. People loved the discussions that it sparked. The nice thing about this list is that everyone can agree with it. The whole dev team was involved. Everyone contributed and learned a lot along the way. It was a true collaboration!

Which of these guidelines would you adopt in your process and why? Leave a comment!

Apr 14 2021

In 2019, Romande Energie launched a pilot project on three levels: the redesign of the client area, the creation and training of a development team, and a transition towards agile development. We helped Romande Energie with this multifaceted transformation.

Discover more about the services UX Design, Custom Development, CMS, Agile Organisation and Agile Teams and Processes our digital agency has to offer for you.

Kim Bercht, as an Experience Designer and Product Owner at Romande Energie, why did you get in touch with us? What challenges had to be overcome?

We contacted Liip as part of the project to redesign our client area. At the time, we faced several challenges. We wanted to:

  • Include the client in a co-creative approach in developing a client area that met their needs as effectively as possible
  • Regain control of our apps and develop them independently without having to rely on an external partner
  • Set up a new technical architecture that is both modern and flexible
  • Speed up the development and delivery of new functions using an agile approach
  • Train our development teams to deliver a first version of the client area within a very short time frame

Why did you choose Liip to help Romande Energie? What elements played a role in making your choice?

For our first project to be developed using an agile approach, we needed a partner that could help us build our online platform whilst also supporting us to learn the Scrum methodology.

Being experts in this methodology, Liip was also able to pass these skills on to our teams. You coached us as we learned how to take on the roles of Scrum Master and Product Owner. You also taught those involved in the project all about how agility works. This all-round support was extremely beneficial for us.

In addition, you supported our internal developers to learn and get started with new technologies and still do so. Liip also temporarily bolstered the Romande Energie team, which initially consisted of just two developers.

How did your first agile project go for you? What were the initial results?

Our first sprints were very intense. We had to familiarise ourselves with the Scrum methodology and our respective roles, set up project monitoring tools, finalise our technology stack choice, and line up the first functions to be developed. And not to forget ensuring a good team spirit when the first lockdown arrived.

Given the scale of the challenges to be tackled, which went well beyond the client area redesign itself, we felt somewhat discouraged at the beginning of the project. Agile work requires a complete change of mindset. Contrary to what you might think, agility needs a great deal of discipline.

Adopting this methodology enabled us to focus on one problem at a time and not spread our efforts too thin. After ten sprints, we had sorted out the major bottlenecks and were progressing at a good pace with a better understanding of our roles. I want to point out that our team could solve every problem that came along and remained motivated despite working from home.

Agility requires us to be completely transparent and open across the team, enabling us to question our approach more frequently and ensure there is an ongoing improvement. A clear benefit of this methodology!

During the first sprints, we realised that creating a new client area raised issues beyond improving the user experience for clients. How did you manage these additional challenges? How has Liip’s involvement helped you?

The project’s scope evolved to incorporate changes to our platform’s technical architecture by boosting the security of our customer data, and improving the integration into Romande Energie’s online ecosystem.

To prepare for the sprints in the best way possible, we had to revise our internal organisation so that we could integrate our technical SAP provider. This resulted in a significant amount of extra work in terms of analysis, development and coordination.

In my role as the Product Owner in this project, this represented an additional level of complexity that I was not familiar with, given that I am not a developer myself. We therefore had to prioritise purely technical developments so that we could implement the first functions that would offer tangible value for our end users.

Thanks to my exchanges with Thierry, my Product Owner coach in this project, I was able to gain new skills that enabled me to overcome these challenges. I am now much more autonomous in decision-making and prioritising my tasks.

Can you give us an example that best illustrates Liip’s agile approach to work?

I have three. You were able to:

  • Adapt the project as we went along to meet Romande Energie's additional technical requirements.
  • Shape a clear vision of the features we would offer our clients, and prioritise their deployment.
  • Integrate and train a new developer for the Romande Energie internal team in record time.

When you look back on your journey today, what do you think you will take away from this adventure? Would you do anything differently?

I was impressed by the quantity and quality of the work completed by the entire team. At Romande Energie, we started from scratch when introducing new technologies and a new working methodology (agility).

We were able to establish a strong team spirit during sprints. Agility also enabled us to continually find areas where we could improve, even after ten sprints!

What I would do differently is undertake less preparation and detailed work in advance on the functions to be implemented. It is important to have a vision of the elements that need to be developed during the next sprint, but no more than that. The project naturally adapts to changing requirements. If you plan too far ahead, you run the risk of reworking the same elements multiple times, and thus of not making optimum use of your time.

And finally, how would you describe your collaboration with Liip in three words?

Supportive, kind and sympathetic!

Apr 11 2021

What if progress was nothing but a series of experiments?

Discover more about the service Agile Organisation our digital agency has to offer for you.

This idea – that progress happens best as a series of experiments – lies at the heart of the Lean Change Management approach. It still puzzles me, even though it is nothing completely new to me.

A few reminiscences of my work life to navigate this tension…

Development spikes

The first echo comes from my Agile software development days, when from time to time the team would get its joker out and propose to the product owner: "we need a spike for that". A what? A spike. A time-boxed (mostly technical) exploration of a possible solution. A spike, contrary to a user story, doesn’t deliver an increment, but insights.

Those spikes were attempts to break through into the unknown. Yet clients were reluctant to pay for that, calling on our expertise to make the right decision at the first shot.

User tests on interface mockups

The idea of experiments also calls up memories from my UX design days, when I desperately tried to sell some user testing of my interface skeletons. The clients would rather blindly believe my expertise and be seduced by the skeletons themselves.

Here again, clients were unwilling to invest in something that did not seem like a deliverable and that would bring more questions than answers, more problems than solutions.

Online marketing

That field, in which I also worked for a few years, is full of experimental mindset. It’s a messy field full of tips and tricks, rumours and weird beliefs. Not a science at all.

Everyone who has been in the field long enough thinks in experiments and forecasts success in a probabilistic manner. Many established terms of online marketing denote the experimental nature of acting in this field: A/B testing, growth hacking, …

Holacracy Pilots

Nowadays, introducing Holacracy to organizations, I have come to recognize the "power of pilots". Clients ask: "Is it reversible? Until when can we decide to interrupt the process? Can we keep it an experiment and not make any promises?"

Here, interestingly, there seems to lie some interest in seeing that kind of change as experiments, as things we try out before we decide.

So What?

The doers know why it’s worth experimenting

You can see from this short recollection that the "ones who execute" know why it’s worth experimenting. In all the situations described above, it’s the doers who wish for it.

Experimentation is a dialogue with uncertainty

At the heart of every experimentation, the same few words: "we don’t know how …", "we would like to figure out if …", "we should validate that …", … Experimenting allows exactly this: a dialogue with uncertainty. Every experimentation is a communication channel with the unknown, a probe in some remote corner of the (yet im-)possibles.

Experimenting is not toying around

Most experienced clients tended to say yes to the crucial spikes, user tests and experiments. They also scrupulously checked what results the experiment would concretely bring – what it was that we were trying to move from "uncertain" to "tried" or even to "proven" – and often challenged us on this.

Interestingly, Lean Change Management brings a nice template to formulate change experiments; a formula which covers such considerations:

We hypothesize that
by [making this change]
we will [achieve this outcome]
which will have [these benefits]
as measured by [this measurement].

A formula I wish I had had with me throughout these years.

What about adding a bit more experimentation to your work life?

Artwork: LoC

Apr 09 2021

My setup with SvelteKit and Tailwind. Thanks to Windi CSS and its Vite plugin, it's very fast and simple to set up.

TL;DR

Check out this repo: github.com/munxar/sveltekit-tailwind
thanks and goodbye :-)

Setup SvelteKit

mkdir myproject
cd myproject
npm init svelte@next

Note: I chose TypeScript and plain CSS when asked by the wizard, but you can choose whatever you like.

Install Windi CSS

Because SvelteKit now uses Vite 2 as its build tool in development mode, we use vite-plugin-windicss.

npm i -D vite-plugin-windicss

Now I import the plugin into the svelte.config.cjs file:

// svelte.config.cjs
const sveltePreprocess = require('svelte-preprocess');
const node = require('@sveltejs/adapter-node');
const pkg = require('./package.json');
const WindiCSS = require('vite-plugin-windicss').default

/** @type {import('@sveltejs/kit').Config} */
module.exports = {
    // Consult https://github.com/sveltejs/svelte-preprocess
    // for more information about preprocessors
    preprocess: sveltePreprocess(),         
    kit: {
        // By default, `npm run build` will create a standard Node app.
        // You can create optimized builds for different platforms by
        // specifying a different adapter
        adapter: node(),

        // hydrate the <div id="svelte"> element in src/app.html
        target: '#svelte',

        vite: {
            ssr: {
                noExternal: Object.keys(pkg.dependencies || {})
            },
            plugins: [
                WindiCSS(),
            ]
        }
    }
};

Very basic stuff, import the plugin and add it to the vite plugins section.
Because the Windi CSS plugin is written as ES Module (ESM) and this file is a Common JS module (CJS), I need to add the ".default" to make it work.

Enable Windi CSS

Open the src/routes/$layout.svelte file and add this import:

<script>
  import "virtual:windi.css";
</script>

Enable Developer Tools integration (optional)

Sometimes I like to tinker in the developer tools of the browser while prototyping. This can be done with this optional import:

<script>
  import "virtual:windi.css";
  import { onMount } from "svelte";

  onMount(() => {
    // only load the devtools when running in a real browser
    import("virtual:windi-devtools");
  });
</script>

By putting the import('virtual:windi-devtools') into the onMount callback, we delay the loading until the code runs in a real browser environment, rather than during server-side rendering.

Add Tailwind Plugins / Config

I need to add some settings, so I use my plain old tailwind config file. There are two important details:

  1. The file must be named tailwind.config.cjs (not .js)
  2. If you need plugins, import them from windicss/plugin

Here is my example config:
// tailwind.config.cjs

module.exports = {
    dark: 'class',
    plugins: [
        require('windicss/plugin/forms'),
        require('windicss/plugin/aspect-ratio'),
        require('windicss/plugin/line-clamp'),
        require('windicss/plugin/filters'),
        require('windicss/plugin/scroll-snap'),     
    ],
}

Done. Really? Yes!
We have everything set up to use what Tailwind offers and much more. Check out the Windi CSS documentation for the amazing details.
I use VS Code as an editor and love the Windi CSS plugin – check it out: WindiCSS IntelliSense

Recap

This is everything I had to do. IMHO, Windi CSS has some very nice advantages over plain Tailwind CSS (even over @tailwindcss/jit).
If you have some questions or inputs drop a comment.
Build some amazing web applications!

Apr 05 2021

We recently made the rokka.io Symfony Bundle support uploading images from Twig templates. Sculpin is based on Symfony and can thus use this bundle.

Discover more about the services CMS and SEO our digital agency has to offer for you.

Sculpin is a generator for static HTML websites. You write your content in Markdown, Twig Templates or HTML and Sculpin generates the whole page from it. An all-static HTML website is very easy to host, as you only need something to deliver static files like an AWS S3 bucket.

rokka.io is an image CDN that scales and optimizes images for the web. We recently released an update to the Rokka Symfony Bundle that adds support to upload images from Twig templates. As Sculpin is based on Symfony, Bundles can be loaded and configured like with a normal Symfony application.

In this blog post, I will show some examples of how the two can be used to build a static website with images scaled and delivered by rokka.io.

Background

My personal website has been running on Drupal 6 since forever. I started a couple of times to move the website to a new platform, but always ended up doing something else instead of finishing that project... After a year of Covid slowdown, I finally got around to actually finishing the rewrite.

In one of my previous efforts, I had written a script that extracted all Drupal nodes into markdown files, including tags, image references and everything. With Sculpin, I was now able to generate the website. However, the images were always included in the original file size and scaled in CSS by the browser. That works, but is of course bad for bandwidth. I started pondering some automation with Imagemagick to generate different image sizes, but Imagemagick is not trivial to work with.

rokka.io offers a very convenient approach: you upload the image through an API and receive a hash for the image that is based on the image content, so it stays the same if you upload the same image again, but changes if the image is edited in any way. With the hash, you then build a URL to get the image from the rokka.io servers. Besides the hash, the URL specifies the "stack", the name of a configuration for how to render the image. The stack allows you to scale images and do all kinds of operations. The image is delivered through the rokka.io CloudFront CDN and is highly optimized.
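
As a rough illustration of how such a render URL is assembled from the organization, the stack and the content-based hash: the exact URL pattern used below is my assumption based on the description above, so treat it as a sketch and check the rokka.io documentation for the authoritative format.

# rokka_url.py – illustrative only; the URL pattern is an assumption, not the official spec
def rokka_render_url(organization: str, stack: str, image_hash: str, fmt: str = "jpg") -> str:
    """Build a render URL: the stack decides how the image is processed and scaled."""
    return f"https://{organization}.rokka.io/{stack}/{image_hash}.{fmt}"

# the same content hash rendered through two different stacks
print(rokka_render_url("myorganization", "preview", "0dcabb83..."))
print(rokka_render_url("myorganization", "large", "0dcabb83..."))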

So I wrapped up the Twig integration of the rokka-client-bundle. With release 1.3.0, it is now possible to upload image files to rokka.io while rendering a template.

Note: When using a static site generator, this is the most efficient way of handling things. If you use an interactive CMS, it is better to upload the images when they are uploaded by a content creator. In a CMS, the template with the image might only be rendered when the page is viewed by a client, and the first hit would get a penalty from uploading the image on the fly. If that delay for the first page load is acceptable to you, there is no technical reason preventing you from using the Twig methods in a CMS as well.

Setup

If you don't have an existing sculpin project, set it up with:

composer create-project sculpin/blog-skeleton myblog

Next, add the rokka bundle:

composer require rokka/client-bundle

This installs the code; now we need to enable it by loading the bundle in the kernel. If you don't have a kernel in your Sculpin application yet, create it (otherwise, add the RokkaClientBundle to the getAdditionalSculpinBundles method):

# app/SculpinKernel.php

<?php

use Sculpin\Bundle\SculpinBundle\HttpKernel\AbstractKernel;

class SculpinKernel extends AbstractKernel
{
    protected function getAdditionalSculpinBundles(): array
    {
        return [
            // bundle class shipped with the rokka/client-bundle package
            \Rokka\RokkaClientBundle\RokkaClientBundle::class,
        ];
    }
}

In the ./vendor/bin/sculpin cli, we now have the rokka commands. If you don't already have an account, you can create one with rokka:user:create. Copy the API key that is returned. If you don't yet have a rokka organization that is used for your project, create one with rokka:organization:create.

Then edit the kernel configuration file to add your API key and the organization name:

# app/config/sculpin_kernel.yml
rokka_client:
    api_key: 
    organization: myorganization

By default, image paths are expected to be relative to the public/ folder. This works for Symfony applications, but the Sculpin skeleton does not generate this folder. You could place the images in the source/ folder, but then the images would also be copied to the output, which wastes space on the server – the images are never loaded from your web server, but directly from rokka. I created a folder rokka_images/ at the top level of my Sculpin installation and configured the path to that folder:

# app/config/sculpin_kernel.yml
rokka_client:
    ...
    web_path_resolver:
        root_dir: "%kernel.project_dir%/rokka_images/"

With the help of the sculpin cli, I used the rokka:stack:create command to create 3 stacks:

  • preview: scale to max 130x100 pixel
  • large: scale to max 1024x768
  • original: no operations, to download the image in original size

Rendering Images

Now that everything is set up, we can include the actual images in the page. A single image link with a preview image can be done with:


    

When the page is generated, the rokka client uploads the image and generates the correct image URL. To avoid having to re-upload every single time, the client stores the rokka hash. By default, the hash is stored in a file named like the image with .rokka.txt appended. The rokka twig integration allows other strategies to store the hashes, but those are not exposed in the bundle configuration. If you want to use a different strategy, you need to define your own services or do a pull request to make the rokka bundle configuration more flexible.

Note: it seems that the Twig extension does not notice when an image file has changed and keeps using the existing hash. The easiest workaround I found was to delete the hash file when changing the image, which triggers a re-upload.

Referencing the image directly in a content file is but one option. To specify a teaser image for every blog post, we can have the content file specify an image:

---
title: My Post
image: my-teaser-image.jpg
---
Markdown (resp. HTML) content of the post

In the overview template, we can show the image and link it to the post:

{% for post in page.pagination.items %}
    ...
    {{ post.title }}
    ...
{% endfor %}

And then in the post, we can include the image again, e.g. as banner in the header or whatever makes sense for our design.

The Twig extension offers a couple more filters and functions that can be useful for some scenarios. Have a look at the documentation.

Outlook

From this, we can extend to image overlays or even a full-blown image gallery with a slide show. I used the Bootstrap modal and carousel components. Find the sample code in this gist.

If you want to see how much rokka.io would optimize the images in your website, use the savings calculator.

Mar 25 2021

Many Swiss people are directly or indirectly affected by addiction. SafeZone offers online counselling for those affected, their friends and family, specialists, and interested parties. Digital addiction counselling is a vital service for people like this.

Discover more about the service Custom Development our digital agency has to offer for you.

The SafeZone website is one of the health services offered by the Federal Office of Public Health (FOPH). It provides anonymous access to addiction support whenever and wherever it is needed. There are also links to specialist departments, self-help tools and further information. People seeking help can access anonymous addiction counselling from anywhere and at any time, without an appointment – which is how emergency aid works! Addiction experts from 24 specialist departments all over Switzerland are on hand, in three different languages.

‘SafeZone helps’ is our vision for SafeZone.ch and the counselling tool behind it. This means helping those seeking advice to overcome their worries and challenges, and supporting professionals in their counselling work. We take this hugely meaningful task extremely seriously. From data retention to processes through to the design, we developed SafeZone to offer a smooth, case-specific, secure and low-threshold transition from offline to online counselling (i.e. blended counselling).

Addiction and trust

Given the taboo subject of ‘addiction’, trust was of vital importance - read more about mutual trust between those involved in the project as a success factor. We really appreciate the fact that users got involved in the open process of human-centred design. This meant that we could refer to actual needs rather than just hypotheses when making decisions. Feinheit is committed to human-centred design and implemented it in this project.

Low-threshold considerations were also key. Would people seeking help click on a suitable offering, or might they feel monitored? How long are people happy to wait in a chat until they are connected to a specialist counsellor? People in need of assistance may be desperate, or seeking help in secret. This may also be the case with friends and family members. Being discovered consuming illegal substances is also an understandable worry. Therefore, it is vital that all online actions are absolutely anonymous, except when a person seeking help wishes to contact a specialist directly. People seeking help are spread across all age groups and layers of society, which means the relevant target groups are diverse: those immediately affected as well as their friends and family. The website is tailored to their needs, while the counselling platform focuses on specialists and people seeking help.

As well as public services, such as various chatbot self-tests, public questions and answers, and of course lots of information, users can also request online counselling from a specialist. Every single page of the website offers a link to counselling, so that visitors never need to search for it.

Counselling: online, secure, anonymous

People seeking advice can ask their questions quickly, easily and anonymously, and will receive an answer within no more than 72 hours. Once an anonymous question is submitted, it lands on the counselling platform and is automatically assigned to a suitable specialist. Counselling can be a one-off service or extend over a longer period that also includes personal sessions. This is the concept of blended counselling, which can take place flexibly, both online and offline. The part of the counselling application visible to people seeking help is fully responsive, to allow complete use on a desktop, tablet or smartphone.

Counselling platform

On the counselling platform, communication is supported by two other open source tools – Mattermost and Jitsi. These enable group calls for discussion among counsellors, as well as supervisions and intervisions via Jitsi. Whether group or individual calls are required by the counsellors is decided individually, on a case-by-case basis. In the future, Jitsi will also be usable for client counselling. These conversations will be password-protected and can take place in a browser (without login or an app).

The counsellor tool needed to be very clear, especially given that counsellors need to manage different workspaces in parallel: physical locations such as counselling rooms, the main organisation’s mailbox, and the SafeZone tool.

  • Counsellors can view current counselling by date. As well as the username and title, counsellors can also see the most recent activity and anonymisation status.
  • Completed counselling is found in the ‘archive’, which can be easily sorted using smart filters.
  • Requests are automatically recorded as a file or assigned to an existing file.

High security standards and data protection for all

The website's service ensures absolute anonymity for people seeking help and advice, even if an account is required. All this account requires is a freely selectable username. Providing an email address is voluntary. The application runs on Swiss servers in a computer centre in Bern. The infrastructure is operated in docker containers in a Kubernetes cluster. A strong model of roles ensures that only one counsellor is granted access to each non-anonymised case. All other roles can only see the request in anonymised form. The software underwent two comprehensive external reviews, and of course, meets all security standards and the highest data protection requirements.

Mar 25 2021

SafeZone.ch was created with full focus on data security, as well as using the latest languages and frameworks. The FOPH’s online addiction counselling was redeveloped during the COVID pandemic.

Discover more about the service Custom Development our digital agency has to offer for you.

The solution

SafeZone.ch has been part of the services offered by the Federal Office of Public Health (FOPH) since 2016, and supplements existing local addiction counselling services. The online portal provides anonymous access to addiction help, from anywhere and at any time. There are also links to specialist departments, self-help tools and further information. The counselling portal was replaced as the core of the counselling services in 2021. Addiction experts can use the application to provide online counselling to those in need of help, quickly and at any time – which is how emergency aid works!

In just one year, we have developed an application that helps. This would have been impossible without agility. And we are also happy to share the development components.

The frameworks

The primary bases used were the Python framework Django for the application and Vue.js for the online front end. The application elements are connected to each other using a REST interface. Both frameworks are open-source and well established. They served as a basic structure for the individual implementation of the software.

Django

Django provides a lean toolbox for developing web applications that is particularly suitable for individual app development, as it provides the necessary basic tools without (strictly) defining the application architecture.

These tools make it easier for developers to create basic functionality via a robust ORM, a graphical interface for system administrators, and a smart user and role system. A standard extension can also be used to easily attach a decoupled front end to the whole system as a REST API.
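
To make this concrete, here is a minimal sketch of what such a setup can look like. The model, fields and route names are made up for illustration and do not reflect SafeZone's actual data model; the REST layer assumes Django REST Framework as the standard extension mentioned above.

# models.py / api.py – illustrative sketch only, not SafeZone's real code
from django.db import models
from rest_framework import routers, serializers, viewsets

class CounsellingRequest(models.Model):
    """A hypothetical example model: an anonymous request handled by a counsellor."""
    username = models.CharField(max_length=100)      # freely chosen, stays anonymous
    question = models.TextField()
    created_at = models.DateTimeField(auto_now_add=True)

class CounsellingRequestSerializer(serializers.ModelSerializer):
    class Meta:
        model = CounsellingRequest
        fields = ["id", "username", "question", "created_at"]

class CounsellingRequestViewSet(viewsets.ModelViewSet):
    queryset = CounsellingRequest.objects.all()
    serializer_class = CounsellingRequestSerializer

# urls.py – expose the endpoints to the decoupled Vue.js front end
router = routers.DefaultRouter()
router.register(r"requests", CounsellingRequestViewSet)
urlpatterns = router.urls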

Vue.js

Vue.js is a popular framework for creating creative, lean, high-performance browser-based interfaces. Vue.js comes with several add-on modules that expand the framework, for example with an API connection. Its wide distribution means that the Vue.js ecosystem also offers a broad range of standard components such as a date picker, input validation, modal dialogues, accordions and much more. Vue.js also has various prefabricated UI libraries that enabled the counselling front end, in particular, to be implemented quickly.

The front end was developed using a modular approach: the functionalities are grouped into components and prioritised, meaning that they can be flexibly used and exchanged in various ways.

Modular structure

The module for people seeking help is designed to be incorporated into the websites of the various Swiss addiction counselling centres without any further effort. This means that the application can be used regardless of the addiction counselling provider, thus unifying the use of the counselling app. Synergies in applying the tool are easy to exploit, which, among other things, makes it possible to bundle the initial outlay and save costs. The tool enables interaction beyond cantonal borders and between addiction counselling centres – always with the necessary data protection. This means that the platform offers genuine added value, not just for all Swiss citizens but also for counselling centres.

Mar 25 2021

Agility in specification sheets? SafeZone.ch is an excellent example of how this can work. We show how agile development supports the solution and answers questions.

Discover more about the service Custom Development our digital agency has to offer for you.

‘SafeZone.ch’ is the national web platform providing online counselling relating to addiction. This project involved redeveloping the counselling application. Our client was the Federal Office of Public Health (FOPH), and the application was used by Infodrog, the Swiss Office for the Coordination of Addiction Facilities. The application is a platform for counselling those affected by addiction, their friends and family. It all began with a call for tenders as part of a procurement contract...

Hand on heart: our implementation teams are often not keen on public administration projects. The problem usually revolves around handling the tender specification sheets. When projects focus on working through the content of specifications defined early on, the actual objectives and considerations about the best ways to achieve the project are secondary.

The crunch point is precisely where we move from the tender to the actual work. Can we stay on the critical path for the next key priority? Or will a large backlog result in us doing everything, but never doing the important things well enough? How does this work, and what are the success factors?

What is blended counselling?

One key objective was to facilitate blended counselling via the application. Blended counselling means that counselling can take place both online and offline with a smooth transition between the two. The focus is flexible for each case. This means that counselling also takes place as a type of chat. For people affected, who might need help at short notice and in secret, this counselling service needs to be self-explanatory and accessible. The application is also a working tool for counsellors to help them coordinate their input and cases. Knowledge is also exchanged via the counselling platform, and it hosts intervision, supervision and coaching.

A project involving four service providers and two clients

The counselling platform needed to have multi-client capability. The project was implemented in accordance with Hermes 5.1. It also included migration from the old application to the new one. Of course, there were very strict security requirements, especially as sensitive data is involved. And if that were not enough, SafeZone.ch involved many people from different organisations: as well as the clients FOPH and Infodrog with their consultants, there was also another digital agency responsible for the part of the web portal accessible to the public, plus two CMS web service providers, a chatbot provider and external project management.

The beginning of the project was the crucial point. A business analysis helped us understand counselling using the existing application. Numerous complex operational processes resulted in an increasingly broad scope. This was also combined with known future needs. All of this came crashing down on the project team after the first month. In terms of implementing the project, it quickly became clear that a minimum viable product (MVP) approach was required.
The MVP focused on ensuring that the new tool functioned properly, as counsellors needed to continue their work. It was also about combining the safety of scope with an agile, iterative process in such a way that it was possible to start the first sprint.

What did this require?

  • Transparency: the knowledge of all stakeholders was incorporated when establishing the starting point. Everyone understood this starting point and what it meant. Feelings about this may have differed :-)
  • Timing is key: it is vital to identify risks and problems as early as possible and make them transparent. If these are made painfully evident, then they can be resolved. This enables a feedback loop that becomes an active part of the process.
  • Being courageous: the courage to make maximum use of the creative scope available within a tender under procurement law.
  • Trust in all parties involved: a foundation of trust that holds firm in the face of all uncertainties.

It is also worth asking the following questions:

  • Is it really required?
  • Can a feature be implemented operationally instead, i.e. via workflows between users rather than using software logic?
  • Is there a suitable ready-made module such as a chat solution? This can sometimes radically simplify a large chunk of scope.

And yes: ‘Exceptional results wonderfully combined with Hermes 5.1’ can only be achieved with the right project culture.

Ultimately, we opted for an elephant. It is not yet fully grown and needs to develop additional skills, but it provides counsellors and people seeking help with secure, reliable support.

The elephant emerged during the initial implementation and was ‘born’ small in size but in finished form, and it is continuing to evolve.

P.S. If only one of these success factors had not been given, things would most likely not have gone quite so well:

Too little courage

The product is complete in terms of the plan or specification sheet. The full scope is minimal but not necessarily implemented in a ‘valuable’ way.

Too little feedback (agile fail)

The elephant was implemented iteratively. Unfortunately, there was a lack of coordination in terms of the vision and the big picture. Either because the vision and mission were not (correctly) understood initially, or because people lost sight of it during implementation. Alternatively, risks were spread across the duration of the project or were not thoroughly resolved.
Feb 23 2021

Join me for a walkthrough of the steps I had to perform to keep track of Drupal 9 compatibility, upgrade underlying tools like Composer 2 and BLT 12, and adapt contributed as well as custom modules to be compatible with Drupal 9.

Discover more about the service CMS our digital agency has to offer for you.

Recently, I had the opportunity to upgrade one of our projects from Drupal 8 to Drupal 9. In this blog post, I would like to share some of the learnings I had while completing the upgrade. As you might expect, updating from Drupal 8 to Drupal 9 involves relatively few steps on the application layer. Most contributed modules are Drupal 9 ready, and only a few exotic modules required me to work on a reroll of a Drupal 9 compatibility patch.

1. Keep track of Drupal 9 compatibility using Upgrade Status

To get started, I used Upgrade Status to analyse and keep track of the Drupal 9 readiness of the site.

It takes a while to scan all modules, but the UI is really helpful in identifying what is left for you to do. Follow these steps:

Run a full index from the command line:

drush us-a --all

Index individual projects:

drush us-a project_a project_b

You can access your upgrade report at yoursite.dev/admin/reports/upgrade-status.

2. Update to Composer 2

One fundamental step was to update to Composer 2. Refer to the documentation here. First we update composer itself:

composer selfupdate --2

If the Composer version is specified in your Docker container, you might need to update it there as well. In our case, we are using Lando, so let’s refer to the documentation on how to choose a Composer version in Lando. In our .lando.yml, we can explicitly specify the Composer version as follows:

services:
  appserver:
    composer_version: 2

Updating to Composer 2 may result in errors, depending on the packages that you are using. When you run composer install, you might get an error like the following:

Your requirements could not be resolved to an installable set of packages.

Problem 1
    - Root composer.json requires wikimedia/composer-merge-plugin 1.4.1 -> satisfiable by wikimedia/composer-merge-plugin[v1.4.1].
    - wikimedia/composer-merge-plugin v1.4.1 requires composer-plugin-api ^1.0 -> found composer-plugin-api[2.0.0] but it does not match your constraint.

The corresponding issue was merged recently, but during the upgrade, Composer 2 support was only available via a fork of the original repository. In such a case, you can include a forked repository using the following approach. Add the following to your composer.json:

    "require": {
        "wikimedia/composer-merge-plugin": "dev-feature/composer-v2 as 1.5.0"
    }

    "repositories": {
        "wikimedia/composer-merge-plugin": {
            "type": "vcs",
            "url": "https://github.com/mcaskill/composer-merge-plugin"
        }
    }

3. Update to Drupal 9 and BLT 12 using Composer

We are using Acquia BLT to automate building and testing our Drupal sites.

Updating to Drupal 9 requires updating BLT to version 12. Make sure to follow the BLT 12 upgrade notes. Most importantly, some dependencies like PHPCS have been moved into their own plugins such as acquia/blt-phpcs. The following adaptations should be performed in composer.json:

{
    ...
    "require": {
        "acquia/blt": "^12",
        "cweagans/composer-patches": "~1.0",
        "drupal/core-composer-scaffold": "^9.1",
        "drupal/core-recommended": "^9.1"
    },
    "require-dev": {
        "acquia/blt-behat": "^1.1",
        "acquia/blt-drupal-test": "^1.0",
        "acquia/blt-phpcs": "^1.0",
        "drupal/core-dev": "^9"
    }
}

With the BLT update, some commands have changed. The BLT 11 versions of the commands, i.e.

blt validate:all
blt tests:all

are now replaced with the BLT 12 versions:

blt validate
blt tests

To perform the necessary updates, run the following command:

composer update -w

Depending on your module dependencies, this might result in update errors. Follow the next sections for tips on how to update your module dependencies for Drupal 9 compatibility.

4. Update contributed modules for Drupal 9

Because of the switch to semantic versioning, modules might have changed their major release. For example, Devel has abandoned the 8.x-3.x series and now uses 4.x. You can always check the module page and verify that there is a version that requires Drupal ^9. Adapt the version in composer.json as follows:

{
    "require": {
        "drupal/devel": "^4.0",
    }
}

5. Notes on applying patches for module compatibility

Since drupal.org now supports issue forks & merge requests based on GitLab, .diff patch files might no longer be available within issues. You can still apply patches using the following approach: add “.diff” at the end of the merge request URL. The following example illustrates how a merge-request-based patch can be applied to a module in composer.json:

{
    "extra": {
        "patches": {
            "drupal/config_ignore": {
                "Support for export filtering via Drush (https://www.drupal.org/i/2857247)": "https://git.drupalcode.org/project/config_ignore/-/merge_requests/3.diff"
            }
        }
    }
}

When a module doesn’t declare Drupal 9 in its core_version_requirement, or its composer.json still needs to be added, you can use the following approach to include such a module via the Composer workflow: require the module at the version provided by the Git branch that contains the fixes.

{
    "require": {
        "drupal/term_reference_tree": "dev-3123389-drupal-9-compatibility as 1.3-alpha3",
    },
    "repositories": {
       "drupal/term_reference_tree": {
            "type": "git",
            "url": "https://git.drupalcode.org/issue/term_reference_tree-3123389.git"
        }
    }
}

6. Update your custom code for Drupal 9 using Rector

Drupal 9 compatibility issues should be outlined by the Upgrade Status module mentioned previously. We are using drupal-check to automatically detect issues in the code base, and it threw significantly more errors after the upgrade, as the code quality requirements were raised. I used Rector to apply some automatic fixes to our custom modules. Rector wasn’t able to do all of them, so plan for some additional work here.
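As a rough sketch, the two tools can be run against the custom modules like this (the package names and paths are assumptions based on a standard Composer-managed Drupal project; adjust them to your setup):

# Install the analysis tools as dev dependencies (assumed packages:
# mglaman/drupal-check for deprecation checks, palantirnet/drupal-rector for automated fixes)
composer require --dev mglaman/drupal-check palantirnet/drupal-rector

# Report deprecated API usage in the custom modules
vendor/bin/drupal-check web/modules/custom

# With a Rector config for Drupal in the project root (drupal-rector ships a starter config),
# preview the automated fixes first, then apply them
vendor/bin/rector process web/modules/custom --dry-run
vendor/bin/rector process web/modules/custom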

7. Working in multiple Lando instances of the same site

Because the Drupal 9 upgrade branch has a lot of dependencies that are different from Drupal 8, switching back and forth between branches might be cumbersome. I decided to run two instances in parallel, so that I don’t have to do full lando rebuilds.

Check out the same repository twice in two separate folders. Add and adapt the following .lando.local.yml within your second instance, so that you can run lando separately for both folders.

name: project_name_2

Use the following configuration to adapt the URL mappings so that they don’t overlap with the original project.

proxy:
  appserver:
    - project_url_2.lndo.site
    - project_domain_2.lndo.site
  solr_index:
    - admin.solr.solr_index.project_2.lndo.site:8983

services:
  appserver:
    overrides:
      environment:
        DRUSH_OPTIONS_URI: "https://project_2.lndo.site"

If you have specified a portforward for the database, you should define a custom port for your second project instance:

  database:
    portforward: 32145

Now you will be able to use lando start and respective commands within both project folders and access both site instances independently.

8. Conclusions

Thanks to semantic versioning, updating from Drupal 8 to Drupal 9 involves very few steps on the application layer. Most contributed modules are Drupal 9 ready, and only a few exotic modules required me to reroll a Drupal 9 compatibility patch.

As you can see from the topics mentioned, the effort to update the infrastructure certainly adds up: upgrading from Composer 1 to 2, updating PHPUnit, and making sure that other toolchain components are up to date.

Thank you Karine Chor & Hendrik Grahl for providing inputs to this post.

Jan 27 2021
Jan 27
  • 28 January 2021
  • Benoît Pointet

You are facilitating a group process. Post-its gather and start overlapping each other. Time to make sense of all that. You just launched the group into some clustering exercise when someone drops the bomb: "Wait! It all relates!".

Discover more about the services UX Design and Agile teams and processes our digital agency has to offer for you.

"But… it all relates!" A reaction so often heard while facilitating (or participating) to group reflexion processes (brainstorming, agile retrospectives, …).

"You ask us to group things … but everything is connected!"

It often comes with a contrived smile ("things are complex, you know!"). Sometimes also with a counterproposal "let us make a single group around the central thing here which is X, since obviously all things relate to X."

A very human reaction, which if you’re unprepared as facilitator, can take you aback. Keeping the following arguments in your mind can help.

  1. That it all relates does not mean that it all ought to conflate. It makes sense to distinguish the different aspects of a situation or a problem, the different knots of its web of complexity. Some seem to think that seeing the big picture implies refusing to distinguish the whole from its parts. Yet if we can see the links, the relationships, it is because we have identified the parts.

  2. Although a holistic view provides a definite advantage when facing a complex situation, it is good to remind ourselves that action cannot be holistic. You cannot act on the system as a whole. You may only act on precise points of the system.

Two simple arguments to help us facilitate these "everything is connected" moments and realize that in a (group) reflection process, taking things apart is the first step towards deciding on meaningful action.

Photo: Ruvande fjällripa

Benoît Pointet

Holacracy Coach

Related services
  • Topics
  • Tags
Jan 26 2021
Jan 26

In this walk through I show my preferred setup for SPAs with Svelte, Typescript and Tailwind.

TL;DR

For the very impatient among us:

npx degit munxar/svelte-template my-svelte-project
cd my-svelte-project
npm i
npm run dev

Enjoy!

Overview

In this article I'll give you some insights into how I set up Svelte with TypeScript and style components with Tailwind. There are plenty of articles around, but I found that a lot of them overcomplicate things or don't fit my requirements.

So here are my goals for the setup:

  • stay as close to the default template as possible, to make updates easy
  • the production build should only generate the CSS that is actually used
  • use TypeScript wherever possible

What Do I Need?

You'll need a reasonably recent Node.js version with npm on your machine. At the time of writing, I have Node version 15.6.0 and npm version 7.4.0 installed.

node -v && npm -v
v15.6.0
7.4.0

Install the Svelte Default Template

To set up Svelte, I open a terminal and use the command from the official Svelte homepage. TypeScript support has already been added to this template, so nothing special here.

npx degit sveltejs/template my-svelte-project
# or download and extract 
cd my-svelte-project

Enable TypeScript

# enable typescript support
node scripts/setupTypeScript.js

At this point I check whether the setup works by installing all dependencies and starting the development server.

# install npm dependencies
npm i
# run dev server
npm run dev

If everything worked so far, pointing my browser at http://localhost:5000 displays a friendly HELLO WORLD. Let's stop the development server by hitting ctrl-c in the terminal.

Install Tailwind

Back in the Terminal I add Tailwind as described in their documentation.

npm install -D tailwindcss postcss

After this step I generate a default tailwind.config.js file with

npx tailwindcss init

If you prefer a full Tailwind config, use the --full argument:
npx tailwindcss init --full
See the Tailwind documentation for more infos about this topic.

Configure Rollup to use Postcss

The default Svelte template uses Rollup as a bundler. When I ran scripts/setupTypeScript.js in the first setup step, I got the famous svelte-preprocess plugin already integrated into the Rollup setup. The only thing left is to add the postcss config as an option to the svelte-preprocess plugin. Here are the changes I make in rollup.config.js:

// rollup.config.js (partial)
...
export default {
  ...
  plugins: [
    svelte({      
       preprocess: sveltePreprocess({
         postcss: {
           plugins: [require("tailwindcss")],
         },
       }),
    }),
    ...
  ],
  ...
};

At this point Rollup should trigger postcss and therefore the Tailwind plugin. To enable it in my application, I still need one important step.

Adding a Tailwind Component to the App

Now it's time to create a Svelte component that contains the postcss to generate all the classes. I call mine Tailwind.svelte but the name doesn't really matter.

// src/Tailwind.svelte
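A minimal sketch of this component, assuming the standard Tailwind directives are its only content:

<style global lang="postcss">
  @tailwind base;
  @tailwind components;
  @tailwind utilities;
</style>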

Some things to note here:

  • The component only has a single style element with no markup.
  • The attribute global tells the svelte-preprocess plugin not to scope the CSS to this component. Remember, by default Svelte scopes all CSS to the component it was declared in; in this case I don't want that.
  • The lang="postcss" attribute is telling svelte-preprocess to use postcss for the content. As a goody, some IDE extensions now display the content with the correct syntax highlighting for postcss.

Now use the Tailwind component in src/App.svelte

// src/App.svelte


Hello Tailwind!
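Spelled out, a sketch of App.svelte could look like this (the utility classes on the div are illustrative):

<script lang="ts">
  import Tailwind from "./Tailwind.svelte";
</script>

<!-- pulls the global Tailwind styles into the app -->
<Tailwind />

<div class="p-4 text-2xl font-bold text-green-600">Hello Tailwind!</div>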

Now my browser displays a Tailwind styled div. Very nice!
Let's clean up public/index.html: remove the global.css link tag and delete the corresponding file public/global.css, since I don't use it.






Let's finish the setup for production builds. Right now it's perfect for development: I can use any Tailwind class and, except for the first start of the development server where all the Tailwind classes get generated, it behaves very snappily on rebuilds.

Production Builds

Purge

When it comes to production builds, right now I have not configured anything, so I'd get a bundle.css with all Tailwind classes. I don't want that for a production build, so I modify tailwind.config.js to use its integrated PurgeCSS for that purpose.

// tailwind.config.js
module.exports = {
  purge: ["src/**/*.svelte", "public/index.html"],
  darkMode: false, // or 'media' or 'class'
  theme: {
    extend: {},
  },
  variants: {
    extend: {},
  },
  plugins: [],
};

With this modification, Tailwind removes all classes that are not used in .svelte files or in the public/index.html file. I added public/index.html because sometimes I add containers or some responsive design utilities directly on the <body> tag. If you don't need this, you can remove index.html from the purge list, or add additional files I haven't listed here. For example: if I used some plugins that contain .js, .ts or .html files with Tailwind classes, I would add them to this purge array too.

There is one little detail about the Tailwind purge: it is only executed when NODE_ENV=production, which makes sense. I set this environment variable directly in my package.json scripts:

// package.json (partial)
{
  ...
  "scripts": {
      "build": "NODE_ENV=production rollup -c",
      ...
  },
  ...
}

With these settings my bundle.css only contains the Tailwind classes I really use, plus the mandatory CSS reset code that Tailwind provides.

Autoprefixer

One last thing to add for production is vendor prefixes. I usually go with the defaults and just add autoprefixer as postcss plugin. If you need more control, add configuration as you please.

Install autoprefixer with npm:

npm i -D autoprefixer

Add it as postcss plugin in rollup.config.js:

// rollup.config.js (partial)
{
  ...        
  preprocess: sveltePreprocess({
    postcss: {
      plugins: [
        require("tailwindcss"), 
        require("autoprefixer"),
      ],
    },
  })
  ...      
}

That's it.

Features of this Setup

Tailwind Classes

I can apply every Tailwind class to every HTML element, even in the index.html template.
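For example, utility classes can go straight onto the body tag in public/index.html (the classes here are just an illustration):

<body class="bg-gray-100 antialiased">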

Tailwind @apply

Additionally, I can use @apply inside a style tag of a Svelte component like this:
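A sketch, where the button element and the applied utility classes are illustrative:

<button class="btn">Save</button>

<style lang="postcss">
  .btn {
    @apply px-4 py-2 rounded bg-blue-600 text-white;
  }
</style>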



  

This will generate a class scoped to the button of this component. The important part here is the attribute lang="postcss"; without it, postcss would not process the content of the style tag.

Typesafe Components

Let's implement a simple logo component with an attribute name of type string and a default value of "Logo".

  
  
  

{name}
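A sketch of this Logo component (the wrapping element is an assumption):

// src/Logo.svelte
<script lang="ts">
  export let name: string = "Logo";
</script>

<span>{name}</span>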

When I use this component, the Svelte language service of my IDE (Visual Studio Code) will yell at me if I try to pass something that is not of type string as the name attribute.
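For example, a hypothetical usage where the second instance would be flagged by the language service:

<script lang="ts">
  import Logo from "./Logo.svelte";
</script>

<!-- fine: name is a string -->
<Logo name="Liip" />

<!-- type error: 42 is not assignable to string -->
<!-- <Logo name={42} /> -->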

  
  
  
  

If you have an IDE that supports the Svelte language service, you get all the IntelliSense features you would expect inside your editor. I use Visual Studio Code with the very good svelte.svelte-vscode extension.

Recap

I demonstrated how easy it is to set up a Svelte project with the default template, enable TypeScript, and add production-ready Tailwind support.

I hope you find some helpful information and write some amazing apps!

The source code is available at: https://github.com/munxar/svelte-template


Jan 13 2021
Jan 13

We all know that decisions bring us forward, but deciding often feels like too big of a step. Discover how to take decisions with a lighter heart.

Discover more about the services Agile Organisation and Teal culture and agile minds our digital agency has to offer for you.

The concept of decision entails a sense of finality. Often decisions feel like a Rodin sculpture: once for all perfectly cut. How terrible and scary is that? No wonder that many refrain from taking (major) decisions.

Can't we remove this sense of fate and rigidity from decisions and turn decision-making into a lighter thing?

Take smaller decisions

Does that decision feel too big? What could be a smaller decision in the same direction that is safe enough to take? Find it and take it. Breaking up a big decision into a series of small decisions often helps to move forward. "One Fear at a Time", as John Whitmore writes in Coaching for Performance.

Be it fear or decision, breaking it up into smaller pieces also allows you to adapt the course of action.

Embrace the imperfection of a decision

Make explicit the fact that the decision has been taken based on finite knowledge of a situation and thus corresponds to a local optimum. Finish any decision statement with: "… until we know better".

Shouldn’t we wait, then, to take better decisions? Sometimes yes. Gathering more info and giving it more thought is always an option. However, there always comes a time when Pareto's law kicks in: a point beyond which an imperfect decision will show a greater ROI than a more perfect one.

Make it a pilot

A great question I make use of to ease my clients in taking virtuous yet still uncertain steps: "Is it safe enough to try?" Often it is. Often, this question eases the "fear of final decision".

So decide to try before finally deciding – if you still believe that you will have to decide once and for all.

Give it a revision date

Since a decision is made at a certain point in time in a certain context and based on finite knowledge, it seems only fair to review it later down the road, doesn't it? Fair and definitely smart. Even more in the case of a decision declared as a temporary one, like a pilot.

Define a revision date or install the license and/or duty to revise a decision when the need or new knowledge arises.

This works particularly well for any structural or strategic decision. Imagine how fit your organization would be if every agreement in it was due to be revised! Well, the distributed governance scheme of Holacracy makes it possible for anyone to trigger revision of the governance and Sociocracy 3.0 also advocates regularly reviewing agreements.

To go one step further down the road, I dream of an organizational system where decisions that are not revised get dropped, like an expiry date for anything decided, in order to keep organizational mass as low as possible.

Embrace exceptions to the decision

Just as a local optimum will make sense for most cases, there will be exceptions. Let them be, and let them shed light on the decision. No exception should be hidden, for hiding exceptions only rigidifies the decision even more.

On the contrary, collecting exceptions to any decision seems to me like a good practice, though I have yet to find a domain where this happens. Every exception enriches the understanding of the decision, sharpens its scope and effects, and brings material for further revisions.

That's all (for now) folks!

This list is not exhaustive; it simply exhausts my current thoughts on the topic. Yet I decide here and now to share it with you as such. Definitely safe enough. And the digital medium gives me the license to revise it later down the road ;)

I hope this gives you a few concrete ways to take the next decision with a bit more joy and serenity.

Artwork: Stereophotography of the Grindelwald Glacier

Jan 11 2021
Jan 11

As I am frequently coaching individuals who start in the Scrum Master role, I realized there was one aspect that was rarely written about: how to begin.

Discover more about the service Agile teams and processes our digital agency has to offer for you.

Yes, it's a terrifying role

So that's it, you have passed the certification for Scrum Master and it is time for you to join the team and play the role. You are terrified, and I understand why. After all, you're lucky we've spared some budget for this role and you'd better make the team perform better right away so your position is maintained. All eyes on you. If you don't make it, we give it all back to the almighty Project Manager. In the spotlight of all these expectations, here is an invitation to take a step back and relativize.

It's not about you, it's about the team

Most importantly, it is not about you, and will never be. It is about the team. So do not carry its destiny upon your shoulders. All you will ever do is serve them and hold a mirror to them. That's it. You have to walk with them, not ahead of them. Your angle is the one of curiosity: "Oh, have you noticed that? What do you think about it?". Naive is powerful, because it blows away all preconceptions. You can, over and over, invite your team to look at the status quo with a fresh angle, which may inspire them to take action, or try new things (and follow up on them). If you've managed that, your job is done.

Start from where the team is

It is also a bad idea to go in with an upfront plan of how you want to "change how things are run". Chances are there are many assumptions in your head, which may be completely off. Instead of bulldozing your way into the team, blasting and criticizing whatever is present, I urge you to think that whatever is in place has been put in place by professionals. The way the team functions today is how it has best overcome problems so far, so respect that. I'll quote the Kanban principle: "Start from where you are". And from then, lead the team to experiment, little by little. It may come a long way.

Don't wait to be ready

The polar opposite of this attitude is also very tempting. It is to remain paralyzed. "I don't feel ready". Who does? While it is certainly a good thing to attend a course and obtain a certification, there are enough books, articles and conferences on Scrum and Agile to fill several lifetimes. For the benefit of your team, don't wait until you've read them all. Practice is going to be your teacher. The best there is. Just like the team, you are going to do the best you can, day after day, and for sure it's not going to be perfect...

Look for criticism

... So there will be criticism. That is great news. If nobody says anything, that means everybody thinks you're beyond the point of recovery and it's not even worth it anymore to give feedback. Constructive criticism is your ally in doing a better job for your team. I even advise you to actively seek feedback. There are retrospective activities tailored just for that, such as "Build Your Own Scrum Master". Make it a game for the team. That way, you show that though you take the role seriously, you certainly do not take yourself seriously.

About today

So, what about today? Day One? Well, two postures I've previously written about are always available: The Servant and The Mechanic. As a servant, there's probably a hand you can lend to the team right now. Ask around, and remember, a chore is not a chore. It's a chance to lead by example. If you pull your finger out for your teammates, you'll not only shine but you'll also inspire them to do it more as well. As a process mechanic, have a look at the team's Scrum. How is the next sprint coming along? Is the backlog prioritized? If you have chosen User Stories to express needs, are there enough of them in a ready state? What does "Ready" mean for your team? Those are great conversation starters. Dive in. And if anything's off, investigate, don't blame.

Get accompanied on the journey

Sure, all of this is still a handful. But you don't have to go it alone. There is a tremendous global community of practice and many local ones too. Don't be afraid to check out scrum.org forums, browse meetup.com for groups near you – or far away from you, as remote work has made the world even flatter than before. If there are several Scrum Masters in your organization, hook up with them, set up weekly coffees to exchange your war stories. And if you feel like getting accompaniment on your journey, don't hesitate to reach out. Whether it is me or one of my colleagues from the Liip coaching team, it would be with pleasure to walk along with you.

Jan 05 2021
Jan 05

How can we leverage Open Source contribution (in particular to Drupal) to maximize value for our customers? In this article, I would like to share the results of a recent workshop we held on this question as part of our internal gathering LiipConf.

Discover more about the service CMS our digital agency has to offer for you.

Together with a few colleagues we met for a brainstorming session. The goals set for this session were:

  • Share experiences about open source contribution at Liip and together with customers
  • Reflect on added value we can generate when contributing to Open Source
  • Mention any blockers, uncertainties or difficulties that you encounter when it comes to Open Source contribution
  • Come up with ways of including Open Source contribution into our workflows
  • Brainstorm what our customers would find valuable to know about Open Source contribution

Check-in

In our check-in, we asked which topics attracted people to come to the workshop. We had a good mix of engineers, product owners and UX folks from Drupal and Symfony in our meeting. The topics of interest spanned from “motivating clients to pay to create reusable solutions”, “sharing experiences in the context of contributions”, “getting started with contributions in 2021” and “listening in” to “finding ways to give back”.

Method

Thanks to Emilie’s suggestion and facilitation, we used the Customer Forces Canvas to structure the discussion.

Open Source contribution board based on Miro.com and Customer Forces Canvas

The canvas allowed us to capture different aspects of adopting contribution practices by asking structured questions:

  1. Triggering Event - What were those events that led to your decision to contribute back to Open Source?
  2. Desired Outcome - What outcome were you looking for?
  3. Old Solution - What solution were you using that was already in place?
  4. Consideration Set - What were alternative solutions that were considered?
  5. New Solution - What solution was selected? Why?
  6. Inertia - What were some concerns/anxieties you had before starting to contribute?
  7. Friction - What were some concerns after you started contributing?
  8. Actual Outcome - What was the actual outcome after starting to contribute? Did it meet your expectations?
  9. Next Summit - What would you like to see next for contribution? Why?

Discussion points

Triggering events mentioned included finding issues in existing Open Source solutions. Another key trigger was that once a client understood how Open Source works, they were much more motivated to fund contributions. Often it is the motivation of an individual or the team striving to create better solutions without the need to maintain custom code individually for a customer project.

Goals we are striving for when contributing to Open Source include externalizing maintenance efforts to the community at large as well as doing good. By contributing back we are fueling the ecosystem that keeps our software up to date and innovative. We create more sustainable solutions when we are able to use standardized building blocks and follow community best practices.

When facing contribution opportunities, we are often presented with various ways to solve the issue: fix the issue in custom code (and miss the chance of contribution), fix the issue in a contributed module, or fix the issue in Drupal core. Depending on the layer of abstraction, we can shoot for quick solutions or spend more time working on a generic solution. Alternatives to fixing the issues ourselves include sponsoring other maintainers to work on a sustainable solution that includes the resolution of the current issue.

We have also encountered issues where relying on too much abstract code created a risk for the project over time. Especially when you deviate from the standard components, it might become easier to internalize the functionality into the custom project’s code base so that it can be adapted without context switching, but at the cost of having to maintain the functionality without community support.

Even non-perfect code or work-in-progress can be released as Open Source so that others are able to build on it, and eventually these building blocks will evolve further. Sandbox projects or alpha releases can serve well as incubators for contributed code. Over time, when the project gets more mature, the semantic versioning approach with alpha & beta releases allows you to specify clearly what users of the module can expect.

When discussing what was holding us back from contributing, many reasons can apply. Contributing to Drupal core takes more time than writing custom code. Sometimes it is just that folks involved don’t understand how Open Source works or what it is good for. When we create quick & dirty solutions, we sometimes don’t feel quite ready to Open Source them. Sometimes, we just don’t feel a need to contribute back because we can achieve our short term goals without doing so. Family folks mentioned that they can’t commit private time and focus on getting the job done during work time.

When discussing what was holding us back when making a contribution, we found that sometimes the effort invested doesn’t match the outcome. We need to invest more time than what we think is worth solving the problem. This can be especially driven by the fact that contributed code may imply higher quality standards enforced by peer-review from the community. It’s also the urge that once a solution is Open Source, we feel like we need to maintain it and invest more time continuously. If a custom solution is cheaper, why should the client pay for it when they cannot reuse it themselves? Sometimes we are not certain if anyone else will be willing to make use of our custom code.

We talked about the benefits that folks got when contribution was adopted as a practice. Getting good community feedback on their solutions and having their solutions improved and evolved to match new use cases were mentioned. Giving talks at conferences was also found to be valuable. As a next step for contribution, folks mentioned that they would like to get help promoting their contributed modules so that they get adopted by a wider audience.

We also identified some USPs (Unique Selling Proposition) for contribution during the discussion. Clients would not need to pay for improvements contributed by the community. The maintenance of solutions based on contribution becomes more reliable. Contribution elevated self-esteem for clients and teams and helped increase visibility. It helps as a sales argument for agencies towards clients and also helps engineers to become hired by a Drupal agency like Liip. Some folks even manage to make money on platforms like GitHub sponsors or Open Collective.

Takeaways

We closed our meeting to collect some takeaways and what’s next for folks in contribution. Here’s a list of the key takeaways:

  • A “contrib-first approach” that incorporates the contribution mindset
  • Adding contribution checkpoints into the definition of ready/done
  • Inviting for cross-community contribution between Symfony and Drupal
  • Raising contribution in daily meetings, motivating each other to speak at conferences
  • Making sure that our contributions are used by others
  • Helping to find areas of contribution for non-developers
  • Balancing being a taker vs. a maker
  • Evolving a plan to communicate our efforts around contribution

What’s next for you in contribution? Have you experimented with the Customer Forces Canvas? Thank you for improving Open Source & let us know in the comments.

Image credit: Customer Forces Canvas (c) LEANSTACK
https://leanstack.com/customer-forces-canvas

Dec 20 2020
Dec 20

A mobile application can take many forms and have many functionalities.
If I say "mobile application", what kind of app do you think of?

Discover more about the services UX Design and Mobile Apps our digital agency has to offer for you.

Maybe you will think of an application like Facebook, Instagram or TikTok? Or maybe an app like the SBB, Twint or something to check the weather with.

At Liip, we particularly like the ones which are really useful to our users and solve a specific issue.

The issue

When our clients from the Lausanne-Echallens-Bercher railway line (LEB) contacted us, their problem was: the real-time timetable they were offering to their users was incomplete, because old trains cannot share their position. Without this essential information, the LEB Company could not provide its passengers accurate information, in case of delays for example.

What was needed was a simple and effective solution to bridge the gap until these trains are replaced in a few years.

After analysing the situation, we agreed on designing a mobile application for the drivers in the cabin. It retrieves the GPS location of the train and pushes it every 30 seconds to the server that calculates the real-time timetable. Easy, right?

Unfortunately not at all. In fact, at certain points along the route covered by these old trains, there are tunnels, so the calculation of the exact position is not guaranteed. Therefore, it was necessary for the driver to be able to manually indicate the position of the train to the application.

That said, when a driver is working (whether they are driving a LEB train, a bus or a car), they have more important tasks to do than checking the position of the train: concentrating on speed, turns, passengers, schedules, etc.

Therefore, the application to be developed had to solve the problem of the position, without interfering with the drivers' work or distracting them.

Field trip

To solve this problem, we went on site, and thanks to the project manager, we were able to take a seat in one of the LEB trains. We made the trip in the cabin to understand the drivers' work environment. This is what we identified:

  • the device should be a tablet to have a big enough display;
  • the luminosity of the device would have to be low not to dazzle;
  • the colours of the interface should be contrasted for good readability;
  • an anti-reflection screen would be necessary, because at the end of the day, when the sun hits the train window, the screen is almost not readable;
  • the interface elements should be large with easily clickable areas.

The app

We came back rich with information from our field trip.

Listening to our client and its constraints, Nicolas, one of our mobile developers, started by testing the GPS locations. Once this worked well, he developed the application in just a few days. I worked on the colours and their brightness. The interface had to be so simple that the driver wouldn't have to learn how it worked.

A list of stops automatically scrolls through, according to the GPS locations. If the device does not receive any more locations, at its next stop a visual and audible alert is triggered to attract the driver's attention. They only have to click on the name of the stop for the error message to disappear and for the position of the stop to be sent to the server which collects the data.

3 screenshots of Realtime LEB application

A simple but useful product

It took about 20 days of work only - from field observations to the implementation of the app on tablets - to get the application up and running. The collaboration with the project manager Pierre-Yves was excellent. In addition, Nicolas and I worked hand in hand to ensure that design and code got along and stayed within the budget. An application is not always the result of a huge and complex project costing tens of thousands of francs.

But even more than that, I believe that the greatest pride we take in our job is the satisfaction of solving a real problem of users with an application.

Dec 17 2020
Dec 17

Secret Santa came to Liip this year. In the gift bag, there was a virtual conference with 120 participants, a lot of business topics, fun, party and Christmas carols.

What we’ve done

This year is different for all of us. Even though we got used to working remotely after so many months, Liipers would prefer to meet in person. Nonetheless, we switched our yearly LiipConf, where we focus on ourselves and internal topics, to an online conference. A digital agency can struggle with digital set-ups too. The first video conferencing tool didn’t work out as planned, so we switched. Within 10 minutes a different tool to host the conference was found, and we nevertheless started the sessions on time. Every Liiper could propose sessions, which we tackled in 45-minute dialogues, workshops, discussion rounds, lean coffees or whatever format the hosting Liiper liked. Yesterday's sessions covered a huge spectrum, from OKRs, leveraging open source contributions and Alpine JS to employee representatives and what we want to be as a company. 17 sessions happened in a little over three hours.

Why all of this

As a company spread all over Switzerland, with 6 locations, it is amazing to get all of us together for one day - normally in one place. It helps us to spend quality time together, work on topics there is no time for in our daily business and have fun. Here are some of the highlights Liipers reported:

Our highlights

  • The experience of "walking around" in wonder.me and chatting with people
  • Last minute change of the conference tool and still starting all the talks on time
  • Tech Talks
  • OKRs Workshop
  • Secret Santa gifts
  • Christmas carols karaoke in a Google meet chat with 90 Liipers
  • Hanging around until 2am - like it’s 2019 ;)
  • Seeing people, we haven't seen in ages
  • The fact that despite being virtual, we shared a strong connection to each other — outside our usual business roles, but just as a charming group of people
  • Evelinn Trouble

The fun part

Parties are what we do too! This year is a little different. First there was Secret Santa, where Liipers gifted each other in the most digital way possible… The solution to that is a Slack bot ;) And once all the Liipers got their Secret Santa, the hunt for free, creative presents in a physical or digital form started. In the evening we “unwrapped” our presents together. Liipers were so creative in crafting and sharing their digital gifts. But of course, there are lazy Secret Santas at Liip too; that’s when we all had to sing a Christmas song together. Yes, 90 people singing Jingle Bells was one of the highlights too.
And that’s not all... there was a concert for us as well. Evelinn Trouble played a live Christmas concert for us and everyone else who wanted to listen. Are you curious? Watch it on YouTube!

The different sessions were recorded. Most of them are for internal use. But there is one particular session that we enjoyed a lot and which we are happy to share with you – it’s pure infotainment:

[embedded content]
Dec 16 2020
Dec 16

The corona situation forced DrupalCon Barcelona 2020 to be fully hosted online, and was therefore renamed to DrupalCon Europe 2020. How was the experience? And how did I end up here?

You will find my first impressions about the conference in this article, as well as a bit of a background story, and some tips. Enjoy.

Background

I have been working as a backend developer on PHP projects for more than 15 years now. I joined Liip in Lausanne a bit more than two years ago, and at first, I was mostly involved in Moodle projects.

About one year ago, in late 2019, we founded a team (we call it a Circle ®) to craft digital solutions based on Drupal in Lausanne. The Drupal knowledge has been within Liip for many years, as we use and contribute to it in many of our locations, including Fribourg, Bern and Zurich. I was onboarded and coached by other Liipers; I grew my skills and got in touch with the Swiss Drupal community. Everything looked promising! After a couple of months, we nevertheless decided to stop the adventure and continue with other projects. That being said, I had the opportunity to work with Drupal 8 for a couple of months, and it was far more evolved than the somewhat difficult memories I had of earlier versions.

So I decided to keep my ticket for DrupalCon Barcelona, even if it meant spending a few days at home watching talks instead of being at a great venue full of people in the beautiful city of Barcelona. Let’s be clear: this is the first DrupalCon I have ever attended, though I did attend some conferences for other projects (Symfony and Moodle).

The conference format

Well, it all started with an email telling us how to get familiar with the online platform, how to use it and where to seek help. I was surprised to see that the online event did not abandon the "networking part" of a conference. A “virtual exhibition” was available where you could find the different sponsors and meet them. A “meeting hub” was available to connect with other attendees. You could even ask for a buddy who could catch up with you and help you through the conference. DrupalCon Europe even planned social events in the evenings, but I wasn’t in the mood to attend them (yet).

The rest was as usual: you had different tracks you could subscribe to and watch. A chat and live Q&A area were available for each talk, and it was all quite straightforward to use. The platform uses a Zoom integration. Unfortunately, it did not work on my Linux distribution on the first day. It’s quite an unpleasant experience to miss a few minutes of the first talk because of technical issues. Fortunately, a workaround was available, and the issue got more or less fixed on day 2.
Furthermore, all the sessions were recorded and are available to watch later. I guess that this can be expected for a first fully online experience, and overall the platform was great. I can’t imagine how much work it has been to turn this event from an in-person to an all-virtual one. I was quite impressed by the result!

The talks

I attended a few talks; they all focused on specific topics, but some were more “developer”-oriented than others. I did a bit of everything, including “business”-oriented talks. I still can’t quite figure out what to say about them: some were more than excellent, others felt basic or too simplified. There was something for every kind of profile, but overall I felt disappointed by most of them. (To be honest, this has happened to me in the past. I probably enjoy the social part more at these events, or I don’t choose the right talks.) However, there were very good talks that I personally enjoyed:

The feeling

Having mastermind speakers is quite a thing. You can listen to talks by people who have been doing Drupal for years, sharing their overall experiences with Drupal, and no matter the topic, it’s a pleasure to listen!
It makes me realise how huge the community is and how difficult it is to drive it in an embracing, contributive and constructive way. Drupal has evolved a lot, specifically since the switch to Drupal 8. But managing the technical aspect is not all there is in a community. Finding ways so people can have a safe place to discuss, interact and contribute is something too. A strategy to center humans and their rights in Open Source Design is one aspect they tackle, but there are many more that are worth the efforts. I can say that I like the direction that Drupal is taking, and it’s a pleasure to see that everything is built together to provide one of the best CMS out there! Even if the learning curve is still pretty steep and should not be neglected.

I was worried about having a fully remote conference, but I shouldn’t have! The experience was great, I had very few issues, and the number of talks was impressive.
I recommend having a look at the talks in advance, booking them, and not hesitating to switch to another one if your gut feeling tells you to do so. I also recommend keeping some space in your schedule for your daily business and ongoing projects, in case you have to answer some emails or attend a few meetings here and there. Last but not least, I recommend connecting with the community: there are amazing people out there, and it’s always great to share and build connections.

Congratulations and a big thank you to DrupalCon Europe 2020 and everybody involved, making this event a great online experience!

Picture taken from:

Oct 06 2020
Oct 06

This regional bank located in eastern Switzerland sets great store by forward-looking concepts. The UX and Development team at Liip St. Gallen helped acrevis relaunch their website.

Discover more about the services UX Design, Content, Custom Development and SEO our digital agency has to offer for you.

Do you just offer development or are there other strings to your bow?

It all began with a preliminary project. The partnership between acrevis and Liip started in the early summer of 2019 with some technical implementation. It quickly became clear to all that this was a set-up that worked, both professionally and on a human level – so we soon received a request to relaunch their entire website www.acrevis.ch. The mission: not just a simple redesign, but rather a complete rebuild that would also provide scope for innovative further development in terms of both design and technology.

These high requirements did not just spur on our Development team, they also fired up the UX researchers, visual designers, content strategists and SEO & analytics experts on our UX team. They ultimately won the management team over with three design concepts, and in doing so laid the foundation for the project.

*Hands-on: we used joint workshops with the acrevis project team to develop personas, their requirements, and specific user journeys.*

What customers want

It was clear from the outset that the external perspective was what mattered, rather than the internal one. In short: acrevis’s customers took centre stage. We used joint workshops to hone acrevis’s vision and mission. We developed and refined personas, and tracked and scrutinised the user journeys of acrevis's website customers. This was combined with stakeholder interviews to ensure that acrevis’s expectations would also be incorporated. All of this provided an initial foundation for the design and structure – from wireframes through to the information architecture.

Of course, an innovative website also needs a new visual design. This was no sooner said than done, with new web-optimised typography, further development of the icon library, and fresh use of the corporate colours, moving away from severe petrol blue as a secondary colour and towards the subtle application of red as a targeted primary colour. The generous use of images and a mix of static and dynamic content bring the website to life. The website also has a clean and tidy look thanks to new element layouts and subtle micro-interactions.

*Large-scale images, lots of white space, a clear structure: initial screens for the acrevis website.*

Storytelling and thematic areas rather than just financial products

In terms of content, there was one key question: how do you captivate acrevis’s customers in the rather dry world of finance? Our response: with storytelling! Staying true to the slogan ‘My bank for life’, acrevis’s products and services needed more context, more emotion, and stronger links to customers’ everyday lives. This was clear to both our content strategists and the acrevis project team.

Four (or rather eight) thematic areas focussing on acrevis’s main business segments for both private and professional customers were developed: accounts and cards, financing home ownership (or a company), investing money, and retirement planning (pensions and succession). The thematic areas were presented in a colourful range of formats – from true-to-life stories to clear product overviews to personal contacts. The story protagonists were selected to closely match the personas previously developed. The final touches came from the ‘microcopy’, with our UX writers coming up with the perfect wording for buttons, forms, error messages and the cookie banner.

State-of-the-art technology for a flexible future

‘After the go-live is before the go-live’ was the technological thrust of the website relaunch. In other words, the platform needed to remain capable of development in future years, even if that would involve meeting demanding requirements. This meant that the obvious choice for a content management system was Directus, a headless CMS that keeps the back-end and front-end separate. This is based on a service-oriented architecture hosted on acrevis’s own Openshift cluster.

Now it gets technical: the headless content is linked with the page structure via a routing service developed specifically for this purpose. An Elasticsearch service offering full text indexing for content and PDFs via GraphQL ensures optimum search results. In addition, the website uses a VueJS front-end that also supports server-side rendering. The content is supplied via a Django application that offers GraphQL and REST endpoints. The images are hosted on Rokka, ensuring that the website offers high performance even with such high levels of visual content.

A nose for what is needed

Transparency, openness and regular exchange were also the building blocks of this project. Close collaboration with the acrevis project team as well as other partners ensured that any challenges were rapidly identified and could be solved quickly and easily. We even ran an internal collaboration day to enable us all to work together in a focused way as an interdepartmental, inter-site team. This meant that feedback and findings from reviews were quickly incorporated, and the website increasingly began to take shape. But what would acrevis’s customers say?

*The new website concept was put through its paces in usability tests.*

The biggest challenge was just before the go-live: usability tests. Potential customers put the new website through its paces by performing specific searches on laptops and smartphones. The whole project team was delighted to see that other than a few details, no changes were required – it seemed that we had hit the nail on the head for customers with the storytelling, a clear structure, a fresh design and high technological performance.

The new acrevis.ch website has been live since July 2020.
Thank you to the entire acrevis project team for the wonderful collaboration! To be continued...

Thank yous
We would like to take this opportunity to thank all of the (almost exclusively) local people involved in the project: JOSHMARTIN for making valuable contributions to the web page design, AMMARKT for developing the branding and image concept for acrevis, and Arcmedia for assisting with the online forms. Thank you very much!

‘Liip understood us right from the start and supported us with innovative proposals for the concept, design, content and technology.’

Mona Brühlmann, Overall project manager for the acrevis website relaunch

‘Excellent work! The stories were fantastic and implemented in a very appealing way.’

Andrea Straessle, Marketing & Communication acrevis Bank AG

‘At every turn, it was incredible to see the high quality with which the individual requirements were implemented and how stable and sustainable the ultimate solution was.’

Michael Weder, Technical project manager for the acrevis website relaunch

Aug 17 2020
Aug 17

A different way to discover Zurich. Are you a tourist in your own country? Are you visiting Switzerland? The Zürich Card makes Zurich affordable.

Discover more about the services UX Design and CMS our digital agency has to offer for you.

The Zürich Card has been around for a long time. People landing at Zurich airport or arriving at the main station will see posters and leaflets advertising it. Since May, the Zürich Card is now also available online via zuerich.com. With the Zürich Card, you can travel around the city for 24 or 72 hours and visit museums, restaurants and boat tours with a significant discount.

Go-live despite Corona and without much noise

Visitors could already buy the Zürich Card from the SBB and ZVV ticket shop, but it was not yet available for purchase via the Zurich Tourism website. Zurich Tourism therefore asked us to develop the online shop’s frontend as part of the discover.swiss project. We love projects like this, as we also call Zurich our home.

Discover.swiss is a platform supported by the Swiss State Secretariat for Economic Affairs and was designed to digitalise tourism. We therefore set about creating the minimum viable product (MVP). For us, this was a good example of interdepartmental and cross-company collaboration, as the agency Ubique Innovation AG contributed the app, discover.swiss made the API, and we provided the frontend. The plan was for the project to go live at the beginning of the year. However, the arrival of the Covid-19 crisis meant that advertising tourist attractions made little sense, as museums and restaurants across the whole of Switzerland were closed from mid-March onwards.

Nevertheless, we continued to work on the project with Zurich Tourism. A ‘silent go live’ was implemented in May and the MVP was launched. The Zürich Card has been available from zuerich.com ever since. Now, three months later, things are returning to the new normal, and this opportunity has been widely used given the circumstances. This is also a very attractive offer for Swiss tourists in Switzerland. For all adventurers – order the Zurich Card now!

High technological standards

The demands on the frontend were challenging. An Iframe integrated into the zuerich.com website that allowed users to purchase the Zürich Card quickly and efficiently was the key to success. We used proven technologies such as Nuxt.js (Vue.js) to achieve this. The discover.swiss API was used as the interface, and payment is made via Stripe. Users can also add multiple people.

The price of a card varies according to age and is paid directly once all the information about the relevant travellers has been provided. Once the price has been calculated, it is displayed in different currencies – in accordance with the users’ requirements. After the purchase, a confirmation email is sent that also serves as a sales receipt, and contains a deep link that imports the Zürich Card into the new Zürich City Guide App. Ordering really can be this easy.

Flexibility over strength

The collaboration with discover.swiss and Zurich Tourism was fantastic. Times of crisis call for flexibility, and an ability to draw the best out of what is on offer – which we managed to achieve.

However, projects like these are not always easy, especially at the moment. Coordination took time, as there were various groups involved in the project.

When working with MVPs (the API was already an MVP), documentation is a constant challenge. However, as soon as we defined clear roles and tackled all parts of the end product, flexibility became our strength. To ensure a successful project, it is essential to incorporate everyone at an early stage. We were able to do just this thanks to the flexible individuals behind the project.

Our collaboration with Liip worked very well right from the start, and the work was tackled with great focus. We also valued direct communication with the development team.
Matthias Drabe
Product Owner and Team Lead Online, Zurich Tourism

Jul 29 2020
Jul 29

Have you ever dreamed of a lightning-fast developer experience, with page loads as fast as on the production system? That dream comes true with Lando/Docker on WSL 2 on Windows.

For years, I had been using DrupalVM and Vagrant on my Windows machine, combined with Vagrant WinNFSd. It worked well, but it was painfully slow: composer update took minutes on large projects, and every page load was slow.

I also used WSL 1, but accessing files from an NTFS drive under /mnt/c/docs was slow.

At the end of May 2020, Microsoft started distributing the Windows 10 2004 update. This was the first release where WSL 2 (Windows Subsystem for Linux) was officially available as part of Windows 10. WSL 2 is a new architecture that completely changes how Linux distributions interact with Windows: it is basically a native Linux kernel in Windows 10. The goal is to increase file system performance and add full system call compatibility.

The best feature of WSL 2 is that it is the new de facto standard backend for Docker Desktop on Windows. Docker Desktop uses the dynamic memory allocation feature in WSL 2 to greatly improve resource consumption. This means Docker Desktop only uses the CPU and memory resources it actually needs, while enabling CPU- and memory-intensive tasks such as building a container to run much faster.

Additionally, with WSL 2, the time required to start the Docker daemon after a cold start is significantly shorter. It takes less than 10 seconds to start the Docker daemon, compared to almost a minute in the previous version of Docker Desktop.

I combined these new technologies with Lando and created a perfect developer setup for any PHP / Symfony / Drupal-driven development stack. The missing piece to make it fly was a file sync with mutagen.io, described later in this article.

Install WSL 2 on Windows 10

Follow the official documentation to install and enable WSL 2:
https://docs.microsoft.com/en-us/windows/wsl/install-win10

At some point, you will have to enable Hyper-V and set WSL 2 as your default version:
wsl --set-default-version 2

Install the Distro Ubuntu from the Microsoft Store

Open the Microsoft Store and install Ubuntu.

*Fig. 1: Screenshot from the Microsoft Store*

The first time you launch a newly installed Linux distribution, a console window will open and you'll be asked to wait for a minute or two for files to decompress and be stored on your PC. All future launches should take less than a second.

If you already have a WSL 1 distro you can upgrade it:
wsl --list --verbose
wsl --set-version <distro name> 2

Install Docker Desktop Edge on Windows

Next, install Docker Desktop Edge on WINDOWS! At the time of writing, the current version was 2.3.3.2. Download it here.

Be careful!

There are a few tutorials online that claim you have to install Docker inside your Linux distribution. This is wrong. You have to install Docker Desktop on Windows!

Install Lando > 3.0.9 inside your Ubuntu distro

Now we will install Lando inside our brand new WSL 2 distro "Ubuntu". This is a bit tricky, because docker-ce is a hard dependency of the package. But the official documentation has a solution for that.
Use at least Lando 3.0.9, found on GitHub.
wget https://github.com/lando/lando/releases/download/v3.0.10/lando-v3.0.10.deb
dpkg -i --ignore-depends=docker-ce lando-v3.0.10.deb

To fix the package manager / apt-get, you have to remove the Lando package entry from /var/lib/dpkg/status:
open the file with nano /var/lib/dpkg/status, search for Lando and remove the entry. Done.

Integrate Lando in an existing Drupal project

I assume that you already have a running project and want to integrate it with Lando and WSL 2. The most important thing is that your files have to live inside your Ubuntu distro, e.g. /home/username/projects, and not somewhere under /mnt/c/. Inside your Ubuntu distro, you benefit from an EXT4 file system and native file system speed. We will sync the files back to Windows for editing in your favourite IDE later.
Go ahead and checkout your project from Git inside your home folder:
/home/username/projects/yourproject

Add a .lando.yml file.

Please refer to the official Lando documentation. Below you will find my optimized recipe for Drupal 8 with Solr. I also added lando/php.ini with some optimized PHP variables.

name: demo
recipe: drupal8
config:
  webroot: web
  php: '7.3'
  xdebug: 'false'
  config:
    php: lando/php.ini

proxy:
  fulltext:
    - admin.solr.fulltext.lndo.site:8983

services:
  appserver:
    build:
      - composer install
    xdebug: false
    overrides:
      environment:
        # support debugging Drush with XDEBUG.
        PHP_IDE_CONFIG: "serverName=appserver"
        LANDO_HOST_IP: "host.docker.internal"
        XDEBUG_CONFIG: "remote_enable=1 remote_host=host.docker.internal"
        DRUSH_OPTIONS_URI: "https://demo.lndo.site"

  database:
    # You can connect externally via "external_connection" info from `lando info`.
    portforward: true
    creds:
      # These credentials are used only for this specific instance.
      # You can use the same credentials for each Lando site.
      user: drupal
      password: drupal
      database: drupal

  fulltext:
    type: solr:8.4
    portforward: true
    core: fulltext_index
    config:
      dir: solr/conf

  memcached:
    type: memcached
    portforward: false
    mem: 256

tooling:
  phpunit-local:
    service: appserver
    description: Runs phpunit with config at web/sites/default/local.phpunit.xml
    cmd: /app/vendor/bin/phpunit -v -c /app/web/sites/default/local.phpunit.xml
  xdebug-on:
    service: appserver
    description: Enable xdebug for apache.
    cmd: docker-php-ext-enable xdebug && /etc/init.d/apache2 reload
    user: root
  xdebug-off:
    service: appserver
    description: Disable xdebug for apache.
    cmd: rm /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini && /etc/init.d/apache2 reload
    user: root

Build your Lando project and start the docker container

Run lando start to spin up your containers and see if everything goes green.

Install mutagen.io on Windows to sync the files on a Windows drive.

Go to the mutagen.io download page and download the Windows x64 binary.

Copy the binary to a folder like C:\Program Files\Mutagen and add this folder to your PATH variable. Type Windows Key + Break, from there, select "Advanced system settings" → "Environment Variables".

*Fig. 2: Screenshot of my System Settings*

Synchronize files back to Windows for editing with PHPStorm / VisualStudio Code

Maybe you asked yourself: How can I edit my files, if they are inside my distro?

Microsoft added the \\wsl$\ file share, but as soon as your project has more than 100 files, it's unusable with PHPStorm or any other IDE. The file system performance of mounted volumes on WSL 2 is even 10x slower than on WSL 1. There have been a lot of discussions and rants about this topic on github.com. Here you go. At some point the issue was closed, so I couldn't post my solution to it.

Mutagen resolves this issue by syncing the files between a docker container and Windows in both directions. And it's blazing fast.
Mutagen allows you to do a "two-way-resolved" sync between a local folder on Windows and a docker container. Lando creates a yourname_app_server_1 docker container with a mount to /app. All you have to do is start mutagen and sync the files back to Windows. After that, you can edit them in PHPStorm. They get insta-synced back to your container and you still enjoy native speed on both sides: inside the docker container and inside your IDE. It also works well with files generated on the server, like drush config-export in Drupal 8.

For my setup I removed the .git folder inside the Ubuntu distro and excluded VCS folders from syncing, as proposed by mutagen. I use a Git client on the Windows side. But you can change that.

Important:

  • mutagen has to be run on the Windows side in a PowerShell, because we want to sync back to Windows!
  • The lando appserver docker container has to be running before you can start the synchronization

Starting a sync with mutagen is dead simple:
mutagen sync @source @target. See here for the full documentation.
Example:
mutagen sync . docker://[email protected]_appserver_1/app

The first sync takes a while (5-8 minutes) with > 40k files. From then on it's basically instant.

You can see the status / errors of the sync progress by running in a separate PowerShell:
mutagen sync list or mutagen sync monitor

Unfortunately, there is no easy way to exclude certain folders from being synced except from a global .mutagen.yml file. Therefore I added a mutagen.yml file to my project folder and used mutagen project start and mutagen project terminate to start a predefined configuration including excluded folders:

mutagen.yml

# Synchronize code to the shared Docker volume via the Mutagen service.
sync:
  defaults:
    flushOnCreate: true
    ignore:
      vcs: true
  demo:
    alpha: "."
    beta: "docker://[email protected]_appserver_1/app"
    mode: "two-way-resolved"
    ignore:
      paths:
        - "mutagen.yml"
        - "mutagen.yml.lock"
        - ".vagrant"
        - ".git"
        - ".idea"
        - "deploy"
        - "example_frontend/node_modules"
        - "example_frontend/.nuxt"

In this example, I excluded a few folders which didn't have to be synced. Adapt it to your needs.

Final thoughts

Enjoy your blazing fast development environment, edit your files in your favourite IDE on Windows and thank all the maintainers of the involved Open Source projects.

At some point in the future, we might be able to integrate mutagen into Lando and combine the lando start or lando stop events with mutagen. So far, I haven't found an easy way to integrate / call it from the Ubuntu distro.

Jun 07 2020
Jun 07

Teams in Liip Zürich, Bern and Fribourg have been crafting digital solutions with Drupal for years. Now we also do it at our office in Lausanne.

Discover more about the services E-commerce and CMS our digital agency has to offer for you.

How much do you like Drupal? We still have some healthy disagreements among the Liipers when it comes to Drupal, one of the most well-known frameworks running on PHP. Nonetheless, we witness a high demand for digital solutions developed with Drupal, and we love that.

Because geographical proximity to our clients is very important to us, we now offer Drupal solutions in Lausanne too. Thanks to our fluid structure, we are very close to the market and can adapt to the needs of our clients at a fast pace. In the past months, a couple of Liipers in Lausanne at ease with PHP developed their Drupal skills. Internal training, peer programming and knowledge sharing with experienced Drupalistas within Liip were on the menu.

But wait, what is Drupal in fact? We put together the most frequently asked questions about this framework. And here are our answers!

What is Drupal?

Drupal is an open-source content management system (CMS). In its base installation, it includes a built-in frontend and backend. As a web administrator, you can see content in the interface that you manage in the backend.

What can Drupal do?

Everything that is possible on the web. With Drupal, you can build a simple website for your content, which can be maintained in the backend. And there is way more to it. You can develop complex and powerful e-commerce solutions as well as headless projects too.

How come? Drupal comes along with a lot of basic features. They allow you to create and update your website or blog in an intuitive manner. Yet, Drupal is made to be altered. Tons of additional modules are developed by the community to extend the basic features. This is where the magic happens.

How does Drupal work?

Teams in Zürich, Bern, Fribourg and now Lausanne work on tailor-made Drupal websites. Each client project is unique and requires a dedicated solution. We develop custom made modules according to what the end-users and our clients need.

Drupal’s base installation is not designed for the end-users to have a lot of interactions with the raw data in the backend. That is why we add a client specific layer to our Drupal distribution – be it by adding contributed modules or custom solutions – to allow a lot of interactions on the end-user side.

While Drupal can be used with the built-in front end, you can employ it as a powerful headless CMS too. In this case, Drupal allows you to manage the content in the backend and exposes content through an application programming interface (API). A mobile app or a website built with a frontend framework such as VueJS can then use the exposed content.

Wingo is one of the headless CMS projects we recently did using Drupal. The content is stored in the Drupal backend. We developed an API to give access to the content to various websites.

Why Drupal?

Drupal is incredible for content-heavy sites. No matter if it’s video or copy, this CMS handles a lot of content in a useful and flexible way. As a web administrator, you can easily manage the content of your website to be displayed to the end-users. Drupal is probably the way to go when numerous products, services or pieces of information have to be shown.

Both Die Mobiliar and the Banque Cantonale de Fribourg (BCF), have a lot of products and different types of content to showcase. In addition, their website is in multiple languages. The Freiburger Nachrichten website is another example – gigabytes of written content are managed via the CMS.

Who do we recommend Drupal to?

Companies and organizations that need to provide the end-users with a lot of information, such as products, services and other useful data are likely to use Drupal.

That being said, a lot more can be done with Drupal. The moment you understand how Drupal works, it becomes incredibly easy to alter and extend it – take a developer's word for it! Within a week, we were able to develop a custom made solution for a shop. Initially, the client had a showcase website. Due to the coronavirus, it became urgent to transform the website into an online shop. That’s how we helped our client to address their customers’ needs.

While the developer learning curve – due to Drupal's idiosyncrasies – is usually considered quite steep with a more difficult start, once you get into Drupal everything is simple. Our Drupal team in Lausanne has been developing and strengthening its expertise while being trained by more experienced Liipers, to support you with your digital challenges. We have the skills to build tailor-made solutions that match your end-users needs.

What are Drupal modules?

Drupal modules mostly deal with providing backend functionality and underlying services, while the frontend is usually client-specific. Due to the large and active community, as well as the overall good governance, a module for nearly every use case already exists.

For example, two well-known modules – which are nearly applications on their own – from the community are:

  • Commerce. Provides a full e-commerce suite right in your CMS. Neo is the e-commerce and content platform we developed for Freitag.
  • Webform. A complete form builder allowing CMS editors to create their own highly complex forms. For instance on the Die Mobiliar website more than 270 forms are managed with webforms.

And two modules Liip developed:

  • Rokka. This module connects Drupal to Rokka, our digital image processing tool. The Rokka image converter supports web admin in storing digital images.
  • Site Search 360. We developed the connector to link Site Search 360 to Drupal. Site Search 360 is a service which aggregates all the content of a website and allows users to search quickly and easily.

See here our contributions to other modules.

These modules contribute to extending Drupal core capabilities. They are available to the community. That’s the beauty of open source – developed by the community for the community. Each client project is unique and requires a different set of modules; by combining the right ones, a custom-tailored solution is created.

Can Drupal be used for mobile apps?

The short answer is: No, you can't create mobile apps with Drupal. But, it depends. ;-)

The long answer is: Yes. As mentioned earlier, everything that is possible on the web can be done with Drupal, because Drupal can be used as a headless CMS too. In this case, only the Drupal backend is used to handle the content. A dedicated API is developed to let a distinguished frontend access the content stored in the backend. This means that you need a different frontend solution to build an app with Drupal.

This is the case for progressive web apps (PWA). The app – which represents the frontend – fetches the content in the backend, stores it locally or caches it if needed.

Drupal 8 made it easy to create endpoints and APIs. One of the strengths of this Drupal version is to support headless CMS projects.

Apr 30 2020
Apr 30

More than just a form – something we can all agree on. All of the discussions about the new Mobiliar product were intense, exciting and fun.

Discover more about the service CMS our digital agency has to offer for you.

Dear Liip,

Although the premium calculator superficially looks like a form, there is a lot more going on in the background. There are countless permutations to be taken into account in the calculation: the number of people whose bicycles are being insured, their ages, the combined total of the sum insured and the excess – and whether or not to include cover for the theft of your bicycle from in front of the restaurant where you have just treated yourself to a Quattro Stagioni pizza. I also want to make the calculator look attractive, rather than it just looking like a boring, standard cookie-cutter form.
Best wishes,
Mobiliar

Dear Mobiliar,

So, this is something where the results need to look ultra-simple, even though there is a lot going on under the hood. That’s something we can do. So let’s get to work. Our suggestion: we take the Webform module for Drupal 8, which enables powerful forms to be built without the need for any technical expertise, and pimp it up with two or three little upgrades to ensure the calculations are spot on. This much you know already. However, to ensure a perfect user experience on the website, instead of the normal Drupal front end, we need to build a module that offers a Vue.js handler for the web forms. This will enable us to combine the endless possibilities of web forms with a state-of-the-art front end. As a result, we will never have to reload the page and will have precise control over exactly what visitors to the website can see.
Best wishes,
Liip

Dear Liip,

Sounds great. Let’s do that. It means that we can keep using the web forms that we are already familiar with, and create new versions of the calculator without needing any programming skills – for A/B testing, for example, or to make a calculator for bicycle retailers. And if we ever need to create a new calculator for one of our other products, we can reuse the same framework and save a whole load of money. You can get started.
Best wishes,
Mobiliar

Dear Mobiliar,

We have now finished the premium calculator and everything is working well all round. One particular headache was building something that was generic enough to enable you to create other versions and further calculators in the future, as you requested. We also had to incorporate the many different options offered by web forms. For example, there are conditional fields – parts of the form that are only visible if you tick a particular field that comes before it. Say you tick to say that you are over 26 years old – in that case, your date of birth must also be more than 26 years ago. The birthday field check must take this into account. This was a bit of a challenge, but as you can see, we succeeded.
Best wishes,
Liip

Dear Liip,

Fantastic! Thanks to the prototyping at the beginning of the project and the iterative process that followed, we have managed to develop a thoroughly impressive application, despite us not initially being 100% clear on our requirements. My favourite feature is its extremely generic implementation, meaning that we can re-use the solution to create additional premium calculators for other products, or even some less complicated forms.
Best wishes,
Mobiliar

Dear customers,

Whatever you're looking for, we can help you find it quickly and easily.
Best wishes,
Mobiliar and Liip, your digital agency

Apr 28 2020
Apr 28

As a Drupal newbie, I started working on migrating data from a Drupal 7 project to a Drupal 8 project. I’ve put together how to migrate your data, especially the complex fields. Our project uses the Migrate API, which is part of the Drupal core.

Discover more about the service CMS our digital agency has to offer for you.

You can generate migrations for your Drupal 7 content and configuration using the Migrate Drupal module. The Migrate Drupal module is based on the Migrate API.

The generated content migrations work fine for your standard fields from the field module, but the Migrate API doesn’t provide out-of-the-box migrate plugins for custom fields. A way to migrate them is by customizing the migration.

I had to write a custom migration for the domain access records. The domain access module manages access for nodes and users on multiple domains.

The challenge for those records was that they’re stored completely differently in Drupal 7. The Migrate API can’t fetch their data out of the box into a row. For less complex fields you could implement a custom process plugin. With that, you hook into the processing of a single field during the migration. There, you can transform your data as needed, such as filtering, mapping, or generally transforming values.

Matching data structures

To migrate complex data from source to destination it is often helpful to model their relationship in an entity relation diagram.

As you can see in the following picture Drupal 8 splits the fields for the domain access records into separate tables while Drupal 7 stores multiple fields in one table.

Entity relation diagram of domain records in Drupal 7 and Drupal 8Domain access records, Drupal 7 vs. Drupal 8

Extending a migration plugin

Existing plugins can be extended with additional data through protected and public methods. The Drupal\migrate\Row class has a method setSourceProperty() which allows you to add properties on the source row. Afterwards, you can access the new property in your migration YAML file in the same format as standard fields from the field module.

field_YOUR_CUSTOM_FIELD:
 -
   plugin: sub_process
   source: ADDED_SOURCE_PROPERTY
   process:
     ID_XY: PROPERTY_ATTRIBUTE
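
Here is a minimal sketch of where such an added source property could come from: a custom source plugin extending the Drupal 7 node source and calling setSourceProperty() in prepareRow(). Module, plugin and property names are made up for illustration; adapt the query to your actual Drupal 7 tables.

<?php

namespace Drupal\my_module\Plugin\migrate\source;

use Drupal\migrate\Row;
use Drupal\node\Plugin\migrate\source\d7\Node;

/**
 * Example source plugin that adds domain access records to each node row.
 *
 * @MigrateSource(
 *   id = "d7_node_with_domains"
 * )
 */
class NodeWithDomains extends Node {

  /**
   * {@inheritdoc}
   */
  public function prepareRow(Row $row) {
    // Fetch the additional Drupal 7 data for the current node.
    $records = $this->select('domain_access', 'da')
      ->fields('da', ['gid', 'realm'])
      ->condition('da.nid', $row->getSourceProperty('nid'))
      ->execute()
      ->fetchAll();

    // Expose it to the migration YAML as a new source property.
    $row->setSourceProperty('domain_records', $records);

    return parent::prepareRow($row);
  }

}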

To shorten the feedback loop while developing use this drush command:
drush mim migration_id --limit=1 --idlist=YOUR_ID --update
With the combination of --limit and --idlist only the rows listed in --idlist are being migrated and it stops after reaching the limit of --limit.

Do you need to migrate data into Drupal? I learned that going through the following steps is helpful:

  1. Check if there’s a migration template for your case. These exist for nearly all sources with relevant plugins. Generate them automatically with Migrate Drupal for Drupal-to-Drupal migrations.
  2. Get yourself an overview of the data structure for the more complex fields. If Migrate API doesn’t provide a plugin that contains the field data in its source row, implement a custom migration, or extend an existing one.
  3. Do you have to deal with multiple field values? Check out the SubProcess plugin.
  4. In a custom process plugin, the properties are accessible from the fetched row. That’s how you’re able to filter, map, or transform those properties as you like – see the sketch after this list.
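
A minimal sketch of such a custom process plugin, with made-up module and plugin names (the transform() signature is the one from ProcessPluginBase):

<?php

namespace Drupal\my_module\Plugin\migrate\process;

use Drupal\migrate\MigrateExecutableInterface;
use Drupal\migrate\ProcessPluginBase;
use Drupal\migrate\Row;

/**
 * Example process plugin transforming a single property of the row.
 *
 * @MigrateProcessPlugin(
 *   id = "my_zip_cleanup"
 * )
 */
class MyZipCleanup extends ProcessPluginBase {

  /**
   * {@inheritdoc}
   */
  public function transform($value, MigrateExecutableInterface $migrate_executable, Row $row, $destination_property) {
    // Other source properties of the current row are available here too.
    $published = $row->getSourceProperty('status');

    // Filter, map or otherwise transform the incoming value.
    return $published ? trim((string) $value) : NULL;
  }

}
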
Nov 17 2019
Nov 17

What is a CMS? What is the value of it? We answer your questions about Content Management Systems and help you choose the appropriate CMS for your digital solution.

Discover more about the service CMS our digital agency has to offer for you.

What is a CMS?

A CMS is the abbreviation of Content Management System. When working with us, a CMS is usually a web-based software application that allows for collaborative editing of content. Content Management Systems are specifically designed to allow non-developers to work with them.

Content Management Systems drive the front-end of a website but can also be drivers for mobile apps, voice assistants, third-party data pools or any other expression of content in the front-end. Some CMS incorporate online shops or complex business processes. A CMS can even be designed to be a company’s control center of all the online activities.

Often, CMS are categorized into proprietary and open source solutions. In proprietary CMS – like AEM or Sitecore – the source code is neither free to use nor public. Open source CMS, on the other hand, make their source code freely available and are often accompanied by large communities that provide a significant feature set completely free to use. We strongly believe in open source and only work with open source CMS such as Drupal, Wordpress, Wagtail, Django CMS or October. If you’d like to learn more, this video gives you a great and playful explanation about open source.

What is the best CMS for us as a company?

There is no right answer to this question. The best CMS for your digital product(s) depends on your business goals in the short, mid and long-term.

Are you planning on selling products online? Would you like to connect other systems with relevant data to the CMS? What are your requirements regarding the front-end? What kind of devices would you like to play out content to? What are your resources (time and money)?

If this all sounds a bit overwhelming to you, don’t worry. The described situation is actually a great opportunity for you to ask yourself those questions early in a project and talk to experts, if needed. The best CMS for your business case might not be the one with the best marketing appearance. The ideal CMS fits your technical, procedural and structural needs. Each CMS has strengths and weaknesses. We recommend you to do a professional review. If done right and early in the process, it will save you a lot of money and time to market in the long run.

What are smart questions to ask when choosing a CMS?

  • What are the project and business goals? Will the CMS help reach these goals?
  • Which functionalities should the CMS cover?
  • Which channels should be fed via the CMS? Which marketing channels are relevant?
  • Which systems should be integrated into the digital platform?
  • What size would the platform be?
  • What is the CMS’s Total Cost of Ownership (TCO)?
  • How many active developers are in the open source community? Is it easy to find qualified developers for the technology/CMS?
  • Who will be editing the content of the new web platform? Are the content editors experienced with CMS? Do they have good visual awareness or should the system provide support in that area?
  • Does the CMS support the latest SEO requirements?
  • Does the CMS support multilanguage well?
Feb 25 2019
Feb 25

Using a smart SQL query with subqueries, we reduced the time to build a data export by a factor of (at least) 50. On the development system with little data, the response time went from 11 seconds down to about 0.25 seconds.

To make this possible, we had to use a generated column to be able to create an index on a field inside a JSON column. Doctrine does not know Generated Columns, so we had to hide the column and index from the Doctrine Schema Generator.

Our use case is that we have a table with orders. We have to build a report that shows sums of the orders by regions (zip codes in our case). The address on the order is not allowed to change, the goal is to record to what address an order has actually been shipped. Rather than linking to a different table with foreign key, we decided to denormalize the address on the order as a JSON MySQL field.

The first approach queried the zip codes table and then looped over the zip codes to query the order database for each of the 3 different sums the report contains. This of course leads to 3*n queries. Add to this that each query is highly inefficient: it needs to do a full table scan, because one criterion involves accessing the zip code in the JSON field with MySQL JSON functions. At some point we started hitting timeout limits for the web request to download the export...

Using Subqueries

This is one place where using the ORM for reading is a trap. Writing direct SQL is a lot easier. (You can achieve the same with DQL or the Doctrine Query Builder and hydrating to an array.)

We converted the query into one single query with subqueries for the fields. Instead of looping over the result of one query and having a query for each row in that result, we unified those into one query:

SELECT
    a.zip,
    (
        SELECT COUNT(o.id)
        FROM orders AS o
        WHERE o.state = 'confirmed'
          AND JSON_CONTAINS(a.zip, JSON_UNQUOTE(JSON_EXTRACT(o.delivery_address, '$.zip'))) = 1
    ) AS confirmed,
    (
        SELECT COUNT(o.id)
        FROM orders AS o
        WHERE o.state = 'delivered'
          AND JSON_CONTAINS(a.zip, JSON_UNQUOTE(JSON_EXTRACT(o.delivery_address, '$.zip'))) = 1
    ) AS delivered,
    ...
FROM areas AS a
ORDER BY a.zip ASC

Each subquery still needs to do a table scan for each row to determine which orders belong to which region. We found no fundamentally easier way to avoid having to select over all orders for each row in the areas table. If you have any inputs, please use the comments at the bottom of this page. What we did improve was having an index for those subqueries.

MySQL Generated Columns

Since version 5.7, MySQL supports “Generated Columns”: A column that represents the result of an operation on the current row. Among other things, generated columns are a neat workaround for creating an index on a value stored inside a JSON data field. The MySQL configuration is nicely explained in this article. For our use case, we have something along the following lines:

ALTER TABLE orders
    ADD COLUMN generated_zip CHAR(4) GENERATED ALWAYS AS
        (JSON_UNQUOTE(JSON_EXTRACT(delivery_address, '$.zip')));

CREATE INDEX index_zip ON orders (generated_zip);

With that, our query can be simplified to be both more readable and use a field where we can use an index:

SELECT
    a.zip,
    (
        SELECT COUNT(o.id)
        FROM orders AS o
        WHERE o.state = 'confirmed'
          AND o.generated_zip = a.zip
    ) AS confirmed,
    (
        SELECT COUNT(o.id)
        FROM orders AS o
        WHERE o.state = 'delivered'
          AND o.generated_zip = a.zip
    ) AS delivered,
    ...
FROM areas AS a
ORDER BY a.zip ASC

So far so good, this makes the query so much more efficient. The rest of this blogpost is not adding further improvements, but explains how to make this solution work when using the Doctrine Schema tool / Doctrine Migrations.

Working around Doctrine

While Doctrine is an awesome tool that helps us a lot in this application, it does not want to support generated columns by design. This is a fair decision and is no impediment for us using them for such queries as the one above.

However, we use Doctrine Migrations to manage our database changes. The migrations do a diff between the current database and the models, and produce the code to delete columns and indices that do not exist on the models.

It would help us if this issue got implemented. Meanwhile, we got inspired by stackoverflow to use a Doctrine schema listener to hide the column and index from Doctrine.

Our listener looks as follows:

<?php

use Doctrine\Common\EventSubscriber;
use Doctrine\DBAL\Event\SchemaColumnDefinitionEventArgs;
use Doctrine\DBAL\Event\SchemaIndexDefinitionEventArgs;
use Doctrine\DBAL\Events;

/**
 * Hides the generated column and its index from Doctrine schema introspection.
 */
class GeneratedColumnsListener implements EventSubscriber
{
    public function onSchemaColumnDefinition(SchemaColumnDefinitionEventArgs $eventArgs)
    {
        if ('orders' === $eventArgs->getTable()) {
            if ('generated_zip' === $eventArgs->getTableColumn()['Field']) {
                $eventArgs->preventDefault();
            }
        }
    }

    public function onSchemaIndexDefinition(SchemaIndexDefinitionEventArgs $eventArgs)
    {
        if ('orders' === $eventArgs->getTable() 
            && 'index_zip' === $eventArgs->getTableIndex()['name']
        ) {
            $eventArgs->preventDefault();
        }
    }

    /**
     * Returns an array of events this subscriber wants to listen to.
     *
     * @return string[]
     */
    public function getSubscribedEvents()
    {
        return [
            Events::onSchemaColumnDefinition,
            Events::onSchemaIndexDefinition,
        ];
    }
}
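
For Doctrine to pick this up, the subscriber has to be registered on the DBAL connection. A minimal sketch (the class name is the one used above; in a Symfony application, tagging the service with doctrine.event_subscriber achieves the same thing):

// Wherever the DBAL connection is bootstrapped:
$connection->getEventManager()->addEventSubscriber(new GeneratedColumnsListener());
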
Feb 12 2019
Feb 12

We built a fast serializer in PHP with an overall performance gain of 55% over JMS for our use-case, and it’s awesome. We open sourced it and here it is: Liip Serializer. Let's look more at how it works, and how we made it so much faster!

For serialization (from PHP objects to JSON) and deserialization (the other way around), we have been using JMS Serializer for a long time in one of our big Symfony PHP projects, and we still use it for parts of it. We were and still are very happy with the features of JMS Serializer, and would highly recommend it for a majority of use cases.

Some of the functionality we would find difficult to cope without:

  • Different JSON output based on version, so that we can have “this field is here until version 3” etc.
  • Different serializer groups, so we can output different JSON based on whether this is a “detail view” or a “list view”. (See the annotation example below.)
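
In JMS Serializer, these two features map to annotations on the models. A rough example (class and property names are made up):

use JMS\Serializer\Annotation as Serializer;

class Product
{
    /**
     * Only part of the output when the "detail" group is serialized.
     *
     * @Serializer\Groups({"detail"})
     */
    public $description;

    /**
     * Dropped from the output starting with version 3.
     *
     * @Serializer\Until("3")
     */
    public $legacyCode;
}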

The way that JMS Serializer works is that it has “visitors” and a lot of method calls, which in the general case is fine in PHP. But when you have big and complicated JSON documents, it has a huge performance impact. This was a bottleneck in our application for years before we built our own solution.

Blackfire helped us a lot to find the bottleneck. This is a screenshot from Blackfire from when we were still using JMS Serializer: you can see that we called visitProperty over 60,000 times!!

Our solution removed this and made our application a LOT faster, with an overall performance gain of 55% (390 ms => 175 ms) and both CPU and I/O wait down by ~50%.

Memory gain: 21%, 6.5 MB => 5.15 MB

Let’s look at how we did this!

GOing fast outside of PHP

Having tried a lot of PHP serializer libraries, we started giving up and thinking that it was simply a bottleneck we had to live with. Then Michael Weibel (a Liiper, working in the same team at the time) came up with the brilliant idea of using Go to solve the problem. And we did. And it was fast!

We were using php-to-go and Liip/sheriff.

How this worked:

  • Use php-to-go to parse the JMS annotations and generate go-structs (basically models, but in go) for all of our PHP models.
  • Use sheriff for serialization.
  • Use goridge to interface with our existing PHP application.

This was A LOT faster than PHP with JMS Serializer, and we were very happy with the speed. The integration between PHP and the Go binary was a bit cumbersome, however. Looking at this, we also thought that it was a bit of an unfair comparison to compare generated Go code with the highly dynamic JMS code. So we decided to try the same approach we took with Go in plain PHP as well. Enter our serializer in PHP.

Generating PHP code to serialize - Liip Serializer

What Liip Serializer does is that it generates code based on PHP models that you specify, parsing the JMS annotations with a parser we built for this purpose.

The generated code uses no objects, and minimal function calls. For our largest model tree, it’s close to 250k lines of code. It is some of the ugliest PHP code I’ve been near in years! Luckily we don’t need to look at it, we just use it.

What it does is that for every version and every group it generates one file for serialization and one for deserialization. Each file contains one single generated function, Serialize or Deserialize.

Then when serializing/deserializing, it uses those generated functions, patching together which filename it should use based on which groups and version we have specified. This way we got rid of all the visitors and method calls that JMS serializer did to handle each of these complex use cases - Enter advanced serialization in PHP, the fast way.
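
Conceptually – this is not the actual generated output, just an illustration of why it is fast – such a generated function boils down to plain array building with no visitors and hardly any function calls:

// Illustrative only: roughly what a generated "serialize Product,
// version 2, group detail" function looks like conceptually.
function serialize_Product_v2_detail($object): array
{
    $data = [];
    $data['name'] = $object->getName();
    if (null !== $object->getDescription()) {
        $data['description'] = $object->getDescription();
    }

    return $data;
}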

If you use the JMS event system or handlers they won't be supported by the generated code. We managed to handle all our use cases with accessor methods or virtual properties.

One challenge was to make the generated code expose exactly the same behaviour as JMS serializer. Some of the edge cases are neither documented nor explicitly handled in code, like when your models have version annotation and you serialize without a version. We covered all our cases, except for having to do a custom annotation to pick the right property when there are several candidates. (It would have been better design for JMS serializer too, if it would allow for an explicit selection in that case.)

In a majority of cases you will not need to do this, but sometimes when your JSON data starts looking as complicated as ours, you will be very happy there’s an option to go faster.

Feel free to play around! We open sourced our solutions under Liip/Serializer on Github.

These are the developers, besides me, in the Lego team who contributed to this project, with code, architecture decisions and code reviews: David Buchmann, Martin Janser, Emanuele Panzeri, Rae Knowler, Tobias Schultze, and Christian Riesen. Thanks everyone! This was a lot of fun to do with you all, as working in this team always is.

You can read more about the Serializer on the repository on GitHub: Liip/Serializer

And the parser we built to be able to serialize here: Liip/Metadata-Parser

Note: The Serializer and the Parser are Open Sourced as-is. We are definitely missing documentation, and if you have trouble using it, or would like something specific documented, please open an issue on the GitHub issue tracker and we would happily document it better. We are in the process of adding Symfony bundles for the serializer and the parser, and putting the Serializer on packagist, and making it easier to use. Further ideas and contributions are of course always very welcome.

Flash image from: https://www.flickr.com/photos/questlog/16347909278

Nov 15 2018
Nov 15

A couple of days ago, the PHP Framework Interoperability Group (PHP-FIG) approved the PSR-18 "HTTP Client" standard. This standard was the last missing piece to build applications that need to send HTTP requests to a server in an HTTP client agnostic way.

First, PSR-7 "HTTP message interfaces" defined how HTTP requests and responses are represented. For server applications that need to handle incoming requests and send a response, this was generally enough. The application bootstrap creates the request instance with a PSR-7 implementation and passes it into the application, which in turn can return any instance of a PSR-7 response. Middleware and other libraries can be reused as long as they rely on the PSR-7 interfaces.

However, sometimes an application needs to send a request to another server. Be that a backend that uses HTTP to communicate, like ElasticSearch, or some third party service like Twitter, Instagram or a weather service. Public third party services often provide common client libraries. Since PSR-17 "HTTP Factories", this code does not need to bind itself to a specific implementation of PSR-7 but can use the factory to create requests.

Even with the request factory, libraries still had to depend on a concrete HTTP client implementation like Guzzle to actually send the request. (They could also do things themselves very low-level with curl calls, but this basically means implementing their own HTTP client.) Using a specific implementation of an HTTP client is not ideal. It becomes a problem when your application uses a client as well, or when you start combining several libraries that use different clients – or, even worse, different major versions of the same client. For example, Guzzle had to change its namespace from Guzzle to GuzzleHttp when switching from version 3 to 4 to allow both versions to be installed in parallel.

Libraries should not care about the implementation of the HTTP client, as long as they are able to send requests and receive responses. A group of people around Márk Sági-Kazár started defining an interface for the HTTP client, branded HTTPlug. Various libraries like Mailgun, Geocoder or Payum adapted their HTTP request handling to HTTPlug. Tobias Nyholm, Mark and myself proposed the HTTPlug interface to the PHP-FIG and it has been adopted as PSR-18 "HTTP Client" in October 2018. The interfaces are compatible from a consumer perspective. HTTPlug 2 implements PSR-18, while staying compatible with HTTPlug 1 for consumers. Consumers can upgrade from HTTPlug 1 to 2 seamlessly and then start transforming their code to the PSR interfaces. Eventually, HTTPlug should become obsolete and be replaced by the PSR-18 interfaces and HTTP clients directly implementing those interfaces.

PSR-18 defines a very small interface for sending an HTTP request and receiving the response. It also defines how the HTTP client implementation has to behave in regard to error handling and exceptions, redirections and similar things, so that consumers can rely on reproducible behaviour. Bootstrapping the client with the necessary setup parameters is done in the application, which then injects the client into the consumer:

use Psr\Http\Client\ClientInterface;
use Psr\Http\Client\ClientExceptionInterface;
use Psr\Http\Message\RequestFactoryInterface;

class WebConsumer
{
    /**
     * @var ClientInterface
     */
    private $httpClient;

    /**
     * @var RequestFactoryInterface
     */
    private $httpRequestFactory;

    public function __construct(
        ClientInterface $httpClient,
        RequestFactoryInterface $httpRequestFactory
    ) {
        $this->httpClient = $httpClient;
        $this->httpRequestFactory = $httpRequestFactory;
    }

    public function fetchInfo()
    {
        $request = $this->httpRequestFactory->createRequest('GET', 'https://www.liip.ch/');
        try {
            $response = $this->httpClient->sendRequest($request);
        } catch (ClientExceptionInterface $e) {
            throw new DomainException('Could not fetch info', 0, $e);
        }

        $response->...
    }
}

The dependencies of this class in the "use" statements are only the PSR interfaces, no need for specific implementations anymore.
Already, there is a release of php-http/guzzle-adapter that makes Guzzle available as PSR-18 client.
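
Bootstrapping then just means picking concrete implementations and injecting them. A minimal sketch, assuming the Guzzle adapter mentioned above and the nyholm/psr7 factories (any other PSR-18 client and PSR-17 request factory would work the same way):

use GuzzleHttp\Client as GuzzleClient;
use Http\Adapter\Guzzle6\Client as GuzzleAdapter;
use Nyholm\Psr7\Factory\Psr17Factory;

// Any PSR-18 client and PSR-17 request factory can be injected here.
$consumer = new WebConsumer(new GuzzleAdapter(new GuzzleClient()), new Psr17Factory());
$consumer->fetchInfo();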

Outlook

PSR-18 does not cover asynchronous requests. Sending requests asynchronously allows sending several HTTP requests in parallel, or continuing with other work and waiting for the result later. This can be more efficient and helps to reduce response times. Asynchronous requests return a "promise" that can be checked to see whether the response has been received, or waited on to block until the response has arrived. The main reason PSR-18 does not cover asynchronous requests is that there is no PSR for promises. It would be wrong for an HTTP PSR to define the much broader concept of promises.

If you want to send asynchronous requests, you can use the HTTPlug Promise component together with the HTTPlug HttpAsyncClient. The guzzle adapter mentioned above also provides this interface. When a PSR for promises has been ratified, we hope to do an additional PSR for asynchronous HTTP requests.
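
A minimal sketch of what that looks like with the HTTPlug interfaces (the client instance is whatever async-capable adapter you have configured):

use Http\Client\HttpAsyncClient;
use Psr\Http\Message\RequestInterface;

function fetchBoth(HttpAsyncClient $client, RequestInterface $first, RequestInterface $second): array
{
    // Both requests are sent without blocking.
    $promiseA = $client->sendAsyncRequest($first);
    $promiseB = $client->sendAsyncRequest($second);

    // wait() blocks until the respective response has arrived.
    return [$promiseA->wait(), $promiseB->wait()];
}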

Oct 28 2018
Oct 28

In 2017, the Drupal Association decided not to host a DrupalCon Europe in 2018 due to waning attendance and financial losses, and took some time to make the European event more sustainable. In the meantime, the Drupal community decided to organise a Drupal Europe event in Darmstadt, Germany in 2018. My colleagues and I joined the biggest European Drupal event in October, and here is my summary of a few talks I really enjoyed!

Driesnote

By Dries Buytaert
Track: Drupal + Technology
Recording and slides

This year, Dries Buytaert focused on improvements made for Drupal users such as content creators, evaluators and developers.

Compared to last year, Drupal 8 contributions increased by 10% and the number of stable modules released by 46%. Moreover, steady progress is noticeable, especially in the core initiatives: the latest version of Drupal 8 shipped with features and improvements created by 4 core initiatives.

Content creators are now the key decision-makers in the selection of a CMS. Their expectations have changed: they need flexibility but also simpler tools to edit content. The layout_builder core module offers some solutions by enabling inline editing of content and drag-and-dropping elements into different sections. The management of media has been improved too, and it is possible to prepare different “states” of content using the workspaces module. But the progress doesn’t stop here. The next step is to modernize the administrative UI with a refresh of the Seven administration theme based on React. Using this modern framework makes it familiar to JavaScript (JS) developers and builds a bridge to the JS community.

Drupal took a big step forward for evaluators as it provides a demo profile called “Umami” now. Evaluators have a clear understanding of what kind of websites can be produced by Drupal and how it works by navigating through the demo website.
The online documentation on drupal.org has also been reorganized with a clear separation of Drupal 7 and Drupal 8. It provides some getting-started guides too. Finally, a quick-install link is available to have a website running within 3 clicks and 1 minute 27 seconds!

Developer experience has been improved as well: minor releases are now supported for 12 months instead of the former 4 weeks. Teams will have more time to plan their updates efficiently. Moreover, GitLab will be adopted within the next months to manage code contributions. This modern collaborative tool will encourage more people to participate in projects.

Regarding the support of the current Drupal versions, Dries shared that Symfony 3, the base component of Drupal 8, will be end-of-life in 2021. To keep the CMS secure, this implies that Drupal 8 has to be end-of-life by November 2021 and that Drupal 9 should be released in 2020. The upgrade from Drupal 8 to Drupal 9 should be smooth as long as you stay current with the minor releases and don’t use modules with deprecated APIs.
The support of Drupal 7 has been extended to November 2021, as the migration path from Drupal 7 to Drupal 8 is not yet stable for multilingual sites.

Slide from the Driesnote showing the current state of Drupal as a mountain with many tooltips.

Last but not least, DrupalCon is coming back next year and will be held in Amsterdam!

JavaScript modernisation initiative

By Cristina Chumillas, Lauri Eskola, Matthew Grill, Daniel Wehner and Sally Young
Track: Drupal + Technology
Recording and slides

After a lot of discussions on which JS framework will be used to build the new Drupal administrative experience, React was finally chosen for its popularity.

The initiative members wanted to focus on the content editing experience, as this affects a big group of Drupal users. The goal was to simplify and modernize the current interface, and to embrace practices that are familiar to JS developers so they can join the Drupal community more easily.
On one hand, a UX team ran user tests. Those showed that users like the flexibility they have with the Drupal interface but usually dislike its complexity. A comparative study was also run to learn what is being used in other tools and CMSs. On the other hand, the User Interface (UI) team worked on the redesign of the administrative interface and built a design system based on components. The refresh of the Seven administration theme is ongoing.
Another group worked on prototyping the User Experience (UX) and User Interface (UI) changes with React. For instance, if an editor leaves a page without saving their last changes, a popup appears to restore them. This is possible because the content is stored in the application state.

You can see a demo of the new administrative UI in the video (go to 20 minutes 48 seconds):

[embedded content]

If you are interested, you can install the demo and of course join the initiative!

Drupal Diversity & Inclusion: Building a stronger community

By Tara King and Elli Ludwigson
Track: Drupal Community
Recording

Diversity in gender, race, ethnicity, immigration status, disability, religion etc. helps a lot: it has been proven to make a team more creative, collaborative and effective.

Tara King and Elli Ludwigson, who are part of the Drupal Diversity and Inclusion team, presented how Drupal is building a stronger and smarter community. The initial need was to make Drupal a safer place for all, especially for the less visible ones at community events such as women, minorities and people with disabilities.
The group addresses several issues, such as racism, sexism, homophobia and language barriers, with different efforts and initiatives. For example, diversity is highlighted and supported at Drupal events: pronoun stickers are distributed, the #WeAreDrupal hashtag is used on Twitter and social events are organized for underrepresented people as well. Moreover, the group has released an online resource library, which collects articles about diversity. All of this is ongoing and new initiatives have been created – helping people find jobs or encouraging recruiters to attract more diverse candidates, to name only two.

Diversity and Inclusion flyer, photo by Paul Johnson, license CC BY-NC 2.0
All-gender restrooms sign, photo by Gábor Hojtsy, license CC BY-SA 2.0

If you are interested in the subject and would like to be involved, there are weekly meetings in #diversity-inclusion Drupal Slack channel. You can join the contrib team or work on the issue queue too.

Willy Wonka and the Secure Container Factory

By Dave Hall
Track: DevOps + Infrastructure
Recording

Docker is a tool that is designed to create, deploy and run applications easily by using containers. It is also about “running random code downloaded from the internet and running it as root”. This quote points out how important it is to maintain secure containers. Dave Hall illustrates this with practical advice and images from the “Willy Wonka and the Chocolate Factory” movie. Here is a little recap:

  • Have a light image: big images slow down deployments and also increase the attack surface. Install an Alpine-based distribution rather than a Debian one; it is about 20 times lighter;
  • Check downloaded sources very carefully: for instance, you can use the wget command and validate the checksum of a file. Plus you can scan your images for vulnerabilities using tools like Microscanner or Clair;
  • Use continuous development workflows: build a plan to maintain your Docker images, using a good Continuous Integration / Continuous Delivery (CI/CD) system, and document it;
  • Specify a user in your Dockerfile: running root in a container is the same as running root on the host. You need to reduce the actions available to a potential attacker;
  • Measure your uptime in hours/days: it is important to rebuild and redeploy often, to avoid potentially running a compromised system for a long time.

Now you are able to incorporate this advice into your Dockerfiles in order to build a safer factory than Willy Wonka’s.

Decoupled Drupal: Implications, risks and changes from a business perspective

By Michael Schmid
Track: Agency + Business
Recording

Before 2016, Michael Schmid and his team worked on fully Drupal projects. Since then, they have been working on progressively and fully decoupled projects.
A fully decoupled website means that the frontend is not handled with Drupal but with a JS framework such as React. This framework “talks” to Drupal via an API such as GraphQL. It also means that all frontend interactions provided by Drupal are gone: views with filters, webforms, comments etc. If a module provides a frontend, it is not usable anymore and needs to be re-implemented in some way.
With progressively decoupled websites, the frontend stack is still built with Drupal, but some parts are implemented with a JS framework. You can have data provided by APIs or injected from Drupal too. The advantage is that you can benefit from Drupal components and don’t need to re-implement everything. A downside are conflicts with CSS styling and build systems handled on both sides, so you need a clear understanding of what does what.

To be able to run such projects successfully, it is important to train every developer in the new technologies: JS has evolved and parts of the logic can be built with it. We can say that backenders can do frontend now. In terms of hiring it means you can hire full stack developers, but also JS engineers. This attracts more developers, as they love working with JS frameworks such as React.

Projects are investments which continue over time, and you should expect failures at the beginning. These kinds of projects are more complex than regular Drupal ones; they can fail or go over budget. Learn from your mistakes and share them with your team in retrospectives. It is also very important to celebrate successes!
Clients request decoupled projects to offer a faster and cooler experience to users. They need to understand that this is an investment that will pay off in the future.

Finally, fully decoupled Drupal is a trend for big projects, and other CMSs already support decoupling out of the box. Drupal needs to focus on a better editor experience and a better API. There might also be projects that only require simple backend editing, for which Drupal is not needed.

Hackers automate but the Drupal Community still downloads updates on drupal.org or: Why we need to talk about Auto Updates

By Joe Noll and Hernani Borges de Freitas
Track: Drupal + Technology
Recording and slides

In 2017, 59% of Drupal users were still downloading modules from drupal.org. In other words, more than half of the users didn’t have any automated process to install modules. Knowing that critical security updates were released in the past months, and that it is only a matter of hours until a website potentially gets hacked, it becomes crucial to have a process to automate these updates.
An update can be quite complex and may take time: installing the update, reviewing the changes, deploying to a test environment, testing either automatically or manually and deploying to production. However, this process can be simplified with automation in place.

There is a core initiative to support small-to-medium site owners who usually do not take care of security updates. The idea is a process that downloads the code and updates the sources in the Drupal directory.
For more complex websites, automating the composer workflow with a CI pipeline is recommended. Every time a security update is released, the developer pushes it manually into the pipeline. The CI system builds an installation containing the security fix in a new branch. This is deployed automatically to a non-production environment where tests can be run and the build approved. Changes can be merged and deployed to production afterwards.

Update strategy through all steps of a CI pipeline – slide by Joe Noll and Hernani Borges de Freitas

To go further, the update_runner module focuses on automating the first part by detecting an update and firing a push for an update job.

Conclusion

Meeting the Swiss Drupal community at a restaurant, photo by Josef Dabernig, license CC BY-NC-SA 2.0

We are back with fresh ideas, things we are curious to try and learnings from great talks! We joined the social events in the evenings too, and exchanged ideas with other drupalists, in particular the Swiss Drupal community! The week went by so fast. Thank you, Drupal Europe organizers, for making this event possible!

Header image credits: Official Group Photo Drupal Europe Darmstadt 2018 by Josef Dabernig, license CC BY-NC-SA 2.0.

Dec 10 2017
Dec 10

Some of our applications are deployed to Amazon Elastic Beanstalk. They are based on PHP and Symfony, and of course use composer for downloading their dependencies. This can take a while, approx. 2 minutes in our application when starting on a fresh instance. This can be annoyingly long, especially when you're upscaling to more instances due to, for example, a traffic spike.

You could include the vendor directory when you do eb deploy, but then Beanstalk doesn't do a composer install at all anymore, so you have to make sure the local vendor directory has the right dependencies. There are other caveats with doing that, so it was not a real solution for us.

Composer cache to the rescue. Sharing the composer cache between instances (with a simple up and download to an s3 bucket) brought the deployment time for composer install down from about 2 minutes to 10 seconds.

For that to work, we have this in a file called .ebextensions/composer.config:

commands:
  01updateComposer:
    command: export COMPOSER_HOME=/root && /usr/bin/composer.phar self-update
  02extractComposerCache:
    command: ". /opt/elasticbeanstalk/support/envvars && rm -rf /root/cache && aws s3 cp s3://rokka-support-files/composer-cache.tgz /tmp/composer-cache.tgz &&  tar -C / -xf /tmp/composer-cache.tgz && rm -f /tmp/composer-cache.tgz"
    ignoreErrors: true

container_commands:
  upload_composer_cache:
    command: ". /opt/elasticbeanstalk/support/envvars && tar -C / -czf composer-cache.tgz /root/cache && aws s3 cp composer-cache.tgz s3://your-bucket/ && rm -f composer-cache.tgz"
    leader_only: true
    ignoreErrors: true

option_settings:
  - namespace: aws:elasticbeanstalk:application:environment
    option_name: COMPOSER_HOME
    value: /root

It downloads the composer-cache.tgz on every instance before running composer install and extracts that to /root/cache. And after a new deployment is through, it creates a new tar file from that directory on the "deployment leader" only and uploads that again to S3. Ready for the next deployment or instances.

One caveat we haven't solved yet: that .tgz file will grow over time (since it also contains old dependencies). Some process should clear it from time to time, or just delete it on S3 when it gets too big. The ignoreErrors options above make sure that the deployment doesn't fail when that tgz file doesn't exist or is corrupted.

Nov 18 2017
Nov 18

The VIPS image processing system is a very fast, multi-threaded image processing library with low memory needs. And it really is pretty fast, the perfect thing for rokka and we'll be transitioning to using it soon.

Fortunately, there's a PHP extension for VIPS and a set of classes for easier access to the VIPS methods. So I started to write a VIPS adapter for Imagine and came quite far in the last few days. Big thanks to the maintainer of VIPS John Cupitt, who helped me with some obstacles I encountered and even fixed some issues I found in a very short time.

So, without much further ado I present imagine-vips, a VIPS adapter for Imagine. I won't bore you with how to install and use it, it's all described on the GitHub repo.
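
To give an idea of what that looks like: the adapter plugs into the normal Imagine API, so existing Imagine code barely changes. A small sketch (assuming the adapter class lives in the Imagine\Vips namespace, as in the repo; file names are made up):

use Imagine\Image\Box;
use Imagine\Vips\Imagine;

$imagine = new Imagine();

// Same Imagine API as the GD or Imagick adapters, just backed by libvips.
$imagine->open('photo.jpg')
    ->thumbnail(new Box(800, 600))
    ->save('thumbnail.jpg');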

There is still some functionality missing (see the README for details), but the most important operations (at least for us) are implemented. One thing which will be hard to implement correctly are Layers. Currently the library just loads the first image of, for example, an animated gif. I'm not sure we will ever add that functionality, since libvips can't write those gifs anyway. But with some fallback to imagick or gd, it would nevertheless be possible.

The other thing not really well tested yet (but we're on it) is images with ICC colour profiles. Proper support is coming.

As VIPS is not installed on many servers, I don't expect a huge demand for this package, but it may be of use for someone, so we open sourced it with joy. Did I say that it's really fast? And maybe someone finds some well hidden bugs or extends it to make it even more useful. Patches and reports are of course always welcome.

Sep 28 2017
Sep 28

Do you remember that I recently wrote about the implementation of a small but handy extension for config search in Magento 1? I have become so used to it that I had to do the same for Magento 2. And since I had heard many rumours about the improved contribution process for M2, I also decided to do it as a contribution and get my hands "dirty".

Since the architecture of the framework has changed drastically, I expected a lot of trouble. But in fact, it was even a little bit easier than for M1. From a development point of view it was definitely more pleasant to work with the code, but I also wanted to test the complete path to a fully merged pull request.

Step #0 (Local dev setup)

For the local setup I decided to use the Magento 2 docker devbox, and since it was still in beta I ran the first command without any hope of a smooth execution. But surprisingly, I had no issues with the whole setup. After a few commands in the terminal and a cup of coffee, Magento 2 was successfully installed and ready to use. A totally positive experience.

Step #1 (Configuration)

All I had to do was declare my search model in di.xml. Not too hard, right? :)

app/code/Magento/Backend/etc/adminhtml/di.xml

Step #2 (Implementation)

The implementation of the search itself was trivial: we just look for matches for a given keyword in the ConfigStructure object using mb_stripos().

app/code/Magento/Backend/Model/Search/Config.php
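
To illustrate the idea, here is a simplified, standalone sketch. The real implementation walks Magento's ConfigStructure object; the data structure, URL format and function below are made up purely for illustration:

<?php
// Simplified sketch of the search idea; $fields stands in for what
// Magento's ConfigStructure object provides (the real code iterates that tree).
function searchConfig(string $query, array $fields): array
{
    $results = [];
    foreach ($fields as $field) {
        // Case-insensitive, multibyte-safe match on the field label
        if (mb_stripos($field['label'], $query) !== false) {
            $results[] = [
                'name' => $field['label'],
                'url'  => 'admin/system_config/edit/section/' . $field['section']
                    . '#' . $field['id'],
            ];
        }
    }
    return $results;
}

// Hypothetical data and usage
$fields = [
    ['label' => 'Base URL', 'section' => 'web', 'id' => 'web_unsecure_base_url'],
    ['label' => 'Default Country', 'section' => 'general', 'id' => 'general_country_default'],
];
print_r(searchConfig('url', $fields));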

Step #3 (View)

As with M1, the result of the search is a list of URLs pointing to the matched configuration labels. When the user clicks one of them, they are redirected to the config page and the matched field is highlighted.

That would be it regarding the implementation :)

Step #4 (Afterparty)

Too simple to believe? You are right. I thought that this was enough to submit the PR, but I had completely forgotten about tests :) Tests are one of the main requirements for a pull request to be accepted by the Magento team.

Since all the implemented code was well isolated (it had no strict dependencies), it was pretty easy to write tests. I covered most of the code with unit tests, and for the main search method I wrote an integration test.

Conclusion

I would like to point out that during the whole life cycle of the pull request, I received fast and high-quality support from the Magento team. They gave useful recommendations and advised me, sometimes even during their vacations. This is what I call outstanding interaction with the community!

My special thanks to Eugene Tulika and Igor Miniailo, and of course Dmitrii Vilchinskii for the idea behind this handy feature.

Mar 08 2017
Mar 08


This year was the 6th edition of the DrupalDay Italy, the main event to attend for Italian-speaking drupalists.

Previous editions took place in other main Italian cities like Milan, Bologna and Naples.

This time Rome had the privilege of hosting the event, which was ideally located on the Sapienza University campus.

The non-profit event was free of charge.

A 2-days event

Like most development-related events these days, it spanned two days, March 3rd and 4th.

The first day was the conference day, with more than 20 talks split into 3 different "tracks" or, better, rooms.

In fact, there was no clear separation of scopes and the same room hosted both Biz and Tech talks, probably (but this is just my guess) in an attempt to mix different interests and invite people to get out of their comfort zone.

The second day was mainly aimed at developers of all levels with the “Drupal School” track providing courses ranging from site-building to theming.

The “Drupal Hackaton” track was dedicated to developers willing to contribute (in several ways) to the Drupal Core, community modules or documentation.

The best and the worst

As expected, I found the quality of the talks somewhat uneven.

Among the most interesting ones, I would definitely mention Luca Lusso's "Devel – D8 release party" and Adriano Cori's talk about the HTTP Client Manager module.

I was also positively surprised by (and enjoyed a lot) the presentation "Venice and Drupal" by Paolo Cometti and Francesco Trabacchin, where I discovered that the City of Venice has an in-house web development agency using Drupal for its main public websites and services.

On the other hand, I didn't like Edoardo Garcia's keynote "Saving the world one Open Source project at a time".

It seemed to me mostly an excuse to advertise his candidacy as Director of the Drupal Association.

I had the privilege to talk about "Decoupled frontend with Drupal 8 and OpenUI 5".

The audience, initially surprised by the unusual Drupal-SAP (the company behind OpenUI) association, showed a real interest and curiosity.

After the presentation, I had the chance to go into the details and discuss my ideas with a few other people.

I also received some criticism, which I really appreciated and which will definitely make me improve as a presenter.

Next one?

In the end, I really enjoyed the conference, both the contents and the ambiance, and will definitely join next year.

Feb 28 2017
Feb 28

I started hearing about Drupal 8 back in 2014, and how this CMS would start using Symfony components, an idea that I, as a PHP and Symfony developer, found very cool.

That is when I got involved with Drupal, not the CMS, but the community.

I got invited to my first DrupalCon back in 2015. That was the biggest conference I have ever been to; thousands of people were there. When I entered the conference building I noticed several things, one of them being that the code of conduct was very visible and printed. I also got a t-shirt that fit me really well – a rarity at most tech conferences I go to. The gender and racial diversity also seemed fairly high. I immediately felt comfortable and like I belonged – super cool first impression.

Like many other geeks, I have social anxiety, so I was still overwhelmed by all these people and didn't know who to talk to. Luckily Larry was there, so I had someone to hug.

I went to many great talks, as there were a lot of tracks – including the Symfony one, where I was speaking. A conference well worth going to for EVERYONE. This is also something that I like: they try to make every DrupalCon affordable for everyone.

That evening I felt a bit shy again and stood somewhere all on my own, unable to spot the two people, out of thousands, that I knew. Then someone walked up to me and just started talking to me, making me feel welcome. I said I don't do Drupal at all, and they said that's nice! We talked about what I do and they were very interested.

This year I went to a local DrupalCamp here in Switzerland, Drupal Mountain Camp. It was an event much more focused on Drupal, as you would expect, so I did not attend as many talks as I did at DrupalCon, but again the inclusiveness and the atmosphere were in the air – I felt very, very welcome and safe (except maybe when sledging down a mountain…).

They mentioned the code of conduct at the beginning of the conference and then proceeded to organise an awesome event with winter sports around it.

I spoke at Drupal Mountain Camp, giving an introduction to Neo4j, a talk I have given many times with varying results. People were extremely interested in graph databases, the concepts and how they work, and they asked a lot of questions. Again, when I told them I don't do Drupal, no one even tried to convince me to start; that is where our communities differ a bit.

I think that we can learn from Drupal: embrace our differences, and each other, and accept that we do different things and are different people, and that it doesn't matter, because that is what makes a community work, that is what makes us awesome. Diversity matters, and Drupal gets this.

Thank you to the Drupal community for showing how to be inclusive the right way, and how not to try to convince someone to be someone they are not, but rather to support that person and try to learn from them. This is the best behaviour a community could ever have.

And hugs! So many hugs.

Oct 23 2016
Oct 23


In this blog post I will present how, in a recent e-commerce project built on top of Drupal 7 (the previous major version of the Drupal CMS), we made Drupal 7, SearchAPI and Commerce play together to efficiently retrieve grouped results from Solr in SearchAPI, with no duplication of indexed data.

We used the SearchAPI and FacetAPI modules to build a search index for products; so far so good: available products and product variations can be searched and also filtered using a set of pre-defined facets. Then a new need arose from our project owner: provide a list of products where the results should include, in addition to the product details, a picture of one of the available product variations, while keeping the ability to apply facets on products for the listing. Furthermore, the product variation picture displayed in the list must also match the filter applied by the user: this with the aim of not confusing users and of providing a better user experience.

An example use case here is simple: allow users to get the list of available products and be able to filter them by the color/size/etc field of the available product variations, while displaying a picture of the available variations, and not a sample picture.

For the sake of simplicity and consistency with Drupal's Commerce module terminology, I will use the term “Product” to refer to any product-variation, while the term “Model” will be used to refer to a product.

Solr Result Grouping

We decided to use Solr (the well-known, fast and efficient search engine built on top of the Apache Lucene library) as the backend of the eCommerce platform: the reason lies not only in its full-text search features, but also in the possibility of building a fast retrieval system for the huge number of products we were expecting to be available online.

To solve the requirement about the display of product models, facets and available products, I intended to use the Solr feature called Result Grouping, as it seemed suitable for our case: Solr is able to return just a subset of results by grouping them by a single-valued field (previously indexed, of course). The facets can then be configured to be computed from the grouped set of results, from the ungrouped items, or just from the first result of each group.
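
To give a feel for what this looks like at the Solr level, here is a rough sketch of the request parameters for such a grouped query; the field names are made up, the parameter names come from Solr's Result Grouping feature:

q=*:*
fq=status:1
group=true
group.field=field_model_id
group.limit=1
group.ngroups=true
facet=true
facet.field=field_color

By default the facet counts are computed over all matching (ungrouped) documents; as far as I understand, group.truncate=true would base them on the first result of each group instead, and group.facet=true would count matching groups rather than documents.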

This handy Solr feature can be used in combination with the SearchAPI module by installing the SearchAPI Grouping module. That module allows returning results grouped by a single-valued field, while still building the facets from all the results matched by the query; this behaviour is configurable.

That allowed us to:

  • group the available products by the referenced model and return just one model;
  • compute the attribute's facets on the entire collection of available products;
  • reuse the data in the product index for multiple views based on different grouping settings.

Result Grouping in SearchAPI

Due to some limitations of the SearchAPI module and its query building components, this plan was not feasible with the existing configuration, as it would have required us to create a copy of the product index just to apply a specific Result Grouping configuration for each view.

The reason is that the features of the SearchAPI Grouping module are implemented on top of the "Alterations and Processors" functions of SearchAPI. Those are a set of specific functions that can be configured and invoked both at indexing time and at querying time by the SearchAPI module. In particular, Alterations allow programmatically altering the content sent to the underlying index, while the Processors code is executed when a search query is built and executed and when the results are returned.

Those functions can be defined and configured only per index.
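
Just to illustrate what a query-time processor looks like in the Drupal 7 search_api module, here is a rough sketch; the base class, method names and the 'search_api_grouping' query option are written from memory and should be treated as assumptions rather than exact API:

<?php
// Illustrative sketch only: a grouping-style processor hooks into the query
// before it is sent to the backend and into the results afterwards.
class ExampleGroupingProcessor extends SearchApiAbstractProcessor {

  // Called while the search query is being built and executed:
  // this is where grouping settings would be attached for the Solr backend.
  public function preprocessSearchQuery(SearchApiQuery $query) {
    $query->setOption('search_api_grouping', array(
      'use_grouping' => TRUE,
      'fields' => array('field_model_id'),   // made-up field name
    ));
  }

  // Called on the raw results returned by the backend:
  // this is where grouped results would be reshaped for Views & co.
  public function postprocessSearchResults(array &$response, SearchApiQuery $query) {
    // Alter $response['results'] here if needed.
  }
}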

As visible in the following picture, the SearchAPI Grouping module can only be configured in the index configuration, not per query.

Image 1: SearchAPI configuration for the Grouping Processor.

As the SearchAPI Grouping module is implemented as a SearchAPI Processor (it needs to be able to alter the query sent to Solr and to handle the returned results), it would force us to create a new index for each different result grouping configuration.

Such a limitation would have required introducing a lot of (useless) data duplication in the index, with a consequent decrease in performance when products are saved and then indexed into multiple indexes.

In particular, the duplication is all the more unnecessary because the changes performed by the Processor are merely an alteration of:

  1. the query sent to Solr;
  2. the handling of the raw data returned by Solr.

This shows that there would be no need to index the same data multiple times.

Since the possibility to define per-query processors sounded really promising, and such a feature could be used extensively in the same project, a new module has been implemented and published on Drupal.org: the SearchAPI Extended Processors module (thanks to SearchAPI's maintainer, DrunkenMonkey, for the help and review :) ).

The Drupal SearchAPI Extended Processor

The new module extends the standard SearchAPI behaviour for Processors and lets admins configure the execution of SearchAPI Processors per query and not only per index.

By using the new module, any index can now be used with multiple, different Processor configurations; no new indexes are needed, thus avoiding data duplication.

The new configuration is exposed, as visible in the following picture, while editing a SearchAPI view under “Advanced > Query options”.

The SearchAPI processors can be altered and redefined for the given view; a checkbox allows completely overriding the current index settings rather than just adding extra processors.

Image 2: View's "Query options" with the SearchAPI Extended Processors module.

Conclusion: the new SearchAPI Extended Processors module has now been used for a few months in a complex eCommerce project at Liip and allowed us to easily implement new search features without the need to create multiple separate indexes.

We are able to index product data in one single (and compact) Solr index and use it with different grouping strategies to build product listings, model listings and model-category navigation pages without duplicating any data.

Since all those listings leverage the Solr filter query (fq) parameter to select the correct set of products to be displayed, Solr can make use of its internal caches, specifically the filterCache, to speed up subsequent searches and facet computations. This aspect, in addition to using only one index, allows caches to be shared among multiple listings, which would not be possible if separate indexes were used.
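
As an illustration of why this helps (field names again made up): if every listing restricts its results with filter queries such as the ones below, Solr caches the document set for each fq entry independently of the main query in the filterCache, so the same cached filters can be reused across product listings, model listings and facet computations, as long as they all hit the same index:

fq=field_category:shoes
fq=field_color:red
fq=status:1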

For further information, questions or curiosity, drop me a line; I will be happy to help you configure Drupal SearchAPI and Solr for your needs.
