Sep 23 2018

This is the second post about the latest developments regarding the editorial experience in Drupal 8 based on a couple of presentations at Drupal Europe 2018.

Gutenberg editor

One project that could make a huge difference in the way editors perceive Drupal is Gutenberg.

Gutenberg was presented at Drupal Europe by the Norwegian agency Frontkom. This contrib module integrates Gutenberg, the React-based JavaScript editor that originated in WordPress, into Drupal.

Gutenberg can be enabled per content type and replaces the node edit form with a blank canvas where the editor creates content using Gutenberg blocks, as shown in the demo.

By default, various types of blocks are available to the editor, such as headings, text paragraphs, images and Drupal blocks (for example the ones provided by the Views module). Other Gutenberg blocks can be custom made, and the authors are about to launch Gutenberg Cloud, a library from which blocks can be installed on your website via a UI in Drupal.

What remained unclear from the presentation was how Gutenberg blocks are stored in the database and whether the individual blocks can be retrieved in a structured way, for example to expose them as a REST resource.

The plan is to launch Gutenberg at the end of this year.

The full presentation is available on Youtube:

Improve Paragraphs with lesser known features

More and more site builders implement Paragraphs to let users build structured content in a very flexible way. It was therefore great to see Milos Bovan of MD Systems demonstrate a couple of lesser known features at Drupal Europe.

Using the following features you can make Paragraphs even better than it already is.

  • Use the style plugin to give each paragraph a specific style that can be used for CSS styling. The style can be chosen from the node edit form.
  • Add paragraphs to a library so you can reuse them elsewhere in the site. A listing is available to show all the paragraphs in the library. You can promote a paragraph to the library and change it once to have it automatically updated everywhere in the site. If you don't want that, unlink it from the library so that the changes do not affect the paragraphs elsewhere.
  • Use the drag-and-drop mode to make it easier to order the paragraphs on the entity edit form. In combination with the collapse mode you can drastically improve the paragraphs UI, which can often be quite messy.
  • Organize a long, messy list of paragraph types by creating type groups. In the UI these groups become available as separate tabs, and by using icons for the types you can make the UI a bit more intuitive.
  • Convert paragraph types. This will allow you for example to convert an existing unstructured text field into a structured card paragraph type.

Multistep forms

Multistep forms are an important feature of a website or application, as they give users a much better experience when submitting their data. They increase the user's motivation to finish filling in the form, which ultimately leads to a much higher conversion rate.

The contrib module Form steps seems to do a good job of managing the complexity of multistep forms.

Several contributed modules, among them Webform, allow building a multistep form, but they are often limited in scope, hard to customize or simply only available for Drupal 7. Alternatively, a multistep form can be built with custom code, which could at some point lead to an unmaintainable situation.

The Form steps module, on the other hand, allows creating multistep forms by leveraging the new Drupal 8 core feature of form modes. Much like view modes, form modes are different ways of presenting a Drupal form (for example a user profile form or a node edit form).
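
For illustration, here is a minimal sketch of working with a form mode programmatically (Drupal 8.6-era API; the 'register_step_1' form mode and the 'profile' content type are hypothetical). It loads the form display for a specific form mode to inspect which fields that step exposes:

function mymodule_inspect_step_fields() {
  // Load the form display that belongs to the hypothetical
  // 'register_step_1' form mode of the 'profile' content type.
  $form_display = entity_get_form_display('node', 'profile', 'register_step_1');
  $fields = [];
  foreach ($form_display->getComponents() as $field_name => $component) {
    // Each component describes how one field is rendered in this form
    // mode, e.g. its widget type and weight.
    $fields[$field_name] = isset($component['type']) ? $component['type'] : 'default';
  }
  return $fields;
}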

The Form steps module, as demonstrated at Drupal Europe by the Drupal agency Actency, lets you create workflows that are collections of different form modes, so that you can present the user with a multistep form. Each step in the workflow is linked to a particular form mode of a specific content type. As a result, the user creates several nodes (possibly of different content types) when following the steps of the multistep form.

The workflow also manages the progress bar of the multistep form, giving the user the option to navigate through the different steps of the form.

Form steps seems to provide a robust solution for a feature that many of us would like to implement, or should start implementing, in our Drupal websites.

Sep 21 2018

When I was at Drupal Europe 2018, I had the opportunity to see the new Drupal core profile Umami being demonstrated. Umami is available in Drupal core since version 8.6 and it aims at demonstrating Drupal’s features.

Umami is an installation profile that shows anyone who is considering using Drupal (so called “evaluators”) what it can do out of the box.

It provides the content (recipes), configuration and the theme (including all resources like javascript, css, images and fonts) for a fictional Food magazine called “Umami”.

Umami Drupal profile screenshot

Umami has been developed by several members of the Drupal community over the last one and a half years, and the result is quite impressive. It is a huge step up from the out-of-the-box experience we have become accustomed to in the past, where we were presented with an empty screen in the layout of default themes like Bartik, Garland or even Bluemarine.

It is important to note that Umami only uses Drupal core. So that means that no contributed modules (not even Paragraphs!!), no custom code and no experimental core modules (like the Layout builder) are included.

Besides that restriction, there were many other challenges such as the licensing of web fonts and whether to use British or American standards and wording in the recipes. This makes the end result even more impressive.

Umami is really meant as a demonstration and cannot be used as a starter kit for new sites. Future updates will not guarantee any backwards compatibility.

As soon as they are stable, Umami will be expanded with new Drupal core features, among others:

  • media library for easily organizing and adding images, videos and such;
  • layout builder for easily adding block content to a page;
  • content migration for importing the demo content. This feature depends on a CSV migration plugin becoming available in core.

The developers did a great job in providing Drupal core for the first time in its history with tasty demo content out of the box!

Sep 18 2018

At Drupal Europe 2018 I had the chance to learn the latest developments regarding the editorial experience in Drupal 8.

Content planner

One improvement that can make a big impact on the daily work of editors is the Content planner, which was demonstrated by Lukas Fischer of Netnode.

Currently, Drupal's out-of-the-box content overview screen (admin/content) provides a somewhat Spartan experience. Thus the need arose for a more feature-rich content dashboard. With that need in mind, the team at Netnode found inspiration in content planning tools like Buffer, GatherContent, Trello and Scompler.

This resulted in the Content planner project. This contributed module will provide a content planning dashboard that allows editors to easily find the content they need to work on.

Content planner features

Some of the features of Content planner are:

  • a content status giving a quick overview of the state the website's content is in
  • a calendar that allows scheduling the publication of nodes
  • a recent content list giving the editor quick access
  • a kanban board for content, with columns for the content statuses draft, published, archived and so forth

The module is quite young and still needs improvement, but it seems useful enough to start using in your projects. By adding Content planner to your website you will probably increase your popularity among your editorial colleagues tenfold!

Autosave form

Another development that could make many editors working with Drupal happy is autosaving forms and resolving conflicts.

The Autosave form contrib module was demonstrated at Drupal Europe by Hristo Chonov of Biologis.

It automatically saves the field values every minute while you are filling out a form (for example a node form or a contact form). To do this correctly it bypasses all form validation, disables any implemented form hooks and keeps the form ID intact, so that the normal Drupal form editing workflow is not disturbed.

At the moment the module is not able to autosave when creating a new node because essential information like the node ID is not available at that moment.

Autosave states are saved per user, and autosaving is disabled when two users are working on the same content.

Conflicts

If multiple users are working on the same content then conflicts may arise. The conflict module aims at resolving those conflicts by comparing the following versions of the content:

  • the initial content;
  • the content that’s being edited;
  • the content that’s stored (which could be the content that’s been edited in the meantime by another user);

Most fields will be merged automatically, but fields that have conflicting values are presented to the user so they can choose how to resolve them. The UI for resolving conflicts is currently being re-evaluated and contributions in this area are more than welcome.

If you are looking for ways to improve the editor experience of your projects then put Autosave form and Conflict on your checklist.

Sep 16 2018

Introduction

Today, we will take you on a journey through some important insights we gained as builders of educational portals: portals in which Drupal plays a part, and how we managed to add value to the educational portals we built over the years. Of course we like to give some examples of how we succeeded, but it is just as interesting to look at some flaws that actually helped us do better the next time.

Educational portal?

But first, just what is an educational portal?

The term "educational portal" can be understood broadly. In this blog, we would like to focus on applications that have a clear internal focus for the university as a whole: its students, teachers and staff. Think of electronic learning platforms, digital learning (and working) environments and intranet systems for universities.

Recent years: digital workspace


As part of the digital transformation every university is going through, the term "digital workspace" is floating around in this context. A digital workspace brings together all the aforementioned subsystems into one intuitive platform. We'll touch on that subject later on.

Role of Drupal


Secondly, how do educational portals / digital workspaces relate to Drupal?

Universities around the world have been using Drupal for some years now, even going back to version 4.x. Drupal is particularly popular because of:

  • High modularity
  • Flexible API for integrations
  • Identity and access management
  • Authentication with external providers, OAuth and SSO available via contributed modules
  • Open source nature / General Public License
  • Flexible yet fine-grained management of roles & permissions

And that is exactly where we would like to start today.

Target Audiences


We could say that the typical challenge of education is the broad collection of target audiences. When developing an educational portal it's important to know your target audience: not only will you deal with teachers and students and cater to their needs, but you also have to keep in mind that parents may be visiting the site, as are alumni, heads of faculties, potential sponsors, researchers, general staff, journalists and the general public.

And that list is probably still not complete.


One way to tackle this is to make use of personas, a method of visualising your potential users. With this method you create fictional characters that each represent one of the user roles.

With the personas defined, you can make an educated guess about the type of user journey the users of the portal will follow. The next step is wireframing. An efficient way to achieve a shared view on "what we need" is to invite the target audiences to literally DRAW what they have in mind and bring all these views together in user experience sessions.


After this, we can use these views in wireframes. This is quite essential in managing expectations. And there is a hidden advantage to this way of working: it can be a superb way of bringing together groups that are not necessarily 'best friends' or at least have opposite goals. Then prototype the application and perform user tests with a select group of users who represent the roles defined earlier.



From a Drupal perspective we would like to share another important insight we gained during the development of portals. While we concluded that Drupal has a flexible basis for role and access management, we need to make sure it remains manageable. The actual handing out of permissions can of course be carried out in Drupal itself, but large organisations should avoid this multilayered approach. In simpler words: we want to make sure all permissions are stored in one central location, for instance Active Directory. In the end this will prevent flaws such as the system being abused without anyone noticing.

Politics in Education


Working with large educational institutes brings some special challenges in the form of different factions in the organisation. There are not only the IT and business sides of the organisation, but also lots of different faculties that all think they are the most important faculty of the university. Getting all these different teams on the same page can be a daunting task and sometimes leads to extensive rework on your project. Essential in preventing these issues is understanding what the goals of the various stakeholders are, and realising that sometimes it just isn't possible to please everybody and still have a great product, so you have to make compromises now and then. There are, however, some factors that can make your life a little easier, the most important being a good product owner and a competent analyst to really get a feel for what is essential in your project.

Another crucial part of the process is to make proper wireframes, mockups and have a clear content strategy so all parties involved can get a good feel of the expected functionalities. Make sure everybody is on the same page as early in the process as possible!

Also having proper personas, having people involved and taking a good survey can be of great help in preventing bickering and arguing.

Integrations

Organisations in Higher Education probably already have a multitude of systems and programs that need to be incorporated in some way in the portal. Examples of types of application you’d have to interface with are: HR applications, Scheduling programs, Learning Management systems, Publications repositories, mailing lists, external websites, Document Repositories, Course management software, and so on, the list seems endless.


Of course you could write an importer for the XML that comes from the HR application, a feed processor for the external websites' RSS feeds and a file fetcher and processor for the archaic publication repository.

But the universities we have seen do not have just three systems; they have many more.


A better way to handle all these streams of data is to create a standalone piece of software to act as a middleman, a so-called Enterprise Service Bus or ESB.

Garbage in, Garbage out!


The ESB is built to adapt multiple integrations and normalize the data, which is then distributed in a uniform way to our portal and any other clients. With an enterprise service bus, Drupal only has to take care of a single standardized and centralized integration. This heavily reduces complexity in our portal.

Some of the advantages of using an ESB are:

  • Decoupling the requesting party and the distributing party
  • Simplifying and standardising the interfaces between the two parties
  • Stimulating the re-use of data (central availability promotes re-use)
  • A centralised location for monitoring services
  • Reducing time-to-market
  • Sanitising and validating incoming data


While the ideal of an ESB is great, reality is unfortunately different, and in some cases you will have to manage external connections to the portal within Drupal. This simply means that there will probably be some point-to-point integrations in your portal.

To handle this not so ideal situation, we should implement some control management inside Drupal. To be more specific: standardize this within your Drupal application.

We need a referee


A Gatekeeper, or, as you wish, some kind of referee

This requires two essential things for each integration:

  • Some sort of gatekeeper functionality that prevents importing garbage.
  • A proper logging system that helps keep track of unwanted side effects of integrations with third-party software.
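
As an illustration, here is a minimal sketch of such a gatekeeper (assuming Drupal 8; the function, channel and field names are hypothetical). It refuses malformed records coming from a point-to-point integration and logs every rejection so side effects can be traced later:

function mymodule_import_record(array $record) {
  // Gatekeeper: only let records through that pass basic validation.
  if (empty($record['id']) || empty($record['title'])) {
    // Log what was refused and why, so unwanted side effects of the
    // third-party integration can be tracked afterwards.
    \Drupal::logger('mymodule_import')->warning('Rejected incoming record: @record', [
      '@record' => json_encode($record),
    ]);
    return FALSE;
  }
  // Hand the validated record over to the actual import routine here.
  return TRUE;
}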

Testing

(Image: a clock showing a quarter to nine)

Yes, it is a clock and it is a quarter to nine. True.

It actually represents the starting time of the students who were going to use the new portal on their first day at school after the holiday break. We had proudly launched the portal the week before. As teachers were already using it, we had a positive shared view of its use and especially the performance of the system. But the students' day schedule was now part of the portal, and, as we could perhaps have foreseen, EVERYONE would check their day schedule at the latest possible moment, so we ran into some big performance problems. This is a typical example of peak traffic. We hadn't taken peak times into account. As a development team, we found that we had failed to address the cost of quality on this matter. It would have been better to have proper stress testing in place. So we quickly fixed it by shoveling some extra power to our servers and immediately sitting down with the IT people of our client.


Although it is quite tempting, running away will eventually bring more problems. We sat down with the IT people and created the solution we wanted.


Different types of tests

  • Unit / Kernel / Browser & JavaScript tests
    Tests which check if your code is working as intended

  • Behavior tests (e.g. Behat)
    With behavioral tests you run scenarios / user stories

  • Visual Regression tests (e.g. BackstopJS)
    Visual regression tests check visually if anything changed on the page

  • Performance tests (e.g. JMeter)
    Test the performance of the application

Performance testing = Being prepared

Steps

Some general steps to running tests on your application.

  • Analyse existing data
    • Google Analytics / Logs
    • What are the top pages
    • What pages give errors
  • Prepare test scenarios
    • Use the results of the analysis
  • Configure tooling
    • Pick your tool (JMeter?)
  • Run the test
  • Analyse results
    • Profiling & monitoring

APDEX


APDEX is a standard for measuring user satisfaction based on page load time. Basically it works like this: you set a baseline that's acceptable and one that's frustrating for your application (which for an LMS might be a different baseline than for a webshop). Then, when you run your test, firing off a gazillion requests at your application, you get a set of results mapped to your baselines following a pretty simple formula:

APDEX = (Satisfying samples + Tolerating samples / 2) / Total samples

Unfortunately…

APDEX is not the holy grail

Nowadays there are a lot of single-page / JavaScript apps, and you have BigPipe, which skews results. Also, the resulting APDEX score is an average, so shifting the numbers might give you the same score while the number of frustrated results is actually higher. So you should always use monitoring, alerts and, if available, analytics to be sure that during expected peak times the system is working as expected. A nice thing to mention here is the current trend of containerisation of environments, making use of systems like Docker, Kubernetes and OpenShift. A hugely effective feature is autoscaling of the environment without facing downtime. On that first day, when facing performance problems, it can take away the harshness of coping with organisational agitation and disgrace. Moreover, it gives you time to fix things the right way.

Technical Choices / architecture


So we were talking about the ESB. What would happen if we considered Drupal as actually being one of the distributing systems, a client to the ESB? We would simply consider Drupal as a content management system, out there to serve content to whatever external system we want. This is typically the case when operating in an omnichannel communication strategy.


A user experience is partly determined by the online ‘obstacles’ that a user encounters. Removing frictions within the customer journeys of users makes the experience positive. We also believe that omnichannel communication is becoming increasingly important in the online world. In concrete terms, this means that the publication and distribution of content and information via a website is no longer sufficient. Channels such as (native and mobile) apps and other websites are becoming more and more part of the communication strategy. It is crucial that a CMS has the right interfaces and tools to make this possible. The CMS as a publishing machine, which provides direct communication to all channels and chain partners in combination with a front-end interface for the distribution of function, form and content, will form the basis.

Go away, obstacle!

The news here is: Drupal is not necessarily the portal itself.

In fact, we are aiming to serve a unified experience for our users across all channels.


A definition:

“The digital workspace is a new concept that enables tech-savvy employees to access the systems and tools they need from any device—smartphone, tablet, laptop, or desktop—regardless of location.”

Forbes

And that, in our opinion, is a rather modest definition, because the digital workspace is device-independent as well as adaptive. One could imagine, for instance, that when you work together in a group on a school project, all these subsystems "know" you and also "know" you are part of that group. When asking a tutor a question that applies to the group, you would expect the whole group to be included in the correspondence, so that when the teacher answers the question, every member of the group receives the feedback.

Future role of Drupal

So, having written down all these insights, we are left with a question. Just what is the role of Drupal when a portal really is a set of collaborating systems?

I am happy to hear your thoughts on this matter below.

Jun 17 2018

So, we are building this great community portal for all the world to see. We thought about all aspects that would make this project a huge success. We are heavily in control. And finally the day has come: our portal hits the worldwide web. And so it seems, the website is well received! A little too well received. Soon traffic rates hit the ceiling. And so does our carefully configured web server. End of this little story. We’ve become the victim of our own success.

Performance testing is all about being prepared

OK, so perhaps this tale is a little bit over dramatized. Still, in my daily work I see this kind of misery happening a lot. To be honest… we tend to step into this pitfall as well from time to time. And we shouldn’t have to.

Testing the performance of your site means that you can actually prepare for large amounts of traffic on your site. You can be predictable.

If we really wanted this site to be a success, we might have had to prepare better for that success.

This is where performance testing comes in. We simply want to know how our website is performing once it hits higher traffic rates.

What are we testing?

But what do we actually test? What about busy periods? Does the caching do its job? Or have we configured things incorrectly? And when we really tighten the screw, what will happen? Will it bring down our site? What about peak times, just after our project went viral? And are we fully utilizing the server capacity?

We use a load test to closely monitor the use of the site. This way we can make statements about the performance.

But… it comes down to the same problems anyway?

Without performance testing, we can only look at common causes that typically give rise to performance problems. Wrongly deployed caching is an obvious one. And perhaps you simply load too much data on a page, stressing the database. If you are an experienced Drupal developer, there is a fair chance that you can guess right and sleep peacefully again. But in many cases, the real problem is something you did not think of.

Performance tests

There are many different types of performance tests. We will discuss some.

Load testing

With a load test we mainly look at the expected load on the site. So: if we build an intranet, we want to know what happens when, for example, 60% of the company simultaneously uses the intranet intensively.

Stress testing

With stress testing we look more at deviant situations: an exceptional load. In the example of our fansite: we appear in the news and therefore suddenly have a multitude of visitors. Our example is of course exaggerated. It is difficult to anticipate an unexpected and absurdly high number of visitors. But what we do want to know is: where is the breaking point?

Endurance testing

With an endurance test, we test the site measured over a longer period, to investigate whether the site will perform worse over time, for example due to the increase of logged-in users or content.

Peak testing

With peak testing we mainly look at intensity peaks. In the example of the intranet you can think of a peak in the morning when everyone arrives at the office.

Load testing: 5 steps

Because we want to know if our site can handle the desired number of visitors, and we also want to know how far we can go, we focus on load and stress testing.

The steps are the following.

1. Analysis of any existing data

Simply retrieve existing statistics, for example visitor numbers. This helps to create realistic scenarios in our test preparation. In the analysis we can fall back on tools such as Google Analytics. But … if the site is new, we have to make statements about traffic in a different way. Often the customer can share some insights about this. And you might be building a new version of an existing site, in which case you can still say a lot about expected visits.

2. Think of some scenarios

Make sure you will test with realistic scenarios. Prepare the test well! One scenario is not enough. We want to simulate a large number of visitors who simultaneously click through several pages, download documents, log in, send forms, etc. Therefore, make sure you have enough CPU on your test machine to perform the tests. And determine limit values for the test, to assign a score to the performance.

APDEX

The APDEX may be of help here. It is an industry standard for making statements about satisfaction with performance based on tests. With it, we can determine whether the performance meets expectations. Response times alone do not say much; the APDEX interprets them as a score based on satisfaction with the performance. A lot of tools, such as New Relic and JMeter, therefore use the APDEX.

The APDEX calculates a score based on measurements: Satisfying, Tolerating and Frustrating. We will apply a time aspect here, for example: Satisfying: 0-1.5 seconds; Tolerating: 1.5-6 seconds; Frustrating: 6 seconds or more.

We can plot the JMeter stats onto the APDEX bands we discussed above:

  • Satisfying results count for 100% (this is great)
  • Tolerating results count for 50% (this is ok)
  • Frustrating results do not yield a score (this is not good)

We agreed with our client on an APDEX threshold of 0.85. So let's say we fired 10,000 simulated visits: 4,200 are satisfying, 4,800 are tolerating and 1,000 are frustrating.

APDEX = (Satisfied + Tolerated / 2) / Total samples

(4,200 + (4,800 / 2)) / 10,000 = 0.66

The threshold was not met.
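
As a small illustration, the calculation can be expressed in a few lines of plain PHP (a sketch, no Drupal APIs involved):

function apdex_score($satisfied, $tolerating, $total_samples) {
  // Satisfied samples count fully, tolerating samples count half,
  // frustrating samples do not count at all.
  return ($satisfied + $tolerating / 2) / $total_samples;
}

// The example above: 4,200 satisfying and 4,800 tolerating out of 10,000 samples.
$score = apdex_score(4200, 4800, 10000); // 0.66, below the agreed threshold of 0.85.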

The APDEX can help you meet agreements set in the project and is a helpful methodology. It has become a standard, but that does not mean there are no critical voices around.

3. Pick the right tool

Determine which tooling we can use to carry out the test. And, well ok, there is a lot to be found out there. Let’s try to highlight some.

  • AB - Apache Bench - What can we say, this is very easy. It proves some low level local testing is not that hard.
  • Gatling - A great alternative to JMeter; the core tool is open source, with a paid enterprise offering. Check out this extensive comparison.
  • Locust: Define user behavior with Python code.
  • Siege - Similar to AB, but more extensive, useful to perform a quick load test.
  • Blazemeter - Simple characterization: a hosted version of JMeter. Less simple characterization: testing on steroids. Offering A LOT when it comes to testing. Test from all over the world, very large numbers, hosted Selenium, Gatling, Locust and more…
  • JMeter - Open source, many possibilities, the old, proven solution. Still goin' strong.

We should point out that Blazemeter stands out as a modern, versatile cloud test platform (not only load testing!).

Some pros:

  • Easy / quick onboarding
  • Stable Drupal (D7 + D8) module available
  • Important advantages with respect to JMeter: geography is not limited to your own local environment and processing is not limited to your own machine

Yes, a con: It is a commercial solution, it will cost you money…

4. Carry out the test

In part 2 of this series we will actually carry out a test with Siege and JMeter.

5. Analyze

After running the test we will analyze the results. Do we learn more about problems in our application? In part 3 of this series, we will look at profiling and analyzing.

Load testing

Coming up in Load testing in Drupal (part 2):

  • Performing load tests on your Drupal application with Siege and JMeter.

Coming up in Load testing in Drupal (part 3):

  • Profiling and monitoring your Drupal application with XHProf / XHGui and New Relic.
May 03 2016

With the release of Drupal 8.1 on April 20th the BigPipe module was added to core to increase the speed of Drupal for anonymous and logged in visitors.

What does BigPipe do in general?

BigPipe is a technique to render a webpage in phases. It assembles the complete page from components, ordered by the speed of the components themselves. This technique gives visitors the feeling that the website is faster than it may actually be, thus giving a boost in user experience.

This technique, originally developed by Facebook, applies the idea of multithreading, just like processors do. It disperses multiple calls to a single backend to make full use of the web server, thus rendering a webpage faster than conventional rendering does.

What does BigPipe do in Drupal?

For "normal" websites with anonymous visitors, BigPipe doesn't do much. If you use a caching engine like Varnish, or even Drupal's cache itself, pages are generally rendered fast enough. When using dynamic content like lists of related, personalized or localized content, BigPipe can kick in and really make a difference. When the website is opened, BigPipe first returns the page skeleton that can be cached: elements like the menus, footer, header and often even the content. Then the rendering of the dynamic content starts. This means that the visitor of your website is already reading the most important content and sees the dynamic related list later on, after it has been loaded asynchronously.

For websites with logged in users BigPipe can be a real boost in performance. Standard Drupal cache doesn’t work out of the box for logged in users. For Drupal 7 you had the Authenticated User Page Caching (Authcache) module (which had some disadvantages), but for Drupal 8 there was nothing. Until Drupal 8.1!

With BigPipe, Drupal is now able to cache certain parts of the page (the skeleton I mentioned above) and to stream the other, dynamic parts, which are cacheable by themselves.
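
To give an idea of how this works in code, here is a minimal sketch of a render array for such a dynamic part (assuming Drupal 8.1+; the service name and method are hypothetical). Because of its personalized, uncacheable nature, core turns it into a placeholder that BigPipe streams in after the cached skeleton has been sent:

$build['recommendations'] = [
  // The lazy builder callback ('service_id:method') produces the
  // personalized markup and its own cacheability metadata.
  '#lazy_builder' => ['mymodule.recommendations:build', []],
  // Force a placeholder so the surrounding page can be cached and sent
  // first, while this part is rendered and streamed later by BigPipe.
  '#create_placeholder' => TRUE,
];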

Click on the image below to watch the video "BigPipe in Drupal 8" on YouTube, where you can see for yourself how much effect BigPipe can have.

Video is made by Dries Buytaert

BigPipe in Drupal

As I said, starting from Drupal 8.1, BigPipe is included as a core module, and everybody can use it. Whether you are on a budget hosting platform or hosting your own website on state-of-the-art servers, it is basically just one (1) click away. Just enable the module and get all the benefits BigPipe has to offer!

Jan 06 2016

Introduction

What if we had a Drupal theme that does exactly what we want? No more divs in divs in divs. No more weird CSS. Out-of-the-box responsiveness, both adaptive and fluid. Debuggers, good JavaScript additions, Gulp. Maybe even a little style guide generator.

That’s what we thought when we started working on our own custom Drupal starter theme in July 2013. We called it “FortyTwo”.

A brief history

Before we started working on FortyTwo, we did our theming, like many others, based on the Zen or Omega base themes. It worked, it got us where we wanted, but it had its downsides. Besides Zen and Omega we also looked at other major base themes. They all had their pros, but also a lot of cons.

During a talk I had with one of the front-enders at Finalist about this, we started to think: why not create our own theme? Let's combine the pros from all those themes into one, and maybe add some other nifty features.

The start

The first site we ever developed with FortyTwo, and the site that actually drove the development of a big part of FortyTwo, was the Finalist website. Back then it was a good starting point for us to develop website frontends the way we wanted to. Combined with good backend developers we could start building good frontends.

The Finalist website as built with FortyTwo

Because we developed FortyTwo as a base theme for Drupal we could use it to create all kinds of different websites, no matter the shape or size.

Features

Let me enumerate some of the best features FortyTwo has right now:

  • SASS; FortyTwo has been made entirely using the SASS CSS preprocessor.
  • Drush integration; Easily create your sub theme using our custom drush command.
  • Out-of-the-box responsiveness; With no effort, themes developed using FortyTwo are responsive, both adaptive and fluid. Extending the responsiveness is easy by just copying some files. Want to change the breakpoints? They are defined in the SASS settings file.
  • Gulp; We have added gulp support for some extra features.
    • SASS watching and compiling, with auto-prefixing for IE8 and IE9.
    • JSLint integration; while watching the theme folder, the console will output errors or warnings from JSLint if there are any.
    • Uglifying of JavaScript files.
    • Clearing of the Drupal cache on changes to PHP, include or info files in the theme.
    • And last but not least, automatic reloading of the browser on file changes.
  • Icomoon integration; For easy development we have created a starter set of Icomoon icons and added them to the theme. You can easily extend those icons and use them through our custom Icomoon SASS mixin.
  • Layout styles; It is possible to choose different styles of layout; whether you want your content column on the left, right or in the middle, it is just a setting.
  • Debuggers; We have added some handy debuggers to FortyTwo:
    • It is possible to show the grid in the background of the page. The grid uses the settings from the SASS settings file.
    • It is possible to show a responsive identifier. This makes it easier to create responsive websites: a bar at the bottom of the website shows you which breakpoint you are currently viewing.

The above image shows the mentioned debuggers in action.

There is more, but you have to see for yourselves what FortyTwo can offer you!

Drupal 8

Since August 2015 we have also been developing the Drupal 8 version of the theme. Since Drupal 8 is becoming the new standard, we wanted to be ready for it when it came out. By the date Drupal 8 was officially released, the FortyTwo port was already functional, and it is now in active development.

I will write a separate blog about the porting of FortyTwo from Drupal 7 to Drupal 8 on a later date.

How about the future?

We are still actively developing the theme. Since the release of FortyTwo 8.x-1.0, only bug fixes and really nice features are added to the Drupal 7 version of FortyTwo. Other new features are only added for Drupal 8.

On the roadmap are:

  • KSS; Knyle Style Sheets integration. Using KSS you can automatically create style guides from CSS and SASS.
  • FortyTwo admin theme; Admin themes are always difficult to pick. Because we work with a lot of clients, different kinds of content and content managers, we believe that we can create an admin theme (of course based on FortyTwo) that suits everybody's needs.
  • FortyTwo example theme; Since the start of development we have always wanted to create an example theme that everybody can use, out of the box.

Get involved!

Yes, we would love to see you around on the project page and in the issue queue! We heavily depend on your input, testing and feedback, but also on your commitment as a community developer, by helping us implement new features and fixes.

Dec 17 2015

Introduction

Drupal provides a lot of functionality for building community-driven websites. For the award-winning platform ModeMuze, we developed a platform to expose the fashion collections of different Dutch museums. Besides exposing the collection items, one of the goals is to engage the Dutch fashion community and enrich the metadata of the collection.

We want to make it easy for people to join the community, so a captcha was not really an option. We enabled anonymous comments, with the option of creating an account through the Comment registration module. The standard registration form was enabled as well. Once registered, users can create theme-related expositions of collection items and help out with the tagging of these items.

It would be great if all users were enthusiastic people who create beautiful content, but unfortunately this is not always the case. Every community website is going to be targeted by malicious users who create spam and/or try to hack their way into the site one way or another. This is why Drupal has a lot of options to secure websites and fight spam.

For this particular project, we found that the setup described below worked really well in stopping the creation of spam users, content and comments.

Honeypot

The Honeypot module is the most basic form of protection; we basically add it to all sites we build. This module adds the honeypot method to forms (it is possible to configure which forms you want to protect), plus a timestamp. In a nutshell, when a user submits a form too fast, or fills in a hidden field that shouldn't contain a value, the module stops the form submission from completing.
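
If you also want to protect a custom form programmatically, the module exposes a helper for that. A minimal sketch in Drupal 7 style (the module and form ID are hypothetical):

/**
 * Implements hook_form_alter().
 */
function mymodule_form_alter(&$form, &$form_state, $form_id) {
  if ($form_id == 'mymodule_signup_form') {
    // Add Honeypot's hidden field and time restriction to this form.
    honeypot_add_form_protection($form, $form_state, array('honeypot', 'time_restriction'));
  }
}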

The Comment verification module is used to add an extra check for comments by anonymous users. When an anonymous user adds a comment, they need to verify it through an e-mail link.

Spambot

The Spambot module protects the user registration form from spammers and spambots by verifying registration attempts against the Stop Forum Spam online database. It also adds some useful features to help deal with spam accounts. The module allows up to 20,000 checks per day. In the end, this module helped the most. It is also possible to delay the request for malicious users, which helped bring the number of spam registration attempts down from 10 per minute to about 3 per minute.

Userone

The Userone module's main purpose is to protect the user with uid 1, an important and special user in a Drupal installation. The module also has an important extra feature that helped a lot in stopping hackers: it can automatically block IP addresses after a certain number of failed logins.

Cloudflare

The Cloudflare module provides better Drupal integration with the online Cloudflare service. Cloudflare is a free reverse proxy, firewall and global content delivery network. It has a ton of features. Besides improving performance by caching your pages for anonymous users, it can also provide SSL options (even in the free version). Since your domain points to Cloudflare's reverse proxy, hackers will not find out the IP of your server, which makes it harder to attack your site. It also provides options to serve a captcha to users when it detects malicious behaviour. The performance is the most important feature, but all the bonus options are really nice to have.

Mollom

Last but not least, it is good to mention the Mollom module, which provides integration with the external Mollom service. This service can check user input for possible spam and is very effective in stopping malicious users. For this project we chose not to use Mollom: the client did not feel comfortable with an external service checking their content. That is something to seriously consider when using external services like Mollom.

Jul 20 2015

What ?!

Last year we built a tool called Drupal Flight Control. You might think "Flight?? Control?? Like … air balloons and zeppelins??" Nope, it gets you in control of your Drupal projects, their servers and your whole DTAP environment.

Let’s have a quick walkthrough…

Some background

Lots of companies - not just the smaller ones! - still tend to keep their deployment and release process a manually performed task, which might just ruin your Friday afternoon, when you should be drinking your beer.

At Finalist we had this problem in the past. Most of us are developers; we want to develop, not deploy. In my experience, a developer is not a sysadmin - like a guitarist will never be a bass player, even if they think they are - nor on the same level of OS and hosting knowledge. This probably means you do your deploys in one of these ways:

  • Ask your sysadmin to deploy your stuff, wait for it, and tell your project manager or customer to have a look. This won't allow you to deploy any time you want.
  • Deploy yourself. This will frustrate the sysadmin, because you are a developer … and developers forget about file modes, ACLs, nice vhost redirects, good backups, etc.

To solve this problem, Flightcontrol was created. It deploys for you in a few clicks, straight from the dashboard. Want to use DTAP? Just make a snapshot and restore it to your test environment.

Have a look below to read about its features and potential.

What it really is

Flightcontrol is just a Drupal 7 distribution (Drupal packed with a couple of modules and some configuration), providing you with a dashboard that is ready to use just after you install it.

You don’t need to do lots of creepy tutorial-installation-yum-what-stuff, and since you’re probably a Drupal developer yourself, this should be installed in a breeze.

This tiny beast can be installed on any Ubuntu, Debian, CentOS or other GNU/Linux machine, and only needs some basic PHP and OS packages. After that, provide Flightcontrol with some Bitbucket credentials (other git systems are supported as well), create a new client, project and environment, and you’ll be ready to do your funky stuff.

Most important features

These are by far the most important features, working out of the box:

  • Easy installation
  • Drupal nodes, Drupal views, Drupal anything
  • Visual dashboard containing your customers and projects
  • Real-time list of commits or tags when you make your deploy
  • Backup of database and/or files (multi-sites supported)
  • Auto-rollback in case of deploy failure
  • Snapshot restoring on any environment in the same project (DTAP? PATD? PTAD?)
  • Support for Bitbucket, Github, Stash, GitLab and more to come

Get involved!

Yes, we would love to see you around on the project page and in the issue queue! We heavily depend on your input, testing and feedback, but also on your commitment as a community developer, by helping us implement new features and fixes.

Jun 28 2015

Introduction

At Finalist we use the Search API and Search API Solr modules to provide the search functionality for most of our websites. With a little bit of configuration you can get a great search experience that works perfectly for a basic website. However, sometimes customers want more than a standard implementation. In this post I'll explain more about some of the improvements we make and how they work. The following topics will be covered in a series of blogs:

  • Stemming
  • Partial search
  • Better search excerpts
  • Custom boosting

Custom Boosting

The Search API Solr module gives the user the ability to add a boost to different fields to help Apache Solr determine the relevance of each search result. The relevance can be used to order your search results and help the user find the most important ones. One important thing is missing from the boosting options in the Search API Solr module: it does not allow users to add a boost for different values within fields. This is what hook_search_api_solr_query_alter() can be used for.

Solr has the option to add values to the bq parameter of the search request. This parameter can be used to boost specific fields and/or values. You can read more about this parameter on the Apache Solr wiki.

Example hook_search_api_solr_query_alter()

Below you find an example of hook_search_api_solr_query_alter() to easily implement this yourself. This example allows you to add an extra boost to the search results for specific node types. A similar approach can be used to boost results based on taxonomy terms etc.

function mymodule_search_api_solr_query_alter(array &$call_args, SearchApiQueryInterface $query) {
  // Boost news and blog nodes in Solr results.
  $call_args['params']['bq'][] = '(ss_type:"news"^4 OR ss_type:"blog"^2)';
}

Boosting exact matches

As explained in the previous chapter, Solr allows boosting for custom fields or conditions. While you might want to find more results based on stemming, you probably want the results matching the exact search phrase to appear higher in the search results.

FieldType in schema.xml

The basic fieldType text in the schema.xml file has some filters to support stemming etc. For exact search boosting this could be a problem. That's why it is probably a good idea to make a separate fieldType with better support for exact matches.

<!-- add textExact field to boost exact matched -->
<fieldType name="textExact" class="solr.TextField" positionIncrementGap="100">
   <analyzer type="index">
       <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
       <tokenizer class="solr.WhitespaceTokenizerFactory"/>
       <filter class="solr.LengthFilterFactory" min="2" max="100" />
       <filter class="solr.LowerCaseFilterFactory"/>
       <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
   </analyzer>
   <analyzer type="query">
       <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
       <tokenizer class="solr.WhitespaceTokenizerFactory"/>
       <filter class="solr.LengthFilterFactory" min="2" max="100" />
       <filter class="solr.LowerCaseFilterFactory"/>
       <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
   </analyzer>
   <analyzer type="multiterm">
       <charFilter class="solr.MappingCharFilterFactory" mapping="mapping-ISOLatin1Accent.txt"/>
       <tokenizer class="solr.WhitespaceTokenizerFactory"/>
       <filter class="solr.LengthFilterFactory" min="2" max="100" />
       <filter class="solr.LowerCaseFilterFactory"/>
       <filter class="solr.RemoveDuplicatesTokenFilterFactory"/>
   </analyzer>
</fieldType>

After creating this fieldType, we need to apply this to all fields where we want exact matching. For this example we will add the textExact fieldType to the title field. We do this by making a copy of the original.

<field name="title" type="text" stored="true" indexed="true"/>
<!-- add titleExact field to boost exact matched -->
<field name="titleExact" type="textExact" indexed="true" stored="true" />
<copyField source="title" dest="titleExact"/>

Example hook_search_api_solr_query_alter()

After changing the schema.xml file, the search results should not have changed yet. This is because the Solr query doesn't use our new field yet. To boost the exact search field, we can implement hook_search_api_solr_query_alter() and make sure our new field is used.

We need to add our new titleExact field to the query fields through the qf param. We can add a boost for matches in this field through the pf param. This boosts the result for every exact word match in the title, but doesn't boost entire phrase matches yet. To give an extra boost to exact phrases, we can fetch the search keywords from our query and use the bq param to give a big boost to results whose titleExact field matches the entire keyword phrase.

function mymodule_search_api_solr_query_alter(array &$call_args, SearchApiQueryInterface $query) {
  // Boost exact title matches.
  $keys = $query->getKeys();
  unset($keys['#conjunction']);
  $call_args['params']['qf'][] = 'titleExact^5';
  $call_args['params']['pf'][] = 'titleExact^5';
  $call_args['params']['bq'][] = 'titleExact:"' . implode(' ', $keys) . '"^10';
}
Jun 21 2015

Introduction

At Finalist we use the Search API and Search API Solr modules to provide the search functionality for most of our websites. With a little bit of configuration you can get a great search experience that works perfectly for a basic website. However, sometimes customers want more than a standard implementation. In this post I'll explain more about some of the improvements we make and how they work. The following topics will be covered in a series of blogs:

  • Stemming
  • Partial search
  • Better search excerpts
  • Custom boosting

Better search excerpts

The Search API Solr module allows you to configure your servers to provide excerpts for each search result. On the server configuration page you need to check 'Return an excerpt for all results' in the advanced section to make it work (in the latest version of the module you also need to enable the 'Highlighting' processor on the 'Filters' tab of your search index!). After this you can configure your search views to return an excerpt for each search result.

We’ve had a lot of questions about these excerpts. By default there are several issues which can be easily solved.

Words of 3 characters (or less) are ignored when returning excerpts

The spell field in the schema.xml file is used to store a long string of all the content of a specific document. The spell field is also used by the Search API Solr module to get excerpts for each search result. The following section in the schema.xml file contains the definition of the textSpell field type used by the spell field:

<fieldType name="textSpell" class="solr.TextField" positionIncrementGap="100">
   <analyzer>
       <tokenizer class="solr.StandardTokenizerFactory" />
       <filter class="solr.StopFilterFactory" ignoreCase="true" words="stopwords.txt"/>
       <filter class="solr.LengthFilterFactory" min="3" max="25" />
       <filter class="solr.LowerCaseFilterFactory" />
       <filter class="solr.RemoveDuplicatesTokenFilterFactory" />
   </analyzer>
</fieldType>

The LengthFilterFactory limits the length of the words that can end up in the excerpts: tokens shorter than min or longer than max characters are removed. With a min of 3, short keywords such as ‘API’ in a search for ‘Apache Solr API’ may not be highlighted in all excerpts. By changing the min and max values you can make sure that all keywords in a search are properly highlighted.

When stemming is used, stemmed words are not highlighted in the excerpts

Stemming, as described in a previous blog post, can return extra search results, but by default when a user searches for the word ‘cars’, the word ‘car’ will not be highlighted in the search result excerpts. To fix this, we need stemming (like the SnowballPorterFilterFactory provides) in our excerpts field as well. By adding the line below to the textSpell fieldType section, stemmed words will also be highlighted:

<filter class="solr.PorterStemFilterFactory"/>

The excerpts are not very clear; random pieces of text are shown

Apache Solr provides an extended list of highlighting parameters to help create better excerpts. Drupal has some pretty good defaults for these, but there are several parameters that we change to make the excerpts a little better for the user. These parameters can be changed in hook_search_api_solr_query_alter(), which allows you to change all the URL parameters right before the query is sent to Solr. We like to change the following parameters.

hl.snippets

The snippets parameter defines how many different highlight snippets Apache Solr should return based on the keywords in your search. All snippets are concatenated by Drupal, separated by a couple of dots. Since the result looks like a full sentence, users get confused when there are too many snippets.

hl.fragsize

The fragsize parameter defines how many characters long each highlight snippet should be. Making these too short gives very strange excerpts, but making them too long makes the search page harder to scan for relevant results, especially when you also return multiple highlight snippets through the hl.snippets parameter.

hl.mergeContiguous

The mergeContiguous parameter determines whether Solr should merge contiguous highlight snippets into a single excerpt. We’ve noticed the excerpts are harder to read when this parameter is set to true.

Example hook_search_api_solr_query_alter()

Below you’ll find an example implementation of hook_search_api_solr_query_alter() so you can easily apply this yourself.

function mymodule_search_api_solr_query_alter(array &$call_args, SearchApiQueryInterface $query) {
  // Change hl settings for better excerpts.
  $call_args['params']['hl.snippets'] = '2';
  $call_args['params']['hl.fragsize'] = '100';
  $call_args['params']['hl.mergeContiguous'] = 'false';
}
Jun 14 2015

Introduction

At Finalist we use the Search API and the Search API Solr modules to provide the search functionality for most of our websites. With a little bit of configuration you can get a great search experience that works perfectly for a basic website. However, sometimes customers want more than a standard implementation. In this post I’ll explain more about some of the improvements we make and how these work. The following topics will be covered in a series of blogs:

  • Stemming
  • Partial search
  • Better search excerpts
  • Custom boosting

Partial search is supported by Apache Solr, but it is not activated by default. Partial search can be added through the so-called NGramFilterFactory. It could look something like this:

<filter class="solr.NGramFilterFactory" mingramsize="3" maxgramsize="25"/>

The NGramFilterFactory uses the minGramSize/maxGramSize attributes to define how big each ngram can be. Implementing partial search can have a big impact on performance, so please test this properly with different ngram sizes. The quote below (read more in the original blog) explains how the performance impact works:

“There is a high price to be paid for n-gramming. Recall that in the earlier example, Tonight was split into 15 substring terms, whereas typical analysis would probably leave only one. This translates to greater index sizes, and thus a longer time to index. Note the ten-fold increase in indexing time for the artist name, and a five-fold increase in disk space. Remember that this is just one field!”

The best place to add this filter in the schema.xml file for Apache Solr is inside the following fieldType:

<fieldType name="text" class="solr.TextField" positionIncrementGap="100">

Basically, add it in the same places where the SnowballPorterFilterFactory is already added.

Jun 07 2015

Introduction

At Finalist we use the Search API and the Search API Solr modules to provide the search functionality for most of our websites. With a little bit of configuration you can get a great search experience that works perfectly for a basic website. However, sometimes customers want more than a standard implementation. In this post I’ll explain more about some of the improvements we make and how these work. The following topics will be covered in a series of blogs:

  • Stemming
  • Partial search
  • Better search excerpts
  • Custom boosting

Stemming

As Wikipedia puts it: “Stemming is the term used in linguistic morphology and information retrieval to describe the process for reducing inflected (or sometimes derived) words to their word stem, base or root form—generally a written word form.”

Basically, stemming helps your users find what they are looking for. Searching for ‘cars’ will also return results for the word ‘car’, searching for ‘working’ will also return results for ‘work’ or ‘worked’, and so on.

Apache Solr uses the so-called SnowballPorterFilterFactory to add stemming. In your schema.xml file for Solr you will probably find something like this:

<filter class="solr.SnowballPorterFilterFactory" language="English" protected="protwords.txt"/>

As you can see, the stemming algorithm needs a language to make sure the stemming is accurate. For example, the root form of English words is different from the root form of Dutch words. Most languages are supported. You can find a list of languages in the documentation.

Dutch stemming

For the Dutch language (which a lot of our customers use), there are two supported stemmers: Dutch and Kp (Kraaij-Pohlmann). We’ve found that the Kp stemmer provides much better results than the Dutch stemmer.

May 27 2015

Introduction

In this article we are going to develop a full module. This is a step by step example. This module will give users the ability to create single or multi-step forms. I will call this module: WEBFORM.

Built and tested on Drupal: 8.0.0-dev, PHP : 5.4.24 and Apache.

This is not intended to replace or upgrade the Drupal 7 Webform module. Instead, it showcases how the concepts of the Drupal 7 Webform module can be achieved using plugins/entities, or how these forms can be built using fields that are already provided in Drupal 8 core. I won’t rebuild the whole Drupal 7 Webform concept; I will only develop the parts of that module that I find interesting.

Here are some of the things that will be covered.

  • The module’s folder structure.
  • The basic files needed for a module to be installed
  • Installation of custom tables and implementing the {MODULE_NAME}.schema.yml
  • Defining entities and bundles.
  • Controlling access to the created entities and routes.
  • Creating routes and controllers
  • Defining static and dynamic permissions.
  • Defining static and dynamic routes
  • Using TAGS and the necessary tag options when exposing a class as a service.
  • Exposing defined classes of interest as services.
  • Creating forms.
  • Creating custom fields.
  • Adding a custom administration link to a Drupal administration UI or any other existing page.
  • Adding action links on a page.
  • Adding module-specific Twig templates.
  • Exposing the entity data to views.

Let’s start the game…

The module structure

I will put the module in drupal8_root_folder/modules. Inside the modules folder, create a custom sub-folder where we shall put all our custom-made modules, i.e. modules/custom. Inside the custom folder, go on to create the sub-folders below.

  • webform/config/schema
  • webform/src/Access
  • webform/src/Controller
  • webform/src/Entity
  • webform/src/Plugin
  • webform/src/Plugin/Field
  • webform/src/Plugin/Field/FieldFormatter
  • webform/src/Plugin/Field/FieldType
  • webform/src/Plugin/Field/FieldWidget
  • webform/templates

I will explain what to put in the folders as we develop.

NB: Take note of the case sensitivity of the folder names. Your classes may not load if that is not taken into account.

Basic module files and installation

To enable and install our module we only need webform.info.yml. So go on to create that file and inside it add the code below.

name: Webform
description: 'Create custom web forms.'
package: Custom
type: module
version: 0.1
core: 8.x

At this point you could go to the modules page (admin/modules) and enable the module, but because we want to add custom tables, you may want to wait until we have described the tables.

Note: no more info file

You are no longer required to add a *.module file to enable your module, and the old *.info file is gone; it is no longer applicable to Drupal 8 modules. The info file is now a *.info.yml file.

Create a webform.install file in the webform folder. Inside that file, implement hook_schema() and describe two tables in the schema.

Sample code:

$schema = array();
$schema['webform'] = array(/* webform fields  */);
$schema['webform_field_data'] = array(/* Webform field storage  */);

Note that I have two tables. The first table, “webform”, will store general entity data like the uuid, bundle type, etc. This will be used as the “base table” of our webform entity. The second table, “webform_field_data”, will store other field data like status, created, etc. This will be used as the “data table” of the webform entity.

Note: use of one table

You may as well store all the entity data in one table. I am choosing this approach just to try out new ways of doing things and to have more control over my table data; I don’t like putting so many fields in one table.
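As a rough sketch of what the two table definitions in webform.install could look like, here is a minimal hook_schema() implementation. The column list is an assumption for illustration only, not the module’s full field list.

/**
 * Implements hook_schema().
 */
function webform_schema() {
  // Base table: general entity data (illustrative columns only).
  $schema['webform'] = array(
    'description' => 'The base table for webform entities.',
    'fields' => array(
      'wid' => array(
        'description' => 'Primary key: the webform ID.',
        'type' => 'serial',
        'not null' => TRUE,
      ),
      'uuid' => array(
        'description' => 'Unique identifier for this webform.',
        'type' => 'varchar',
        'length' => 128,
        'not null' => TRUE,
        'default' => '',
      ),
      'type' => array(
        'description' => 'The webform type (bundle).',
        'type' => 'varchar',
        'length' => 32,
        'not null' => TRUE,
        'default' => '',
      ),
    ),
    'primary key' => array('wid'),
  );

  // Data table: other field data such as status, created, uid.
  $schema['webform_field_data'] = array(
    'description' => 'The data table for webform entities.',
    'fields' => array(
      'wid' => array(
        'description' => 'The webform ID this data belongs to.',
        'type' => 'int',
        'unsigned' => TRUE,
        'not null' => TRUE,
        'default' => 0,
      ),
      'uid' => array(
        'description' => 'The user that submitted the webform.',
        'type' => 'int',
        'unsigned' => TRUE,
        'not null' => TRUE,
        'default' => 0,
      ),
      'status' => array(
        'type' => 'int',
        'not null' => TRUE,
        'default' => 1,
      ),
      'created' => array(
        'type' => 'int',
        'not null' => TRUE,
        'default' => 0,
      ),
    ),
    'primary key' => array('wid'),
  );

  return $schema;
}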

Now go on to enable the module. The fun has just begun…

Defining entities and bundles

We are going to create two types of entities. One is a ConfigEntityType and the other is a ContentEntityType. As the name suggests, the config entity type will store all the admin configuration and the content entity type will store data submitted by the users. In this case the config entity type will provide the bundles of the content entity type.

Let’s call our config entity type webform_type and the content entity type webform. Now let’s go on to create the necessary files.

Add the files below.

  • webform/src/Entity/WebformType.php
  • webform/src/Entity/Webform.php

Inside the WebformType.php declare the ConfigEntityType as below.

/**
 * Defines the Webform type configuration entity.
 *
 * @ConfigEntityType(
 *   id = "webform_type",
 *   label = @Translation("Webform type"),
 *   handlers = {
 *     "form" = {
 *       "add" = "Drupal\webform\WebformTypeForm",
 *       "edit" = "Drupal\webform\WebformTypeForm",
 *     },
 *     "list_builder" = "Drupal\webform\WebformTypeListBuilder",
 *   },
 *   admin_permission = "administer webform types",
 *   config_prefix = "type",
 *   bundle_of = "webform",
 *   entity_keys = {
 *     "id" = "type",
 *     "label" = "name"
 *   },
 *   links = {
 *     "edit-form" = "/admin/structure/webform-types/manage/{webform_type}",
 *     "delete-form" = "/admin/structure/webform-types/manage/{webform_type}/delete",
 *     "collection" = "/admin/structure/webform-types",
 *   }
 * )
 */

class WebformType extends ConfigEntityBundleBase {
...

Inside the class, add all the necessary logic specific to the entity. For example, in that class I added custom button label configurations to be used on the multi-step web forms.

protected $lable_previous = 'Previous';
protected $lable_next = 'Next';
protected $lable_cancel = 'Cancel';
protected $lable_submit = 'Save';

Description of the annotation properties:

  • handlers.form.add: builds the form for adding a web form type.
  • handlers.list_builder: builds a page that lists all the web form types in the system.
  • config_prefix: a string that will be used in the yml files and also in code when accessing this config type’s properties or settings, e.g. {MODULE_NAME}.{config_prefix}.{config_entity_id}, i.e. webform.type.contact.

That is if we have created or added a config web form type with the name “contact”.

bundle_of: This points to the content entity type for which this config entity type provides the bundles.

The other properties are self-explanatory, e.g. links.edit-form.

Define all your web form type entity fields in a {MODULE_NAME}.schema.yml file. Add the file webform/config/schema/webform.schema.yml and describe the fields under the key webform.type.*.

See /webform.schema.yml for how these fields were defined.

Things to note in the file.

The parent key is {MODULE_NAME}.{config_prefix}.*, i.e. webform.type.*. Note that the type is config_entity, i.e. the type of entity.

Map the custom entity type fields, e.g.:

mapping:
  name:
  lable_next:
    type: text
    label: 'The label of the next button'

This is done for all form fields we add to the config entity form and whose values we want to store.

Inside the Webform.php declare the ContentEntityType.

/**
 * Defines the webform entity class.
 *
 * @ContentEntityType(
 *   id = "webform",
 *   label = @Translation("Webform"),
 *   bundle_label = @Translation("Webform type"),
 *   handlers = {
 *     "access" = "Drupal\webform\WebformAccessControlHandler",
 *     "storage" = "Drupal\webform\WebformStorage",
 *     "form" = {
 *       "default" = "Drupal\webform\WebformForm",
 *     },
 *   },
 *   base_table = "webform",
 *   data_table = "webform_field_data",
 *   translatable = TRUE,
 *   entity_keys = {
 *     "id" = "wid",
 *     "bundle" = "type",
 *     "label" = "title",
 *     "langcode" = "langcode",
 *     "uuid" = "uuid"
 *   },
 *   bundle_entity_type = "webform_type",
 *   field_ui_base_route = "entity.webform_type.edit_form",
 *   permission_granularity = "bundle",
 * )
 */

class Webform extends ContentEntityBase { ...

Description of the annotation properties

  • handlers.access: add logic to deny or allow access to certain routes of the web form content entity.
  • handlers.storage: change the way some of our web form field types are saved, e.g. this is used to override how the web form grid field type data is saved in the database.
  • field_ui_base_route: this takes a named route that holds the path where we add, edit and delete (manage) the fields of the entity.
  • permission_granularity: this gives us the possibility to automatically generate permissions for all the added entity types generically.

Describing custom database table fields to Drupal core

Because we have added custom tables that are used by our content entity, it’s important to tell Drupal core what type of fields they contain. We do this by implementing the baseFieldDefinitions method. Here we are basically creating actual Drupal core field definitions from our table field names. For example, in the table “webform_field_data” we added a uid field that references the account/user entity of the user who submitted the form. So for this field we go on to tell Drupal that the uid field is a reference to another entity, hence the code below:

$fields['uid'] = BaseFieldDefinition::create('entity_reference')
        ->setDescription(t('The username of the user that submitted the webform.'))
        ->setSetting('target_type', 'user')
        ->setSetting('handler', 'default') ...

More of this can be seen in the code itself.
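For illustration only, here is a trimmed-down sketch of what such a baseFieldDefinitions() implementation could look like inside the Webform class. The field names other than uid are assumptions, and the class needs use statements for Drupal\Core\Entity\EntityTypeInterface and Drupal\Core\Field\BaseFieldDefinition.

/**
 * {@inheritdoc}
 */
public static function baseFieldDefinitions(EntityTypeInterface $entity_type) {
  $fields = array();

  // The primary key of the "webform" base table (illustrative).
  $fields['wid'] = BaseFieldDefinition::create('integer')
    ->setLabel(t('Webform ID'))
    ->setReadOnly(TRUE);

  // The uuid column of the base table (illustrative).
  $fields['uuid'] = BaseFieldDefinition::create('uuid')
    ->setLabel(t('UUID'))
    ->setReadOnly(TRUE);

  // The uid column of the "webform_field_data" table, as a reference to the
  // user entity of the person who submitted the form.
  $fields['uid'] = BaseFieldDefinition::create('entity_reference')
    ->setLabel(t('Submitted by'))
    ->setDescription(t('The username of the user that submitted the webform.'))
    ->setSetting('target_type', 'user')
    ->setSetting('handler', 'default');

  // The created column of the data table (illustrative).
  $fields['created'] = BaseFieldDefinition::create('created')
    ->setLabel(t('Created'))
    ->setDescription(t('The time that the webform was submitted.'));

  return $fields;
}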

Things to take note of when declaring the entity types

  • Annotations are used to add metadata.
  • Don’t forget your namespace declaration.
  • Include all the necessary classes used in your code with the use keyword.
  • Regularly clear the cache as you develop so you can see your new changes.
  • Describe your custom table fields by implementing the baseFieldDefinitions method.

Controlling access to the created entities and routes

From the annotation properties we defined an access handler with a value: “Drupal\webform\WebformAccessControlHandler”.

Create a new PHP file as shown below: webform/src/WebformAccessControlHandler.php

The code looks like:

class WebformAccessControlHandler extends EntityAccessControlHandler {
...

In the class, implement the access methods. For example, I override the create access method so as to control submission of the web form: in it I check whether the user/account object has the “submit $entity_bundle webform data” or the “submit any web form” permission, and if so, the user is given access to submit a web form.

Basically I am dynamically allowing access according to a certain condition. Cool… Dive into the parent class and see a lot that core is already doing for us. Add logic and override as you wish to control access.
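As a sketch of the idea (the exact permission strings and logic are assumptions based on the description above, not the module’s verbatim code), the create access override could look roughly like this:

namespace Drupal\webform;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Entity\EntityAccessControlHandler;
use Drupal\Core\Session\AccountInterface;

class WebformAccessControlHandler extends EntityAccessControlHandler {

  /**
   * {@inheritdoc}
   */
  protected function checkCreateAccess(AccountInterface $account, array $context, $entity_bundle = NULL) {
    // Allow submission when the user may submit this bundle, or any web form.
    return AccessResult::allowedIfHasPermissions($account, array(
      "submit $entity_bundle webform data",
      'submit any web form',
    ), 'OR');
  }

}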

Note that those are custom permissions defined by our custom module. More on that in the next section.

Defining static and dynamic permissions

Static permissions: Create and add permissions in the file below.

webform/webform.permissions.yml

e.g

administer webform types:
  title: 'Administer webform types'
  description: 'Perform tasks across all webform types.'
  restrict access: true

Dynamic permissions

These are permissions that are added automatically as a user adds new web form types, e.g. when someone adds a new web form type called contact, new contact permissions responsible for submitting, viewing, etc. of that web form will be added. To be able to generate those permissions you need to add a “permission_callbacks” key to the {MODULE_NAME}.permissions.yml file and, under that key, add the necessary classes and the corresponding methods that are responsible for generating the permissions. Add this key:

permission_callbacks:
  - \Drupal\webform\WebformPermissions::webformTypePermissions

The webformTypePermissions method will generate the necessary permissions depending on the logic added.
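To make the idea concrete, here is a hedged sketch of what such a callback class could look like. The generated permission names are assumptions chosen to match the access handler described earlier, not the module’s actual output.

namespace Drupal\webform;

use Drupal\Core\StringTranslation\StringTranslationTrait;
use Drupal\webform\Entity\WebformType;

class WebformPermissions {

  use StringTranslationTrait;

  /**
   * Returns an array of webform type permissions, one set per bundle.
   */
  public function webformTypePermissions() {
    $permissions = array();
    // Generate permissions for every webform type in the system.
    foreach (WebformType::loadMultiple() as $type) {
      $id = $type->id();
      $args = array('%type' => $type->label());
      $permissions["submit $id webform data"] = array(
        'title' => $this->t('%type: Submit webform data', $args),
      );
      $permissions["view $id webform data"] = array(
        'title' => $this->t('%type: View submitted webform data', $args),
      );
    }
    return $permissions;
  }

}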
