Nov 18 2020
Generate a username automatically on a Drupal site

Generating a username automatically on your Drupal site is very useful for both site admins and website visitors. Let's start with the fact that today, users have to log in to use a site's services.

As a rule, registration includes filling in the following data:

Nov 04 2020
Speed up your website with the Quicklink module for Drupal 8

Wondering if you need the Quicklink module for the Drupal 8 website? Wonder no more — you need this module, and we will explain why.

To attract and keep the attention of users on the Internet, you need to apply a lot of effort and ingenuity. One of the key user expectations is that your site loads quickly.

Oct 21 2020
Image Effects on the Drupal 8 website

Drupal 8 has already proven its versatility. Its core already has modules for working with images, but installing additional modules will not hurt (for example, the Image effects module).

If you go beyond your Drupal site's standard features, there is no question that your site will gain in convenience and attractiveness and give you an edge over your competitors!

Our web development agency has created today's blog to expand your site's horizons and introduce you to the main features of the Image effects module. Enjoy your reading!

Sep 30 2020
Drupal 8/9 modules for intuitive website navigation

Good website navigation is the best way to provide an enjoyable user experience. To create intuitive website navigation on Drupal 8 or 9, you should use the appropriate website navigation modules.

As we have already mentioned in our previous blog posts, if you want users to be comfortable on your site and able to find all the information they need, you should build website navigation so simple that even a child could use it.

Sep 23 2020
Drupal 10 release is planned: what does it mean for website owners?

Drupal keeps evolving in order to give you even more advanced digital capabilities.

We love to share useful content about upgrades with you — like the blog post about Drupal 7 to Drupal 8/9 Module Upgrader.

Time flies fast, and there is already a Drupal 10 release planned. What does all this mean for your website? Let’s take a closer look.

Sep 03 2020

Context

The requirement

In most cases, document (and, more broadly, media) management is done directly in Drupal. Especially now that the Media and Media Library modules are in core, media management in Drupal has received a lot of improvements over the last few years.

But sometimes, you need to be able to connect to a DMS (document management system) for one or several of the following reasons:

  • The tool is already in place in your customer’s information system architecture.
  • This allows you to delegate access control management on documents to this external tool. And therefore keep your Drupal website “simple”.
  • A DMS, by its very specialization, has more advanced document features than Drupal:
    • Versioning,
    • Metadata,
    • Editing,
    • Etc.

The project driving our contribution to the module was to develop an intranet for storing documents and delivery forms in an Alfresco DMS, using Drupal as a simplified user interface to interact with the documents in the DMS.

 
What is CMIS?

“CMIS (Content Management Interoperability Services) is an open standard that allows different content management systems to interoperate over the Internet.“

CMIS query example:

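For illustration, here is what such a query might look like when run through the dkd/php-cmis-client library mentioned below. This is a minimal sketch: it assumes $session is an already configured \Dkd\PhpCmis\SessionInterface, and the method names are given as we recall them from the library, so double-check them against its documentation.

$statement = "SELECT cmis:objectId, cmis:name FROM cmis:document WHERE cmis:name LIKE '%delivery%'";
foreach ($session->query($statement) as $result) {
  // Each query result exposes the properties selected in the statement.
  $name = $result->getPropertyValueById('cmis:name');
  \Drupal::logger('cmis_example')->notice('Found document: @name', ['@name' => $name]);
}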

As you may have guessed, the CMIS API module has been used because it provides the link between Drupal and Alfresco (or any CMIS-compliant system).

 

Module’s history

 
Drupal 7 and before (hypothesis)

There is already an ecosystem of CMIS modules on Drupal.org; here is an overview:

  • CMIS Alfresco (D6)
    • Deprecated in favor of the CMIS API module.
  • CMIS API (D7)
    • Initially an initiative (maybe a partnership?) between Acquia and Alfresco, as many of the people involved list Alfresco as their employer on their drupal.org profiles.
  • CMIS Views (D7)

Now we will focus on CMIS API as it is the one providing a generic implementation and tools for other modules.

 
Drupal 8

There were two Drupal 8 branches of the CMIS API module:

  • 8.x-1.x-dev
    • Port of the Drupal 7 version, but with non-optimal usage of the object-oriented code provided by Drupal 8 (no services, objects that are not easy to use (poor DX), …)
  • 8.x-2.x-dev (our contribution started from this branch)
    • Uses a new version of the dkd/php-cmis-client library (Guzzle 6 compatible)
    • Codebase stabilization (Drupal coding standards, removal of deprecations)
    • New features added

We focused on the 8.x-2.x branch and, besides fixing coding standards, removing deprecated code, and fixing some bugs, we added new features. Let's now look at the features of the module.

 

Current features

 
CMIS connection

The first thing to do with the CMIS module is to configure a connection to the DMS.

CMIS Connection

We added a “CMIS folder ID” setting so the file browser opens directly in this folder (new).

 
Sub module: CMIS Alfresco Auth User (new)

When the user logs in using the Drupal login form, the username and password are also sent to an Alfresco endpoint to authenticate against Alfresco and retrieve an authentication “ticket”.

Authentication ticket 1

Authentication ticket 2

Authentication ticket 3

This allows you to use different users for the CMIS connection without having to define a CMIS connection for each user.
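To make the mechanism more concrete, here is a minimal sketch of the idea (not the sub-module's actual code): an extra submit handler on the Drupal login form exchanges the submitted credentials for an Alfresco ticket through Alfresco's public REST authentication API. The module machine name and the base URL are assumptions to adapt to your setup.

use Drupal\Core\Form\FormStateInterface;

/**
 * Implements hook_form_FORM_ID_alter() for user_login_form.
 */
function mymodule_form_user_login_form_alter(array &$form, FormStateInterface $form_state) {
  // Add our own submit handler so we still have access to the plain password.
  $form['#submit'][] = 'mymodule_fetch_alfresco_ticket';
}

/**
 * Extra submit handler: request an authentication ticket from Alfresco.
 */
function mymodule_fetch_alfresco_ticket(array &$form, FormStateInterface $form_state) {
  try {
    $response = \Drupal::httpClient()->post('https://alfresco.example.com/alfresco/api/-default-/public/authentication/versions/1/tickets', [
      'json' => [
        'userId' => $form_state->getValue('name'),
        'password' => $form_state->getValue('pass'),
      ],
    ]);
    $data = json_decode((string) $response->getBody(), TRUE);
    // Keep the ticket in the session for later Alfresco/CMIS requests.
    \Drupal::request()->getSession()->set('alfresco_ticket', $data['entry']['id']);
  }
  catch (\Exception $e) {
    \Drupal::logger('mymodule')->warning('Could not obtain an Alfresco ticket: @message', ['@message' => $e->getMessage()]);
  }
}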

 
File browser

The main feature of the module is to provide a file browser allowing users to view, upload or delete folders and documents.

Access control for uploading or deleting a file and creating or deleting a folder no longer uses a Drupal permission; instead, permissions are checked directly in the DMS (new).

CMIS Browser

CMIS query page

A page to run CMIS queries directly, without the limitations of the UI.

Request CMIS

Field formatters:

There is a CMIS field type that allows you to select a CMIS document or folder. This information can be displayed using two field formatters:

  • Link to the CMIS document
  • File browser starting on the selected folder (new)


 


Next steps

The module currently provides some nice features that may answer a lot of needs. But the current codebase is difficult to work with, so adding new features is hard, or impossible.

Therefore, to ensure the module stays healthy, we may take the following steps, which are a mix of refactoring and feature additions:

  • Rethink codebase’s architecture:
    • Implement services
    • Add a connections factory service
    • Routes: to have proper non-ajax support if needed and better organized routes
  • Rethink how the display is done to ease the theming and overrides
  • Audit the UI strings
    • Translating strings in French highlighted that even in English some UI strings are not correct
  • Refactor authentication part
    • Support Nuxeo, etc.
  • Implement sort and pager on the file browser
    • Depends on the PHP CMIS Client library
  • Media integration
    • Import a CMIS document as a Media entity so it can be used as any “standard” image for example
  • Implement automated tests
    • The setup will be hard to do

Conclusion

Thanks to Ian Norton for granting us access to the CMIS API module.

And thanks to our customer's project, whose need for improvements to the CMIS API module allowed us to contribute back.

Now the Drupal 8 version of the CMIS API module is back on track and by releasing the 8.x-2.0-alpha1 version, we hope that this will help raise the Drupal 8 module’s usage and gather momentum.

PS: On Tuesday, September 1st, Florent Torregrosa held a meetup on various topics such as Entity Share V3, CMIS, and File Extractor. You can find his article here [FR] and the presentation slides here [EN].
 

Aug 19 2020

When a new product rolls out, it raises a lot of interest. The Drupal 9 release is no exception. This interest is easy to understand because new versions of Drupal bring new digital opportunities. Today, interactions with potential customers have shifted into the virtual sphere, so websites have an especially important mission to perform.

Whenever you have questions, our Drupal development team is ready to provide the answers! We offered you a Drupal 9 upgrade checklist for website owners, and now that Drupal 9 is out, it’s time to take a new look at it and answer all your questions in our Q&A session.

Drupal 9 release: Top Questions and Expert Insights

Question: What was the release date of Drupal 9?

Answer: Drupal 9.0.0 was released on June 4, 2020.

Question: What are the new features of Drupal 9?

Answer: The Drupal 9.0 release is not game-changing in terms of features. It is very similar to Drupal 8.9, but cleaned up of obsolete code and furnished with the latest versions of third-party dependencies. One of Drupal 9's outstanding capabilities is super easy upgrades.

When it comes to features, from Drupal 8.0 all the way to Drupal 9.0, Drupal has been growing and evolving in many aspects. This will continue in Drupal 9. It is significantly easier to use for marketers, content editors, and designers. Media and Media Library, Layout Builder, Content Moderation and Workflows, and Workspaces have transformed the editing processes forever. Drupal's API-first nature, with JSON:API in core, makes websites more open to third-party integration and multi-channel reach.

Layout Builder in Drupal 8 and 9

The Drupal creators will keep inventing more interesting features based on the strategic tracks — better editor-friendliness, compliance with website accessibility standards, integration, and more. For example, the new default front-end theme Olivero planned for Drupal 9.1 ships with the best web accessibility practices included. And it’s just the beginning of Drupal 9’s forward movement.

Question: What are the updated dependencies in Drupal 9?

Answer: Among the major third-party libraries and components in Drupal 9 are the Twig 2 templating engine, components of the high-level Symfony 4.4 framework (with Symfony 5 planned), the modern content editor CKEditor 5 (planned), and more.

Question: What are the benefits of the updated dependencies in Drupal 9?

Answer: Modern versions of dependencies make websites faster, more efficient, and more secure. Their code gets much cleaner, which leads to easier website maintenance, more efficient development, friendliness to search engines, and so on.

Question: What are the reasons to upgrade to Drupal 9 from Drupal 7?

Answer: Drupal 7 is getting outdated in its digital capabilities. Between Drupal 7 and Drupal 8/9, there is a technological abyss, but you can cross the bridge to the other side.

You will get a version with all features of both Drupal 8 and Drupal 9. Multilingual options, third-party integrations, mobile-friendliness, better accessibility, outstanding changes for content editors, and improved configuration management are just a few examples.

Question: What are the reasons to upgrade to Drupal 9 from Drupal 8?

Answer: Drupal 9 has huge plans for innovative development. In addition, an upgrade from D8 to D9 is very easy and requires actions you need to take anyway.

To move from Drupal 8 to Drupal 9, you will need to clean up the deprecated code from your site and update it to the latest minor version of Drupal 8. You need these updates in any case to use all the new features that have appeared from Drupal 8.0 to Drupal 9.0, as well as to get more security for your website.

So the question “Why upgrade to Drupal 9 from Drupal 8?” needs no extra explanation — this is just your simple, natural, and effortless move to the future.

Question: How do I prepare for Drupal 9?

Answer: The key requirements are to update your website to Drupal 8.8 or 8.9, update your modules and themes, check your site for deprecated APIs and functions and replace them with new alternatives, and make sure your hosting server uses PHP 7.3 or higher. Some of the useful tools to prepare for Drupal 9 are the Drupal Check command-line tool, the Upgrade Status module, the Drupal-rector tool, the Upgrade Rector contributed module, and the Drupal 9 Deprecation Status page.
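For example, drupal_set_message() was deprecated in Drupal 8.5 and removed in Drupal 9; a typical fix flagged by these tools looks like this:

// Deprecated in Drupal 8.5, removed in Drupal 9.
drupal_set_message(t('Settings saved.'));

// Replacement, available since Drupal 8.5 and the only option in Drupal 9.
\Drupal::messenger()->addStatus(t('Settings saved.'));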

Drupal 9 readiness module status page

Question: How long will it take to upgrade from Drupal 8 to Drupal 9?

Answer: A well-prepared site makes the upgrade instant, so the answer depends on the state of your website’s readiness. This includes how long ago you made updates between the minor versions (maybe you are still on 8.1 or something), how many custom modules you have that need to be cleaned up from obsolete code, and so on.

Upgrade to Drupal 9 with us!

The above questions about the Drupal 9 release are the most commonly asked, but our Drupal team will be glad to answer any of your questions. Of course, we will seamlessly upgrade you so you can enjoy the latest Drupal version. Our prices are affordable and a quote is free. Contact us for a new digital era of your website!

Aug 10 2020

Context

Have you ever needed to display content, or parts of a content item, differently depending on the user's segmentation?

Have you ever needed to schedule content, or parts of a content item, for publication/unpublication?

Have you ever needed to preview what your website will look like depending on those visibility conditions?

Being able to address the right piece of information to the right person:

  • reduces the effort needed to access the information,
  • increases engagement.

Allowing marketers and content contributors to preview exactly how the website will behave depending on personalization greatly helps those teams be more effective.

We encountered this problem on a customer project that started in the last quarter of 2017. So keep in mind that this is the state of the art from that period, and that the website was built around the Paragraphs module.

We are now going to see the requirements we had.

Two main axes of changing content

Scheduled content publication/unpublication
 

In the case of our customer, it was very important to be able to schedule content publication/unpublication because of advertisement contracts on certain parts of certain pages.

Constraints:

  • be able to schedule when content will be available or not:
    • visitors should not have access to irrelevant content,
    • contributors can prepare content and schedule publication.

The existing solutions we evaluated were:

  • Scheduler:
    • only for nodes,
    • at this time (end of 2017) no integration with Content Moderation,
    • no preview.
  • Scheduled Updates:
    • scheduling is not handled per entity (no element on the entity’s form),
    • no preview.

Neither solution satisfied our needs.

Content segmentation

Show different content depending on the user's segmentation.

Constraints:

  • the website is autonomous: no external services dedicated to that,
  • must work for authenticated users.

The existing solutions we evaluated were:

Neither of the solutions we evaluated satisfied our needs.

Website preview

Our customer was migrating from a proprietary Java-based CMS from Oracle to Drupal 8. That CMS had a feature to preview exactly how the website would look at a precise moment in time.

This is the kind of (very advanced) feature Drupal currently lacks. But we must recognize that huge efforts have been made in recent years with Content Moderation in core, Workspaces, etc. So Drupal core is closing the gap with proprietary CMSs offering such advanced features.

Fortunately, in the last Drupal product survey https://dri.es/drupal-2020-product-survey, there were options to indicate whether you want such features in Drupal core.

The existing solution we evaluated was:

  • Workspace:
    • not stable enough at the time,
    • not fine-grained,
    • too hard for the client to use,
    • not a preview tool,
    • no temporal dimension.

This solution did not satisfy our needs.

So for our customer’s project we implemented a custom solution.

The Entity Visibility Preview module

Based on this experience, at the end of 2019, we developed a Drupal 8 module called Entity Visibility Preview to have a generic and reusable implementation.

Its goal is to allow site owners to add visibility conditions (date interval, segmentation, etc.) on content entities, in order to display different content to visitors depending on their segmentation, the current date, or other conditions.

It also allows previewing the website with the combination of conditions you want, to test what you expect your visitors will see.

Architecture

To provide a generic solution, the module provides:

  • a new field type:
    • inspired by Metatag for the storage and field widget.
  • a new plugin type to:
    • handle access control logic,
    • provide and handle form element on the field widget,
    • provide and handle form element on the preview form and a preview message.
  • a preview form:
    • allows previewing the website depending on conditions,
    • preview values are stored in session.

Website preview form


Provided conditions
 

Currently the module provides two visibility conditions:

  • Date range
    • start date only: content will be visible from this date,
    • end date only: content will not be visible from this date,
    • start and end dates: content will be visible in this interval only.
  • Taxonomy (segmentation)
    • choose one or several taxonomy terms (OR between terms),
    • configure:
      • which vocabularies to reference terms from,
      • which taxonomy reference fields on the user entity to compare with,
      • set values for the anonymous users on a dedicated form.

Each plugin can be enabled and configured per field instance for flexibility.

Test content

Limitations

The primary use case of the Entity Visibility Preview module is for authenticated users. There is no distinction between anonymous users.

The implementation of Entity Visibility Preview relies on the hook_entity_access hook, which in the case of nodes is only triggered when viewing the node's page. So a node with visibility conditions will still appear if it shows up in a View's results, for example.
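As a simplified illustration of this approach (not the module's actual code), an access hook for a date-range condition could look roughly like the sketch below; the field name is hypothetical.

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Entity\EntityInterface;
use Drupal\Core\Entity\FieldableEntityInterface;
use Drupal\Core\Session\AccountInterface;

/**
 * Implements hook_entity_access().
 */
function mymodule_entity_access(EntityInterface $entity, $operation, AccountInterface $account) {
  if ($operation !== 'view' || !$entity instanceof FieldableEntityInterface) {
    return AccessResult::neutral();
  }
  if (!$entity->hasField('field_visibility_dates') || $entity->get('field_visibility_dates')->isEmpty()) {
    return AccessResult::neutral();
  }

  $item = $entity->get('field_visibility_dates')->first();
  $now = \Drupal::time()->getRequestTime();
  $start = $item->value ? strtotime($item->value) : NULL;
  $end = $item->end_value ? strtotime($item->end_value) : NULL;

  // The entity is visible only inside the configured interval.
  if (($start && $now < $start) || ($end && $now > $end)) {
    return AccessResult::forbidden()->addCacheableDependency($entity);
  }
  return AccessResult::neutral()->addCacheableDependency($entity);
}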

The Node Grant API is not supported in a generic way, so there is no access control in:

  • Views,
  • entity queries,
  • Search API views,
  • ...

Fortunately, we managed to create a sub-module, Entity Visibility Preview Simple Grants, to provide Node Grant API support for the two provided plugins (date range and taxonomy). So if you have to create custom condition plugins, you can take a look at this sub-module to see how it was done.

Conclusion

With the Entity Visibility Preview module, you can change parts of your page (based on paragraphs, for example) depending on the date, segmentation, and any other conditions your needs require, and you can preview the combinations. All of that with great flexibility in terms of configuration.

You can also see an article with a demonstration video on https://innovation.smile.eu/videos/2020-02-03-drupal-entity-visibility-preview.html.

At the same time as we published the first versions of the module, we already had a customer project requiring it!

We would be very proud if our module could serve as an experiment or example of what people want to be available directly in Drupal core in the future.
 

Aug 10 2020

Starting point

What is Entity Share?

Entity Share is a collection of modules allowing you to share content like articles or media between different Drupal instances.

Its development started at the beginning of 2017 to address the problem of deploying content in web-factory type projects we had for our clients.

At that time, the state of the art was not satisfying enough to properly address our needs.

How does Entity Share work?

In short, when you have at least two websites:

You configure one to be a content server with the Entity Share Server sub-module. On this website, you configure “channels”, which allow you to expose which content types (media types, etc.) are available for sharing, and in which languages.

You configure the other website to be a client with the Entity Share Client sub-module. On this website, you enter the URL and authentication information to the other website. Then you have access to a “Pull form” allowing you to select which content you want to import.

Other presentations to go deeper

For more details about the current version (8.x-2.0-alpha11) of Entity Share, you can take a look at the presentations we have done before:

You can also take a look at this very well detailed article: Entity Share - A cost-effective solution to manage multisite content.

Going beyond those resources, which focus on the current version, this article will present the new version in development.

Limitations

Currently, on the client site, when you want to pull content, you have to:

  • select the website you want to pull content from (auto selected if only one website is available),
  • select one of the available channels (auto selected if only one channel is available),
  • select the contents you want to import.

Then all the import processing is done, and there is no way to configure it using the back office (there are partial ways for developers using events). The import processing consists of creating or updating an entity, then recursively importing all the referenced entities and importing the physical files.

For example, you can’t limit the recursion depth when referenced entities are imported. So depending on your website's topology, if you have a lot of cross-linked content, importing one piece of content can result in a ton of content imported by recursion.

With the feedback accumulated on our client projects and from other Drupal community actors, Entity Share's issue queue has been filled with a lot of feature requests to handle more and more advanced usages.

Which is great! This means that the Entity Share module is used more and more and answers a recurring need!

But the current architecture is not ready to answer those feature requests in a proper manner, nor to allow developers to customize Entity Share properly for their projects. So it is time to rework the module's architecture.

The new architecture

To ease maintenance and customization, and to allow fine-grained configuration, we introduced a dedicated plugin type and clearly defined steps in the import process, so that each plugin can react at any of these steps.

This is inspired by both code and ideas from the Search API module in which you can configure processors that can act on the way your data is indexed, queried, displayed, etc. with your search indexes.

Manage processor for search index content

When released, we will have a dedicated configuration entity, the “Import config” entity type, which stores:

  • which plugins are enabled,
  • the configuration of each plugin: for example, the recursion depth of entity reference imports,
  • the weight of each enabled plugin in each import step: so you have total control over the order of execution of the import processing.

Add config import

With this new architecture, the steps to import entities are:

  • select one of the available import configurations (auto selected if only one import config is available),
  • select the website you want to pull content from (auto selected if only one website is available),
  • select one of the available channels (auto selected if only one channel is available),
  • select the contents you want to import.

Yes, this adds an extra step, but this extra step allows your import to behave differently depending on your needs.

Pull entities

For example, if there is a plugin to automatically import entities referenced in link fields (see this issue and the sketch below), there is no need to enable it if you don't want or need it. With the previous architecture, such a feature would have been enabled by default on all imports, so each implemented feature request would have added unnecessary processing for sites that don't need it.
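To give an idea of what such an optional plugin could look like, here is a purely hypothetical skeleton; the annotation, base class, and method names are assumptions made for this sketch, not the module's final API.

namespace Drupal\my_module\Plugin\ImportProcessor;

use Drupal\Core\Entity\ContentEntityInterface;
use Drupal\entity_share_client\ImportProcessor\ImportProcessorPluginBase;
use Drupal\entity_share_client\RuntimeImportContext;

/**
 * @ImportProcessor(
 *   id = "link_field_reference_import",
 *   label = @Translation("Import entities referenced in link fields"),
 * )
 */
class LinkFieldReferenceImport extends ImportProcessorPluginBase {

  /**
   * Reacts to one step of the import pipeline for a pulled entity.
   */
  public function processEntity(RuntimeImportContext $runtime_import_context, ContentEntityInterface $processed_entity, array $entity_json_data) {
    // Inspect link fields on the pulled entity and queue the referenced
    // entities for import, honoring the recursion depth configured on the
    // "Import config" entity.
  }

}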

Benefits for developers (DX)

With this new plugin system, developers:

  • are able to alter the import processing from code inside dedicated classes, with no more need to patch Entity Share,
  • have more entry points to alter the import processing at the right time for their need,
  • can fully override or disable common behavior provided by the Entity Share module and sub-modules.

The main import process code has been revamped into new services, with clear interfaces, to remove duplicated code and to ease the development of custom code.

For example, the new “ImportService” service allows importing specific entities from a channel, or a whole channel, using the “importEntities” or “importChannel” methods. These methods were designed to require only the strict minimum of arguments, to ease usage.
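A minimal usage sketch based on this description (the service ID and exact method signatures are assumptions, so check the module's code once the new version is released):

/** @var \Drupal\entity_share_client\Service\ImportServiceInterface $import_service */
$import_service = \Drupal::service('entity_share_client.import_service');

// $import_context is assumed to identify the remote website, the channel
// and the import config to use.
// Import two specific entities (by UUID) exposed on the channel.
$import_service->importEntities($import_context, [
  '9a6f1c9e-1111-2222-3333-444444444444',
  '9a6f1c9e-5555-6666-7777-888888888888',
]);

// Or import everything the channel exposes.
$import_service->importChannel($import_context);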

The command-line integration will also be revamped to benefit from the “import config”, and therefore to offer the same features and behavior whether the import is launched from the command line or from the Drupal back office.

Benefits for administrators (UX)

With this new plugin system, administrators:

  • can have different import behaviors and select the one they want for their import,
  • are able to control with fine granularity how they want their import to behave.

This rework is also an opportunity to improve all parts of the module. For example:

  • administrators are now able to import a whole channel using the “Pull form”, whereas before it was only possible using the command line,
  • some inconsistent behavior in indicating whether imported content is out of sync will be fixed.

Conclusion and roadmap

 

After 3 years of existence, with more and more feedback and feature requests accumulated, it was time to rethink the module’s architecture to be prepared for future evolutions.

The new architecture empowers both developers and administrators, and eases the module's maintenance.

For a detailed and up-to-date roadmap, you can take a look at the following issue: https://www.drupal.org/project/entity_share/issues/3082611

Help is always welcome: you can provide feedback and patches in the issue queue. And if you want to sponsor the module's maintenance or some specific features, or if you need dedicated support time, get in touch with us!

Aug 05 2020
Your website's upgrade from Drupal 7 to Drupal 9: the alternatives

Drupal 7 website owners have probably heard, more than once, that their websites are getting outdated.

With the recent release of Drupal 9, this topic is gaining special momentum. However, some business owners are reluctant to upgrade from Drupal 7 to Drupal 9. They need other solutions and alternatives.

Aug 03 2020

If you are using shared web hosting for your Composer-managed Drupal website, you have probably run into the problem of your server running out of memory for simple commands such as "composer install". Shared web hosting usually has memory limits which are shared across many websites.

This problem becomes quickly apparent if you are trying to install a new Drupal website on your shared hosting account, because this is probably the first time that you have to run "composer install" for this website. Composer then has to go and fetch all the files and download them into your vendor directory. The high memory consumption in Composer is mostly due to dependency resolution, which is a memory-intensive process. The end result is that your process is abruptly killed without finishing.

Solution

A quick and easy solution is to install the site locally, where you can successfully run "composer install". You will then have the vendor directory locally. Then use a tool such as FTP to manually upload the vendor folder into the correct directory on your hosting server. Then, on your hosting server, try running "composer install" again.

At this point, composer would not need to fetch all the files again. Composer may or may not have to update some files in the vendor directory. This is OK. But the process should now be able to complete within your memory limitations.

It should be noted that the vendor directory should generally not be committed to your Git repository.

There are arguments for and against committing the vendor directory. It is not technically wrong, but there are some risks associated with it. The recommendation against committing /vendor to the Git repo is mostly about avoiding headaches with Git submodules (duplicated history, etc.). The Composer docs page shows some ways that you can avoid hitting those.

Jul 16 2020

Drupal 7 to Drupal 8/9 migration is something that should bring great value to your business. Numerous Drupal 8 improvements and the fully revamped architecture impressed everyone when the 8th version was released.

Since then, Drupal 8 has kept rapidly improving with better admin usability, performance, accessibility, third-party integration, multilingual support, and other modern web practices. The 9th version arrived as an even cleaner version, boasting instant upgrades from D8, equipped with modern libraries to make sites faster, and with plenty of ambitious plans for future innovation.

You might already be tempted to migrate from Drupal 7 to Drupal 8 or 9, but there is something stopping you. You may ask yourself: how much time does it take and how much does it cost to migrate from Drupal 7 to Drupal 8/9?

This question will be answered on a case-by-case basis, but we will do our best to give you the estimated time and cost of Drupal 7 to Drupal 8/9 migration in this post.

Getting ready for your Drupal 7 to Drupal 8/9 migration

Migration is always a great chance to refresh or revamp your website in many ways. To make sure the upgrade brings you the best value, as well as help the development team give you more precise estimates of Drupal 7 to Drupal 8/9 migration, you will need to do a little preparation.

Rethink your business goals: what new priorities have appeared and how would you like your website to reflect them? And, vice versa, what is no longer relevant and should be cleaned up? Talk to your admins and content editors: what functionality is missing and what could be improved in order to speed up their routine tasks?

Based on this, developers will help you review your website’s modules, configuration, UX design, and more. Some things will “travel” to the next level, some things will be replaced by more modern alternatives, some things will be cleaned-up, and so on.

How much time does it take to upgrade from Drupal 7 to Drupal 8/9?

The simplest of brochure sites without custom modules and with just a couple of content types can be migrated within 30 hours. However, as the site complexity grows, the migration process extends. For some websites, it can take a couple of months. The larger, the more complex, and the more custom-heavy your site is, the longer it will take to migrate. Let’s see how it works.

  • Initialization. The essence of every Drupal 7 to Drupal 8/9 migration is to move your website’s configuration and content to the newly created clean site instance. Content includes blog posts, images, files, and similar elements, while configuration is about your content types, fields, views, comment types, and so on. Developers analyze all this and decide on the migration specifics. They also prepare a fresh Drupal destination site, update your source site to the latest minor version, enable the necessary modules, etc.
  • Custom module rewriting. An important point is that your website’s custom logic in the form of custom modules will need to be rewritten according to the Drupal 8/9 standards (object-oriented programming, or OOP). This might take up a lot of the migration time.
  • Automated or manual migration process. Thanks to the migration pack of modules, it’s possible to automate many standard processes. However, in many cases, manual recreation of elements (e.g. views) is also needed. To import the website’s data in various formats (XML, CSV, RSS, etc.), developers will need to define the source and the destination. Some projects require complex field mapping, which influences the time. If you need to reorganize the content structure, it will take more time as well.
  • Validation. When the data has been migrated, developers carefully check that there are no errors and that the fields have been correctly filled with the data.
  • Website launch. When everything is thoroughly checked, developers deploy your new and shiny Drupal 8 or 9 website to live.

That said, the number of hours it takes to migrate increases when:

  • You have many custom modules.
  • Your website has a lot of content types and content.
  • Your content types need to be reorganized (split, consolidated, etc.)
  • Your content is multilingual.
  • There are custom fields on your site.
  • There is a need to recreate many views.
  • You need a website redesign.
  • And so on.

It should be noted that, even in the most complex projects, the time of Drupal 7 to Drupal 8/9 migration can always be reduced if you wish. To achieve this, you can hire a larger number of developers to perform the migration.

How much does it cost to upgrade from Drupal 7 to Drupal 8/9?

The cost of the Drupal 7 to Drupal 8/9 migration will depend upon the time and the hourly rate. The hourly rates may differ across companies. InternetDevels, together with our support team Drudesk that specializes in migrations, charges $30 per hour on average.

So, if a simple brochure site takes 30 hours to upgrade from Drupal 7 to Drupal 8 or 9, the cost starts at $900. This looks very lucrative, doesn’t it?

Upgrade from Drupal 7 to Drupal 8 or 9!

Show your website to our Drupal migration and upgrade experts team to discover the exact time and cost of your Drupal 7 to Drupal 8/9 migration. Our high-efficiency approach will allow you to save a lot of hours and budget. We also offer something totally free — our consultation and our attention. Contact us!

Jul 01 2020
Upgrade your website from Drupal 7 to 8/9 with the Drupal Module Upgrader

Upgrading a website from Drupal 7 to Drupal 8/9 is increasingly relevant.

Now that Drupal 9 is officially released, it's clearer than ever that Drupal 7 is getting outdated. It's time to upgrade Drupal 7 sites so they can give their owners much better value. The Drudesk support team knows how to achieve this through upgrades and updates, as well as speed optimization, bug fixes, redesign, and so on.

Jun 17 2020
Build a secure website with Guardr: for governments, corporations, banks & more

In a world where cyber attacks are becoming more elaborate, you want to protect your website.

We know website security is a major priority for many businesses and organizations. They often ask our development & support team to build a secure website or perform a website security audit on an existing one.

Jun 03 2020
Drupal 9 is already out! Be ready for website migration

Today is June 3 — that means that Drupal 9 is already released. What can we expect? What updates are offered? How can you migrate to Drupal 9 painlessly?

Today DruDesk experts answer those questions and others.

We would like to note that DruDesk switched to a high-efficiency model of cooperation. This means that we complete your tasks twice as fast. If you have additional questions, write to us.

May 13 2020
Check a website for security vulnerabilities with the Security Review Drupal module

Security above all! The Drudesk team would love to help all businesses make their websites secure.

This is why we offer our very popular service — a Drupal website security audit at affordable prices. During the audit, we perform in-depth checks and find security vulnerabilities. After a good clean-up, we always recommend helpful tools that will help users keep an eye on website security.

Apr 01 2020
Create a real estate website on Drupal with all the needed features

Real estate & property management businesses can reap huge profits from having a well-built website — and many of them actually do!

However, sites in this area are not built in one click. They require reliable and smooth third-party integration, excellent property categorization, advanced search, and more. This makes the choice of CMS an important decision. In this post, we take a tour of the features and opportunities that prove it's a great idea to build a real estate website on Drupal.

Feb 28 2020
  • Core Modules: These modules are installed with every Drupal installation package; you can say they are the basic (core) modules necessary for building a site, like handling user accounts or providing navigation menus.
  • Contributed Modules: These modules can be downloaded from the official Drupal.org site.
  • Custom Modules: Custom modules are pretty self-explanatory – they’re explicitly coded for individual projects.

So here, we try to list some of the modules, from the vast pool of available modules, that can be used on the go for your web development in a hassle-free way.

Read More - Move on to Drupal 8, Be Ready for Drupal 9!

Jan 20 2020

In this tutorial, I will explain how I took a listing of nodes coming from a View and created a parallax effect using the Bootstrap theme in Drupal 8.

The code in this tutorial can be found on GitLab. 

To use the code in the repos, you can:

  1. Clone the repos locally
  2. Run composer install from the root folder
  3. Run drush config:import -y 

I would like to refer to this article which explains how to Add Parallax Blocks to Drupal 8 With ScrollMagic. I got my ideas and most of the code from there.

You will need to have Drupal 8 installed with Views and a Bootstrap subtheme set up.

For this tutorial, I am using a simple Page content type that I created with 3 fields: Title, Body and Background Image.

You should then create a View that lists this Page content type. I am listing the Body field and Background Image field in my example. I am keeping my Page content type and View very simple for explanation purposes. But you can set up more complex Views and content types as you wish. You just need at least an image field (or you can place a background image to an existing content type field via CSS as well).

Step #1 - Configure the Bootstrap theme

Because Parallax effects are usually full width, you should turn on the Fluid container option in the Bootstrap theme menu.

  • Go to your Bootstrap subtheme's theme settings.
  • Under General >> Container, check the Fluid container checkbox.

Step #2 - Set up your Content Type and Views.

Here is my Page content type with 3 fields: Title (not shown in the screenshot), Body, and Background image.

Page content type fields

Here is my View. It's a pretty standard View which just lists the Page content type and creates a page out of it. I am using the image field here as the background image for the parallax. You can also use another field (text or whatever) and set a CSS background as the image.

View listing page content types

Now create some Page content and go to the View page to see the listing of your content.

Step #3 - Style the View using CSS

Here is my CSS:

.view-homepage {

  .views-row { text-align: center; position: relative; overflow: hidden; height: 500px; }

  .views-field-field-background-image { position: absolute; width: 100%; height: 140%; }

  .views-field-body {

    position: relative; top: 50%;
    -webkit-transform: translateY(-50%);
    -ms-transform: translateY(-50%);
    transform: translateY(-50%);
    color: #fff;
  }
}

Step #4 - Get the ScrollMagic files

Go to GitHub and download/extract the library.

Now go to your theme folder, create a folder called /js/, and move these files into it:

  • animation.gsap.min.js
  • ScrollMagic.min.js
  • TweenMax.min.js
  • and also manually create a blank parallax.js

You now need to tell the theme to load the js libraries. To do that, open your_theme/your_theme.libraries.yml:

global-styling:
  css:
    theme:
     css/style.css: {}

  js:
    js/ScrollMagic.min.js: {}
    js/animation.gsap.min.js: {}
    js/TweenMax.min.js: {}
    js/parallax.js: {}
  dependencies:
    - core/drupal
    - core/jquery

Step #5 - Add the Parallax JS code to parallax.js

(function ($) {
    'use strict';
    Drupal.behaviors.myBehavior = {
        attach: function (context, settings) {

            var controller = new ScrollMagic.Controller();

            $('.views-row').each(function (index) {

                var $bg = $(this).find('.views-field-field-background-image');
                var $content = $(this).find('.views-field-body');

                var tl = new TimelineMax();
                tl
                    .from($bg, 2, {y: '-40%', ease: Power0.easeNone}, 0)
                    .from($content, 1, {autoAlpha: 0, ease: Power0.easeNone}, 0.4)
                ;

                var scene = new ScrollMagic.Scene({
                    triggerElement: this,
                    triggerHook: 1,
                    duration: "100%"
                })
                .setTween(tl)
                .addTo(controller);
            });
        }
    }
}(jQuery));

Make sure the images you are using are big enough for the max width you want to display.

You should now have a nice parallax scrolling effect for each node in the View list.

Parallax example

Jan 03 2020

Ok, the problem is clear:

  • Your composer based Drupal site puts your code base to the /web folder
  • You are using a shared hosting which maps your primary domain to /public_html, and you can't change that

Now your users will have to browse your site as http://example.com/web. And that is not cool.

So how do you serve your site from the subfolder /public_html/web while removing the /web suffix so it becomes transparent to users?

Here are the steps, as I learned from this thread on Drupal.org

1. Open your settings.php file and add the following code:

if ( isset($GLOBALS['request']) && '/web/index.php' === $GLOBALS['request']->server->get('SCRIPT_NAME') ) {
    $GLOBALS['request']->server->set('SCRIPT_NAME', '/index.php');
}

2. Create a .htaccess file in the /public_html folder with:


RewriteEngine on
# Redirect to the subdirectory because that's where Drupal is installed
RewriteRule (.*) web/$1 [L]

3. Update .htaccess under /public_html/web folder

Uncomment the line RewriteBase and set it to:

RewriteBase /web

4. Clear the cache and run update.php

Your site should now work when browsing http://example.com (without the /web suffix). Your menu items may still have the /web part, but it will be gone after a hard refresh.

5. (Bonus) If you want to redirect http/https and www/non-www:

In the .htaccess file under /public_html/web, please add those lines between the <IfModule mod_rewrite.c> tags:

This is to redirect non-https + www to https + non-www:

  RewriteCond %{HTTPS} off [OR]
  RewriteCond %{HTTP_HOST} ^www\.example\.com [NC]
  RewriteRule (.*) https://example.com/$1 [L,R=301]

And this is to redirect non-https + non-www to https + www:

  RewriteCond %{HTTPS} off [OR]
  RewriteCond %{HTTP_HOST} !^www\. [NC]
  RewriteCond %{HTTP_HOST} ^(.*)$  [NC]
  RewriteRule (.*) https://www.%1/$1 [R=301,L]

You can see those examples on Htaccess guide.

Dec 07 2019
<?php

/**
 * Implements hook_drush_command().
 */
function MYMODULE_drush_command() {
  $items['nodecreate'] = array(
    'callback' => 'custom_drush_create_node',
    'description' => dt('Triggers a node-save'),
    'aliases' => array('nc'),
    'arguments'   => array(
      'title'     => "Title of node",
    ),
    'options' => array(
      'repeat' => 'Number nodes to create.',
    ),
    'examples' => array(
      'drush nc error' => 'Prints error as node title is blank.',
      'drush nc Test --repeat=5' => 'Creates 5 nodes with title Test.',
    ),
  );
  return $items;
}

Running the Drush help command for our own command drush help nc will list some useful information about this command (arguments, options, description, aliases, examples etc)

Callback Function

Since we set the 'callback' key to 'custom_drush_create_node', Drush will call a function with that exact name. (If no callback is specified, Drush falls back to a default naming structure that starts with drush, followed by the name of the command, all connected with underscores.)

function custom_drush_create_node($title) {
  $repeat = drush_get_option('repeat', 1);
  for ($i=0; $i < $repeat; $i++) {
    $node = new stdClass();
    $node->type = "page";
    $node->title = $title;
    $node->language = LANGUAGE_NONE;
    $node->uid = 1;
    $node = node_submit($node);
    node_save($node);
  }
  drupal_set_message(t('Created ' . $repeat . ' node with title ' . $title));
}

Now clear the drush cache: drush cc drush

And run command: drush nc OR drush nodecreate

Arguments and options

-> Arguments are mandatory whereas options are not.

-> Arguments are passed as function parameters (in order) while options are retrieved in the callback using a special helper function (drush_get_option).

We declared 1 argument (called title) and one option called repeat.

-> The argument will be the first string written after the command name in the terminal (drush nodecreate or drush nc).

-> The option will be an integer value that gets assigned to the --repeat flag in the command.

Ex. drush nc Test --repeat=2

This will create 2 nodes with the title “Test”.

User input

Let’s make it so that if a user doesn’t pass an argument, we ask them what argument they’d like to pass and use the value they provide interactively.

This goes to the top of the command callback function before checking whether the correct argument was passed.

// Check for existence of argument
if (!$title) {
  $options = array(
    'Test' => t('Test'),
    '0' => t('Error'),
  );
  $title = drush_choice($options, t('Please choose a option.'));
}
…
  1. Everything happens only if the user has not passed an argument.
  2. We create an array of key-value pairs that will represent the choices we give the user. The array keys represent the machine name of the choice while the values, the human readable name.
  3. We pass this array alongside a question string to the drush_choice() function, which returns the machine name of the choice the user makes. And that becomes our new $title variable (the argument).

If the title is not set or an error is returned, we can add code to print an error in the terminal.

…
  if (!$title || $title == 'error') {
    drupal_set_message(t('Error! No title set.'));
  }
…

Full Code will look something like this:

<?php

/**
 * Implements hook_drush_command().
 */
function MYMODULE_drush_command() {
  $items['nodecreate'] = array(
    'callback' => 'custom_drush_create_node',
    'description' => dt('Triggers a node-save'),
    'aliases' => array('nc'),
    'arguments'   => array(
      'title'     => "Title of node",
    ),
    'options' => array(
      'repeat' => 'Number nodes to create.',
    ),
    'examples' => array(
      'drush nc error' => 'Prints error as node title is blank.',
      'drush nc Test --repeat=5' => 'Creates 5 nodes with title Test.',
    ),
  );
  return $items;
}
 
function custom_drush_create_node($title) {
  if (!$title) {
    $options = array(
      'Test' => t('Test'),
      '0' => t('Error'),
    );
    $title = drush_choice($options, t('Please choose a option.'));
  }
  if (!$title || $title == 'error') {
    drupal_set_message(t('Error! No title set.'));
  }
  else {
    $repeat = drush_get_option('repeat', 1);
    for ($i=0; $i < $repeat; $i++) {
      $node = new stdClass();
      $node->type = "page";
      $node->title = $title;
      $node->language = LANGUAGE_NONE;
      $node->uid = 1;
      $node = node_submit($node);
      node_save($node);
    }
    drupal_set_message(t('Created ' . $repeat . ' node with title ' . $title));
  }
}

Feel free to drop any queries or concerns related to this blog post. For Drupal web development, we are always ready to help :) Stay tuned!

Oct 28 2019

Lazy load images in Drupal with BLazy

Recently, we were involved in a local project, Ecoparker.com. It is a directory of restaurants, cafes, entertainment, services, real estate ... in Ecopark Hanoi, Vietnam. This site is based on our best-selling directory theme, BizReview.

On this site, there is a page which lists all kindergartens around the area. It has 20 listings and will continue to grow. It is a very typical page built with Drupal views.

Kindergartens at Ecopark

Out of curiosity, we ran a PageSpeed Insights test (a Google-provided test for assessing how fast your page loads) to see how it performs.

The score on Desktop was 75, which is quite good. But let's see how we can improve it.

Page speed test - before

Scrolling down to the Opportunities section, which suggests ways to help your page load faster, we see an interesting item, "Defer offscreen images", with the suggestion:

"Consider lazy-loading offscreen and hidden images after critical resources have finished loading to lower time to interactive."

Page speed test - suggestions

Lazy loading is a technique that only serves content when it becomes visible to users, i.e., when users scroll to it. If it is off screen, we don't load it, to save bandwidth. It is especially useful when your site contains a lot of images, so we don't have to load them all every time a user browses the page. Only when the user scrolls do the images load and become visible.

This brought up the question of how to lazy load images in Drupal.

We had a look at the Blazy module, because it was a prerequisite of another module on our site. Previously, we hadn't been curious about what it does. It turns out to be a very popular module, with 30K+ sites reporting using it.

Looking at it in more detail, this module is very promising:

On private benchmarks, Blazy saves a page with lots of images from 14MB to 3MB, 200 http requests to 20, loading time 30s to 3s. Elevating performance grade from F/E to A/B via gtmetrix. Overall ~5-10x better.

On the description page, Blazy offers:

  1. Blazy as field formatter
  2. Blazy filter as HTML filter (on your CKEditor)
  3. Blazy Grid as Views style

That's all we need, so we started to get our hands dirty.

1. Install module:

Install and enable the module as usual, via the admin interface or composer: https://www.drupal.org/docs/8/extending-drupal-8/installing-drupal-8-mod...

Note: if you use the Slick module for slideshows, it requires Blazy 1.0. But to get Blazy as an HTML filter, you need the 2.0 version.

2. Blazy configuration:

Blazy configuration is available at /admin/config/media/blazy. There are a bunch of options, like a custom placeholder image, offscreen viewport, etc. You are good to go with the default settings.

Drupal Blazy UI configuration

3. HTML filter

Just go to /admin/config/content/formats, edit the Full HTML format, and enable the Blazy filter.

Drupal Blazy filter as HTML filter

The Blazy filter will then be applied to your HTML content automatically, i.e., images and iframes will be lazy loaded.

4. Lazyload iframe

On our project, we found that image lazy loading works properly, but iframes do not work by default.

After some investigation, we found a workaround by manually forcing iframes to lazy load.

First, you need to turn off the Video lazy-load option on the Blazy settings tab of the Full HTML filter.

Drupal Blazy, turn off iframe at HTML filter

After that, when you paste an iframe into CKEditor, for example a YouTube video iframe:

[embedded content]

You need to edit the HTML to add class and data-src, like this:

class="b-lazy" width="560" height="315" data-src="https://www.youtube.com/embed/uLcS7uIlqPo" frameborder="0" allow="accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture" allowfullscreen>

Then iframes will be lazy loaded properly.

5. Views

We edited the View page: on the Image field, there is a new image format. Choose Blazy, set the image style, and save.

Drupal Blazy on Views

All images on this View-based page are now lazy loaded. If you scroll fast enough, you will see the placeholders blink first, and then your images will show. Awesome!

After that, we ran the PageSpeed Insight test again.

Page Speed test - after

As you can see, the issue with "Defer offscreen images" is gone. The score has risen to 81, which is slightly better. That's what we need.

In conclusion, please consider applying the lazy-load technique to all of your Drupal sites, as it is highly recommended for a high-performance site.

Jun 11 2019

We have been an active Drupal community member for the past 6+ years, with 7+ Drupal projects supported, 5000+ successfully delivered international projects, and 500+ international Drupal projects - out of which 100+ are Drupal migration projects. Hence, we can help you migrate your current Drupal 6/7 site to Drupal 8, and in a way that you will not have to spend a single penny migrating to Drupal 9 in the future. There is a bunch of rational reasons to back this statement and offer of ours, which we would like to share with you:
 

  • Change in Drupal Philosophy
    Previously, every Drupal upgrade was considered tedious and more of a technical task compared to counterpart CMS platforms. That changed with Drupal 8, which was created with a philosophy of bridging the gap between the technical developer and a layman-like admin. Continuing this philosophy of positive change, Drupal 9 is going to bridge the upgrade gap by introducing compatibility between the older and newer versions - making the entire process effortless and inexpensive.
     

  • Upgrade-based Modules
    The compatibility between the older and newer versions of Drupal largely depends on the modules and themes used while building the older version. Unless these modules and themes are upgraded, migration is a time-consuming and tedious task that requires technical assistance. This has changed with the change in the content upgrade path, which makes the migration easier if you are prepared.
     

  • Drupal Core Deprecating Policy
    Drupal 8 is capable of introducing new APIs and features to replace the old ones. And once these new ones are launched, the old ones get deprecated. Though these old APIs cannot be removed in a minor release of Drupal 8, they will be removed in the next major version, Drupal 9. Hence, if you migrate to Drupal 8 now, the migration to Drupal 9 can easily be done with just a handful of changes to make your code compatible.
     

Looking at the above three major reasons, it must be clear to you that migrating to Drupal 9 from Drupal 8 is far easier compared to migrating from Drupal 6/7 to Drupal 9. Dries Buytaert, the founder of Drupal, has also shared similar information about the planning being done for Drupal 9. According to him, Drupal 9 is basically built in Drupal 8 instead of on a different codebase altogether. This implies that the new features are added as backward-compatible code and experimental features, which means that once the code is stable, the old functionality will be deprecated.
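As a concrete illustration of this deprecation policy, entity_load() was deprecated during the Drupal 8 cycle and removed in Drupal 9, while its replacement works in both versions:

// Deprecated in Drupal 8, removed in Drupal 9.
$node = entity_load('node', 42);

// Replacement, works in both Drupal 8 and Drupal 9.
$node = \Drupal::entityTypeManager()->getStorage('node')->load(42);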
 

Dries, in his blog on ‘Plan for Drupal 9’, has cited contributed module authors as one of the core reasons behind the easy migration from Drupal 8 to Drupal 9. He says that these module authors are already well-equipped with the upcoming technologies of Drupal 9 and hence can work ahead of time in a manner that is Drupal 9 compatible. AddWeb, being one of these contributing members of the community, can assure you of an easy and inexpensive migration to Drupal 9 as and when it arrives.
 

Why Vouch for Drupal 9?
Now, after grasping all the above information regarding the upcoming major release of Drupal 9, you must be wondering what's in Drupal 9 to vouch for. Let us throw some light on this to bring some clarity. Drupal 9 is all about eliminating the use of deprecated modules and APIs. Drupal 8, which depends on Symfony 3, will reach its end of life by November 2021. Hence, it is highly advisable to upgrade and avail yourself of all the latest benefits!
 

Concluding Words:
As expert #Drupal-ers and active community members, AddWeb is all set to offer you this amazing opportunity to migrate from your current Drupal 6/7 site to Drupal 8, in a way that the future migration to Drupal 9 will be super easy and inexpensive. Share your details with us here and let our Drupal migration experts get back to you. In case of any queries or suggestions, feel free to get in touch with us!

Apr 29 2019

Advantages of govCMS:

The Australian Government created the govCMS distribution by combining Drupal core and a specific set of Drupal modules, so that uniformity is maintained across all the Australian Government's websites and creating them also becomes easy. Let us see how else the govCMS distribution proves to be advantageous:

Cost-Effective

Hosting and building sites individually demands time and money, and the higher the required security and quality, the higher the cost. govCMS saves on both fronts and simplifies the process by using a single provider and hosting platform, the Acquia Cloud Site Factory PaaS service. Whenever resource usage grows, the platform can be upgraded, which ultimately benefits all the other govCMS sites as well.

Government Standards Compliance

govCMS is built to comply with Federal Government standards, which makes everything downstream much smoother. Security is one of the major concerns when creating websites for government organisations, and govCMS addresses it by completing the Information Security Registered Assessors Program process. Plus, every issue that is found and fixed in govCMS is automatically rolled out to the other govCMS sites too.

Software Maintenance

Drupal is one of the largest open-source platforms available today, and a large team of some 600 expert community members works on making govCMS a consistently growing and highly efficient product. Maintenance is handled by Acquia, which also provides 24x7 assistance for govCMS at the application and hardware level.

Security Compliance

govCMS sites need constant confirmation that nothing is broken, which requires a continuous process of testing, bug fixing and related checks. govCMS takes good care of this, keeping the security of the platform intact. In fact, an automated testing process has been set up across the entire network using Behat.

Responsive

In this day and age, a website that is not responsive is outdated. Fortunately, govCMS ships with a base theme that is responsive by default, which helps developers quickly create custom themes. It also establishes a standard look and feel across Australian Government websites, which translates into a user-friendly experience.

Accessibility Compliance

Every Government website needs to comply with the Web Content Accessibility Guidelines (WCAG) 2.0 AA. WCAG 2.0 AA was therefore a foundation for the base theme, so it fully complies with Government standards, which also makes the sites more user-friendly. govCMS additionally provides a list of accessible elements, available from content pages or the WYSIWYG editor.

Data Retention

Backups are a critical part of any website, and for a Government website the stakes are even higher. govCMS is built in full compliance with National Archives of Australia standards, which includes around 7 years of data retention on backups, a significant benefit for govCMS-based Australian Government websites.

govCMS comes close to perfection as a platform for building Government websites: it is highly secure, affordable and effective! AddWeb is glad to have built a website on the govCMS platform in full compliance with WCAG 2.0 and government standards, and we are fully equipped to take on more govCMS-based projects. If there is anything specific you wish to learn about govCMS, feel free to write to us and we will be happy to cover it in future blogs.

Mar 27 2019
Mar 27

What is govCMS?

The govCMS distribution is available for both Drupal 7 and Drupal 8. It provides installation profiles for Australian government websites, is actively managed on GitHub (https://github.com/govCMS/govCMS8), and its features are maintained from https://www.govcms.gov.au/.
 

To work with govCMS, we have to take a few constraints into account when creating a site on the platform.
 

govCMS supports only a limited set of modules, and we have to stick to that list. Anything else has to be achieved with Twig templates and preprocess functions, because custom modules are not allowed either.
 

Here is the list of modules that they support: https://www.govcms.gov.au/govCMS-d7-modules
 

Unfortunately, fewer modules are supported for Drupal 8 than for Drupal 7, but we can raise a support request with the community, and if it is valid, the module can be added to the SaaS platform.
 

Our Experience about working with govCMS

We worked with govCMS for a client from Australia who had been working with another agency before we met. Site development had already started with them when, fortunately, he realised the quality of the work delivered so far by that agency.

The previous agency had added lots of contrib modules and even created custom modules to achieve some functionality, but as the govCMS platform does not support these, we had to throw out the existing implementation and start from scratch.

For every feature built with custom or unsupported contrib modules, we had to find alternatives and get things done using only supported modules, preprocess functions and Twig alterations. Along with that, we had to make sure the site was WCAG compliant, as government websites must be. In the end, we delivered everything to the client's expectations within the boundaries of the govCMS restrictions.


 

More details about what is govCMS

Drupal gets big in the Australian Government
 

With almost half of Australian Government departments now running Drupal, and hundreds more sites live within various agencies, Drupal has transformed the way government websites are built and managed.
 

Drupal 7: https://www.drupal.org/project/govcms

Drupal 8: https://www.drupal.org/project/govcms8
 

The aim is to provide a single solution for unclassified websites, using a common codebase and a shared feature set on scalable and secure infrastructure.

The govCMS distribution is supported as SaaS by amazeelabs in collaboration with the govCMS community, and it supports several contributed modules; here is the list of modules that can be used with the SaaS platform: https://www.govcms.gov.au/govCMS-d7-modules.
 

Workflows and Ahoy

It is interesting to see that the .ahoy.yml is just a set of command shortcuts, which is similar to the scripts section of a composer.json. Every implementation can be smoothed over by a single Ahoy command, and the underlying implementation can evolve without the developer even noticing.

Speculating, I think the hardest part about adding Ahoy commands will be naming them. Even then, the govCMS team will have the luxury of focussing on the "SaaS govcms 8 on Lagoon" use case, rather than something like BLT which attempts to have commands for "any Drupal anywhere".
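For readers unfamiliar with Ahoy, an .ahoy.yml is roughly the shape sketched below. This is a generic, hypothetical example; the command names and docker-compose service names are assumptions, not taken from the actual govCMS tooling:

ahoyapi: v2
commands:
  login:
    usage: Print a one-time login link for the local site.
    cmd: docker-compose exec app drush uli
  logs:
    usage: Tail the web container logs.
    cmd: docker-compose logs -f web

Each entry maps a short name to a usage string and an underlying command, which is what makes the file comparable to the scripts section of a composer.json.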

Feb 20 2019
Feb 20

Hi everyone,

we are excited to share a few program updates on Drupal Mountain Camp as the team behind the scenes is working hard preparing the last bits before the conference in just 2 weeks.

We are extremely grateful for all the quality session submissions people have submitted. The full schedule over 4 days includes 9 workshops, 2 keynotes, 4 featured sessions and 42 regular sessions in 3 different tracks. 

Besides the already promoted keynotes, we would like to highlight the following featured sessions:

Thanks to the collaboration with the Drupal Recording Initiative by Kevin Thull, we'll be able to provide video recordings for you after the conference.

Contribution is a key topic for Drupal Mountain Camp. Make sure to sign-up for one of the 7 different initiatives or propose your own using our contribution sign-up sheet.

We also updated our social events page so you can start preparing for some fun in the snowy Swiss mountains.

So far, more than 95 tickets have been sold. Regular tickets are available for CHF 120 until 1st of March, afterwards we sell tickets for CHF 140.

We are looking forward to seeing you at Drupal Mountain Camp in Davos, 7-10 of March 2019.

Josef / dasjo on behalf of the Drupal Mountain Camp team.

Jan 31 2019
Jan 31

Do you have a Drupal 8 website that you have migrated from Drupal 4 to 5 to 6 to 7 and then to 8 (or anywhere along this timeline)?

Do you have a Drupal 8 website that is not managed with Composer? (Maybe you installed the site manually?)

If your answer is Yes to any of the above, then this quick guide tutorial is for you. This tutorial will explain in 10 easy and proven steps how to take a Drupal 8 website and get it to be managed under Composer. 

Read how Drupal 8 uses Composer.

For this tutorial to be effective for you, you must take care of all the Assumptions I mentioned below BEFORE you start.

Assumptions

  • You have a Drupal 8 website and it is under GIT version control (using master branch).
  • You have not hacked core (but if you did note where you did it).
  • You have contrib modules and/or themes in your code base.
  • Any custom module or custom theme is in their usual Drupal folder (for e.g. drupal_root/modules/custom_module OR drupal_root/themes/custom_theme)
  • You have updated all Drupal core, contributed modules and themes to their latest stable version. (This is important for Step 6 to go smoothly)
  • For this tutorial code base means your Drupal root directory
My current code base looks like this.

Step 1

Backup the following folders:

  • your public files folder

  • database

Do not worry about the rest of your code base because it is under version control.
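As a quick sketch, assuming you run it from your Drupal root, your public files live in sites/default/files, and Drush is available, the backups could be taken like this:

# Back up the public files folder
tar -czf ~/files-backup.tar.gz sites/default/files

# Back up the database
drush sql-dump > ~/db-backup.sql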

 

Step 2

Git clone your code base locally and create a new branch off your master branch (or whatever branch your live repos sits on).

Commands

git clone [your-repository-url]

Now change directory into your newly cloned git repos.

git checkout -b composerize-drupal master

composerize-drupal is just the name of the branch we will be working in for this tutorial so we don't mess with our master branch. Once we are successful, we will merge this branch back into master.

 

Step 3

Note down the names of the contributed modules/themes that you currently use. It is ok to skip this step because your git version control (master branch) will have these folders still so you can refer to it from there through the git web interface.

But here are the Drush commands if you wish to do it this way:

To list all the contributed module: drush pm-list --type=Module --no-core --status=enabled

To list all the contributed themes: drush pm-list --type=Theme --no-core --status=enabled

If you hacked core or made custom patches, note down where these hacks/patches are in your codebase (again it should be viewable in your git interface in the master branch so you do not need to actually write this down)

Step 4

In your local branch, delete everything, including all files that begin with "." (dot), from your Drupal root EXCEPT the .git folder. Your code base should now contain only the .git folder.

Now you can stage and commit this change to the current branch composerize-drupal:

git add -A

git commit -m "Remove all files and folders except .git"

 

Step 5

Install a new version of Drupal using Composer:

composer create-project drupal-composer/drupal-project:8.x-dev composerized-drupal --no-interaction

The name of the folder Composer will install Drupal into is composerized-drupal.

Now you need to move ALL of the files and folders from the composerized-drupal folder into the root of your original code base, leaving the composerized-drupal folder empty. Don't forget to move all the files that begin with a dot.

So from your code base root:

mv composerized-drupal/* ./

mv composerized-drupal/.* ./

rm -rf composerized-drupal

Now your code base should have the default Drupal composer files like Figure 2.

Figure 2

Now it’s a good time to commit these changes.

git add -A

git commit -m "Added default Drupal composer files"

 

Step 6

For each contributed module, theme or profile from your old site you need to add it to the current code base by using composer such as:

composer require drupal/adsense drupal/backup_migrate drupal/codefilter drupal/ctools drupal/insert drupal/pathauto drupal/rules drupal/sharethis drupal/tagadelic drupal/token drupal/typed_data

Composer will then fetch the latest stable release and put it in the correct directory automatically.

Now it’s a good time to stage and commit your changes.

git add -A

git commit -m "Add contrib projects to the codebase."

Step 7

For any of your custom code:

  • Place modules in drupal_root/web/modules/custom/

  • Place themes in drupal_root/web/themes/custom/

  • Place profiles in drupal_root/web/profiles/custom/

Visit this link if you use 3rd party libraries.

If you made customizations to .htaccess or robots.txt, here is how to apply those changes in a composer managed Drupal site. (Don't forget to commit your changes to composer.json)

If you patched core, contributed modules or themes see this link to patch with composer.
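For reference, a hedged sketch of what such a patch declaration typically looks like, assuming the cweagans/composer-patches plugin that the drupal-composer/drupal-project template includes (the description and patch URL below are placeholders):

"extra": {
    "patches": {
        "drupal/core": {
            "Describe the issue being patched": "https://www.drupal.org/files/issues/example.patch"
        }
    }
}

The plugin applies the listed patches whenever the corresponding package is installed.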

NB - Do not just randomly add your customizations in without letting composer know about it. If you do this, your changes will be lost next time you run composer install or composer update.

 

Step 8

TEST, TEST, TEST!

Restore a copy of the LIVE database to your local machine and visit the URL of the local site in your browser.

Drupal will automatically go to the install step. Follow through with the process and fix any errors along the way.

Clear the cache: drush cr

You may have to find other ways to clear Drupal cache if Drush does not work at this point.

Check the status reports and click around the site. Any errors you find, fix it and commit it.

The status page may show that Drupal database updates are available. Update the Drupal database if you need to.
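For example, with Drush:

drush updb -y

drush cr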

Usually I visit admin/config/media/file-system to double check the public and private folder paths are set correctly.

NB - your custom files folder will be missing from your code base at this point. You can copy the files folder locally to drupal_root/web/sites/default/files if you need it for testing purposes but this will be taken care of in Step 10 on the LIVE site.

At the end of this step, drush status and drush cr should return clean results with no errors.

 

Step 9

This is the big step where you push the changes up to the master branch.

So push your changes up: git push -u origin composerize-drupal

Now we want to merge our composer branch into master:

git checkout master

git merge --no-ff composerize-drupal

Delete the composer branch as we no longer need it:

git branch -d composerize-drupal

Push the changes up:

git push -u origin master

 

Step 10

Now we need to pull in the changes on your LIVE server. Your LIVE site will be down for a few minutes in this step so you should do this at an off-peak time. In the event that something drastically goes wrong, you would need to revert the commit with the merge and restore your database and files folder.

On LIVE:

Change your web server document root to point to code_base/web, since Composer has placed all the Drupal files into a sub-directory named "web". This is a very important step.
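As a hedged example, assuming Apache and placeholder paths and hostname, the virtual host change would look something like this:

<VirtualHost *:80>
    ServerName example.com
    # Point the document root at the "web" sub-directory Composer created.
    DocumentRoot /var/www/code_base/web
    <Directory /var/www/code_base/web>
        AllowOverride All
        Require all granted
    </Directory>
</VirtualHost>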

Then

git pull

composer install --no-dev

After running the above command, if you had previously made customizations to core, contrib modules or contrib themes, and you didn't tell composer about it, they will be lost like I said in Step 7.

At this point you may notice your original sites folder is still at code_base/sites. You can delete the folder completely because the web/sites folder will be in use now.

Copy your custom files folder to code_base/web/sites/default/files
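For example, using rsync so permissions and timestamps are preserved (the source path is a placeholder for wherever your backup lives):

rsync -a /path/to/backup/files/ code_base/web/sites/default/files/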

Now visit your site and repeat Step 8 (except restoring a copy of the LIVE database to your local as you would be on LIVE now).

I hope this tutorial is helpful to you. You can leave a comment if you had any edge cases and the solutions that you found.

Jan 16 2019
Jan 16

Happy 18th birthday to Drupal!

                      

We’re all engrossed in celebration mode, and the festive kite-flying game was on point. In fact, everyone around was painted in the same mood, cheering and celebrating the spirit of flying. But our zest for celebration was a notch higher, for our reason to celebrate was doubled: 15th January is not merely the date of the kite-flying festival, it is the very day our dearest of all, Drupal, came into existence!

Eighteen years ago on this very day, the very first version of Drupal, 1.0.0, was released by its founder, Dries Buytaert. And as with every other path-breaking change in the world, this one too came with a lot of faith and a humble approach towards its future. Look how proficiently it has grown in all these years: today it is one of the largest and most trusted open-source communities, and the future looks even brighter.

In the age of data threats, Drupal is trusted worldwide for its security. Constantly working to strengthen the open-source community, Drupal has never compromised on security, content or scope. It is also known for its powers of personalisation and flexibility. Drupal Commerce is the preferred choice when it comes to building a simple-looking e-commerce platform with complex functionality. And if that was not enough, the launch of decoupled Drupal has blown the tech world away!

We might sound a little biased here, but we’re speaking nothing but the truth. Everyone from ‘The Beatles’ to ‘Estee Lauder’, ‘Columbia University’, ‘NBC Universal’, ‘NBA’, ‘Paramount’ and many more have trusted and adopted Drupal for years now. Dries put it rightly in his birthday note for Drupal, and let us conclude our own birthday note on the very same note -

Jan 10 2019
Jan 10

Preview

Introduction

Drupal Mountain Camp brings together experts and newcomers in web development to share their knowledge in creating interactive websites using Drupal and related web technologies. We are committed to uniting a diverse crowd from different disciplines: developers, designers, project managers, as well as agency and community leaders.

Drupal Mountain Camp Group Picture

Keynotes

The future of Drupal communities

For the first keynote, Drupal community leaders such as Nick Veenhof and Imre Gmelig Meijling will discuss successful models for creating sustainable open source communities and how we can improve collaboration in the future to ensure even more success for the open web. This keynote panel will be moderated by Rachel Lawson.

Drupal Admin UI & JavaScript Modernisation initiative

In the second keynote Matthew Grill, one of the Drupal 8 JavaScript subsystem maintainers, will present about the importance and significance of the Admin UI & JavaScript Modernisation initiative and Drupal’s JavaScript future.

Drupal Mountain Camp Attendee

Sessions

In sessions, we will share the latest and greatest in Drupal web development, as well as learn from real-world implementation case studies. Workshops will let you grow your web development skills in a hands-on setting. Sprints will show you how contributing to Drupal can teach you a lot while improving the system for everyone.

Swiss Splash Awards

As a highlight, the Swiss Splash Awards will determine the best Swiss Drupal web projects selected by an independent jury in 9 different categories. These projects will also participate in the global Splash Awards at DrupalCon Europe 2019.

Splash Awards 2019

Location

Drupal Mountain Camp takes place at Davos Congress. As tested by various other prominent conferences and by ourselves in 2017, this venue ensures providing a great space for meeting each other. We are glad to be able to offer conference attendees high quality equipment and flawless internet access all in an inspiring setting. Davos is located high up in the Swiss alps, reachable from Zurich airport within a beautiful 2 hours train ride up the mountains.

The camp

The Drupal Mountain Camp is all about creating a unique experience, so prepare for some social fun activities. We’ll make sure you can test the slopes by ski and snowboard or join us for the evening activities available to any skill level such as sledding or ice skating.

Drupal Mountain Camp Davos

Tickets

Drupal Mountain Camp is committed to be a non-profit event with early bird tickets available for just CHF 80,- covering the 3 day conference including food for attendees. This wouldn't be possible without the generous support of our sponsors. Packages are still available, the following are already confirmed: Gold Sponsors: MD Systems, platform.sh, Amazee Labs. Silver: soul.media, Gridonic, Hostpoint AG, Wondrous, Happy Coding, Previon+. Hosting partner: amazee.io.

Key dates

  • Early bird tickets for CHF 80,- are available until Monday January 14th, 2019

  • Call for sessions and workshops is open until January 21st, 2019

  • Selected program is announced on January 28th, 2019

  • Splash Award submissions is open until February 4th, 2019

  • Regular tickets for CHF 120,- end on February 28th, 2019 after that late bird tickets cost CHF 140,-

  • Drupal Mountain Camp takes place in Davos Switzerland from March 7-10th, 2019

Join us in Davos!

Visit https://drupalmountaincamp.ch or check our promotion slides to find out more about the conference, secure your ticket and join us to create a unique Drupal Mountain Camp 2019 - Open Source on top of the World in Davos, Switzerland March 7-10th, 2019.

Drupal Mountain Camp is brought to you by Drupal Events, the Swiss Drupal association formed to promote and cultivate Drupal in Switzerland.

Jan 03 2019
Jan 03

In this article we are going to look at how we can render custom images using image styles in your Twig template in Drupal 8. By custom images, I mean images that originate from a custom module/theme (not an image originating from a field entity/node). In other words, the image would not even exist in "public://your-image.png" but rather it might exist somewhere in your custom module or theme.

In YOUR_THEME.theme file, put at the top:

use Drupal\image\Entity\ImageStyle;

Then, depending on which Twig template you want to use the image style, you need to create the appropriate preprocess function. In this example, let's use theme_preprocess_page

So in YOUR_THEME_preprocess_page:

function YOUR_THEME_preprocess_page(array &$variables) {
  // Load the image style and build a derivative URL for the theme-provided image.
  $style = ImageStyle::load('custom_style');
  $variables['rounded_image'] = $style->buildUrl(drupal_get_path('theme', 'YOUR_THEME') . '/images/your_image.png');
}

Then in your Twig template (for e.g page.html.twig):
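A minimal sketch of what that markup could look like, using the rounded_image variable set in the preprocess function above:

<img src="{{ rounded_image }}" alt="My custom image" />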

Dec 31 2018
Dec 31

Past is a place, thoroughly familiar and yet the experience of revisiting it varies drastically. Revisiting some leaves you with happy nostalgia, some with innocent laughter, some with a moment of pride, and some, a prick of sadness or regret. But yet we choose to visit this place called ‘past’ through our memory, time and again. In fact, we recently did so by revisiting the year 2018, like many others, that is about to end in just a handful of hours. And fortunately, it was filled with a host of happy moments to rejoice and relish, topped by several breakthrough changes and chances we’ve embraced with all love & warmth.


The year 2018 has been the most eventful year of AddWeb's journey, counting right from the moment of moving into an altogether new office space to officially becoming a supporting partner of Drupal.org, and everything in between. It's a journey no less than a cinematic experience, with all the drama, emotions and a heroic ending full of catharsis. Let us take you through this marvelous journey, as experienced by AddWeb-ians.

Welcoming 2018 - The Journey Begins, Quite Literally!

Dec 21 2018
Dec 21

Continuous Delivery: a trending term in the world of technology. Continuous Delivery (CD), along with Continuous Integration (CI), is becoming a popular term even among non-technical people, and every IT company is seeing a flood of clients demanding both. These techniques, CI/CD, are closely associated with the quality-oriented work methodologies Agile and DevOps. And so are we!

Team AddWeb has for years been associated with and following Agile and DevOps, just as we're associated with Drupal. No wonder we have been ardently practising continuous delivery with Drupal for years now. So let us first throw some light on this popular concept of 'Continuous Delivery with Drupal'.

What is Continuous Delivery?

Continuous Delivery is the process of automatically deploying changes made in the development stage through to production. Delivery only happens once all unit tests pass and coding standards are met. When code is merged from the development branch into the stage branch, the stage environment is updated automatically with the help of Jenkins and a git webhook that is triggered by the merge. A similar automatic delivery process is followed for the production site: code is merged from the stage branch into the master branch and then deployed to the production servers.

Team AddWeb, as mentioned above, has been a persistent follower of CD with Drupal via Jenkins, Ansible, and RocketChat. We believe CD and CI hold so much significance today because you define repetitive tasks once and then, on every build, the same steps run to deploy the new changes. And since we speak so much of its importance, let us also share the tools, block diagram and process that we at AddWeb follow for Continuous Delivery.

Nov 16 2018
Nov 16

In February 2017 the Drupal Mountain Camp in Davos was held for the first time. More than 120 experts from 17 countries came to the Swiss mountains and experienced a unique weekend. The event was a huge success with many highlights.

Keynotes

Both Laura Gaetano's (Travis Foundation) keynote about the open source community in general and Preston So's (Acquia) keynote about the future of Drupal with API first were well attended, very interesting and led to lively discussions.

Drupal Mountain Camp Davos

Discussions in the venue

Laura Gaetano on stage

Preston So on stage

Sessions

There were sessions by more than 30 different speakers with topics from the Drupal world from different areas like sitebuilding, frontend & design, business & showcases, coding & development. Drupal Mountain Camp featured presentations on Drupal 8 in general, Commerce, Translation Management, GraphQL, Media, Paragraphs and much more.

Session at the Mountain Camp

Sessions in Davos

Sprints

Sprints were organised in several rooms, where both beginners and experts met to code and develop ideas together.

Developers sprinting

Sprinting at the Mountain Camp in Davos

Social Events

After the varied, interesting and intensive daily program, various events took place in the evening at which the "campers" could experience a great time. They joined the skating rink, sled down the mountain, attended a game of the local ice hockey team, ate a cheese fondue and/or enjoyed the slopes on skis or snowboard.

Sledging in Davos

Fondue in Davos

Also, the venue and its food were amazing.

All in all it was a great weekend! Thanks to all our great sponsors that made this possible!

Read more reports from camp participants:

All pictures by Josef Dabernig @dasjo

Nov 03 2018
Nov 03

Team AddWeb has worked for a distinctive list of industries, from hospitality to technology, and from retailers to an online lottery purchase website. Yes, we recently collaborated with a Japan-based company to build their online lottery purchase website using Drupal 8. We've been Drupal-ing since before our inception and have been active members of the global Drupal community. Our association with and experience of Drupal was the basis of the client's immense faith in us, and we knew we were going to live up to it.

About the Project
The client's requirement was to build a website for them in Drupal 8, which is basically an online lottery purchase system. For reasons of confidentiality we cannot share the name of the company/client, but we would like to share that the experience of working on this project was new and enriching.

 

Major Features/Functionalities
We personally love experimenting with and implementing innovative features to enhance a client's website, and we get a little more excited when it's a Drupal 8 website. We integrated a host of futuristic features into this site too. But since it's an online lottery purchase system, we knew payment gateway integration would be an integral part, so we implemented three payment gateways:

  • GMO Payment

  • Coins Payment

  • WebMoney Payment

The user is an integral part of the online lottery system, so several functionalities are crafted around them. A user can purchase coins through the WebMoney payment method and can buy lottery tickets by choosing any product bundle. A user can also select the quantity of the product or go for the complete set, and pay for either with coins, a GMO credit card or points.

A draw system is used to select the lottery winner. Besides the lottery prize, the user also stands a chance to win the Kiriban product. The Kiriban product is based on the product bundle configuration: an additional product that a user receives, as defined by an admin user.

The Problem

Any e-commerce website will have multiple users buying the same product. The backend should therefore update the remaining quantity of the product after every purchase. Issues occur when two or more users place an order at the same time; this is the classic concurrent shopping problem. In our case, the lottery opens only for a specific time window, and the issue showed up as an incorrect remaining quantity. The problem came to our notice when the site went live and around 7-8 users made transactions at the same moment. We immediately started working on the issue.

Challenges Faced:

We quickly picked up the problem and started searching for a resolution. We had built e-commerce websites several times before, so we tried multiple methods to resolve the issue, listed below, but none of them worked in this particular case.

  • Initially, we tried using a Drupal lock to resolve the issue, but in vain.

  • We later used a MySQL lock, but this did not work either, due to multiple quantities being handled inside a for loop.

  • Using a random sleep time also did not work, because it still produced values close to one another rather than distinct ones.

Though random sleep time alone did not work in this case, it gave birth to the final resolution. We made a minor modification to it and divided the sleep time into ranges of 3. Also, to avoid the possibility of any further clashes, we adopted a table of 100 numbers.

The Final Resolution:

After trying a handful of methods, we finally came up with one that worked in our favor. Here are the steps that finally helped us address the concurrent shopping problem we faced:

  • A table consisting of 1 to 100 numbers was taken, with the sleep time by a range of 3.

  • Later, a random number was picked and a flag value for the same was set.

  • Then, a greater number from those numbers with the range of 3 was picked.

Below is the table that was created to bring out the final solution:

Apr 09 2018
Apr 09

Here is a brief account of how we applied the most critical Drupal security update in the past couple of years to the web projects we support and monitor.

As you probably know, our company supports and monitors the performance of several dozen Drupal-powered sites. On 2018.03.21 it was announced that on 28.03, around 22:00 +0300, a critical Drupal security update will be released. Of course, it was absolutely necessary to apply it to all sites for which we are responsible, and do that within the shortest time possible.

As you understand, the web projects we support are not uniform, they are in fact quite different from each other, run different versions of Drupal and occupy different servers. Many sites endured radical changes in their development teams before we undertook their support and performance monitoring.

We tasked our DevOps engineers with developing a solution that allows:
1) applying the security update to all supported and monitored projects within one (1) hour;
2) updating the Drupal core or applying the patches available;
3) backing up sites before applying the updates.

Within a week we developed and tested the solution. We used Ansible, git, and bash. Also, we integrated the solution with our monitoring system.

The critical update was released on schedule. Our specialists checked the changes made to the Drupal core and greenlighted the automated update solution we had developed. Nevertheless, to avoid any problems with the operation of our clients' websites, we did a test first: we ran the automated update for a small group of sites, which included our own projects and test sites. The test run returned a number of issues that were remedied promptly. After that, we ran the update solution for all the supported web projects.

The results:
1) All sites continued to work as usual, our monitoring tools never reported any problems;
2) The entire update procedure took 1 hour, as we had planned (including remedying issues);
3) We now have an excellent solution that automates the uncomplicated but labor-intensive process of applying security updates.

From now on, this automated Drupal update solution will be used for all projects and servers that we support.

 

Mar 09 2018
Mar 09

Hey all! Today, we shall show you some examples of master-slave replication setups.

A bit of theory first

Why do you need replication in the first place? There are at least two reasons to set it up. First off, it is your insurance against downtime when/if your master MySQL server goes down: with replication, the slave server picks up and fills in for the master. Secondly, replication decreases the load on the master server: you use it for writes only and pass read queries to the slave.

The replication process

Nothing really complicated here. The master server writes binlogs that record the operations performed on the database(s), along with a position that advances from the start of the log to the current record. The slave connects to the master, compares positions, reads the changes found in the log from its own position up to the master's position, and then applies those changes (commands) to the databases it hosts.

Master server setup

First off, we change my.cnf on the master server:

#server’s ID goes here
server-id = 1
#and this is where you specify the log’s name and path to it.
log_bin = /var/log/mysql/mysql-bin.log

A comment is due here: by default, master writes binlogs for all DBs, but you can change that with the help of binlog-do-db. The command tells the server to fill logs only when the specified DB is touched, and changes made to other DBs never make it to that log. You could also set logs expiration time and max size (expire_logs_days and max_binlog_size parameters).

Now, we grant a MySQL user permission to do replication:

GRANT replication slave ON *.* TO user@slave_server_ip IDENTIFIED BY "password";

replication slave - allows user to read binlogs.
slave_server_ip -ip of the server the user will connect from.

Next, we restart our MySQL server:

/etc/init.d/mysql restart

and check the master’s status:

show master status;

The reply should contain the binlog’s name and position record it has. When you run queries to DB, the position will change.

Slave server setup

my.cnf should receive the following:

#slave’s ID, must be different from master’s ID.
server-id = 2
#same as the binary log,it is a list of numbered files that contain events describing changes made to the DB
relay-log = /var/lib/mysql/mysql-relay-bin
#index file containing names of all used relay journal files.
relay-log-index = /var/lib/mysql/mysql-relay-bin.index
replicate-do-db = [the DB to be replicated]. 

Important! When you cross DBs, i.e. use one DB but update the data in another, you don’t need to add binlog-do-db to the master’s setting. You want binlogs written for all DBs. Slave’s settings should receive replicate-wild-do-table=db_name.%, (db_name - name of the replicated DB) instead of replicate-do-db.

Restart MySQL server:

/etc/init.d/mysql restart

Switching on the replication

Now, we need to backup master’s DB while keeping the position in the binlog intact. The trick is to first block DB, make it read-only:

SET GLOBAL read_only = ON;

Check the master’s status:

show master status;

Write down File and Position values. The Position should not change now.

Next, we create a master’s dump:

mysqldump -uname -ppassword db_master_name > dump_db 

The parameters are pretty obvious here:

  • name - username,
  • password - password (obviously),
  • db_master_name - name of the DB,
  • dump_db - name for the dump.

Once the dump is done, we need to make the DB writable again:

SET GLOBAL read_only = OFF;

Next, we move the dump to the slave server and restore it there:

mysql -uname -ppassword db_slave_name < dump_db

Replication setup:

CHANGE MASTER TO MASTER_HOST = “master’s ip”, MASTER_USER = "username", MASTER_PASSWORD = "password ", MASTER_LOG_FILE = "log’s name", MASTER_LOG_POS = position;

Again, everything is just what it seems here:

  • master’s ip is the ip of the server that host the master,
  • username is the name of the user created on the master server,
  • log’s name is the File value on the master server as of the moment the dump was made,
  • position is the Position value on the master server as of the aforementioned moment.

Now we start the slave:

start slave;

We can monitor the replication status:

SHOW MASTER STATUS\G

SHOW SLAVE STATUS\G

Security setup for master server

bind-address in /etc/mysql/my.cnf specifies the IP address MySQL server should listen to waiting for connection.

Typically, the value is bind-address = 127.0.0.1. However, once we have the slave server set up, we need to allow connections from that server while keeping local connections working. Bind-address does not allow specifying more than one IP address, so we comment it out to allow connections from all IPs. This is a dangerous situation, which we remedy with the help of iptables:

#first, we allow connection from the slave’s IP
iptables -I INPUT -p tcp -s ip_slave_server --dport 3306 -j ACCEPT
#secondly, we disallow connections from all other IP addresses.
iptables -I INPUT -p tcp --dport 3306 -j DROP

And that is how you do it: we now have two MySQL servers operating as master and slave, which makes the site much more resilient and even speeds up some Drupal-powered sites. One of the next pieces will cover switching between master and slave modes when the master is down. Stay tuned!

Jan 12 2018
Jan 12

In the previous article, we covered How to stay out of SPAM folder? and today we will learn how to secure our Drupal web server.

Setting up Firewall

So, we have Debian OS powering our Drupal web server, and we need to make it secure, adjust everything so as to minimize all risks. First of, we want to configure the firewall. Basic stuff. Our "weapon of choice" here is IPTables.

Initially, the firewall is open, all traffic passes through it unimpeded. We check the list of IPTables rules with the following command:

# iptables -L -v -n

Chain INPUT (policy ACCEPT 5851 packets, 7522K bytes)
pkts bytes target prot opt in out source destination

Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source destination

Chain OUTPUT (policy ACCEPT 320M packets, 19G bytes)
pkts bytes target prot opt in out source destination

All clear. To remove all IPTables rules, we use the following command:

iptables -F

Default IPTables rules

Default rules are useful and convenient. In IPTables, they are set with the help of policies (-P). It is a common practice to drop all packets and have a series of permission rules for specific cases.

The following rule allows dropping all packets:

iptables -P INPUT DROP

Additionally, you can up the security by outlawing forwarded packets, that is, packets routed by firewall to their destination. To do that, introduce the following rule:

iptables -P FORWARD DROP

For loopback, we allow local traffic:

iptables -A INPUT -i lo -j ACCEPT

The following rules make use of the connection tracking (conntrack) match module, which allows checking the state of a connection, such as RELATED or ESTABLISHED. A packet is accepted only when its connection state meets the rule. ESTABLISHED means packets have already been sent through the connection, and RELATED indicates a new connection that was initiated in association with an existing connection.

The following rule allows operation of all previously initiated connections (ESTABLISHED) and connections related to them:

iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

And this rule allows new connections too:

iptables -A OUTPUT -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT

This rule below allows forwarding new, established and related connections:

iptables -A FORWARD -m conntrack --ctstate NEW,ESTABLISHED,RELATED -j ACCEPT

The following couple of rules say that all packets that cannot be identified (and given a state) should be dropped.

iptables -A INPUT -m conntrack --ctstate INVALID -j DROP

iptables -A FORWARD -m conntrack --ctstate INVALID -j DROP

Allowing what needs to be allowed

The next step is to set up the rules that allow the traffic we need and thus ensure correct operation of our web server. We arrange those rules in a separate chain.

First, we create custom chains:

iptables -N my_packets

The following command makes the switch to a user-defined chain:

iptables -A INPUT -p tcp -j my_packets

There are a few restrictions on jumping between chains:

  1. the chain must be created before it is switched to;

  2. the chain must be in the same table as the chain from which the switch is made.

Next, we open the port for SSH. Important: be sure to specify your port if it was changed!

iptables -A my_packets -p tcp -m tcp --dport 22 -j ACCEPT

If SSH is only available to a number of persons using static IPs, it makes sense to set up IP-based restrictions. To do this, we run the following command instead of the previous one:

iptables -A my_packets -s х.х.х.х -p tcp -m tcp --dport 22 -j ACCEPT

with х.х.х.х being the IP from which the connection is made.

Since we have a web server running, we need to allow firewall listening on ports 80 and 443. We do this with the following commands:

iptables -A my_packets -p tcp -m tcp --dport 80 -j ACCEPT

iptables -A my_packets -p tcp -m tcp --dport 443 -j ACCEPT

To complete this rule set, we can add some more rules for specific cases.

Remote connections to MySQL server

If remote connections are allowed at all, it is better to restrict them by IP:

iptables -A my_packets -s х.х.х.х -p tcp -m tcp --dport 3306 -j ACCEPT

Server receives mail

The following set of rules helps in such a case:

iptables -A my_packets -p tcp -m tcp --dport 110 -j ACCEPT

iptables -A my_packets -p tcp -m tcp --dport 143 -j ACCEPT

iptables -A my_packets -p tcp -m tcp --dport 993 -j ACCEPT

iptables -A my_packets -p tcp -m tcp --dport 995 -j ACCEPT

That is all. These rules are sufficient for our web server to work correctly. Other ports and protocols follow the REJECT rule we set up for them, i.e. they do not accept anything:

iptables -A INPUT -p udp -j REJECT --reject-with icmp-port-unreachable

iptables -A INPUT -p tcp -j REJECT --reject-with tcp-reset

iptables -A INPUT -j REJECT --reject-with icmp-proto-unreachable

Compared to DROP, REJECT implies sending the "Port unreachable" ICMP message to the sender. --reject-with option allows changing type of ICMP message; its args are as follows:

  • icmp-net-unreachable — network unreachable;

  • icmp-host-unreachable — host unreachable;

  • icmp-port-unreachable — port unreachable;

  • icmp-proto-unreachable — protocol unreachable;

  • icmp-net-prohibited — network prohibited;

  • icmp-host-prohibited — host prohibited.

    By default, the message has the port-unreachable arg.

You can reject TCP packets with tcp-reset arg, which implies sending an RST message. In terms of security, this is the best way. TCP RST packets are used to close TCP connections.

Setting up fail2ban

Fail2ban is a simple local service that keeps track of log files of running programs. Guided by the rules, the service blocks IPs from which attacks come.

Fail2ban successfully protects all popular *NIX setups (Apache, Nginx, ProFTPD, vsftpd, Exim, Postfix, named, etc.), but its main advantage is the SSH server brute-force protection enabled right after you launch it.

Installing fail2ban

Fail2ban is listed in the repository, so it is very easy to install it:

apt-get install fail2ban

That's it, your SSH is now protected from brute-force attacks.

Setting up fail2ban

First of all, we need to configure SSH protocol protection. It takes finding the [ssh] section in jail.conf file and ensuring the enabled parameter is set to true.

Next, we set up monitoring with fail2ban:

  • filter - the filter used; the default is /etc/fail2ban/filter.d/sshd.conf;

  • action - the actions fail2ban performs when it detects an IP an attack comes from; the available actions are listed in /etc/fail2ban/action.d, so the value of this parameter must be one of them;

  • logpath - full path to the file storing data on attempts to access VPS;

  • findtime - time (in seconds) the suspicious activity lasted;

  • maxretry - maximum allowed number of attempts to connect to the server;

  • bantime - the time the blacklisted ip is banned for.

Important: it is not necessary to provide values for all settings; if you skip anything, the main [DEFAULT] settings (found in the section of the same name) will be applied. The most important thing is to make sure the jail is enabled, i.e. the value of enabled is true.

Making SSH protocol secure

Let's look into details of response settings. Below is an example of fail2ban configuration on the SSH port:

[ssh]
enabled = true
port     = ssh
filter = sshd
action = iptables[name=sshd, port=ssh, protocol=tcp]
    sendmail-whois[name=ssh, dest=****@yandex.ru, sender=****@***.ru]
logpath = /var/log/auth.log
maxretry = 3
bantime = 600

All these lines mean the following: if there are more than 3 failed attempts to connect to the server through the main SSH port, the IP used for authorization is blocked for 10 minutes. The ban rule is added to IPTables, and the server owner receives a notification at the address specified in the dest variable. The notification contains the blocked IP, WHOIS data about it, and the ban reason.

An extra measure to protect SSH implies activating the following section:

[ssh-ddos]
enabled = true
port     = ssh
filter = sshd-ddos
logpath = /var/log/auth.log
maxretry = 2

Configuring site files access permissions

A server cannot be considered safe if file access permissions have not been configured. The following example shows one way to set the owner and permissions on the files and directories of a Drupal-powered site. Here, webmaster is the account that owns the site code and www-data is the user/group the web server runs as.

Permissions setup routine:

#cd /path_to_drupal_installation
#chown -R webmaster:www-data .
#find . -type d -exec chmod u=rwx,g=rx,o= '{}' \;
#find . -type f -exec chmod u=rw,g=r,o= '{}' \;

The www-data user must have write permissions on the files directory, so the "files" directory in sites/default (and in any other site directory in a multi-site setup) gets different permissions. The setgid (s) bit is applied to the directory as well, so that all files the web server creates there inherit the directory's webmaster group rather than the primary group of the user that created the file.

#cd /path_to_drupal_installation/sites

#find . -type d -name files -exec chown -R www-data:webmaster '{}' \;
#find . -type d -name files -exec chmod ug=rwx,o=,g+s '{}' \;
#for d in ./*/files
do
  find $d -type d -exec chmod ug=rwx,o=,g+s '{}' \;
  find $d -type f -exec chmod ug=rw,o= '{}' \;
done

The temp directory also gets different permissions, since the web server needs to write into it.

#cd /path_to_drupal_installation
#chown -R www-data:webmaster tmp
#chmod ug=rwx,o=,g+s tmp

settings.php and .htaccess: special permissions

The settings.php file contains the database password and user name in plain text. To avoid problems, we want only specific users to be able to read it and nothing else. Typically, that means shutting "other" out:

#chmod 440 './sites/*/settings.php'
#chmod 440 './sites/*/default.settings.php'

.htaccess is the Apache configuration file, which gives power over operation of the web server and site settings. Accordingly, only a handful of users should have permission to read the file's contents:

#chmod 440 ./.htaccess
#chmod 440 ./tmp/.htaccess
#chmod 440 ./sites/*/files/.htaccess

Secure configuration of the web server

To make the system as secure as possible, we want to make some changes to the SSH server settings. It is best to run it on a non-standard port, otherwise it will be constantly attacked by bruteforcing bots picking passwords. Same as other Linux distributions, Debian has SSH on port 22. We can change it to, say, 2223. In addition, it would be wise to change the setup so as to allow root's connection with ssh key only. By default, in Debian, root cannot be authorized with only a password over SSH.

It's better to change SSH port before configuring the firewall. If you forgot that, you need to do the following:

  1. Add the rule allowing connection on a new port to IPTables before changing it:

iptables -A my_packets -p tcp -m tcp --dport 2223 -j ACCEPT
  2. Change the SSH server port in # nano /etc/ssh/sshd_config. Find the relevant lines and make them look like this:

Port 2223
PermitRootLogin prohibit-password

PubkeyAuthentication yes
ChallengeResponseAuthentication no

Do not forget to save the changes! Then restart the SSH server:

# service sshd restart

Now, we want to check the changes:

# netstat -tulnp | grep ssh

tcp 0 0 0.0.0.0:2223 0.0.0.0:* LISTEN 640/sshd
tcp6 0 0 :::2223 :::* LISTEN 640/sshd

Everything is fine. The SSH server listens on port 2223, from now on, new connections will go through this port only, and after restarting SSH the old connection will not be dropped.

Attention! Before disabling password authorization for the root user, make sure you have your public key in the /root/.ssh/authorized_keys file. Also, it is wise to have another account to connect to the server with a password.

Dec 30 2017
Dec 30

I presented at the first Drupal NYC Meetup of the year!

I've been hard at work building a new Provision: the command-line interface for Aegir. We are moving off Drush commands into our own CLI built in Symfony. The tool is finally a working MVP, so I figured it's time to spread the word! 

First I give a bit of background on the Aegir project and my goals for the project which can be summed up in one word: easy.

Then, I dive into the code that makes this system possible. If this project has any chance of surviving, I must win over developers to help grow and maintain it! Developer Experience is a high priority.

In the next few weeks, I'll be posting a series of blog posts about the new Provision 4.x, both for end users and developers.

Provision 4.x: Developer Sneak Peak

Aegir's back-end is getting an overhaul after 10 years of service. We're developing a brand new Symfony console based CLI for all of your website management needs: Provision 4.x.

With the power of Symfony console we've completely re-written Provision to be as easy and clear as possible, and flexible enough to work anywhere: cloud or workstation.

Can this new CLI become the defacto dev-test-production website management tool? Is it wrong to write DevOps tools in PHP? Can we keep Aegir going for another 10 years? 

Slides and Video are available.

*Photo credit: https://www.meetup.com/drupalnyc/photos/28451193/467353793/ 

Nov 11 2017
Nov 11

In the previous article, we covered teaching your Drupal installation to send mail to users. But that is only half the battle, now we need to make sure the mail we send hits Inbox and not Spam folder. This article describes some options you have that offer relevant solutions. Unfortunately, no one can guarantee 100% inbox hits, but keeping the amount of mail filtered to Spam to a minimum is quite possible.

You have the following tools to make your mail more trustworthy and thus keep it out of Spam:

  • PTR record;
  • SPF record;
  • DKIM.

PTR

PTR record shows the association of the IP address and your site’s domain name. DNS servers keep A records, which is the association of a domain name and an IP of the server hosting the website going under that name. PTR is the opposite of A: it starts with the IP and returns the domain name. Some call PTR “Reverse DNS”.

To fight spam most mail services check PTR for servers that send incoming mail. Depending on the results of that check, they either put letters to Inbox or filter them out to Spam. Thus, when you have PTR for your server, and it matches the domain name standing after @ in the address occupying the From field, the receiving mail server has more confidence in your mail.

How to add a PTR record

A PTR record can be added by the owner of the IP address of the server that hosts your site. The record makes sense only if you use a dedicated server or a VPS. On a simple shared hosting account you most likely don't need it, since in most cases the record is already there and points to the hosting provider's server name.

How to check a PTR record

There is a number of commands you can run to check the PTR:

nslookup

nslookup -type=PTR ip-address

dig

dig -x ip-address

Use command prompt (terminal) to run any of them; replace “ip-address” in the examples above with the real IP.

SPF record

SPF means Sender Policy Framework. This is an extension for SMTP that allows adding a TXT-type DNS record to a domain name to specify the IP addresses from which mail for that domain may be sent.

SPF is a factor that helps make your mail more trustworthy and less seen in Spam. Also important is the domain reputation that SPF helps protect: when sending spam or phishing letters, plotters can put any address into the From field, which may result in problems for owners of domain names that were used for this purpose. But mail server’s IP address cannot be forged like that, so when you have an SPF record, the receiving server checks it and acts accordingly.

How to add an SPF record

SPF record is a text cooked in a specific way. Here is an example:

"v=spf1 +a +mx -all"

This record says mail may be accepted from the IP addresses specified in the A and MX records of the domain the SPF was added for; if the addresses do not match, the mail should be refused. The record can be shortened to "v=spf1 a mx -all" and still produce the same effect.

SPF record syntax

"v=spf1" — used SPF version.
"+" — accept mail. You can omit this sign.
"-" — refuse mail.
"~" — accept mail but filter it to Spam.
"?" — apply regular rules to mail.
"mx" — IP addresses of all servers specified in MX records of the domain.
"ip4" — this is where IPv4 addresses go.
"ip6" — and this is where you find IPv6 addresses.
"a" — IP addresses specified in A record of the domain.
"include" — allows applying SPF record from some other domain.
"all" — rules for all other domain that have no SPF record.

SPF record example

Let’s dig into the following SPF record:

"v=spf1 mx a ip4:154.56.125.94 a:example.com mx:example.com include:example.com ~all"

mx — accept mail from own mail servers.
a — accept mail from servers that are listed in the A records for own domain.
ip4:154.56.125.94 — accept mail sent from IP 154.56.125.94. Here, you can also specify subnets as follows: 154.56.125.0/24.
a:example.com — accept mail from servers specified in A records of example.com. Here, you can specify subnets as follows: example.com/24.
mx:example.com — accept mail from servers specified in MX records of example.com. Subnets can be specified the same way as for A records.
include:example.com — accept mail following the rules dictated by SPF of example.com.
~all — mail from servers not explicitly allowed by the record will be soft-failed (usually filtered to Spam). Replace the tilde with a minus (-all) to refuse such mail outright.
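Once the record has propagated, you can check what the DNS actually returns (hypothetical domain):

dig txt mydomain.com +short

The reply should contain the SPF string you published, starting with "v=spf1".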

DKIM

DKIM means DomainKeys Identified Mail. It is an authentication method that allows checking if the letter was really sent from the domain specified in the From field. DKIM is an efficient anti-spam and anti-phishing measure.

Making DKIM keys with opendkim-tools

1. First off, you need to install opendkim-tools. Do that by running the following command:

apt-get install opendkim-tools

2. Next, create a dkim directory. This is where the keys will be stored:

mkdir /etc/exim4/dkim

3. Change the owner of that directory from root to Debian-exim:

chown -R Debian-exim:Debian-exim /etc/exim4/dkim

4. The next command generates the public and private keys for mydomain.com:

opendkim-genkey -D /etc/exim4/dkim/ -d mydomain.com -s mymail

Here:
-D — directory for the generated keys;
-d — domain the keys will be used for;
-s — selector name (mymail here), an identifier that can be anything.

As a result, you get two files, /etc/exim4/dkim/mymail.private and /etc/exim4/dkim/mymail.txt, the private and public keys, respectively.

5. Now you need to go to /etc/exim4/dkim/ and rename mymail.private to mydomain.com.key:

cd /etc/exim4/dkim/
mv mymail.private mydomain.com.key

6. Change the owner of mydomain.com.key (the private key file) from root to Debian-exim and restrict its permissions:

chown -R Debian-exim:Debian-exim /etc/exim4/dkim/mydomain.com.key
chmod 640 /etc/exim4/dkim/mydomain.com.key

Done!
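As an optional sanity check, the private key can be validated with OpenSSL (assuming OpenSSL is installed on the server):

openssl rsa -in /etc/exim4/dkim/mydomain.com.key -check -noout

A valid key prints "RSA key ok".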

Setting up DNS

mymail.txt (cat /etc/exim4/dkim/mymail.txt) should contain something like the following:

mymail._domainkey       IN      TXT     ( "v=DKIM1; k=rsa; "     "p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC57fv+meeGTF2gtQ/FO1WAT7hYrPTnKir06k3YR6ZBCLhAVbfEOAZ9OkVTAEf67T61eRY8w8hojnN9dxd07XIZ8KyatNXajWfYo3g0YDWopTfVfoaI4XFXqQH8V6iXyobArpSe3MSTSTqNFuS+w498JoHAkeXXhcl6kmjdSGkPtwIDAQAB" )  ;

----- DKIM key mymail for mydomain.com

This info should be added as a TXT record to the DNS zone. The Name field receives:

mymail._domainkey

The Content (value) field receives the key as a single string, without the quotes from the generated file:

v=DKIM1; k=rsa; p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQC57fv+meeGTF2gtQ/FO1WAT7hYrPTnKir06k3YR6ZBCLhAVbfEOAZ9OkVTAEf67T61eRY8w8hojnN9dxd07XIZ8KyatNXajWfYo3g0YDWopTfVfoaI4XFXqQH8V6iXyobArpSe3MSTSTqNFuS+w498JoHAkeXXhcl6kmjdSGkPtwIDAQAB

Now, you can simply delete mymail.txt.

To check if everything is fine, run the following command:

dig txt mymail._domainkey.mydomain.com | grep DKIM

The reply should look like this:

mymail._domainkey.mydomain.com. 2214 IN TXT "v=DKIM1\; k=rsa\...

Setting up DKIM Author Domain Signing Practices (DKIM ADSP)

To specify DKIM Author Domain Signing Practices (DKIM ADSP), you need to add one more record to TXT DNS:

_adsp._domainkey.mydomain.com IN TXT "dkim=all"

where:

all — all letters from the domain are signed; unsigned letters should be treated with suspicion;
discardable — all letters from the domain are signed; unsigned letters should be discarded at the receiver’s side;
unknown — the domain may sign some or all letters (this is the default).
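To verify the ADSP record after it has been added, query it the same way as the DKIM key (hypothetical domain):

dig txt _adsp._domainkey.mydomain.com +short

"dkim=all"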

Setting up DKIM in Exim

Setting up Exim starts with adding the following lines to /etc/exim4/exim4.conf.template (above the remote_smtp section):

# DKIM:

DKIM_DOMAIN = ${lc:${domain:$h_from:}}

DKIM_KEY_FILE = /etc/exim4/dkim/DKIM_DOMAIN.key

DKIM_PRIVATE_KEY = ${if exists{DKIM_KEY_FILE}{DKIM_KEY_FILE}{0}}

DKIM_SELECTOR = mymail

The config can be broken into smaller files when installing exim4. If that is the case, you need to add those lines to /etc/exim4/conf.d/transport/30_exim4-config_remote_smtp .

Another option is to create the config /etc/exim4/exim4.conf manually. In such a case, the lines mentioned above should be added to this file.
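For reference, these macros only take effect because the remote_smtp transport reads them; recent Debian exim4 templates already do this via .ifdef guards, but if yours does not, the corresponding transport options look roughly like this (a sketch using the macro names defined above):

dkim_domain = DKIM_DOMAIN

dkim_selector = DKIM_SELECTOR

dkim_private_key = DKIM_PRIVATE_KEY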

You need to repeat the actions for each domain and then restart exim:

/etc/init.d/exim4 restart

Checking records in Exim config

To check records in the Exim config, you need to run the following command:

exim -bP transports | grep dkim

The reply should look something like this:

dkim_domain = ${lc:${domain:$h_from:}}

dkim_private_key = ${if exists{/etc/exim4/dkim/${lc:${domain:$h_from:}}.key}{/etc/exim4/dkim/${lc:${domain:$h_from:}}.key}{0}}

dkim_selector = mymail

where dkim_selector is the word before ._domainkey in the public key’s DNS record name.

So, we have added PTR and SPF records and configured DKIM signing on the server. The result of these actions is more letters in the Inbox and fewer of them filtered to Spam.

Oct 05 2017
Oct 05

Once you have the web server set up and running, you need to make sure your Drupal site can send emails, such as registration confirmations, password change notifications, etc.

For inbound email, you may want to use email services like that offered by Google Apps. They offer good spam protection and secure storage.

To send email from a Drupal-powered site, you can use an SMTP module and do it all through a third party mail service. Alternatively, you can set up your own SMTP server. This article describes setting up an Exim server that allows sending emails.

Exim installation and set up

aptitude install exim4
dpkg-reconfigure exim4-config

Once you run the commands, you get a wizard asking you a number of questions.

General type of mail configuration: internet site; mail is sent and received directly using SMTP
System mail name: the Reverse DNS of the server’s IP (e.g., drupal-admin.com)
IP-addresses to listen on for incoming SMTP connections: 127.0.0.1
Other destinations for which mail is accepted: domain name we plan to send emails from. You can enter a number of domains separated by spaces.
Domains to relay mail for: leave empty
Machines to relay mail for: leave empty
Keep number of DNS-queries minimal (Dial-on-Demand)? No
Delivery method for local mail: Maildir
Split configuration into small files? No.

 

After that we can run the following command to check if everything is fine:

echo "This is a short email" | mail -n -s "Sending email" [email protected]ail.com

Replace [email protected] with your actual address and check the inbox; you should receive an email with "Sending email" as the subject and "This is a short email" as the body.

You may want to contact your hosting provider and ask them to change the server name and Reverse DNS to the domain name you send emails from. This helps more of your email land in the Inbox rather than the Spam folder.

Changing Drupal settings

To make Drupal put your desired address into the From field, you need to specify that address in the site’s settings at [your domain]/admin/config/system/site-information
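If you manage your site from the command line, the same value can also be set with Drush; a minimal sketch, assuming Drush 9+ and a hypothetical address:

drush config:set system.site mail "noreply@mydomain.com" -y

Mail sent by Drupal will then use this address in the From field, so make sure it matches the domain covered by your SPF and DKIM records.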
