Nov 05 2018

By Harold Juárez, Full Stack Developer | November 05, 2018

BADCamp 2018 was the first really big event I attended, aside from actively participating in Drupal Camp Costa Rica for three years. Kindly enough, some co-workers who had already attended shared their experiences with me, which set high expectations. In addition, I was excited to sightsee in San Francisco and Berkeley.

After dedicating this year to front-end development, the BADCamp sessions left me more than satisfied, with refreshed knowledge and practices. So I would like to share my experience and the content of the sessions I participated in:

The second day was a highlight: attendees were given challenges and tools, and the dialogue tables enriched my personal experience as I listened to others discuss ways to improve application development.


On Friday, the Pattern Lab sessions were quite interesting, practising the creation of themes without relying on a backend. Although I had used this tool before, the sessions gave me new knowledge to improve its implementation at work.

React and Gatsby's potential to create static sites was also explored, and I learned compelling ways to take advantage of these new tools to improve an application's performance, using React to render the page and Drupal as an API for the data. This approach was presented by my co-worker Jesus in his session, "How to Keep Drupal Relevant in the Git-based and API-driven CMS Era".


On Saturday I attended an accessibility session that showcased tools for people with different types of disabilities; some are paid and some free to implement on a site, depending on the needs of the specific project.

Another talk that caught my attention was about Artificial Intelligence in Drupal: using the Google Cloud Vision API to provide image tagging and face, logo and explicit-content detection through machine learning.

It was a fantastic experience, and I am very grateful to weKnow for helping me attend. It was a great success that I hope to repeat in the near future!



Nov 05 2018

Last year, the Drupal Association decided to take a break to consolidate and not to organize an official DrupalCon Europe in 2018. Twelve community members stepped in and developed a plan to organize a DrupalCon replacement − named Drupal Europe. The final result was outstanding.

More than 1000 Drupalers from all over the world gathered in Darmstadt, Germany, from 10th to 14th September 2018 to attend the biggest yearly European Drupal event. The new event brought a new concept: it featured 10 amazing tracks that guaranteed high-quality content for all of the Drupal target groups − developers, marketers and agencies. Additionally, it created more room for contribution and collaboration outside the session slots.

Official Group Photo Drupal Europe Darmstadt 2018

We supported the event by giving three talks in two different tracks.

Miro Dietiker - Connecting media solutions beyond Drupal

On Tuesday, Miro Dietiker, founder of MD Systems, gave an interesting talk about integrating Digital Asset Management systems with Drupal. He compared the benefits of existing solutions and provided a practical guide to which kinds of solutions you could use to fulfill your media management needs.

More information about the session and the slides can be found at https://www.drupaleurope.org/session/connecting-media-solutions-beyond-drupal, while the session recording is available below.

[embedded content]

On Wednesday, Miloš Bovan held a session about the Paragraphs module and its best practices. The full room proved that people love and use Paragraphs a lot. The talk focused on answering frequently asked questions about working with Paragraphs, covering some new features that are not well known, as well as ideas on how to easily improve the editorial experience.

Miloš Bovan - Enrich your Paragraphs workflow with features you didn’t know about

The session summary is available at https://www.drupaleurope.org/session/enrich-your-paragraphs-workflow-features-you-didnt-know-about.

The conference featured many interesting sessions that provided ideas and actual implementations on how to make the Paragraphs user interface better (Creating an enterprise level editorial experience for Drupal 8 using React, Front-end page composition with Geysir, Improving the Editor Experience: Paragraphs FTW). The discussions about Paragraphs continued throughout the conference and resulted in a BoF on Thursday where Drupalers discussed the future of the Paragraphs UI. We look forward to fruitful collaboration.

John Gustavo Choque Condori - Drupal PKM: A personal knowledge management Drupal distro

John Choque, our colleague, gave a talk about the personal knowledge management distribution he created as part of his bachelor's thesis. The talk gathered people interested in education looking for ideas on how to improve knowledge management in their organizations. The session summary as well as the slides are available at https://www.drupaleurope.org/session/drupal-pkm-personal-knowledge-management-drupal-distro.

[embedded content]

Social activities

Besides sessions, contributions and coding, we enjoyed attending the social events as well. On Tuesday, the organizers brought participants to a Bavarian beer garden to taste traditional German bratwurst and beers. The place was located in a park, and there was no chance of missing the great event.


On Wednesday, we joined our fellow Drupalers from the Swiss community for a dinner. It is always nice to catch up with local people in an international event such as Drupal Europe.

As usual, the event came to an end with the traditional and yet always exciting Trivia Night.

What’s next?

During Drupal Europe, the project founder Dries Buytaert announced that the Drupal Association has signed an agreement with Kuoni, an international company specialized in event organization. As a result, the official DrupalCon Europe returns in 2019, taking place in the capital of the Netherlands − Amsterdam.

We hope to see you all there!

Nov 04 2018

I have spent a lot of time (years) developing the Permissions by Term Drupal module. Now it's time to write a blog post about it.

By default, Drupal only allows you to restrict access to nodes by coupling node content types to user roles. The Permissions by Term module extends Drupal with functionality for restricting view and edit access to individual nodes via taxonomy terms. If you have installed the Permissions by Entity sub-module, access to any other content entity type, such as media entities, can be controlled too.

Taxonomy term permissions can be coupled to specific user accounts and/or user roles. Taxonomy terms are part of Drupal core. Since Permissions by Term uses node access records, every other core system is restricted as well:

  • search results
    • Works well with Search API module search result lists (since PbT version 8.x-2.0)
    • Drupal core search
  • menu items
  • views list items
  • content from all content entity types (nodes, media)

After you install Permissions by Term, you can easily access its configuration:

Then you are able to configure the module to suit your purposes.

Config option: Require all terms granted

By default, users are granted access to content as long as they have access to a single related taxonomy term. If the "Require all terms granted" option is checked, they must have access to all related taxonomy terms to access a node.

Config option: Permission mode

This mode makes nodes accessible (for viewing and editing) only if editors have been explicitly granted permission to them. Users won't have access to nodes matching any of the following conditions:

  • nodes without any terms 
  • nodes without any terms which grant them permission

Config option: Disable node access records

By disabling node access records, PbT won't hide nodes in: 

  • listings made by the Views module (e.g. search result pages) 
  • menus

This setting can be useful if you just want to restrict access on node view and node edit, for example hiding unpublished nodes from editors during a content moderation workflow. Disabling node access records will also save you some time on node save and taxonomy save, since the node access records do not need to be rebuilt.

Then it's time to edit your taxonomy terms. You will see the Permissions by Term fieldset for adding permissions for your users and/or roles. See the extended form:

Afterwards, let's edit a node. You will see that the advanced widget for term-related permissions is filled with information (via AJAX) as soon as you edit any taxonomy-term-related information.

Restricting access to taxonomy terms

Editors won't be able to use taxonomy terms for content relations if those taxonomy terms have access restrictions which deny them access. See the following screenshots:

Securing documents in your filesystem

You can enable the Permissions by Entity module, which is shipped as a sub-module of the Permissions by Term base module. Then access to all content entity types, not only nodes, can be handled via taxonomy terms. A prominent example is a document bundle of the media entity type.

I will show you how you can restrict access to documents in your file system via a document media type bundle.

  1. Enable Drupal's private file system.
  2. Make sure your file field stores its files in the private file system (see the field settings).
  3. Install the Permissions by Entity module.
  4. Implement hook_file_download() by adapting the following code in your custom module's *.module file:

/**
 * Implements hook_file_download().
 */
function fancy_module_media_document_file_download(string $uri) {
  // Requires these use statements at the top of the .module file
  // (adjust the namespaces to your module versions):
  // use Drupal\media\Entity\Media;
  // use Drupal\permissions_by_entity\Service\AccessChecker;
  $files = \Drupal::entityTypeManager()
    ->getStorage('file')
    ->loadByProperties(['uri' => $uri]);

  foreach ($files as $file) {
    $multipleMedia = \Drupal::entityTypeManager()
      ->getStorage('media')
      ->loadByProperties(['field_document' => $file->id()]);

    $oneMedia = array_shift($multipleMedia);

    if ($oneMedia instanceof Media) {
      /** @var AccessChecker $accessChecker */
      $accessChecker = \Drupal::service('permissions_by_entity.access_checker');

      // Deny the download when the access check fails.
      if (!$accessChecker->isAccessAllowed($oneMedia)) {
        return -1;
      }
    }
  }
}
Now requests to https://my-website/private-file-folder/secure-document.pdf will be governed by your permissions. Users without permission will receive an "Access denied" page (HTTP status code 403).

Permissions by Term Redirect

This module builds upon the functionality provided by Permissions by Term in the following ways:

  • registers a subscriber for the event fired by PbT in case of Access Denied
  • sends a redirect to the login form if:
    • the user is anonymous
    • they are trying to directly access the restricted node
  • after a successful login sends a redirect back to the originally requested node

Code quality via automated tests in a continuous integration environment

Drupal.org offers the ability to execute automated tests. However, these tests are not executed immediately after code changes occur; developers must wait until their module's code gets queued. For this reason, I am using Bitbucket as a CI environment. I have learned a lot while developing the Permissions by Term module. I am not only using Drupal tests like PHP unit tests or functional tests (tests which inherit from the KernelTestBase class); I am also using Behat tests, which test the module against a real web browser. Permissions by Term contains JavaScript code, so I am testing the JavaScript code via QUnit, with Webpack plus Babel for packaging.

Thanks

I would like to thank all the people who have contributed to Permissions by Term by sharing bug reports, patches, feature improvements, documentation, spelling fixes and more. I would also like to thank my employer, publicplan, for giving me paid time to work on Permissions by Term during my regular work hours. If you are looking for support for your governmental website, you should really consider publicplan. Furthermore, publicplan is constantly looking for Drupal talent, so do not miss your opportunity.

Feedback?

If you have any feedback in the form of suggestions or questions, I would be happy if you could leave a comment here. The Permissions by Term issue queue is also a good place for sharing information.

Nov 03 2018

Team AddWeb has worked for a distinctive list of industries, ranging from hospitality to technology and from retail to an online lottery purchase website. Yes, we recently collaborated with a Japan-based company to build their website with a lottery purchase system, using Drupal 8. We've been Drupal-ing since even before our inception and have been an active member of the global Drupal community. Our association with and experience of Drupal were the basis of the client's immense faith in us, and we knew that we were going to live up to it.

About the Project
The client's requirement was to build their website in Drupal 8. The website is basically an online lottery purchase system. For confidentiality reasons, we cannot share the name of the company/client, but we would like to share that the experience of working on this project was new and enriching.

 

Major Features/Functionalities
We personally love experimenting with and implementing innovative features to enhance our clients' websites, and we get a little more excited when it's a Drupal 8 website. We integrated a host of futuristic features into this website too. Since it's an online lottery purchase system, we knew that payment gateway integration would be an integral part. Hence, we integrated three types of payment gateway, as follows:

  • GMO Payment

  • Coins Payment

  • WebMoney Payment

The user is an integral part of this entire online lottery system, and hence several functionalities are crafted around them. For instance, a user can purchase coins via the WebMoney payment method and can also buy a lottery ticket from any product bundle. A user also has the option to select the quantity of the product or go for the complete set. The payment for either can be made with coins, a GMO credit card or points.

A draw system is used to select the lottery winner. Besides the lottery prize, the user also stands a chance to win the Kiriban product as a prize. The Kiriban product is an additional product, defined by an admin user, that is based on the product bundle configuration.

The Problem

Any e-commerce website will have multiple users buying the same product. The backend must therefore update the remaining quantity of the product after each purchase. Issues occur when two or more users place an order at the same time: a classic concurrent shopping problem. In this case, the lottery was open for a specific time, and the issue occurred in showing the updated quantity. The problem came to our notice when the site went live and around 7-8 users made transactions at one specific time. We immediately started working on the issue.

Challenges Faced:

We quickly analyzed the problem and started searching for a resolution. We had created e-commerce websites several times before, so we tried multiple methods to resolve the issue, listed below, but none of them worked in this particular case.

  • Initially, we tried using a Drupal lock to resolve the issue, but in vain.

  • We then tried a MySQL lock, but this didn't work either, due to the involvement of multiple quantities inside a for loop.

  • Using a random sleep time also did not work, because it produced nearby values rather than exact ones.

Though the random sleep time method did not work on its own, it gave birth to the final resolution. We made a minor modification to it and divided the sleep time into ranges of 3. Also, to avoid the possibility of any further clash, we adopted a table of 100 numbers.

The Final Resolution:

After trying a handful of methods, we finally came up with one that worked in our favor. Let us share the steps that finally helped us address the concurrent shopping problem we faced:

  • A table consisting of 1 to 100 numbers was taken, with the sleep time by a range of 3.

  • Later, a random number was picked and a flag value for the same was set.

  • Then, a number greater than the max, within the next range of 3, was picked.

Below is the table that was created to bring out the final solution:

‘Flag’ was set to 0 by default and is automatically set to 1 whenever the number is in use.

How it works:

  • At the beginning of the transaction, the max sleep_time will be checked where flag=1.

  • The sleep_time for the first user will be 0.

  • After this, a random number from max sleep_time is selected with a range of 3.

  • The first user’s range is 1-3.

  • In the case of the second user, one number will be skipped after the max sleep_time, and the range will start after that number.

  • If a user gets a max sleep_time of 3, then the range for the random number will be 5-7.

  • If the second user gets the random number as 6 then the random number range for the third user will be 8-10.

  • The flag value will be updated as 1 for this random number.

  • In the end, the flag value of the transaction will be updated with 0.
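The slot-selection logic described above can be sketched roughly as follows; the function name and data layout are illustrative assumptions, not the project's actual code:

```php
<?php

/**
 * Picks the next sleep slot for an incoming transaction.
 *
 * $slots maps slot number (1-100) to a flag: 1 = in use, 0 = free.
 */
function pickSleepSlot(array $slots): int {
  // Highest slot number currently flagged as in use (0 if none).
  $used = array_keys(array_filter($slots));
  $max = empty($used) ? 0 : max($used);

  // Skip one number after the current max, then choose randomly
  // from the next range of 3 (e.g. max 3 => range 5-7).
  $start = ($max === 0) ? 1 : $max + 2;

  return rand($start, $start + 2);
}

// First user: no slots in use yet, so the range is 1-3.
$slots = array_fill(1, 100, 0);
$slot = pickSleepSlot($slots);
$slots[$slot] = 1; // Flag the chosen slot as in use.
```

After the transaction completes, the flag for the chosen slot would be reset to 0, matching the last step in the list above.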

The Final Say:

“All's well that ends well.” And that's exactly what we have to say about this particular project. Though we had coded and created many e-commerce websites before, this was the first time we picked up a project to create a Drupal 8 website with an online lottery system. And believe us, it was a monumental success for us and a satisfying project for the client.

Nov 03 2018

A machine learning model that could lead a driver directly to an empty parking spot fetched the second prize in the Graduate Level: MS category at the 2018 Science and Technology Open House Competition. It goes without saying that dreams of computer systems with godlike powers, and the wisdom to use them, are no longer just a theological construct but a technological possibility. As sci-fi éminence grise Arthur C. Clarke rightfully remarked, “any sufficiently advanced technology is indistinguishable from magic.”



Artificial Intelligence (AI) may be the buzzword of our times, but Machine Learning (ML) is really the brass tacks. Machine learning has made great inroads into different areas. It is capable of looking at pictures of biopsies and picking out possible cancers. It can be taught to predict the outcome of legal cases, write press releases and even compose music! However, the sci-fi future where machine learning beats a human in every conceivable department and is perpetually learning isn't a reality yet. So, how does machine learning fit into the world of a content management system like Drupal? Before finding that out, let's go back to the times when computers did not even exist.

Machine learning predates computers! 

In this day and age, self-driving cars, voice-activated assistants and social media feeds are some of the tools powered by machine learning. Compilations by the BBC and Forbes show that machine learning has a long timeline, relying on mathematics from hundreds of years ago and on the elephantine developments in computing over the years.

Machine learning has a long timeline that relies on mathematics from hundreds of years ago and the elephantine developments in computing over the years

Mathematical innovations like Bayes’ Theorem (1812), Least Squares method for data fitting (1805) and Markov Chains (1913) laid the foundation for modern machine learning concept. 

In the late 1940s, stored-program computers like Manchester Small-Scale Experimental Machine (1948) came into the picture. Through the 1950s and 1960s, several influential discoveries were made like the ‘Turing Test’, first computer learning program, first neural network for computers and the ‘nearest neighbour’ algorithm. In the nineties, IBM’s Deep Blue beat the world chess champion.

Post-millennium, several technology giants, such as Google, Amazon, Microsoft, IBM and Facebook, are actively working on more advanced machine learning models. Proof of this is AlphaGo, developed by Google DeepMind, which beat a professional player at Go, a game considered more intricate than chess!

Discovering Machine Learning


Machine learning is a form of AI that allows a system to learn from data rather than through explicit programming. It is not a simple process. As the algorithms ingest training data, it becomes possible to produce more accurate models based on that data.

Advanced machine learning algorithms are composed of many technologies (such as deep learning, neural networks and natural-language processing), used in unsupervised and supervised learning, that operate guided by lessons from existing information. - Gartner

When you train your machine learning algorithm with data, the output generated is the machine learning model. After training, when you provide an input to the model, you will be given an output. For instance, a predictive algorithm will build a predictive model. Then, when the predictive model is provided with data, you receive a prediction based on the data that trained the model.
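As a toy illustration of this train-then-predict flow, here is a least-squares fit (the method mentioned in the timeline above) in plain PHP; it is purely illustrative and not Drupal-specific:

```php
<?php

// Fit y = a*x + b to training data by ordinary least squares;
// the returned array is the trained "model".
function train(array $xs, array $ys): array {
  $n = count($xs);
  $mx = array_sum($xs) / $n;
  $my = array_sum($ys) / $n;
  $num = 0.0;
  $den = 0.0;
  foreach ($xs as $i => $x) {
    $num += ($x - $mx) * ($ys[$i] - $my);
    $den += ($x - $mx) ** 2;
  }
  $a = $num / $den;
  return ['a' => $a, 'b' => $my - $a * $mx];
}

// Feed the trained model an input to get a prediction.
function predict(array $model, float $x): float {
  return $model['a'] * $x + $model['b'];
}

$model = train([1, 2, 3, 4], [2, 4, 6, 8]); // learns y = 2x
echo predict($model, 5); // 10
```

The "model" here is just two numbers, but the shape of the workflow (train on data, then query the model for predictions) is the same one the larger systems follow.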

Difference between AI and machine learning

Source: IBM

Machine learning may have relished massive success of late, but it is just one of the approaches to achieving artificial intelligence.
 
Forrester defines artificial intelligence as “the theory and capabilities that strive to mimic human intelligence through experience and learning”. AI systems generally demonstrate traits like planning, learning, reasoning, problem solving, knowledge representation, social intelligence and creativity, among others.
 
Alongside machine learning, there are numerous other approaches used to build AI systems such as evolutionary computation, expert systems etc.

Categories of machine learning

Machine learning is generally divided into the following categories:

  • Supervised learning: It typically begins with an established set of data and a certain understanding of how that data is classified, and it aims to find patterns in the data that can be applied to an analytics process.
  • Unsupervised learning: It is used when the problem needs a large amount of unlabeled data.
  • Reinforcement learning: It is a behavioural learning model. The algorithm receives feedback from the data analysis thereby guiding the user to the best outcome.
  • Deep learning: It incorporates neural networks in successive layers for learning the data in an iterative manner.

Why is machine learning accelerating?

Source: The Apttus Intelligence Capability Model

Today, the majority of enterprises rely on descriptive analytics, which is needed for efficient management but not sufficient to enhance business performance. For businesses to reach a higher level of responsiveness, they need to move beyond descriptive analytics and up the intelligence capability pyramid. This is where machine learning plays a key role.

For businesses to reach a higher level of responsiveness, they need to move beyond descriptive analytics and up the intelligence capability pyramid.

Machine learning is not a new technique, but interest in the field has grown manifold in recent years. For enterprises, machine learning can scale across a broad range of industries: manufacturing, financial services, healthcare, retail, travel and many others.

Source: Tata Consultancy Services

Business processes directly related to revenue-making are among the most valued applications: sales, contract management, customer service, finance, legal, quality, pricing and order fulfilment.
 
Exponential growth in unstructured data, such as social media posts, connected device sensor data, competitor and partner pricing, and supply chain tracking data, is one of the reasons why adoption rates of machine learning have skyrocketed.
 
The Internet of Things (IoT) networks, connected devices and embedded systems are generating real-time data which is great for optimising supply chain networks and increasing demand forecast precision.
 
Another reason why machine learning is successful is its ability to generate massive data sets through synthetic means, such as extrapolation and projection of existing historical data to develop realistic simulated data.
 
Moreover, the economics of safe and secure digital storage and cloud computing are combining to put infrastructure costs into free fall, thereby making machine learning more cost-effective for all enterprises.

Machine Learning for Drupal

A session at DrupalCon Baltimore 2017 had a presentation that was useful for machine learning enthusiasts and required no coding experience. It showed how to look at data through the eyes of a machine learning engineer.

It also leveraged deep learning and site content to give Drupal superpowers, making use of the same technology that is exploding at Facebook, Google and Amazon.

[embedded content]


The demonstration focused on mining Drupal content as the fuel for deep learning. It covered when to use existing ML models or services, when to build your own, and how to deploy ML models and use them in production. It showcased free pre-built models and paid services from Amazon, IBM, Microsoft, Google and others.

A drag-and-drop interface was used for creating, training and deploying a simple ML model to the cloud with the help of the Microsoft Azure ML API. The Google Speech API was used to turn spoken audio content into text content for use with chatbots and virtual assistants. The Watson REST API was leveraged to perform sentiment analysis. The Google Vision API module was used so that uploaded images can get face, logo and object detection. And Microsoft's ML API was leveraged to automatically build summaries from node content.
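As a rough illustration of the kind of request such an integration sends, the sketch below builds the body of a Cloud Vision annotate call. The endpoint and feature names follow the public Cloud Vision REST API, but the helper function itself is hypothetical and not part of any Drupal module:

```php
<?php

// Build the JSON body for a Cloud Vision "images:annotate" request
// asking for face, logo, explicit-content and label detection.
function buildVisionRequest(string $base64Image): string {
  $body = [
    'requests' => [[
      'image' => ['content' => $base64Image],
      'features' => [
        ['type' => 'FACE_DETECTION'],
        ['type' => 'LOGO_DETECTION'],
        ['type' => 'SAFE_SEARCH_DETECTION'], // explicit-content detection
        ['type' => 'LABEL_DETECTION'],       // image tagging
      ],
    ]],
  ];
  return json_encode($body);
}

// The resulting body would be POSTed to:
// https://vision.googleapis.com/v1/images:annotate?key=API_KEY
```

The response contains one annotation set per requested feature, which a module can then map onto Drupal fields or taxonomy terms.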

Another session at DrupalCon Baltimore 2017 showed how to personalise web content experiences on the basis of subtle elements of a person’s digital persona.

[embedded content]


Standard personalisation approaches recommend content on the basis of a person's profile or past activity. For instance, if a person is searching for a gym bag, something like this works: “Here are some more gym bags”. Or if they are reading movie reviews: “Maybe you would like this review of the recently released movie”.

But the demonstration shown at this session had more advanced motives. They exhibited Deep Feeling, a proof-of-concept project that utilises machine learning techniques to make better recommendations to users. This proof of concept recommended travel experiences on the basis of the kinds of things a person shares, with the help of the Acquia Lift service and Drupal 8.

Using the Instagram API to access a person's stream of consciousness, the demo showed their feeds being filtered through a computer-vision API, which was used to detect and learn subtle themes about the person's preferences. Once a notion was established of what sort of experiences the person thinks are worth sharing, the person's characteristics were matched against their own databases.

Another presentation, held at Bay Area Drupal Camp 2018, explored how the CMS and the Drupal community can put machine learning into practice by leveraging a Drupal module, the taxonomy system and Google's Natural Language Processing API.

[embedded content]


Natural language processing concepts like sentiment analysis, entity analysis, topic segmentation and language identification, among others, were discussed. Numerous natural language processing API alternatives were compared, like Google's natural language processing API, TextRazor, Amazon Comprehend and open-source solutions like Datamuse.

It explored use cases by assessing and automatically categorising news articles using Drupal's taxonomy system. Those categories were merged with sentiment analysis in order to build a recommendation system for a hypothetical news audience.

Future of Machine learning

A Markets and Markets report states that the machine learning market will grow from USD 1.41 billion in 2017 to USD 8.81 billion by 2022, at a Compound Annual Growth Rate (CAGR) of 44.1%.

The report further states that the major driving factors for the global machine learning market are technological advancement and the proliferation of data generation. Moreover, increasing demand for intelligent business processes and the growing adoption of modern applications are expected to offer opportunities for further growth.

Some of the near-term predictions are:

  • Most applications will include machine learning. In a few years, machine learning will become part of almost every software application, with engineers embedding these capabilities directly into our devices.
  • Machine learning as a service (MLaaS) will be commonplace. More businesses will use the cloud to offer MLaaS, taking advantage of machine learning without making huge hardware investments or training their own algorithms.
  • Computers will get good at talking like humans. As the technology improves, solutions such as IBM Watson Assistant will learn to communicate seamlessly, without users needing to write code.
  • Algorithms will perpetually retrain. In the near future, more ML systems will connect to the internet and constantly retrain on the most relevant information.
  • Specialised hardware will deliver performance breakthroughs. GPUs (Graphics Processing Units) are advantageous for running ML algorithms because they have a large number of simple cores. AI experts are also leveraging Field-Programmable Gate Arrays (FPGAs), which can at times even outclass GPUs.

Conclusion

Whether computers will someday rule us by gaining a superabundance of intelligence is not a likely outcome, even though it is a possibility, which is why it is widely debated whenever artificial intelligence and machine learning are discussed.
 
On the brighter side, machine learning has plenty of scope for making our lives better, with its tremendous capability of providing unprecedented insights into different matters. And when Drupal and machine learning come together, it is even more exciting, as it results in awesome web experiences.

Opensense Labs always strives to fulfil digital transformation endeavours of our partners with a suite of services.

Contact us at [email protected] to learn how machine learning can be put to great use in your Drupal web application.

Nov 03 2018

Lately I've been spending a lot of time working with Drupal in Kubernetes and other containerized environments. One problem that's bothered me is the fact that when autoscaling Drupal, it always takes at least a few seconds to get a new Drupal instance running. Not installing Drupal, configuring the database, building caches; none of that. I'm just talking about having a Drupal site that's already operational, and scaling by adding an additional Drupal instance or container.

One of the principles of the 12 Factor App is:

IX. Disposability

Maximize robustness with fast startup and graceful shutdown.

Disposability is important because it enables things like easy, fast code deployments, easy, fast autoscaling, and high availability. It also forces you to make your code stateless and efficient, so it starts up fast even with a cold cache. Read more about the disposability factor on the 12factor site.

Before diving into the details of how I'm working to get my Drupal-in-K8s instances faster to start, I wanted to discuss one of the primary optimizations, opcache...

Measuring opcache's impact

I first wanted to see how fast page loads were when they used PHP's opcache (which basically stores an optimized copy of all the PHP code that runs Drupal in memory, so individual requests don't have to read in all the PHP files and compile them on every request).

  1. On a fresh Acquia BLT installation running in Drupal VM, I uninstalled the Internal Dynamic Page Cache and Internal Page Cache modules.
  2. I also copied the codebase from the shared NFS directory /var/www/[mysite] into /var/www/localsite and updated Apache's virtualhost to point to the local directory (by default, /var/www/[mysite] is an NFS mount shared with the host machine), to eliminate NFS filesystem variability from the testing.
  3. In Drupal VM, run the command while true; do echo 1 > /proc/sys/vm/drop_caches; sleep 1; done to effectively disable the Linux filesystem cache (keep this running in the background while you run all of these tests).
  4. I logged into the site in my browser (using drush uli to get a user 1 login), and grabbed the session cookie, then stored that as export cookie="KEY=VALUE" in my Terminal session.
  5. In Terminal, run time curl -b $cookie http://local.example.test/admin/modules three times to warm up the PHP caches and see page load times for a quick baseline.
  6. In Terminal, run ab -n 25 -c 1 -C $cookie http://local.example.test/admin/modules (requires apachebench to be installed).

At this point, I could see that with PHP's opcache enabled and Drupal's page caches disabled, the page loads took on average 688 ms. A caching proxy and/or requesting cached pages as an anonymous user would dramatically improve that (the anonymous user/login page takes 160 ms in this test setup), but for a heavy PHP application like Drupal, < 700 ms to load every code path on the filesystem and deliver a generated page is not bad.

Next, I set opcache.enable=0 (was 1) in the configuration file /etc/php/7.1/fpm/conf.d/10-opcache.ini, restarted PHP-FPM (sudo systemctl restart php7.1-fpm), and confirmed in Drupal's status report page that opcache was disabled (Drupal shows a warning if opcache is disabled). Then I ran another set of tests:

  1. In Terminal, run time curl -b $cookie http://local.example.test/admin/modules three times.
  2. In Terminal, run ab -n 25 -c 1 -C $cookie http://local.example.test/admin/modules

With opcache disabled, the average page load time rose to 1464 ms. So in comparison:

Opcache status | Average page load time | Difference
Enabled        | 688 ms                 | baseline
Disabled       | 1464 ms                | 776 ms (113%, or 2.1x slower)
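For the curious, the "Difference" column can be reproduced directly from the two measured averages; a quick sketch (numbers taken from the measurements above):

```shell
# Derive the absolute delta, percentage slowdown, and slowdown factor
# from the two measured averages: 688 ms with opcache, 1464 ms without.
awk -v e=688 -v d=1464 'BEGIN {
  printf "delta=%dms pct=%.0f%% factor=%.1fx\n", d - e, (d - e) * 100 / e, d / e
}'
# prints: delta=776ms pct=113% factor=2.1x
```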

Note: Exact timings are unimportant in this comparison; the delta between the different scenarios is what's important. Always run benchmarks on your own systems for the most accurate results.

Going further - simulating real-world disk I/O in VirtualBox

So, now that we know a fresh Drupal page load is more than 2x slower than one with the code precompiled in opcache, what if the disk access were slower? I'm running these tests on a 2016 MacBook Pro with an insanely-fast local NVMe drive, which can pump through many gigabytes per second sequentially, or hundreds of megabytes per second random access. Most cloud servers have much more limited disk I/O, even if they say they are 'SSD-backed' on the tin.

Since Drupal VM uses VirtualBox, I can limit the VM's disk bandwidth using the VBoxManage CLI (see Limiting bandwidth for disk images):

# Stop Drupal VM.
vagrant halt

# Add a disk bandwidth limit to the VM, 5 MB/sec.
VBoxManage bandwidthctl "VirtualBox-VM-Name-Here" add Limit --type disk --limit 5M

# Get the name of the disk image (vmdk) corresponding to the VM.
VBoxManage list hdds

# Apply the limit to the VM's disk.
VBoxManage storageattach "VirtualBox-VM-Name-Here" --storagectl "IDE Controller" --port 0 --device 0 --type hdd --medium "full-path-to-vmdk-from-above-command" --bandwidthgroup Limit

# Start Drupal VM.
vagrant up

# (You can update the limit in real time once the VM's running with the command below)
# VBoxManage bandwidthctl "VirtualBox-VM-Name-Here" set Limit --limit 800K

I re-ran the tests above, and the average page load time was now 2171 ms. Adding that to the test results above, we get:

Opcache status      | Average page load time | Difference
Enabled             | 688 ms                 | baseline
Disabled            | 1464 ms                | 776 ms (113%, or 2.1x slower)
Disabled (slow I/O) | 2171 ms                | 1483 ms (216%, or 3.2x slower)

Not every cloud VM has disk I/O that slow... but I've seen many situations where I/O gets severely limited, especially in cases where you have multiple volumes mounted per VM (e.g. maximum EC2 EBS bandwidth per instance) and they're all getting hit pretty hard. So it's good to test for these kinds of worst-case scenarios. In fact, last year I found that a hard outage was caused by an EFS volume hitting a burst throughput limit, with bandwidth dropping to 100 Kbps. This caused so many issues that I had to architect around that potential failure mode to prevent it from happening in the future.

The point is, if you need fast PHP startup times, slow disk IO can be a very real problem. This could be especially troublesome if trying to run Drupal in environments like Lambda or other Serverless environments, where disk I/O is usually the lowest priority—especially if you choose to allocate a smaller portion of memory to your function! Cutting down the initial request compile time could be immensely helpful for serverless, microservices, etc.

Finding the largest bottlenecks

Now that we know the delta for opcache vs. not-opcache, and vs. not-opcache on a very slow disk, it's important to realize that compilation is just one in a series of many different operations which occur when you start up a new Drupal container:

  • If using Kubernetes, the container image might need to be pulled (so network bandwidth and image size may have a great effect on startup time)
  • The amount of time Docker spends allocating resources for the new container and creating volume mounts (e.g. for a shared files directory) can differ depending on system resources
  • The latency between the container and the database (whether in a container or in some external system like Amazon RDS or Aurora) can add tens or even hundreds of milliseconds during startup

However, at least in this particular site's case—assuming the container image is already pulled on the node where the new container is being started—the time spent reading code into the opcache is by far the largest chunk of time (~700 ms) spent waiting for a fresh Drupal Docker container to serve its first web request.

Can you precompile Drupal for faster startup?

Well... not really, at least not in any reasonably sane way, currently.

But there is hope on the horizon: There's a possibility PHP 7.4 could add a cool new feature, Preloading! You can read the gory details in the RFC link, but the gist of it is: when you are building your container image, you could precompile all of your application code (or at least the hot code paths) so when the container starts, it only takes a couple ms instead of hundreds of ms to get your application's code compiled into opcache.
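For concreteness, here is a rough sketch of what the RFC describes (the directive name comes from the RFC itself; none of this works on PHP versions shipping today): a php.ini directive points at a script that PHP runs once at engine startup, and everything that script compiles stays permanently in opcache:

```ini
; php.ini (as proposed in the PHP preloading RFC):
; run preload.php once at engine startup and keep everything it
; compiles permanently in opcache, shared by all workers.
opcache.preload=/var/www/html/preload.php
```

The preload script itself would then walk your codebase (or just the hot code paths) calling opcache_compile_file() on each file.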

We'll see if this RFC gets some uptake; in the meantime, there's not really much you can do to mitigate the opcache warming problem.

Conclusion

With Preloading, we might be able to pre-compile our PHP applications—notably beefy ones like Drupal or Magento—so they can start up much more quickly in lightweight environments like Kubernetes clusters, Lambda functions, and production-ready docker containers. Until that time, if it's important to have Drupal serve its first request as quickly as possible, consider finding ways to trim your codebase so it doesn't take half a second (or longer) to compile into the opcache!

Nov 02 2018

This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.

Configuration management is an important feature of any modern content management system. Those following modern development best-practices use a development workflow that involves some sort of development and staging environment that is separate from the production environment.

Configuration management example

Given such a development workflow, you need to push configuration changes from development to production (similar to how you need to push code or content between environments). Drupal's configuration management system helps you do that in a powerful yet elegant way.

Since I announced the original Configuration Management Initiative over seven years ago, we've developed and shipped a strong configuration management API in Drupal 8. Drupal 8's configuration management system is a huge step forward from where we were in Drupal 7, and a much more robust solution than what is offered by many of our competitors.

All configuration in a Drupal 8 site — from one-off settings such as site name to content types and field definitions — can be seamlessly moved between environments, allowing for quick and easy deployment between development, staging and production environments.

However, now that we have a couple of years of building Drupal 8 sites behind us, various limitations have surfaced. While these limitations usually have solutions via contributed modules, it has become clear that we would benefit from extending Drupal core's built-in configuration management APIs. This way, we can establish best practices and standard approaches that work for all.

Configuration management initiative

The four different focus areas for Drupal 8. The configuration management initiative is part of the 'Improve Drupal for developers' track.

I first talked about this need in my DrupalCon Nashville keynote, where I announced the Configuration Management 2.0 initiative. The goal of this initiative is to extend Drupal's built-in configuration management so we can support more common workflows out-of-the-box without the need of contributed modules.

What is an example workflow that is not currently supported out-of-the-box? Support for different configurations by environment. This is a valuable use case because some settings are undesirable to have enabled in all environments. For example, you most likely don't want to enable debugging tools in production.

Configuration management example

The contributed module Config Filter extends Drupal core's built-in configuration management capabilities by providing an API to support different workflows which filter out or transform certain configuration changes as they are being pushed to production. Config Split, another contributed module, builds on top of Config Filter to allow for differences in configuration between various environments.

The Config Split module's use case is just one example of how we can improve Drupal's out-of-the-box configuration management capabilities. The community created a longer list of pain points and advanced use cases for the configuration management system.

While the initiative team is working on executing on these long-term improvements, they are also focused on delivering incremental improvements with each new version of Drupal 8, and have distilled the most high-priority items into a configuration management roadmap.

  • In Drupal 8.6, we added support for creating new sites from existing configuration. This enables developers to launch a development site that matches a production site's configuration with just a few clicks.
  • For Drupal 8.7, we're planning on shipping an experimental module for dealing with environment specific configuration, moving the capabilities of Config Filter and the basic capabilities of Config Split to Drupal core through the addition of a Configuration Transformer API.
  • For Drupal 8.8, the focus is on supporting configuration updates across different sites. We want to allow both sites and distributions to package configuration (similar to the well-known Features module) so they can easily be deployed across other sites.

How to get involved

There are many opportunities to contribute to this initiative and we'd love your help.

If you would like to get involved, check out the Configuration Management 2.0 project and various Drupal core issues tagged as "CMI 2.0 candidate".

Special thanks to Fabian Bircher (Nuvole), Jeff Beeman (Acquia), Angela Byron (Acquia), ASH (Acquia), and Alex Pott (Thunder) for contributions to this blog post.

Nov 02 2018

I am currently building a Drupal 8 application which runs outside Acquia Cloud, and I noticed there are a few 'magic' settings I'm used to having on Acquia Cloud which don't work if you aren't inside an Acquia or Pantheon environment; most notably, the automatic Configuration Split settings choice (for environments like local, dev, and prod) doesn't work if you're in a custom hosting environment.

You basically have to reset the settings BLT provides and tell Drupal which config split should be active based on your own logic. In my case, I have a site which only has local, ci, and prod environments. To override the settings defined in BLT's included config.settings.php file, I created a config.settings.php file in my site at the path docroot/sites/settings/config.settings.php, with the following contents:

<?php
/**
* Settings overrides for configuration management.
*/

// NOTE: $split_envs, $split_filename_prefix, $is_local_env, and $is_ci_env
// are all defined by BLT's included settings files before this file runs.

// Disable all splits which may have been enabled by BLT's configuration.
foreach ($split_envs as $split_env) {
  $config["$split_filename_prefix.$split_env"]['status'] = FALSE;
}

$split = 'none';

// Local env.
if ($is_local_env) {
  $split = 'local';
}
// CI env.
if ($is_ci_env) {
  $split = 'ci';
}
// Prod env.
if (getenv('K8S_ENVIRONMENT') == 'prod') {
  $split = 'prod';
}

// Enable the environment split only if it exists.
if ($split != 'none') {
  $config["$split_filename_prefix.$split"]['status'] = TRUE;
}

The K8S_ENVIRONMENT refers to an environment variable I have set up in the production Kubernetes cluster where the BLT Drupal 8 codebase is running. There are a few other little tweaks I've made to make this BLT project build and run inside a Kubernetes cluster, but I'll leave those for another blog post and another day :)
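For reference, an environment variable like K8S_ENVIRONMENT is typically defined on the container spec in the Kubernetes Deployment manifest; a minimal sketch (image and resource names are hypothetical):

```yaml
# Fragment of a Kubernetes Deployment manifest: the env entry below is
# what getenv('K8S_ENVIRONMENT') picks up inside the Drupal container.
spec:
  template:
    spec:
      containers:
        - name: drupal
          image: example/drupal:latest
          env:
            - name: K8S_ENVIRONMENT
              value: "prod"
```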

Nov 02 2018

What's your favorite tool for creating content layouts in Drupal? Paragraphs, Display Suite, Panelizer or maybe Panels? Or CKEditor styles & templates? How about the much talked about and yet still experimental Drupal 8 Layout Builder module?

Have you “played” with it yet?

As Drupal site builders, we all agree that a good page layout builder should be:
 

  1. flexible; it should empower you to easily and fully customize every single node/content item on your website (not just blocks)
  2. intuitive and super easy to use (unlike "Paragraphs", for instance, where building a complex "layout" and then attempting to move something within it turns into a major challenge)
     

And it's precisely these two features that are the key goals of the Layout Initiative for Drupal:

To turn the resulting module into the user-friendly, powerful and empowering page builder that all Drupal site builders had been expecting.

Now, let's see how the module manages to “check” these must-have strengths off the list. And why it revolutionizes the way we put together pages, how we create, customize and further edit layouts.

How we build websites in Drupal...
 

1. The Context: A Good Page Builder Was (Desperately) Needed in Drupal

It had been a shared opinion in the open source community:

A good page builder was needed in Drupal.

For, even if we had a toolbox full of content layout creation tools, none of them was “the One”. That flexible, easy to use, “all-features-in-one” website builder that would enable us to:
 

  • build complex pages, carrying a lot of mixed content, quickly and easily (with no coding expertise)
  • fully customize every little content item on our websites and not just entire blocks of content site-wide
  • easily edit each content layout by dragging and dropping images, video content, multiple columns of text and so on, the way we want to
     

Therefore, the Drupal 8 Layout Builder module was launched! And it was moved into core with the release of Drupal 8.6.

Although it still wears its “experimental, do not use on production sites!” type of “warning tag”, the module has already leveled up from an “alpha” to a more “beta” phase.

With a more stable architecture now in Drupal 8.6, significant improvements and a highly intuitive UI (combined with Drupal's well-known content management features), it stands every chance of turning into a powerful website builder.

That great page builder that the whole Drupal community had been “craving” for.
 

2. The Drupal 8 Layout Builder Module: Quick Overview

First of all, we should get one thing straight:

The Drupal 8.6 Layout Builder module is Panelizer in core!

What does it do?

It enables you, the Drupal site builder, to configure layouts on different sections on your website.

From selecting a predefined layout to adding new blocks, managing the display, swapping the content elements and so on, creating content layouts in Drupal is as (fun and) intuitive as putting Lego pieces together.

Also, the “content hierarchy” is more than logical:
 

  • you have multiple content sections
  • you get to choose a predefined layout or a custom-design one for each section
  • you can place your blocks of choice (field blocks, custom blocks) within that selected layout
     

Note: moving blocks from one section to another is unexpectedly easy when using Layout Builder!
 

3. Configuring the Layout of a Content Type on Your Website

Now, let's imagine the Drupal 8 Layout Module “in action”.

But first, I should point out that there are 2 ways that you could use it:
 

  1. to create and edit a layout for every content type on your Drupal website
  2. to create and edit a layout for specific, individual nodes/ pieces of content
     

It's the first use case of the module that we'll focus on for the moment.

So, first things first: in order to use it, there are some modules that you should enable — Layout Builder and Layout Discovery. Remember to install the Layout Library module as well!

Next, let's delve into the steps required for configuring your content type's (“Article”, let's say) display:
 

  • go to Admin > Structure > Content types > Article > Manage Display
  • hit the “Manage layout” button
     

… and you'll instantly access the layout page for the content type in question (in our case, “Article”).

It's there that you can configure your content type's layout, which is made of:
 

  • sections of content (displayed in 1, 2, 3... columns and other content elements)
  • display blocks: tabs, page title...
  • fields: tags, body, title
     

While you're on that screen... get as creative as you want:
 

  • choose a predefined layout for your section —  “Add section” —  from the Settings tab opening up on the right side of the screen
  • add some blocks —  “Add block”; you'll then notice the “Configure” and “Remove” options “neighboring” each block
  • drag and drop the layout elements, arranging them to your liking; then you can click on either “Save Layout” or “Cancel Layout” to save or cancel your layout configuration
     

And since we're highly visual creatures, you may want to have a look at this Drupal 8 Layout Builder tutorial made by Lee Rowlands, one of the core contributors.

In short: this page builder tool enables you to customize the layout of your content to your liking. Put together multiple sections — each one with its own different layout —  and build website pages, carrying mixed content and multiple layouts, that fit your design requirements exactly.
 

4. Configuring and Fully Customizing the Layout of a Specific Node...

This second use case of the Drupal 8 Layout Builder module makes it perfect for building landing pages.

Now, here's how you use it for customizing a single content type:
 

  • go to Structure>Content types (choose a specific content type)
  • click “Manage display” on the drop-down menu 
  • then click the “Allow each content item to have its layout customized” checkbox
  • and hit “Save”
     

Next, just:
 

  • click the “Content” tab in your admin panel
  • choose that particular article that you'd like to customize
  • click the “Layout” tab
     

… and you'll then access the very same layout builder UI.

The only difference is that now you're about to customize the display of one particular article only.

Note: basically, each piece of content has its own “Layout” tab that allows you to add sections, to choose layouts. 

Each content item becomes fully customizable when using Drupal 8 Layout Builder.
 

5. The Drupal 8.6 Layout Builder vs Paragraphs

“Why not do everything in Paragraphs?" has been the shared opinion in the Drupal community for a long time.

And yet, since the Layout Builder tool was launched, the Paragraphs “supremacy” has started to lose ground. Here's why:
 

  • the Layout builder enables you to customize every fieldable entity's layout
  • it makes combining multiple sections of content on a page and moving blocks around as easy as... moving around Lego pieces 
     

Now, just try to move... anything within a complex layout using Paragraphs:
 

  • you'll either need to keep your fingers crossed so that everything lands in the right place once you've dragged and dropped your blocks
  • or... rebuild the whole page layout from scratch
     

The END!

What do you think:
 

Does the Drupal 8 Layout Builder stand a chance of competing with WordPress' popular page builders?

To “dethrone” Paragraphs and become THAT page layout builder that we've all been expecting?

Or do you think there's still plenty of work ahead to turn it into that content layout builder we've all been looking forward to?

Nov 02 2018

You Can’t Put a Price Tag on Visibility, Credibility, and Collegiality

“pink pig” by Fabian Blank on Unsplash

Organizing a DrupalCamp takes a lot of commitment from volunteers, so when someone gets motivated to help organize these events, the financial risks can be quite alarming and sometimes overwhelming. But forget all that mess: you are a Drupal enthusiast and have drummed up the courage to volunteer with the organization of your local DrupalCamp. During your first meeting, you find out that there are no free college or community spaces in the area, and the estimated price tag is $25,000. Holy Batman, that is a lot of money!

Naturally, you start thinking about how to cover that price tag, so you immediately ask, “how many people usually attend?” Well, unless you are one of the big 5 (BADCamp, NYCCamp, GovCon, MidCamp or FloridaCamp), we average between 100 and 200 people. Then you ask, “how much can we charge?” You are then told that we cannot charge more than $50, because camps are supposed to be affordable for the local community and that has been the culture of most DrupalCamps.

Are you interested in attending the first online DrupalCamp Organizers Meeting, on Friday, November 9th at 4:00pm (EST)? RSVP Here.

If Drupal is the Enterprise solution, why are all of our camps priced and sponsored like we are still hobbyists in 2002?

Why Don’t We Treat DrupalCamps Like They’re the Enterprise Solution?

Drupal is the Enterprise solution. Drupal has forgotten about the hobbyist and is only concerned with large-scale projects. Drupal developers and companies make more per hour than WordPress developers. These are all things I have heard from people within the community. So if any of these statements are valid, why are all the camps priced like it is 2002 and we are all sitting around in a circle singing Kumbaya? For DrupalCamp Atlanta in 2016, we couldn’t make the numbers work, so we decided to raise the price of the camp from $45 to $65 (early bird) and $85 (regular rate). This was a long, drawn-out and heated debate that took nearly all of the two hours allotted for our Google Hangout. At the end of the day, one of our board members, who is also a Diamond sponsor, said,

“when you compare how other technology conferences are priced and what they are offering for sessions, DrupalCamps are severely under-priced for the value they provide to the community.”

Courtesy of Amaziee.io Labs

If a camp roughly costs $25,000 and you can only charge 150 people $50, how in the world are DrupalCamps produced? The simple answer: sponsors, sponsors, and more sponsors. Most camps rely solely on sponsors to cover the costs. One camp in particular, BADCamp, has roughly 2,000 attendees and registration is FREE. That’s right, the camp is completely free, and did I forget to mention that it’s in San Francisco? Based on the BADCamp model, and given that the diamond sponsorship for DrupalCon Nashville was $50,000, getting 10 companies to sponsor your camp at $2,500 should be no sweat. Oh, and don’t forget Drupal is the enterprise solution, right?

With all of your newfound confidence in obtaining sponsorships, you start contacting some of the larger Drupal shops in your area, and after a week: nothing. You reach out again, maybe by phone this time, and actually speak to someone, but they will not commit because they want more information as to why they should sponsor the camp: what other perks can you throw in for the sponsorship, are we guaranteed presentation slots, and do you provide the participant list? Of course, the worst response is the dreaded no, we cannot sponsor your conference because we have already met our sponsorship budget for the year.

At this point, you feel defeated and confused as to why organizations are not chomping at the bit to fork over $2,500 to be the top sponsor. Yep, that’s right: twenty-five hundred, not $25,000, to be the highest-level sponsor. Mind you, many Drupal shops charge anywhere between $150 and $250 an hour, so that means donating 10–17 hours of your organization’s time to support a Drupal event in your local community. Yes, you understand that there are a lot of DrupalCamps contacting the same companies for sponsorship, so you ask yourself: what has changed from years past?

Are you interested in attending the first online DrupalCamp Organizers Meeting, on Friday, November 9th at 4:00 pm (EST)? RSVP Here.

What Do Companies Expect to Gain From DrupalCamp Sponsorships?

At DrupalCon Nashville, I got an awesome opportunity to participate in a session around organizing DrupalCamps. It was really interesting to hear about how other organizers produce their camp and what were some of the biggest pain points.

Group Photo — DrupalCon 2018 Nashville by Susanne Coates

During this session, we were talking about a centralized sponsorship program for all DrupalCamps (that I personally disagree with and will save that discussion for another blog post) and an individual asked the question,

“why should my company sponsor DrupalCamp Atlanta? There is nothing there for me that makes it worth it. We don’t pick up clients, you don’t distribute the participant list, so why should we sponsor the camp?”

Needless to say, they caught me completely off guard, so I paused then replied,

“DrupalCamp Atlanta has between 150–200 people, most of them from other Drupal shops, so what is it that you are expecting to get out of the sponsorship that would make it worth it to you? Why do you sponsor any DrupalCamps?”

Have Drupal Companies Outgrown the Need to Sponsor DrupalCamps?

On the plane ride back to the ATL, it got me thinking: why does an organization sponsor DrupalCamps? What is the return on their investment? I started reminiscing about the very first DrupalCamp I attended, in 2008. All the rage at that time (and still is) was inbound marketing, and how a content strategy and/or conference presentations can establish your company as a thought leader in the field, so that clients will find your information useful and approach you when it’s time to hire for services. Maybe this is why so many camps received a ton of presentation submissions and why it was easy to find sponsors. But that was over 10 years ago, and some of those same companies have since been established as leaders in the field. Could it be that established companies no longer need the visibility of DrupalCamps?

What happens to DrupalCamps when companies no longer need the visibility or credibility from the Drupal community?

The Drupal community thrives when Drupal shops become bigger and take on those huge projects, because that results in contributions back to the code, making our project more competitive. But an unintended consequence of these Drupal shops becoming larger is that there is a lot more pressure on them to raise revenue, so they need to spend more resources on obtaining clients outside of the Drupal community. Acquia, the company built by the founder of Drupal, Dries Buytaert, has made it clear that it is pulling back on its local camp sponsorships and has even created its own conference, Acquia Engage, which showcases its enterprise clients. Now, from a business perspective, I totally understand why they would create this event, as it provides a much higher return on their investment, but it results in competing with other camps (ahem, this year’s DrupalCamp Atlanta), and more importantly, the sponsorship dollars all of us depend on are now being redirected to other initiatives.

Are you interested in attending the first online DrupalCamp Organizers Meeting, on Friday, November 9th at 4:00 pm (EST)? RSVP Here.

Why Should Established Companies Sponsor a DrupalCamp?

The reality of the situation is that sponsoring these DrupalCamps will most likely not land your next big client that pays your company a $500,000 contract. So what are the true reasons to sponsor a DrupalCamp?

  • Visibility
    When you sponsor these DrupalCamps, most of us organizers do a pretty good job of tweeting thanks to the company, and if the organization has presenters we usually promote their sessions as well. In addition, most camps print logos on the website and merchandise, and name after-parties after sponsors. Yes, it’s only a little bit, but the internet is forever, and the more you are mentioned the better off you are. But you are from a well-established Drupal shop, so you don’t need any more visibility, right?
  • Credibility
    Even companies that are well established need their staff to be credible. There will always be some amount of turnover, and when that happens your clients still want to know whether this person is talented. And if your company is new, being associated with Drupal in your local community does lend your company a sense of credibility.
  • Collegiality
    I saved the best for last. Collegiality is highly overlooked when considering camp sponsorships. Most companies have a referral program for new hires, and when the time comes for you to hire, people tend to refer their friends and professional acquaintances. There is no better place to meet and interact with other Drupalists than a DrupalCamp. And what about employee engagement? In a recent focus group I participated in with a Drupal shop, many of the staff wanted more opportunities for professional development. These local camps are affordable, so even with small budgets staff can attend multiple events in a year.

I must end by saying that there are so many great Drupal companies that I have had the pleasure of working with, and if it were not for the Acquias of the world, Drupal wouldn’t exist. I understand that CEOs are responsible for their employees and their families, so I don’t want to underestimate the pressures that come with making payroll and maintaining a client pipeline. The purpose of this post was to explain how it feels to be a volunteer who is doing something for the community, and the frustrations that sometimes come with it.

Nov 02 2018
Nov 02

‘Blocks’ in Drupal are pieces of content that can be placed anywhere throughout the site. They are an integral part of Drupal and the way it displays information. While Drupal has a variety of useful blocks out of the box for most scenarios, there might be times when custom blocks are required. That is what I’ll be addressing in this post, by going through how to create a custom block in Drupal 8.

There are two ways in which you can create a custom block:

  • Through Drupal’s own GUI, or
  • Programmatically.

Via Drupal GUI

This method is pretty straightforward and easier than creating a block programmatically. However, it also is less flexible and customizable than programmatically creating a block.

  • Go to admin -> structure -> block layout -> custom block library.
  • Click ‘block types’ tab. Once here, click on the ‘Add custom block type’ button.
  • Enter block label and description.
  • Now, you can add fields, manage display type, manage display etc. for your custom block. Customize the block to your liking and click save.
  • Now, go back to custom block library and click the blue ‘Add custom block’ button, to add the block to your library.
  • The next step is to simply place the block into your desired region by navigating to admin -> structure -> block layout.

Programmatically Creating Block

This method requires a little more understanding of the way Drupal works, however, once you get the hang of it, it gets pretty easy.

Create a module

In Drupal 8, every custom module, theme or plugin you create needs an info.yml file containing its metadata. Since creating a custom block requires a custom module, we will create one in the ‘modules/custom’ directory. Note that if the custom folder isn’t already created, you will need to create it.

Now create an ‘info.yml’ file such as ‘custom_block_example.info.yml’. Inside this file, enter the following:

name: Custom Block Example
type: module
description: Define a custom block.
core: 8.x
package: Custom
dependencies:
  - block

You can now go to your Drupal dashboard and enable the custom module we have just created.

Create Class

Now, in order to define the logic of the block, we need to create a class which will be placed under the modules/custom/custom_block_example/src/Plugin/Block directory. 

The class file should also contain an annotation, which allows Drupal to identify the block. Apart from the annotation, this class will contain four methods:

  • build() - Returns a basic markup by rendering a renderable array.
  • blockAccess() - Defines a custom user access logic.
  • blockForm() - Defines a custom block configuration form using the Form API.
  • blockSubmit() - Used to save a configuration, defined on the blockForm() method.

Now, this is what the class file should contain in the end:

<?php

namespace Drupal\custom_block_example\Plugin\Block;

use Drupal\Core\Access\AccessResult;
use Drupal\Core\Block\BlockBase;
use Drupal\Core\Form\FormStateInterface;
use Drupal\Core\Session\AccountInterface;

/**
 * Provides a block with a simple text.
 *
 * @Block(
 *   id = "custom_block_example_block",
 *   admin_label = @Translation("My block"),
 * )
 */
class MyBlock extends BlockBase {

  /**
   * {@inheritdoc}
   */
  public function build() {
    return [
      '#markup' => $this->t('This is a simple block!'),
    ];
  }

  /**
   * {@inheritdoc}
   */
  protected function blockAccess(AccountInterface $account) {
    return AccessResult::allowedIfHasPermission($account, 'access content');
  }

  /**
   * {@inheritdoc}
   */
  public function blockForm($form, FormStateInterface $form_state) {
    $config = $this->getConfiguration();

    // Expose the setting that blockSubmit() saves below.
    $form['my_block_settings'] = [
      '#type' => 'textfield',
      '#title' => $this->t('Block setting'),
      '#default_value' => $config['my_block_settings'] ?? '',
    ];

    return $form;
  }

  /**
   * {@inheritdoc}
   */
  public function blockSubmit($form, FormStateInterface $form_state) {
    $this->configuration['my_block_settings'] = $form_state->getValue('my_block_settings');
  }

}

Now, clear the cache and go back to your site’s block layout; you should be able to see the block you have just created. Simply assign the block to a region of your choice and it should become visible.

Conclusion

As mentioned earlier, blocks are an integral part of a Drupal site. Learning to customize and play with the blocks in your own way can be a very useful skill.

Having trouble with customizing your Drupal site? Contact us here at Agiledrop, and never worry about getting stuck with your Drupal site again.

Nov 02 2018
Nov 02

We are in the process of transforming the way we host our applications to a Docker-based workflow. One of the challenges we face is file storage. At the heart of our business are open source technologies and tools, therefore we have looked into using Minio (more or less the same as Amazon S3 for file storage) instead of the local filesystem (or Amazon S3).

We are going to use the Drupal module Flysystem S3, which works with both Amazon S3 and Minio (which is API-compatible with Amazon S3).

Flysystem is a filesystem abstraction library for PHP which allows you to easily swap out a local filesystem for a remote one - or from one remote to another.

For a new site it is pretty straightforward; for a legacy site you need to migrate your files from one storage to another - which I am going to look into in the next blog post.

Minio container

First we need Minio up and running. For that I am using Docker; here is an example docker-compose.yml:

version: '3'
services:
  minio:
    image: minio/minio:edge
    container_name: minio
    hostname: minio
    ports:
      - "8001:9000"
    volumes:
      - "./data:/data"
    environment:
      - "MINIO_ACCESS_KEY=AFGEG578KL"
      - "MINIO_SECRET_KEY=klertyuiopgrtasjukli"
      - "MINIO_REGION=us-east-1"
    command: server /data


Settings

When you have installed the Flysystem S3 module (and its dependency, the Flysystem module), we need to add the settings for Minio to our settings.php file (there is no UI for this in Drupal yet):

$schemes = [
  's3' => [
    'driver' => 's3',
    'config' => [
      'key'    => 'AFGEG578KL',
      'secret' => 'klertyuiopgrtasjukli',
      'region' => 'us-east-1',
      'bucket' => 'my-site',
      'endpoint' => 'http://minio.mysite.com:9000',
      'protocol' => 'http',
      'cname_is_bucket' => FALSE,
      'cname' => 'minio.mysite.com:8001',
      'use_path_style_endpoint' => TRUE,
      'public' => TRUE,
      'prefix' => 'publicfiles',
    ],
    'cache' => TRUE,
    'serve_js' => TRUE,
    'serve_css' => TRUE,
  ],
];
$settings['flysystem'] = $schemes;

Endpoint is used for communicating with Minio; cname is the base URL that files will get on the site. Serve_js and serve_css let Minio store and serve aggregated CSS and JS.

Create a field

You now need to define which fields are going to use the S3 storage. For this, I create a new image reference field and use “Flysystem: s3” as the upload destination.

Surf over to Minio - in our example on http://minio.mysite.com:8001 - add the defined bucket, my-site, and make sure that Drupal can write to it (edit the policy in Minio and make sure it has read and write on the prefix - or use the wildcard prefix, *).

And you are done

And that is it - now we are using Minio for storing the images. Try to upload a file in the field you created, and you should see the file in Minio. On the site you should of course see the image as well - but now with the URL defined in the cname setting, in our case minio.mysite.com:8001.

We have put some time and effort into the Flysystem S3 module together with other contributors, and we hope you will test it out and report any feedback. Have fun!

Nov 02 2018
Nov 02

As part of my session at Drupal Europe, REST Ready: Auditing established Drupal 8 websites for use as a content hub, I introduced a module called “Entity Access Audit”.

This has proved to be a useful tool for auditing our projects for unusual access scenarios as part of our standard go-live security checks or when opening sites up to additional mechanisms of content delivery, such as REST endpoints. Today this code has been released on Drupal.org: Entity Access Audit.

There are two primary interfaces for viewing access results, the overview screen and a detailed overview for each entity type. Here is a limited example of the whole-site overview showing a few of the entity types you might find in core or custom modules:

Entity access audit

Here is a more detailed report for a single entity type:

Entity access audit

The driving motivation behind these interfaces was being able to visually scan entity types and ensure that the access results align with our expectations. This has so far helped identify various bugs in custom and contributed code.

In order to conduct a thorough access test, the module uses a predefined set of dimensions and tests every combination via their cartesian product. The dimensions tested out of the box, where applicable to the given entity type, are:

  • All bundles of an entity type.
  • If the current user is the entity owner or not.
  • The access operation: create, view, update, delete.
  • All the available roles.
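The cartesian product of those dimensions is what produces the full matrix of access checks. A minimal sketch of the idea (Python used purely for illustration; the dimension values below are made-up examples, not the module's actual identifiers):

```python
from itertools import product

# Hypothetical dimension values, mirroring the list above.
bundles = ["article", "page"]
is_owner = [True, False]
operations = ["create", "view", "update", "delete"]
roles = ["anonymous", "authenticated", "editor"]

# Every combination of the four dimensions becomes one access check.
combinations = list(product(bundles, is_owner, operations, roles))

# 2 bundles * 2 ownership states * 4 operations * 3 roles = 48 checks.
print(len(combinations))  # → 48
```

Even this small example yields 48 distinct checks for a single entity type, which is why an automated report beats spot-checking by hand.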

It’s worth noting that these are only common factors used to determine access results, they are not comprehensive. If access was determined by other factors, there would be no visibility of this in the generated reports.

The module is certainly not a silver bullet for validating the security of Drupal 8 websites, but has proved to be a useful additional tool when conducting audits.


Posted by Sam Becker
Senior Developer

Dated 2 November 2018


Nov 01 2018
Nov 01

Your company brand encompasses the entire perception of your organization through the eyes of your customers, clients, and employees. Branding consists of more than just your logo and typeface selections; it is how the public (usually users and/or customers) experiences your business. How you position your brand can certainly define the customers’ experience of your organization. However, consider first planning great experiences for your users and customers in order to develop a more customer-centric brand identity. Changing to this strategy requires a solid understanding of your users and customers as well as a thoroughly considered mission statement prior to developing your brand.

UNDERSTAND YOUR USERS AND CUSTOMERS

Knowing who your business is intended to cater to will help you to plan all the touchpoints your customers and clients will have with you, from the in-person experience to the web and social media experience to even the look and feel of business cards you hand out. For example, if your target audience consists of Millennials, your language and messaging would be different than if it consisted of top-level executives and managers. User experience research techniques can be used to help define different customer/client personas. These personas can be used essentially as avatars for your customer base. As new services or features are developed, keep these personas in mind to ensure that these new offerings align with their needs and desires. At times, new personas might replace or get added to the list.

RELY ON YOUR MISSION STATEMENT

Your mission statement defines the core purpose of your business. There’s a delicate balance required between keeping it broad enough to encompass your long-term vision, yet narrow enough to define your identity for both your employees and your customers. A good mission statement represents what you stand for in specific, actionable ways. It paves the way for how your employees interact with your customers. The length of your mission statement may range from a sentence or two to several paragraphs; a good rule of thumb is to keep it as succinct as possible in order to avoid providing too narrow a focus and thus reducing strategic flexibility. Also, keep in mind that your mission statement can evolve over time to adjust for the market, your competitors, and your customers’ needs and desires.

DESIGN YOUR USER AND CUSTOMER EXPERIENCES

The broad definition of User and Customer Experience includes how your customers interact with all facets of your business. Rely on your mission statement and understanding of your customers to help define how you want those interactions to occur instead of leaving it to chance and risking a negative experience. For instance, if you wish to present your organization as caring and supportive to your user base, friendlier language and a welcoming user interface on your website or software would communicate your caring and empathic intent.

User and Customer Experience is about setting and meeting the users’ expectations through clarity of messaging and purpose. Keeping the customer experiences in mind at all times when developing your business processes will help ensure that your customers stay positively engaged with your organization.

By tailoring your business strategies around your customers’ expectations and the driving force behind your mission statement, you can create specific user and customer experiences. These experiences reinforce their expectations and your mission statement, creating a sustainable, healthy feedback loop. For example, McDonald’s recently responded to customer requests by running focus groups to test the viability of providing all-day breakfast service to their fans. Before simply agreeing to these requests (the first of which were posted in 2007!), McDonald’s tested the viability of the service through focus groups and user testing in small markets in order to ensure that the service could be provided within customer expectations. This is a great example of using feedback and relying on thorough user and customer research to improve brand perception through ensuring a consistent customer experience, even for new offerings.1

By leveraging this valuable feedback loop to guide the underlying framework of your business decisions, you can ensure that your organization adjusts to changing expectations and trends, keeping your brand fresh and relevant for as long as possible – while keeping your customers engaged.

1 The Story of How McDonald’s All-Day Breakfast Came to Be


Nov 01 2018
Nov 01

BADCamp 2018 just wrapped up last Saturday. As usual it was a great volunteer organized event that brought together all sorts of folks from the Drupal Community.

Every year Kanopi provides organizational assistance, and this year was no exception. We had Kanopian volunteers writing code for the website, organizing fundraising, planning general operations, assisting as room monitors, and working the registration booth.

An event like this doesn’t happen without a lot of work across a lot of different areas and we’re very proud of Kanopi’s contributions.

Personally, I was fortunate: Kanopi sent me down from Vancouver, Canada for a day-long training course, as well as the regular conference summits and sessions.

The course I chose was “Component-based Theming with Twig”, which was really informative. We covered the basics of Pattern Lab and then worked on best-practice methods to integrate those Pattern Lab tools into a Drupal theme.

Some of the takeaways:

  • The Gesso (https://www.drupal.org/project/gesso) theme is a great starting place for getting Pattern Lab into your project.
  • Make sure you are reusing all your basic html components and make the templates flexible. Resist the urge to simply copy and paste markup into a new template.
  • The best way to map Pattern Lab components in Drupal is to use Paragraph types and their display modes.
  • To get the most out of Twig templates, make sure you are using the Twig Tweak module (https://www.drupal.org/project/twig_tweak).

For the regular conference sessions, the most interest seemed to lie in the possibilities of GatsbyJS (https://www.gatsbyjs.org/). All the developers with whom I spoke were focused on the performance and security benefits, as it can be completely decoupled from Drupal, allowing for fewer security issues. One interesting talk on Gatsby was this one by Kyle Mathews.

Kanopi was also fortunate enough to get four sessions selected:

All in all BADCamp 2018 was a great experience. It’s terrific to meet our distributed co-workers as well as see friends from other parts of the Drupal community.

Nov 01 2018
Nov 01

As Drupal 8 has matured as an enterprise content management system, so has its ability to connect with enterprise SaaS CRMs such as Salesforce. As the undisputed IBM of CRM solutions (for now, anyway) Salesforce is a cornerstone for most businesses. And now with tighter integrations than ever before, Drupal 8 can be too.

With that, let's explore some key considerations involved in connecting Drupal 8 with Salesforce. 

All Hail the Cloud

At its most basic core, Salesforce is really a database of contacts in the same way that Drupal is a database of content. Yes, Drupal also has users and Salesforce often houses products, events, etc., but you get the idea. What’s important is that customers interact with both systems. Whether it’s reading website content or opening an email from a salesperson, customer data across all fronts is critical to consolidate, manage and leverage.

Integration is a Dirty Word

You may be wondering what’s involved in integrating Drupal with Salesforce. Ah, the dreaded “I” word... integration. So often the herald of scope creep and blown budgets. Integrating Salesforce with Drupal 8 can vary from something as simple as submitting contact forms to the CRM to running a global ABM effort supported by a sophisticated Drupal website equipped with real-time personalization. In either case, leveraging Drupal 8’s API-first architecture and its plethora of open source modules is key. Here, the Drupal Salesforce module is our starting point.

Modules Make the World Go Round

The Drupal Salesforce Suite module is a testament to both the ingenuity and passion of the Drupal community and the flexibility of Drupal as an enterprise platform. As a contributed module, the Salesforce Suite for Drupal enables out of the box connection with Salesforce, no matter your configuration.

Available free on drupal.org, the module offers:

  • Single Sign-On (SSO) with OAuth2 authentication, which lets you pass credentials to Salesforce and log in seamlessly. Salesforce events are also accessible through Drupal 8. Handy!

  • Entity mapping, which means tying fields in your Drupal site to those in Salesforce, such as “Markets” you serve for upcoming events or hidden user fields like “Lead Score.”

  • Ability to push data to Salesforce from Drupal, such as users engaging with gated content, new leads, or activity data to ensure Salesforce has all the information it needs to make decisions. This is critically important with AI advancements such as Salesforce Einstein.

  • Ability to pull data such as new products, syncing events, etc. into Drupal. Often, this takes the form of rough data imports for critical fields (like product information) that site admins can add to using Drupal 8’s editing capabilities.

Take it to the Skies

While the Salesforce Suite module is a great start, any complex integration requires an experienced and competent Drupal development team to implement. Establishing an API connection is one thing, but building a Drupal 8 site that adapts to changing conditions on the Salesforce side is critical, as is sound architecture on the Drupal 8 side to ensure data integrity and easy management for non-technical site admins.

Looking to connect Drupal 8 with Salesforce? Contact us about your project and see how we can help.

Nov 01 2018
Nov 01

Content migration is a topic with a lot of facets. We’ve already covered some important migration information on our blog:

So far, readers of this series will have gotten lots of good process information, and learned how to move a Drupal 6 or 7 site into Drupal 8. This post, though, will cover what you do when your content is in some other data framework. If you haven’t read through the previous installments, I highly recommend you do so. We’ll be building on some of those concepts here.

Content Type Translation

One of the first steps of a Drupal to Drupal migration is setting up the content types in the destination site. But what do you do if you are moving to Drupal from another system? Well, you will need to do a little extra analysis in your discovery phase, but it’s very doable.

Most content management systems have at least some structure that is similar to Drupal’s node types, as well as a tag/classification/category system that is analogous to Drupal’s taxonomy. And it’s almost certain to have some sort of user account. So, the first part of your job is to figure out how all that works.

Is there only one ‘content type’, which is differentiated by some sort of tag (“Blog Post”, “Product Page”, etc.)? Well, then, each of those might be a different content type in Drupal. Are Editors and Writers stored in two different database tables? Well, you probably just discovered two different user roles, and will be putting both user types into Drupal users, but with different roles. Does your source site allow comments? That maps pretty closely to Drupal comments, but make sure that you actually want to migrate them before putting in the work! Drupal 8 Content Migration: A Guide For Marketers, one of the early posts in this series, can help you make that decision.

Most CMS systems will also have a set of meta-data that is pretty similar to Drupal’s: created, changed, author, status and so on. You should give some thought to how you will map those fields across as well. Note that author is often a reference to users, so you’ll need to consider migration order as well.

If your source data is not in a content management system (or you don’t have access to it), you may have to dig into the database directly. If you have received some or all of your content in XML, CSV, or other text-type formats, you may just have to open the files and read them to see what you are working with.

In short, your job here will be to distill the non-Drupal conventions of your source site into a set of Drupal-compatible entity types, and then build them.

Migration from CSV

CSV is an acronym for “Comma-Separated Values”, a file format often used for transferring data in large quantity. If you get some of your data from a client in a spreadsheet, it’s wise to export it to CSV. This format strips all the MS Office or Google Sheets gobbledygook, and just gives you a straight block of data.

Currently, migrations of CSV files into Drupal use the Migrate Source CSV module. However, this module is being moved into core, after which the contrib version will be deprecated. Check the Bring migrate_source_csv to core issue to see what the status on that is, and adjust this information accordingly.

The Migrate Source CSV module has a great example and some good documentation, so I’ll just touch on the highlights here.

First, know that CSV isn’t super-well structured, so each entity type will need to be a separate file. If you have a spreadsheet with multiple tabs, you will need to export each separately, as well.

Second, connecting to it is somewhat different than connecting to a Drupal database. Let’s take a look at the data and source configuration from the default example linked above.

migrate_source_csv/tests/modules/migrate_source_csv_test/artifacts/people.csv




id,first_name,last_name,email,country,ip_address,date_of_birth
1,Justin,Dean,jdean0@example.com,Indonesia,60.242.130.40,01/05/1955
2,Joan,Jordan,jjordan1@example.com,Thailand,137.230.209.171,10/14/1958
3,William,Ray,wray2@example.com,Germany,4.75.251.71,08/13/1962


migrate_source_csv/tests/modules/migrate_source_csv_test/config/install/migrate_plus.migration.migrate_csv.yml (Abbreviated)




...
source:
  plugin: csv
  path: /artifacts/people.csv
  keys:
    - id
  header_row_count: 1
  column_names:
    -
      id: Identifier
    -
      first_name: 'First Name'
    -
      last_name: 'Last Name'
    -
      email: 'Email Address'
    -
      country: Country
    -
      ip_address: 'IP Address'
    -
      date_of_birth: 'Date of Birth'
...


Note first that this migration is using plugin: csv, instead of the d7_node or d7_taxonomy_term that we’ve seen previously. This plugin is in the Migrate Source CSV module, and handles reading the data from the CSV file.

  path: /artifacts/people.csv

The path config, as you can probably imagine, is the path to the file you’re migrating.  In this case, the file is contained within the module itself.




keys:
  - id


The keys config is an array of columns that are the unique id of the data.




header_row_count: 1
column_names:
  -
    id: Identifier
  -
    first_name: 'First Name'
  -
    last_name: 'Last Name'
  ...


These two configurations interact in an interesting way. If your data has a row of headers at the top, you will need to let Drupal know about it by setting a header_row_count. When you do that, Drupal will parse the header row into field ids, then move the file to the next line for actual data parsing.

However, if you set the column_names configuration, Drupal will override the field ids created when it parsed the header row. By passing only select field ids, you can skip fields entirely without having to edit the actual data. It also allows you to specify a human-readable field name for the column of data, which can be handy for your reference, or if you’re using Drupal Migrate’s admin interface.

You really should set at least one of these for each CSV migration.
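To make the header_row_count / column_names interaction concrete, here is a rough illustration of consuming a header row and then keeping only selected columns (Python standing in for illustration; this is not Drupal’s actual implementation):

```python
import csv
import io

# Same shape as the people.csv sample above, trimmed to a few columns.
data = """id,first_name,last_name,email
1,Justin,Dean,jdean0@example.com
2,Joan,Jordan,jjordan1@example.com
"""

reader = csv.reader(io.StringIO(data))

# header_row_count: 1 -- consume the first row and use it as field ids.
field_ids = next(reader)

# column_names overrides those ids and can skip columns entirely;
# here we keep only id and email (the values are human-readable labels).
column_names = {"id": "Identifier", "email": "Email Address"}

rows = []
for row in reader:
    record = dict(zip(field_ids, row))
    rows.append({key: record[key] for key in column_names})

print(rows[0])  # → {'id': '1', 'email': 'jdean0@example.com'}
```

Skipping columns at parse time like this is exactly why column_names is handy: you never have to edit the source file.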

The process configuration will treat these field ids exactly the same as a Drupal fieldname.

Process and Destination configuration for CSV files are pretty much the same as with a Drupal-to-Drupal import, and they are run with Drush exactly the same.

Migration from XML/RSS

XML is a common data storage format that presents data in a tagged structure. Many content management systems or databases have an ‘export as XML’ option. One advantage XML has over CSV is that you can put multiple data types into a single file. Of course, if you have lots of data, this advantage could turn into a disadvantage as the file size balloons! Weigh your choice carefully.

The Migrate Plus module has a data parser for XML, so if you’ve been following along with our series so far, you should already have this capability installed.

Much like CSV, you will have to connect to a file, rather than a database. RSS is a commonly used xml format, so we’ll walk through connecting to an RSS file for our example. I pulled some data from Phase2’s own blog RSS for our use, too.

https://www.phase2technology.com/ideas/rss.xml (Abbreviated)




<?xml version="1.0" encoding="utf-8"?>
<rss ... xml:base="https://www.phase2technology.com/ideas/rss.xml">
  <channel>
    <title>Phase2 Ideas</title>
    <link>https://www.phase2technology.com/ideas/rss.xml</link>
    <description/>
    <language>en</language>
    <item>
      <title>The Top 5 Myths of Content Migration *plus one bonus fairytale</title>
      <link>https://www.phase2technology.com/blog/top-5-myths-content</link>
      <description>The Top 5 Myths of Content Migration ... </description>
      <pubDate>Wed, 08 Aug 2018 14:23:34 +0000</pubDate>
      <dc:creator>Bonnie Strong</dc:creator>
      <guid isPermaLink="false">1304 at https://www.phase2technology.com</guid>
    </item>
  </channel>
</rss>


example_xml_migrate/config/install/migrate_plus.migration.example_xml_articles.yml




id: example_xml_articles
label: 'Import articles'
status: true
source:
  plugin: url
  data_fetcher_plugin: http
  urls: 'https://www.phase2technology.com/ideas/rss.xml'
  data_parser_plugin: simple_xml
  item_selector: /rss/channel/item
  fields:
    -
      name: guid
      label: GUID
      selector: guid
    -
      name: title
      label: Title
      selector: title
    -
      name: pub_date
      label: 'Publication date'
      selector: pubDate
    -
      name: link
      label: 'Origin link'
      selector: link
    -
      name: summary
      label: Summary
      selector: description
  ids:
    guid:
      type: string
destination:
  plugin: 'entity:node'
process:
  title:
    plugin: get
    source: title
  field_remote_url: link
  body: summary
  created:
    plugin: format_date
    from_format: 'D, d M Y H:i:s O'
    to_format: 'U'
    source: pub_date
  status:
    plugin: default_value
    default_value: 1
  type:
    plugin: default_value
    default_value: article


The key bits here are in the source configuration.




source:
  plugin: url
  data_fetcher_plugin: http
  urls: 'https://www.phase2technology.com/ideas/rss.xml'
  data_parser_plugin: simple_xml
  item_selector: /rss/channel/item


Much like CSV’s use of the csv plugin to read a file, XML does not use the d7_node or d7_taxonomy_term plugin to read the data. Instead, it pulls in a URL and reads the data it finds there. The data_fetcher_plugin takes one of two possible values: http for a remote source, like an RSS feed, or file for a local file. The urls config should be pretty obvious.

The data_parser_plugin specifies which PHP library to use to read and interpret the data. Possible parsers here include JSON, SOAP, XML and SimpleXML. SimpleXML is a great library, so we’re using it here.

Finally, item_selector defines where in the XML the items we’re importing can be found. If you look at our data example above, you’ll see that the actual nodes are in rss -> channel -> item. Each node would be an item.
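If it helps to see the selection logic outside Drupal, here is a rough equivalent using Python’s ElementTree (the migrate system itself uses PHP’s SimpleXML; the sample data below is a trimmed, hypothetical feed):

```python
import xml.etree.ElementTree as ET

rss = """<?xml version="1.0" encoding="utf-8"?>
<rss><channel>
  <item><guid>1304</guid><title>First post</title></item>
  <item><guid>1305</guid><title>Second post</title></item>
</channel></rss>"""

root = ET.fromstring(rss)

# item_selector: /rss/channel/item -- root is already <rss>,
# so we select channel/item relative to it.
items = root.findall("./channel/item")

# Each field's selector is then read relative to the selected item.
records = [{"guid": i.findtext("guid"), "title": i.findtext("title")}
           for i in items]

print(records[0])  # → {'guid': '1304', 'title': 'First post'}
```

The key point is the two-level addressing: item_selector picks out the repeating elements, and each field’s selector reaches inside one of them.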




fields:
  ...
  -
    name: pub_date
    label: 'Publication date'
    selector: pubDate
  ...


Here you see one of the fields from the xml. The label is just a human-readable label for the field, while the selector is the field within the XML item we’re getting.

The name is what we’ll call a pseudo-field. A pseudo-field acts as temporary storage for data. When we get to the process section, pseudo-fields are treated essentially as though they were fields in a database.

We’ve seen pseudo-fields before, when we were migrating taxonomy fields in Drupal 8 Migrations: Taxonomy and Nodes. We will see why they are important here in a minute, but there’s one more important thing in source.




ids:
  guid:
    type: string


This snippet sets the guid as the unique identifier of the articles we’re importing. This guarantees uniqueness and is very important to specify.

Finally, we get to the process section.




process:
  ...
  created:
    plugin: format_date
    from_format: 'D, d M Y H:i:s O'
    to_format: 'U'
    source: pub_date
  ...


So, here is where we use the pseudo-field we set up before. This takes the value from pubDate that we stored in the pseudo-field pub_date, reformats it, and assigns it to the created field in Drupal. The rest of the fields are handled in a similar fashion.
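For reference, the same conversion is easy to verify outside Drupal: PHP’s 'D, d M Y H:i:s O' pattern corresponds to strptime’s '%a, %d %b %Y %H:%M:%S %z', and 'U' means a Unix timestamp. A quick check in Python, using the pubDate value from the sample feed:

```python
from datetime import datetime

pub_date = "Wed, 08 Aug 2018 14:23:34 +0000"

# from_format: 'D, d M Y H:i:s O'  ->  strptime equivalent
parsed = datetime.strptime(pub_date, "%a, %d %b %Y %H:%M:%S %z")

# to_format: 'U'  ->  seconds since the Unix epoch
created = int(parsed.timestamp())

print(created)  # → 1533738214
```

If format_date ever throws errors during a migration, reproducing the conversion like this is a fast way to find a mismatched format string.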

Destination is set up exactly like a Drupal-to-Drupal migration, and the whole thing is run with Drush the exact same way. Since RSS is a feed of real-time content, it would be easy to set up a cron job to run that Drush command with the --update flag, turning this migration from a one-time content import into a regular update job that keeps your site in sync with the source.
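For instance, assuming a hypothetical migration id of example_rss, a cron entry along these lines would turn the import into an hourly sync job (the exact command name varies slightly between Drush versions):

# Re-import the feed every hour, updating previously imported items
0 * * * * cd /var/www/mysite && drush migrate:import example_rss --update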

Migration from WordPress

WordPress export screenshot

A common migration path is from WordPress to Drupal. Phase2 recently did so with our own site, and we have done it for clients as well. There are several ways to go about it, but our own migration used the WordPress Migrate module.

In your WordPress site, under Tools >> Export, you will find a tool to dump your site data into a customized XML format (WXR). You can also use the wp-cli tool to do it from the command line, if you like.

Once you have this file, it becomes your source for all the migrations. Here’s some good news: it’s an XML file, so working with it is very similar to working with RSS. The main difference is in how we specify our source connections.

example_wordpress_migrate/config/install/migrate_plus.migration.example_wordpress_authors.yml




langcode: en
status: true
dependencies:
  enforced:
    module:
      - phase2_migrate
id: example_wordpress_authors
class: null
field_plugin_method: null
cck_plugin_method: null
migration_tags:
  - example_wordpress
  - users
migration_group: example_wordpress_group
label: 'Import authors (users) from WordPress WXR file.'
source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: xml
  item_selector: '/rss/channel/wp:author'
  namespaces:
    wp: 'http://wordpress.org/export/1.2/'
    excerpt: 'http://wordpress.org/export/1.2/excerpt/'
    content: 'http://purl.org/rss/1.0/modules/content/'
    wfw: 'http://wellformedweb.org/CommentAPI/'
    dc: 'http://purl.org/dc/elements/1.1/'
  urls:
    - 'private://example_output.wordpress.2018-01-31.000.xml'
  fields:
    -
      name: author_login
      label: 'WordPress username'
      selector: 'wp:author_login'
    -
      name: author_email
      label: 'WordPress email address'
      selector: 'wp:author_email'
    -
      name: author_display_name
      label: 'WordPress display name (defaults to username)'
      selector: 'wp:author_display_name'
    -
      name: author_first_name
      label: 'WordPress author first name'
      selector: 'wp:author_first_name'
    -
      name: author_last_name
      label: 'WordPress author last name'
      selector: 'wp:author_last_name'
  ids:
    author_login:
      type: string
process:
  name:
    plugin: get
    source: author_login
  mail:
    plugin: get
    source: author_email
  field_display_name:
    plugin: get
    source: author_display_name
  field_first_name:
    plugin: get
    source: author_first_name
  field_last_name:
    plugin: get
    source: author_last_name
  status:
    plugin: default_value
    default_value: 0
destination:
  plugin: 'entity:user'
migration_dependencies: null


If you’ve been following along in our series, a lot of this should look familiar.




source:
  plugin: url
  data_fetcher_plugin: file
  data_parser_plugin: xml
  item_selector: '/rss/channel/wp:author'


This section works exactly like the XML RSS example above. Instead of using http, we are using file for the data_fetcher_plugin, so it looks for a local file instead of making an HTTP request. Additionally, due to the difference in structure between an RSS feed and a WordPress WXR file, the item_selector is different, but it works the same way.




namespaces:
  wp: 'http://wordpress.org/export/1.2/'
  excerpt: 'http://wordpress.org/export/1.2/excerpt/'
  content: 'http://purl.org/rss/1.0/modules/content/'
  wfw: 'http://wellformedweb.org/CommentAPI/'
  dc: 'http://purl.org/dc/elements/1.1/'


These namespace designations allow Drupal’s XML parser to understand the particular brand and format of the WordPress export.




urls:
  - 'private://example_output.wordpress.2018-01-31.000.xml'


Finally, this is the path to your export file. Note that it is in Drupal’s private file space, so you will need to have private file management configured in your Drupal site before you can use it.




fields:
  -
    name: author_login
    label: 'WordPress username'
    selector: 'wp:author_login'


We’re also setting up pseudo-fields again, storing the value from wp:author_login in author_login.

Finally, we get to the process section.




process:
  name:
    plugin: get
    source: author_login


So, here is where we’re using the pseudo-field we set up before. This takes the value from wp:author_login that we stored in author_login and assigns it to the name field in Drupal.
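Since get is the default process plugin in Drupal's Migrate API, the same mapping can also be written in one-line shorthand:

process:
  name: author_login

Both forms behave identically; the longer form is handy when you later need to chain additional plugins onto a field, as we do in the cautionary tale below.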

Configuration for the migration of the rest of the entities - categories, tags, posts, and pages - looks pretty much the same. The main difference is that the source will change slightly:

example_wordpress_migrate/config/install/migrate_plus.migration.example_wordpress_category.yml  (abbreviated)




source:
...
  item_selector: '/rss/channel/wp:category'


example_wordpress_migrate/config/install/migrate_plus.migration.example_wordpress_tag.yml (abbreviated)




source:
...
  item_selector: '/rss/channel/wp:tag'


example_wordpress_migrate/config/install/migrate_plus.migration.example_wordpress_post.yml (abbreviated)




source:
...
  item_selector: '/rss/channel/item[wp:post_type="post"]'
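Pages follow the same pattern with a different XPath predicate; a plausible selector (illustrative, not taken from the module's actual config) would be:

source:
...
  item_selector: '/rss/channel/item[wp:post_type="page"]'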


And, just like our previous two examples, WordPress migrations can be run with Drush.

A cautionary tale

As we noted in Managing Your Drupal 8 Migration, it’s possible to write custom Process Plugins. Depending on your data structure, it may be necessary to write a couple to handle values in these fields. During the recent migration of Phase2’s site, after doing a baseline test migration of our content, we discovered a ton of malformed links and media entities. So, we wrote a process plugin that did a bunch of preg_replace calls to clean up links, file paths, and code formatting in our body content. This was chained with the default get plugin like so:




process:
  body/value:
    -
      plugin: get
      source: content
    -
      plugin: p2body


The plugin itself is a pretty custom bit of work, so I’m not including it here. However, a post on custom plugins for migration is in the works, so stay tuned.

Useful Resources and References

If you’ve enjoyed this series so far, we think you might enjoy a live version, too! Please drop by our session proposal for DrupalCon Seattle, Moving Out, Moving In! Migrating Content to Drupal 8, and leave some positive comments.

Nov 01 2018

"In a virtual community we can go directly to the place where our favourite subjects are being discussed, then get acquainted with people who share our passions or who use words in a way we find attractive. Your chances of making friends are magnified by orders of magnitude over the old methods of finding a peer group."
- Howard Rheingold, The Virtual Community, 1994

Communities are important to the success of any multimedia information system today. Gaming is no exception, especially now that it has become part of our new media culture, entertaining people of all ages. The satisfaction of gaming community members can influence the success of a game, and it is no secret that the highest-selling games have the largest communities.

pokemon gif with money falling over on meow


To keep up the community and the platform with the latest trends, features, and functionalities, it is important that you choose the right technology for your platform. Drupal is an easy choice. But why are gaming communities increasingly opting for Drupal as the platform of their choice? 

“The famous AR game Pokemon Go gained unprecedented momentum, driving Nintendo’s stock value up dramatically and achieving $470 million in revenue in just 80 days.”

The Power Of Gaming: Why Does the Gaming Industry Need Community?

Not very often do we associate the word community with gaming. And yet, these community platforms are where games really mature. In terms of engagement and shared values, a common cultural background plays an important role, which can be reflected in the spatiotemporal distribution of gamers.

The community of gamers can be identified either as a whole or as part of video game culture. It comprises people who play games and those who are interested in watching and reading about them.

Community support is important for both game development and community building. 

  • User Acquisition: A shared goal or interest provides the reason for being part of the community. A community is what builds a game, and community is what drives a game beyond niche success into a blockbuster; the ROI of an engaged, excited community is off the charts.

    Intense interactions and strong ties are not only important for online multiplayer games, they enhance the intensity and user experience too.  

    Over 53% of US teenagers play online games with people they know in their offline lives (Pew Research, 2015). Community support allows integration of offline friend circles into online communities.  
     

  • User Retention: Gaming communities play a crucial part in retaining users, as video games have grown into a subculture of their own.

    Community services enhance competition within games, which in turn builds customer loyalty. Games and gaming communities are strongly intertwined and in constant co-development.

    Retention starts with forum discussions about new features, the problems players encounter, and advice on gaming strategies.

    Modern games provide direct in-game communication, which is not restricted to simple message exchange but also involves further service functionality.

  • Improves Quality: Gaming communities are a place of intense interaction; after all, games are about shared experiences, rendered with extraordinary interactivity and ownership. All successful games have communities.

    The infamous pose: Tracer's over-the-shoulder victory pose

    And this is where the changes come from. Remember the infamous Tracer pose controversy from 2016? It was only after the community voiced its outrage that gaming giant Blizzard Entertainment pulled the pose to better represent its values.


Why are Gaming Communities Opting for Drupal?

What does Drupal offer gaming communities that makes them opt for it? Here is why Drupal is the choice for community platforms.

  • Decoupled Drupal for Intuitive Game Live UI Experiences

Much like physical sports, video games demand a certain standard of ability where the player can enjoy from the very moment the game is started. Regardless of whether there is an explicit tutorial, players must instantly intuit what to do, what the basic rules are, what is good, what is bad, and how to go about doing the various things that can be accomplished.

The more realism your game offers, the longer gamers will want to play.

With decoupled Drupal, you can create an interactive experience for gamers by using your site to drive in-browser applications. The front end can be built with technologies such as jQuery, JavaScript, Node.js, and React.js, while the Drupal backend becomes the system of record; the interaction happens in real time in the browser, back and forth.

Headless development can unleash a game's creative potential, delivering an experience that is faster, more natural, intuitive, and responsive on the gamer's end. The end result is smoother, faster games played live.

  • Gameplay based customizations

Games allow players to perceive themselves in alternate ways in their imagined worlds. Player identification, with avatars and characters, helps build interest while improving the gameplay experience, and it is equally important for maintaining that identity across the game's communities.
 

Garen avatar highlighted: Avatars in League of Legends

An example of this is the website of League of Legends, built on Drupal. It is a team-oriented strategy game where the goal is to work together and bring down the enemy nexus located in the middle of the map.

Roles of assassin, fighter, mage, and marksman offered in League of Legends

Drupal has tools and services for building user profiles, fostering the creation of virtual sessions, allowing communication with third party serious games, and storing and processing the game analytics. This is important since it helps the gamer take the game more seriously and relate to it on a virtual level.

  • Scalability

Zynga, a leading developer of the world's most popular social games, runs its website on Drupal. It claims 100 million monthly unique visitors, making it the largest online gaming destination on the web.

Scalability is Drupal’s middle name
Farmville 2 description on Zynga

Handling high volumes of visitors, content, and users is a tough job. But Drupal does it easily. As it is said, “scalability is Drupal’s middle name”. Some of the busiest sites across the world are built on Drupal. 

It is adept at handling sites that see bursts of huge traffic, which means your gaming website can perform spectacularly even on the busiest days without breaking or bending.

  • Multimedia support

Visit the famous Star Wars: The Old Republic (SWTOR) website and the background has video snippets playing from the game. Multimedia support is not new to the gaming industry. To keep engagement high, you need to support multimedia features like scorecards, videos, photos, and audio, among others.

gif from star wars games

Drupal is a highly versatile and customizable CMS. It has various modules available to support this need. The photo gallery module, the media entity module, and easy-to-use templates to customize appearance are just a few from the list.

Not just this, the photo gallery module helps you customize images with templates and build your scorecards.

  • Mobile Responsiveness

Video games have once again found themselves more widely played and accepted, thanks to increasing smartphone reach. Add one more requirement: your game needs to be device-responsive too, with easy and intuitive controls.

Drupal 8 is device-responsive out of the box. This means your content adjusts well from the big desktop screen to the small one: image sizes change, menu items shift to drop-downs, and other items are rearranged to suit the content and the size of the device.

But games are not just about squeezing into a different size. They need to offer the same experience as a native web application without taking away the intuitive design. This can be sorted with the Hammer.js module in Drupal. Hammer.js helps you enhance the user experience by providing cross-browser support and taking away a lot of complexity when implementing touch and pointer gestures. Leveraging it in Drupal is easier than ever thanks to the Library API of Drupal 8.

  • Adding complex categories and catalogs

Gaming communities are a lot different from what gaming websites offer. Since each game will have different sub-communities, you need to build those categories with a design apt to each theme.

screenshot from leagues of legends with various categories and catalogs

Drupal provides a powerful taxonomy engine, allowing gaming companies to support intricate designs and complex categories and catalogs without much ado. The flexibility of adding different types of products and content is ensured by the content creation kit (CCK), which allows you to add custom fields to any of the content types using a web interface.

  • Discussions, Reviews, and News

Communities are all about discussing what is happening or has happened. Therefore, one of the primary community needs is easy content creation with different content types. The more types there are, the higher the engagement and the more users will interact. Blogs, events, FAQs, and news are all important.

Screengrab from League of Legends: news section
  • Quick Search 

Communities are a busy place with a lot of activities happening at the same time. Content that might interest a user can get lost in the myriad of content. In Drupal, Solr can be used to get more accurate results in less time.

search for new games and its results

Drupal integrates with Apache Solr for faster search. Solr is a highly reliable, scalable and fault-tolerant search application which provides distributed indexing, replication, and load-balanced querying with centralized configuration.

  • E-commerce Solution

Integrating commerce with the website is an old practice, and most gaming companies leverage this opportunity to boost their sales. Klei, an independent game studio, chose Drupal to create a seamless shopping experience for both mobile and desktop users.

According to The Jibe, "Klei needed a site and store that was as easy for them to manage as it was for their customers to buy: easy sorting, featured items, promo-code inputs, simple searching, and clear calls-to-action."

shopping cart with Klei

After integrating the online store with Drupal, the team can easily add new products and games on the fly while also managing promotions and highlighting featured items.

Drupal Commerce and Commerce Kickstart are two of the most popular solutions Drupal offers. With easy payment gateway integration, your online transactions are secure with Drupal.

Drupal vs. WordPress 2018

Building a Community website

Building an online community means building a network of people with shared interests and goals, inviting a target niche audience to be part of it, with easy usability and navigation.

Example: Pinterest Community

Winner: Drupal 8 

Why? Extensive user management in your community requires custom fields, different content types, scalability, and varied user roles and permissions, among other things, all of which are easy to build in Drupal 8. If you need a simple community with limited features and functionality, then maybe WordPress will work. But then that format would be closer to a blog, anyway.

Building a Gaming Website

These are sites featuring direct online gaming, single-player or multiplayer, and can include games of any type from different genres.

Example: Zynga

Winner: Drupal 8 (Clearly)

Why? While you might think of Drupal as a preconfigured PHP framework, it is vastly more suited to developing an online game than WordPress is. Drupal is fast, mobile-responsive and scalable. It can handle as much content as you want and as many people as you can think of, without crashing.

And as far as WordPress is concerned, why would you want to choose a software built from a blogging background to create a game?

Building a Basic Gaming related Website

These are sites devoted to the world and culture of computer gaming, including gaming news, magazines, FAQs, and resources.

Winner: WordPress

Why? Although Drupal 8 is well suited to handle the content, WordPress has a slight edge here. All the types mentioned are related to publishing. Being a blogging platform at heart, WP can suit these needs better since its out-of-the-box configuration comes closer to your goals.

However, if varied features such as user login, reviews, multimedia content management, and discussions are added, then Drupal is clearly the hero.

Building a Media-Streaming Website

These are the sites that offer audio/video streaming services, such as podcast, television and film, sports, music, among others.

Example: AXN 

Winner: Drupal 8

Why? Drupal 8 can handle multimedia content much more flexibly than WordPress. While WordPress excels at handling content that's primarily text, Drupal 8 makes all types of media first-class citizens.

With clear taxonomy and easier role management, coupled with faster load times, it won’t bend or break when streaming content live.

Summing Up

Community platforms have become an easy measure of the success of any game, since they serve a combination of purposes ranging from technical to human factors. Going forward, community satisfaction measures need to be considered in order to improve the product model and quality.

Drupal serves most needs of the gaming industry, so opting for it should be a no-brainer. Drop a mail at [email protected] to connect with us if you are building your gaming website or community platform.

Nov 01 2018

Imagine that you work for an organisation that builds web applications for its clients. Every time you gain a new client, you visit AWS, or any cloud provider for that matter. You wind up with two VMs, one running the app and one for the associated database. You need at least two copies of this infrastructure, for production and staging, and then you start deploying the code for that client. And the process starts all over again with each new client. Instead, by utilising Infrastructure as Code (IaC), you run a bit of code and that’s it: you are all set to go!

Foundation of a building under construction on left hand side and fully constructed building on right hand side


Infrastructure and Operations (I&O) teams must disrupt their traditional infrastructure architecture strategies with IaC. This comprises investing in hybrid cloud, containers, composable infrastructure and the automation for supporting these workloads. As we hurtle towards wall-to-wall internet of things (IoT) and edge computing, a holistic strategy for IaC becomes more significant to enterprises than ever before. It will be interesting to witness the power of Infrastructure as Code for the deployment of Drupal-based web applications. Before we dive into that, let’s see how IaC helps in efficient software delivery.

Solving environment drift in the release pipeline

Illustration showing sheets of paper on the left-hand side and a lot of computers on the right-hand side


Infrastructure as Code refers to the governance of infrastructure (networks, virtual machines, load balancers, connection topology) in a descriptive model, leveraging the same versioning the DevOps team uses for source code. On the same principle that the same source code generates the same binary, an IaC model generates the same environment whenever it is applied. It is an integral DevOps practice and is used in combination with Continuous Delivery.

Infrastructure as Code refers to the governance of infrastructure in a descriptive model by leveraging the same versioning as DevOps team uses for source code

IaC evolved to solve the environment drift in the release pipeline because:

  • Without IaC, teams must maintain the settings of each deployment environment individually.
  • Over time, each environment becomes a snowflake: a unique configuration that cannot be reproduced automatically.
  • Inconsistent environments incur deployment obstacles.
  • With snowflakes, management and maintenance of infrastructure involve manual processes that are difficult to track and contribute to errors.

Idempotence, a principle of IaC, is the property by which, no matter what the environment’s starting state is, the deployment command always sets the target environment into the same configuration. It is attained either by automatically configuring an existing target or by discarding the existing environment and recreating a fresh one.
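Ansible, one of the tools surveyed later in this article, illustrates the principle well: a task declares a desired state rather than an action, so it can be run any number of times and always converges on the same result. A minimal sketch (the path and mode here are placeholders):

# Declares a state, not an action: the directory is created on the first
# run; subsequent runs report "ok" without changing anything.
- name: Ensure the backup directory exists
  file:
    path: /var/backups/db
    state: directory
    mode: '0750'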
 
With IaC, DevOps teams can test applications in production-like environments early in the development cycle, provisioning multiple test environments reliably and on demand. Infrastructure represented as code can also be validated and tested to avoid common deployment challenges. Simultaneously, the cloud dynamically provisions and tears down environments based on IaC definitions.
 
Implementing Infrastructure as Code helps deliver stable environments rapidly and at scale. By representing the desired state of their environments via code, teams avoid manual configuration and enforce consistency. Infrastructure deployments become repeatable and are safeguarded against runtime issues caused by configuration drift or missing dependencies. DevOps teams can work with a unified set of practices and tools to deliver applications and their supporting infrastructure quickly, reliably and at scale.

Benefits of Infrastructure as Code

Graphical representation showing horizontal bars in light and dark blue colours to depict benefits of Infrastructure as Code

 

  • Minimising Shadow IT: Allowing a fast response to new IT requirements through IaC assisted deployment ensures higher security, compliance with corporate IT standards and helps with budgeting and cost allocation.
  • Satisfying Customers: Delivering a quality service component with a short time period leads to customer satisfaction and enhanced perception of IT within an organisation.
  • Reducing operational expenditures: An enterprise can configure and deploy a completely tested and compliant new IT infrastructure asset in just a matter of minutes, with minimal or no human intervention at all. This saves an enormous amount of work time and reduces security-related financial risk.
  • Reducing capital expenditure: A developer accomplishing the tasks of several team members on their own, particularly in the context of DevOps, greatly benefits the project's capital expenditure.
  • Standardisation: When the creation of new infrastructure is coded, there is consistency in the set of instructions and standardisation.
  • Safer change handling: Standardisation assurance allows safer alterations to take place with lower deviation rates.

Challenges of using Infrastructure as Code

  • Organisational resistance to change: The largest organisational challenges stem from budget limitations, which can deter an organisation’s ability to hire or retrain staff and lead to an overall resistance to change.
  • The dearth of expertise in-house: Lack of in-house expertise can pose a technical hurdle.
  • Shortage of tools and skills, and the fear of loss of control: As IaC languages are more code-like than script-like, developers are generally comfortable with them, but this poses issues for the Ops team, which is more concerned with configuration control conflicts as it has traditionally had full control over configurations.

'Key recommendations' written inside a box at the top and some bullet points follows after that to explain Infrastructure as Code


Infrastructure as Code tools

Infographics showing a quarter of a circle inside a square-shaped box with several small circles inside it denoting the market presence of Infrastructure as Code tools
Source: Forrester Wave™: Configuration Management Software For Infrastructure Automation, Q4 ’17
  • The Puppet open source engine emphasises supporting configuration management on numerous platforms, such that if a system is reachable by IP then it must be configurable.
  • Puppet Enterprise augments the open source Puppet providing a web-based UI to enable visibility into configurations, dependencies and events.
  • The Chef open source engine leverages an imperative approach with support for several operating systems, containers and cloud services.
  • Chef Automate builds on the Chef open source automation engine which incorporates respective projects of Habitat and InSpec and offers a web-based GUI and dashboard for compliance visibility.
  • The Salt open source project provides the option to run the modular software with or without agents and using push or pull processes.
  • SaltStack Enterprise builds on the open source Salt offering that gives you an enterprise GUI and API for integration.
  • Normation Professional Services sells plug-ins for Windows/AIX support, auditing and HTTP data sourcing integration.
  • Rudder is an open source automation platform that emphasises on continuous reliability.
  • Ansible open source project emphasises on minimalism and easy usage. It does not require any agents and relies on SSH and WinRM to remotely control member nodes which limits the resource usage and potential network traffic.
  • Ansible Tower is an enterprise solution for Ansible that emphasises on improving the open source project’s analytics and compliance capabilities.
  • Microsoft Azure Automation is a SaaS-based suite for process automation.
  • Microsoft PowerShell DSC is a configuration management execution engine which is developed primarily for Windows with support for Linux and MacOS added recently.
  • CFEngine Community Edition is an open source automation engine which is considered the father of modern-day configuration management.
  • The Enterprise version of CFEngine offers a GUI/dashboard to manage and monitor node health, user-based and role-based management, richer reporting, asset management capabilities, and modules to support AIX and Windows.

Infrastructure as Code for Drupal

A digital agency showed how to automate the whole deployment process from start to finish by leveraging Ansible. Ansible is agentless and has a great ecosystem, and its YAML syntax is easy to read, understand and maintain. This could be automated using any other provisioning tool, like Chef or Puppet, as well.

Black background with ‘Ansible + Drupal’ written in blue and ‘ A fortuitous DevOps Match’ written in white below it


The project involved making the Ansible playbooks part of the codebase, living alongside the Drupal code. It is also considered an industry-wide good practice to have infrastructure and deployment as part of the code. It is still not technically a 100% Infrastructure-as-Code setup, as only the provisioning scripts were checked in, not the code to spin up the actual servers. The playbooks assume that the servers already exist, with Docker and Docker Compose installed and SSH access available.

This setup made the deployment process consistent and repeatable, as any developer on the team with the necessary permissions could run the script and get the same results every time. Moreover, when the build fails, it fails loud and clear, showing exactly where things went wrong.

Challenges in the project

Rollback was not guaranteed in this process. If, for instance, a deployment fails, you would have to manually roll back to the previous state. But the process does store DB backups, so it would not be an arduous task to add a rollback mechanism with a rollback tag and some parameters, such as which commit to roll back to and which DB to reset to.

Steps to be performed

A significant precursor to automating is to document and have a script for each step. The tasks were split into two categories, namely:

  • Setting up the system like creating DB backup directories
  • Running the DB updates via Drush

Ansible has the concept of tags, and two tags were defined here, namely ‘setup’ and ‘deploy’.

The setup-only tasks included:

  • Creation of a directory for DB files to persist
  • Creation of a directory for storing DB backups
  • Creation of a directory for storing file backups

The tasks run for both setup and deployment included:

  • Creation of a backup of files and DB
  • Cloning the correct code, that is, specified branch or bleeding edge.
  • Creating .env file
  • Building and booting the latest containers for all services
  • Running composer install and DB updates, importing config from files and clearing cache (Drupal specific)
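The split above can be sketched as a playbook in which each task carries one or both tags, so that `ansible-playbook site.yml --tags setup` performs the one-time setup and `--tags deploy` performs a deployment. This is a minimal illustration, not the agency's actual playbook; the paths, variables and helper script are hypothetical:

```yaml
# Hypothetical sketch: tagging tasks as 'setup', 'deploy' or both.
- hosts: drupal_servers
  tasks:
    - name: Create directory for DB backups
      file:
        path: /var/backups/db
        state: directory
      tags: [setup]

    - name: Back up files and DB before changing anything
      command: ./scripts/backup.sh   # hypothetical helper script
      tags: [setup, deploy]

    - name: Check out the specified branch (or bleeding edge)
      git:
        repo: "{{ repo_url }}"
        dest: /var/www/drupal
        version: "{{ git_branch | default('master') }}"
      tags: [setup, deploy]

    - name: Run Drupal DB updates via Drush
      command: drush updatedb -y
      args:
        chdir: /var/www/drupal
      tags: [deploy]
```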

It is important to secure your servers prior to deploying the application. Ansible helps store sensitive information, such as DB credentials, the SSH key pair and the server user credentials, in encrypted form. This setup also enables you to easily build production replicas or non-production environments.
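Sensitive values like these are typically kept in a variables file encrypted with Ansible Vault. A sketch, with hypothetical variable names:

```yaml
# vault.yml -- encrypt with `ansible-vault encrypt vault.yml`,
# then decrypt at runtime with `ansible-playbook ... --ask-vault-pass`
# (or --vault-password-file). Values below are placeholders.
drupal_db_user: drupal
drupal_db_password: "change-me"
deploy_ssh_private_key: |
  -----BEGIN OPENSSH PRIVATE KEY-----
  ...
  -----END OPENSSH PRIVATE KEY-----
```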

In the years to come

IaC has a bright future with its ability to provision and manage computing resources. While it does come with its own set of implementation barriers, the benefits it delivers far exceed the challenges it currently faces.
 
As the tools and frameworks associated with Infrastructure as Code mature, it has the potential to become the default standard for deploying and governing infrastructure.

Infographics showing statistics on Infrastructure as Code (IaC) using bar graphs, pie charts and relevant icons.


Technavio analysts forecast the global DevOps platform market to post a CAGR of more than 20% during the period 2018 to 2022. One of the major trends in the global DevOps platform market for 2018-2022 is the rising adoption of Infrastructure as Code, as organisations implement DevOps tools to shift from manually configured IT infrastructure to programmable IT infrastructure.

Increase in the adoption rates of Infrastructure as Code is a major trend in the global DevOps platform market

The report goes on to state that one of the most significant factors contributing to the growth of the global DevOps platform market is the need to reduce time to market. The Asia-Pacific region is projected to see the largest increase in market share, while the Americas and the Europe-Middle East-Africa region, which hold a large share currently, will see their share decline over the forecast period.

Conclusion

Customer-obsessed technology puts the broader charter of service design on the infrastructure and operations team. I&O leaders should own the design for the full system of interacting parts that are sourced from a rich and dynamic software-defined ecosystem. Infrastructure as Code holds a great potential in disruption of traditional infrastructure architecture strategy and can be efficacious for Drupal deployments.

With years of expertise in Drupal Development, Opensense Labs has been providing a wondrous digital experience to its partners.

Talk to our Drupal experts at [email protected] to know how we can implement Infrastructure as Code with Drupal to power your digital transformation endeavours.

Nov 01 2018
Nov 01

As you already learned in a previous tutorial, CKEditor, the default WYSIWYG Editor for Drupal 8, can be enhanced through the installation of different plugins. They add buttons to the editor with additional features.

Content editors often need to embed accordion tabs into their articles, for example, to present a group of Frequently Asked Questions with their answers or to visually divide a topic into several subtopics.

The CKEditor Accordion module for Drupal 8 allows editors to insert an accordion directly into the WYSIWYG Editor (and therefore into the node) without the need to configure additional modules or even views.

This tutorial will explain the usage of this module. Let’s start!

Step #1. Install the required modules

  • Open your terminal window and type:

composer require drupal/ckeditor_accordion

Install Composer using your terminal

This will download the latest stable version of the module (currently 1.1.0) to your modules folder.

  • On your Drupal installation click Extend.
  • Search for the module name, click the checkbox.
  • Click Install.

Click Install

Step #2. Configure the Module

  • Click Configuration > CKEditor Accordion Configuration.
  • Check Collapse all tabs by default, if not already checked.
  • Click Save configuration.

Click Save Configuration

  • Click Configuration > Text formats and editors.

Click Text formats and editors

  • Locate the Full HTML format and click Configure.

Click Configure

  • Scroll down and click the Add group button in order to add a new button group.
  • If you don’t see the Add group button, click the link Show group names on the right.

Click the link Show group names

Click the link Show group names

  • Give this button group a proper name, for example, "Accordion".
  • Drag the "Accordion" button and drop it into the newly created group.

Drag the Accordion button and drop it into the newly created group

  • Scroll down to the Enabled filters section.
  • Check Limit allowed HTML tags and correct faulty HTML.

Check Limit allowed HTML tags and correct faulty HTML

  • This will display a vertical tab at the end of the screen.
  • Locate the dl HTML tag and replace it with <dl class>.
  • Click Save configuration.

Click Save configuration

This allows the module to inject the required CSS class, in order to give the accordion the proper styling.

Step #3. Create the Content

  • Click Content > Add Content > Basic Page.
  • Make sure that you select Text format HTML.
  • Click the Accordion button.

The module displays an accordion with two tabs by default. In order to add a third tab do the following:

  • Right-click inside the accordion element.
  • Select Add accordion tab after.

Select accordion tab

There are now 3 accordion tabs.

  • Write a title and some text for each of them.
  • Click Save.

You should see the accordion with three collapsed tabs.

You should see the accordion with three collapsed tabs

  • If you want to show the first tab displayed by default, go back to Configuration > CKEditor Accordion and uncheck the Collapse all tabs option.

Step #4. Styling the Accordion

The module adds class="styled" to the dl tag containing all the elements of the accordion, so you have to target this class in order to style the accordion.

For example:

dl.styled > dt.active > a {
  background-color: red;
}

How to integrate Accordion Tabs into CKEditor for Drupal 8

Conclusion

The CKEditor accordion module lets you insert an accordion at any place of your node with the help of the CKEditor WYSIWYG Editor.

Thanks for reading!


About the author

Jorge lived in Ecuador and Germany. Now he is back to his homeland Colombia. He spends his time translating from English and German to Spanish. He enjoys playing with Drupal and other Open Source Content Management Systems and technologies.
Oct 31 2018
Oct 31

Did you miss MidCamp 2018? Or are you just ready to get hyped for 2019? While the MidCamp team is busy getting things ready for 2019, you can re-live all of the amazing sessions from 2018 by checking out our playlist on YouTube. Take time to check out the reaction roundup to find out what others thought of MidCamp 2018 or read through our Fearless Leader’s musings on MidCamp 2018.

If you haven’t been keeping in touch with friends you met at MidCamp, then be sure to join other Drupal Slacks and reconnect with all the folks you met.

Get involved for 2019!

If you’re interested in getting involved with MidCamp 2019, we’re on MidCamp Slack and we’d adore the opportunity to welcome you to our team. We completely empathize with being busy, so don’t feel bad if you can’t contribute as much as you’d like - every little bit helps! You can also contribute by telling us what topics you’re interested in seeing in the 2019 program. 

Join the conversation

Oct 31 2018
Jay
Oct 31

Leveraging Drupal's cache effectively can be challenging at first, but the benefits for your site's performance make it well worth the effort. It all starts with figuring out what rules your site should use to cache its content. The improved page load times that come from properly handling caching rules can help improve SEO and make the site more appealing to visitors. In the right hands, the techniques outlined in this article can tailor a website's caching to its exact needs.

I previously covered the fundamentals of how caching works in Drupal 8, including what the two core caching modules do and what cache tags, contexts, and max-age are for. If you're familiar with those things, then this post is for you; otherwise, check out my previous article and get up to speed before we dive into a slightly more in-depth topic: Figuring out how you should set up caching on your site.

If you're using a simple Drupal installation with no custom code and with well-maintained contributed modules, Drupal's Internal Page Cache and Dynamic Internal Page Cache modules will likely cover your caching needs. This article focuses on some more complex and custom scenarios which, nonetheless, come up with some frequency. 

The Guiding Principle

Perhaps the most frequent issue custom code has when it comes to caching is that it doesn't account for caching at all. This isn't ideal if you want to take advantage of Drupal's caching system to optimize your site's speed, and it points to one principle which can be tricky to learn and is critical to master: If you write custom code, always think about its caching implications. Always. 

Often, the implications will be minimal, if there are any at all. It's important not to assume that every bit of custom code will cache perfectly on its own - that's a mistake that could lead to something being cached either for too long or not at all. 

When to Disable the Cache 

Most everything Drupal renders as output (to a web browser, to a RESTful web service, etc.) can be cached. However, sometimes the development time to ensure that your custom code handles caching precisely outweighs the performance benefits that caching might provide. So the first question when you're writing custom code is, should this be cached at all?


If you're building something like an administrative page that only a few users with special permissions will ever see, it may not be worth the time and effort to make sure it is cached perfectly, especially if the rules for doing so would be complicated. For these scenarios, Drupal has a "page cache kill switch" that can be triggered in code:

\Drupal::service('page_cache_kill_switch')->trigger();

Calling the kill switch will stop both the page cache and the dynamic page cache from doing any caching on that page.

This should be used with caution though, and only as a last resort in situations where figuring out proper caching logic isn't worth the time. It should never be used on a page which is expected to be viewed by a large number of your website's visitors.

Rules for Cache Contexts

You should consider using a cache context if your content should look different when displayed in different situations. Let's look at a couple scenarios that benefit from using a cache context:

Say you have a site which can be accessed from two different domains and you want to display something a little different depending on which domain someone is looking at. Perhaps the site's logo and tagline change a little. In this case, the block containing the logo and tagline should be given the url.site context. With this context in place, Drupal will cache a separate version of the block for each domain and will show each domain's visitors the appropriate one.

Or, perhaps a block contains a bit of information about which content the currently logged-in user has permission to edit. This sounds like an excellent case for using the user.permissions context to indicate to Drupal that the block is different for each possible combination of permissions that a user might have. If two users have the same permission, the same cached version can be used for both of them.
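In code, contexts are declared in a render array's '#cache' key. A minimal sketch combining both scenarios above (the markup is placeholder content, not a real block implementation):

```php
<?php

// Sketch of a render array that varies by domain and by permission set.
// Drupal will cache one copy per url.site / user.permissions combination.
$build = [
  '#markup' => 'Logo, tagline and per-permission edit links go here.',
  '#cache' => [
    'contexts' => ['url.site', 'user.permissions'],
  ],
];
```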

Many other contexts are available as well; take a look at the full list to see whether one or more of them apply to your code.

Rules for Cache Tags 

Cache tags are probably the most important caching mechanism available to custom code. Drupal includes countless cache tags which can be used to invalidate a cache entry when something about your site changes, and it is also very easy to create your own cache tags (which we'll get to in a minute). For now, I'm going to focus on some of the cache tags Drupal has by default.

Say you're creating a page which shows the top five most recently published articles on your site. Now, Drupal sites can often make use of the Views module for this sort of thing, but depending on your exact requirements Views may not be the best approach – for instance, maybe part of the content has to come from a remote service that Views can't readily integrate with. The most obvious tags needed for this page are the tags for the specific pieces of content that are being shown, which are tags in the format of node:<nid>, for instance, node:5 and node:38. With these tags in place, whenever the content gets updated, the cache entry for your page gets invalidated, and the page will be built from scratch with the updated information the next time somebody views it.

But that's not all there is to think about. Perhaps this page also shows what categories (using a taxonomy structure) each article is in. Now, the articles each have an entity reference field to their categories, so if a user changes what categories the article is in, the relevant node:<nid> tags already added to your page will get cleared. Easy enough. But what if somebody changes the name of the category? That involves editing a taxonomy term, not the article node, so it won't clear any node:<nid> tags. To handle this situation, you'd want to have appropriate taxonomy_term:<id> tags. If an article with ID 6 has terms with IDs 14 and 17, the tags you'd want are node:6, taxonomy_term:14, and taxonomy_term:17, and you'll want to do this for every article shown on your page.

Fortunately, most of the time, you don't need to worry about the specific tag names. Nodes, terms, and other cacheable objects have a getCacheTags() method that gets exactly whatever tags you should use for that object.
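To make the tag format concrete, here is a sketch of assembling the tags for the listing page described above by hand. In real Drupal code you would prefer each entity's getCacheTags() method (and Cache::mergeTags()) over hard-coded strings; the IDs below are the hypothetical ones from the example:

```php
<?php

// Build cache tags for a page listing articles 5 and 38,
// which reference taxonomy terms 14 and 17.
$article_nids = [5, 38];
$term_ids = [14, 17];

$tags = array_map(fn ($nid) => "node:$nid", $article_nids);
foreach ($term_ids as $tid) {
  $tags[] = "taxonomy_term:$tid";
}

// Attach the tags to the page's render array: editing any of these
// nodes or terms will invalidate the cached page.
$build = [
  '#theme' => 'item_list',
  '#items' => ['...rendered article teasers...'],
  '#cache' => ['tags' => $tags],
];
```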

These are all simple entity-based tags, but there are many more available as well. There are tags for when various aspects of Drupal configuration changes as well as for things such as when certain theme settings get changed. Unfortunately, since the available cache tags vary from site to site, there isn't a ready-made list of them available for you to use as a reference. You can, however, look at the "cachetags" table in your Drupal database to see a list of all the tags that have been invalidated at least once on the site. This will be pretty minimal if your site is brand-new, but as people use the site it will start filling up.

The basic idea of tags is this: If you render something on a page, and there's a chance that something displayed on it might change in the future, there should be an appropriate tag in place to watch for that change.

Up Next

This is a big topic, but it looks like we're out of time for today. Next time, we'll delve a bit deeper into cache tags by seeing how to create custom ones that perfectly fit your site's needs and will also cover how to use max-age, including one important gotcha that makes them more complicated than they look. You can check that one out here.


Oct 31 2018
Oct 31

by David Snopek on October 31, 2018 - 1:28pm

As you may know, Drupal 6 has reached End-of-Life (EOL) which means the Drupal Security Team is no longer doing Security Advisories or working on security patches for Drupal 6 core or contrib modules - but the Drupal 6 LTS vendors are and we're one of them!

Today, there is a Critical security release for the Session Limit module to fix an Insecure Session Management vulnerability.

The Session Limit module enables a site administrator to set a policy around the number of active sessions users of the site may have.

The module does not sufficiently tokenise the list of sessions so that the user's session keys can be found through inspection of the form.

See the security advisory for Drupal 7 for more information.

Here you can download the Drupal 6 patch.

If you have a Drupal 6 site using the Session Limit module, we recommend you update immediately! We have already deployed the patch for all of our Drupal 6 Long-Term Support clients. :-)

If you'd like all your Drupal 6 modules to receive security updates and have the fixes deployed the same day they're released, please check out our D6LTS plans.

Note: if you use the myDropWizard module (totally free!), you'll be alerted to these and any future security updates, and will be able to use drush to install them (even though they won't necessarily have a release on Drupal.org).

Oct 31 2018
Oct 31

Ten Halloweens ago I shared a story of Drupal haunting. The post survives only in the faint afterlife of archive.org's Wayback Machine, having long since disappeared from the site of CivicActions where I was working at the time, so I thought I'd reprise it here. First, the original post. Then some notes on what's changed in the years since--and what remains chillingly accurate.

The Curse of the Haunted Drupal Site!

It's a specter ghastly enough to make the most seasoned Drupal developer quiver with fear.

Yes, it's the dreaded Haunted Drupal Site!

Oh, it may appear innocent enough on the surface. Just a typical business or organizational website that perhaps you've been asked to upgrade or enhance. But don't be fooled--lurking beneath the surface is a host of nasty surprises just waiting for the developer naive or foolish enough to venture in.

By what signs can you recognize the beast? What dreadful acts awakened the accursed spirits that now haunt this site? What mantras or talismans can protect you from inadvertently bringing the curse upon yourself?

Read on to find out.

The Tell Tale Signs of the Haunted Site

The signs may be subtle, even invisible to the casual observer. But to the initiated, there are sure signs by which the Haunted Site can be identified.

As with any case of detection, it's important to watch out for seemingly unimportant details.

Perhaps you find that the site is using a very old release version--Drupal 5.2, even though the current stable release is much later than that. A mere oversight? A mere detail, easily addressed through a routine update? Possibly. But, more likely, it's your first sign that all is not as it appears beneath the tranquil home page.

Or else you're exploring the site and try to do some familiar task, something you've done a thousand times on any number of Drupal sites. You try again and again, incredulous, but every time you get the same mysterious error. Impossible! This can't be true!

Oh, but it can. On the Haunted Drupal Site, all your assumptions are wrong, for nothing is as it appears.

Or perhaps you hear a seemingly casual remark like the following: "The previous developers may have applied a patch or two here and there." A "patch or two"? Don't believe it for a minute. No, you have received almost definitive notice of haunting.

What to do? It's time to reach for the tools that will reveal the worst. Yes, I mean diff. Nothing short of a full diff of the site's codebase will suffice to reveal the full scope of the beast.

How the Haunting Began

The truth is, most hauntings begin with the best of intentions.

Like an Egyptologist, an intrepid developer sets out to delve into secret places--in this case, the mysteries of the Drupal codebase. Perhaps she or he pauses a moment in amazement, dazzled with the complexity brought to light.

But who among us would not feel the temptation at this moment to touch the artifacts, to move them, even to leave our own mark?

Yes, there are professional mores that call for caution, for tedious processes of step by step excavation, for patient consultation, for publishing of results.

But how much quicker and more rewarding it seems simply to reach in and rearrange. In only a moment the change is made. No one need ever know.

Ah, but that moment has been enough--enough to awaken both the first hints of haunting and - more importantly - the thirst for instant treasure.

The next time there's a need for a quick fix, the lure will be harder to resist. What began as "a patch or two, here and there" can soon mushroom into a tangled web of changes--a labyrinthine maze that even the original site developer is quickly lost in.

Protecting Yourself from the Curse

How to ensure you don't get stuck in the Haunted Site?

It's not enough to be able to detect the curse where it exists. Sure, detecting the Haunted Drupal Site can help us avoid taking on a losing struggle with a dangerous foe. But it won't protect us from our own failings, or the dark passages we may ourselves be tempted by.

As with any secret knowledge, one approach is avoidance--look not in the source code and ye shall never be tempted.

But the path of avoidance is fraught with its own perils. No, our surest way is not to avoid the secrets of the codebase, but rather to immerse ourselves in them. Not to hack away and bear off treasures, but to learn, improving our collective knowledge, and restore the artifacts of our predecessors with the same care that went into their original construction.

Then the open, expansive spirit of Drupal will not haunt, but will infuse our work with the energy of creation that flows through all of us.

An energy that will stick with us long after the Trick or Treaters have gone home.

The Curse Revisited!

Okay, that was the state of haunting ten Halloweens ago. What shades still lurk in the Drupal realm?

On the plus side, we now have far more to work with than garlic and wooden stakes. Less than a year after my Halloween musings, Dmitri Gaskin (dmitrig01 on drupal.org) posted the first iteration of Drush Make, a magical system that could conjure an entire code base using spells encoded in a simple text file. For a site or Drupal distribution built and maintained with Drush Make no divination rituals are required--each patch and module version is meticulously documented in the sacred Drush Make File.

At least in theory. In practice? Any number of sites begun with excellent intentions soon diverged from the make file's specification. A contributing factor was that Drush Make was focused on building a code base more than on maintaining one. Efforts like Drush Make Sync to extend Drush Make for ongoing development were valiant but ultimately fated for the graveyard of undead code.

With Drupal 8, focus shifted to a fresh spellbook as Composer promised a new era of certainty in which code demons and hauntings were forever banished. But the transition to a new spellbook was anything but smooth. Even now, years later, Drupal distributions are stuck in the netherworld between a bygone Drush Make and the elusive realm of Composer. And who among us has not had a Composer spell go awry, springing back at us with some cryptic message of censure?

And, sadly, the best spells do nothing if not spoken. Just this month an organization (no sorcery will wrest from me its name!) contacted us to help with a Drupal site that, ostensibly, needed little more than a few simple tweaks. Oh, and--brace yourself for that portentous phrase, here innocuously dangled in a bulleted list: "update site".

Yes, yes, it was just as bad as it sounded. Drupalgeddon 2 may have come and gone, but this site remained suspended in a past state where the creaky gate was off its hinges and demons drifted in and out at will.

Proving once more that times and spellbooks may change but the challenges of keeping up--those continue their frightening reign.

Oct 31 2018
Oct 31

Have you thought of expanding your online business overseas? Or having identical platforms available to visitors in Spanish, French and German? There is one certain way to increase your business globally — create a multilingual website!

One might think that it is easy to translate any platform into any languages. However, there are a number of factors to consider when creating multilingual sites.

Our web development team prefers the Drupal 8 content management system (CMS) for multilingual websites. Now find out what features make Drupal 8 the perfect choice for multilingual website development.

Benefits of Building a Multilingual Website

First things first. Why should you make your platform multilingual? The benefits of building a multilingual website are numerous. We point out some of the most important ones for your business:

  • to explore new markets.

Providing multilingual content is a necessity in a modern global marketplace. It’s all about reaching and building strong relationships with new customers and visitors. If your content is presented in the customer’s native language, consumption is easy and loyalty grows. If visitors like the platform, they might recommend it to their friends. This will rightly earn your place in new markets.

The Number of Internet Users by Language - April, 2018

  • to get more visitors.

It’s obvious that the more languages your website is presented in, the more visitors you can reach. A multilingual platform is a great opportunity to appeal to people from other countries and to increase website traffic.

  • to sell more products and services.

Increased website traffic creates more product and service selling potentials. It’s a simple, by-the-numbers way to instantly expand your business.

  • to improve SEO.

In addition to Google, some countries have their own search engines. Enabling a multilingual search of your website will improve its SEO.

Why Build a Multilingual Site in Drupal 8?

The content of your website is built and stored in your CMS. Choosing the right CMS at the start makes development and further maintenance of a multilingual website much easier.

Why should you opt for Drupal 8?

Powerful built-in multilingual features in Drupal 8.

The Drupal CMS offers powerful multilingual features. What’s more, Drupal 8 provides built-in multilingual modules that make the process even simpler. They are Language, Interface Translation, Content Translation and Configuration Translation modules.

Drupal offers 90+ languages and has a built-in translation core. Drupal translates your content as well as all the fields, forms and error messages. Everything from configuration settings to menus and views can be translated with the help of out-of-the-box modules in Drupal 8.

Drupal 8 is scalable for multilingual websites.

Drupal is scalable for all your needs. No matter how many languages you choose, it will deliver all your multilingual content.

Transliteration support in Drupal 8.

One really handy addition to Drupal 8 is the Transliteration module added to Drupal core. This module automatically converts special characters such as "ç" and "ü" to "c" and "u" for machine names, file uploads and search results.

And some more! What can you get by building a multilingual website in Drupal 8?

  • Get automatic software translation updates from the Drupal community.
  • Choose what content not to translate.
  • Add a language selector to your site.
  • Overview screen for translators, contextual translation tabs for site builders.
  • Protected local, custom translations which are exportable.

These impressive multilingual capacities are among Drupal 8’s top benefits and it is no wonder that more and more website owners are choosing to migrate to Drupal 8.

Core Modules for Building a Multilingual Website in Drupal 8

Drupal 8 comes with four built-in modules for multilingual features.

1. Language Module.

The Language module lets you choose from 94 languages as of now. With this module you can assign a language to everything: nodes, users, views, blocks and menus. Browser language detection can be easily configured with external language codes. Each user is able to select their own language for the admin interface. Besides, there is built-in transliteration for machine names.

2. Interface Translation Module.

The Interface Translation module translates the built-in user interface, as well as your added modules and themes. It has a built-in translation UI for easier editing. By allowing automatic downloads and updates, this module lets you use any interface translation available in the Drupal community, in any language supported by Drupal 8. The English language is now customizable and removable; there is no longer any need to use English as your default language.

3. Content Translation Module.

The Content Translation module allows users to translate content entities. Site content, including pages, taxonomy terms and blocks, can be translated into different languages. As with the Interface Translation module, the default language of the content can be easily configured. Users can even hide or display the language selector and change its position.

4. Configuration Translation Module.

The Configuration Translation module provides a translation interface for configuration. It allows you to translate text that is part of the configuration, such as field labels, the text used in Views, etc.

Moreover, an overview screen is provided to help you in the process.

Contributed Modules for Building a Multilingual Website in Drupal 8

Now, let’s proceed with the list of contributed modules that will help you build your Drupal 8 multilingual website.

1. Language Cookie Module.

The Language Cookie module adds an extra "cookie" field to the Language Negotiation settings, so the language on your website is instantly set according to the language stored in this extra cookie.

2. IP Language Negotiation Module.

IP Language Negotiation is a key module for your Drupal 8 multilingual website. By detecting the countries your visitors access your website from, it can instantly display the content in their native languages.

3. Language Fallback Module.

The Language Fallback module allows you to specify a fallback for each defined language, so translations can fall back to another language. If a certain translation can't be delivered in a visitor's language, they will still get the requested content in another familiar language or dialect.

4. Language Selection Page Module.

Instead of trying to identify your website visitors' native languages, let them choose the language they'd like to see the content in. The Language Selection Page module lets visitors select a language on a landing page/splash page, based on the languages that have been enabled on your Drupal platform.

It’s Time to Build a Multilingual Website in Drupal 8

Content on your website determines traffic, positive user experience and conversion rates. Choosing the right CMS from the start is vital to managing the content of any proposed multilingual site.

Since Drupal 8 delivers multilingual platforms straight out of the box, it is a perfect solution for building a multilingual website.

Our Drupal development team at InternetDevels has years of experience in developing multilingual websites in Drupal CMS. Don’t hesitate to contact us if you have any questions regarding your multilingual platform or need our services on its development or support.

Oct 31 2018
Oct 31

Today, Acquia announced a partnership with Elastic Path, a headless commerce platform. In this post, I want to explore the advantages of headless commerce and the opportunity it holds for both Drupal and Acquia.

The advantages of headless commerce

In a headless commerce approach, the front-end shopping experience is decoupled from the commerce business layer. Headless commerce platforms provide a clean separation between the front end and back end; the shopping experience is provided by Drupal and the commerce business logic is provided by the commerce platform. This decoupling provides advantages for the developer, merchant and shopping experience.

  • For developers, it means that you can decouple both the development and the architecture. This allows you to build an innovative shopping experience without having to worry about impacting a system as critical as your commerce backend. For instance, you can add ratings and reviews to your shopping experience without having to redeploy your commerce platform.
  • For merchants, it can provide a better experience for administering the shop. Traditional commerce solutions usually ship with a lightweight content management system. This means that there can be competition over which system provides the experience layer (i.e. the "glass"). This can introduce overlap in functionality; both systems offer ways to manage URLs, create landing pages, manage user access rights, etc. Because headless commerce systems are designed from the ground up to integrate with other systems, there is less duplication of functionality. This provides a streamlined experience for merchants.
  • And last but not least, there is the shopping experience for end-users or consumers. Simply put, consumers are demanding better experiences when they shop online. They want editorials, lookbooks, tutorials, product demonstration videos, testimonials, and more. They want the content-rich experiences that a comprehensive content management system can provide.

All this is why Acquia is excited about our partnership with Elastic Path. I believe the partnership is a win-win-win. It's a win for Acquia because we are now better equipped than ever to offer personal, unique and delightful shopping experiences. It is a win for Elastic Path as they have the opportunity to provide contextual commerce solutions to any Acquia customer. Last but not least, it's a win for Drupal because it will introduce more organizations to the project.

Note that many of the above integration challenges don't apply to native solutions like Drupal Commerce for Drupal or WooCommerce for WordPress. They only apply when you have to integrate two entirely different systems. Integrating two different systems is a common use case, because customers either already have a commerce platform in place that they don't want to replace, or because native solutions don't meet their needs.

Acquia's commitment to best of breed

Acquia remains committed to a best-of-breed strategy for commerce. There isn't a single commerce platform that meets the needs of all our customers. This belief comes from years of experience in the field. Acquia's customers want to integrate with a variety of commerce systems such as Elastic Path, SAP Hybris, Salesforce Commerce Cloud (Demandware), Magento, BigCommerce, Reaction Commerce, Oracle ATG, Moltin, and more. Our customers also want to use Drupal Commerce, Drupal's native commerce solution. We believe customers should be able to integrate Drupal with their commerce management solutions of choice.

October 31, 2018

2 min read time

Oct 31 2018
Oct 31

Join us on November 5th for the Zurich Drupal Meetup at the Amazee Labs Zürich office.

Agenda

  • The File Management Module for Drupal 8 - Lightning talk + Q&A by David Pacassi Torrico
  • Outlook Drupal Switzerland Activities 2019 - Discussion by Josef Dabernig (Amazee Labs)
  • Propose your topic in the comments!


General Information 

The Zurich Drupal Meetup is dedicated to people interested in the Content Management System & Framework Drupal.

We welcome everybody from beginners to Drupal ninjas and would be happy to see you present a recent project of yours or talk about any other Drupal-related topic.

Talk Formats

  • Lightning talk (max. 10 minutes)
  • Short talk (max. 25 minutes)
  • Full talk (max. 45 minutes)

If you would like to join us, sign-up here: https://www.meetup.com/Zurich-Drupal-Meetup/ 

Oct 31 2018
Oct 31

In this episode, we cover the Drupal 8 Linkit Module. This module extends the link functionality of your Drupal 8 WYSIWYG editor (like CKEditor) to make it easier to link to other pages on your website. Rather than having to find a page on your website you would like to link to, copy the URL, and paste it into the link field, this module allows you to do it in one step!

Check out the Code Karate Patreon page

Oct 30 2018
Oct 30

More than 1 million websites worldwide use Drupal to combine great design with the power, speed and security that Drupal provides. From large enterprises to NGOs, Drupal is actively helping organizations change the world through their digital experiences. One of these institutions is the Commonwealth of Massachusetts.

In a recent report published by ITIF (an independent, nonpartisan think tank), the official website for the Commonwealth of Massachusetts (mass.gov) was named #3 in the nation for its overall web presence.

“This report assesses four criteria: page-load speed, mobile friendliness, security, and accessibility. For page-load speed, we reviewed both desktop page-load speed and mobile page-load speed.” - ITIF

Building a Better Experience for Constituents

The Commonwealth set out to better the digital experience for the constituents of Massachusetts back in 2016 when they began engaging with outside vendors to take on the responsibility of redesigning and developing mass.gov using the open source CMS Drupal 8. The end goal for the Commonwealth was to restructure their site’s content in a way that made it intuitive for people to accomplish their goals.

With the help of Palantir.net, Massachusetts launched the new platform in October 2017, designed to better serve constituent needs in the digital age.

“We’ve redesigned Mass.gov for you, the people of the Commonwealth. We have one goal: to make it easy for you to find what you need.” - Mass.gov homepage

We’re proud of Mass.gov for this amazing achievement, and we’re not surprised. Good web design in government is about ensuring a great experience for constituents of diverse backgrounds and creating an open and accessible government for all users.

The goal of ITIF’s report was to assess state government websites based on seven popular state e-government services. Download the full report to see how your state’s website ranked.

Oct 30 2018
Oct 30

Next steps for Drupal's configuration management system.

Configuration management is an important feature of any modern content management system. Those following modern development best-practices use a development workflow that involves some sort of development and staging environment that is separate from the production environment.

Configuration management example

Given such a development workflow, you need to push configuration changes from development to production (similar to how you need to push code or content between environments). Drupal's configuration management system helps you do that in a powerful yet elegant way.
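
In practice, that push is usually done by exporting the active configuration to YAML files, committing them alongside your code, and importing them on the target environment. A sketch of that workflow using Drush (assuming Drush 9+ and the conventional config/sync directory; exact paths vary per project):

```shell
# On the development site: export active configuration to the sync directory.
drush config:export

# Commit the exported YAML files and deploy them with your code.
git add config/sync
git commit -m "Export configuration changes"

# On the production site, after the code deploy: import the configuration.
drush config:import
```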

Since I announced the original Configuration Management Initiative over seven years ago, we've developed and shipped a strong configuration management API in Drupal 8. Drupal 8's configuration management system is a huge step forward from where we were in Drupal 7, and a much more robust solution than what is offered by many of our competitors.

All configuration in a Drupal 8 site — from one-off settings such as site name to content types and field definitions — can be seamlessly moved between environments, allowing for quick and easy deployment between development, staging and production environments.

However, now that we have a couple of years of building Drupal 8 sites behind us, various limitations have surfaced. While these limitations usually have solutions via contributed modules, it has become clear that we would benefit from extending Drupal core's built-in configuration management APIs. This way, we can establish best practices and standard approaches that work for all.

The four different focus areas for Drupal 8. The configuration management initiative is part of the 'Improve Drupal for developers' track.

I first talked about this need in my DrupalCon Nashville keynote, where I announced the Configuration Management 2.0 initiative. The goal of this initiative is to extend Drupal's built-in configuration management so we can support more common workflows out-of-the-box without the need of contributed modules.

What is an example workflow that is not currently supported out-of-the-box? Support for different configurations by environment. This is a valuable use case because some settings are undesirable to have enabled in all environments. For example, you most likely don't want to enable debugging tools in production.
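
Drupal core already supports one piece of this puzzle: individual configuration values can be overridden per environment in settings.php. A minimal sketch of such a config fragment (the file name and values are illustrative; the override keys shown are standard core configuration):

```php
<?php

// settings.local.php — included only on development environments.
// Override exported configuration values for this environment:
$config['system.logging']['error_level'] = 'verbose';        // show all errors
$config['system.performance']['css']['preprocess'] = FALSE;  // no CSS aggregation
$config['system.performance']['js']['preprocess'] = FALSE;   // no JS aggregation
```

These overrides apply at runtime and are never written back into the exported configuration, which is what makes them safe for per-environment differences.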

Configuration management example

The contributed module Config Filter extends Drupal core's built-in configuration management capabilities by providing an API to support different workflows which filter out or transform certain configuration changes as they are being pushed to production. Config Split, another contributed module, builds on top of Config Filter to allow for differences in configuration between various environments.

The Config Split module's use case is just one example of how we can improve Drupal's out-of-the-box configuration management capabilities. The community created a longer list of pain points and advanced use cases for the configuration management system.

While the initiative team is working on executing on these long-term improvements, they are also focused on delivering incremental improvements with each new version of Drupal 8, and have distilled the most high-priority items into a configuration management roadmap.

  • In Drupal 8.6, we added support for creating new sites from existing configuration. This enables developers to launch a development site that matches a production site's configuration with just a few clicks.
  • For Drupal 8.7, we're planning on shipping an experimental module for dealing with environment-specific configuration, moving the capabilities of Config Filter and the basic capabilities of Config Split to Drupal core through the addition of a Configuration Transformer API.
  • For Drupal 8.8, the focus is on supporting configuration updates across different sites. We want to allow both sites and distributions to package configuration (similar to the well-known Features module) so they can easily be deployed across other sites.

How to get involved

There are many opportunities to contribute to this initiative and we'd love your help.

If you would like to get involved, check out the Configuration Management 2.0 project and various Drupal core issues tagged as "CMI 2.0 candidate".

Special thanks to Fabian Bircher (Nuvole), Jeff Beeman (Acquia), Angela Byron (Acquia), ASH (Acquia), and Alex Pott (Thunder) for contributions to this blog post.

October 30, 2018

2 min read time

Oct 30 2018
Oct 30

Drupal Modules: The One Percent — Entity Jump Menu (video tutorial)

[embedded content]

Episode 50

Here is where we bring awareness to Drupal modules running on less than 1% of reporting sites. Today we'll consider Entity Jump Menu, a module which allows you to quickly navigate between nodes, users, and taxonomy terms provided you know their ID.

Oct 30 2018
Oct 30

Our client is migrating from Luminate CMS to Drupal because they want to improve performance without changing the look or feel of the site. Each page on a Luminate site is like a snowflake: unique. It doesn't make sense to rebuild those features as structured blocks given that they only appear on a single page. Having the ability to use existing JS and CSS allows us to copy and paste markup without rebuilding a whole structure that wouldn't be repurposed on other pages.

This technically savvy client wants a way to add existing JavaScript and CSS to Drupal pages. So let's give them the capability of putting raw CSS and JavaScript on their pages. This will help them complete the migration, moving their existing code to Drupal. These are the tools the content editors need to make their website beautiful and effective. If your content editors are more familiar with writing JavaScript and CSS, here's how to enable them to keep doing that.

To make this happen, first make a raw field formatter.

  • Go to Configuration > Content authoring > Text formats and editors.
  • Add a new text format called “Raw”. None of the filters should be enabled since this will be raw output.

Raw Text Format

Adding in raw text format

No Filters Enabled

AND…No filters enabled!

Since our client wants to add raw CSS and JavaScript to landing pages, we will create a field on the "landing page" content type. It will be Text (formatted, long) with the label "Inline CSS". We will limit it to just one per page.

Add field inline

Add field inline css

Have it use the Raw text format from the last step. You can limit the field to only this format by installing the Allowed Formats module:

composer require drupal/allowed_formats

Be sure to check the “Raw” box on the field page and save it.

Now make sure our field is being output.

  • Go to Admin > Structure > Types > Manage > Landing page > Display > Full
  • Make sure it is enabled and the label is hidden. It should be output in the default format.

Inline css displayed

Making sure inline css is displayed

Visit a landing page content form by going to Manage > Content > Add content > Landing Page, and put some real css in our new field:

Add map background raw

Adding map background raw

We also provide a WYSIWYG field where editors can enter HTML. In this case we need some HTML, perhaps a div with class="map".

We’re not finished yet! We need to provide a twig template. Look at the output HTML. We get:

<!-- THEME DEBUG -->
<!-- THEME HOOK: 'field' -->
<!-- FILE NAME SUGGESTIONS:
* field--node--field-inline-css--landing-page.html.twig
* field--node--field-inline-css.html.twig
* field--node--landing-page.html.twig
* field--field-inline-css.html.twig
x field--text-long.html.twig
* field.html.twig
-->
<!-- BEGIN OUTPUT from 'core/themes/classy/templates/field/field--text-long.html.twig' -->
<div data-quickedit-field-id="node/589/field_inline_css/en/full" class="clearfix text-formatted field field--name-field-inline-css field--type-text-long field--label-hidden field__item">.map {
background: url(http://www.example.com/assets/images/background-images/banner-landing-page/map.png) center no-repeat;
padding-top: 80px;
min-height: 350px;
}</div>
<!-- END OUTPUT from 'core/themes/classy/templates/field/field--text-long.html.twig' -->

in our output! Notice the <div> surrounding our CSS! We don't want that! So it's time to create a Twig template without extra <div>s, one that will output raw CSS.

We will go from this (notice all the extra <div>s)


{% if label_hidden %}
   {% if multiple %}
       <div{{ attributes.addClass(classes, 'field__items') }}>
           {% for item in items %}
               <div{{ item.attributes.addClass('field__item') }}>{{ item.content }}</div>
           {% endfor %}
       </div>
   {% else %}
       {% for item in items %}
           <div{{ attributes.addClass(classes, 'field__item') }}>{{ item.content }}</div>
       {% endfor %}
   {% endif %}
{% else %}
   <div{{ attributes.addClass(classes) }}>
       <div{{ title_attributes.addClass(title_classes) }}>{{ label }}</div>
       {% if multiple %}
       <div class="field__items">
           {% endif %}
           {% for item in items %}
               <div{{ item.attributes.addClass('field__item') }}>{{ item.content }}</div>
           {% endfor %}
           {% if multiple %}
       </div>
       {% endif %}
   </div>
{% endif %}


And we should do three things:

  1. Remove all <div> tags,
  2. Send it through a raw filter, and
  3. Surround it with <style> tags, which gives us this:

<style>
{% if label_hidden %}
   {% if multiple %}
           {% for item in items %}
               {{ item.content|raw }}
           {% endfor %}
   {% else %}
       {% for item in items %}
           {{ item.content|raw }}
       {% endfor %}
   {% endif %}
{% else %}
       {% if multiple %}
           {% endif %}
           {% for item in items %}
               {{ item.content|raw }}
           {% endfor %}
           {% if multiple %}
       {% endif %}
{% endif %}
</style>


Then the output is:

<!-- THEME DEBUG -->
<!-- THEME HOOK: 'field' -->
<!-- FILE NAME SUGGESTIONS:
x field--node--field-inline-css--landing-page.html.twig
* field--node--field-inline-css.html.twig
* field--node--landing-page.html.twig
* field--field-inline-css.html.twig
* field--text-long.html.twig
* field.html.twig
-->
<!-- BEGIN OUTPUT from 'themes/custom/example/templates/field/field--node--field-inline-css--landing-page.html.twig' -->
<style>
.map {
background: url(http://www.example.com/assets/images/background-images/banner-section-landing-page/map.png) center no-repeat;
padding-top: 80px;
min-height: 350px;
}
</style>
<!-- END OUTPUT from 'themes/custom/example/templates/field/field--node--field-inline-css--landing-page.html.twig' -->

Tada! The CSS shows up ready to use on the page! The same technique can be used to allow content editors to put JavaScript on the page! Instead of putting <style> tags around the template, make it <script> tags instead.
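
For example, the JavaScript variant of the template might look like this minimal sketch (assuming a hypothetical field_inline_js field with the same structure; the |raw filter again prevents the code from being escaped):

```twig
{# field--node--field-inline-js--landing-page.html.twig (hypothetical) #}
<script>
{% for item in items %}
{{ item.content|raw }}
{% endfor %}
</script>
```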

Make sure you meet your content editors where they are and give them tools they can use, but don't use this technique with novice or non-technical content editors.

Get In Touch

Questions? Comments? We want to know! Drop us a line and let’s start talking.

Learn More Get In Touch
Oct 30 2018
kpv
Oct 30

The article continues the series started with Creating interactive content in CKEditor with VisualN Embed article.

It shows how to use IFrames toolkit provided with VisualN module to share embedded drawings across sites.

For our example we use a Drupal 8 site as the drawings' origin and a WordPress site as a target resource exposing those drawings. The WordPress site can be located on any domain and/or server and doesn't depend on the origin in any way.

[embedded content]

There are a couple of use cases when you might want to share drawings:

  • to share content with your audience to promote your brand, attract new users (generate quality traffic) or spread your data / knowledge across the Internet
  • to create SaaS-like solutions where users use your site to create content and reuse it on their sites (e.g. Flickr)
  • to use it as a backend platform for your other resource (as in the video above, Drupal 8 can be used as a backend for WordPress)

 

Generating quality traffic, promoting brand, spreading knowledge

Any shared drawing can be configured to show an origin link near the drawing (at the bottom by default).

Origin links let you attract new target users from third-party sites. Clicking the link redirects the user to the origin site. Of course, you can't oblige users to click those links; a user has to want to click, or find it useful (e.g. for some additional info). That makes it fair to expect that mostly target users will click the link under shared content, and thus generate quality traffic.

You can also manage which pages to direct those users to, on a per-drawing basis or for multiple drawings at once (using tokens* or relying on defaults). This lets you run marketing campaigns more effectively, e.g. temporarily switching shared drawings to direct traffic to a specific page of interest. No restrictions are placed on the link URL; it may even point to a different site, which allows you to share traffic.
Usually the origin page URL (i.e. the page where the drawing appears) is used by default.

It is not always necessary for the user to click the link to achieve your goals; sometimes just showing the content is enough. E.g. when content is obviously related to your brand, it already accomplishes its promotional function. Or if you just want to share some knowledge (e.g. the dynamics of plastic waste amounts in the oceans), it's still enough that the user simply reads it. And of course, someone should also want to share your content.

These and some other ideas (including those presented below) served as a foundation and composed the notion of Fidelity Marketing, something that can be seen as another branch of Internet Marketing (along with Content and Referral marketing). The concept is developed and promoted within the scope of VisualN Project.

* There are still tasks to do for better tokens support

Creating SaaS-like solutions

With the IFrames toolkit you can build solutions whose primary purpose is to provide a service to users. There are a lot of such services on the Internet: some let you share images (e.g. Flickr), slide presentations (e.g. Slideshare), data visualizations, etc. Now you can build your own service of any of these types, or invent something different.

You can also add such functionality to an existing site, which would add value to it and its users.

Using as a backend for some other resource

Imagine that you or your client (if you are a developer) owns a site made on some other framework or CMS.

When using Drupal with VisualN alongside your primary site (if it's not already Drupal 8), all embedded content is in one place and belongs to your infrastructure, so you don't depend on third-party services providing similar features (e.g. charts or galleries). It also allows such solutions to be used in internal, e.g. corporate, systems without needing Internet access.

On the other hand, VisualN is a general-purpose graphical content authoring and sharing framework and can replace multiple other solutions with more specific implementation areas.

Configuration details and Sharing settings

As mentioned before, the sharing functionality is provided by VisualN module, namely with VisualN IFrame submodule. At the moment it supports sharing for embedded drawings and drawings created as generic Drupal blocks, provided by VisualN Block submodule (which goes beyond the subject of the current article).

When enabling sharing, you have some options to configure. The following settings are available.

Use defaults - the defaults are set on the IFrame settings page. This allows shared drawings to reuse global IFrame settings without saving them locally for each drawing, which is useful for changing settings for multiple shared drawings at once.

Show origin link - whether or not to show the origin link at the bottom of shared drawings.

Origin url - by default, the current drawing page URL is used, though you can use a custom page as the origin for a specific drawing iframe.

Origin title - any title for the link that appears under the shared drawing. Of course, you can override the shared drawing template to change its style, or use an image such as the origin site's logo instead of a text link.

Open in new window - whether shared drawing links should open the origin site in a new window or not.

Conclusion

These are just some obvious applications of the IFrames toolkit; you are encouraged to develop your own. The subject itself is relatively new to the Drupal ecosystem, since there have been no noticeable attempts to create such a toolkit, nor in other popular content management systems.

Oct 30 2018
Oct 30

Entity Reference Views are one way you can make life easier for Drupal content creators.

Normally, when people are creating content on your site, each field consists of a single box with a single data point. For example, in a list of people, you might get only the person's name. 

Entity Reference Views allows you to provide far more information. For example, you can add photos and personal details to your list of people.

Entity Reference Views in Drupal 7

In this example, I have a content type called "Presentations" and another content type called "Speakers". Every time I add a presentation, I want to choose from a list of speakers.

Step #1. Create a content view

  • Install and enable Views and Entity Reference.
  • Go to Structure > Views > Add new view.
  • Create a view of the content that you want content editors to choose from. In this example, I'm going to make a list of "Speakers":
Create a view of content in Drupal
  • At the top of the page, click "Add" then "Entity Reference":
Adding an entity reference view
  • Click "Settings" next to "Entity Reference list".
  • Search fields: Choose the field you want users to search by.
  • Click "Apply (this display)".
Choose the field you want users to search by in Views
  • Check the preview to make sure your view is working ...
The preview of our entity reference view

Step #2. Add a Field with the Entity Reference View

  • Go to Structure > Content types.
  • Add an "Entity Reference" field.
Add an Entity Reference field
  • Target type: Choose to link to nodes, users, files or whatever else you want to display.
  • Mode: Choose "View: Filter by an entity reference view."
  • View used to select the entities: choose the view you created.
  • Click "Save field settings".
Settings for an Entity Reference field
  • Click "Add content" and the data entry for your field will use the View you created:
A working Entity Reference field

Entity Reference Views Widget

It's possible to extend this module by using Entity Reference View Widget. This places your view inside a pop-up box to make it easier to select items.

There's a tutorial here and this video has guidance:

[embedded content]

Entity Reference Views in Drupal 8

Both Views and Entity Reference are now part of the Drupal 8 core. To use Entity Reference Views in Drupal 8, the process is like this:

  • Create a view, as we showed above.
  • Go to Structure > Content types > Add field.
Adding an Entity Reference field in Drupal 8

Click through the settings and under "Reference type", you can choose "Views: Filter by an entity reference view".

The setting Views: Filter by an entity reference view

About the author

Steve is the founder of OSTraining. Originally from the UK, he now lives in Sarasota in the USA. Steve's work straddles the line between teaching and web development.
Oct 30 2018
Oct 30

Entity Reference Views are a great way to make life easier for Drupal content creators.

Normally, when people create content on your site, each field is very plain. However, Entity Reference Views allows you to provide far more information. For example, instead of just showing a list of users, your content creators can browse through a list of names, photos and personal details.

Both Views and Entity Reference are now part of the Drupal 8 core. This made using Entity Reference Views in Drupal 8 much easier.

If you're a Drupal 7 user, read this version of the tutorial.

Step #1. Create a content view

  • Create a view of the content that you want content editors to choose from. In this example, I'm going to make a view by the name "List of Speakers":
Create a view of content in Drupal
  • At the top of the page, click "Add" then "Entity Reference":
Adding an entity reference view
  • Click "Add Fields":

add field

  • Set "For" to "This entity_reference (override)".
  • Add the fields you want to have displayed and searchable.

03 add field

  • Click "Settings" next to "Entity Reference list".
  • Search fields: Choose the field(s) you want users to search by.
  • Click "Apply (this display)".
04
  • Check the preview to make sure your view is working:
The preview of our entity reference view

Step #2. Add a Field with the Entity Reference View

  • Go to Structure > Content types.
  • Add an "Entity Reference" field.
Adding an Entity Reference field in Drupal 8
  • Save and continue
  • Save field settings
06
  • Click "Add content" and the data entry for your field will use the View you created.
  • If you prefer a checkbox to autocomplete, go to Administration > Structure > Types > Manage > Speakers > Form display and change the widget type.
07

About the author

Daniel is a web designer from the UK who's a friendly and helpful part of the support team here at OSTraining.
Oct 30 2018
Oct 30

In this episode, we cover the Drupal 8 Contact Storage Module. This module extends the Drupal 8 core contact module by saving the contact entries in the database. This makes it easy to go back and view, edit, or delete any of the contact form submissions on your Drupal 8 site. It's a handy little module that can save you from needing to install a more fully featured form module (like Webform or Entity Forms).

Check out the Code Karate Patreon page

Oct 29 2018
Oct 29

Yesterday, big tech tripped over itself with IBM's Red Hat acquisition, for the staggering sum of $34B. Many were shocked by the news, but those who know Red Hat well may have been less surprised. Long the leader and largest open source company in the world, Red Hat has been getting it right for many years.

Still more shocking is how this fits a new pattern for 2018 and beyond, one completely different from the typical enterprise software acquisition of the past. Red Hat is not the first mega tech deal of the year for the open source community. (There was the $7.5B purchase of GitHub by Microsoft, and recently the $5.2B merger of big-data rivals Cloudera and Hortonworks.)

Now, with this much larger move by IBM, it brings us to consider the importance of open source value, and contribution culture-at-large.

This was a great acquisition target for IBM:

  • They have a powerful product suite for some of the more cutting-edge aspects of web development, including a secure and fully managed version of Linux, hybrid cloud, and containerization technology, plus a large and satisfied customer base;

  • their products and technologies fit perfectly against IBM’s target market of enterprise digital transformation; and

  • the deal opens up a huge market to Red Hat via Big Blue.

And in the age we live in, one focused on (and fearful of) security, privacy, data domiciles, and crypto tech, a valuation $14B over market cap (a premium of $74/share) is a validation of the open source model: shining sunlight on software to achieve more secure products.

At Phase2, this news comes with much interest. Red Hat is a company that we know very well for its contributions to open source and web technology, in general. We have worked with Red Hat since 2013 and come to respect them in several key ways.

As pioneers in the commercialization of open source, Red Hat popularized and legitimized the idea that the concept of open contribution and financial gain can co-exist. While our own experimentations with productization of open source over the years within the Drupal community were certainly less publicized, we, and ostensibly the ‘industry’, looked to Red Hat as the archetype for a modern business model that could work.

We’ve had the privilege of working for, and alongside, the Red Hat team to develop many of the company’s websites over the last five years, including Redhat.com and developers.redhat.com. Through these experiences, we have come to value the way in which they blend great talent, great culture, and open values.

On many occasions, we have even drawn parallels between their business culture and our own. After reading The Open Organization by Red Hat CEO Jim Whitehurst, I was struck by the values and culture of Red Hat and their similarities with how Phase2 views the future. Perhaps it was their open source ethos, collaborative approach, or the meritocracy (vs. democracy or autocracy) they fostered, but I felt like we were emulating a "big brother".

Finally, and perhaps most importantly, we respect them as a business. The pure fact that a larger-than-life brand like IBM would pay such a premium implies both strategic and business health. I believe that, while it is earned in part from a strong, repeatable subscription-based revenue stream, nothing creates business value like a great culture of amazing people, dependable customers, and undeniable innovation.

And now with IBM’s extended reach and additional resources, we look forward to Red Hat’s continued success and partnership.

Oct 29 2018

This blog has been re-posted and edited with permission from Dries Buytaert's blog. Please leave your comments on the original post.

The cover of the Decoupled Drupal book

Drupal has evolved significantly over the course of its long history. When I first built the Drupal project eighteen years ago, it was a message board for my friends that I worked on in my spare time. Today, Drupal runs two percent of all websites on the internet with the support of an open-source community that includes hundreds of thousands of people from all over the world.

Today, Drupal is going through another transition as its capabilities and applicability continue to expand beyond traditional websites. Drupal now powers digital signage on university campuses, in-flight entertainment systems on commercial flights, interactive kiosks on cruise liners, and even pushes live updates to the countdown clocks in the New York subway system. It doesn't stop there. More and more, digital experiences are starting to encompass virtual reality, augmented reality, chatbots, voice-driven interfaces and Internet of Things applications. All of this is great for Drupal, as it expands its market opportunity and long-term relevance.

Several years ago, I began to emphasize the importance of an API-first approach for Drupal as part of the then-young phenomenon of decoupled Drupal. Now, Drupal developers can count on JSON API, GraphQL and CouchDB, in addition to a range of surrounding tools for developing the decoupled applications described above. These decoupled Drupal advancements represent a pivotal point in Drupal's history.
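To make the API-first idea concrete, here is a minimal sketch of what a decoupled front end does with Drupal's JSON:API output. The payload shape (a top-level `data` array of resource objects with `attributes`) follows the JSON:API specification that Drupal implements; the endpoint path and field names below are illustrative assumptions, not taken from any particular site.

```javascript
// Sketch: pull node titles out of a JSON:API response body.
function extractTitles(payload) {
  return (payload.data || []).map((resource) => resource.attributes.title);
}

// Example payload, as a decoupled front end might receive it from a
// hypothetical GET /jsonapi/node/article request:
const payload = {
  data: [
    { type: 'node--article', id: 'a1', attributes: { title: 'First post' } },
    { type: 'node--article', id: 'a2', attributes: { title: 'Second post' } },
  ],
};

console.log(extractTitles(payload)); // → [ 'First post', 'Second post' ]
```

The same resource-object structure applies to any entity type exposed by Drupal, which is what lets front ends in React, Vue, or native apps consume content without knowing anything about Drupal's internals.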

Decoupled Drupal sites

A few examples of organizations that use decoupled Drupal.

Speaking of important milestones in Drupal's history, I remember the first Drupal book ever published in 2005. At the time, good information on Drupal was hard to find. The first Drupal book helped make the project more accessible to new developers and provided both credibility and reach in the market. Similarly today, decoupled Drupal is still relatively new, and up-to-date literature on the topic can be hard to find. In fact, many people don't even know that Drupal supports decoupled architectures. This is why I'm so excited about the upcoming publication of a new book entitled Decoupled Drupal in Practice, written by Preston So. It will give decoupled Drupal more reach and credibility.

When Preston asked me to write the foreword for the book, I jumped at the chance because I believe his book will be an important next step in the advancement of decoupled Drupal. I've also been working with Preston So for a long time. Preston is currently Director of Research and Innovation at Acquia and a globally respected expert on decoupled Drupal. Preston has been involved in the Drupal community since 2007, and I first worked with him directly in 2012 on the Spark initiative to improve Drupal's editorial user experience. Preston has been researching, writing and speaking on the topic of decoupled Drupal since 2015, and had a big impact on my thinking on decoupled Drupal, on Drupal's adoption of React, and on decoupled Drupal architectures in the Drupal community overall.

To show the value that this book offers, you can read exclusive excerpts of three chapters from Decoupled Drupal in Practice on the Acquia blog and at the Acquia Developer Center. It is available for preorder today on Amazon, and I encourage my readers to pick up a copy!

Congratulations on your book, Preston!

Oct 29 2018

At this year's BADCamp, our Senior Web Architect Nick Lewis led a session on Gatsby and the JAMstack. The JAMstack is a web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup. Gatsby is one of the leading JAMstack-based static site generators, and this session primarily covers how to integrate it with Drupal.

Our team has been developing a "Gatsby Drupal Kit" over the past few months to help jump-start Gatsby-Drupal integrations. The kit is designed to work with a minimal Drupal install as a jumping-off point, and provides a structure that can be extended to much larger, more complicated sites.
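For readers who want a sense of what that integration looks like, this is a minimal `gatsby-config.js` sketch using the `gatsby-source-drupal` plugin, which pulls content from a Drupal site's JSON:API endpoints at build time. The `baseUrl` is a placeholder; point it at your own Drupal 8 site with the JSON:API module enabled. This is an illustrative fragment, not the kit's actual configuration.

```javascript
// gatsby-config.js — minimal sketch of sourcing content from Drupal.
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-source-drupal',
      options: {
        baseUrl: 'https://example.com', // your Drupal site (placeholder)
        apiBase: 'jsonapi',             // default JSON:API path prefix
      },
    },
  ],
};
```

Once sourced, Drupal entities become GraphQL nodes that Gatsby pages and templates can query like any other data.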

This session will leave you with: 

1. A base Drupal 8 site that is connected with Gatsby.  

2. Best practices for making Gatsby work for real sites in production.

3. Sane patterns for translating Drupal's structure into Gatsby components, templates, and pages.

This is not an advanced session, so deep familiarity with React and Gatsby is not required. Recommended prerequisites are a basic knowledge of npm package management, Git, CSS, Drupal, web services, and JavaScript. Watch the full session below.

[embedded content]

Oct 29 2018

The last step is to modify the JavaScript and styles that your theme uses to display the pullquotes added in the editing interface. As you can see from the GitHub repo, four files need to be updated or added to your theme:

  1. your theme info file
  2. your theme library file
  3. the javascript file that adds the markup
  4. the scss (or css) file

In our case, the JavaScript finds all pullquote spans on the page and adds them to the DOM as asides, alternating between right and left alignment (for desktop). The SCSS file then styles them appropriately for small and large breakpoints. Note, too, that the theme CSS includes specific styles for the editing interface, so that content creators can easily see when a pullquote is being added or modified. To remove a pullquote, the editor simply selects it again (which turns the pullquote pink in our theme) and clicks the CKEditor button.
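The alternation logic described above can be sketched as a pure function. This is an illustrative reimplementation rather than the theme's actual code — in the real theme the same logic runs against DOM nodes, and the class names here are hypothetical.

```javascript
// Given the text of each pullquote span found on the page, build <aside>
// markup that alternates between right and left alignment.
function buildPullquoteAsides(quoteTexts) {
  return quoteTexts.map((text, i) => {
    const side = i % 2 === 0 ? 'right' : 'left';
    return `<aside class="pullquote pullquote--${side}">${text}</aside>`;
  });
}

console.log(buildPullquoteAsides(['First quote', 'Second quote']));
```

The CSS then only needs two modifier classes (floated right and left at desktop widths, full-width at small breakpoints) to produce the alternating layout.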

That wraps up this simple tutorial. You can now rest assured that your readers will never miss an important quote again. The strategy is by no means bulletproof, so your mileage may vary, but if you have questions, feedback, or suggestions on how it can be improved, please add your comment below.


About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
