Sep 12 2019

The world is going mobile at a rapid pace. Smartphones, tablets and laptops have become everyday essentials for the current generation, and people expect easy access to everything. Constant technology updates have pushed mobile phones far beyond communication. Yet browsing a website on a phone can still be confusing, because many websites have never been upgraded to behave responsively.

With such a large share of the population using mobile phones, businesses are focusing on delivering the best possible mobile experiences to their users. Drupal holds a significant share of the web technology market: it is not only good for building websites but is also an efficient tool for delivering an impressive, powerful mobile experience to users.

Let’s take the plunge and enter the world of mobile solutions powered by Drupal. 



Every organization has its own way of addressing the growing demands of application development. The approach varies with requirements and available resources: some teams build responsive web-based applications, while others specialize in delivering full-fledged native applications deployed to mobile devices.

Whether it is theming, multi-language distribution, web-based mobile apps or cross-platform integration with third-party tools, Drupal gives developers plenty of choices for defining mobile solutions and integrating them into an overall content and application infrastructure. The next few sections describe the types of mobile solutions Drupal offers.

Mobile-First Approach for a Great Web Experience

Mobile-first, the philosophy created by Luke Wroblewski, is the practice of designing for the smallest screen first and then adapting the experience to larger devices, so pages display well regardless of window or screen size. Two concepts are vital for a mobile-first design:

Responsive Web Design (RWD): A web design method that automatically adapts a web page to the user's display, so the layout transitions smoothly between screen sizes.

Progressive Enhancement and Graceful Degradation: To display a web page reasonably on different devices, customized versions of the product are designed for different ends of the device spectrum. Progressive enhancement and graceful degradation are opposite approaches, but they are often discussed together.

  • Progressive Enhancement: Basic functionality and features are implemented first for the most constrained browsers and devices (mobile phones). More complex interactions and functionality are then layered on top of that baseline for more capable devices (tablets, PCs).
     
  • Graceful Degradation: Design starts from the most advanced version (desktop, laptop) with the complete set of functions and features. Some features or content are then removed when the product is adapted for mobile devices.

Of the two, progressive enhancement is the more commonly chosen approach for application design. Drupal supports building responsive websites and web apps, offering a consistent content experience regardless of the size of the device on which the application is displayed.

Though mobile-first development requires effort up front, it promises easier deployment and high scalability. Numerous big brands are choosing Drupal over other technologies and pushing ahead with mobile application development.

Below are two organizations that have benefited from Drupal's mobile-first offering:

The Men’s Health Magazine Chronicle
 

The mobile-friendly Men's Health Magazine home page.


Men's Health Magazine, a multinational and constantly evolving brand, set out to develop a thorough content-based platform coupled with a better digital experience for its mobile users. With Drupal, the website was optimized for all screen sizes, from large desktops to small handheld devices.

Complete Case Study on Men’s Health Magazine can be read here.

The YardStick Saga
 

The Yardstick LMS content library, with content blocks and a menu on the left.


Yardstick, a global leader in modern digital learning solutions for students, chose Drupal to give different users easy, simplified access to the application across distinct device screens.

Complete Case Study on Yardstick LMS can be read here.

Drupal is not only a way to build efficient web applications; it can also be used as a backend for compelling mobile application development. In the following sections, we will explore the native, hybrid and progressive mobile solutions delivered by Drupal, starting with native mobile applications.

Scale Content Experience with Native Mobile Applications

Furnishing content through a native mobile application is all about managing the content and application-level services with Drupal. In addition, the application can use the capabilities of the device on which it is installed. The mobile application handles the front-end user experience, input events and context, while Drupal responds to the events or requests made, serving content from a shared source. The Services module connects the app and Drupal.

The different types of native mobile application development.


The best part about native mobile applications is that they can be used without an internet connection. Given the diversity of device platforms, native app development with Drupal is described below.

A platform-specific iOS application is usually written in Objective-C or Swift, with development done on a Mac running Apple's Xcode IDE. Android native applications are written in Java or Kotlin. The app connects to a Drupal site running the Services module through one of the following HTTP libraries:

  • AFNetworking: a networking framework for iOS, macOS, watchOS, and tvOS.
  • ASIHTTPRequest: an Objective-C wrapper around CFNetwork for making HTTP requests on Mac OS X and iPhone.
  • Alamofire: an HTTP networking library written in Swift.

As mentioned before, Drupal is being widely accepted as a backend service for application environments and web services. This has enabled data consumption and manipulation in many distinct ways.

For example, Waterwheel Swift, formerly known as the Drupal iOS SDK (software development kit), lets Drupal act as a backend service for iOS, macOS, tvOS or watchOS apps. It integrates most of the relevant Drupal API features (session management, basic auth, entity CRUD, local caching, a login view controller, etc.) in one SDK.

Caching strategies, asynchronous data downloads that refresh the UI on completion, pre-fetching data on application load, and network timeout handling are the things to take care of when developing a native RESTful iOS app.

Appcelerator Titanium, an open-source framework developed by Appcelerator, is another option: it enables native mobile application development for multiple mobile operating systems from a single JavaScript codebase.

PhoneGap is an open-source mobile development framework from Adobe Systems (originally produced by Nitobi Software); its open-source core was donated to Apache as the Apache Cordova project. It enables cross-platform mobile application development using HTML, JavaScript, and CSS. DrupalGap, an application development kit for Drupal websites, is built on top of PhoneGap and jQuery Mobile.
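To make this concrete, here is a minimal TypeScript sketch of how a JavaScript-based app (for example one built with PhoneGap or DrupalGap) might read a node from a Drupal 8 backend. It assumes the site exposes nodes through core's RESTful Web Services module at /node/{id}?_format=json; the base URL and node ID are placeholders.

// Drupal 8's core REST serializer returns fields as arrays of value objects.
interface DrupalNode {
  title: { value: string }[];
  body?: { value: string; processed: string }[];
}

// Fetch a single node as JSON from a Drupal 8 site.
async function loadNode(baseUrl: string, nid: number): Promise<DrupalNode> {
  const response = await fetch(`${baseUrl}/node/${nid}?_format=json`);
  if (!response.ok) {
    throw new Error(`Drupal returned HTTP ${response.status}`);
  }
  return (await response.json()) as DrupalNode;
}

// Example usage inside a hybrid or web-based mobile app:
loadNode('https://example.com', 1)
  .then((node) => console.log(node.title[0].value))
  .catch((error) => console.error('Could not load node', error));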

Leveraging Native and Web’s best in Hybrid Mobile Applications

Hybrid mobile applications combine the best of native and web applications. Hybrids typically use HTML, CSS and JavaScript rendered inside a native web view (UIWebView on iOS or WebView on Android). These apps run on multiple platforms (Android, iOS) and are easily distributed through app stores. In addition, hybrid apps can access many device resources such as the camera, contacts, accelerometer, etc.

A flowchart of hybrid application development. Source: Cleveroad

Ionic is the most widely used HTML5 framework for hybrid application development. Built on Angular (originally AngularJS), Ionic lets developers build cross-platform applications, from open source to premium, and simplifies application deployment.
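As a rough illustration, a hypothetical Ionic/Angular service pulling article titles from a Drupal 8 backend might look like the sketch below. It assumes a recent Ionic version built on Angular, the JSON:API module enabled on the Drupal site, and a placeholder base URL.

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';
import { map } from 'rxjs/operators';

// Shape of a JSON:API collection response, reduced to the fields used here.
interface JsonApiCollection {
  data: Array<{ id: string; attributes: { title: string } }>;
}

@Injectable({ providedIn: 'root' })
export class ArticleService {
  // Placeholder; point this at your Drupal 8 site.
  private readonly baseUrl = 'https://example.com';

  constructor(private readonly http: HttpClient) {}

  // Reads the article titles exposed by Drupal's JSON:API at /jsonapi/node/article.
  getArticleTitles(): Observable<string[]> {
    return this.http
      .get<JsonApiCollection>(`${this.baseUrl}/jsonapi/node/article`)
      .pipe(map((collection) => collection.data.map((item) => item.attributes.title)));
  }
}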

The Clasifika home page on a mobile screen. Source: Google Play

Clasifika, a multi-platform real estate hybrid application, was developed using Ionic with Drupal 8 as the backend, successfully bringing design thinking into a real-world product while keeping performance and the UX design optimized.

Delivering App like Experience with Progressive Web Applications

The term progressive web app (PWA) was coined by Alex Russell and Frances Berriman. A PWA is built with modern web APIs along with the traditional progressive enhancement strategy to create cross-platform web applications that offer app-like experiences on desktop and mobile. Pinterest is one such progressive web app; it helps users curate images, videos and more from a list of choices.

The Pinterest progressive web app on a mobile screen. Source: Medium/Manifest

While visiting a web page, you may have come across an 'Add to Home Screen' button. When clicked, the app installs in the background and, once downloaded, can be launched from the device's app list. The best part of a progressive web app is that it behaves like a basic native application on the phone and can work offline too. A PWA also loads fast and creates an engaging experience for its users.

The Progressive Web App module makes it straightforward to turn a Drupal site into a progressive web app. With its extensible platform services and content-centric infrastructure, Drupal is an ideal choice for delivering reliable and engaging mobile experiences to users.
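Under the hood, the installable, offline-capable behavior comes from a web app manifest plus a service worker. The snippet below is a minimal sketch of registering a service worker from the front end; the /serviceworker.js path is a placeholder, since the actual route is provided and configured by the PWA module.

// Register a service worker so the site can be installed to the home screen
// and keep working offline. The path below is a placeholder.
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    navigator.serviceWorker
      .register('/serviceworker.js')
      .then((registration) => {
        console.log('Service worker registered with scope:', registration.scope);
      })
      .catch((error) => {
        console.error('Service worker registration failed:', error);
      });
  });
}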

Final Note

The future is definitely bright for mobile devices. As the digital world evolves, a large share of people are choosing mobile devices over computers, with convenience as the driving force. Drupal is not only a strong choice for building web applications but also a reliable platform for effective, compelling mobile application development.

The world is going Drupal and so are we. We at OpenSense Labs are engaged in offering better digital experiences to our clients with our expertise in Drupal Development.  

Drupal has unfolded many major benefits for the developers. How do you see it from your viewpoint? Share your views on our social media channels: Facebook, LinkedIn, and Twitter. You can also reach out at [email protected].

Sep 12 2019

The Drupal 8 SVG Image module changes the image field widget to allow SVG images to be uploaded on your Drupal 8 website. This module also allows you to set the width and height of the image, as well as choose whether the image should be displayed as an <img> tag or as inline SVG.

Download the Drupal 8 SVG Image module like you would any other Drupal module. You can download it with Composer using the following command:

composer require drupal/svg_image

Install the SVG Image module.

Navigate to Structure > Content Types > Article and select Manage Fields. You should have an image field on your Article content type. Click the Edit button.

In the Allowed File Extensions text box add “svg” to the list.

Allow svg format

Save the page, then go to the Manage display tab for the Article content type.

Click the gear icon next to your image field and you will notice additional options for SVG images. The first lets you render the SVG as an <img> tag (enabled by default). The second lets you set the width and height of the SVG image.

SVG Image Options

You can now create an Article and upload your SVG image!

Sep 11 2019

An in-depth analysis of how Drupal's development was sponsored between July 1, 2018 and June 30, 2019.

The past years, I've examined Drupal.org's contribution data to understand who develops Drupal, how diverse the Drupal community is, how much of Drupal's maintenance and innovation is sponsored, and where that sponsorship comes from.

You can look at the 2016 report, the 2017 report, and the 2018 report. Each report looks at data collected in the 12-month period between July 1st and June 30th.

This year's report shows that:

  • Both the recorded number of contributors and contributions have increased.
  • Most contributions are sponsored, but volunteer contributions remains very important to Drupal's success.
  • Drupal's maintenance and innovation depends mostly on smaller Drupal agencies and Acquia. Hosting companies, multi-platform digital marketing agencies, large system integrators and end users make fewer contributions to Drupal.
  • Drupal's contributors have become more diverse, but are still not diverse enough.

Methodology

What are Drupal.org issues?

"Issues" are pages on Drupal.org. Each issue tracks an idea, feature request, bug report, task, or more. See https://www.drupal.org/project/issues for the list of all issues.

For this report, we looked at all Drupal.org issues marked "closed" or "fixed" in the 12-month period from July 1, 2018 to June 30, 2019. The issues analyzed in this report span Drupal core and thousands of contributed projects, across all major versions of Drupal.

What are Drupal.org credits?

In the spring of 2015, after proposing initial ideas for giving credit, Drupal.org added the ability for people to attribute their work in the Drupal.org issues to an organization or customer, or mark it the result of volunteer efforts.

A screenshot of an issue comment on Drupal.org. You can see that jamadar worked on this patch as a volunteer, but also as part of his day job working for TATA Consultancy Services on behalf of their customer, Pfizer.

Drupal.org's credit system is truly unique and groundbreaking in Open Source and provides unprecedented insights into the inner workings of a large Open Source project. There are a few limitations with this approach, which we'll address at the end of this report.

What is the Drupal community working on?

In the 12-month period between July 1, 2018 and June 30, 2019, 27,522 issues were marked "closed" or "fixed", a 13% increase from the 24,447 issues in the 2017-2018 period.

In total, the Drupal community worked on 3,474 different Drupal.org projects this year compared to 3,229 projects in the 2017-2018 period — an 8% year over year increase.

The majority of the credits are the result of work on contributed modules:

A pie chart showing contributions by project type: most contributions are to contributed modules.

Compared to the previous period, contribution credits increased across all project types:

A graph showing the year over year growth of contributions per project type.

The most notable change is the large jump in "non-product credits": more and more members of the community started tracking credits for non-product activities such as organizing Drupal events (e.g. the DrupalCamp Delhi project, Drupal Developer Days, Drupal Europe and DrupalCon Europe), promoting Drupal (e.g. the Drupal pitch deck), or community working groups (e.g. the Drupal Diversity and Inclusion Working Group, Governance Working Group).

While some of these increases reflect new contributions, others are existing contributions that are newly reported. All contributions are valuable, whether they're code contributions, or non-product and community-oriented contributions such as organizing events, giving talks, leading sprints, etc. The fact that the credit system is becoming more accurate in recognizing more types of Open Source contribution is both important and positive.

Who is working on Drupal?

For this report's time period, Drupal.org's credit system received contributions from 8,513 different individuals and 1,137 different organizations — a meaningful increase from last year's report.

A graph showing that the number of individual and organizational contributors increased year over year.

Consistent with previous years, approximately 51% of the individual contributors received just one credit. Meanwhile, the top 30 contributors (the top 0.4%) account for 19% of the total credits. In other words, a relatively small number of individuals do the majority of the work. These individuals put an incredible amount of time and effort into developing Drupal and its contributed projects:

Out of the top 30 contributors featured this year, 28 were active contributors in the 2017-2018 period as well. These Drupalists' dedication and continued contribution to the project has been crucial to Drupal's development.

It's also important to recognize that most of the top 30 contributors are sponsored by an organization. Their sponsorship details are provided later in this article. We value the organizations that sponsor these remarkable individuals, because without their support, it could be more challenging for these individuals to be in the top 30.

It's also nice to see two new contributors make the top 30 this year — Alona O'neill with sponsorship from Hook 42 and Thalles Ferreira with sponsorship from CI&T. Most of their credits were the result of smaller patches (e.g. removing deprecated code, fixing coding style issues, etc.) or in some cases non-product credits rather than new feature development or fixing complex bugs. These types of contributions are valuable and often a stepping stone towards more in-depth contribution.

How much of the work is sponsored?

Issue credits can be marked as "volunteer" and "sponsored" simultaneously (shown in jamadar's screenshot near the top of this post). This could be the case when a contributor does the necessary work to satisfy the customer's need, in addition to using their spare time to add extra functionality.

For those credits with attribution details, 18% were "purely volunteer" credits (8,433 credits), in stark contrast to the 65% that were "purely sponsored" (29,802 credits). While there are almost four times as many "purely sponsored" credits as "purely volunteer" credits, volunteer contribution remains very important to Drupal.

Contributions by volunteer vs sponsored

Both "purely volunteer" and "purely sponsored" credits grew — "purely sponsored" credits grew faster in absolute numbers, but for the first time in four years "purely volunteer" credits grew faster in relative numbers.

The large jump in volunteer credits can be explained by the community capturing more non-product contributions. As can be seen on the graph below, these non-product contributions are more volunteer-centric.

A graph showing how much of the contributions are volunteered vs sponsored.

Who is sponsoring the work?

Now that we've established that the majority of contributions to Drupal are sponsored, let's study which organizations contribute to Drupal. While 1,137 different organizations contributed to Drupal, approximately 50% of them received four credits or less. The top 30 organizations (roughly the top 3%) account for approximately 25% of the total credits, which implies that the top 30 companies play a crucial role in the health of the Drupal project.

Top contributing organizations based on the number of issue credits.

While not immediately obvious from the graph above, a variety of different types of companies are active in Drupal's ecosystem:

  • Traditional Drupal businesses: Small-to-medium-sized professional services companies that primarily make money using Drupal. They typically employ fewer than 100 employees, and because they specialize in Drupal, many of these professional services companies contribute frequently and are a huge part of our community. Examples are Hook42, Centarro, The Big Blue House, Vardot, etc.
  • Digital marketing agencies: Larger full-service agencies that have marketing-led practices using a variety of tools, typically including Drupal, Adobe Experience Manager, Sitecore, WordPress, etc. They tend to be larger, with many of the larger agencies employing thousands of people. Examples are Wunderman, Possible and Mirum.
  • System integrators: Larger companies that specialize in bringing together different technologies into one solution. Example system agencies are Accenture, TATA Consultancy Services, Capgemini and CI&T.
  • Hosting companies: Examples are Acquia, Rackspace, Pantheon and Platform.sh.
  • End users: Examples are Pfizer or bio.logis Genetic Information Management GmbH.

A few observations:

  • Almost all of the sponsors in the top 30 are traditional Drupal businesses with fewer than 50 employees. Only five companies in the top 30 — Pfizer, Google, CI&T, bio.logis and Acquia — are not traditional Drupal businesses. The traditional Drupal businesses are responsible for almost 80% of all the credits in the top 30. This percentage goes up if you extend beyond the top 30. It's fair to say that Drupal's maintenance and innovation largely depends on these traditional Drupal businesses.
  • The larger, multi-platform digital marketing agencies are barely contributing to Drupal. While more and more large digital agencies are building out Drupal practices, no digital marketing agencies show up in the top 30, and hardly any appear in the entire list of contributing organizations. While they are not required to contribute, I'm frustrated that we have not yet found the right way to communicate the value of contribution to these companies. We need to incentivize each of these firms to contribute back with the same commitment that we see from traditional Drupal businesses.
  • The only system integrator in the top 30 is CI&T, which ranked 4th with 795 credits. As far as system integrators are concerned, CI&T is a smaller player with approximately 2,500 employees. However, we do see various system integrators outside of the top 30, including Globant, Capgemini, Sapient and TATA Consultancy Services. In the past year, Capgemini almost quadrupled their credits from 46 to 196, TATA doubled its credits from 85 to 194, Sapient doubled its credits from 28 to 65, and Globant kept more or less steady with 41 credits. Accenture and Wipro do not appear to contribute despite doing a fair amount of Drupal work in the field.
  • Hosting companies also play an important role in our community, yet only Acquia appears in the top 30. Rackspace has 68 credits, Pantheon has 43, and Platform.sh has 23. I looked for other hosting companies in the data, but couldn't find any. In general, there is a persistent problem with hosting companies that make a lot of money with Drupal not contributing back. The contribution gap between Acquia and other hosting companies has increased, not decreased.
  • We also saw three end users in the top 30 as corporate sponsors: Pfizer (453 credits), Thunder (659 credits, up from 432 credits the year before), and the German company, bio.logis (330 credits). A notable end user is Johnson & Johnson, who was just outside of the top 30, with 221 credits, up from 29 credits the year before. Other end users outside of the top 30 include the European Commission (189 credits), Workday (112 credits), Paypal (80 credits), NBCUniversal (48 credits), Wolters Kluwer (20 credits), and Burda Media (24 credits). We also saw contributions from many universities, including the University of British Columbia (148 credits), University of Waterloo (129 credits), Princeton University (73 credits), University of Texas at Austin (57 credits), Charles Darwin University (24 credits), University of Edinburgh (23 credits), University of Minnesota (19 credits) and many more.
A graph showing that Acquia is by far the number one contributing hosting company.
Contributions by system integrators.

It would be interesting to see what would happen if more end users mandated contributions from their partners. Pfizer, for example, only works with agencies that contribute back to Drupal, and uses Drupal's credit system to verify their vendors' claims. The State of Georgia started doing the same; they also made Open Source contribution a vendor selection criteria. If more end users took this stance, it could have a big impact on the number of digital agencies, hosting companies and system integrators that contribute to Drupal.

While we should encourage more organizations to sponsor Drupal contributions, we should also understand and respect that some organizations can give more than others and that some might not be able to give back at all. Our goal is not to foster an environment that demands what and how others should give back. Instead, we need to help foster an environment worthy of contribution. This is clearly laid out in Drupal's Values and Principles.

How diverse is Drupal?

Supporting diversity and inclusion within Drupal is essential to the health and success of the project. The people who work on Drupal should reflect the diversity of people who use and work with the web.

I looked at both the gender and geographic diversity of Drupal.org contributors. While these are only two examples of diversity, they are the only diversity characteristics we currently have sufficient data for. Drupal.org recently rolled out support for Big 8/Big 10, so next year we should have more demographic information.

Gender diversity

The data shows that only 8% of the recorded contributions were made by contributors who do not identify as male, which continues to indicate a wide gender gap. This is a one percent increase compared to last year. The gender imbalance in Drupal is profound and underscores the need to continue fostering diversity and inclusion in our community.

A graph showing contributions by gender: 75% of the contributions come from people who identify as male.

Last year I wrote a post about the privilege of free time in Open Source. It made the case that Open Source is not a meritocracy, because not everyone has equal amounts of free time to contribute. For example, research shows that women still spend more than double the time as men doing unpaid domestic work, such as housework or childcare. This makes it more difficult for women to contribute to Open Source on an unpaid, volunteer basis. It's one of the reasons why Open Source projects suffer from a lack of diversity, among others including hostile environments and unconscious biases. Drupal.org's credit data unfortunately still shows a big gender disparity in contributions:

A graph that shows that compared to males, female contributors do more sponsored work, and less volunteer work.

Ideally, over time, we can collect more data on non-binary gender designations, as well as segment some of the trends behind contributions by gender. We can also do better at collecting data on other systemic issues beyond gender alone. Knowing more about these trends can help us close existing gaps. In the meantime, organizations capable of giving back should consider financially sponsoring individuals from underrepresented groups to contribute to Open Source. Each of us needs to decide if and how we can help give time and opportunities to underrepresented groups and how we can create equity for everyone in Drupal.

Geographic diversity

When measuring geographic diversity, we saw individual contributors from six continents and 114 countries:

A graph that shows most contributions come from Europe and North America.
Contributions per capita: contribution credits per capita are calculated as the number of contributions per continent divided by the population of that continent. 0.001% means that one in 100,000 people contribute to Drupal. In North America, 5 in 100,000 people contributed to Drupal in the last year.

Contributions from Europe and North America are both on the rise. In absolute terms, Europe contributes more than North America, but North America contributes more per capita.

Asia, South America and Africa remain big opportunities for Drupal, as their combined population accounts for 6.3 billion out of 7.5 billion people in the world. Unfortunately, the reported contributions from Asia are declining year over year. For example, compared to last year's report, there was a 17% drop in contribution from India. Despite that drop, India remains the second largest contributor behind the United States:

A graph showing the top 20 contributing countries.
The top 20 countries from which contributions originate. The data is compiled by aggregating the countries of all individual contributors behind each issue. Note that the geographical location of contributors doesn't always correspond with the origin of their sponsorship. Wim Leers, for example, works from Belgium, but his funding comes from Acquia, which has the majority of its customers in North America.

Top contributor details

To create more awareness of which organizations are sponsoring the top individual contributors, I included a more detailed overview of the top 50 contributors and their sponsors. If you are a Drupal developer looking for work, these are some of the companies I'd apply to first. If you are an end user looking for a company to work with, these are some of the companies I'd consider working with first. Not only do they know Drupal well, they also help improve your investment in Drupal.

Rank | Username | Issues | Volunteer | Sponsored | Not specified | Sponsors
1 | kiamlaluno | 1610 | 99% | 0% | 1% |
2 | jrockowitz | 756 | 98% | 99% | 0% | The Big Blue House (750), Memorial Sloan Kettering Cancer Center (5), Rosewood Marketing (1)
3 | alexpott | 642 | 6% | 80% | 19% | Thunder (336), Acro Media Inc (100), Chapter Three (77)
4 | RajabNatshah | 616 | 1% | 100% | 0% | Vardot (730), Webship (2)
5 | volkswagenchick | 519 | 2% | 99% | 0% | Hook 42 (341), Kanopi Studios (171)
6 | bojanz | 504 | 0% | 98% | 2% | Centarro (492), Ny Media AS (28), Torchbox (5), Liip (2), Adapt (2)
7 | alonaoneill | 489 | 9% | 99% | 0% | Hook 42 (484)
8 | thalles | 488 | 0% | 100% | 0% | CI&T (488), Janrain (3), Johnson & Johnson (2)
9 | Wim Leers | 437 | 8% | 97% | 0% | Acquia (421), Government of Flanders (3)
10 | DamienMcKenna | 431 | 0% | 97% | 3% | Mediacurrent (420)
11 | Berdir | 424 | 0% | 92% | 8% | MD Systems (390)
12 | chipway | 356 | 0% | 100% | 0% | Chipway (356)
13 | larowlan | 324 | 16% | 94% | 2% | PreviousNext (304), Charles Darwin University (22), University of Technology, Sydney (3), Service NSW (2), Department of Justice & Regulation, Victoria (1)
14 | pifagor | 320 | 52% | 100% | 0% | GOLEMS GABB (618), EPAM Systems (16), Drupal Ukraine Community (6)
15 | catch | 313 | 1% | 95% | 4% | Third & Grove (286), Tag1 Consulting (11), Drupal Association (6), Acquia (4)
16 | mglaman | 277 | 2% | 98% | 1% | Centarro (271), Oomph, Inc. (16), E.C. Barton & Co (3), Gaggle.net, Inc. (1), Bluespark (1), Thinkbean (1), LivePerson, Inc (1), Impactiv, Inc. (1), Rosewood Marketing (1), Acro Media Inc (1)
17 | adci_contributor | 274 | 0% | 100% | 0% | ADCI Solutions (273)
18 | quietone | 266 | 41% | 75% | 1% | Acro Media Inc (200)
19 | tim.plunkett | 265 | 3% | 89% | 9% | Acquia (235)
20 | gaurav.kapoor | 253 | 0% | 51% | 49% | OpenSense Labs (129), DrupalFit (111)
21 | RenatoG | 246 | 0% | 100% | 0% | CI&T (246), Johnson & Johnson (85)
22 | heddn | 243 | 2% | 98% | 2% | MTech, LLC (202), Tag1 Consulting (32), European Commission (22), North Studio (3), Acro Media Inc (2)
23 | chr.fritsch | 241 | 0% | 99% | 1% | Thunder (239)
24 | xjm | 238 | 0% | 85% | 15% | Acquia (202)
25 | phenaproxima | 238 | 0% | 100% | 0% | Acquia (238)
26 | mkalkbrenner | 235 | 0% | 100% | 0% | bio.logis Genetic Information Management GmbH (234), OSCE: Organization for Security and Co-operation in Europe (41), Welsh Government (4)
27 | gvso | 232 | 0% | 100% | 0% | Google Summer of Code (214), Google Code-In (16), Zivtech (1)
28 | dawehner | 219 | 39% | 84% | 8% | Chapter Three (176), Drupal Association (5), Tag1 Consulting (3), TES Global (1)
29 | e0ipso | 218 | 99% | 100% | 0% | Lullabot (217), IBM (23)
30 | drumm | 205 | 0% | 98% | 1% | Drupal Association (201)
31 | gabesullice | 199 | 0% | 100% | 0% | Acquia (198), Aten Design Group (1)
32 | amateescu | 194 | 0% | 97% | 3% | Pfizer, Inc. (186), Drupal Association (1), Chapter Three (1)
33 | klausi | 193 | 2% | 59% | 40% | jobiqo - job board technology (113)
34 | samuel.mortenson | 187 | 42% | 42% | 17% | Acquia (79)
35 | joelpittet | 187 | 28% | 78% | 14% | The University of British Columbia (146)
36 | borisson_ | 185 | 83% | 50% | 3% | Calibrate (79), Dazzle (13), Intracto digital agency (1)
37 | Gábor Hojtsy | 184 | 0% | 97% | 3% | Acquia (178)
38 | adriancid | 182 | 91% | 22% | 2% | Drupiter (40)
39 | eiriksm | 182 | 0% | 100% | 0% | Violinist (178), Ny Media AS (4)
40 | yas | 179 | 12% | 80% | 10% | DOCOMO Innovations, Inc. (143)
41 | TR | 177 | 0% | 0% | 100% |
42 | hass | 173 | 1% | 0% | 99% |
43 | Joachim Namyslo | 172 | 69% | 0% | 31% |
44 | alex_optim | 171 | 0% | 99% | 1% | GOLEMS GABB (338)
45 | flocondetoile | 168 | 0% | 99% | 1% | Flocon de toile (167)
46 | Lendude | 168 | 52% | 99% | 0% | Dx Experts (91), ezCompany (67), Noctilaris (9)
47 | paulvandenburg | 167 | 11% | 72% | 21% | ezCompany (120)
48 | voleger | 165 | 98% | 98% | 2% | GOLEMS GABB (286), Lemberg Solutions Limited (36), Drupal Ukraine Community (1)
49 | lauriii | 164 | 3% | 98% | 1% | Acquia (153), Druid (8), Lääkärikeskus Aava Oy (2)
50 | idebr | 162 | 0% | 99% | 1% | ezCompany (156), One Shoe (5)

Limitations of the credit system

It is important to note a few of the current limitations of Drupal.org's credit system:

  • The credit system doesn't capture all code contributions. Parts of Drupal are developed on GitHub rather than Drupal.org, and often aren't fully credited on Drupal.org. For example, Drush is maintained on GitHub instead of Drupal.org, and companies like Pantheon don't get credit for that work. The Drupal Association is working to integrate GitLab with Drupal.org. GitLab will provide support for "merge requests", which means contributing to Drupal will feel more familiar to the broader audience of Open Source contributors who learned their skills in the post-patch era. Some of GitLab's tools, such as in-line editing and web-based code review will also lower the barrier to contribution, and should help us grow both the number of contributions and contributors on Drupal.org.
  • The credit system is not used by everyone. There are many ways to contribute to Drupal that are still not captured in the credit system, including things like event organizing or providing support. Technically, that work could be captured as demonstrated by the various non-product initiatives highlighted in this post. Because using the credit system is optional, many contributors don't. As a result, contributions often have incomplete or no contribution credits. We need to encourage all Drupal contributors to use the credit system, and raise awareness of its benefits to both individuals and organizations. Where possible, we should automatically capture credits. For example, translation efforts on https://localize.drupal.org are not currently captured in the credit system but could be automatically.
  • The credit system disincentivizes work on complex issues. We currently don't have a way to account for the complexity and quality of contributions; one person might have worked several weeks for just one credit, while another person might receive a credit for 10 minutes of work. We certainly see a few individuals and organizations trying to game the credit system. In the future, we should consider issuing credit data in conjunction with issue priority, patch size, number of reviews, etc. This could help incentivize people to work on larger and more important problems and save smaller issues such as coding standards improvements for new contributor sprints. Implementing a scoring system that ranks the complexity of an issue would also allow us to develop more accurate reports of contributed work.

All of this means that the actual number of contributions and contributors could be significantly higher than what we report.

Like Drupal itself, the Drupal.org credit system needs to continue to evolve. Ultimately, the credit system will only be useful when the community uses it, understands its shortcomings, and suggests constructive improvements.

A first experiment with weighing credits

As a simple experiment, I decided to weigh each credit based on the adoption of the project the credit is attributed to. For example, each contribution credit to Drupal core is given a weight of 11 because Drupal core has about 1.1 million active installations. Credits to the Webform module, which has over 400,000 installations, get a weight of 4. And credits to Drupal's Commerce project get just 1 point, as it is installed on fewer than 100,000 sites.

The idea is that these weights capture the end user impact of each contribution, but also act as a proxy for the effort required to get a change committed. Getting a change accepted in Drupal core is both more difficult and more impactful than getting a change accepted into the Commerce project.

This weighting is far from perfect as it undervalues non-product contributions, and it still doesn't recognize all types of product contributions (e.g. product strategy work, product management work, release management work, etc). That said, for code contributions, it may be more accurate than a purely unweighted approach.
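For code contributions, the weighting sketched above could be computed along the following lines. This is a minimal illustration in TypeScript; the exact formula is an assumption, since the text only gives examples (about 1.1 million installs maps to a weight of 11, 400,000 to 4, and under 100,000 to 1).

interface Credit {
  project: string;
  activeInstalls: number;
}

// Assumed weighting: roughly one point per 100,000 active installations,
// with a minimum weight of 1 for small projects.
function weightForProject(activeInstalls: number): number {
  return Math.max(1, Math.round(activeInstalls / 100_000));
}

// Sum the weighted value of a list of credits.
function weightedCredits(credits: Credit[]): number {
  return credits.reduce(
    (total, credit) => total + weightForProject(credit.activeInstalls),
    0,
  );
}

// Example: one Drupal core credit plus one Webform credit = 11 + 4 = 15 points.
console.log(
  weightedCredits([
    { project: 'drupal', activeInstalls: 1_100_000 },
    { project: 'webform', activeInstalls: 400_000 },
  ]),
);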

The top 30 contributing individuals based on weighted Drupal.org issue credits.
The top 30 contributing organizations based on weighted Drupal.org issue credits.

Conclusions

Our data confirms that Drupal is a vibrant community full of contributors who are constantly evolving and improving the software. It's amazing to see that just in the last year, Drupal welcomed more than 8,000 individual contributors and over 1,100 organizational contributors. It's especially nice to see the number of reported contributions, individual contributors and organizational contributors increase year over year.

To grow and sustain Drupal, we should support those that contribute to Drupal and find ways to get those that are not contributing involved in our community. Improving diversity within Drupal is critical, and we should welcome any suggestions that encourage participation from a broader range of individuals and organizations.


Sep 11 2019

Portland, OR - The Drupal Association, an international nonprofit organization, welcomes its newly appointed board members to help advance its mission to unite a global open source community to build, secure, and promote Drupal. The Association’s Board of Directors ratified the appointment of five new board members in September, including: Grace Francisco, Lo Li, Owen Lansbury, Ryan Szrama and Leslie Glynn, who was elected for the community-at-large seat.

“We are excited to have these amazing individuals join us in our efforts to broaden our reach into diverse communities and to grow Drupal adoption. They bring a wide range of experiences and expertise to the Association that will enhance our opportunities to reach new audiences, support the Drupal community and elevate the Drupal project around the world,” said Adam Goodman, Drupal Association Board Chair. “We welcome Grace’s significant work in developer relations, developer marketing and program management; Leslie’s experience as a developer and project manager long emphasized by her years of contributions as a Drupal community member; Owen’s creative problem-solving, local Drupal association and DrupalCamp experience and business leadership skills; Lo’s extensive work in content management, brand promotion and tech platforms alongside her advocacy for women in technology; and Ryan’s product and service development and business skills coupled with his strong relationships in the Drupal community. We look forward to working with all of our new board members to achieve the Association’s strategic goals.”

Grace Francisco joined MongoDB in July as Vice President, Worldwide Developer Relations. Prior to that, she served as Vice President of Developer Relations and Education at gaming platform Roblox where she doubled the size of active developers to 2+ million. A seasoned developer relations leader with over 20 years of experience in software, she has co-authored three patents and led worldwide developer initiatives at Microsoft, Intuit, Yodlee and Atlassian. Francisco graduated cum laude and holds a BBA in Business Management from Golden Gate University.

“I am super excited to join the Drupal Association board,” said Francisco. “I first encountered the Drupal project back in 2010 while I was at Microsoft doing outreach to open source projects - building bridges to open source communities. It’s wonderful now, almost a decade later, to help from the other side to build bridges from Drupal to other tech organizations to broaden Drupal’s adoption.”

Leslie Glynn has more than thirty years of experience in the tech field as a software developer and project manager. She has been a freelance Drupal Project Manager and Site Builder since 2012. Glynn is very active in the Drupal community as an event organizer (Design 4 Drupal, Boston and NEDCamp), sprint organizer, mentor, trainer and volunteer. She is the winner of the 2019 Aaron Winborn Award. This annual award recognizes an individual who demonstrates personal integrity, kindness, and above-and-beyond commitment to the Drupal community.

“Being a volunteer at numerous Drupal camps and DrupalCons has given me the opportunity to meet and learn from many diverse members of the Drupal community,” said Glynn. “I hope to bring that knowledge and experience to my work on Drupal Association initiatives. One of the things I would like to help with is growing Drupal adoption through new initiatives that reach out to underrepresented and diverse groups through an increased presence at secondary schools and universities and to groups, such as Girls Who Code, in the tech space.”

Owen Lansbury is co-founder of PreviousNext, an independent Australian digital design and development company that has been one of the world's most prolific code contributors to the Drupal project. With 25 years’ professional experience and a background in Fine Art, Digital Media and User Experience Design, Lansbury blends creative problem solving with the business skills required to sustain his own company and work successfully with complex customers. He is also an active leader within the Australian and New Zealand Drupal community, bringing DrupalCon to Sydney in 2013, acting as Track Chair at several regional events and chairing the DrupalSouth Steering Committee.

Lansbury said, “As a long-term Drupal community contributor in Australia and New Zealand, I'm excited about the opportunity to bring my grassroots experience to the Association board at a global level. I've always been a bit jealous of our developers contributing code to Drupal, so being able to contribute my own business and community leadership experience to the Association board is a great opportunity for me to give something back at a global level."

Lo Li is the Senior Vice President, CIO of Global Consumer Solutions at Equifax. She has spent the past two decades leading global multi-billion dollar corporations for some of the world’s most renowned hospitality and retail brands, working with hundreds of teams dispersed across the UK, China, Singapore and India. Some of her work includes the creation of dynamic pricing and predictive analytics engines for global hotels, and scaling big data and Agile to enable business transformation at large retailers, including double-digit growth plans for digital and international presence. She brings a deep understanding of how to translate corporate visions and strategies into simple, elegant solutions - using her international business acumen and technology background as both a business enabler and a competitive differentiator. Li, who is multilingual - fluent in Mandarin, Portuguese, Spanish and English - is the recipient of several industry accolades and serves on the Board of Directors for several national nonprofit organizations. She received her Bachelor’s and Master’s degrees from the University of Georgia.

Li said, “I am thrilled to join the Drupal Association board because of the incredible open source community that has been fostered. The nurturing and growth of communities, like the one we have at Drupal, are the very catalyst to help organizations leap forward and provide an incubator for new ideas and thought leadership amongst digital citizens. It's truly an honor to be able to help shape the future of such a great organization!”

Ryan Szrama co-founded Commerce Guys in 2009 to offer Drupal-based eCommerce consulting and development services. He was the project lead of Drupal’s most popular eCommerce framework, Drupal Commerce, from its creation to its eventual use on over 60,000 websites. In 2016, Ryan acquired control of Commerce Guys from his partners, leading the company to rebrand to Centarro and launch new product and support offerings that enable teams to build with confidence on Drupal Commerce.

"My personal goals align perfectly with the mission of the Drupal Association: uniting a global open source community to build Drupal,” said Szrama. “I've been privileged to build a career in this community as a long-time contributor turned business owner, and I'm continually inspired by the members of this board to think bigger and give back more. I hope to apply my knowledge and experience to board initiatives that empower more people to better themselves and their organizations by using and contributing to Drupal."

The newly-elected members will join the following Association board members, continuing their service in the upcoming term: 

  • Baddý Sonja Breidert, 1xINTERNET

  • Dries Buytaert, Acquia

  • Luma Dahlbacka, Charles Schwab & Co

  • Suzanne Dergacheva, Evolving Web

  • Adam Goodman, Northwestern University’s Center for Leadership

  • Mike Lamb, Pfizer

  • Audra Martin-Merrick, Red Backpack Limited

  • George Matthes, Johnson & Johnson

  • Vishal Mehrotra, Tata Consultancy Services

  • Ingo Rübet, BOTLabs GmbH

  • Michel van Velde, One Shoe

About Drupal

Drupal is one of the leading content management software platforms that has been used to create millions of websites around the world. There are 46,000 plus developers with 1.3 million users on Drupal.org, and Drupal has the largest open source community in the world. Drupal has great standard features, easy content authoring, reliable performance and excellent security. What sets it apart is its flexibility; modularity is one of its core principles. Its tools help you build the versatile, structured content that ambitious web experiences need.

About Drupal Association

The Drupal Association is an international non-profit organization that engages a broad audience about Drupal, the leading CMS open source project. The Association promotes  Drupal adoption through the work and initiatives of a worldwide community of dedicated contributors, and support from individual and organizational members. The Drupal Association helps the Drupal community with funding, infrastructure, education, promotion, distribution and online collaboration. For more information, visit Drupal.org.

###

Sep 11 2019

The news about the forthcoming Drupal 9 release in June 2020 has become the hottest topic in the Drupal world. Website owners and developers are starting to plan for Drupal 9. One of the key points in this plan is to prepare for Drupal 9 with the Upgrade Status module to make sure the site uses no deprecated code.

We are glad to announce one more big advancement in the Drupal 9 readiness field. The newly released version 8.7.7 introduces support for a core version requirement, which allows developers to declare a project's compatibility with both Drupal 8 and Drupal 9. The details follow below.

Drupal 9 readiness: how are things going for modules?

The community is gradually building D9 in D8 while deprecating older APIs and functions and supporting new releases of third-party dependencies. Examples include the drupal_set_message() function deprecation and Drupal 8.8.x-dev's compatibility with the new Symfony 4.3.3.

Keeping up to date with all of that means modules and themes can be fully and instantly ready for Drupal 9. And it's great to see that more and more contributed projects are D9 compatible!

According to Gábor Hojtsy, a well-known Drupal contributor and product manager, their number increased from 48% in March to 54% in July 2019.

Core version requirement key introduced in Drupal 8.7.7!

As of Drupal 8.7.7, Drupal 9 readiness reaches a new level. So if anyone asks what’s new in Drupal 8.7.7, it’s the core version requirement support that comes to mind first. Gábor Hojtsy devoted a special blog post to this innovation and called it “huge.”

The key point is that all module and theme developers will need to add one more line to their info.yml file — the core_version_requirement key. 

Using it, developers can specify the multiple core versions their projects are compatible with. For example, they can declare compatibility with both Drupal 8 and Drupal 9. Here is how the lines might look for such a project:

name: Module Name
type: module
core: 8.x
core_version_requirement: ^8 || ^9

The new key allows developers to be even more specific about version compatibility by specifying minor versions (8.7.7, 8.8.3, etc.), which was not possible with the traditional “core: 8.x” key.


If a project only works with the 8.7.7 version or later, the developer will need nothing but the new key to mark its D9 readiness:

name: Module Name
type: module
core_version_requirement: ^8.7.7 || ^9

However, Drupal versions older than 8.7.7 do not support this improvement. So if a project is also meant to work with them, the traditional “core: 8.x” key should be listed along with the new one in the info.yml file, as in the first example above.

Let your website’s Drupal 9 readiness be ultimate!

With so many opportunities for full Drupal 9 readiness, it’s time to start. Our development and support team will assist you in:

  • updating your site to version 8.7.7 and always keeping it up-to-date with fresh releases
  • checking your website for deprecated code and giving it a cleanup
  • marking the Drupal 9 readiness of your projects according to the newly introduced requirements

Contact our developers to keep your website prepared — and let it fly swiftly and effortlessly to the ninth version in June 2020!

Sep 11 2019

Adding media content to a website instantly makes it more engaging and attractive. Media like images, videos, audio and slideshows work wonders in creating better, more compelling digital experiences. The Media module for Drupal 8 lets you manage your media elements easily and systematically. Media features began landing in core with Drupal 8.4, which introduced the Media module, and the Media Library followed in a later release.

What is Media module?

The Drupal 8 Media module is often called a ‘file browser to the internet’.
It lets you manage media files and assets regardless of where they are hosted. With this module you can add or embed different kinds of media in your website content, save them in the Media Library, embed videos from a URL, and more.

Drupal 8 Media Entities

The concept of media entities is similar to that of nodes: media types are to media entities what content types are to nodes. Each media type, however, behaves differently. For example, an Image media type exposes a different set of configurable features (such as dimensions) than a Remote Video media type that embeds a video from a URL. When you enable the Media module, these media types are created automatically:

  • File
  • Image
  • Audio File
  • Video File
  • Remote Video

You can also add fields to media types and create your own media types based on your selected media source.

Installation of Media module:

The Media module ships with Drupal 8 core, but you need to enable it before you can use it. Here are the steps to enable and use the Drupal 8 Media module:

Step 1:

Go to the Extend page and enable the Media and Media Library modules, as shown in the image below.

Step 2:

Once the modules are enabled, the media types appear under the site's structure. There are five types: Audio, File, Image, Remote video and Video. You can also create your own media types here.

Step 3:

Now we can use media as an entity reference for any field. For example, here we will create an image field that references the Image media type, as shown below.

Next, select “Media” as the reference type, as shown in the image below.

Step 4: 

The field has been created successfully. It is time to add some media of the available types. You can add it in two ways:

  1. From the admin dashboard, via “Content -> Add content -> Article”, using the image field.

  2. Directly from the “Media” tab under “Content”, where you will see the entire Media Library with all the media you have added.

The new Media Library interface can display your media in a grid view or a table view.

It is easy to filter and sort the media by various criteria, as well as select particular items and perform actions on them (delete, publish, save, unpublish).

Sep 11 2019

In a previous post I presented the Entity Activity module, which lets us set up a notification system on any type of Drupal 8 content entity, based on the three main actions of its life cycle: creation, update and deletion.

Since its eighth beta release, the Entity Activity module has included a sub-module, Entity Activity Mail, which emails each user a summary of the notifications generated for them, at a frequency that each user can configure.

General configuration

The module offers several general configuration options.

Entity Activity Mail General settings

The main configuration options, in addition to the possibility of stopping the sending of notifications by email globally, are:

  • The possibility to exclude notifications, which have been marked as read, from the report sent by email
  • The definition of the time at which background tasks are launched to generate each user's notification report and then send it. These tasks can be very time-consuming depending on the volume of notifications and users, so it is recommended to configure cron at the server level to run at least every hour, so that the queues responsible for generating and emailing the reports are fully processed.
  • The ability to log the sending of each report for history or debugging purposes.

We also have the possibility to customize the email of the notification report.

Entity Activity Mail mail settings

In particular, we can customize the sender address (leave it blank to use the site email), the subject of the email, a text displayed before the notification report, and a text displayed after the report (footer).

It is strongly recommended to use the SwiftMailer module, or any other solution of your choice, to ensure that the emails are sent in HTML format, which makes the generated reports easier to integrate and theme.

Configuration for users

In order to allow each user to select the frequency at which they wish to receive the notification report, we must configure the Logs report frequency component on the manage form display of the user entity.

Widget frequency

This component will allow each user to set up their account according to their preferences.

Frequency options

Thus, a user can choose from:

  • Never receive a report by email
  • Receive an email with each generated notification (immediately)
  • Receive a daily, weekly or monthly report.

Theming of the report

The notification report email can be themed by overriding the base template provided by the module: entity-activity-mail-report.html.twig

The base template renders each notification according to the Mail view mode, which you can customize, through the logs_content variable, which contains a render array of all the log entities in that view mode. For more detailed needs, the Twig template also exposes the logs variable, containing all the Log entities that make up the report. If your needs are more advanced, you can always call on a Drupal freelancer who will most certainly be able to customize the rendering according to your expectations.

By way of conclusion

With a few clicks we can now offer users the option of receiving a notification report according to their preference.

As we have said above, generating and sending these reports by email can be very time-consuming, so the module works in three steps: first it collects all eligible users according to their sending frequency; then, for each user, a processing queue generates the report; finally, another processing queue sends the email itself. These three steps make it possible to generate and send reports more easily, but bear in mind that the processing queues are only executed when the site's cron runs. This is why it is highly recommended to launch cron from the server hosting the site at least every hour, or more frequently depending on the volume (number of users, estimated number of notifications) of your Drupal project.

Sep 10 2019
Sep 10

This blog has been re-posted and edited with permission from Dries Buytaert's blog.

I'm excited to share that when Drupal 8.8 drops in December, Drupal's WYSIWYG editor will allow media embedding.

You may wonder: Why is that worth announcing on your blog? It's just one new button in my WYSIWYG editor.

It's a big deal because Drupal's media management has been going through a decade-long transformation. The addition of WYSIWYG integration completes the final milestone. You can read more about it on Wim's blog post.

Drupal 8.8 should ship with complete media management, which is fantastic news for site builders and content authors who have long wanted a simpler way to embed media in Drupal.

Congratulations to the Media Initiative team for this significant achievement!

Sep 10 2019
Sep 10

open waters

In this episode, we talk with Mediacurrent's Mario Hernandez about why training is so important for web teams to grow and stay competitive. And yes, we are once again interviewing one of the hosts. 

Audio Download Link

About Mario 

In addition to his position as Head of Learning, Mario is a Senior Front End Developer with over 15 years of Drupal experience. He and I actually started on the same day, 5 years ago. Mario is a regular speaker and trainer at tech conferences including Drupal Camps and DrupalCons. He is a co-host of the Open Waters podcast and an active participant in the Drupal core project and other open source projects. Prior to Mediacurrent, Mario also gained over 10 years of experience in the federal government.

Project Pick

Apollo GraphQL

  • Server
  • Client
  • Platform

Interview: 

The best way to learn is to teach. 

  1. How did you get started with Drupal and front end development in general?
  2. How did you get started doing training?
  3. What is your favorite part of training people?
  4. Is Mediacurrent’s training limited to only events and/or only Drupal?
  5. How do you think training is most effective when working with a client’s internal development team?
  6. In addition to FE training, does Mediacurrent offer training in other areas? Yes! We offer training in Accessibility, SEO, Back End, Digital Strategy, GatsbyJS, and more.
  7. How can organizations interested in our training offerings get more information?
Sep 10 2019
Sep 10

This is the second of three blog posts about creating an app with React Native. To get started with React Native, read the Basics of React Native. Once you are familiar with the system and have an app, it is time to fill it out with content. If you don’t have content on your Drupal website, read Understanding Drupal 8’s Migrate API.

Exposing Drupal content

Some helpful modules to expose Drupal content are: Views, RESTful Web Services, and Serialization. The concept of getting Drupal content to an endpoint is simple:

  1. Build a view containing the data you want to expose.
  2. Add a “REST export” display to the view. During this step, select the appropriate serialization type.
  3. This will automatically create a REST endpoint at the path you specify.

The dataflow should look something like this: Drupal Content -> View -> Serializer -> REST endpoint.

Using fetch to asynchronously retrieve data

React Native compiles its JavaScript with Babel, which means modern ES6+ code can be used anywhere in the project. This is useful for retrieving data, because it enables the async and await keywords. When used in conjunction with something like fetch, you can create a smooth and elegant solution for pulling data from a server, such as content from a Drupal website.

The code for pulling from a Drupal website REST endpoint is the same as for any other REST endpoint. It should look something like this:

async getSomeData() {
  // Fetch the endpoint and wait for the response without blocking the UI.
  let url = "https://data.org/useful/data";
  let response = await fetch(url);
  let data = await response.text();
  return data;
}

The advantage of making a call like this asynchronously is that the rest of the app keeps running while the fetch waits for the server to return the requested data. This improves the user experience because users can keep interacting with the app while the data loads.

Building a FlatList from a data set

After pulling in data from the endpoint, add a component to display the data. <FlatList> is an excellent component already built into React Native. These components are useful because they can handle very large data sets without impacting performance, since they only render the part of the list that is currently on screen.

A <FlatList> component takes two props for displaying data (you may need to massage the data to make it easier to use inside a <FlatList>). The first prop, data, is the set of data it will display. The second, renderItem, describes how the data should be displayed: a JSX callback that tells the <FlatList> component how to represent each list item and which fields to pull from the data. You can use any component inside renderItem.

The ListItem component provided by React Native Elements has lots of styling features, like color gradients and automatic chevron placement.

Here is an example <FlatList>:

<FlatList
  style={{backgroundColor: '#ededed'}}
  data={this.state.peopleData}
  renderItem={({item: person}) => (
    <View>
      <ListItem
        title={person.name}
        titleStyle={{ color: '#00AEED', fontWeight: 'bold' }}
        subtitle={person.position}
      />
    </View>
  )}
/>

With the skills to expose, retrieve, and display your data, you can integrate a Drupal website with your new React Native app.

Sep 10 2019
Sep 10

Dropsolid is a Diamond sponsor at DrupalCon Amsterdam, 28-31 October. In this post, I’d like to share a bit about our vision for delivering the best customer experiences, our open integrated Digital Experience Platform, our partner program, and a special opportunity for DrupalCon attendees.

Are you working in a digital agency and coming to DrupalCon? We’d love to meet you at DrupalCon and talk about how our tools, infrastructure, and expertise could help you as a digital agency partner. We’ll be at Stand 13, by the catering area, so if you fancy a coffee, stop by for a chat. We’re running a very special giveaway, too. Complete a quick survey and we’ll donate 15 minutes of core contribution time as a thank you.

Sign up for Dropsolid News


A vision for Drupal to improve customer experience

In my previous post, I wrote about why we’re sponsoring DrupalCon. Simply put, without it, we wouldn’t exist. I also wrote about what we’re working on for the future, inspired by the market changes around digital experience management. I think we have something unique to offer our partner digital agencies right now.

I’ve gone from being a developer to a CEO, and I know the attraction of solving problems by building your own solutions. Yet, like many agencies, we discovered a few years ago that doing everything in-house was hindering our growth. To solve this, we ended up pivoting our entire company, defining and offering solutions in a completely different way.

We found that many of our clients’ and partners’ teams were working in silos, with different focuses—one on marketing, another on hosting, and so on. We believe we have to take an integrated approach to solving today’s problems, and a big part of that is offering stellar customer experience. We discovered that investing in customer experience means your customers stick around longer. This translates to increased customer lifetime value, lower customer acquisition costs, and lower running costs. But what does it take to get there?

We have to recognize how problems are connected, so we can build connected solutions. You can see this in problems like search engine optimization. SEO is as much about great user experience as it is about your content. Today, for example, the speed and performance of your website affects your search engine rankings. Incidentally, my colleagues Wouter De Bruycker (SEO Specialist) and Brent Gees (Drupal Architect) will be talking about avoiding Drupal SEO pitfalls at DrupalCon Amsterdam.

Similarly, it seemed that various solutions out there were narrowly focused on a single area. We saw the potential and power of integrating these as parts of a unified Digital Experience Platform. Stand-alone, any one of these tools offers benefits, but integrated together, the whole is greater than the sum of its parts.

We are taking this approach with our clients already. With each successful engagement, we add what we learn to our toolbox of integrated solutions. We are building these solutions out for customers with consultation and training to make the most out of their investments. These include our hosting platform; our local dev tool, Launchpad; our Drupal install profile, Dropsolid Rocketship; Dropsolid Personalization; and Dropsolid Search optimized with Machine Learning. 

But our vision is bigger. We are working towards an open, integrated Digital Experience Platform that our partner agencies can leverage for greater creative freedom and increased capacity without getting in their own way.

Stop by at DrupalCon or get in touch and see what we’re building for you. 

Read more: Open Digital Experience Platform


A Partner for European Digital Agencies

Dropsolid is the only European company sponsoring DrupalCon Amsterdam at the top-tier, Diamond sponsor level. With all due respect for our American colleagues, we believe a robust European company should exist to support all of us here. We want to help other European companies build successful digital experiences with Drupal at the core for organizations, governments, and others.

Like many Drupal agencies, we’ve gotten to where we are now by providing services to our local market. Being based in Belgium, we design, strategize, build, maintain, and run websites and applications for clients, mainly in the Benelux region.


Now, we are looking for partners outside of Belgium to benefit from using our Drupal Open Digital Experience Platform for themselves and their customers. Dropsolid has the tools, infrastructure, and expertise to support creating sustainable digital experiences for anyone. Furthermore, we have the advantage of knowing and understanding the differing needs of our colleagues and clients across Europe.


Come join us!

We are looking for more partners to join us on this journey. By leaning on our tools and expertise, those who have already joined us now have more capacity for creative growth and opportunity.

What you might see as tedious problems and cost-centers holding your agency back, we see as our playground for invention and innovation. Our partners can extend and improve their core capabilities by off-loading some work onto us. And you gain shared revenue from selling services that your customers need.

You might be our ideal partner if you prefer

  • benefitting from recurring revenue, and 
  • not taking on additional complexity that distracts you from your core creative business.

Partners who sign up with us at DrupalCon will get significant benefits including preferred status and better terms and conditions compared to our standard offerings. Talk to us about it at our booth at Stand 13 or contact us to arrange a time to talk.


Facets of the Digital Experience - Dropsolid Sessions at DrupalCon

Check out my other post to see where to meet the Dropsolid Team at DrupalCon. You’re welcome to come say hello at our booth at Stand 13, and we can show you the facets of digital experience management as we see them, and also share our vision for the future.

Each one of our talks focuses on different facets of improving the digital experience for customers:

Sep 10 2019
Sep 10
Just like the poem says, “Little drop makes the mighty Ocean,” all contributions matter in the growth of the global Drupal community. A diverse community results in great things. To ensure the longevity of Drupal digital experiences and the…
Sep 10 2019
Sep 10

DrupalCon Minneapolis 2020 is accepting session proposal submissions until Dec. 4, 2019! We welcome a wealth of perspectives and a vast knowledge base as presenters in Minneapolis for DrupalCon North America.

Sep 10 2019
Sep 10

Download the Menu Item Extras module like you would any other Drupal module. You can download it with Composer using the following command:

composer require drupal/menu_item_extras

Install the Menu Item Extras module. The module also provides a Demo module that can be used to see some examples of a menu with fields and display modes configured. In this case, we will just look at the base Menu Item Extras module.

Navigate to Structure > Menus > Main Navigation. The first thing you should notice is that there are additional links to Manage fields, Manage form display, Manage display, and View mode settings. This is very similar to what you have probably used on other entity types.

If you need to store any additional data for a menu link, you can do this on the Manage fields page. One potential use of this is to add an image field:

Manage Fields

You can then manage the way this is displayed on the menu link add/edit form:

Manage Form Display

You can also control how this menu item is displayed:

Manage Display

If you navigate to Structure > Display modes > View modes you can add additional view modes for menu items. In this example, I created a new view mode for Custom menu links. I called the view mode Image Link.

Custom Menu Link

You can now navigate back to Structure > Menus > Main Navigation and go to Manage display. In the Custom Display Settings section, you can enable the Image Link view mode and configure the display settings for the Default links and the Image Link view mode displays.

Custom Display Settings

You can now navigate to the View Modes Settings tab and select what view mode to use for each link in your menu.

View Mode Settings

This additional flexibility allows you to do a lot with your menu items. You could use this to build out a customized mega menu (this would require additional theme and template development). You could also use this to customize the display of menu items (perhaps by adding icons next to menu links, adding additional menu link descriptions, and more). The module provides you with the site building tools to customize your menu items; now it’s up to you to decide how you want to use them!
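
As a starting point for that kind of custom template or preprocess work, remember that the extra field values live on menu_link_content entities, so they can also be read from code. A minimal sketch, assuming a hypothetical field_menu_image field and a known menu link entity ID:

use Drupal\menu_link_content\Entity\MenuLinkContent;

// Load a custom menu link and read the extra image field added above.
$link = MenuLinkContent::load(5);
if ($link && !$link->get('field_menu_image')->isEmpty()) {
  // The image field item references a file entity.
  $file = $link->get('field_menu_image')->entity;
  $image_uri = $file->getFileUri();
}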

Sep 09 2019
Sep 09

Our lead community developer, Alona Oneill, has been sitting in on the latest Drupal Core Initiative meetings and putting together meeting recaps outlining key talking points from each discussion. This article breaks down highlights from meetings this past week.

You'll find that the meetings, while also providing updates of completed tasks, are also conversations looking for community member involvement. There are many moving pieces as things are getting ramped up for Drupal 9, so if you see something you think you can provide assistance on, we encourage you to get involved.

Drupal 9 Readiness Meeting

September 02, 2019

Meetings are for core and contributed project developers as well as people who have integrations and services related to core. Site developers who want to stay in the know to keep up-to-date for the easiest Drupal 9 upgrade of their sites are also welcome.

  • It usually happens every other Monday at 18:00 UTC.
  • It is done over chat.
  • Happens in threads, which you can follow to be notified of new replies even if you don’t comment in the thread. You may also join the meeting later and participate asynchronously!
  • Has a public Drupal 9 Readiness Agenda anyone can add to.
  • The transcript will be exported and posted to the agenda issue.

Multiple Version Compatibility for info.yml Files

Congratulations on landing multiple version compatibility for info.yml files! The new core_version_requirement key in *.info.yml files for modules and themes now supports semantic versioning as implemented by the Composer project. This allows modules and themes to also specify that they are compatible with multiple major versions of Drupal core. For more information, read the issue: New 'core_version_requirement' key in info.yml files for modules and themes allows Composer semantic version constraints including specifying multiple major versions of core.

To follow up, the issue Don't catch exception for invalid 'core_version_requirement' in info.yml files was opened and Gábor Hojtsy posted Drupal 8.7.7+ will support extensions compatible with both Drupal 8 and 9! to explain the multi-version support in the newest release.

Drupal 9 Requirements Issue

The Drupal 9 requirements issue has been updated to list 3 requirements: 

  1. Multi-version compatibility
  2. Symfony 4.4 green, and
  3. lack of (or very low) use of deprecated APIs.

There is also a fallback date in the week of October 14th, alongside 8.9's branching; see [META] Requirements for opening the Drupal 9 branch.

Symfony Updates

Drupal Core Deprecation

Drupal core's own deprecation testing results are posted here.

DrupalCon Amsterdam

  • DrupalCon Amsterdam is approaching fast! It would be lovely to run Drupal 9 compatibility contribution events for next month's DrupalCon.
  • Tools for PHP deprecations should be there.

  • Tooling for constants, JS deprecations, and Twig deprecations is still missing.

Admin UI Meeting

September 04, 2019

  • Meetings are for core and contributed project developers as well as people who have integrations and services related to core. 
  • Usually happens every other Wednesday at 2:30pm UTC.
  • Is done over chat.
  • Happens in threads, which you can follow to be notified of new replies even if you don’t comment in the thread. You may also join the meeting later and participate asynchronously!
  • There are roughly 5-10 minutes between topics for those who are multitasking to follow along.
  • The Admin UI Meeting agenda is public and anyone can add new topics in the document.

Core Issue Draft

  • Review core issue draft to add Claro administration theme to core.
  • We need to open an issue to start evaluating what's needed to add Claro to core and start getting feedback from everybody involved in the process.
  • On the roadmap issue, we still need several things.

Underlining Link

Today, some concerns were raised about the latest issue on link underlining.

There are two proposed options to solve the issue.

Black underlined links that turn blue on hover.

showing black text with a black underline on load and on hover text changes to blue

Blue underlined links whose underline is removed on hover.

showing blue text with black underline on load and on hover the underline disappears leaving only blue text


Action Link Styles and Padding

The current design of action links leads to several issues:

  • Spacing between mixed elements button | button | action-link | action-link | button.
  • No explicit visual feedback that they're links.
  • Extra whitespace if an action link is the first element in a content row.

In the last review we had with Product Management for Claro, we got some feedback that we need to address before adding Claro to core.

Some initial tests are moving this way.

spacial adjustments to settings toolbar in claro

Composer in Core Initiative Meeting

September 04, 2019

While working toward issue 2982680: Add composer-ready project templates to Drupal core, we discovered that the Vendor Hardening Plugin sometimes fails to work, throwing an error. The issue is documented in Move Drupal Components out of 'core' directory.

A related issue, [meta] Reorganize Components so they are testable in isolation, covers the ability to test Drupal's core Components in isolation as much as possible.

Vendor Hardening plugin

Add Composer vendor/ hardening plugin to core.

Ryan Aslett summarized: Duplicate the contents of core-file-security into core-vendor-hardening, because core-vendor-hardening is unable to find core-file-security when composer-installers moves it, due to Composer's behavior regarding plugin interaction. Moving all of the components might have other consequences and require additional efforts that we eventually want to tackle, just not right now.

Broken Builds

  • While creating the 1.4.0 release of this project, this line was changed, resulting in broken builds for all packages that depended on behat/mink-selenium2-driver: 1.3.x-dev, such as Drupal core and many others.
  • It also looks like we're inching closer to a 1.7.2 release in behat/mink.    

Template Files

The templates we have so far look good; once vendor hardening lands, the question is whether they will be done. The tests are probably not something we want in DrupalCI like that. Possibly we should wait for the issue Add a new test type to do real update testing before adding tests. It's okay to add this to core without tests for the time being.

Automatic Updates

Potential conflicts have been uncovered and need to be addressed.

Angie Byron has been trying the past couple of days to get up to speed on the autoupdates initiative. Work is happening at https://www.drupal.org/project/issues/automatic_updates and it seems they're going with a “quasi-patch” approach, doing in-place updates. You can find Greg Anderson's summary within the issue comments to follow along.

Migration Initiative Meeting

September 05, 2019

This meeting:

  • Usually happens every Thursday and alternates between 14:00 and 21:00 UTC.
  • Is for core migrate maintainers and developers and anybody else in the community with an interest in migrations.
  • Is done over chat.
  • Happens in threads, which you can follow to be notified of new replies even if you don’t comment in the thread. You may also join the meeting later and participate asynchronously!
  • Has a public migration meeting agenda anyone can add to.
  • Transcript will be exported and posted to the agenda issue.
  • For anonymous comments, start with a bust in silhouette emoji. To take a comment or thread off the record, start with a no entry sign emoji.

Issues Needing Review

Great progress is being made, and there are a lot of issues that are awaiting review.

31 Days of Migrations

The 31 Days of Migrations series has been an outstanding success, thanks to Mauricio Dinarte and his brother! How can we best include these guides in the official documentation and make sure that they are disseminated to those who could use them? Mauricio suggested: 

  • It can be copied over in full like this book, Drupal 7 - The Essentials.
  • Should it be broken down into "recipes" (self-contained examples)?
  • Should it be broken apart, adding the relevant pieces to the existing documentation topics/pages?
Sep 09 2019
Sep 09

Rain logo updated

Mediacurrent created the Rain Install Profile to build fast, consistent Drupal websites and improve the editorial experience. Rain expedites website creation, configuration, and deployment.

Overview

The Mediacurrent development team uses a Composer project template that extends the official Drupal Composer template to add Rain projects as well as additional tools and scripts.

Our template by default leverages a fork of DrupalVM which will provision the local environment. Note that Docker-based environments such as Lando or DDEV could be used as an alternative to Vagrant.

In this tutorial, we will walk through each step to get you up and running quickly. Below, you can also watch a narrated tutorial video to see these steps in action.

Installation instructions

First, you will want to create a repository wherever you typically host your Git projects (e.g. GitHub, Bitbucket, or GitLab). Once you have that set up, you can clone Mediacurrent’s repo and point the origin back to your Git repo. The example command below illustrates how this is done.

Example:

git remote set-url origin [email protected]:mediacurrent/shortcode_project.git

Next, you will want to initialize the project. You can do that by running the following commands with your local host name and IP (see example below).

Example:

composer install

composer drupal-scaffold

./scripts/hobson project:init example.mcdev 192.168.50.4

Finally, to build the project and run the install you can simply run the following build command to execute the composer install and Drupal install:

./scripts/build.sh

Note that this command does require Mediacurrent’s Vagrant environment in order to work. If you are using an alternative local environment, you would run composer install, followed by the drush site install command, instead of running the build script.

Once you get a full install working with the sample profile that’s been provided you will want to follow the project README documentation for further setup instructions. Remember to commit all of your files and push up to your Git’s origin. That’s it!

Questions or comments? Let me know at https://twitter.com/drupalninja/.

Sep 09 2019
Sep 09

When you build a new website, going live is relatively easy. You get ahold of a domain name, point it at a webhost, put the website code there, and you're up and running!

After a site is live, it gets a lot more complicated.

What's important about deployment?

If you have a simple brochure site, deploying updates doesn't have to be complicated. The more your site does, the more complex deployment becomes. A deployment plan can help you stay out of trouble, keep your site online, and minimize data loss. So when going live with an update to a site, you should ask:

  • How much downtime is acceptable?
  • How much testing do we need before we make a change to the production site?
  • What data could we lose from the production site?
  • What might go wrong with this deployment strategy?
  • How can we recover if something does go wrong?

A good deployment plan should make you smile with comfort, knowing you have all the bases covered. Are you smiling? If not, read on.

Common deployment strategies

Here are the main strategies we've seen or used for deployment:

  • Do all work in the production environment so there's nothing to deploy
  • Copy the entire new site into the production environment
  • Compile/build a site and put the result into the production environment
  • Dev/Stage/Production pipeline
  • Blue/Green deployments

Let's take a deeper look at each one.

No Deployment - work in production

All too often, this is what you get if you aren't careful hiring a freelancer. This really seems to be the standard approach for most WordPress sites, which to me is horrifying.

Coding is often a process of trying something, breaking things, and then fixing them. Rinse and repeat. If you're doing this on a live production website, your site visitors will see broken pages, weird issues, or sometimes nothing at all. If your site is already getting traffic, working in production is irresponsible and dangerous, especially if you aren't extremely careful about backups and aren't extremely proficient.

The only benefit of "no deployment" deployment strategies is that it's cheap -- you're saving the cost of managing a copy of your site, and deploying changes.

Copy site to production

This also seems to be a pretty common way of deploying sites -- simply copy the new site in its entirety to the production server and make it live.

For major upgrades, such as going from Drupal 7 to Drupal 8, or changing from one platform to an entirely different one, this is the main strategy we use. And there are definitely times when this strategy makes sense. However, for day-to-day maintenance, theme refreshes, or most typical deployments, this is not a very good approach.

If your production site has a database with regular changes to it, you need to be extremely careful not to lose production data. For example, if your site allows user comments, takes e-commerce orders, or manages sales leads, simply copying a new site up risks losing something.

Save this one for entirely new sites. Don't do this for day-to-day work -- unless your site doesn't even have a database.

Build site and deploy

"Static site generators" like Gatsby and Jeckyll have become quite popular recently, because they generate static sites that do not have a database -- greatly simplifying security. If you're running a full-blown Content Management System (CMS) like Drupal or WordPress, you're putting an application with bugs on the Internet where anyone can attack it. If your site is just a collection of files, they can't really attack it -- they can attack the hosting environment but your site itself has far less "attack surface" for an attacker to go after.

Gatsby in particular is becoming quite popular as a front-end to Drupal and WordPress -- you write your content in a private CMS on a private LAN, not reachable from the Internet, export the entire site using Gatsby (the build step), and then copy the resulting code up to the web host (much like the previous deployment strategy).

If you use this approach, you still need to consider how to keep your CMS up to date, though if it's not online, updating it in place becomes a far more reasonable proposition.

Dev/Stage/Production pipeline

Now we've reached what we consider to be the "standard" deployment practice -- run 3 copies of your site:

  • Dev, or Development -- this copy is where you do the development work, or at least integrate all the various developer copies, and conduct the initial round of testing.
  • Stage, or Test -- The main purpose of this copy is to test the deployment process itself, and understand what might break when you roll out to production.
  • Production, or Live -- The site that's available to the public.

In general, code flows from dev to production, whereas content/data flows from production to dev. If your site takes orders, collects data using forms, supports ratings/reviews or comments, or does anything sophisticated, you'll probably end up with this deployment strategy.

Several of the more professional hosts, like Pantheon, Acquia, WP Engine, and others provide these 3 environments along with easy ways to deploy code up to prod, and copy data down from prod.

Many larger companies or highly technical startups have built out "continuous integration/continuous deployment" pipelines along these lines -- including Freelock. "Continuous Integration" basically kicks off automatic tests after code is pushed to a particular branch, and "Continuous Deployment" automates the deployment of code to production when tests have passed.

This is the key service we provide to nearly all our clients -- two different kinds of testing, a fully automatic pipeline, with automatic backups, release scheduling, release note management, and more. And we've built our pipeline to work with a variety of hosts, including Pantheon and Acquia, but also bare Linux servers at any cloud provider.

The main downsides of this type of deployment are that it can be slow to deploy, very hard to set up, and prone to breaking as code and standards evolve; different platforms also have different challenges around deploying configuration changes. For example, when you move a WordPress database to another location, you need to do a string search/replace in the database to update the URL and the disk location, and you may need to do manual steps after the code gets deployed. Drupal, on the other hand, may put the site in maintenance mode for a few minutes as database updates get applied.

All in all, when done well, this is a great deployment strategy, but can be very expensive to maintain. That's why our service is such a great value -- we do all the hard work of keeping it running smoothly across many dozens of clients, have automated a lot of the hard bits, and streamlined the setup.

Blue/Green deployments

If even a minute of downtime costs a significant amount of income, you may want to consider a Blue/Green deployment strategy. This is a strategy made for "high availability" -- doing your utmost to both minimize maintenance windows, and provide a rock-solid roll-back option if something goes awry.

With a Blue/Green deployment strategy, you essentially create two full production environments -- "blue" and "green". One of them is live at any given instance, the other is in standby. When you want to deploy an update, you deploy all the new code and configuration changes to the offline environment, and when it's all ready to go, you simply "promote" it to be the live one. For example, if Blue is live, you deploy everything to Green, possibly using a normal dev/stage/prod deployment process. The configuration changes happen while the green site is offline, so the public never gets a "down for maintenance" message. When it's all ready, you promote Green to live, and Blue becomes the offline standby copy. And if you discover a problem after going live, you simply promote Blue back to live, and Green goes into standby where it can get fixed.

There is a big downside here -- if your site takes orders, or otherwise changes the production database, there's a window where you could lose data, much like the "Copy Site to Production" strategy. You might be able to somewhat mitigate this by setting the live site to "read only" but still available, while you copy the database to the standby site and then apply config and promote. Or you might be able to create a "journal" or log of changes that you replay on the new site after it gets promoted. Or move to a micro-service architecture -- but then you're just moving the problem into individual microservices that still need a deployment strategy.

Which deployment strategy is best?

There is no "best" deployment strategy -- it's all about tradeoffs, and what is most appropriate for a particular site's situation. If you break up your site into multiple pieces, you may end up using multiple strategies -- but each one might be quite a bit simpler than trying to update the whole. On the other hand, that might actually lower availability, as various pieces end up with different maintenance schedules.

If you're running a PHP-based CMS, and you want to rest easy that your site is up-to-date, running correctly, and with a solid recovery plan if something goes wrong, we can help with that!

Sep 09 2019
Sep 09

With Drupal 9 slated to be released in June 2020, the Drupal community has around 11 months to go. So, before you map out a transition plan, now is the time to discuss what to expect from Drupal 9.

Switch to Drupal 9

You must be wondering:

Is Drupal 9 a reasonable plan for you?

Is it easy to migrate from recent Drupal versions to the new one?

This blog post has all your questions answered.

Let’s see the major changes in Drupal 9

The latest version of Drupal is being built on Drupal 8, and the migration will be far easier this time.

  • Updated dependency versions so that they remain supported
  • Removal of deprecated code before release

The foremost update to be made in Drupal 9 is the move to Symfony 4 or 5, and the team is working hard on its implementation.

Planning to Move to Drupal 9?

Drupal 8.7, released in 2019, optionally supports Twig 2, which has let developers start testing their code against that version of Twig. Drupal 8.8 will all but support the latest version of Symfony. Ideally, the Drupal community would like to release Drupal 9 with support for Symfony 5, which is to be released at the end of 2019.

Drupal 9

If you are already using Drupal 8, the best advice is to keep your site up to date. Drupal 9 is an updated version of Drupal 8 with updated third-party dependencies and deprecated code removed.

Ensure that you are not using any deprecated APIs or modules, and wherever possible use the most recent versions of dependencies. If you do that, your upgrade should not encounter any problems.
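
A typical example of such a cleanup (just an illustration, not an exhaustive checklist) is replacing drupal_set_message(), deprecated in Drupal 8.5 and removed in Drupal 9, with the messenger service:

// Deprecated since Drupal 8.5 and removed in Drupal 9.
drupal_set_message(t('Settings saved.'));

// Drupal 9 ready replacement using the messenger service.
\Drupal::messenger()->addStatus(t('Settings saved.'));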

Since Drupal 9 is being built within version 8, developers will have the choice to test their code and make updates before the release of Drupal 9. This is an outstanding update and was not possible with the previous versions of Drupal!

So, where are you in the Drupal journey?

Here are some scenarios to support your migration process:

Are you on Drupal 6?

You are way behind! We strongly suggest you move to Drupal 8 as soon as you can; migration from 8 to 9 will be straightforward. Drupal 8 includes migration paths for Drupal 6 which probably won’t be included in Drupal 9. While some contributed modules may still provide them, it is better to be safe.

Are you on Drupal 7?

Support for both Drupal 7 and 8 will end by 2021. Since the release of Drupal 9 is set for June 2020, you should plan your upgrade and go live. If you haven’t migrated by then, your website may be vulnerable to security threats. Since a Drupal 7 to 9 migration is similar to new development, you just need to consider the timeline involved.

Are you on Drupal 8?

Great work if you are already on Drupal 8! For you, moving to the next major version will take very little effort.

The big difference between the last version of Drupal 8 and the first release of Drupal 9 is that deprecated code is removed. You just need to check that your themes, modules, and profiles don’t use any such code. So, there’s no need to worry about migrating your content at all!

Want to know more about Drupal and its migration process?

Sep 09 2019
Sep 09

The node_list cache tag is added automatically when we create a view that displays node entities in Drupal 8. This cache tag invalidates the cache of all views that list any kind of node (page, article, ...) when we perform a CUD (create, update, delete) action on any node.

This is a very good cache invalidation strategy: when we modify a node through a CUD action, the cache of every view that displays nodes will be invalidated to reflect this change. Yes, the improvements in the new Drupal 8 Cache API are amazing! Thanks to all the contributors (like Wim Leers) who made this possible.

node_list cache tag views drupal9

(to see the cache tags in your header response, just enable your settings.local.php)

So far so good, but... what would happen if we had a high-traffic website with hundreds of different node bundles and hundreds of views displaying different kinds of nodes?

In that case, if we modify a node (let's say a page), the cache of every view listing page nodes (but also of views listing article nodes and other node bundles) will be invalidated. This means that if we modify a page node, the cache of all views displaying article nodes will also be invalidated. This can be a serious performance issue, especially when we have a lot of updates, as on a newspaper website.

How can we invalidate the view caches in a more specific way, let's say only for nodes of the same type?

To do so, we'll use a two steps process:

- Place a custom cache tag on all views displaying the same node type, like node:type:page for all views listing page nodes, node:type:article for views listing article nodes, and so on...
- Invalidate these custom cache tags when a CUD operation is performed on a specific node type with a hook_node_presave() 

Add custom cache tags with the Views Custom Cache Tags contrib module

Luckily, to solve the node_list cache tag problem, the community made a contrib module that lets us place custom cache tags in our views: the Views Custom Cache Tags module. It allows us to define custom cache tags for any view we build.

In this case, we are going to place in our views a custom cache tag for each type of node we are displaying:
node:type:article for views listing articles
node:type:page for views listing pages
node:type:<your custom node type> ....

First, we are going to download and install the module.

# Download the module with composer
composer require drupal/views_custom_cache_tag
# Install the module with Drupal Console
drupal moi views_custom_cache_tag 

Then we create two block views, one listing 5 articles (name it Block Articles) and one listing 5 pages (name it Block Pages). Next, we place the custom cache tag node:type:article in the view that lists articles (Block Articles). We do the same, but with another custom cache tag, node:type:page, in the view listing pages (Block Pages).

Let's see how to do it for the view (block) listing articles:

1. Edit the view
2. Go to ADVANCED and click on 'Tag based' in Caching

custom cache tags views drupal8

3. Then click on Custom Tag based

custom cache tags views drupal 8

4. Insert the custom cache tag for this node type. In our case, as we are listing articles, we introduce node:type:article. For views listing other kind of nodes, we'll introduce node:type:<node-type>. Don't forget to click on Apply when you're done.

custom cache tags views drupal8

5. Save the view

custom cache tags views Drupal 8

When we have placed custom cache tags in all views listing nodes, we can now move to the second step, invalidate these tags when needed.

Invalidate custom cache tags with a hook_node_presave

Now we need to invalidate these custom cache tags when a CUD action is performed on a specific node type thanks to the hook_node_presave.

To do that, we're going to create a module. You can download the code of this module here.

Let's first create our module with Drupal Console as follow:

drupal generate:module  --module="Invalidate custom cache tags" --machine-name="kb_invalidate_custom_cache_tags" --module-path="modules/custom" --description="Example to invalidate views custom cache tags" --core="8.x" --package="Custom"  --module-file --dependencies="views_custom_cache_tag"  --no-interaction

Then we enable the module, we can do it also with Drupal Console:

drupal moi kb_invalidate_custom_cache_tags

Now we can edit our kb_invalidate_custom_cache_tags.module and place the following hook:

// For hook_ENTITY_TYPE_presave.
use Drupal\Core\Cache\Cache;
use Drupal\Core\Entity\EntityInterface;

/**
 * Implements hook_ENTITY_TYPE_presave().
 *
 * Invalid cache tags for node lists.
 */
function kb_invalidate_custom_cache_tags_node_presave(EntityInterface $entity) {
  $cache_tag = 'node:type:' . $entity->getType();
  Cache::invalidateTags([$cache_tag]);
}

Yes, we still have hooks in Drupal 8... This hook is fired each time a node is created or updated. We first retrieve the node type (page, article, ...) with $entity->getType() and build the $cache_tag variable from it; this variable corresponds to our custom tag for this kind of node. Next, we mark this cache tag as invalid in all bins with Cache::invalidateTags([$cache_tag]);, so the cache of every view carrying this custom cache tag is invalidated.

In this case, if we insert or update a node of type article, the views custom cache tag will be node:type:article and the cache of all views with this custom cache tag will be invalidated. The cache of the views with other custom cache tags like node:type:page will remain valid. This was just what we were looking for! Thank you Drupal!
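
Note that hook_ENTITY_TYPE_presave() only fires when a node is created or updated. If you also want the delete part of CUD to clear these caches, you could likely add a matching delete hook to the same module. A minimal sketch, reusing the use statements already at the top of the .module file:

/**
 * Implements hook_ENTITY_TYPE_delete().
 *
 * Invalidate the per-bundle cache tag when a node is deleted.
 */
function kb_invalidate_custom_cache_tags_node_delete(EntityInterface $entity) {
  Cache::invalidateTags(['node:type:' . $entity->getType()]);
}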

Recap.

In order to avoid invalidating the cache of all node views when we make a CUD operation on a node, we replace the general node_list cache tag with a more specific custom cache tag like node:type:<node-type>.

Thanks to the Views Custom Cache Tags contrib module, we can now insert custom cache tags in our views based on the node type listed in the view, like node:type:article, node:type:page, and so on.

Next, we mark these custom cache tags as invalid when we insert or update a specific node type, thanks to the hook_ENTITY_TYPE_presave hook we've placed in our custom module.

Voilà! If you have another strategy for dealing with the node_list problem, please share it with us in the comments.

Sep 09 2019
Sep 09

The staff and board of the Drupal Association would like to congratulate our newest At-Large board member:

Leslie Glynn

Leslie has more than 30 years of experience in the tech field as a software developer and project manager. She has been a freelance Drupal Project Manager and Site Builder since 2012. Glynn is very active in the Drupal community as an event organizer (Design 4 Drupal, Boston and NEDCamp), sprint organizer, mentor, trainer and volunteer. She is the winner of the 2019 Aaron Winborn Award. This annual award recognizes an individual who demonstrates personal integrity, kindness, and above-and-beyond commitment to the Drupal community.

Being a volunteer at numerous Drupal camps and DrupalCons has given me the opportunity to meet and learn from many diverse members of the Drupal community. I hope to bring that knowledge and experience to my work on Drupal Association initiatives. One of the things I would like to help with is growing Drupal adoption through new initiatives that reach out to under-represented and diverse groups through an increased presence at secondary schools and universities and to groups such as "Girls Who Code" and other groups in the tech space.

We are all looking forward to working with you, Leslie.

Thank you to all our candidates

On behalf of all the staff and board of the Drupal Association, and I’m sure the rest of the Drupal community, I would like to thank all of those people who stood for election this year. It truly is a big commitment to contribution and one to be applauded. We wish you well for 2019 and hope to see you back in 2020!

About the Elections Methodology: Instant Run-off Voting (IRV)

Elections for the Community-at-large positions on the Drupal Association Board are conducted through Instant Run-off Voting. This means that voters can rank candidates according to their preference. When tabulating ballots, the voters' top-ranked choices are considered first. If no candidate has more than 50% of the vote, the candidate with the lowest votes is eliminated. Then the ballots are tabulated again, with all the ballots that had the eliminated candidate as their first rank now recalculated with their second rank choices. This process is repeated until only two candidates remain and a clear winner can be determined. This voting method helps to ensure that the candidate who is most preferred by the most number of voters is ultimately elected. You can learn more about IRV (also known as Alternative Vote) in this video.

Detailed Voting Results

There were 12 candidates in contention for the single vacancy among the two community-at-large seats on the Board. 1,050 voters cast their ballots out of a pool of 49,498 eligible voters (2.2%).

The full results output is below. The system allows for candidates to keep their name hidden, if they choose, so we replaced the names of those who did with a candidate number:

The number of voters is 1050 and there were 998 valid votes
and 52 empty votes. Removed withdrawn candidate Tushar Thatikonda from 
the ballots.

Counting votes using Instant Runoff Voting.

 R|Candi|Candi|Imre |Brian|Candi|Shada|Ahmad|Candi|Alann|Manji|Lesli|Exhau
  |date |date |Gmeli| Gilb|date |b Ash| Khal|date |a Bur|t Sin|e Gly|sted 
  |4    |3    |g Mei|ert  |2    |raf  |il   |1    |ke   |gh   |nn   |     
  |     |     |jling|     |     |     |     |     |     |     |     |     
  |     |     |     |     |     |     |     |     |     |     |     |     
==========================================================================
 1|   71|   74|  166|  119|   36|   45|    7|  115|   67|  116|  182|    0
  |-----------------------------------------------------------------------
  | Count of first choices.
==========================================================================
 2|   71|   75|  167|  120|   36|   46|     |  116|   67|  117|  183|    0
  |-----------------------------------------------------------------------
  | Count after eliminating Ahmad Khalil and transferring votes.
==========================================================================
 3|   72|   76|  177|  124|     |   47|     |  118|   68|  117|  185|   14
  |-----------------------------------------------------------------------
  | Count after eliminating Candidate 2  and transferring votes.
==========================================================================
 4|   74|   76|  178|  125|     |     |     |  132|   70|  130|  186|   27
  |-----------------------------------------------------------------------
  | Count after eliminating Shadab Ashraf and transferring votes.
==========================================================================
 5|   89|   77|  183|  133|     |     |     |  142|     |  131|  211|   32
  |-----------------------------------------------------------------------
  | Count after eliminating Alanna Burke and transferring votes.
==========================================================================
 6|   93|     |  192|  134|     |     |     |  151|     |  134|  217|   77
  |-----------------------------------------------------------------------
  | Count after eliminating Candidate 3 and transferring votes.
==========================================================================
 7|     |     |  199|  149|     |     |     |  177|     |  136|  248|   89
  |-----------------------------------------------------------------------
  | Count after eliminating Candidate 4 and transferring votes.
==========================================================================
 8|     |     |  208|  163|     |     |     |  228|     |     |  254|  145
  |-----------------------------------------------------------------------
  | Count after eliminating Manjit Singh and transferring votes.
==========================================================================
 9|     |     |  239|     |     |     |     |  247|     |     |  296|  216
  |-----------------------------------------------------------------------
  | Count after eliminating Brian Gilbert and transferring votes.
==========================================================================
10|     |     |     |     |     |     |     |  288|     |     |  359|  351
  |-----------------------------------------------------------------------
  | Count after eliminating Imre Gmelig Meijling and transferring votes.
  | Final round is between Candidate 1 and Leslie Glynn.
  | Candidate Leslie Glynn is elected.

Winner is Leslie Glynn.
Sep 09 2019
Sep 09

We’re back with an overview of the blog posts we wrote last month. If there are some you particularly enjoyed, this is the perfect opportunity to revisit them, as well as catch up on the ones you might have missed.

Recap of Acquia's webinar on the Digital Experience Platform

The first post we wrote in August is a recap of Acquia’s webinar on the DXP (Digital Experience Platform), which was presented by Tom Wentworth, SVP of Product Marketing at Acquia, and Justin Emond, CEO of Third and Grove.

They talked about digital experiences in general, then explained what a DXP is, why an open approach is best for a DXP, and how Acquia can serve as the basis for an open DXP.

The high emphasis placed on digital experiences is due to the fact that a single negative one can do irreparable damage to a brand. It is thus important to deliver integrated experiences on a platform that’s future-ready. 

As the only truly open DXP, Acquia’s Open Experience Platform is likely the best choice, as integrations with future technologies will be easier due to this open nature.

Read more

Interview with Ricardo Amaro: The future is open, the future is community and inclusion

Our second post is part of the series of our Drupal Community Interviews. This one features a prominent and prolific member of the community - Ricardo Amaro, Principal Site Reliability Engineer at Acquia and an active member of the Portuguese as well as the broader Drupal communities.

Ricardo has been involved in numerous important projects and initiatives, ranging from more technical endeavors such as Docker and containers, to more community-oriented things such as the Promote Drupal initiative.

Apart from that, he has presented at Drupal events and participated in the organization of several of them in Portugal as the president of the Portuguese Drupal Association.

He is also a strong advocate for Free Software and encourages collaboration with other projects in the ecosystem. He strives to keep the future of the web and technology in general open and rich in possibilities.

Read more

Top 10 Drupal Accessibility Modules

Even though Drupal is already quite optimized for accessibility, it never hurts to have even more resources at one’s disposal. This was our reasoning behind researching Drupal’s available accessibility modules and putting together this list. 

The modules on the list touch different aspects of accessibility and take into account everyone who interacts with the site in any way: there are modules for developers building the site, those for admins and content editors, and those that are geared towards users of the site (e.g. the Fluidproject UI Options module).

Some of the modules have particularly interesting functionality. Namely, the a11y module provides support for simulating specific disabilities, which helps developers feel empathy for users with these disabilities. The htmLawed module can also be especially useful, as it improves both accessibility and security.

Read more

Interview with pinball wizard Greg Dunlap, Senior Digital Strategist at Lullabot

Next up, we have another community interview, this one with pinball enthusiast Greg Dunlap, Lullabot’s Senior Digital Strategist. Interestingly, his first interaction with Drupal was with Lullabot, the company he’s now working for more than 10 years later!

Greg points out that it was actually Lullabot’s Jeff Eaton who gave him the push to start contributing, and the two became really good friends. He believes (and we agree!) that who you do something with is more important than what you do - very fitting, then, that he and Jeff now form Lullabot’s strategy team.

One of the things he has particularly enjoyed recently was working with the Drupal Diversity and Inclusion group. Since welcoming diverse backgrounds and viewpoints into the community is instrumental to the future of Drupal, he encourages anyone who’s interested to join the initiative.

Read more

Agiledrop recognized as a top Drupal development company by TopDevelopers.co

Our final post from August is a bit more company oriented. In a press release published in early August, the IT directory and review platform TopDevelopers.co listed us among the top 10 Drupal development companies of August 2019.

Of course, we’re very happy with the recognition and, with our diverse contribution to the Drupalverse and the numerous successful client projects, we feel it is well deserved. 

Among the reasons for selecting us, the spokesperson at TopDevelopers.co listed the super fast integration of development teams into clients’ teams, our clear and frequent communication with clients, and our adherence to strict coding and security standards. 

To learn more about our work, you can also check out our portfolio of references and case studies, as well as our profile page on TopDevelopers.co, which their team helped us build.

Read more

These were all our blog posts from August. We'll be back again next month with an overview of September's posts. Till then - enjoy!

Sep 09 2019
Sep 09

Several modules and plugins can alter the display of a view in Drupal, for instance, to alternate the order of image and text on every new row, or to build some kind of stacked layout.

It is possible to alter the display of a view with just a few lines of CSS code instead. This approach has several advantages, the most relevant one being that you don't have to install and update yet another module.

Keep reading to learn how!

Step #1. - The Proposed Layout

190828 theming views

As you can see, we can divide the layout into six columns and five rows. Some cells in the grid are empty, while other cells contain one view item spanning several cells (a grid area). The Drupal view shows a list of articles and their titles. The view format is an unformatted list.

Step #2. - Create an Image Style

To ensure that all images are square, you need to configure an image style and set it as the display style in the view.

  • Click Configuration > Image styles
    190828 theming views 001
  • Click Add image style
    190828 theming views 002
  • Give the new image style a proper name, for example, Crop 600x600.

It is always a good idea to include some reference to the dimensions or proportions of the image style. That helps when you have multiple image styles configured.

  • Click Create new style
  • Select Crop from the dropdown
  • Click Add
    190828 theming views 003
  • Set height and width for the crop effect (make sure both dimensions are equal)
  • Leave the default crop anchor at the center of the image
  • Click Add effect
    190828 theming views 004
  • Make sure the image effect was recorded properly and click Save
    190828 theming views 005
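If you export your site configuration, the resulting image style ends up in a YAML file roughly like the one below. This is only a sketch: the file name, the placeholder UUID, and the 600x600 dimensions are assumptions based on the example name used above.

# Hypothetical export: config/sync/image.style.crop_600x600.yml
langcode: en
status: true
dependencies: {  }
name: crop_600x600
label: 'Crop 600x600'
effects:
  # Effects are keyed by a UUID that Drupal generates on save; this one is a placeholder.
  11111111-2222-3333-4444-555555555555:
    uuid: 11111111-2222-3333-4444-555555555555
    id: image_crop
    weight: 1
    data:
      width: 600
      height: 600
      anchor: center-center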

Step #3. - Create the View

You can read more about rewriting results in Views here.

  • Save the view
  • Click Structure > Block layout
  • Scroll down to the Content section
  • Click Place block
    190828 theming views 013
  • Search for your block
  • Click Place block
  • Uncheck Display title
  • Click Save block
    190828 theming views 014
  • Drag the cross handle and place the block above the Main content
  • Scroll down and click Save blocks

Step #4. - Theming the View

There are three selectors you need to target to apply the layout styles to the view:

  • .gallery-item  (each content card will be a grid)

  • #block-views-block-front-gallery-block-1 .view-content

  • #block-views-block-front-gallery-block-1 .views-row

We scope the selectors to the block to increase the specificity of the CSS styles. The classes .view-content and .views-row are default Views classes; targeting only these generic classes would break the layout of other views on the site, for example, the teaser view on the front page.

Hint: I am working on a local development environment with a subtheme of Bartik. There is much more about Drupal theming at OSTraining here.
If you don’t know how to create a subtheme in Drupal yet, and you are working on a sandbox installation, just add the code at the end of the following file and remember to always clear the cache afterwards:

/core/themes/bartik/css/layout.css 
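If you do prefer a minimal subtheme instead of editing Bartik's own CSS, an info file along these lines would be enough to get started. The machine name mytheme and the global-styling library are placeholders; the library itself would be defined in a matching mytheme.libraries.yml file pointing at the CSS used in this tutorial.

# Hypothetical file: themes/custom/mytheme/mytheme.info.yml
name: My Gallery Theme
type: theme
description: 'Bartik subtheme holding the CSS Grid styles for the view.'
core: 8.x
base theme: bartik
libraries:
  - mytheme/global-styling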

Let’s start with the content inside the .gallery-item container. It will be a grid with one column and 4 rows. The image will cover all 4 rows, whereas the title text will be located on the last row. To center the title on its cell, we declare the link tag as a grid container too.

  • Edit the CSS code:
.gallery-item {
    display: grid;
    grid-template-rows: repeat(4, 1fr);
}
 
.gallery-item a:first-of-type {
    grid-row: 1 / span 4;
    grid-column: 1;
}
 
.gallery-item a:last-of-type {
   grid-row: 4;
   grid-column: 1;
   display: grid; /* Acting as a grid container */
   align-content: center;
   justify-content: center;
   background-color: rgba(112, 97, 97, 0.5);
   color: white;
   font-size: 1.2em;
}

Next, make the images responsive.

  • Edit the CSS code:
img {
   display: block;
   max-width: 100%;
   height: auto;
}

As already stated, we need a grid with 5 rows and 6 columns. After declaring it, map every position in the grid according to the layout with an area name. The empty cells/areas will be represented with a period. 

  • Edit the CSS code:
#block-views-block-front-gallery-block-1 .view-content {
 display: grid;
 grid-template-columns: repeat(6, 1fr);
 grid-template-rows: repeat(5, 1fr);
 grid-gap: 0.75em;
 grid-template-areas:
 ". thumb1 main main main thumb2"
 ". thumb3 main main main thumb4"
 ". thumb5 main main main thumb6"
 "secondary secondary thumb7 thumb8 thumb9 ."
 "secondary secondary . . . .";
 max-width: 70vw;
 margin: 0 auto;
}

Now it’s time to assign each grid item to its corresponding region.

  • Edit the CSS code:
#block-views-block-front-gallery-block-1 .view-content > .views-row:nth-of-type(1) {
   grid-area: main;
}
 
#block-views-block-front-gallery-block-1 .view-content > .views-row:nth-of-type(2) {
   grid-area: secondary;
}
 
#block-views-block-front-gallery-block-1 .view-content > .views-row:nth-of-type(3) {
   grid-area: thumb1;
}
#block-views-block-front-gallery-block-1 .view-content > .views-row:nth-of-type(4) {
   grid-area: thumb3;
}
#block-views-block-front-gallery-block-1 .view-content > .views-row:nth-of-type(5) {
   grid-area: thumb5;
}
#block-views-block-front-gallery-block-1 .view-content > .views-row:nth-of-type(6) {
   grid-area: thumb2;
}
#block-views-block-front-gallery-block-1 .view-content > .views-row:nth-of-type(7) {
   grid-area: thumb4;
}
#block-views-block-front-gallery-block-1 .view-content > .views-row:nth-of-type(8) {
   grid-area: thumb6;
}
#block-views-block-front-gallery-block-1 .view-content > .views-row:nth-of-type(9) {
   grid-area: thumb7;
}
 
#block-views-block-front-gallery-block-1 .view-content > .views-row:nth-of-type(10) {
   grid-area: thumb8;
}
#block-views-block-front-gallery-block-1 .view-content > .views-row:nth-of-type(11) {
   grid-area: thumb9;
}

I think this is a practical way to lay out Views items the way you want without needing to install extra modules, which could unnecessarily affect the performance of your site.

The Media Queries

The layout will break at around 970px, because of the font size. 

  • Edit the CSS code:
@media screen and (max-width: 970px) {
 .views-row > div .gallery-item > a:nth-child(2) {
   font-size: .9em;
 }
}

To change the layout, just add a media query with a new grid-template-areas distribution; of course, we also have to change the way the rows and columns are distributed. The items are already assigned to their respective areas.

  • Edit the CSS code:
@media screen and (max-width: 700px) {
 .view-content {
   grid-template-columns: repeat(2, 1fr);
   grid-template-rows: repeat(10, auto);
   grid-template-areas:
     "main main"
     "main main"
     "thumb1 thumb2"
     "thumb3 thumb4"
     "secondary secondary"
     "secondary secondary"
     "thumb5 thumb6"
     "thumb7 thumb8"
     "thumb9 thumb9"
     "thumb9 thumb9";
 }
}

This layout will work even with the smallest device screen.

I hope you liked this tutorial. Thanks for reading!


About the author

Jorge has lived in Ecuador and Germany. Now he is back in his homeland, Colombia. He spends his time translating from English and German into Spanish. He enjoys playing with Drupal and other open-source content management systems and technologies.
Sep 09 2019
Sep 09

3 minute read Published: 9 Sep, 2019 Author: Colan Schwartz
Drupal Planet , Aegir , DevOps

Have you been looking for a self-hosted solution for hosting and managing Drupal sites? Would you like to be able to upgrade all of your sites at once with a single button click? Are you tired of dealing with all of the proprietary Drupal hosting providers that won’t let you customize your set-up? Wouldn’t it be nice if all of your sites had free automatically-updating HTTPS certificates? You probably know that Aegir can do all of this, but it’s now trivial to set up a temporary trial instance to see how it works.

The new Aegir Development VM makes this possible.

History

Throughout Aegir’s history, we’ve had several projects striving to achieve the same goal. They’re listed in the Contributed Projects section of the documentation.

Aegir Up

Aegir Up was based on a VirtualBox virtual machine (VM), managed by Vagrant and provisioned with Puppet. It was superseded by Valkyrie (see below).

Aegir Development Environment

Aegir Development Environment took a completely different approach using Docker. It assembles all of the services (each one in a container, e.g. the MySQL database) into a system managed by Docker Compose. While this is a novel approach, it’s not necessary to have multiple containers to get a basic Aegir instance up and running.

Valkyrie

Valkyrie was similar to Aegir Up, but provisioning moved from Puppet to Ansible. Valkyrie also made extensive use of custom Drush commands to simplify development.

Its focus was more on developing Drupal sites than on developing Aegir. Now that we have Lando, it’s no longer necessary to include this type of functionality.

It was superseded by the now current Aegir Development VM.

Present

Like Valkyrie, the Aegir Development VM is based on a VirtualBox VM (but that’s not the only option; see below) managed with Vagrant and provisioned with Ansible. However, it doesn’t rely on custom Drush commands.

Features

Customizable configuration

The Aegir Development VM configuration is very easy to customize as Ansible variables are used throughout.

For example, if you’d like to use Nginx instead of Apache, simply replace:

    aegir_http_service_type: apache

…with:

    aegir_http_service_type: nginx

…or override using the command line.

You can also install and enable additional Aegir modules from the available set.
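If you prefer not to edit the variables file at all, Ansible's standard --extra-vars (-e) flag can override a single variable at run time. This is a sketch that assumes you are invoking the project's playbook directly with ansible-playbook; the playbook file name is a placeholder, so check the project's README for the exact invocation.

# Override the HTTP service type for a single run without touching any files.
ansible-playbook aegir.yml -e "aegir_http_service_type=nginx"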

Support for remote VMs

For those folks with older hardware who are unable to spare extra gigabytes (GB) for VMs, it’s possible to set up the VM remotely.

While the default amount of RAM necessary is 1 GB, 2 GB would be better for any serious work, and 4 GB is necessary if creating platforms directly from Packagist.

Support for DigitalOcean is included, but other IaaS providers (e.g. OpenStack) can be added later. Patches welcome!

Fully qualified domain name (FQDN) not required

While Aegir can quickly be installed with a small number of commands in the Quick Start Guide, that process requires an FQDN, usually something like aegir.example.com (which requires global DNS configuration). That is not the case with the Dev VM, which assumes aegir.local by default.
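Since aegir.local is not a real DNS name, you would typically map it to the VM's private address in your local hosts file so the browser can reach it. The IP below is just an example; use whatever address Vagrant assigns to your VM.

# /etc/hosts (example entry; adjust the IP to match your VM)
192.168.99.99   aegir.local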

Simplified development

You can use it for Aegir development as well as trying Aegir!

Unlike the default set-up provisioned by the Quick Start Guide, which would require additional configuration, the individual components (e.g. Hosting, Provision, etc.) are cloned repositories, making it easy to create patches (and, for module maintainers, to push changes upstream).

Conclusion

We’ve recently updated the project so that an up-to-date VM is being used, and it’s now ready for general use. Please go ahead and try it.

If you run into any problems, feel free to create issues on the issue board and/or submit merge requests.

The article Try Aegir now with the new Dev VM first appeared on the Consensus Enterprises blog.

We've disabled blog comments to prevent spam, but if you have questions or comments about this post, get in touch!

Sep 07 2019
Sep 07

In the previous article, we began talking about debugging Drupal migrations. We gave some recommendations of things to do before diving deep into debugging. We also introduced the log process plugin. Today, we are going to show how to use the Migrate Devel module and the debug process plugin. Then we will give some guidelines on using a real debugger like XDebug. Next, we will share tips so you get used to migration errors. Finally, we are going to briefly talk about the migrate:fields-source Drush command. Let’s get started.

Example configuration for debug process plugin

The migrate_devel module

The Migrate Devel module is very helpful for debugging migrations. It allows you to visualize the data as it is received from the source, the result of field transformation in the process pipeline, and values that are stored in the destination. It works by adding extra options to Drush commands. When these options are used, you will see more output in the terminal with details on how rows are being processed.

As of this writing, you will need to apply a patch to use this module. Migrate Devel was originally written for Drush 8, which is still supported but no longer recommended; instead, you should use at least version 9 of Drush. Between versions 8 and 9 there were major changes in Drush internals, and commands need to be updated to work with the new version. Unfortunately, the Migrate Devel module is not fully compatible with Drush 9 yet. Most of the benefits listed on the project page have not been ported. For instance, automatically reverting the migrations and applying the changes to the migration files are not yet available. The partial support is still useful, and to get it you need to apply the patch from this issue. If you are using the Drush commands provided by Migrate Plus, you will also want to apply this patch. If you are using the Drupal composer template, you can add this to your composer.json to apply both patches:

"extra": {
  "patches": {
    "drupal/migrate_devel": {
      "drush 9 support": "https://www.drupal.org/files/issues/2018-10-08/migrate_devel-drush9-2938677-6.patch"
    },
    "drupal/migrate_tools": {
      "--limit option": "https://www.drupal.org/files/issues/2019-08-19/3024399-55.patch"
    }
  }
}

With the patches applied and the modules installed, you will get two new command line options for the migrate:import command: --migrate-debug and --migrate-debug-pre. The major difference between them is that the latter runs before the destination is saved. Therefore, --migrate-debug-pre does not provide debug information of the destination.

Using either of the flags will produce a lot of debug information for each row being processed. Many times, analyzing a subset of the records is enough to spot potential issues. The patch to Migrate Tools will allow you to use the --limit and --idlist options with the migrate:import command to limit the number of elements to process.

To demonstrate the output generated by the module, let’s use the image migration from the CSV source example. You can get the code at https://github.com/dinarcon/ud_migrations. The following snippets show how to execute the import command with the extra debugging options and the resulting output:

# Import only one element.
$ drush migrate:import udm_csv_source_image --migrate-debug --limit=1

# Use the row's unique identifier to limit which element to import.
$ drush migrate:import udm_csv_source_image --migrate-debug --idlist="P01"
$ drush migrate:import udm_csv_source_image --migrate-debug --limit=1
┌──────────────────────────────────────────────────────────────────────────────┐
│                                   $Source                                    │
└──────────────────────────────────────────────────────────────────────────────┘
array (10) [
    'photo_id' => string (3) "P01"
    'photo_url' => string (74) "https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg"
    'path' => string (76) "modules/custom/ud_migrations/ud_migrations_csv_source/sources/udm_photos.csv"
    'ids' => array (1) [
        string (8) "photo_id"
    ]
    'header_offset' => NULL
    'fields' => array (2) [
        array (2) [
            'name' => string (8) "photo_id"
            'label' => string (8) "Photo ID"
        ]
        array (2) [
            'name' => string (9) "photo_url"
            'label' => string (9) "Photo URL"
        ]
    ]
    'delimiter' => string (1) ","
    'enclosure' => string (1) """
    'escape' => string (1) "\"
    'plugin' => string (3) "csv"
]
┌──────────────────────────────────────────────────────────────────────────────┐
│                                 $Destination                                 │
└──────────────────────────────────────────────────────────────────────────────┘
array (4) [
    'psf_destination_filename' => string (25) "picture-15-1421176712.jpg"
    'psf_destination_full_path' => string (25) "picture-15-1421176712.jpg"
    'psf_source_image_path' => string (74) "https://agaric.coop/sites/default/files/pictures/picture-15-1421176712.jpg"
    'uri' => string (29) "./picture-15-1421176712_6.jpg"
]
┌──────────────────────────────────────────────────────────────────────────────┐
│                             $DestinationIDValues                             │
└──────────────────────────────────────────────────────────────────────────────┘
array (1) [
    string (1) "3"
]
════════════════════════════════════════════════════════════════════════════════
Called from +56 /var/www/drupalvm/drupal/web/modules/contrib/migrate_devel/src/EventSubscriber/MigrationEventSubscriber.php
 [notice] Processed 1 item (1 created, 0 updated, 0 failed, 0 ignored) - done with 'udm_csv_source_image'

In the terminal, you can see the data as it is passed along in the Migrate API. In $Source, you can see how the source plugin was configured and the different columns for the row being processed. In $Destination, you can see all the fields that were mapped in the process section and their values after executing all the process plugin transformations. In $DestinationIDValues, you can see the unique identifier of the destination entity that was created. This migration created an image, so the destination array has only one element: the file ID (fid). For paragraphs, which are revisioned entities, you will get two values: the id and the revision_id. The following snippet shows the $Destination and $DestinationIDValues sections for the paragraph migration in the same example module:

$ drush migrate:import udm_csv_source_paragraph --migrate-debug --limit=1
┌──────────────────────────────────────────────────────────────────────────────┐
│                                   $Source                                    │
└──────────────────────────────────────────────────────────────────────────────┘
Omitted.
┌──────────────────────────────────────────────────────────────────────────────┐
│                                 $Destination                                 │
└──────────────────────────────────────────────────────────────────────────────┘
array (3) [
    'field_ud_book_paragraph_title' => string (32) "The definitive guide to Drupal 7"
    'field_ud_book_paragraph_author' => string UTF-8 (24) "Benjamin Melançon et al."
    'type' => string (17) "ud_book_paragraph"
]
┌──────────────────────────────────────────────────────────────────────────────┐
│                             $DestinationIDValues                             │
└──────────────────────────────────────────────────────────────────────────────┘
array (2) [
    'id' => string (1) "3"
    'revision_id' => string (1) "7"
]
════════════════════════════════════════════════════════════════════════════════
Called from +56 /var/www/drupalvm/drupal/web/modules/contrib/migrate_devel/src/EventSubscriber/MigrationEventSubscriber.php
 [notice] Processed 1 item (1 created, 0 updated, 0 failed, 0 ignored) - done with 'udm_csv_source_paragraph'

The debug process plugin

The Migrate Devel module also provides a new process plugin called debug. The plugin works by printing the value it receives to the terminal. As Benji Fisher explains in this issue, the debug plugin offers the following advantages over the log plugin provided by the core Migrate API:

  • The use of print_r() handles both arrays and scalar values gracefully.
  • It is easy to differentiate debugging code that should be removed from logging plugin configuration that should stay.
  • It saves time as there is no need to run the migrate:messages command to read the logged values.

In short, you can use the debug plugin in place of log. There is a particular case where using debug is really useful: when used in the middle of a process plugin chain, you can see how elements are being transformed at each step. The following snippet shows an example of this setup and the output it produces:

field_tags:
  - plugin: skip_on_empty
    source: src_fruit_list
    method: process
    message: 'No fruit_list listed.'
  - plugin: debug
    label: 'Step 1: Value received from the source plugin: '
  - plugin: explode
    delimiter: ','
  - plugin: debug
    label: 'Step 2: Exploded taxonomy term names '
    multiple: true
  - plugin: callback
    callable: trim
  - plugin: debug
    label: 'Step 3: Trimmed taxonomy term names '
  - plugin: entity_generate
    entity_type: taxonomy_term
    value_key: name
    bundle_key: vid
    bundle: tags
  - plugin: debug
    label: 'Step 4: Generated taxonomy term IDs '
$ drush migrate:import udm_config_entity_lookup_entity_generate_node --limit=1
Step 1: Value received from the source plugin: Apple, Pear, Banana
Step 2: Exploded taxonomy term names Array
(
    [0] => Apple
    [1] =>  Pear
    [2] =>  Banana
)
Step 3: Trimmed taxonomy term names Array
(
    [0] => Apple
    [1] => Pear
    [2] => Banana
)
Step 4: Generated taxonomy term IDs Array
(
    [0] => 2
    [1] => 3
    [2] => 7
)
 [notice] Processed 1 item (1 created, 0 updated, 0 failed, 0 ignored) - done with 'udm_config_entity_lookup_entity_generate_node'

The process pipeline is part of the node migration from the entity_generate plugin example. In the code snippet, a debug step is added after each plugin in the chain. That way, you can verify that the transformations are happening as expected. In the last step you get an array of the taxonomy term IDs (tid) that will be associated with the field_tags field. Note that this plugin accepts two optional parameters:

  • label is a string to print before the debug output. It can be used to give context to what is being printed.
  • multiple is a boolean that when set to true signals the next plugin in the pipeline to process each element of an array individually. The functionality is similar to the multiple_values plugin provided by Migrate Plus.

Using the right tool for the job: a debugger

Many migration issues can be solved by following the recommendations from the previous article and the tools provided by Migrate Devel. But there are problems so complex that you need a full-blown debugger. The many layers of abstraction in Drupal, and the fact that multiple modules might be involved in a single migration, makes the use of debuggers very appealing. With them, you can step through each line of code across multiple files and see how each variable changes over time.

In the next article, we will explain how to configure XDebug to work with PHPStorm and DrupalVM. For now, let’s consider good places to add breakpoints. In this article, Lucas Hedding recommends adding them in:

  • The import method of the MigrateExecutable class.
  • The processRow method of the MigrateExecutable class.
  • The process plugin if you know which one might be causing an issue. The transform method is a good place to set the breakpoint.

The use of a debugger is no guarantee that you will find the solution to your issue. It will depend on many factors, including your familiarity with the system and how deep the problem lies. Previous debugging experience, even if not directly related to migrations, will help a lot. Do not get discouraged if it takes you a long time to discover what is causing the problem, or if you cannot find it at all. Each time, you will gain a better understanding of the system.

Adam Globus-Hoenich, a migrate maintainer, once told me that the Migrate API "is impossible to understand for people that are not migrate maintainers." That was after spending about an hour together trying to debug an issue and failing to make it work. I mention this not with the intention to discourage you, but to illustrate that no single person knows everything about the Migrate API and that even its maintainers can have a hard time debugging issues. Personally, I have spent countless hours in the debugger tracking how the data flows from the source to the destination entities. It is mind-blowing, and I barely understand what is going on. The community has come together to produce a fantastic piece of software. Anyone who uses the Migrate API is standing on the shoulders of giants.

If it is not broken, break it on purpose

One of the best ways to reduce the time you spend debugging an issue is having experience with a similar problem. A great way to learn is by finding a working example and breaking it on purpose. This will let you get familiar with the requirements and assumptions made by the system and the errors it produces.

Throughout the series, we have created many examples. We have made our best effort to explain how each example works, but we were not able to document every detail in the articles, partly to keep them within a reasonable length and partly because we do not fully comprehend the system ourselves. In any case, we highly encourage you to take the examples and break them in every imaginable way. Make one change at a time, see how the migration behaves, and note what errors are produced. These are some things to try (the first one is sketched right after the list):

  • Do not leave a space after a colon (:) when setting a configuration option. Example: id:this_is_going_to_be_fun.
  • Change the indentation of plugin definitions.
  • Try to use a plugin provided by a contributed module that is not enabled.
  • Do not set a required plugin configuration option.
  • Leave out a full section like source, process, or destination.
  • Mix the upper and lowercase letters in configuration options, variables, pseudofields, etc.
  • Try to convert a migration managed as code to configuration; and vice versa.
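As an illustration of the first suggestion, here is a hypothetical migration fragment with the missing space after the colon. YAML will most likely fail to parse it, because id:this_is_going_to_be_fun is read as a bare scalar rather than a key/value pair, and the resulting error message is a good one to learn to recognize.

# Hypothetical, intentionally broken fragment: note the missing space after "id:".
id:this_is_going_to_be_fun
label: 'Deliberately broken example'
source:
  plugin: embedded_data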

The migrate:fields-source Drush command

Before wrapping up the discussion on debugging migrations, let’s quickly cover the migrate:fields-source Drush command. It lists all the fields available in the source that can be used later in the process section. Many source plugins require that you manually set the list of fields to fetch from the source. Because of this, the information provided by this command is redundant most of the time. However, it is particularly useful with CSV source migrations. The CSV plugin automatically includes all the columns in the file. Executing this command will let you know which columns are available. For example, running drush migrate:fields-source udm_csv_source_node produces the following output in the terminal:

$ drush migrate:fields-source udm_csv_source_node
 -------------- -------------
  Machine Name   Description
 -------------- -------------
  unique_id      unique_id
  name           name
  photo_file     photo_file
  book_ref       book_ref
 -------------- -------------

The migration is part of the CSV source example. By running the command you can see that the file contains four columns. The values under "Machine Name" are the ones you are going to use for field mappings in the process section. The Drush command has a --format option that lets you change the format of the output. Execute drush migrate:fields-source --help to get a list of valid formats.

What did you learn in today’s blog post? Have you ever used the migrate devel module for debugging purposes? What is your strategy when using a debugger like XDebug? Any debugging tips that have been useful to you? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Sep 06 2019
Sep 06

Our team has always been engaged and hands-on when it comes to web accessibility and inclusion. Through learning, teaching, auditing, remediating or supporting others doing the same, providing information access to all is at the core of how we run our business. Inclusivity, equality, and global access is still very much a work in progress for humanity. Any step that Hook 42 can take towards improving inclusion and access, you bet we’re going to do it. This year’s Bay Area Drupal Camp (BADCamp) is no exception.

This camp is packed with content covering accessibility topics. Aimee and Lindsey are leading a web accessibility training session on Thursday. On Friday and Saturday, there are four sessions that touch on accessible best practices. Thank you to each of the presenters for broadening the audience for one of our favorite subjects. We hear you, and we are so happy to join in on the discussion!

Web Accessibility Sessions at BADCamp

Your Code is Terrible!, presented by Jayson Jaynes, will cover the topic of semantic HTML. In the talk, Jayson will explain its importance, how to best practice it, and how developers benefit from understanding it. The best part? Jayson will explore some tools that make creating semantic code in Drupal easier and how to utilize your new tools to ensure accessibility compliance.

It's a Bird... It's a Plane... It's ¯\_(ツ)_/¯? Using Machine Learning to Meet Accessibility Requirements, presented by Danny Teng and Neha Bhomia, will explore a new world where manual input for alt tags becomes a thing of the past. The pair will explore how machine learning can be leveraged to take care of the tedious task of alt text generation. 

Shine a Light on Me, presented by Paul Sheldrake, will give a broad overview of Lighthouse, a Chrome extension that checks pages not only for accessibility compliance but also for site performance and SEO concerns. Paul will cover the basics of why it’s important to run these scans, and how Lighthouse can help make that process a little better for everyone.

SVG Magic!, by Anthony Horn, will talk about the glory of SVGs. There is more to an SVG than putting a scalable vector on the web, and Anthony will walk through exactly what that entails, from animation to accessibility compliance and everything in between.

Web Accessibility Training at BADCamp

Our accessibility training session at BADCamp, Web Accessibility Training, will take place on Thursday from 8:00 am to 5:00 pm. We hope you’re ready for a deep-dive into all things inclusion on the web. 

Aimee and Lindsey will cover as much as they can squeeze into one full-day crash course. The course starts with accessibility laws and principles and works through design, content, media, code, and testing tools. We cover the topic with a broad but deep approach so that all attendees gain the most exposure to such a big topic.

At the end of the day, our goal is to make sure everyone becomes an advocate for web accessibility! We hope you’ll gain a better understanding of where your organization stands with accessibility, what specific role you’ll play in ensuring your websites stay compliant, and how you can show others where to go in order to apply accessible best practices in their areas of expertise.

It’s going to be a busy camp, and we are thrilled to be part of such a hot topic in the web community. If you want to chat further about accessibility, you can always stop by our booth while you’re there to pick our brains. We’re always in the mood to talk accessibility!

Sep 06 2019
Sep 06

To get started, you will need a Drupal 8 site. If you don’t have one, you can create a free Drupal 8 development site on Pantheon. Once you have your Drupal 8 site installed, make sure you have some Article content. You will also need to enable the JSON:API module.

API Module
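If you prefer the command line over the Extend page, a quick way to enable it with Drush (assuming Drush is available on your Drupal site) is:

# JSON:API ships with Drupal core as of 8.7; this enables it and rebuilds caches.
drush en jsonapi -y
drush cr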

That is it for setup on the Drupal site. The next step is to install the Gatsby Source Drupal plugin.

npm install --save gatsby-source-drupal

Or

yarn add gatsby-source-drupal

Next open the gatsby-config.js file so we can configure the plugin. Add the following code to the Gatsby config.

Gatsby Drupal Config
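Since the configuration is shown above as a screenshot, here is a minimal sketch of what that gatsby-config.js entry typically looks like. The baseUrl is a placeholder for your own Drupal site's address, and apiBase only needs to be set if you changed the JSON:API path prefix on the Drupal side.

// gatsby-config.js (sketch; replace baseUrl with your Drupal site's URL)
module.exports = {
  plugins: [
    {
      resolve: `gatsby-source-drupal`,
      options: {
        baseUrl: `https://dev-my-drupal-site.pantheonsite.io`,
        // Defaults to `jsonapi`, so this line is optional.
        apiBase: `jsonapi`,
      },
    },
  ],
};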

If you re-run your Gatsby development server, you will be able to open the GraphiQL explorer at http://localhost:8000/___graphql and explore the data the Drupal database is providing. In this case, I am filtering by the article content type. You can use the Explorer panel on the left to build the query and the play button at the top to run the query.

GraphQL Explorer
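If you want something concrete to paste into GraphiQL, a query along these lines (assuming the default Article content type, which gatsby-source-drupal exposes as allNodeArticle) should list your articles and their paths:

{
  allNodeArticle {
    nodes {
      title
      path {
        alias
      }
    }
  }
}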

Now that you know your Gatsby site can see your Drupal data, we need to consume this data during the Gatsby build process and create pages. In this case, we want to create a page on our Gatsby site for each Article. We want to make sure we use the path that was configured in Drupal as the path to our Article. Anytime we want to create pages during the Gatsby build process, we can do so in the gatsby-node.js file. Go ahead and open that file up.

Add the following code to that file:

const path = require('path');
 
exports.createPages = async ({ actions, graphql }) => {
 const { createPage } = actions;
 
 const articles = await graphql(`
   {
     allNodeArticle {
       nodes {
         id
         title
         path {
           alias
         }
       }
     }
   }
 `);
 
 articles.data.allNodeArticle.nodes.map(articleData =>
   createPage({
     path: articleData.path.alias,
     component: path.resolve(`src/templates/article.js`),
     context: {
       ArticleId: articleData.id,
     },
   })
 );
}

This code does a few things. First it uses GraphQL to pull in the id and path of all the Articles from your Drupal site. It then loops through this list and calls Gatsby’s createPage method which will create a Gatsby page for this Article. We make sure we pass in the correct path and a template (which we still need to create). We also pass in the Article id as the context. You will see why this is important in a few minutes.

Create the src/templates folder if it doesn’t already exist, then create a file called article.js. Add the following code to this new article.js file:

import React from 'react';
import PropTypes from 'prop-types';
import { graphql } from 'gatsby';
 
import Layout from '../components/layout';
 
const Article = ({ data }) => {
 const post = data.nodeArticle;
 
 return (
   <Layout>
     <h1>{post.title}</h1>
     <img 
        src={data.nodeArticle.relationships.field_image.localFile.publicURL}
        alt={data.nodeArticle.field_image.alt}
      />
 
     <div
       dangerouslySetInnerHTML={{ __html: post.body.processed }}
     />
   </Layout>
 );
};
 
Article.propTypes = {
 data: PropTypes.object.isRequired,
};
 
export const query = graphql`
 query($ArticleId: String!) {
   nodeArticle(id: { eq: $ArticleId }) {
     id
     title
     body {
       processed
     }
     field_image {
        alt
      }
     relationships {
        field_image {
          localFile {
            publicURL
          }
        }
      }
 
   }
 }
`;
 
export default Article;

This might seem a bit confusing at first. The first section contains the React component called Article. It receives a data prop which contains the Article data, and it outputs the title and the body text in a Layout component.

The second part contains the propTypes. This just says that our Article component will receive a prop called data that will be an object and it will be required. This is just a way to validate our props.

The third part is where it gets a bit more confusing. As you already know, we ran one query in the gatsby-node.js file to get the data, but here we are also running a page query. When using Drupal as a backend, it’s useful for each template to run its own page query so it can build the page in a self-contained manner. This is especially important when you want to implement live preview (which we will cover in the future). This query takes the id and loads additional Article data such as the title and the body field. Notice that the relationships field has to be used to pull in the actual image.

Another thing to note here is that this is not the best way to pull in images. It’s recommended to use the Gatsby Image component, but that makes the GraphQL a little more complicated so we will revisit that in more depth in the future.

If you shut down and re-run your Gatsby development server, it should create the article pages at the same path they were created on your Drupal site. There is no listing page (another thing we will fix in the future), but you can manually paste one of the known paths into the address bar to see your Article content on your Gatsby Site!

Gatsby Article Page

Sep 06 2019
Sep 06

I'm excited to share that when Drupal 8.8 drops in December, Drupal's WYSIWYG editor will allow media embedding.

You may wonder: Why is that worth announcing on your blog? It's just one new button in my WYSIWYG editor.

It's a big deal because Drupal's media management has been going through a decade-long transformation. The addition of WYSIWYG integration completes the final milestone. You can read more about it on Wim's blog post.

Drupal 8.8 should ship with complete media management, which is fantastic news for site builders and content authors who have long wanted a simpler way to embed media in Drupal.

Congratulations to the Media Initiative team for this significant achievement!

September 06, 2019

Sep 06 2019
Sep 06

 


Today, IT security is paramount to succeeding in business, and enterprises are spending more on security than ever before. Progress in both security and hacking technologies, such as intrusion detection systems, honeypots, honeynets, and various other security-related hardware and software solutions, showcases the pressing need for transformation in the information security domain.

One of the reports by Gartner cited that enterprises in India alone are going to spend heavily on the information security front, with spending expected to reach up to US$2 billion in 2020.

The increasing awareness of the benefits of risk assessment and the realization that security is one of the driving forces for digital transformation are boosting enterprise security globally.

The battle between open-source and proprietary software has been raging for a long time. Multiple issues and concerns are being examined and scrutinized by both sides of the story. In the most recent phase of this fanatical dispute, both camps have inspected the issue of security with serious tenacity.

Having said that, let’s take a sneak peek into this blog for further insights on the same.

Myths Are Meant to Be Debunked

“Proprietary software is more secure than open-source software.” This myth comes from many prejudices, but a commercial license doesn’t assure security. Unlike proprietary software, open-source software is transparent about potential vulnerabilities.

Myth #1: Anyone can view the code

Because it is open source, anyone can view the code. People often want to argue that being able to view the code allows nefarious hackers to look at it and exploit vulnerabilities.

However, this openness enables collaboration. Unlike, say, a proprietary product, which is developed and maintained by a single company, Drupal is developed and maintained by more than one hundred thousand programmers around the world. These programmers might work for companies that compete with each other, or they might volunteer to create something new that’s then given away. For free.


In fact, in 2015 Google open-sourced its artificial intelligence engine, TensorFlow, something which is a core part of its business. It hoped more developers would make the software better as they adapted it to their own needs. And it did: by making it open source, Google boasts that more than 1,300 developers outside Google have worked on TensorFlow, making it one of the standard frameworks for developing AI applications, which could bolster its cloud-hosted AI services.

Myth #2: Proprietary software is secure and not prone to attacks

There have been multiple instances in the past showing that proprietary software has been attacked several times. Such as:

Melissa virus and ILoveYou worm - both spread through malicious email attachments. If the victim’s system had the Microsoft Outlook application installed, the virus would send the email on to contacts in the Outlook program’s address book (the first 50 in Melissa’s case, all of them in ILoveYou’s). ILoveYou would also overwrite and consequently destroy various types of files on the victim’s device, including MP3 files, JPEG files, and more. The outbreak led Microsoft to shut down its inbound email system.

WannaCry - a worldwide cyberattack that took place in 2017. It was a ransomware cryptoworm attack aimed at computers running the Windows operating system, encrypting all the files on the hard drives of these machines. It didn’t let users access the files until they paid a ransom in the cryptocurrency Bitcoin.

The WannaCry attack impacted major entities all over the world, such as the National Health Service in Britain and Scotland, the University of Montreal in Canada, State Government websites in India, and Russian Railways.

With that said, it's evident that proprietary software is just as vulnerable to attacks!

Although countermeasures like anti-virus programs and security patches were implemented to mitigate the threats and weaknesses, the long-term and especially exorbitant effects of these dangers have been permanently engraved in the memories of people all over the world. This is because these attacks not only damaged vital electronic data but also shut down business operations and services, and facilitated malicious infiltration and theft of money and proprietary information.

History of Open source Software

The term “open-source”, popular since its inception in the late 70s and early 80s, came out of the “open-source revolution”, which completely revamped the way software is developed, resulting in the birth of the community-generated software development method.


In 1971, Richard Stallman, a young software engineer from Harvard, joined the MIT Artificial Intelligence Lab with the intent of developing computing platforms. After he had served there for a few years, in the early 1980s the MIT lab withered as proprietary software boomed in the market, and it lost its talented developers to privately held tech companies.

Stallman, who was closely involved in the field and knew customers’ software requirements, believed customers should be empowered enough to fix and debug the software themselves instead of simply operating it.

“Users should be empowered enough to fix and debug the software themselves-instead of simply operating it”

The majority of software until now was controlled in its entirety by the developer where individual user rights were completely discarded. This was also a pain point for MIT AI Lab since they failed to incorporate this feature into their software development strategies.

The Embarkation of the Free Software Movement

But this was only until 1984. After evaluating his options, Stallman began his GNU Project. Starting with a compiler, GCC, and a new operating system, Stallman felt that the GNU Project was the major turning point in the evolution of the free software community.

“The Free Software Foundation was formulated to let users run the software as they wanted”

Stallman believed that software should be free in terms of accessibility. Hence, the Free Software Foundation (FSF) was formed so that users could run, modify, update, and disseminate software in the community.

Later on, he also introduced the concept of copyleft, wherein a program is first copyrighted, and then additional distribution terms are added for its further use.

Challenges Associated With Proprietary CMS 

A proprietary CMS comes with a set of restrictions which makes it less flexible in comparison to open-source software. 

“The contribution and development teams of proprietary CMSs are smaller, which makes it evident that there is a probability of missing out on mistakes and bugs in the code”

It might appear that closed-source or proprietary software is more secure since the code is not available. But unfortunately, that is not the case! The contribution and development teams of proprietary CMSs are smaller, which makes it more likely that mistakes and bugs in the code are missed.

You might not know what issues the proprietary system has had in the past, or is having currently, because the provider of the proprietary CMS isn’t going to voluntarily reveal this information. This is a major drawback for proprietary CMS users in terms of security as well.

Let’s further see the challenges associated with proprietary CMS-

Not many customization options

Since these proprietary CMSs are developed for a specific kind of industry and audience, it is difficult to customize the website to fit people's exact needs. Users are not building their own system, so it's obvious that they will have limited flexibility options.

Portability is beyond the bounds of possibility

Users don’t have an option to extract data and files out of their system with a proprietary solution. They are quite restricted, because they won’t even be able to move their website from one hosting service to another.

“Several CMS vendors don’t upgrade their platforms, so it's better to do a bit of research first and then jump onto doing business with a vendor”

You don’t have any option other than trusting the company blindly

Since the company owns the platform and the storage space your website will be built upon, you’ll have to place a lot of trust in your vendor. They will have to continuously develop and refine their software to handle their consumers’ needs better. The vendor should also be within reach whenever you need assistance with your website.

Several CMS vendors don’t upgrade their platforms, so it's better to do a bit of research before jumping into doing business with a vendor.

You are just renting software

Even if you have bought the proprietary CMS, you won’t own the code it's built with. It is not yours, and the vendor therefore charges a monthly rent to keep your website running.

Benefits of Open-source Software

“People in the open-source community come forward to find solutions, assist each other, and to share extensions that would benefit the masses”

  • It is open-source!

This implies that the source code is available for anyone who wishes to study it, analyze it, and modify it in any way.

Thanks to this feature that people can easily extend the code and add specific functionalities as per their requirements.

  • An open-source CMS is maintained by a large community

There is always a primary group of developers, as with WordPress, but the CMS is also supported by its user base. People in the open-source community come forward to find solutions, assist each other, and share extensions that benefit the masses.

Source: Sas.com

  • An open-source CMS can be hosted ubiquitously

Most of them, like Drupal, offer one-click installs in the control panel of the accompanying hosting service, which again is very user-friendly and convenient.

  • The CMS software itself is usually free of cost

You can easily make use of plenty of extensions, themes, and a variety of tools for free. However, there are plenty of paid extensions and themes as well, and some solutions can only be leveraged with paid software. Still, an open-source CMS is usually the most budget-friendly solution.

Alternatives to Proprietary Software

It is interesting to see that there are so many open-source alternatives to existing proprietary software that are equally or even more reliable, secure, and flexible. 

If you are contemplating migrating from proprietary software to open source, you surely can - and with ease!

Software Category    | Proprietary Software          | Equivalent Open-source Software
-------------------- | ----------------------------- | --------------------------------------
Operating System     | Microsoft Windows             | Linux (Ubuntu)
Browser              | Internet Explorer             | Mozilla Firefox
Office automation    | Microsoft Office              | OpenOffice
Numerical computing  | MATLAB (MathWorks)            | Scilab
Graphics tool        | Adobe Photoshop               | GIMP (GNU Image Manipulation Program)
Drafting tool        | AutoCAD                       | Archimedes
Web editors          | Adobe Dreamweaver             | NVU
Desktop publishing   | Adobe Acrobat                 | PDFCreator
Blogs                | Blogger                       | WordPress
Mobile               | iOS                           | Android
Media player         | Windows Media Player          | VLC Player
Database             | Oracle, Microsoft SQL Server  | MySQL, MongoDB, Hadoop
Server               | Microsoft Windows Server      | Red Hat Server, Ubuntu Server
Web server           | IIS                           | Apache


Open-source Security in Drupal

Drupal, with a proven track record of being the most secure CMS, has been rolling with the punches against critical internet vulnerabilities. Thanks go to the Drupal security team for earnestly finding anomalies, validating them, and responding to security issues.

The responsibilities of the security team include documenting these findings and the changes made, so that developers don't get the heebie-jeebies when faced with a similar situation.

“Drupal community comprises of over 100,000 contributors towards its enhancement”

Besides this, the team also assists the infrastructure team in keeping the Drupal.org infrastructure secure. They ensure that any security issues in code hosted on Drupal.org are reported, reviewed, and solved in the shortest period possible.

Important features that make Drupal 8 the best WCMS with regard to security:

  • The Security Working Group (SecWG) ensures that Drupal core and Drupal’s contributed project ecosystem provide a secure platform and that security best practices are followed.
  • The community makes sure that people are notified the day patches are released; patches come out every Wednesday for contributed projects, and the third Wednesday of every month for core, usually at a fixed time.
  • Drupal abides by the OWASP (Open Web Application Security Project) standards, and its community is devoted to the prevention of any security breaches.
  • The Drupal community comprises over 100,000 contributors working towards its enhancement. It is an open-source code base where contributed modules are properly reviewed and verified, and a notification is sent out when a module is acceptable for use.
  • Apart from encrypting and hashing passwords, Drupal provides modules that can support two-step authentication and SSL certificates.
  • Any member can make changes to Drupal modules and report any issues or bugs that occur in their system.
  • The access controls offered by Drupal are a superb feature. Dedicated accounts can be created for certain user roles with specified permissions. For instance, you can create separate user accounts for Admin and Editor.
  • Its multibranched cache structure assists in reducing Denial of Service (DoS) attacks and makes it the best CMS for some of the world’s highest-traffic websites like NASA, the University of Oxford, the Grammys, Pfizer, etc.

Statistics Say It All

Sucuri, a security platform for websites, curated the “Hacked Website Report 2018”, which evaluated more than 34,000 compromised websites. Among the statistics it shared was a comparison of the affected open-source CMS applications.

(Chart from Sucuri’s Hacked Website Report 2018)

The results were clearly in Drupal’s favor, showing it to be a better WCMS than other leading platforms at preventing security hazards.

The infections crept into these websites due to improper deployment, configuration, and maintenance.

Additionally, Cloud Security Report by Alert Logic also marked Drupal as the website content management system with the least number of web application attacks.

Source: Alert Logic

Difference Between Open-source and Proprietary Software

Cost

  • Open-source: Open-source software is free, which makes it an alluring option if you have the in-house capacity to meet your business requirements.
  • Proprietary: Proprietary software costs anywhere from a couple of thousand dollars to one hundred thousand dollars, depending upon the complexity of the framework needed.

Service and support

  • Open-source: Open-source communities of developers are huge and steadfast, which helps clients get prompt solutions to their problems.
  • Proprietary: Proprietary software vendors offer ongoing support to clients - a key selling point for clients without technical expertise.

Innovation

  • Open-source: Open-source software boosts innovation by giving users the opportunity to modify, extend, or distribute it as per their requirements.
  • Proprietary: Proprietary software vendors don’t permit users to view or adjust the source code, making it unfit for organizations that desire scalability and flexibility. Only the vendor's developers can incorporate new features into the product, as and when requested by users.

Security

  • Open-source: As open-source code is available to everybody, it increases the chances of vulnerabilities being found more easily. It is also worth noting that open-source communities fix security vulnerabilities twice as quickly as commercial software vendors do.
  • Proprietary: Proprietary software is considered secure as it is developed in a controlled environment by employees working in a common direction. However, ruling out the possibility of backdoor Trojans and lowering the threat of other bugs or obstacles can be troublesome in proprietary software.

Availability

  • Open-source: Open-source software is available for free on the web, with 24x7 support from the community.
  • Proprietary: Proprietary software is accessible only if the company has the rights to the bundle or has purchased it from the respective vendor. A trial version is often available for free to test.

Flexibility

  • Open-source: As organizations aim at deriving more business value from less, open-source software can deliver high flexibility, lower IT costs, and increased opportunities for innovation.
  • Proprietary: With proprietary software, such as Microsoft Windows and Office, companies are required to upgrade both software and hardware on a timely basis, and updates must be installed for proper working. However, not all updates are compatible with all versions of the software.

In The End

Website security has always been a hindrance in the journey of digital transformation and survival, due to several potential threats. 

Open-source software can be considered a more befitting solution than closed-source or proprietary software. Further, this report indicates that there is an obvious desire among companies to adopt open-source technology and also to prioritize the task of enhancing security in their organizations.

Source: Gartner

However, it all depends on the preferences and needs of the organization and the on-going project for their digital business.

Drupal, an open-source content management framework, comes out as the most secure CMS in comparison to the leading players in the market.

It has been the pacesetter when it comes to opting for a security-focused CMS. More individuals working on and reviewing the product always means a higher chance of a secure product!

Sep 06 2019
Sep 06

Diagnosing a website for accessibility and fixing any issues that are uncovered is not a one-size-fits-all endeavor. 

Every site has a distinct set of strengths and challenges. Plus, site owners vary widely in their level of expertise and the availability of resources that can be devoted to accessibility -- which includes diagnosing the site, making the necessary modifications, and ensuring that the tools and expertise are in place to maintain compliance. 

That’s why flexibility is an essential criterion when seeking ADA Section 508 accessibility solutions.

Another key: a consultative approach. Generally speaking, developers and content editors aren’t hired for their knowledge of WCAG 2.1, and for most organizations, this expertise is not mission critical. Tapping the expertise of a Certified Professional in Accessibility Core Competencies (CPACC) is the most efficient and effective path for bringing a site into compliance.

For organizations that partner with Promet Source to transition their websites into compliance, the process consists of a series of value-added steps in a specific order.  

The following graphic depicts the essential steps involved in an Accessibility Audit, in which Promet reviews all facets of a website’s accessibility relative to WCAG 2.1 and consults with site owners on remediation. 

A circular graphic that indicates the six steps in a Promet Source Accessibility Audit. 1. PA11Y Setup 2. PA11Y Remediation 3. Round 1 Manual Audit 4. Round 1 Remediation 5. Round 2 Manual Audit 6. Final Statements

PA11Y Setup

A11Y is an abbreviation for accessibility, with the number 11 representing the number of letters between the first and last letter. PA11Y is an automated testing tool that scans web pages to detect inaccessibility. While automated testing is an essential component of the accessibility audit process, it cannot be counted on to be comprehensive. 

On average, automated testing detects approximately 30 percent of a site’s accessibility errors. The errors detected by automated testing tend to be the “low-hanging fruit” found within global elements across the site, as well as logins, landing pages, and representative page templates.
 

PA11Y Remediation

What sets Promet apart following this initial, automated testing phase is a high degree of consultation, along with a list of custom code fixes for bringing the site into compliance. Additionally, for a year following the audit, clients have the advantage of a dashboard that serves as a tool from which pages can be scanned and red-flagged for accessibility errors.

It’s also important to point out that at the onset of the audit process, it might not be clear what remediation will entail. For any number of reasons, clients who initially intended to manage the remediation in-house might opt for a different approach once they gain an understanding of the scope of work involved.
 

Round 1 Manual Audit 

The manual audit does not occur until all of the issues flagged by the PA11Y scan are fixed. This process is facilitated by the customized code fixes that Promet provides, along with a dashboard that provides a roadmap of sorts for tracking progression and red flagging issues that need to be fixed.  

As mentioned above, the PA11Y scan cannot be counted on to detect all of the accessibility errors on a site. Manual testing is required to root out the deeper errors, which are the issues that have a greater tendency to expose site owners to legal liability. Manual testing includes:

  • Keyboard testing,
  • Color contrast testing,
  • Screen reader testing,
  • Link testing,
  • Tables and forms testing,
  • Cognitive testing, and 
  • Mobile testing.

If a site is revealed to be unresponsive, this finding can result in a recommendation to not move forward with remediation. Another potential remediation deal breaker: a mobile site that is not consistent in terms of content and functionality with the desktop site, as a mobile site is required to have the same content as its desktop counterpart.

It’s important to note that a strong accessibility initiative has been built into Drupal 8, and that will continue to be the case for Drupal 9 and subsequent updates. At this point, we have found Drupal to be the best CMS in terms of accessibility.
 

Round 1 Remediation

Promet is in close consultation with clients during the manual audit and walks through every component of the success criteria before the client moves forward with Round 1 Remediation.

A customized plan is created that varies according to the depth and breadth of remediation work required, as well as in-house expertise and available resources. Depending on client needs, the plan can incorporate various levels of consultation and either online or in-person training.

Working closely with both content editors and developers, the training focuses on the required remediation steps, as well as how to write code that’s accessible. Ensuring the accessibility of PDFs is another key area of focus.  

The remediation dashboard serves as an essential tool during and following Round 1 remediation. The dashboard flags errors and issues warnings which then need to be manually reviewed and addressed.
 

Round 2 Manual Audit

The Round 2 Audit represents the final review, along with ongoing consultation concerning any remediation challenges that have proven to be complex, and best practices for maintaining compliance. The Round 2 Audit won’t begin until all errors reported in the Round 1 Audit have been remediated to 0 errors.
 

Final Statements

Once all recommended remediation has been completed and verified, final statements are prepared. The final statements provide official language that the audit and remediation are complete. A final Statement of Accessibility and Statement of Work Completed will be provided. Optimally, a complete Statement of Conformance is issued, but in instances where the site links to third-party vendors (which is often the case) and the vendor sites are not accessible, a Statement of Partial Conformance is issued, along with an explanation of the site owner’s good-faith efforts. 

It is recommended that instances of inaccessibility be reported to third parties that are linked to the site. Often the result is ongoing remediation work and, ultimately, a comprehensive Statement of Conformance.
 

Moving Forward

Without exception, Promet clients report a high degree of added value during and following an accessibility audit. The education, consultation, and opportunity to dig deep and deconstruct aspects of a site that no longer serve the organizational mission fuel a better and wiser team of developers and content editors. Plus, the dashboard that remains in place for a full year is an essential resource for staying on track.

In the current climate, websites are highly dynamic and serve as the primary point of engagement for customers and constituents. Constantly evolving sites call for an ongoing focus on accessibility, and an acknowledgement that staff turnover can erode the education, expertise, and commitment to accessibility that is in place at the conclusion of an audit. For this reason, a bi-annual or annual audit, which can be viewed essentially as an accessibility refresh, is a highly recommended best practice. Interested in kicking off a conversation about auditing your site for accessibility? Contact us today.

Sep 05 2019
Sep 05

Throughout the series we have shown many examples. I do not recall any of them working on the first try. When working on Drupal migrations, it is often the case that things do not work right away. Today’s article is the first of a two-part series on debugging Drupal migrations. We start by giving some recommendations of things to do before diving deep into debugging. Then, we talk about migrate messages and present the log process plugin. Let’s get started.

Example configuration for log process plugin.

Minimizing the surface for errors

The Migrate API is a very powerful ETL framework that interacts with many systems provided by Drupal core and contributed modules. This adds layers of abstraction that can make the debugging process more complicated compared to other systems. For instance, if something fails with a remote JSON migration, the error might be produced in the Migrate API, the Entity API, the Migrate Plus module, the Migrate Tools module, or even the Guzzle HTTP Client library that fetches the file. For a more concrete example, while working on a recent article, I stumbled upon an issue that involved three modules. The problem was that when trying to roll back a CSV migration from the user interface, an exception would be thrown, making the operation fail. This is related to an issue in the core Migrate API that manifests itself when rollback operations are initiated from the interface provided by Migrate Plus. That issue then triggers a condition in the Migrate Source CSV plugin that fails, and the exception is thrown.

In general, you should aim to minimize the surface for errors. One way to do this is by starting the migration with the minimum possible setup. For example, if you are going to migrate nodes, start by configuring the source plugin, one field (the title), and the destination. When that works, keep migrating one field at a time. If the field has multiple subfields, you can even migrate one subfield at a time. Commit every bit of progress to version control so you can go back to a working state if things go wrong. Read this article for more recommendations on writing migrations. A minimal starting point might look like the sketch below.
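
To illustrate, here is a minimal sketch of such a starting point using the core embedded_data source plugin. The machine names, the single data row, and the field values are hypothetical placeholders rather than part of the example modules in this series:

id: udm_minimal_example
label: 'Minimal example migration'
source:
  plugin: embedded_data
  data_rows:
    - unique_id: 1
      title: 'First example node'
  ids:
    unique_id:
      type: integer
process:
  title: title
destination:
  plugin: 'entity:node'
  default_bundle: article

Once this imports correctly, add one field mapping at a time to the process section and re-import after each change.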

What to check first?

Debugging is a process that might involve many steps. There are a few things that you should check before diving too deep into trying to find the root of the problem. Let’s begin with making sure that changes to your migrations are properly detected by the system. One common question I see people ask is where to place the migration definition files. Should they go in the migrations or config/install directory of your custom module? The answer depends on whether you want to manage your migrations as code or as configuration. Your choice will determine the workflow to follow for changes in the migration files to take effect. Migrations managed in code go in the migrations directory and require rebuilding caches for changes to take effect. On the other hand, migrations managed in configuration are placed in the config/install directory and require configuration synchronization for changes to take effect. So, make sure to follow the right workflow.

After verifying that your changes are being applied, the next thing to do is verify that the modules providing your plugins are enabled and that the plugins themselves are properly configured. Look for typos in the configuration options. Always refer to the official documentation to know which options are available and to find their proper spelling. Other places to look are the code for the plugin definition or articles like the ones in this series documenting how to use them. Things to keep in mind include proper indentation of the configuration options. An extra whitespace or a wrong indentation level can break the migration. You can either get a fatal error or the migration can fail silently without producing the expected results. Something else to be mindful of is the version of the modules you are using, because the configuration options might change per version. For example, the newly released 8.x-3.x branch of Migrate Source CSV changed various configuration options as described in this change record. And the 8.x-5.x branch of Migrate Plus changed some configurations for plugins related to DOM manipulation as described in this change record. Keeping an eye on the issue queue and change records for the different modules you use is always a good idea.

If the problem persists, look for reports of similar problems in the issue queue. Make sure to include closed issues as well in case your problem has already been fixed or documented. Remember that a problem in one module can affect a different module. Another place to ask questions is the #migrate channel in the Drupal Slack. The support that is offered there is fantastic.

Migration messages

If nothing else has worked, it is time to investigate what is going wrong. In case the migration outputs an error or a stacktrace to the terminal, you can use that to search in the code base where the problem might originate. But if there is no output or if the output is not useful, the next thing to do is check the migration messages.

The Migrate API allows plugins to log messages to the database in case an error occurs. Not every plugin leverages this functionality, but it is always worth checking if a plugin in your migration wrote messages that could give you a hint of what went wrong. Some plugins like skip_on_empty and skip_row_if_not_set even expose a configuration option to specify messages to log. To check the migration messages use the following Drush command: drush migrate:messages [migration_id]. If you are managing migrations as configuration, the interface provided by Migrate Plus also exposes them.
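
For example, a process configuration along these lines (the field and column names are hypothetical) would log a custom message whenever a row is skipped because a required source column is empty:

title:
  plugin: skip_on_empty
  method: row
  source: src_title
  message: 'Row skipped because src_title is empty.'

The logged message then appears in the output of drush migrate:messages for that migration.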

Messages are logged separately per migration, even if you run multiple migrations at once. This could happen if you execute dependencies or use groups or tags. In those cases, errors might be produced in more than one migration. You will have to look at the messages for each of them individually.

Let’s consider the following example. In the source there is a field called src_decimal_number with values like 3.1415, 2.7182, and 1.4142. We need to separate each number into two components: the integer part (e.g., 3) and the decimal part (e.g., 1415). For this, we are going to use the explode process plugin. Errors will be purposely introduced to demonstrate the workflow of checking messages and updating migrations. The following example shows the process plugin configuration and the output produced by trying to import the migration:

# Source values: 3.1415, 2.7182, and 1.4142

psf_number_components:
  plugin: explode
  source: src_decimal_number
$ drush mim ud_migrations_debug
[notice] Processed 3 items (0 created, 0 updated, 3 failed, 0 ignored) - done with 'ud_migrations_debug'

In MigrateToolsCommands.php line 811:
ud_migrations_debug Migration - 3 failed.

The error produced in the console does not say much. Let’s see if any messages were logged using: drush migrate:messages ud_migrations_debug. In the previous example, the messages will look like this:

 ------------------- ------- --------------------
  Source IDs Hash    Level   Message
 ------------------- ------- --------------------
  7ad742e...732e755   1       delimiter is empty
  2d3ec2b...5e53703   1       delimiter is empty
  12a042f...1432a5f   1       delimiter is empty
 ------------------------------------------------

In this case, the migration messages are good enough to let us know what is wrong. The required delimiter configuration option was not set. When an error occurs, usually you need to perform at least three steps:

  • Rollback the migration. This will also clear the messages.
  • Make changes to the definition file and make sure they are applied. This will depend on whether you are managing the migrations as code or configuration.
  • Import the migration again.

Let’s say we performed these steps, but we got an error again. The following snippet shows the updated plugin configuration and the messages that were logged:

psf_number_components:
  plugin: explode
  source: src_decimal_number
  delimiter: '.'
 ------------------- ------- ------------------------------------
  Source IDs Hash    Level   Message
 ------------------- ------- ------------------------------------
  7ad742e...732e755   1       3.1415000000000002 is not a string
  2d3ec2b...5e53703   1       2.7181999999999999 is not a string
  12a042f...1432a5f   1       1.4141999999999999 is not a string
 ----------------------------------------------------------------

The new error occurs because the explode operation works on strings, but we are providing numbers. One way to fix this is to update the source to add quotes around the number so it is treated as a string. This is of course not ideal and many times not even possible. A better way to make it work is setting the strict option to false in the plugin configuration. This will make sure to cast the input value to a string before applying the explode operation. This demonstrates the importance of reading the plugin documentation to know which options are at your disposal. Of course, you can also have a look at the plugin code to see how it works.

Note: Sometimes an error produces a non-recoverable condition. The migration can be left in a status of "Importing" or "Reverting". Refer to this article to learn how to fix this condition.

The log process plugin

In the example, adding the extra configuration option will make the import operation finish without errors. But, how can you be sure the expected values are being produced? Not getting an error does not necessarily mean that the migration works as expected. It is possible that the transformations being applied do not yield the values we think or the format that Drupal expects. This is particularly true if you have complex process plugin chains. As a reminder, we want to separate a decimal number from the source like 3.1415 into its components: 3 and 1415.

The log process plugin can be used for checking the outcome of plugin transformations. This plugin offered by the core Migrate API does two things. First, it logs the value it receives to the messages table. Second, the value is returned unchanged so that it can be used in process chains. The following snippets show how to use the log plugin and what is stored in the messages table:

psf_number_components:
  - plugin: explode
    source: src_decimal_number
    delimiter: '.'
    strict: false
  - plugin: log
 ------------------- ------- --------
  Source IDs Hash    Level   Message
 ------------------- ------- --------
  7ad742e...732e755   1       3
  7ad742e...732e755   1       1415
  2d3ec2b...5e53703   1       2
  2d3ec2b...5e53703   1       7182
  12a042f...1432a5f   1       1
  12a042f...1432a5f   1       4142
 ------------------------------------

Because the explode plugin produces an array, each of the elements is logged individually. And sure enough, in the output you can see the numbers being separated as expected.

The log plugin can be used to verify that source values are being read properly and that process plugin chains produce the expected results. Use it as part of your debugging strategy, but make sure to remove it when you are done with the verifications. It makes the migration run slower because it has to write to the database. The overhead is not needed once you verify things are working as expected.

In the next article, we are going to cover the Migrate Devel module, the debug process plugin, recommendations for using a proper debugger like XDebug, and the migrate:fields-source Drush command.

What did you learn in today’s blog post? What workflow do you follow to debug a migration issue? Have you ever used the log process plugin for debugging purposes? If so, how did it help to solve the issue? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

Next: How to debug Drupal migrations - Part 2

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors: Drupalize.me by Osio Labs has online tutorials about migrations, among other topics, and Agaric provides migration trainings, among other services.  Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Sep 05 2019
Sep 05

tl;dr Are you an event organizer who wants to increase the number of speakers from marginalized and underrepresented groups at your event?  We're having an online training November 16 so you can learn how to hold your own Speaker Diversity Workshop in your local area! Get notifications for when registration opens.

One of the most common questions we get in the Drupal Diversity and Inclusion channel is from event organizers, asking how to increase the number of speakers at their events who are from marginalized and underrepresented groups.

We’re delighted to offer an online two-day Speaker Diversity Workshop, set for September 21 and 28, to help people from marginalized and underrepresented groups to prepare for submitting talks to events. We’ve had a tremendous response and still have room for more people to sign up. Register today!

To help event organizers even further, we are very excited to share that on Saturday, November 16, from 1-4 p.m. ET, we’ll be offering a Train the Trainers online workshop on how to run the Drupal Speaker Diversity Workshop at your local events. This will allow meetups and DrupalCamps to offer the workshop all around the world! Get notifications about this upcoming workshop by filling out this quick interest form.
 

The Drupal Diversity and Inclusion group has partnered with Jill Binder on this effort, as she successfully pioneered this approach within the WordPress community. By both offering direct speaker training workshops herself and teaching communities how to run these workshops, she’s been able to make a huge difference in the WordPress community since she began this outreach in 2018. So far the workshop has been run by twelve WordPress meetup groups in the US, Canada, Brazil, South Africa, and Venezuela.

All of the communities that held this workshop experienced a real change in the speaker roster for their annual conferences; many of their WordCamps went from having 10% women speakers to having 50% or more women speakers in less than a year. In 2017, Seattle had 60% women speakers and in 2018, Vancouver had 63%.

— “Want to See a More Diverse WordPress Contributor Community? So Do We.”, Automattic blog post

This workshop includes the following:

  • A three hour online workshop

  • PDFs with Dos and Don’ts for running an event that supports people from marginalized and underrepresented groups

If you’d like to take the Train the Trainers workshop, it’s best if you take the Drupal Speaker Diversity workshop as a participant first. Reminder that this will take place September 21 & 28, from 1–3:30 p.m. ET each day. Registration is open through September 20, so please register today!

Then, follow up with the online Train the Trainers Workshop on Saturday, November 16, from 1–4 p.m. ET. To let us know you are interested, please sign up for notifications.

We strongly encourage local meetup groups and Drupal Camp organizers to identify one or more people within their local community to attend both the Drupal Speaker Diversity Training as well as the Train the Trainers workshop. Having trainers who are themselves people from marginalized and underrepresented groups is a big help in connecting trainers with the people they will be working with in the workshop. Allies can assist the primary leaders of the workshop, so all folks are welcome to attend the Train the Trainers event. 

Marginalized? Underrepresented? Could you clarify that?

Some people have asked or wondered if these workshops are right for them, and what we mean when we say “people from marginalized and underrepresented groups”. On our Statement of Values page, we say the following:

There are many intersecting oppressions in society today. Some of them can make it difficult for people to take part in open source communities. We oppose excluding people due to racism, misogyny, homophobia, transphobia, ableism, Islamophobia, class, and BDSM or kink lifestyles, as a non-exhaustive list. We seek to amplify the voices of those affected by oppressions. We also want to create safer spaces in the Drupal community where individuals can work and grow.

If folks have felt tension around speaking due to an aspect of their identity that is underrepresented or marginalized in the Drupal community, we would love to see them at our Speaker Diversity Workshop. 

And if those tensions connect with you, we’d love for you to bring your experiences to learn how to lead a Speaker Diversity Workshop!

Sponsors

A huge thanks to our partner in this workshop, Pantheon: without their matching fund, this workshop would not be possible.

We also want to pass on a special thanks to our corporate sponsors, Lullabot and Kanopi Studios, who helped to kick off this fundraising drive, and who believed in this effort from the very start. And another big thanks to individual sponsors Dries Buytaert and Drew Griffiths.

And finally we want to thank all the other organizations and individuals who stepped up to make this possible. Thank you, thank you, thank you!

Our work is never done! We welcome one-time donations and ongoing sponsorships to help us do the work of diversity and inclusion in the Drupal community: donate today with Open Collective!

Sep 05 2019
Sep 05

We love to say that Drupal 8’s logo resembles the infinity sign, which means infinite opportunities for websites. It includes plenty of ways to make your website user-friendly and engaging. 

One of the techniques used in this area is infinite scrolling, which can be implemented through a nice Drupal 8 module called Views Infinite Scroll. Let’s see what infinite scrolling is and how to design it with the help of this Drupal 8 module. 

A glimpse at the technique: what is infinite scrolling? 

Infinite scrolling means continuous content uploading as the user scrolls down the page. This can optionally be accompanied by the “load more” button at the bottom of the page, which infinitely uploads the content upon click.

Endless pages make the user’s interaction with the website more natural because it is convenient to not have to click on the next page. This technique is especially popular with content-rich websites, social networks, e-commerce stores, etc. It is incredibly useful for long pages and mobile navigation. 

However, infinite scrolling should be used carefully so it does not annoy the user, distract their attention from their main goal, or block the calls-to-action. For example:

  • If the footer “disappears” together with your contacts every time your user scrolls, consider using a sticky footer or move the key links to the sidebar. 
  • Try the “load more” button to give the user more control and never block anything.
  • You can also add more usability by letting the user choose the number of displayed items before hitting “load more.”

All this and more is provided by the Views Infinite Scroll module in Drupal 8, which we will now move on to.

Introduction to the Views Infinite Scroll module in Drupal 8

The Drupal 8 Views Infinite Scroll module works with the Drupal Views. This allows you to present any Drupal data you wish — collections of images, articles, products, lists of user-profiles, or anything else.  

You can:

  • let the content be infinitely uploaded upon the scroll
  • add a “load more” button with any text on it
  • expose some viewing options to users

The Views Infinite Scroll module is available both for Drupal 7 and Drupal 8. However, the Drupal 8 version has a special bonus — it uses the built-in AJAX system of Drupal Views and requires no third-party libraries. In the next chapter, we will look more closely at how it works.

How the Views Infinite Scroll Drupal module works

Installing the module

We start by installing the Views Infinite Scroll Drupal module in any preferred way and enabling it. In our example, we are using its 8.x-1.5 version.

Installing the Views Infinite Scroll module

Creating the Drupal view

Now let’s prepare our Drupal view with a few elements in it. In our case, the grid view will show two columns of car images. 

When creating the page, we choose the “create a page” option. The default “Use a pager” and “10 items to display” settings can remain unchanged so far — we will take care of this in the next step.

Drupal 8 Views

Setting up the Drupal Views infinite scrolling

On the Drupal Views dashboard, we select the Pager section and open the “Use pager — mini” option.

Setting Drupal 8 Views pager to infinite scroll

There, we switch the pager to “Infinite Scroll.”

Setting Drupal 8 Views pager to infinite scroll

Next, we configure the settings for the Infinite Scroll. We set the number of items per page to 4.

And, most importantly, we can check or uncheck the automatic loading of content. If we uncheck it, there will be a “load more” button, for which we can write our custom text. “View more luxury cars” sounds good for this example.

Configuring infinite scroll in Drupal 8 Views

After saving the view and checking the page, we see 4 cars on the page with a nice “View more luxury cars” button. Success!

View more button in Drupal 8 Views
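
If you manage your views as exported configuration, the pager settings chosen above end up in the view’s YAML roughly as follows. This is a hedged sketch based on the module’s pager plugin; the exact keys may vary between versions:

pager:
  type: infinite_scroll
  options:
    items_per_page: 4
    offset: 0
    views_infinite_scroll:
      button_text: 'View more luxury cars'
      automatically_load_content: false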

Exposing choices to users

In our Infinite Scroll settings, there is the “Exposed options” section. By checking its options, you will allow users to:

  • choose the number of items displayed
  • see all items
  • specify the number of items skipped from the beginning

Exposed options in Drupal 8 infinite scroll

With these applied, our collection now looks like this.
Infinite scroll in Drupal 8 Views with exposed options

Additional CSS tweaks will make the view look exactly the way you want as far as colors, fonts, distances between elements, etc.

Apply infinite scrolling on your website

Make your customer satisfaction infinite through the use of the infinite scroll technique! So if you want to:

  • install and configure the Views Infinite Scroll module
  • customize the output with CSS
  • create a custom Drupal module for your ideas in scrolling
  • design the scrolling effect from scratch using the latest techniques

contact our Drupal team!

Sep 05 2019
Sep 05

The Drupal 8 Field Defaults module is a handy little module that allows you to bulk update the default values of a Drupal field. This is helpful if you have ever added a field to a content type or entity and wished you could have the default value apply to all the existing content on your Drupal site.

Download and install the Field Defaults module just like any other Drupal 8 module.

You can visit the configuration page by going to Admin > Configuration > System > Field Defaults settings. The only setting is the ability to retain the original entity updated time when default values are updated.

Field Default Module Settings

Navigate to a Content Type and go to Manage Fields. Edit one of the fields on the content type. In the screenshot below, I am editing a text field called Example. You will notice under the default value section there is a new fieldset called Update Existing Content. If you needed to change the default value and wanted it to apply to all of your existing content on the site, you would use the checkboxes to update the defaults.

Field Default Module Update Content

That’s it! There really is not a lot to it, but it’s useful when you are adding new fields to existing sites.

Sep 04 2019
Sep 04
Date: 2019-September-04
Description: 

In June of 2011, the Drupal Security Team issued Public Service Advisory PSA-2011-002 - External libraries and plugins.

Eight years later, that is still the policy of the Drupal Security Team. As Drupal core and modules leverage third-party code more and more, it seems like an important time to remind site owners that they are responsible for monitoring the security of third-party libraries. Here is the advice from 2011, which is even more relevant today:

Just like there's a need to diligently follow announcements and update contributed modules downloaded from Drupal.org, there's also a need to follow announcements by vendors of third-party libraries or plugins that are required by such modules.

Drupal's update module has no functionality to alert you to these announcements. The Drupal security team will not release announcements about security issues in external libraries and plugins.

Current PHPUnit/Mailchimp library exploit

Recently we have become aware of a vulnerability that is being actively exploited on some Drupal sites. The vulnerability is in PHPUnit and has a CVE# CVE-2017-9841. The exploit targets Drupal sites that currently or previously used the Mailchimp or Mailchimp commerce module and still have a vulnerable version of the file sites/all/libraries/mailchimp/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php. See below for details on whether a file is vulnerable or not. The vulnerable file might be at other paths on your individual site, but an automated attack exists that is looking for that specific path. This attack can execute PHP on the server.

Solution: 

Follow release announcements by the vendors of the external libraries and plugins you use.

In this specific case, check for the existence of a file named eval-stdin.php and check its contents. If they match the new version in this commit then it is safe. If the file reads from php://input then the codebase is vulnerable. This is not an indication of a site being compromised, just of it being vulnerable. To fix this vulnerability, update your libraries. In particular you should ensure the Mailchimp and Mailchimp Ecommerce modules and their libraries are updated.

If you discover your site has been compromised, we have a guide of how to remediate a compromised site.

Also see the Drupal core project page.

Sep 04 2019
Sep 04

Your website’s users are its dearest treasure. Drupal 8 offers everything to make your users happy and satisfied. They can publish content with ease, quickly find things through the robust search in Drupal 8, use their native language thanks to Drupal 8’s multilingual improvements, and so much more.

But let’s get to the beginning of their journey — user profiles. We will take a tour of building the structure of user profiles in Drupal 8.

First, let’s talk about some basics before moving on to a few interesting tweaks that make profiles richer and more engaging. These will involve new Drupal 8 core modules such as:

  • the Media Library to enrich profiles with multimedia
  • the Layout Builder to shape the profile layout with the handy drag-and-drop feature

And we will also use the not so new but always essential Views module that is part of the Drupal 8 core to help us display the needed data more precisely. Let’s begin!

Building user profiles in Drupal 8

1. Introduction: using fields to build user profiles

Users are fieldable entities in Drupal 8, just like nodes. This means you can build user profiles with any fields, and every account will have them.

These can be any fields imaginable — first name, last name, picture, email, link to the website, and so forth. They can be created in Configuration — People — Account settings — Manage fields with the use of the relevant field types.

Managing fields in Drupal 8 profiles

The order in which the fields appear to visitors can be set by drag-and-dropping them at “Manage display,” where you can also hide or show field labels and apply formatters.

Manage display to reorder fields in Drupal 8

Every field can be made required or optional on the field “Edit” page.

Making fields required or optional in Drupal 8
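
For those who export their site configuration, each user field becomes a small configuration entity. Below is a simplified, illustrative sketch of such an export for a hypothetical "First name" field; the file name and keys are approximate and trimmed for brevity:

# config/sync/field.field.user.user.field_first_name.yml
langcode: en
status: true
dependencies:
  config:
    - field.storage.user.field_first_name
id: user.user.field_first_name
field_name: field_first_name
entity_type: user
bundle: user
label: 'First name'
required: true
translatable: true
field_type: string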

A special group of field types is “Reference”. It allows you to connect to other entity types. With this, you can allow users to:

  • list other users of your website (e.g. “My mentors”)
  • select options from taxonomy vocabularies (e.g. countries, cities, or spoken languages)
  • list their favorite content from your website

and much more.

One interesting referenced entity use case comes next with the Media Library.

adding referenced entity field in Drupal 8

2. Making profiles richer with the Media Library

You can allow users to embed media of various types from Drupal 8’s Media Library into their profiles. This includes images, videos, audio, files, and remote videos from YouTube or Vimeo. For example, they can list their featured photos, favorite music videos, and so on.

The Media Library appeared as an experimental module in Drupal 8.6 for media handling. The new Media Library interface in Drupal 8.7 impressed even the experts with its stylish design and handy features.

Media Library in Drupal 8

For media embedding, the Media and the Media Library core modules need to be enabled. Then it’s necessary to set the Reference field type to “Media,” specify the allowed number of values and select the media type.

Media field settings in Drupal 8

Adding Media field in Drupal 8

With the settings shown in the screenshots, the user account will have a field for up to 5 featured photos that users can embed into their profiles directly from the Media Library.

Adding media from Media Library in Drupal 8

It’s great to know that in Drupal 8.8, there will be a media embed button added to the CKEditor dashboard.

3. Displaying the needed data in profiles via Views

More opportunities open up thanks to collections of entities, or Drupal Views. Remember, for example, the referenced entity field pointing to other users that we mentioned, such as the “My mentors” field?

referenced entity user field in Drupal 8

However, this just listed the mentors’ usernames on the user profile. What if we want the mentors’ pictures to be shown?

Views comes to the rescue! We can arrange the mentors’ photos as a view and attach it as a block or page to the user profile.

We need to:

  • go to Structure — Views and create a new view block of the “user” type that will use fields
  • add the field for the user picture and, in the field settings, relate it to our “My Mentors” field
  • add a relationship in the “Advanced” section of Views to the “My Mentors” field
  • create a contextual filter in the “Advanced” section of Views that will display only the mentors of the particular user

Views in Drupal 8 that shows each user's mentors

Contextual filter by user ID in Drupal 8

With this done, we now have a block that shows the pictures of mentors on each user page. However, this block is not yet added anywhere to the site.

4. Shaping user profiles with the Layout Builder

It’s time to finally unite it all together. The Views block can be attached to the profile using the traditional Drupal block layout as a simple option. However, the new Layout Builder that appeared in Drupal 8.5 offers an amazing drag-and-drop interface for this purpose!

The Layout Builder is used with all fieldable entities, including user profiles. In addition to enabling the module, we need to enable the Layout Builder on the Manage Display tab of the particular entity type (in this case — user account settings).

Enable Layout Builder for entity type

The Manage layout button takes us to the drag-and-drop interface where we can add sections with a different number of columns, set their width proportions, and add blocks to them. Blocks include Drupal fields, Views blocks, forms, menus, and much more.

adding blocks in Layout Builder Drupal 8

Every Drupal block is configured on the right sidebar with all settings traditionally available in “Manage display.”

Configuring field in Drupal 8's Layout Builder

We are creating a three-column section and adding profile fields as blocks, including the “My featured photos” field and the “My mentors” Views block. We will save the result and see how our profile looks.

User profile created in Drupal 8's Layout Builder

Of course, it still needs a good touch of HTML and CSS. However, we have only touched the tip of the iceberg of what core Drupal 8 modules can do for building user profiles. The opportunities are endless!

Building user profiles in Drupal 8 with our team

Let your user profiles look exactly as you wish, with no limits to your imagination. Entrust building user profiles to our Drupal team who will use core, contributed, or custom modules created specifically for your case. Contact us!

Sep 04 2019
Sep 04

In recent posts we have explored the Migrate Plus and Migrate Tools modules. They extend the Migrate API to provide migrations defined as configuration entities, groups to share configuration among migrations, a user interface to execute migrations, among other things. Yet another benefit of using Migrate Plus is the option to leverage the many process plugins it provides. Today, we are going to learn about two of them: `entity_lookup` and `entity_generate`. We are going to compare them with the `migration_lookup` plugin, show how to configure them, and explain their compromises and limitations. Let’s get started.

What is the difference among the migration_lookup, entity_lookup, entity_generate plugins?

In the article about migration dependencies we covered the `migration_lookup` plugin provided by the core Migrate API. It lets you maintain relationships among entities that are being imported. For example, if you are migrating a node that has associated users, taxonomy terms, images, paragraphs, etc. This plugin has a very important restriction: the related entities must come from another migration. But what can you do if you need to reference entities that already exist in the system? You might already have users in Drupal that you want to assign as node authors. In that case, the `migration_lookup` plugin cannot be used, but `entity_lookup` can do the job.

The `entity_lookup` plugin is provided by the Migrate Plus module. You can use it to query any entity in the system and get its unique identifier. This is often used to populate entity reference fields, but it can be used to set any field or property in the destination. For example, you can query existing users and assign the `uid` node property which indicates who created the node. If no entity is found, the plugin returns a `NULL` value, which you can use in combination with other plugins to provide a fallback behavior. The advantage of this plugin is that it does not require another migration. You can query any entity in the entire system.

The `entity_generate` plugin, also provided by the Migrate Plus module, is an extension of `entity_lookup`. If no entity is found, this plugin will automatically create one. For example, you might have a list of taxonomy terms to associate with a node. If some of the terms do not exist, you would like to create and relate them to the node.

Note: The `migration_lookup` offers a feature called stubbing that neither `entity_lookup` nor `entity_generate` provides. It allows you to create a placeholder entity that will be updated later in the migration process. For example, in a hierarchical taxonomy terms migration, it is possible that a term is migrated before its parent. In that case, a stub for the parent will be created and later updated with the real data.

Getting the example code

You can get the full code example at https://github.com/dinarcon/ud_migrations The module to enable is `UD Config entity_lookup and entity_generate examples` whose machine name is `ud_migrations_config_entity_lookup_entity_generate`. It comes with one JSON migration: `udm_config_entity_lookup_entity_generate_node`. Read this article for details on migrating from JSON files. The following snippet shows a sample of the file:


{
  "data": {
    "udm_nodes": [
      {
        "unique_id": 1,
        "thoughtful_title": "Amazing recipe",
        "creative_author": "udm_user",
        "fruit_list": "Apple, Pear, Banana"
      },
      {...},
      {...},
      {...}
    ]
  }
}

Additionally, the example module creates three users upon installation: 'udm_user', 'udm_usuario', and 'udm_utilisateur'. They are deleted automatically when the module is uninstalled. They will be used to assign the node authors. The example will create nodes of type "Article" from the standard installation profile. You can execute the migration from the interface provided by Migrate Tools at `/admin/structure/migrate/manage/default/migrations`.

Using the entity_lookup to assign the node author

Let’s start by assigning the node author. The following snippet shows how to configure the `entity_lookup` plugin to assign the node author:


uid:
  - plugin: entity_lookup
    entity_type: user
    value_key: name
    source: src_creative_author
  - plugin: default_value
    default_value: 1

The `uid` node property is used to assign the node author. It expects an integer value representing a user ID (`uid`). The source data contains usernames so we need to query the database to get the corresponding user IDs. The users that will be referenced were not imported using the Migrate API. They were already in the system. Therefore, `migration_lookup` cannot be used, but `entity_lookup` can.

The plugin is configured using three keys. `entity_type` is set to the machine name of the entity to query: `user` in this case. `value_key` is the name of the entity property to look up. In Drupal, usernames are stored in a property called `name`. Finally, `source` specifies which field from the source contains the lookup value for the `name` entity property. For example, the first record has a `src_creative_author` value of `udm_user`. So, this plugin will instruct Drupal to search among all the users in the system for one whose `name` (username) is `udm_user`. If a value is found, the plugin will return the user ID. Because the `uid` node property expects a user ID, the return value of this plugin can be used directly to assign its value.

What happens if the plugin does not find an entity matching the conditions? It returns a `NULL` value. Then it is up to you to decide what to do. If you let the `NULL` value pass through, Drupal will apply some default behavior. In the case of the `uid` property, if the received value is not valid, the node creation will be attributed to the anonymous user (uid: 0). Alternatively, you can detect if `NULL` is returned and take some action. In the example, the second record specifies the "udm_not_found" user which does not exist. To accommodate for this, a process pipeline is defined to manually specify a user if `entity_lookup` did not find one. The `default_value` plugin is used to return `1` in that case. The number represents a user ID, not a username. In particular, this is the user ID of the "super user" created when Drupal was first installed. If you need to assign a different user, but the user ID is unknown, you can create a pseudofield and use the `entity_lookup` plugin again to find its user ID. Then, use that pseudofield as the default value. A hedged sketch of this approach follows.
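
The following is a sketch of that pseudofield approach. It assumes the core `null_coalesce` process plugin is available in your Drupal version, and the pseudofield names as well as the `constants/fallback_username` source (a constant you would define in the source section) are hypothetical:

# Look up the author from the source data; may return NULL.
pseudo_author_lookup:
  plugin: entity_lookup
  entity_type: user
  value_key: name
  source: src_creative_author
# Look up a known fallback account by username.
pseudo_fallback_author:
  plugin: entity_lookup
  entity_type: user
  value_key: name
  source: constants/fallback_username
# Use the first non-NULL value from the two lookups.
uid:
  plugin: null_coalesce
  source:
    - '@pseudo_author_lookup'
    - '@pseudo_fallback_author'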

Important: User entities do not have bundles. Do not set the `bundle_key` nor `bundle` configuration options of the `entity_lookup`. Otherwise, you will get the following error: "The entity_lookup plugin found no bundle but destination entity requires one." Files do not have bundles either. For entities that have bundles like nodes and taxonomy terms, those options need to be set in the `entity_lookup` plugin.

Using the entity_generate to assign and create taxonomy terms

Now, let’s migrate a comma separated list of taxonomy terms. An example value is `Apple, Pear, Banana`.  The following snippet shows how to configure the `entity_generate` plugin to look up taxonomy terms and create them on the fly if they do not exist:


field_tags:
  - plugin: skip_on_empty
    source: src_fruit_list
    method: process
    message: 'No src_fruit_list listed.'
  - plugin: explode
    delimiter: ','
  - plugin: callback
    callable: trim
  - plugin: entity_generate
    entity_type: taxonomy_term
    value_key: name
    bundle_key: vid
    bundle: tags

The terms will be assigned to the `field_tags` field using a process pipeline of four plugins:

  • `skip_on_empty` will skip the processing of this field if the record does not have a `src_fruit_list` column.
  • `explode` will break the string of comma-separated values into individual elements.
  • `callback` will use the `trim` PHP function to remove any whitespace from the start or end of the taxonomy term name.
  • `entity_generate` takes care of finding the taxonomy terms in the system and creating the ones that do not exist.

For a detailed explanation of the `skip_on_empty` and `explode` plugins see this article. For the `callback` plugin see this article. Let’s focus on the `entity_generate` plugin for now. The `field_tags` field expects an array of taxonomy term IDs (`tid`). The source data contains term names, so we need to query the database to get the corresponding term IDs. The taxonomy terms that will be referenced were not imported using the Migrate API, and they might not exist in the system yet. If that is the case, they should be created on the fly. Therefore, `migration_lookup` cannot be used, but `entity_generate` can.

The plugin is configured using five keys. `entity_type` is set to the machine name of the entity to query: `taxonomy_term` in this case. `value_key` is the name of the entity property to look up. In Drupal, the taxonomy term names are stored in a property called `name`. Usually, you would include a `source` that specifies which field from the source contains the lookup value for the `name` entity property. In this case it is not necessary to define this configuration option. The lookup value will be passed from the previous plugin in the process pipeline: in this case, the trimmed version of the taxonomy term name.

If, and only if, the entity type has bundles, you also must define two more configuration options: `bundle_key` and `bundle`. Similar to `value_key` and `source`, these extra options will become another condition in the query looking for the entities. `bundle_key` is the name of the entity property that stores which bundle the entity belongs to. `bundle` contains the value of the bundle used to restrict the search. The terminology is a bit confusing, but it boils down to the following. It is possible that the same value exists in multiple bundles of the same entity. So, you must pick one bundle where the lookup operation will be performed. In the case of the taxonomy term entity, the bundles are the vocabularies. Which vocabulary a term belongs to is associated in the `vid` entity property. In the example, that is `tags`. Let’s consider an example term of "Apple". So, this plugin will instruct Drupal to search for a taxonomy term whose `name` (term name) is "Apple" that belongs to the "tags" `vid` (vocabulary).

What happens if the plugin does not find an entity matching the conditions? It will create one on the fly! It will use the value from the source configuration or from the process pipeline. This value will be used to assign the `value_key` entity property for the newly created entity. The entity will be created in the proper bundle as specified by the `bundle_key` and `bundle` configuration options. In the example, the terms will be created in the `tags` vocabulary. It is important to note that values are trimmed to remove whitespace at the start and end of the name. Otherwise, if your source contains spaces after the commas that separate elements, you might end up with terms that seem duplicated like "Apple" and " Apple".

More configuration options

Both `entity_lookup` and `entity_generate` share the previous configuration options. Additionally, the following options are available (a configuration sketch follows the list):

  • `ignore_case` contains a boolean value to indicate if the query should be case sensitive or not. It defaults to true.
  • `access_check` contains a boolean value to indicate if the system should check whether the user has access to the entity. It defaults to true.
  • `values` and `default_values` apply only to the `entity_generate` plugin. You can use them to set fields that could exist in the destination entity. An example configuration is included in the code for the plugin.
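
Here is a hedged configuration sketch combining these optional settings with the taxonomy term example above. The `src_tag_name` and `src_tag_notes` columns, the `field_notes` field, and the description value are hypothetical; refer to the plugin documentation for the exact semantics of `values` and `default_values`:

field_tags:
  plugin: entity_generate
  source: src_tag_name
  entity_type: taxonomy_term
  value_key: name
  bundle_key: vid
  bundle: tags
  # Ignore letter case when looking up existing terms.
  ignore_case: true
  # Skip the entity access check during the lookup.
  access_check: false
  # Map additional destination fields from source columns.
  values:
    field_notes: src_tag_notes
  # Set static values on newly generated terms.
  default_values:
    description: 'Created automatically during migration'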

One interesting fact about these plugins is that none of the configuration options is required. The `source` can be skipped if the value comes from the process pipeline. The rest of the configuration options can be inferred by code introspection. This has some restrictions and assumptions. For example, if you are migrating nodes, the code introspection requires the `type` node property defined in the process section. If you do not set one because you define a `default_bundle` in the destination section, an error will be produced. Similarly, for entity reference fields it is assumed they point to one bundle only. Otherwise, the system cannot guess which bundle to lookup and an error will be produced. Therefore, always set the `entity_type` and `value_key` configurations. And for entity types that have bundles, `bundle_key` and `bundle` must be set as well.

Note: There are various open issues contemplating changes to the configuration options. See this issue and the related ones to keep up to date with any future change.

Compromises and limitations

The `entity_lookup` and `entity_generate` plugins violate some ETL principles. For example, they query the destination system from the process section. And in the case of `entity_generate`, it even creates entities from the process section. Ideally, each phase of the ETL process is self-contained. That being said, there are valid use cases for these plugins, and they can save you time when their functionality is needed.

An important limitation of the `entity_generate` plugin is that it is not able to clean up after itself. That is, if you roll back the migration that calls this plugin, any created entity will remain in the system. This would leave data in Drupal that is potentially invalid or otherwise never used. Those values could leak into the user interface, for example in autocomplete fields. Ideally, rolling back a migration should delete any data that was created with it.

The recommended way to maintain relationships among entities in a migration project is to have multiple migrations. Then, you use the `migration_lookup` plugin to relate them. Throughout the series, several examples have been presented. For example, this article shows how to do taxonomy term migrations.

What did you learn in today’s blog post? Did you know how to configure these plugins for entities that do not have bundles? Did you know that reverting a migration does not delete entities created by the `entity_generate` plugin? Did you know you can assign fields in the generated entity? Share your answers in the comments. Also, I would be grateful if you shared this blog post with others.

This blog post series, cross-posted at UnderstandDrupal.com as well as here on Agaric.coop, is made possible thanks to these generous sponsors. Contact Understand Drupal if your organization would like to support this documentation project, whether it is the migration series or other topics.

Sep 04 2019
Sep 04


Dominique De Cooman

Dropsolid was conceived at DrupalCon, and now we’re a Diamond Sponsor! We’ll be in Amsterdam to show off the Dropsolid platform and our vision for Drupal. We’d love your feedback. And we are donating 15 minutes of core contributor time for everyone who completes our survey at our booth.

We hope to see you there! Contact us, sign up for our newsletter, or stop by our booth!

Stop by Our Booth, Help Make Drupal Shine

We will donate 15 minutes of core contribution time for each person who fills out a short survey at our booth (Stand 13, by the catering area) at DrupalCon—one per person.

We didn’t want to be Diamond sponsors just for the sake of it. Drupal and DrupalCon got us here, made us what we are today. We want to make a difference. We asked ourselves what kind of a booth-giveaway would make a lasting impact on Drupal? A t-shirt of just the right shade of blue? ;-) We decided to invest in Drupal, paying a core contributor for their work.

Sponsors are a DrupalCon’s Best Friend

DrupalCon sparked the formation of Dropsolid, and we are very proud to be able to be Diamond Sponsors in Amsterdam this year. I wanted to take a moment to reflect on what DrupalCon has meant for us.

In 2012, after five years as a developer, I attended my very first DrupalCon in Munich. I saw Dries speak, attended so many sessions, met so many community members. There was so much incredible, positive energy; I was overwhelmed.

At that DrupalCon, I met some extraordinary people who helped persuade me that founding a Drupal company was a great idea. The experience convinced me to invest everything I ever owned into a company with Drupal at its core. I felt it was now or never. And so Steven Pepermans and I founded Dropsolid.

Now seven years later, we are one of the Diamond sponsors at DrupalCon Amsterdam. It’s hard to believe Dropsolid can do this. Sponsoring DrupalCon is a dream come true for us. For us, this is already a huge achievement. The very experience of it co-created our company, and now we get to contribute to it ourselves.

We are grateful to be here and want to make a difference.

The Dropsolid Vision for Better Customer Experience with Drupal

At the conference, we want to share a vision for possibilities with Drupal. We see Drupal pinning together an integrated digital experience platform that enables teams to deliver great digital customer experiences more effectively and at a lower cost. Our vision starts with the best practices of working with Drupal, hosting, deployment & development tools, and digital marketing capabilities. It’s what we offer customers today.

Out in the market, these “digital experience platforms” make connecting all the parts together easier. It means you can avoid getting nickeled-and-dimed on individual services and dealing with quirks in integrations. This is all possible right now with Drupal, when you have the skills and knowledge to put everything together. It’s what we do for our clients every day. We build flexible integrated platforms, and we provide training and consultation along the way.

In building these solutions with Drupal, we discovered some best practices, many things that can be recycled and reused, and real advantages and economies of scale. We’ll be talking about that in our talks, such as Nick’s talk on improving on-site search with machine learning in Drupal and Wouter and Brent’s talk about avoiding Drupal SEO pitfalls, and Mattias will share the insights we’ve gained working on Launchpad, our local development tool. These are very practical and direct ways to get more out of your investment in Drupal.

But we have a bigger vision. Next, we’re working on our integrated service so you can get these capabilities with one offering in Drupal. If you want to know more about this vision, and how to get there today, come along to my talk about Open Digital Experiences to Increase Customer Lifetime Value.

You can also stop by our booth to see demos of our Dropsolid hosting platform, see how to use Dropsolid Personalization, and see Rocketship in action.

Facets of the Digital Experience - Dropsolid Sessions at DrupalCon

Where to meet the Dropsolid team: in addition to visiting our booth (and making us pay a core contributor!) at Stand 13, we’ll be showing many facets of what goes into digital experiences — investing in Digital Customer Experiences, Search Engine Optimization tips for Drupal — and we’ll be on a panel about local development tools, too.

Demo: A future vision of Drupal as a Digital Experience Platform

  • What: In our live demo, we’ll show you the power of our Platform, Launchpad, Rocketship, Search & Machine Learning, and Personalization tools working together to break down silos and create engaging customer experiences with Drupal. 
  • When: Wed, 30 Oct, 12:40 to 13:10
  • Where: Sponsor Stage

Stop Buying Drupal Websites, Buy Open Digital Experiences to Increase Customer Lifetime Value

  • Who: Dominique De Cooman, Founder and CEO of Dropsolid
  • What: My talk is a distillation of what we learned about the difference between Drupal being “just a CMS” for “just building a website,” and how Drupal can be a truly comprehensive Digital Experience Manager.
  • When: Tue, 29 Oct, 11:55 to 12:15
  • Where: Room G 107

The Battle of the Local Development Tools [Panel Discussion]

  • Who: Mattias Michaux, Drupal Developer and DevOps Engineer, joins a panel chaired by Michael Schmid from Amazee
  • What: DrupalCon website: “In this session, creators and users of different local development tools will provide their story of why they made the choices they made.”
  • When: Wed, 30 Oct, 11:30 - 12:10
  • Where: Room G 102

Machine Learning: Creating More Relevant Search Results With “Learn To Rank”

  • Who: Nick Veenhof, CTO Dropsolid, and Mattias Michaux
  • What: A summary of what machine learning is and, more importantly, how you can use it for a pervasive problem: the relevance of your internal site search.
  • When: Wed, 30 Oct, 16:15 to 16:55
  • Where: Auditorium

Drupal SEO Pitfalls and How To Avoid Them

  • Who: Wouter De Bruycker, SEO Specialist and Brent Gees, Drupal Architect
  • What: Drupal can be your perfect technical SEO platform, but to get the most out of it, you have to make sure it’s set up as it should be for the search engines. We will go into the details of how to detect common and rare SEO issues (on real Drupal sites!) and explain their impact on SEO.
  • When: Wed, 30 Oct, 16:40 to 17:00
  • Where: Room G 102

See you there! Get in touch!

We hope to see you there! Sign up for our newsletter or stop by our booth at Stand 13 and help us contribute!

And remember, we will donate 15 minutes of core contribution time for each person who fills out a short survey at our booth (Stand 13, by the catering area) at DrupalCon—one per person.

DrupalJam 2019

Sep 04 2019
Sep 04

Drupal 8 makes it easier and easier to create rich, interesting, and beautiful content pages. Among the new features of the Drupal 8.7 release, we saw the stable Layout Builder and the new Media Library user interface. 

Another great piece of news is coming now! The Media Library in Drupal 8 has an embed button added to the CKEditor panel, and media embedding without a mouse is possible. This Media Library and CKEditor integration is now in the dev branch and will be officially available with the Drupal 8.8 stable release in December 2019. 

Consider scheduling your Drupal website update to 8.8 with our team. Meanwhile, let’s learn more about the new features.

The Media Library in Drupal 8 and rich content creation

Thanks to the Media Library and Media modules being part of the Drupal core, media handling in Drupal 8 is very convenient. It’s possible to add various types of media, store them in the Library, and reuse the content whenever you need it. 

You can display the items in a grid or table view, select and insert them, sort and filter them by various criteria, bulk upload, and so on. With the new Library user interface introduced in Drupal 8.7, everything looks and works especially well. Here are our screenshots from this version.

Media Library in Drupal 8

Media Library in Drupal 8: adding or selecting images

The default Drupal 8 media types are:

  • Audio
  • File
  • Image
  • Remote video (with links from YouTube, Vimeo, etc.)
  • Video
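
If you prefer working with media programmatically, each of these types is simply a media bundle. Here is a minimal sketch of creating a “Remote video” item in code; it assumes the default remote_video media type with its standard field_media_oembed_video source field, and the YouTube URL is only a placeholder:

<?php

use Drupal\media\Entity\Media;

// Create a "Remote video" media item from a YouTube URL.
// Once saved, it appears in the Media Library and can be reused anywhere.
$video = Media::create([
  'bundle' => 'remote_video',
  'name' => 'Example remote video',
  // Source field of the core "Remote video" media type.
  'field_media_oembed_video' => 'https://www.youtube.com/watch?v=EXAMPLE',
]);
$video->save();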

Using items from Media Library in content

Content editors appreciate the ability to select items from the Library and insert them directly into the content. To achieve this, you need to add a Media reference field of the relevant type to a content type (or to other fieldable entities, such as user accounts). 
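
If you manage your site in code, such a field can also be created programmatically. The sketch below is only an illustration; the field name field_related_media, the article content type, and the restriction to the image media type are assumptions for the example:

<?php

use Drupal\field\Entity\FieldConfig;
use Drupal\field\Entity\FieldStorageConfig;

// Field storage: an entity reference field that points to media entities.
FieldStorageConfig::create([
  'field_name' => 'field_related_media',
  'entity_type' => 'node',
  'type' => 'entity_reference',
  'settings' => ['target_type' => 'media'],
])->save();

// Attach the field to the "article" content type, limited to image media.
FieldConfig::create([
  'field_name' => 'field_related_media',
  'entity_type' => 'node',
  'bundle' => 'article',
  'label' => 'Related media',
  'settings' => [
    'handler' => 'default:media',
    'handler_settings' => ['target_bundles' => ['image' => 'image']],
  ],
])->save();

On the content type’s form display, you would then typically pick the Media library widget for this field, so editors get the familiar Library dialog when they create content.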

Media Library in Drupal 8: oEmbed videos

Media Library in Drupal 8: adding or selecting videos

Great news: media button in the CKEditor panel

To make media selection and embedding even smoother, the embed button has now been added to the CKEditor panel in the Drupal 8.8.x-dev release. This Media Library and WYSIWYG integration was announced in a tweet by “The Drop is Always Moving.”

Media Library and CKEditor integration tweet

As we see, the Media Library button has an icon that looks attractive and clearly shows its purpose to users. 

The Media subsystem maintainer Phenaproxima shows nice screenshots and writes that the icon design has been agreed on by everyone, the usability tests have passed successfully, and the button is well tested. Congratulations and thanks to the team of amazing experts for their work!

The work has been committed to the Drupal 8.8.x dev branch and is waiting for the official release on December 4, 2019.

Media Library button added to Drupal 8.8 CKEditor

Users can click the button, see the Media Library, select media, and click “Insert Selected.”

Media Library button added to Drupal 8.8 CKEditor

The button can be enabled or disabled by dragging and dropping it in the text format’s toolbar configuration, which is a great capability of CKEditor in Drupal 8.

CKEditor panel now with Media Library button
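
The same toolbar change can be scripted as well, for instance in a hook_update_N() implementation. The following is a rough sketch rather than the official recipe: it assumes the basic_html text format, the DrupalMediaLibrary button id, and the media_embed filter introduced by the 8.8.x work, so double-check those names against your own site:

<?php

use Drupal\editor\Entity\Editor;
use Drupal\filter\Entity\FilterFormat;

// Add the Media Library button to the CKEditor toolbar of a text format.
$editor = Editor::load('basic_html');
$settings = $editor->getSettings();
$settings['toolbar']['rows'][0][] = [
  'name' => 'Media',
  'items' => ['DrupalMediaLibrary'],
];
$editor->setSettings($settings);
$editor->save();

// Enable the "Embed media" filter on the same format so the inserted
// media tags are rendered on output.
$format = FilterFormat::load('basic_html');
$format->setFilterConfig('media_embed', ['status' => TRUE]);
$format->save();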

Breaking news: final patch for Media Library and WYSIWYG integration

As we were preparing this article for publication, more awesome news arrived: the final feature patch for the Media Library. It allows media to be embedded in the WYSIWYG editor with no mouse needed. Wim Leers, one of the gurus who make such things happen, posted a video in his blog post.

Media embedding without mouse in Drupal 8.8 CKEditor

Enjoy the Media Library’s new features!

Start producing richer content at the snap of a finger — use the Media Library in Drupal 8. Our Drupal support and development team can assist you at every step of the way. For example, we can:

  • update you to Drupal 8.7 so you can use the new Media Library user interface
  • upgrade your website to Drupal 8 if you are still on Drupal 7
  • adjust your website’s settings for easy media handling workflows
  • advise you and set up other attractive ways to display content in Drupal 8
  • and, of course, update you to the upcoming Drupal 8.8 as soon as it arrives in December

Follow our news about Drupal support services and always feel free to contact us!

About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
