Apr 17 2019

By Diego Sabolo, Full Stack Developer | April 17, 2019

DrupalCon Seattle 2019 was my second Drupal conference. Everybody enjoys travel, and everybody should enjoy learning while at it! This year I had the opportunity to do both, taking advantage of the Professional Development Program that weKnow offers and taking my family on vacation.

The Washington State Convention Center

In my first hours in Seattle, I joined my teammates and got my credentials and the full information about the event. I was surprised by the variety of sessions available! One difference from Nashville 2018: this year there were only two days of sessions, but the total number looked similar. In fact, I attended more sessions this year than in 2018.

Seattle, viewed from the Space Needle

On Tuesday, Dries officially opened the event with an interesting keynote about the current state of Drupal, the importance of updating current D7 sites to D8, and the upcoming improvements to the Drupal UI. He also focused on the importance of accessibility and inclusion in the Drupal community.

The sessions menu had a lot of interesting offerings... not easy to decide where to go!

On Wednesday the sessions began in full. Choosing tracks was not easy; most of them were really interesting, so I focused on those I believed would add the most value to my current role (working in growth & support): sessions related to custom development and testing. But there were sessions on every topic, from improving estimations to tips for remote work. There were also exotic topics such as installing Drupal on a homemade Raspberry Pi cloud, and Alexa & Drupal integration.

My picks across Wednesday and Thursday were:

  • Advanced Webforms, by @jrockowitz: a full review of the Webform module's potential, its handlers, and the Webform APIs.

  • Building a Slack ChatBot, by @jmolivas: Jesus gave us an awesome example of creating chatbots, integrating them with external APIs, and shared several tips for making interactions between computers and humans feel natural.

  • Custom Compound Fields in Drupal 8, by @thagler: an excellent review of multi-element fields in D8, focused on how to create, theme, and test custom compound fields.

  • Advanced Automated Visual Testing, by @shwetasharma84: an interesting demo of visual regression testing (VRT) and how to include it in an automated deployment process.

  • Design a Decoupled Application, by @justafish, @alwaysworking and @da_wehner: an architecture guide based on the Drupal admin UI.

All in all, it was an awesome experience for reviewing ideas, learning new things, and making new friends. I'm leaving Seattle with a sweet taste, renewed energy, and the desire to put into practice the new concepts I learned.

Big or Complex Project,
Not Enough Devs?

We help development teams meet their deadlines by seamlessly integrating our highly skilled developers.

Mar 13 2019

By Natasha Chanto, Marketing | March 13, 2019

DrupalCon 2019

We’re Going to Seattle!

We are a month away from flying out to Seattle for the one thing we have all been waiting for here at weKnow… DrupalCon 2019! Our team is beyond excited to be a part of this event once more as attendees and special conference guests.

Why are we going to DrupalCon?

Most of our WeGive efforts go toward coding for several projects weKnow maintains, with several developers contributing a significant portion of their time to improving coding tools. Our top contributors, Jesus Manuel Olivas and Omar Aguirre, have dedicated much of their time to projects like the Drupal Console, which has now been downloaded more than 3 million times, helping hundreds of thousands of developers code more efficiently. We also take the time to contribute to numerous modules and to Drupal core. This is why we are flying out to Seattle: to share our work within the Drupal community with other people and companies.

What are some of the benefits of attending DrupalCon?

In this five-day event, people from all around the world come for the training, sessions, social events, and the wealth of networking opportunities. It opens your mind to new worlds and business views, and it allows you to learn about and from other people's experiences with Drupal. This is the event where we go to soak up inspiration. As developers, we learn from strong, inspiring leaders who encourage us to think big and envision our future. It is a boundless source of motivation that ignites your spark to keep improving your work and technology for the greater good. From the moment you step in, the positive vibe will enthrall you, and if you are a new attendee you will surely find support for taking your first steps in Drupal. Find a space to bond, strengthen your relationships with colleagues and business partners, and dive into the experience.

Our team at DrupalCon

This year, our Head of Products, Jesus Manuel Olivas, will be a speaker for two sessions. The first, presented with Mario Hernandez and Mark Casias from Mediacurrent, is Introduction to Decoupled Drupal with Gatsby and React, where attendees will learn how to create a React-based front end with Gatsby. The second session is Building a Slack ChatBot, where attendees will learn how to make interactions between computers and humans feel like interactions solely between humans. Andres and Omar are looking forward to attending DevOps and front-end sessions such as Drupal Blue/Green Deployments with AWS ECS, Serverless, Well Actually…, and Gatsby and Drupal, amongst others.

Throughout our time in Seattle, we will be posting content and updates on our social media channels. If you want to follow our journey, find Jesus Manuel Olivas on Twitter at @jmolivas and Andres Avila at @andresavila97. And if you are going to DrupalCon, see you in Seattle!


Drupalcon Seattle 2019

If you are looking to augment your team, execute your vision, automate your process or learn more about Drupal and our contributions, we invite you to get to know us better.

Feb 25 2019

By Gio Chacón, Marketing | February 25, 2019

So you are looking for help: either you have a product idea to develop from scratch or you need to augment your IT team; basically, you are looking for the best fit. This read will help you set the right expectations and evaluate your future business partner effectively, whether you want a lifetime relationship, a technical team, or the best short-term service.

Let us explore both perspectives: yours and that of the agencies you hire. We urge you to filter not by price but by expertise and overall experience.

Here are our 4 steps to finding the most suitable software agency:

1. What is expected from you?

Do you know your scope and the objectives of your project? 
This will help you know what to delegate.  Use this general product development list as an idea:

  • UI/UX design
  • Defining functional requirements
  • Technical requirements
  • Project management
  • Product management
  • Quality assurance
  • Production and test infrastructure creation
  • Support

Having a rough plan is better than having no plan. Why is it important? It will help you set a realistic timetable and budget when negotiating your contract. Your software idea may not be completely formed, or you may be creating something from zero, but still try to set realistic milestones so your team has clearly defined deliverables. It is good to have something to hold on to, even if you are not planning to go to market yet.

2. Do you know how to work together?

From day one, articulate what is expected from each other. This is why Agile development frameworks are widely used in the industry: they produce highly collaborative teams and promote communication. We urge each side to actively try to understand the other's business model and the way both measure quality.

Let's put it this way: when developers are motivated to demo and interact frequently with you, then you (the client) have the same motivation to provide feedback and specifications.
Another important aspect is to ask about their project and communication management tools (platforms like Asana, Trello, Jira, etc.); the reason is to identify how accessible they will be, so you can ask yourself: would these fit the way we do things too?

3. How well do you know them?

Do some research: look for reviews about them, ask for recommendations, and review their portfolio. If they have created any apps, download and try them. How about open-source projects? Check whether they meet your expectations. For example, weKnow loves giving back to the Drupal community; click to meet our main contributors.

4. How much can you spend on the developer? 

The most cost-effective service is far more profitable than the cheapest one. Many times, low-cost offers are filled with blind spots you may regret in the future: lack of documentation, poorly written code, technology that is difficult to work with or maintain, lack of experience, or even poor English.

And contract-wise, make sure to clarify what the price actually includes and specify the type of contract you are negotiating, such as a Fixed Price model vs. a Time & Materials model.

Pro Tip: Request details on their after-sale or maintenance service.

Consider a partner that can advise you, not just write code, giving you better ideas and features that connect with you from a technical perspective. You want to buy services from someone who will make your business succeed.

Feb 01 2019

By Jesus Manuel Olivas, Head of Products | February 01, 2019

This is the latest post in the “Improving Drupal and Gatsby Integration” series. This time I will be talking about the Gatsby Boina Starter, our contribution to making your Drupal-Gatsby integration easier. The Boina starter ships with the main Gatsby configuration files you need to get up and running on your Gatsby site.

Among other features, the Gatsby Boina Starter provides:

  • React components to render home, blog, and taxonomy pages.
  • A source plugin for pulling data (including images) into Gatsby from Drupal sites.
  • Support for image markdown preprocessing on Drupal body fields.
  • A pre-configured RSS feed available at `/drupalplanet.xml` (make sure you use the `drupal` tag on your blog pages, or make the proper changes in your `.env` files).
  • A base theme, which allows you to keep receiving updates while overriding the look and feel.

Where do I find this project?

The GitHub repository is located at https://github.com/weknowinc/gatsby-starter-drupal-boina

Project setup

Download the project:

gatsby new boina https://github.com/weknowinc/gatsby-starter-drupal-boina
cd boina

Copy environment file

cp .env.dist .env.development

NOTE: You should use .env.production for the production environment.

Update environment variables

The DRUPAL_HOST variable contains the URL of your Drupal backend server. We highly recommend using the Drupal Boina Distribution, but in case you want to try it right away, don't worry: we prepared a demo server running our Boina Distribution at http://drupal-boina.weknowinc.com/
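
For reference, a minimal `.env.development` under these assumptions would contain just the variable described above, pointed at the demo server:

```shell
# .env.development — DRUPAL_HOST is the only variable this post describes
DRUPAL_HOST=http://drupal-boina.weknowinc.com/
```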

Start Gatsby in development mode

gatsby develop

Open your browser and point to http://localhost:8000/

What does Gatsby Boina look like?

If you want to see what the running Gatsby site looks like, try https://boina.weknowinc.com/

Are you as excited as we are about GatsbyJS and this new API-driven approach?

We invite you to check back as this series continues, exploring more tools we are building to contribute back to Drupal and Gatsby ecosystems that will allow you to implement a Drupal and Gatsby integration without needing to DIY.

Want to learn how to take advantage of these modules?

We can show you how these modules can improve your Drupal and Gatsby integration.

Jan 31 2019

By Jesus Manuel Olivas, Head of Products | January 31, 2019

Drupal 8 has plenty of contributed modules to help you build a headless/decoupled web application. However, getting all of them set up correctly can be a daunting task.

Understanding that this is an issue worth addressing, as mentioned previously in this “Improving Drupal and Gatsby Integration” series, we wrote and contributed two modules: Toast UI Editor and Build Hooks. But other modules are also needed (JSON-API, JSON-API Extras, and Site Settings, to mention a few), plus a minimum of configuration you should take care of to have a pleasant experience with your Drupal-Gatsby integration.

In order to save you from DIY and make this ramp-up easy for Drupal and non-Drupal people alike, we at weKnow decided to contribute the Drupal Boina Distribution. Boina comes with everything you need (code, configuration, and even default content), all added to your site during the installation process so you can start working with the Boina Gatsby Starter immediately.

Which modules are included in the Boina distribution?

To see the full list of dependencies for the Boina distribution, take a look at the project's composer.json file.

Where do I find this project?

The GitHub repository is located at https://github.com/weknowinc/drupal-boina

Project setup

# Clone repository
git clone [email protected]:weknowinc/drupal-boina.git

# Change directory
cd drupal-boina

# Copy .env file
cp .env.dist .env

# Add a hostname entry in your /etc/hosts file, e.g.:
# 127.0.0.1 drupal-boina.develop

# Start containers
ahoy up

# Install Composer dependencies
ahoy composer install

# Install Boina distribution
ahoy drupal boina:install

If you want to take a look at the running Drupal site:


You can access the API endpoints from this link


Feel free to give it a try and let us know your thoughts by leaving a comment.

Are you as excited as we are about GatsbyJS and this new API-driven approach?

We invite you to check back as this series continues, exploring more tools we are building to contribute back to Drupal and Gatsby ecosystems that will allow you to implement a Drupal and Gatsby integration without needing to DIY.

Want to learn how to take advantage of these modules?

We can show you how these modules can improve your Drupal and Gatsby integration.

Jan 18 2019

By Eduardo García, CTO | January 18, 2019

Last month I attended my second Drupal South in Canberra, the capital of Australia. Yep, the capital is Canberra, not Sydney.

In my first Drupal South in 2016 I was invited to present a keynote. I also did a Drupal 8 training and a regular session.

This time my experience was completely different, since I am now a local resident of Australia. I had the opportunity to present again, and enjoyed the conference from a different perspective. I definitely learned a lot from Australian colleagues whom I can now call my peers (plus, this time I got to understand the local jokes). Finding someone who knows Drupal in Australia is not easy, and it is even harder in Tasmania. Events like this help train developers interested in adopting the technology.

What I Shared during Drupal South

Over the last few months at our company (weKnow), we have been working on projects involving what I call “Offline Headless Drupal”, where we use Gatsby to create a React application using Drupal 8 as the source of content.

My session was “How to keep Drupal relevant in the API-driven and git-based CMS era”.

In this architecture, Gatsby blends all content into React and puts it into a CDN, which not only improves UX and accelerates performance but also serves as the starting point for new integrations with modern tools of third-party providers.

You can see my session slides here.

What caught my eye

This year I was very interested in sessions related to GovCMS. Here are a few I enjoyed:

The GovCMS distribution and platform hosting are getting to the next level, especially with the latest updates to their infrastructure and the incorporation of Drupal 8 into the distro.

The future looks brilliant for Drupal in the Australian Government, and the number of Drupal sites will without a doubt increase significantly in the next 24 months. The new version supports Drupal 8, which has more features that make it possible to build better sites. Perhaps the biggest challenge for the community at this moment is finding the talent to work on those new projects.

Hosting more events like this will definitely help local Australians with valuable training that will enable them to participate in projects both in Australia and overseas. Remote work is a solution that can benefit programmers all over the world, and we could take great advantage from it too.

Things are going south!


In Australia, this expression doesn’t necessarily have a negative connotation; after all, we are in the extreme south!

If you thought we could not go further down: yes, we can, and we will. I'm glad to be part of the team that will be hosting Drupal South 2019 in Hobart, Tasmania, the place I call home nowadays.

We will put in all the effort to organize the best event we can. Come and enjoy Drupal, walk the fantastic trails that Tasmania has to offer, and of course don't miss the opportunity to meet the iconic Tasmanian devil! See you there!

Dec 07 2018

By Jesus Manuel Olivas, Head of Products | December 07, 2018

In the first post of this series, “Improving Drupal and Gatsby Integration - The Drupal Modules”, I introduced two contributed modules we wrote to simplify using Drupal with Gatsby. One of the modules mentioned was `tui_editor`, a WYSIWYG markdown editor integration with the Toast UI Editor project. This module allows content editors to enter content as markdown, making it easy to implement JSON-API endpoints that return markdown.

In this post, I will show you how to take advantage of that markdown using `gatsby-remark-drupal`, the Gatsby plugin we wrote. This plugin provides markdown preprocessing support for Drupal body fields, letting you take advantage of `gatsby-transformer-remark` to parse markdown as HTML, `gatsby-remark-images` to process images in markdown so they can be used in the production build, and `gatsby-remark-external-links`, among others.

What does this plugin do?

  • Creates a new `text/markdown` field for the Drupal body fields of the selected content types.
  • Replaces Drupal-relative image paths with the images previously downloaded and cached by the `gatsby-source-drupal` plugin.
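
To illustrate the second point, the path rewrite amounts to something like the following simplified sketch (not the plugin's actual code; `fileMap` stands in for the cache of files that `gatsby-source-drupal` has already downloaded):

```javascript
// Simplified illustration of rewriting Drupal-relative image paths in
// markdown to locally cached files. Not the plugin's real implementation.
function rewriteImagePaths(markdown, fileMap) {
  // Match markdown images whose URL is a site-relative path like
  // /sites/default/files/photo.png and swap in the cached path if known.
  return markdown.replace(
    /!\[([^\]]*)\]\((\/[^)]+)\)/g,
    (match, alt, path) =>
      fileMap[path] ? `![${alt}](${fileMap[path]})` : match
  );
}
```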

Where can you find the plugin? 

How can you install this plugin?

By executing npm from the root of your project:

npm install --save @weknow/gatsby-remark-drupal

How can you configure this plugin?

As with any other Gatsby plugin, you register it in your gatsby-config.js file.

By default, this plugin processes the page and article content types:

{
  resolve: `gatsby-transformer-remark`,
  options: {
    plugins: [
      `@weknow/gatsby-remark-drupal`,
    ],
  },
},

But you can customize which content types to process.

{
  resolve: `gatsby-transformer-remark`,
  options: {
    plugins: [
      {
        resolve: `@weknow/gatsby-remark-drupal`,
        options: {
          nodes: [`article`, `page`, `landing`, `cta`],
        },
      },
    ],
  },
},

NOTE: To keep this series of publications clear and simple, it will be extended with one or two extra posts beyond the originally planned three, for ease of reading and comprehension of each topic.

Are you as excited as we are about GatsbyJS and this new API-driven approach?

We invite you to check back as this series continues, exploring more tools we are building to contribute back to Drupal and Gatsby ecosystems that will allow you to implement a Drupal and Gatsby integration without needing to DIY.

Want to learn how to take advantage of these modules?

We can show you how these modules can improve your Drupal and Gatsby integration.

Dec 04 2018

By Jesus Manuel Olivas, Head of Products | December 04, 2018

At weKnow we not only use Drupal, we also take contributing back very seriously, and now it is time to improve the Drupal and Gatsby integration.

As mentioned on my personal blog in Moving weKnow's personal blog sites from Drupal to GatsbyJS, we have lately been using Gatsby with Drupal as our decoupling strategy for projects, and after building a few sites with Drupal and Gatsby we found some challenges, which we resolved by writing custom code. Now we've decided to share that knowledge as contributed modules.

Toast UI Editor

This module provides a markdown WYSIWYG editor integration for the Toast UI Editor.

Solutions this module provides

This module allows content editors to enter content as markdown using a WYSIWYG tool, making it easy to implement JSON-API endpoints that return markdown. By providing markdown to Gatsby, you can take advantage of the gatsby-transformer-remark plugin's features.


Link to the module: https://www.drupal.org/project/tui_editor

Build Hooks

This module triggers a build hook on any service provider that supports build hooks. It can be configured to execute the trigger manually, by clicking a toolbar element, or automatically, via cron or whenever a node is updated.

Solutions this module provides:

  • Deploy your site to a PaaS/CDN such as Netlify.
  • Execute a build and deploy the site on demand and/or programmatically after updating data in Drupal.

Link to the module: https://www.drupal.org/project/build_hooks
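
Conceptually, on providers like Netlify a build hook is just a URL that rebuilds the site when it receives an empty POST. The trigger the module fires reduces to something like this sketch (the URL is a placeholder, and this is not the module's actual code):

```javascript
// Conceptual sketch: a build-hook trigger is an empty POST to a provider URL.
// The URL passed in would be a real hook URL from your provider's settings.
function buildHookRequest(hookUrl) {
  return { url: hookUrl, method: "POST", body: "{}" };
}

// e.g. hand the request to fetch()/curl from cron or a node-update event.
```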

Are you as excited as we are about GatsbyJS and this new API-driven approach?

We invite you to check back as this series continues, exploring more tools we are building to contribute back to Drupal and Gatsby ecosystems that will allow you to implement a Drupal and Gatsby integration without needing to DIY.

Want to learn how to take advantage of these modules?

We can show you how these modules can improve your Drupal and Gatsby integration.

Nov 13 2018

By admin | November 13, 2018

As a fully distributed company, weKnow supports remote working: a form of management and daily routine that may not be for everyone. But we have proven that all bumps in the road can be successfully sorted out; our organization even surpasses the productivity metrics of the in-office style.

Having a career outside a traditional office setting comes with unique challenges; getting to know them beforehand will allow you to be more productive and happier. Read on for some tips to help you and your team excel.

Remote work challenges plus how to overcome them

  • Health habits: with no one looking over your shoulder telling you when to take breaks, it is easy to postpone lunch to the point where you might forget to eat. Tip: schedule lunch and breaks each day; they serve as an incentive to finish tasks before eating, or simply to walk around, stretch, or clear your mind.
  • You might ditch exercising altogether: many of us stay in a single position most of the day or forget to go outside enough. Tip: the human body is wired for movement; try adjusting your desk so you can type standing up for part of the day, or pause every twenty to forty-five minutes to stand up, drink some water, and then return to your chair.
  • Diet: you are as productive as what you eat and drink. Tip: aim for a healthy diet that keeps you awake, energized, and naturally hydrated, with a variety of vegetables, grains, and protein, while monitoring white sugar consumption.
  • Home adjustments: working at home is an adjustment for everyone in the house, including kids, roommates, or family. Good advice is to be consistent and reasonable; trying to accomplish work and quality family time simultaneously is not advised. Set up a morning routine that allows everyone to be self-sufficient and gives you a fluid workflow too.
  • Isolation: the opposite problem. This is where working from a cafe, coworking with friends, or signing up for an afternoon hobby becomes crucial.
  • Overworking: a recent report from the United Nations International Labour Organization found that remote employees achieve the highest productivity but are more vulnerable to working longer hours. Work smart instead: set appointments on your calendar, set reminders to take breaks, be clear with your team about when you are leaving, create physical boundaries between you and your workspace (have a dedicated office space), and, as tempting as it is, never work from bed; it will only slow you down.
  • Time management: prioritize. First thing each day: eat that frog! Work on the hardest task to get it out of the way. Some people procrastinate by working on minor tasks first, reasoning that “if I clear the little issues first, I will have time for the more complicated one,” and by the end of the day they have little time and energy left for the main issue. Basically, remote workers need to be time-management experts who monitor their energy peaks.
  • Connectivity: the greatest fear of all is the internet going down. Well, this is the remote worker's problem to solve: have a mobile hotspot device or a good cell phone plan, and if possible a backup computer; even a tablet can help.

weKnow's Secrets to Remote Work Success

Trust keeps building up when everyone stays on task while feeling equally represented and present. 

We promote a company culture that embraces remote work for talented nearshore developers. weKnow keeps information and conversations open to everyone; any actions or plans are documented and kept available to asynchronous team members, so that everyone has a clear understanding, feels supported, and therefore feels connected.

Embracing the remote model has given us access to the best talent in the region, regardless of their location.

One way we shift the regular mindset is by thinking in terms of delivering results rather than time.
In-office jobs are based on a clock-in/clock-out dynamic; instead, we have a mindset of productiveness that keeps employees laser-focused: they actively avoid procrastination because their goal is to deliver great products and services while also having time for themselves. Jointly, this is achieved by having clear processes, structures, and agendas, while promoting a healthy system of meetings, events, and habits that keep people communicating, and by providing the right tools to achieve it.

Communication is key. Every member logs into our instant messaging tool, and every project has its own dedicated "space" within this tool, which enables the team to interact in real time. This also allows our Technical Leadership to be easily reachable to assist with any blockers or technical guidance.

We are an organization that understands the best talent seeks to apply its knowledge and experience where creativity is encouraged and meaningful work experiences are provided. Each aspect is meant to facilitate not only the internal processes of our company but also swift communication with our partners and customers regardless of location. By sharing these pieces of advice, we hope to show how useful working remotely can really be.

Each year, more people realize that an office facility is not absolutely necessary. At the 2015 Global Leadership Summit in London, 34% of survey respondents said that more than half of their company's full-time workforce would be working remotely by 2020. It is now clear that disrupting the regular working environment delivers positive outcomes and convenient perks for all: organizations, clients, and workforce personnel.

You can read Part 1 of “weKnow's remote working guide to success” and explore more benefits of remote working here.

Nov 06 2018

By Jesus Manuel Olivas, Head of Products | November 06, 2018

During this year, at several events (SANDCamp, DrupalCamp LA, DrupalCon Nashville, and DrupalCamp Colorado), I had the chance to talk about and show how we at weKnow approach the development of API-driven applications. For those of you who use Drupal, this is something like decoupled or headless Drupal, but without the Drupal part.

This article outlines weKnow’s approach and provides some insight into how we develop some web applications.

Yes, this may sound strange, but whenever we need to build an application that is not content-centric, we use Symfony instead of Drupal. What are those cases? Whenever we do not require the out-of-the-box functionality that Drupal offers, such as content management, content revision workflow, field widgets/formatters, Views, and managing data structures from the UI (content types).

Why we still use PHP.

We know the language pretty well; we have long experience working with PHP, Drupal, and Symfony, and we decided to take advantage of that knowledge to build API-driven applications.

Why the API Platform.

This project is a REST and GraphQL framework that helps you build modern API-driven projects. It provides an API component that includes Symfony 4, Flex, and Doctrine ORM. It also provides client-side components, an Admin based on React, and a Docker configuration ready to start your project with a single command, allowing you to take advantage of thousands of existing Symfony bundles and React components.

Wrapping up

Our developers' expertise across different technologies gives us the advantage of a great time to market when developing client projects. We also like sharing: if you want to see this session live, probably for the last time, join me at DrupalCamp Atlanta.

You can watch the video from DrupalCon Nashville on the Drupal Association YouTube channel here:


You can find the latest version of the slides from DrupalCamp LA here.

Nov 05 2018

By Manuel Santibanez, Front-end Developer | November 05, 2018

weKnow gave me the opportunity to attend my first BADCamp as part of the team that represented the company at this awesome event.

On the first day I attended the Drupal Frontend Summit, a roundtable format that I had not experienced before. It was very rewarding to discuss my experience as a developer who has worked with accessibility guidelines, sharing the tools and strategies I have used to implement such an important standard.

Lots of great sessions shared valuable knowledge that allowed me to leave BADCamp as a better developer!

My top picks:

Without a doubt, the new kid on the block was Gatsby, a piece of technology that takes the development of sites to a new level.

In my opinion, and I may be a little biased here, one of the best talks of the camp was "How to Keep Drupal Relevant in the Git-Based and API-Driven CMS Era", given by Jesus Manuel Olivas. This session opened a great discussion about Drupal's vision, touching on how it can integrate to become a fundamental piece in the scheme of modern web development technologies and strategies, allowing Drupal to focus on what it does best: managing content.

As a final note, I would like to highlight that the venue was great at UC Berkeley; awesome lounge area with coffee to keep us energized all day, pinball machines were a great surprise and a special mention for the waffles!

Thanks for everything, hope to return next year and this time proposing a talk and thus sharing my own knowledge and experience!

Nov 05 2018

By Harold Juárez, Full Stack Developer | November 05, 2018


BADCamp 2018 was the first really big event I attended, aside from actively participating in DrupalCamp Costa Rica for three years. Kindly, some co-workers who had already attended shared their experiences with me, which set great expectations. In addition, I was excited to sightsee San Francisco and Berkeley.

After dedicating this year to front-end work, BADCamp's sessions left me more than satisfied, with refreshed knowledge and practices. So I would like to share my experience and the content of the sessions I participated in:

The second day was a highlight: attendees were given challenges and tools, and the discussion tables enriched my personal experience as I listened to others talk about ways to improve application development.


On Friday the Pattern Lab sessions were quite interesting, practising the creation of themes without relying on a backend. Although I had already used this tool before, the sessions gave me new knowledge to improve its implementation at work.

The potential of React + Gatsby for creating static sites was explored, and I learned compelling ways to take advantage of these new tools to improve an application's performance, using React to render the page and Drupal as an API for the data. This talk was presented by my co-worker Jesus in his session How to keep Drupal relevant in the Git-based and API-driven CMS era.


On Saturday I attended an accessibility session that showed tools for people with different types of disabilities; some are paid and some free to implement on a site, depending on the needs of the specific project.

Another talk that caught my attention was Artificial Intelligence in Drupal, which uses the Google Cloud Vision API to provide image tagging plus face, logo, and explicit-content detection through machine learning.

It was a fantastic experience, and I am very grateful to weKnow for helping me attend. It was a great success that I hope to repeat in the near future!



Oct 23 2018

By Veronica Wheelock, Marketing | October 23, 2018


Autumn is in the air… and part of the weKnow team is heading to BADCamp18, each one of them excited to share experiences, our team culture and contribute to strengthening ties among the members of the Drupal community.

BADCamp 2018

This is a very special BADCamp edition, as it marks a milestone in weKnow’s journey. Back in 2011, this was one of the first Drupal events we attended in the USA. This year we increased our number to 8 attendees, and we proudly became one of the event’s sponsors.

BADCamp is simply the biggest DrupalCamp in the world, reporting 1,300 attendees in 2017 and featuring summits, sessions, and training for the benefit of the open source community. The event will be held at UC Berkeley from October 24th to 27th, 2018.

WeKnow’s team at BADCamp is backed and led by our CTO Jesús Manuel Olivas, one of the co-maintainers of the Drupal Console. He will be speaking on the hottest topics of the event: Decoupled Drupal, APIs, React, and GatsbyJS. Don't miss his session How to keep Drupal relevant in the Git-based and API-driven CMS era.

Sponsoring an event like this is the best way we found to support countless developers from all around who believe in the strong values of a collaborative and open source community.

This is weKnow’s full roster for BADCamp 18:

BADCamp 2018 weKnow Team
  1. Jesús Manuel Olivas / CTO at weKnow Inc.
  2. Omar Aguirre / Products Division & DevOps
  3. Jorge Valdez / Drupal Full Stack Developer
  4. Joseph Zamora / Drupal Frontend Developer
  5. Miguel Castillo / Drupal Backend Developer
  6. Harold Juarez / Drupal Full Stack Developer
  7. Manuel Santibañez / Drupal Full Stack Developer
  8. Heissen Lopez / Drupal Frontend Developer

We hope to meet you there!

Big or Complex Project,
Not Enough Devs?

We help development teams meet their deadlines by seamlessly integrating our highly skilled developers.

Jun 28 2018

Drupal 8 provides the option to include an Ajax callback within our applications using the Ajax Framework. Some functions exist out of the box: methods to hide or show elements in the HTML document, attach content to an element, redirect a page after a submit, and so on. Sometimes, though, we need to implement something particular, or run custom JS code; in that case the out-of-the-box functions are not enough. Fortunately, we can also create our own custom responses. So, let’s start by creating a new ajax callback for a custom form submission.

Since we are using Drupal 8, we can take advantage of the Drupal Console to easily generate the necessary boilerplate code. That said, before continuing, make sure you have the latest Console version (1.6.0); if you do not have it yet, you can follow the instructions on the official docs page.

Generating the custom Module

The first step is to define the module where the code will be generated. If you don't have a custom module, you can create a new one by executing the following Drupal Console command:

drupal generate:module \
--module="example" \
--machine-name="example" \
--module-path="modules/custom" \
--description="My Awesome Module" \
--core="8.x" \
--package="Custom" \
--module-file

Creating AJAX Command

Now that we have a home for our new AJAX command, we can create the command itself. Starting in Drupal Console 1.6.0, the drupal generate:ajax:command command generates a custom AJAX command.

After entering the command, you will be prompted for several pieces of information to generate the boilerplate code for the command:

We can also specify all the options up front:

drupal generate:ajax:command  \
--module="example" \
--class="ExampleCommand" \
--method="example" \
--js-name="example"

Once complete, we can inspect what was generated.

The AJAX Command Class

The generated Ajax command class implements CommandInterface, which requires implementing the render() method:


namespace Drupal\example\Ajax;

use Drupal\Core\Ajax\CommandInterface;

/**
 * Class ExampleCommand.
 */
class ExampleCommand implements CommandInterface {

  /**
   * Render custom ajax command.
   *
   * @return array
   *   An associative array describing the command.
   */
  public function render() {
    return [
      'command' => 'example',
      'message' => 'My Awesome Message',
    ];
  }

}

The render() method must return an associative array. In this case, 'example' is the name of the JavaScript command we will invoke later inside the *.js file. We can also define other properties on this response, for example a custom message.

The JavaScript file

Now let's take a look at the second generated file, example.js, which is located inside your custom module. Here, we read the message property from the response object and display the result in the browser's console. This is a really basic example, but you can implement more robust actions depending on your requirements; in fact, we could reuse AJAX functions from core to interact with our response.

(function ($, Drupal) {

  /**
   * Add new custom command.
   */
  Drupal.AjaxCommands.prototype.example = function (ajax, response, status) {
    console.log(response.message);
  };

})(jQuery, Drupal);

While our example is pretty simple -- it just writes to the browser’s console -- you can make any command you like!
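For context on how that handler gets called: Drupal's core ajax.js walks the array of command objects returned by the server and invokes the matching method on Drupal.AjaxCommands.prototype, passing each object along. Here is a stripped-down model of that dispatch loop (an illustration only, not core's actual code):

```javascript
// Simplified model of how Drupal's ajax.js dispatches server commands.
// Each response item carries a 'command' key naming the client-side handler.
const AjaxCommands = {
  // Our custom handler, as registered in example.js.
  example(ajax, response, status) {
    console.log(response.message);
    return response.message;
  },
};

function dispatch(commands) {
  const results = [];
  for (const item of commands) {
    if (typeof AjaxCommands[item.command] === 'function') {
      // Core passes the ajax object, the command payload, and the status.
      results.push(AjaxCommands[item.command](null, item, 'success'));
    }
  }
  return results;
}

// The payload produced by ExampleCommand::render() on the PHP side.
const payload = [{ command: 'example', message: 'My Awesome Message' }];
const out = dispatch(payload);
```

This is why the 'command' value in render() must match the method name registered on Drupal.AjaxCommands.prototype: it is the lookup key for the dispatch.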

The library definition file

In addition to the above boilerplate, Drupal Console also generated a library definition file:

example-library:
  js:
    js/example.js: {}
  dependencies:
    - core/drupal.ajax

The library definition tells Drupal which JavaScript file(s) need to be loaded and where to find them relative to the module’s directory. We can also declare any required dependencies, such as core/drupal.ajax above, under the dependencies key.

Invoking our custom command

First we need to attach the library to the rendered page. There are lots of ways to do this; for this post, we want to attach it to a specific form, so we’ll define a form alter:

/**
 * Implements hook_form_alter().
 */
function example_form_alter(&$form, FormStateInterface $form_state, $form_id) {
  // Apply the form alter to a specific form #id.
  // The form #id can be found by inspecting the markup.
  if ($form['#id'] == 'custom-form') {
    // Attach the library defined in example.libraries.yml.
    $form['#attached']['library'][] = "example/example-library";
  }
}


We alter the $form array by adding a new item under #attached. Since we’re attaching a library, we also use the library key. Finally, we specify our module name (example) and then our library name (example-library).

The above only attaches the necessary JavaScript. Now we need to put it to use! To do that, we modify the submit callback of the form to return our custom AJAX command:

/**
 * {@inheritdoc}
 */
public function exampleSubmitForm(array &$form, FormStateInterface $form_state) {
  // Build an AJAX response and attach our custom command to it.
  $ajax_response = new AjaxResponse();
  $ajax_response->addCommand(new ExampleCommand());

  return $ajax_response;
}


To add our command, we need to tell Drupal to return only AJAX content, rather than a full web page. In the form submit function we create a new AjaxResponse and call its addCommand() method to add our custom command.

Put together, our custom ajax command will be triggered every time the exampleSubmitForm() method is called, and our custom message will appear in the browser’s console.

Wrap up

While Drupal Console makes creating a new AJAX command easy, you can also do it manually:

  1. Create a custom command class implementing CommandInterface.

  2. Invoke the Drupal.AjaxCommands.prototype object and define your custom actions there.

  3. Create a library definition specifying the location of your custom JavaScript.

  4. Include core/drupal.ajax as part of the library dependencies.

  5. Add your custom ajax command by using addCommand() on the AJAX response object.

Creating a custom AJAX command isn’t complex in Drupal 8. Once you add the command to the AJAX response, you can create all kinds of amazing user experiences!

May 09 2018

Some operations are time consuming and really memory- and/or CPU-intensive. By performing an operation once and caching the output, subsequent requests can be served faster. Drupal provides an easy Cache API to store, retrieve, and invalidate cached data. I wrote this tutorial because I couldn’t find a step-by-step guide to adding cache metadata to render arrays!

In this tutorial we'll:

  • Get an overview of render array caches and how to use them properly.
  • Get our hands dirty on code.

You should already have:

  • Familiarity with custom module development.
  • Knowledge of how to create a custom controller to process incoming requests.
  • Some knowledge of render arrays.

Overview of render arrays

Drupal uses render arrays to generate the HTML that is presented to the end user. While render arrays are a complex topic, let’s cover the basics: a render array is an associative array that represents one or more HTML elements, properties, and values. If you’re interested in learning more, see Render arrays in the official Drupal docs.

Cache metadata to render array

When we have a render array, instructing Drupal to cache the results is easy: we only need to use the #cache property. But what kind of caching? Drupal 8 provides several kinds out of the box:

  • max-age defines how long the data stays cached, as an integer number of seconds.
  • tags is an array of one or more cache tags identifying the data this element depends on.
  • contexts specifies one or more cache context IDs. These are converted to a final value depending on the request. For instance, 'user' is mapped to the current user's ID.

Creating the module and controller

First, let's generate a module with Drupal Console:

$ drupal generate:module --machine-name=d8_cache

A module alone isn’t enough. We also need a controller to respond to incoming requests. We can use Drupal Console to generate the controller too:

$ drupal generate:controller --module=d8_cache --class=DefaultController

When creating the controller, you’ll enter a loop where you can input the three pieces of information needed to define each route: the title, method name, and path. Let’s make one route for each cache type:

Title               Method Name         Path
cacheMaxAge         cacheMaxAge         /d8_cache/max-age
cacheContextsByUrl  cacheContextsByUrl  /d8_cache/contexts
cacheTags           cacheTags           /d8_cache/tags

Now we should have an *.info.yml, a *.routing.yml, and our controller class. Finally, let’s enable our custom module:

 $ drupal module:install d8_cache

Cache “max-age”

With the module and routes created, we can now start playing with Drupal caching. In DefaultController.php, locate the cacheMaxAge() method and add the following:

public function cacheMaxAge() {
  return [
    '#markup' => t('Temporary by 10 seconds @time', ['@time' => time()]),
    '#cache' => [
      'max-age' => 10,
    ],
  ];
}

If we open a web browser and navigate to http://your_drupal_site.test/d8_cache/max-age, we see a “Temporary by 10 seconds timestamp” where timestamp is the current time as a UNIX timestamp. 
“What good is that!?” you might ask. Well, if you refresh the page you’ll notice something interesting. The first time, the page will say something like “Temporary by 10 seconds 1520173780”. If we hit refresh immediately, we’ll see:

“Temporary by 10 seconds 1520173780” (the first second)

“Temporary by 10 seconds 1520173780” (in the next second)

“Temporary by 10 seconds 1520173780” (and so on)

The timestamp doesn’t change! If we wait the full 10 seconds we specified in max-age, the cache invalidates/expires and is replaced with a new timestamp: “Temporary by 10 seconds 1520173790”.

Great, this worked like a charm!

What if we want the page to never expire? Drupal provides a special constant, \Drupal\Core\Cache\Cache::PERMANENT, exactly for this case. We only need to change the value of max-age:

public function cacheMaxAge() {
  return [
    '#markup' => t('WeKnow is the coolest @time', ['@time' => time()]),
    '#cache' => [
      'max-age' => \Drupal\Core\Cache\Cache::PERMANENT,
    ],
  ];
}

And the message, for instance “weKnow is the coolest 1520173780”, will never change! Well, not quite “never”: we can force the page to update by clearing the Drupal cache. This can be done under Admin > Config > Development > Performance, or using Drupal Console:

$ drupal cr all
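The max-age semantics we just saw, including PERMANENT, can be sketched in a few lines of plain JavaScript. This is an illustration of the concept only, not Drupal's cache backend; -1 mirrors the value of Cache::PERMANENT:

```javascript
const PERMANENT = -1; // mirrors Drupal's Cache::PERMANENT

// An entry is fresh if it never expires, or if its age is still under max-age.
function isFresh(entry, nowSeconds) {
  if (entry.maxAge === PERMANENT) {
    return true;
  }
  return (nowSeconds - entry.createdAt) < entry.maxAge;
}

const entry = { createdAt: 1000, maxAge: 10 };
isFresh(entry, 1005);                                    // still cached
isFresh(entry, 1011);                                    // expired, rebuilt on next request
isFresh({ createdAt: 1000, maxAge: PERMANENT }, 99999);  // cached until an explicit clear
```

Only the explicit cache clear ("drupal cr all") removes a PERMANENT entry; a finite max-age expires on its own.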

So that was max-age, one of the simplest caching strategies. What if we need something more...nuanced?

Cache “contexts”

Caching by contexts lets us specify a condition by which something remains cached. A simple example is the URL query, i.e. any part after the ? in a URL. We already defined the route earlier, so we open DefaultController.php and edit the cacheContextsByUrl() method:

public function cacheContextsByUrl() {
  return [
    '#markup' => t('WeKnow is the coolest @time', ['@time' => time()]),
    '#cache' => [
      'contexts' => ['url.query_args'],
    ],
  ];
}

The above piece of code will display a message such as “weKnow is the coolest 1520173780”, and invalidate the cache when a query parameter in the URL is set or updated.

If we visit, for instance, http://your_drupal_site.test/d8_cache/contexts the first time, we’ll see something like “weKnow is the coolest 1520173780”. If we hit refresh, the same message is displayed. But if we add a query parameter, like http://your_drupal_site.test/d8_cache/contexts?query_a=value, the cache is invalidated and the page updates with a new timestamp: “weKnow is the coolest 1520173909”.
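Conceptually, each cache context adds a dimension to the cache entry's identifier, so requests with different query strings get separate entries rather than overwriting one shared copy. A rough sketch of that keying idea (the buildCacheId helper here is hypothetical, purely for illustration):

```javascript
// Hypothetical helper: derive a cache ID from the path plus the values
// of the declared cache contexts (here, url.query_args).
function buildCacheId(path, queryArgs) {
  const sorted = Object.keys(queryArgs)
    .sort()
    .map((k) => `${k}=${queryArgs[k]}`)
    .join('&');
  return `${path}[url.query_args]=${sorted}`;
}

// Same path, no query string: both requests hit the same cache entry.
const a = buildCacheId('/d8_cache/contexts', {});
const b = buildCacheId('/d8_cache/contexts', {});

// Adding ?query_a=value produces a different ID, hence a fresh build.
const c = buildCacheId('/d8_cache/contexts', { query_a: 'value' });
```

Two requests that produce the same ID share a cached copy; a new query string produces a new ID and therefore a new timestamp.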

Sometimes, we only want to invalidate the cache based on a specific argument in the URL query. We can do that too:

public function cacheContextsByUrlParam() {
  return [
    '#markup' => t('WeKnow is the coolest @time', ['@time' => time()]),
    '#cache' => [
      'contexts' => ['url.query_args:your_query_param'],
    ],
  ];
}

Now if we visit the following URL:


Only then does the message change: “weKnow is the coolest 1520173909”. If we visit the same URL with a new value for your_query_param, the cache is invalidated and we get a new timestamp once again:

“weKnow is the coolest 1520173910”
And so on…

The url.query_args:your_query_param value we passed to contexts in our render array instructs Drupal to only invalidate the cache if a certain URL query parameter is set. 

If we visit:


The message is “weKnow is the coolest 1520173910” (first second)
“weKnow is the coolest 1520173910” (next second)
“weKnow is the coolest 1520173910” (after few minutes)

And so on!

Notice the message doesn’t change. This is because we set the cache to invalidate only on the query param “your_query_param”, and the URL above uses a different query param. Since your_query_param is not in our URL, Drupal will never invalidate the cache.

Caching by the URL query isn’t the only context available in Drupal. There are several others:

  • theme (vary by negotiated theme)
  • user.roles (vary by the combination of roles)
  • user.roles:anonymous (vary by whether the current user has the 'anonymous' role or not, i.e. "is anonymous user")
  • languages (vary by all language types: interface, content …)
  • languages:language_interface (vary by interface language — LanguageInterface::TYPE_INTERFACE)
  • languages:language_content (vary by content language — LanguageInterface::TYPE_CONTENT)
  • url (vary by the entire URL)
  • url.query_args (vary by the entire given query string)
  • url.query_args:foo (vary by the ?foo query argument)

Refer to the official Drupal 8 cache contexts documentation for more details about cache “contexts”.

Cache “tags” 

The contexts cache type is really versatile, but sometimes we need more precise control over what is and isn’t cached. For that, there are tags. Open the controller and modify the cacheTags() method as follows:

// Requires: use Drupal\user\Entity\User;
public function cacheTags() {
  $userName = \Drupal::currentUser()->getAccountName();
  $cacheTags = User::load(\Drupal::currentUser()->id())->getCacheTags();
  return [
    '#markup' => t('WeKnow is the coolest! Do you agree @userName?', ['@userName' => $userName]),
    '#cache' => [
      // Use $entity->getCacheTags() instead of hardcoding "user:2" (where 2
      // is the uid) or trying to memorize each tag pattern.
      'tags' => $cacheTags,
    ],
  ];
}

Ok, now let’s log in with our username -- this post uses “Eduardo” -- and visit:

http://your_drupal_site.test/d8_cache/tags
The above code prints “weKnow is the coolest! Do you agree Eduardo?”. If we hit the page again it says the same, and subsequent requests keep saying the same.

If we edit our own username to “EduardoTelaya” and hit save, our tag-cached page changes:

“weKnow is the coolest! Do you agree EduardoTelaya?”

Why is that?

If you look closely at the method, you’ll notice we get the list of cache tags for the current user. If we use a debugger to inspect $cacheTags, it reads “user:userID”, where userID is the user’s unique ID number. When we updated our user account, Drupal invalidated any cached content associated with that tag. Cache tags let us build a dependency in our cache on other entities in the site. We can even define our own tags for full control!
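The idea behind tag-based invalidation can be sketched as a toy cache that records, for each entry, the tags it depends on. Again, this is an illustration of the concept, not Drupal's actual cache backend:

```javascript
// A toy cache where each entry records the tags it depends on.
const cache = new Map();

function set(cid, value, tags) {
  cache.set(cid, { value, tags });
}

// Saving the user entity triggers something like this for 'user:2'.
function invalidateTag(tag) {
  for (const [cid, entry] of cache) {
    if (entry.tags.includes(tag)) {
      cache.delete(cid);
    }
  }
}

set('greeting-page', 'Do you agree Eduardo?', ['user:2']);
set('front-page', 'Welcome!', ['node_list']);

invalidateTag('user:2'); // only the entry tagged user:2 is dropped
```

This is why saving the account wiped exactly the pages that declared the user's cache tags, and nothing else.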

Tips and tricks

In the above examples we only had one #cache in each render array. Drupal also allows us to specify caching at different levels of the tree, depending on need. Let’s suppose we have the following tree of render arrays:

public function cacheTree() {
  return [
    'permanent' => [
      '#markup' => 'PERMANENT: weKnow is the coolest ' . time() . '<br>',
      '#cache' => [
        'max-age' => Cache::PERMANENT,
      ],
    ],
    'message' => [
      '#markup' => 'Just a message! <br>',
      '#cache' => [],
    ],
    'parent' => [
      'child_a' => [
        '#markup' => '--->Temporary by 20 seconds ' . time() . '<br>',
        '#cache' => [
          'max-age' => 20,
        ],
      ],
      'child_b' => [
        '#markup' => '--->Temporary by 10 seconds ' . time() . '<br>',
        '#cache' => [
          'max-age' => 10,
        ],
      ],
    ],
    'contexts_url' => [
      '#markup' => 'Contexts url - ' . time(),
      '#cache' => [
        'contexts' => ['url.query_args'],
      ],
    ],
  ];
}

If we visit http://your_drupal_site.test/d8_cache/tree for the first time, we get this:

PERMANENT: weKnow is the coolest 1520261602
Just a message! 
--->Temporary by 20 seconds 1520261602
--->Temporary by 10 seconds 1520261602
Contexts url - 1520261602

(The timestamps above are for example purposes.)

In the next second, if we visit the same page again, we get the same output. But once 10 seconds have passed, the cache is invalidated thanks to the render array element “child_b” (which was set to expire after 10 seconds) and we get different output:

PERMANENT: weKnow is the coolest 1520261612
Just a message! 
--->Temporary by 20 seconds 1520261612
--->Temporary by 10 seconds 1520261612
Contexts url - 1520261612

Notice how not only “child_b” was updated but also the rest of the render array elements. The same happens if you wait 20 seconds, or visit /d8_cache/tree?query=value, which invalidates the cache according to the URL query contexts.
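The whole response picks up the most restrictive max-age of its children. That merge rule can be sketched like this (modeled on the behavior of core's Cache::mergeMaxAges(), simplified here):

```javascript
const PERMANENT = -1; // mirrors Cache::PERMANENT

// The stricter (shorter) lifetime wins; PERMANENT defers to any finite value.
function mergeMaxAges(a, b) {
  if (a === PERMANENT) {
    return b;
  }
  if (b === PERMANENT) {
    return a;
  }
  return Math.min(a, b);
}

// Bubbling the tree above: a PERMANENT element plus 20s and 10s children.
const pageMaxAge = [PERMANENT, 20, 10].reduce(mergeMaxAges, PERMANENT);
// pageMaxAge is 10 -- the whole response expires with child_b.
```

So even the PERMANENT element is re-rendered every 10 seconds once its cacheability merges with child_b's.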

This is called “bubbling up”: cache metadata can affect the cacheability of the response as a whole! To avoid that, you can use the “keys” attribute to cache individual elements. By adding “keys” you protect an element from cache invalidation triggered by sibling and child array elements. Let’s add a new method and path to our code in order to add keys:

public function cacheTreeKeys() {
  return [
    'permanent' => [
      '#markup' => 'PERMANENT: weKnow is the coolest ' . time() . '<br>',
      '#cache' => [
        'max-age' => Cache::PERMANENT,
        'keys' => ['d8_cache_permanent'],
      ],
    ],
    'message' => [
      '#markup' => 'Just a message! <br>',
      '#cache' => [
        'keys' => ['d8_cache_time'],
      ],
    ],
    'parent' => [
      'child_a' => [
        '#markup' => '--->Temporary by 20 seconds ' . time() . '<br>',
        '#cache' => [
          'max-age' => 20,
          'keys' => ['d8_cache_child_a'],
        ],
      ],
      'child_b' => [
        '#markup' => '--->Temporary by 10 seconds ' . time() . '<br>',
        '#cache' => [
          'max-age' => 10,
          'keys' => ['d8_cache_child_b'],
        ],
      ],
    ],
    'contexts_url' => [
      '#markup' => 'Contexts url - ' . time(),
      '#cache' => [
        'contexts' => ['url.query_args'],
        'keys' => ['d8_cache_contexts_url'],
      ],
    ],
  ];
}

If we visit /d8_cache/tree-keys now, we will get:

PERMANENT: weKnow is the coolest 1520261612
Just a message! 
--->Temporary by 20 seconds 1520261612
--->Temporary by 10 seconds 1520261612
Contexts url - 1520261612

And if we wait for 10 seconds we are going to see:

PERMANENT: weKnow is the coolest 1520261612
Just a message! 
--->Temporary by 20 seconds 1520261612
--->Temporary by 10 seconds 1520261622
Contexts url - 1520261612

Notice how only “--->Temporary by 10 seconds 1520261622” gets updated while the rest of the output doesn’t, thanks to the keys attribute preventing cache invalidation of the other array elements.


You can download the full source code for this post on GitHub.


In this post, we got an overview of render arrays and learned how to use three different cache types. We used max-age for simple, time-based caching. Cache contexts provide a caching strategy based on a variety of dynamic conditions. The tags cache type lets us invalidate caches based on activity on other entities, or gives us full control via custom tag names. Finally, we used cache “keys” to protect against other cache invalidation in a render array tree.

This is it! I hope you enjoyed this tutorial! Stay tuned for more!

This post was contributed by Eduardo Telaya, a former member of the weKnow team. You can find him on Twitter at @Edutrul, or speaking at Drupal events in Latin America such as Drupalcamp Costa Rica.

Mar 07 2018

By Jesus Manuel Olivas, Head of Products | March 07, 2018


The Configuration Management (CM) system is probably one of the most well-known and exciting features of Drupal 8. But wouldn't it be even more awesome to be able to install a site, export its configuration, and then re-install the site from scratch, importing the previously exported configuration?

For those who are not yet clear on what we are talking about, this post is related to fixing the infamous exception error message when importing configuration:

"Site UUID in source storage does not match the target storage."

Why would you want to be able to install your site from an existing configuration?

A couple of big reasons come to mind:

  • Automate the creation of reproducible build/artifacts from scratch at any stage (Development, QA, Production) to test, launch or deploy your site.
  • Simplify onboarding of new developers to any project without the need to obtain a database dump. Developers will be able to spin up sites from scratch just by installing the site and importing configuration files.

How to achieve this using Drupal Console?

Installing a site from a previously exported configuration using Drupal Console is as simple as updating your `console/config.yml` and appending this configuration to the new overrides section:

      skip-validate-site-uuid: true

Then execute the commands to install the site and import your previously exported configuration:

drupal site:install --force --no-interaction
drupal config:import --no-interaction

Simple and easy, right? Well, this is possible using Drupal Console starting with version 1.7.0. This functionality is not supported by Drupal core out of the box. However, providing a better user experience while using Drupal 8 is one of the goals of Drupal Console, and this is the reason we introduce features such as the one mentioned above.

What if my site does not have Drupal Console installed?

Download Drupal Console using Composer in your site, if you do not have it already:

composer require drupal/console:~1.0 --prefer-dist --optimize-autoloader

Create a Drupal Console configuration file for your site.

drupal init --site --no-interaction

What if my site is using an old version of Drupal Console?

Update per-site installation using composer:

composer update drupal/console --with-dependencies

Update the Launcher.

drupal self-update

Can I automate the execution of the site installation and import configuration commands?

Yes, using a chain command. A chain command is a custom command that helps you automate the execution of multiple commands: you define an external YAML file containing the names, options, and arguments of multiple commands, and they are executed in the sequence defined in the file.

For more information about chain commands, refer to the Drupal Console documentation.

This is an example of a chain command to install a site and import a previously exported configuration.

command:
  name: build
  description: 'Build site by installing and importing configuration'
commands:
  # Install site
  - command: site:install
    options:
      force: true
    arguments:
      profile: standard
  # Import configurations
  - command: config:import
  # Rebuild cache
  - command: cache:rebuild
    arguments:
      cache: all

After adding this file, you can execute a single command:

drupal build

If you have any continuous integration or continuous deployment workflow you can integrate this command as part of that workflow.

Will this work with other modules like config_split?

Yes. You should use the Drupal Console command provided by config_split to import the configuration, and it will work as expected, without any issues or errors related to the UUID values:

drupal config_split:import --split=development --no-interaction

Note that you should replace `development` with the name you gave to your split.

Do I have other alternatives?

Yes, the two other well-known alternatives are:

Using config_suite module:

  • Create a new custom profile.
  • Add the drupal/config_suite dependency using composer.
  • Add the config_suite module to your custom profile, listing it as a dependency in your profile_name.info.yml file.
  • Install the site using your new custom profile.
  • Export your site's configuration.

After following these steps, you will be able to reinstall your site using the custom profile and import the previously exported configuration.

Using the config_installer profile:

  • Install the site using your preferred contrib or custom profile.
  • Remove the `install_profile` key from your settings.php file.
  • Add patches to your composer.json file.
  • Add the drupal/config_installer dependency using composer.
  • Export your site's configuration.

After following these steps you will be able to reinstall your site. Note that you will be using the `config_installer` profile for any subsequent site installation, instead of the profile your site is currently using.

Read more about using `config_installer` profile:

Wrapping up

Feel free to update Drupal Console to the latest 1.7.0, try this new feature while it's hot, and provide feedback. Also, make sure you let us know which other UX/DX improvements you would like to see in the Drupal Console project.


Jan 22 2018

Despite being on the market for over a decade, to many, MongoDB still carries a mythical tone with a hint of ‘wizardry’.

The popular misconception is that MongoDB is only suitable for hip startups and former startups that are still considered ‘hip’ and out of the box, such as AirBnB.

Even with all the buzz and talk around MongoDB, the adoption rate remains relatively low in comparison with other ‘standard’ relational database technologies. Not many seem to understand that to be successful in the world of code you must approach everything new with an open mind.

Besides bearing an open mind, you need to incorporate an avenue to test and learn new technologies and tools. Personally, I choose to learn how to use new tools by trying to accomplish routine tasks.

In this blog I’ll explain how to back up and restore data between different MongoDB environments: a simple yet critical task that we do all too often.

Basic tools for MongoDB backup and restore

First things first, we need to install the CLI tools needed to access and operate our Mongo databases. These tools are usually available in the same package that contains the mongo CLI client.

Installation on Mac is a piece of cake using Homebrew, with the following command:

$ brew install mongodb

If you are looking for a more intuitive interface to interact with your Mongo databases, I recommend RoboMongo (even though it doesn't include backup features).

Connecting to the Database

Local database

As with any regular database, to connect you need the server, port, and database name (when using a local setup). If you are connecting to a remote database, you also need to provide a username, password, and authentication mode.

For example, to connect to a database named meteor running on localhost on port 3001, you would use the following command*:

$ mongo localhost:3001/meteor

*The client shell version and server version don’t necessarily have to match, though you may see a warning about the mismatch when connecting.

Remote database

As previously mentioned, connecting to a remote MongoDB database requires more information. In this example, I'll use a cluster hosted on MongoDB Atlas.

In addition to being a NoSQL database, MongoDB is also a distributed database that implements replication out of the box, which is an incredibly significant feature. I still have nightmares from the first time I tried to implement that in MySQL.

To connect to the remote server, you need to provide the information of all replicas or nodes, as you can see in the following command.

$ mongo "mongodb://cluster0-shard-00-00-XXX.mongodb.net:27017,cluster0-shard-00-01-XXX.mongodb.net:27017,cluster0-shard-00-02-XXX.mongodb.net:27017/test?replicaSet=Cluster0-shard-0" --authenticationDatabase admin --ssl --username MYSUPERUSER --password MYSUPERPASS

In this case, MongoDB Cloud Atlas supports different methods of authentication; the default method uses an internal database for users. We’ll also connect over SSL, since it’s important to keep our information secure.

If everything went as expected, you now have access to a regular command line, and you can execute queries just as you do with your local database.


Backing up your Mongo database

To back up our database, local or remote, we’ll use the mongodump program.

Local Database

Depending on your connection, the order and format may differ a little, but the output folder containing your backup should be the same.

$ mongodump --host localhost --port 3001 -d meteor --out ~/Downloads/

After a successful execution, a new folder with the name of the database will be created inside the output folder; in this example, a ‘meteor’ folder.

Inside the folder, you will find two files per collection in the database: one with extension json, which contains your collection’s metadata about structure and definitions, and another with extension bson (the b stands for binary), which is where the data is stored.
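To make that layout concrete, here’s a quick sketch that fakes the folder structure mongodump produces; the `users` collection name and the /tmp path are hypothetical, not from a real dump.

```shell
# Simulate the on-disk layout of a mongodump output folder for a
# hypothetical 'meteor' database containing a single 'users' collection.
mkdir -p /tmp/demo-dump/meteor
touch /tmp/demo-dump/meteor/users.metadata.json  # collection structure/definitions
touch /tmp/demo-dump/meteor/users.bson           # the binary data itself
ls /tmp/demo-dump/meteor
# lists: users.bson and users.metadata.json
```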

Remote Database

$ mongodump --host cluster0-shard-00-00-XXX.mongodb.net --port 27017 -d MYDATABASE --username MYSUPERUSER --password MYSUPERPASS --authenticationDatabase admin --ssl --out ~/Downloads/

Just as we had done earlier in the connection step, we provide a username, password, and authentication method, ensuring that we use SSL for our connection.

We also provide the database we want to back up. In this case only one node is required, because in theory all of them are in sync.
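Since we run these dumps all the time, it helps to wrap the command in a tiny script. This is just a sketch of the idea: the function name, defaults, and the /backups path are my own assumptions, not part of the mongodump tooling.

```shell
#!/bin/sh
# Build a mongodump invocation with a dated output folder, so repeated
# backups of the same database don't overwrite each other.
build_dump_cmd() {
  db="$1"
  host="${2:-localhost}"
  port="${3:-27017}"
  stamp="${4:-$(date +%Y-%m-%d)}"
  echo "mongodump --host $host --port $port -d $db --out /backups/$db-$stamp"
}

# Print the command instead of running it, so you can review it first:
build_dump_cmd meteor localhost 3001 2017-12-26
# prints: mongodump --host localhost --port 3001 -d meteor --out /backups/meteor-2017-12-26
```

To actually execute the backup, drop the echo or wrap the call in `eval "$(build_dump_cmd meteor localhost 3001)"`.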

Restoring your Mongo Database

To import our backup we will use the mongorestore program.

Local Database

This command follows the same rules as mongodump; you can find an example below.

$ mongorestore --host localhost --port 3001 -d meteor ~/Downloads/meteor/

If we want to import only one specific collection, we just need to include the extra information for the particular collection as you can see below.

$ mongorestore --host localhost --port 3001 -d meteor --collection mycollection ~/Downloads/meteor/mycollection.bson

Remote Database

Here again we need to provide all the node replicas; check the following example.

$ mongorestore --host "Cluster0-shard-0/cluster0-shard-00-00-XXX.mongodb.net:27017,cluster0-shard-00-01-XXX.mongodb.net:27017,cluster0-shard-00-02-XXX.mongodb.net:27017" -d MYDATABASE -u MYSUPERUSER -p 'MYSUPERPASS' --authenticationDatabase admin --nsExclude 'admin.system.users' --nsExclude 'admin.system.roles' --ssl ~/Downloads/meteor

You can clearly see that the fundamentals are not that far removed from backup and restore in any SQL-based DB. I hope this guide eliminates an excuse that has been holding you back from dipping your toes into MongoDB.

Happy NO-SQL queries!

Jan 11 2018

Setting up a new local environment can be challenging and really time-consuming if you're doing it from scratch. While this might not be a big deal when working as a single developer on a project, in a team-based scenario it's important to share the same infrastructure configuration. That's why we highly recommended using a tool like Docker to simplify the process.

Last summer, Jesus Manuel Olivas (Project lead) and I started working on a new project, and we had to discuss which setup we should use for the local environments. Since the project was already set up to use Lightning and BLT, we both agreed to use DrupalVM with Vagrant. Everything seemed to work great apart from some permissions conflicts, which we could easily resolve since the project only had two developers at the time.

DrupalVM is a tool for creating Drupal development environments quickly and easily. It comes with the option to use Docker instead of, or in addition to, Vagrant, but it is mostly known and used with Vagrant.

Why We Switched to Docker?

After a few weeks of development, more developers came on board on the project and we started running into some issues. Vagrant was not working as expected on some machines and we were spending way too much time researching and fixing the provisioning issues, so Jesus and I had to go back to the drawing board to come up with a comprehensive solution, and we decided to switch from Vagrant to Docker.

Trying Docker

Docker is a tool for building and deploying applications by packaging them into lightweight containers. A container can hold pretty much any software component along with its dependencies (executables, libraries, configuration files, etc.), and execute it in a guaranteed and repeatable runtime environment.

This makes it very easy to build your app once and deploy it anywhere - on your laptop for testing, then on different servers for live deployment, etc.

There are plenty of 'ready to use' tools to implement Docker with Drupal, just to mention a few:

At this point we didn't want to add an extra layer or tool to the setup process, so we decided to go straight to a plain vanilla Docker configuration.

How To Implement a Basic Docker Configuration For Drupal

Installing Docker

This should be an easy step; once the installation is completed, you should have the Docker daemon running. Confirm by running docker in your terminal, and you should see the list of available commands. Download link

Step 1. Add the hostname

Edit your /etc/hosts file and add the new site name, site.local.
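For example, assuming the site should resolve to the loopback address, the entry would look like this:

```shell
# /etc/hosts — point the local site name at the loopback interface
127.0.0.1   site.local
```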

Step 2. Add the docker-compose.yml file

Add the following file to your project root.

version: "2"

services:
  mariadb:
    image: wodby/mariadb:10.1-2.3.3
    env_file: .env
    ports:
      - '3306:3306'
    volumes:
      - mysqldata:/var/lib/mysql

  php:
    image: wodby/drupal-php:7.0-2.4.3
    env_file: .env
    environment:
      PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
      DB_DRIVER: mysql
    volumes:
      - ./:/var/www/html:cached
#      - ./mariadb-init:/docker-entrypoint-initdb.d

  nginx:
    image: wodby/drupal-nginx:8-1.13-2.4.2
    depends_on:
      - php
    environment:
      NGINX_SERVER_ROOT: /var/www/html/docroot
    volumes:
      - ./:/var/www/html:cached
    labels:
      - 'traefik.backend=nginx'
      - 'traefik.port=80'
      - 'traefik.frontend.rule=Host:${TRAEFIK_HOST}'

  mailhog:
    image: mailhog/mailhog
    labels:
      - 'traefik.backend=mailhog'
      - 'traefik.port=8025'
      - 'traefik.frontend.rule=Host:mailhog.${TRAEFIK_HOST}'

  traefik:
    image: traefik
    command: -c /dev/null --web --docker --logLevel=INFO
    ports:
      - '80:80'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock

volumes:
  mysqldata:
    driver: "local"

Step 3. Add the .env file

Create a new file named .env at the project root to provide per-environment configuration.

# ENV 

# Database 

# Traefik host 
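As an illustration only, a filled-in .env might look like the following. The database variable names match what the wodby images expect, but treat them, along with the sample values, as assumptions and verify against the image documentation for your versions; TRAEFIK_HOST is the value interpolated into docker-compose.yml.

```shell
# Database — variable names expected by the wodby/mariadb image (verify for your version)
MYSQL_ROOT_PASSWORD=password
MYSQL_DATABASE=drupal
MYSQL_USER=drupal
MYSQL_PASSWORD=drupal

# Traefik host — interpolated as ${TRAEFIK_HOST} in docker-compose.yml
TRAEFIK_HOST=site.local
```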

Step 4. Starting the containers

To start the containers, execute the command docker-compose up -d, then grab some coffee or a beer and be patient while the images are downloaded to your local computer.

Step 5. Importing a database dump (optional)

You can import a previously exported DB dump by copying the dump file into the mariadb-init directory and uncommenting the following line in your docker-compose.yml file.

- ./mariadb-init:/docker-entrypoint-initdb.d

Step 6. Checking for used ports

One common issue you'll likely run into while starting the containers is finding the ports already in use. This usually means an instance of Apache, Nginx, MySQL, or another service is already running. To find out what is using a port, you can run this command in your terminal:

lsof -i :<PORT_NUMBER>

Useful docker-compose commands

Starting the containers, using detached mode

docker-compose up -d

Stopping the containers.

docker-compose stop

Destroying the containers

docker-compose down [-v]

NOTE: You can pass the -v flag to destroy the shared volumes as well. Be careful: this will destroy any data on the shared volumes between the container and the local machine.

Checking the logs

docker-compose logs -f <CONTAINER_NAME>

Executing CLI commands.

While working with containers, it is common to see developers ssh-ing into the machine to execute commands. To avoid this practice you can take advantage of the docker-compose exec command.

docker-compose exec <CONTAINER_NAME> <COMMAND_NAME>

Using Composer

Drupal 8 really takes advantage of Composer: you can install/uninstall dependencies and apply patches. It’s a good practice to run these commands inside your container, because if you run them with the PHP version on your local machine, you could install dependencies that are not suitable for your container instance.

docker-compose exec --user=82 php composer <COMMAND_NAME>

Using DrupalConsole

If you want to use DrupalConsole on your project you can add an alias file to the repo at console/sites/site.yml containing the following configuration.

local:
  root: /var/www/html
  extra-options: docker-compose exec --user=82 php
  type: container

After this file is added, you will be able to run commands locally but actually execute them on the container. To call DrupalConsole commands you can use:

drupal @site.local <COMMAND_NAME>

Find more information here: https://docs.drupalconsole.com/en/alias/using-site-alias.html

On the other hand, if you prefer the direct way to run the commands, you can use:

docker-compose exec --user=82 php drupal <COMMAND_NAME>

Wrapping up

The new setup worked really well on everyone’s computer, and we haven’t had any more issues since we switched. The project has since gone live, it was a great experience, and we plan to keep using Docker for future projects.

If you have the feeling that Docker’s architecture is hard to understand and complex to get up and running, you can take advantage of the projects mentioned earlier (Lando, Docksal, etc.) to make it easier to start working with containers.

Because the project was a Drupal site, we based our Docker configuration on the docker4drupal project by wodby. For other projects using technologies such as Symfony, ReactJS, or MeteorJS, we create our own custom Docker files and custom images.

Dec 26 2017

A few months ago I had the pleasure of starting a new journey in my professional career, joining the weKnow family. This was a natural step after collaborating in the last couple of years with Jesús and Enzo in open source projects like DrupalConsole. Right from the start, working to reach our projects’ milestones has been a really fun adventure, with lots of new knowledge and lessons learned along the way.

One of my first projects was leading the effort to rebuild weKnow’s new site. Most of you can probably relate to the fact that 'you are your toughest client', which is why we needed to strategize intensely before deciding on an approach. We treated this project as a functional prototype for the implementation of our new workflows in future projects with our clients and partners.

In this series of blog posts, I'll share how we developed the theme for the weKnow site using Pattern Lab. Drupal 8 is no longer an island, and there’s now a world of different options and tools to implement in our projects that make the development process much more efficient.

Component-Driven Theming

The adoption of component-driven theming in Drupal has increased significantly, due to its many advantages:

  • It helps break up huge, sophisticated projects into smaller pieces of software.
  • It narrows focus to the user experience and functionality of a single component at a time.
  • It eliminates having to deal with cumbersome Drupal quirks, aka “Drupalisms”.
  • You can begin work without depending on a Drupal installation.
  • Backend and frontend work can happen simultaneously.
  • Components are reusable and portable.

To make this work you need to define a stage to integrate with Drupal, and if this is your first time working with components, it can get a little complicated. That said, the objective of this blog post series is to share the lessons we learned on this process and make the path clearer for you.

Our Weapon of Choice

There are several tools that can help you start using components in your projects, each one with its own advantages. Some of the popular ones include:

We decided to go with Pattern Lab, a tool we liked for its concept of “Atomic Design”, which I'll expand on in detail in a bit.

Pattern Lab

Pattern Lab is a tool that facilitates the implementation of Atomic Design. It has extensive documentation and examples, supports PHP, and leverages Twig, the template engine of Drupal 8.

At its simplest, Pattern Lab is a static site generator that allows you to start building components for your Drupal website even before installing Drupal, which decreases the ramp-up time for developers and helps them jump in and start creating components right away.

Atomic Design

Atomic Design is a methodology for creating design systems, created by Brad Frost and based on five distinct levels:

  • Atoms
  • Molecules
  • Organism
  • Templates
  • Pages


Atoms are the smallest blocks of our components. We can think of them as the most basic HTML tags of our website, such as labels, buttons, inputs, images and even some simple components like a CTA made up of a background color, text, and a URL.


With molecules we start grouping our atoms to create some fundamental components for our website, for example a search form that is basically the grouping of an input, a label, and a search button.


In an organism, we are combining our molecules to build a more complex component and create distinct sections for our UI. An example could be the header region of a website, which groups molecules like the main nav and a search form, plus the main logo of the site.


In the template we put together all these pieces in one place and we can start seeing the behavior of our layout. This is the exciting moment when the look of our website really starts coming together, though at this point we still use placeholders to render our components. 


In this instance our templates are ready to use some real content with the purpose of showing what the future website will really look like.

Choosing our base Theme

For the Drupal community there are curated resources that can help you get on track to use Pattern Lab. In our case we decided to use Particle, previously known as Pattern Lab Starter, which out of the box offered:

  • Easy and fast setup.
  • Highly configurable gulp tasks.
  • No need to have Drupal installed to start working with it.
  • Generators, taking advantage of some really cool tools like Yeoman
  • Use of dummy data using Faker.

To start working with this theme, we first had to meet the following requirements on our local machine:

  • Node v6 (for using npm).
  • PHP 5.4, 5.5, 5.6 or 7.
  • Composer.

At the moment, there is no way to use this theme as a Composer dependency in our Drupal project, but this makes absolute sense, since it is not just a Drupal theme; it can also be used as a standalone project. That's why, before getting started on this theme, we needed to download the project's repo and place it in the themes directory. After that, we could rename the folder and the configuration files of our theme, as suggested in this documentation.

Once that part was done, we needed to download the dependencies for the project. Fortunately, Particle comes with some useful commands to make this process easier. In our case, we needed to run the following commands:

npm install
To download all node dependencies.

npm run setup
This is a combination of two commands: bower install, to download our frontend packages (this can also be managed by npm using yarn), and composer install, because Pattern Lab has its own PHP dependencies.

npm start
This command is the equivalent of running the gulp default task.

After this, you can access http://localhost:3050/pattern-lab/public/ and see your Pattern Lab style guide.

Particle’s commands to run the most common tasks include:

npm run compile
Compiles all source assets and creates the build of our theme. 

npm run test
Validates our source code according to the linter of our theme.

npm run update
Updates the Node and PHP dependencies of the project.

npm run new
This is the most used command in the development process. It executes a Yeoman generator that creates a new component with scss, twig, js and json files. After the component is created, it also updates the Pattern Lab configuration file and the libraries.yml file of our theme. In a future post we will cover this command in more detail.
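To visualize what the generator leaves behind, here’s a simulation of the resulting files for a hypothetical "card" molecule. The directory layout and file names are illustrative assumptions, not exact Particle output; check your theme after running the generator.

```shell
# Simulate the files `npm run new` might generate for a hypothetical
# "card" molecule (paths and names are illustrative, not exact output).
dir=/tmp/demo-theme/source/_patterns/02-molecules/card
mkdir -p "$dir"
touch "$dir/_card.scss"   # component styles
touch "$dir/card.twig"    # markup template
touch "$dir/card.js"      # component behavior
touch "$dir/card.json"    # dummy data consumed by Pattern Lab
ls "$dir"
```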

That’s it for now! Stay tuned for our follow-up blog posts, where we will dive into how to implement Pattern Lab in our Drupal project.

Dec 18 2017

By Kenny AbarcaCEO | December 18, 2017


weKnow is a fully distributed company, something we proclaim loudly and proudly to our partners and potential clients when engaging with them. It’s a characteristic that gives us a competitive edge because it highlights weKnow’s core values and the character of every individual that works here.

I decided to write this because our clients are always amazed by how seamlessly our operations and projects run; by the fact that we span 12 countries and cover 6 time zones, yet integrate into their projects from kickoff to completion without a hitch. This is how we keep things running smoothly…

Happy Accident

Back in 2013, working remotely was a benefit tied to tenure and seniority. Qualifying employees got to work from home 2 days a week and everything was working smoothly. However, one day we were shocked to receive an eviction notice from our landlord. It turns out he was renting the space to us without a business permit, and the authorities were shutting down his operation. At that point we had been thinking about going fully remote for some time, so we took advantage of the opportunity and made the jump to a fully distributed model.

It’s been 4 years since we started perfecting the methodology, certainly learning through our mistakes and making adjustments, especially because we moved from a local team to a globally distributed team. While the administrative complexities can be burdensome, they are easily overshadowed by the sheer quantity of talent at our disposal.

There are three pillars that we’ve identified as necessary to make a distributed team actually work. Those pillars are:

Hiring A Distributed Team

The key driving factor that led us to global distribution was expanding our scope for talent acquisition. Not all potential candidates are suitable for a globally distributed system. Most prospects are attracted to the concept of managing their own time and location, but are not prepared to handle the responsibilities. We often quip “with great power comes great responsibility” while talking to prospective employees.

Our personality interview includes questions related to self-motivation, strong communication skills, attention to detail, and enthusiasm, to determine if a candidate is suitable for a remote position.

Here’s an old post from Recruiter.com that best summarizes the personality types suitable for virtual work.


Lack of communication leads to conflict and mistakes. That’s why, while working remotely, everyone should maintain healthy communication with all teams, both internally (within the company) and externally (client teams).

We place a lot of emphasis on ‘raising your hand’ when help is needed; having a hero complex in a distributed system leads to a loss of time and ROI potential. Nobody should stay up all night fixing what would take another, more experienced team member five minutes to address or advise on.

Managing A Distributed Team

weKnow is a horizontal organization: all decisions and challenges are addressed at the same level. This also means that all employees, regardless of seniority, have direct access to anyone in the organization. This enhances the team’s problem-solving capacity and facilitates team-wide adoption of solutions.

For instance, we standardized holidays throughout the company: whether you’re based in Argentina or France, you take the 12 holidays of the Costa Rican calendar, because that’s our baseline. Individuals can trade holiday days, but by default we all share the Costa Rican holidays.

One challenge that proved to be a struggle was finding a time tracking and resources administration tool. We pretty much tried them all, but they either had too much for our needs or just weren’t a fit.

We decided it was time to build our own custom-fit web application to handle time tracking, resources, skills, requests, prospects, and more, in a way that syncs with our company’s style and personality. We even made it a cool internal project where developers transitioning to a new client could help out. It was built with React, MeteorJS and MongoDB, which was a very attractive challenge to our team, because they were new technologies, and thus a learning opportunity.

One of the features I like the most about ‘KeepTrack’, our tool to track pretty much everything at weKnow, is the Skills feature, where people input their skills and rate how much experience they have with each one. That same feature also hosts our technology radar, covering the skills our team has as well as the ones we lack or should invest in for the near future.

This is already helping our Capabilities team identify who’s best suited for an upcoming project without having to ask our IT team. The technology radar feature is helping our team learn about technologies that are taking off, so they can focus on learning those frameworks or languages.

Our Tools For Managing A Distributed Team

Here's the mix of tools we use to make this work:

Hope you find this article useful. If you already have a distributed team, this might help you improve the model; and if you are thinking about implementing one, you can certainly make good use of a tip or two.

Please comment or reach out if you have any questions or would like us to expand on a specific aspect of managing distributed teams.

Dec 14 2017

By Jesus Manuel OlivasHead of Products | December 14, 2017


It’s been a little over a year since weKnow came to life as a rebranding for Anexus, which allowed me to join the company as a new business partner. My main goal within the company is directing our efforts to explore new fields and technologies (that’s right, we are not just a Drupal shop anymore!)

As a web solution provider, having a website that accurately reflects what we do is a challenging task, because usually our plate is full with client work, and it’s not uncommon to put your own website at the end of the queue. This is why last year we decided to put together a basic landing page while setting aside some time to work on the company image as part of the rebranding.

Come 2017, we had the designs ready and started working on the architecture of the site, with the intention of taking advantage of the techniques, technologies, and lessons we learned while working on our clients’ projects, which led to the successful launch of our revamped site!

During the upcoming weeks, we plan to publish a series of blog posts in which we will explain the tools we used, the modules we selected, and the general reasoning that led to the approach we took. Some of the topics that we will discuss include: 

  • Component-Driven design using Pattern Lab.
  • Local development using Docker, Docker Compose, and Traefik.
  • Using Composer to manage project dependencies.
  • Site Configuration Management.
  • Content Synchronization.
  • Continuous Integration workflow and Continuous Deployments.

Stay tuned!

Nov 16 2017

By Eduardo GarcíaCTO | November 16, 2017


Before the Drupal Console existed as a project, it all began with an idea to make Drupal 8 better. Every great invention/innovation begins with an idea, and the Drupal transition from 7 to 8 came with massive changes to the fundamental operating procedures of yesterday. Symfony components were making a splash in Drupal core.

Jesus and David, the initiators of the Drupal Console project, came up with the idea of including the Symfony Console in Drupal core, the same way other Symfony components were being included.

People Working

Powering Through Frustration

As helpful as the Drupal Console project is nowadays, it wasn’t widely accepted by the Drupal community initially. In fact, it turned out to be a huge challenge to get anyone to listen to the idea. For Jesus and David, the primary objective of including the Symfony Console in Drupal was to have code generators, the same way the Symfony community does. Who wouldn’t want that? A way to automate the annoying redundancies that plague developers everywhere. So they decided to propose the idea to the Drupal core maintainers via the issue queue. That idea was, however, quickly dismissed.

After a few attempts to request the inclusion and trying to collaborate on different Drupal projects, it dawned on Jesus and David that inclusion and collaboration were not going to happen. They needed to regroup and find a better approach.

While at lunch at DrupalCamp Costa Rica, Jesus and David were casually discussing the frustrations they had encountered trying to bring innovation to Drupal and related projects, when Larry Garfield chimed in: “someone needs to create a separate project that includes Symfony Console and code generation”. That sentence gave birth to the Drupal Console project as you know it today.

someone needs to create a separate project that includes Symfony Console and code generation.

Building A Community

Jesus stacked up his calendar with almost every Drupal event in the U.S. The goal was to talk about the project in sessions at all Drupal community gatherings he could physically attend, or at minimum, present the idea at BOFs where sessions were not possible. The code sprints helped him interact with developers and users forming a critical source of feedback.

Along the way, he convinced me to join the project as a maintainer. I also embarked on his outreach campaign to help spread the word; only, my campaign was global, because it was important to reach non-English speakers, who often feel left out of major open source projects. Currently, the Drupal Console project is available, with some variations, in the 18 languages listed below.

  • English

  • Spanish

  • Catalán

  • French

  • Korean

  • Hindi

  • Hungarian

  • Indonesian

  • Japanese

  • Marathi

  • Punjabi

  • Brazilian Portuguese

  • Romanian

  • Russian

  • Tagalog

  • Vietnamese

  • Chinese Simplified

  • Chinese Traditional

One Million Downloads! 

After four years of development, in July 2017 we reached our first million downloads across different releases. This achievement is thanks to our more than 250 contributors across the globe.

This brought a great sense of validation for deciding to stick to our guns, do the right thing and most importantly... be globally inclusive.

Oct 31 2017

By Michael KinyanjuiMarketing | October 31, 2017


BADCamp holds a special place in the history of weKnow. Kenny Abarca and Enzo Garcia, co-founders of the company, had their first foray into the Drupal community at BADCamp 7 years ago. Enzo and Jesús met at BADCamp, and this meeting changed the trajectory of the Drupal Console as we know it. So this time around, Jesús and Omar had the ‘grueling’ task of representing the weKnow team in the Bay Area. We highly recommend placing BADCamp on your bucket list; there are no words to describe the atmosphere, maybe… ‘the Woodstock of the Drupal calendar’?

So here are our highlights after interviewing Jesús and Omar. Besides the social events, BADCamp offers a lot of insight into innovations and the general direction of the Drupal project.

Omar goes to BADCAMP 2017

Omar noticed that the camp focused on, and was split between, two themes that are currently abuzz in the Drupal kingdom: theming and DevOps. For theming, he was quick to point out how Twig has continued to make a massive impact on the frontend sector.

“The arrival of engine templates like Twig, has helped avoid ‘drupalism’ in how we create themes for Drupal, this is a great step for the productivity during the development process, the theming focuses on components, allowing us to work independently in the theming layer of the Drupal backend and site building, without the need to think in an integrated architecture perspective. The complexity to a big group of front end developers, without Drupal knowledge in theming, was one of the main reasons to use decoupled architecture in Drupal 8, it’s definitely one of the best things Drupal 8 has to offer” Omar added.

According to Omar, discussions around the DevOps subject seemed to revolve mostly around enterprise development needs. This ranged from workflow to testing and using bots to automate the quality assurance.

Jesús goes to BADCAMP 2017

Jesús took the opportunity to give a talk on creating, building, testing and deploying Drupal 8. He shared tools that are helpful in the process of setting up the local development environment and how to make use of them for a successful deployment.

“Everything is Docker!”

That was Jesús’s simple reply to the question “What were your BADCamp highlights?”

Jesús added that it looked like every Drupal developer was attracted to Docker’s flexibility, and that it was quickly becoming the preferred setup for the local development environment. He based his observation on the emergence of a myriad of tools that simplify Docker setup, such as Lando, Docksal, and DrupalVM.

“This is a natural evolution when you realize creating and building a Drupal 8 site is more complex than previous versions. In order to start working on a Drupal 8 site, your local environment requires some tools; a package manager as composer for handling the site dependencies and talking about theming something similar is happening since you will be probably using gulp or npm.” Jesús added.

Serverless + FaaS

As Jesús has come to learn, a lot of the nuggets of innovation are hidden deep in the hallways of Drupal events. One such nugget was acquired in a conversation with Thom Toogood from Australia. Thom introduced him to ‘Composer as a Service’, a project he is working on, built on an open source framework that you can use to run any CLI-driven binary program embedded in a Docker container… making it a Function as a Service.

GatsbyJS Drupal Plugin

One of the main highlights for Jesús came in the last session he attended, about a static site generator, GatsbyJS. It’s based on ReactJS and packs some awesome benefits, to name a few:

  • Takes advantage of React.js, Webpack and modern JavaScript and CSS.
  • Pre-builds pages as flat files to avoid hitting expensive server-side resources.
  • Pre-fetches page resources such as JS, CSS and other pages, which makes navigating the site feel really fast.

The session also included a live demo showing the Gatsby source Drupal plugin that allows you to build static sites using Drupal as the source data.

Interesting Tools Discovered at BADCAMP

Most Interesting Sessions of BADCAMP 2017

Oct 13 2017

By Michael Kinyanjui, Marketing | October 13, 2017

The weKnow team is excited to announce that we’ll be attending BadCamp October 18-21. BadCamp is a celebration of open source software and one of the most prominent events in the Drupal universe. We take great pride in our track record for giving back to the open source community, so we are also happy to announce that Jesus and Omar will be holding a training session on hands-on Drupal 8 module development using Drupal Console.

Hands on Drupal 8 Module Development using DrupalConsole

Over the years we’ve met many open source enthusiasts at BadCamp, so feel free to say hello… you can’t miss us in our weKnow gear!

Jul 27 2017

By Kenny Abarca, CEO | July 27, 2017

We are excited to announce our line-up for the 2017 Drupal Camp Costa Rica. As proud members of a great community in Costa Rica, weKnow is committed to growing the community by sharing information and insights. We also take this opportunity to thank our team members for consistently sharing knowledge with the Drupal community in Costa Rica, as well as around the world in our global outreach initiatives.

Here are the topics of our sessions; they span all expertise levels as well as a range of technologies:

All sessions will be recorded and uploaded to the web in case you can’t make it to the camp.

Mar 14 2017

By Kenny Abarca, CEO | March 14, 2017

After all the hard work we have been putting into building weKnow as a company that primarily focuses on training, we are excited to announce that the company is reaching one of its first milestones: providing a training at a DrupalCon. This goal becomes a reality in Baltimore, where we will be presenting Mastering Drupal 8 Development.

The training, created by Jesus Olivas and Enzo García will provide an introduction to the most important changes for developers in Drupal 8, allowing attendees to learn by practicing, while at the same time providing a solid knowledge of the process of building modules for Drupal 8.

During the workshop, students will create a custom module and other components by using various APIs, plugins, and hooks. By the end of the training, trainees will have a better understanding of Drupal 8 and of how the introduction of Symfony components is changing the way modules should be written.

Originally the training was going to be given by two trainers, but there was such an overwhelming response from attendees that we increased the number of trainers to four in order to provide the best training experience for everyone.

Sign up now! There are still some tickets left to attend the training but they are selling out quickly.

Additionally, we also have a presentation session called “Improving your Drupal 8 Development Workflow”. Make sure you stop by and say hello.
