Nov 06 2018

By Jesus Manuel Olivas, Head of Products | November 06, 2018

Over the course of this year, at several events (SANDCamp, DrupalCamp LA, DrupalCon Nashville, and DrupalCamp Colorado), I had the chance to show how we at weKnow approach the development of API-driven applications. For those of you who use Drupal, this is something like decoupled or headless Drupal, but without the Drupal part.

This article outlines weKnow’s approach and provides some insight into how we develop these web applications.

Yes, this may sound strange, but whenever we need to build an application that is not content-centric, we use Symfony instead of Drupal. What are those cases? Whenever we do not require the out-of-the-box functionality that Drupal offers, such as content management, content revision workflows, field widgets/formatters, views, and managing data structures from the UI (content types).

Why we still use PHP

We know the language well: we have extensive experience working with PHP, Drupal, and Symfony, and we decided to take advantage of that knowledge to build API-driven applications.

Why API Platform

API Platform is a REST and GraphQL framework that helps you build modern API-driven projects. It provides an API component built on Symfony 4, Flex, and Doctrine ORM, along with client-side components, a React-based Admin, and a Docker configuration ready to start up your project with a single command. This lets you take advantage of thousands of existing Symfony bundles and React components.
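To give a feel for the approach, here is a minimal sketch of exposing a Doctrine entity through API Platform; the Book entity and its fields are hypothetical, not taken from one of our projects:

<?php
// src/Entity/Book.php (hypothetical example entity)
namespace App\Entity;

use ApiPlatform\Core\Annotation\ApiResource;
use Doctrine\ORM\Mapping as ORM;

/**
 * Marking the entity with @ApiResource gives us CRUD REST endpoints
 * (GET/POST /books, GET/PUT/DELETE /books/{id}) with no controller code.
 *
 * @ApiResource
 * @ORM\Entity
 */
class Book
{
    /**
     * @ORM\Id
     * @ORM\GeneratedValue
     * @ORM\Column(type="integer")
     */
    public $id;

    /**
     * @ORM\Column(type="string")
     */
    public $title;
}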

Wrapping up

Our developers’ expertise across different technologies has allowed us to deliver a great time to market while developing client projects. We also like sharing: if you want to see this session live, probably for the last time, join me at DrupalCamp Atlanta.

Video from DrupalCon Nashville is available on the Drupal Association YouTube channel.

You can find the latest version of the slides from DrupalCamp LA here.

Nov 05 2018

By Manuel Santibanez, Front-end Developer | November 05, 2018

weKnow gave me the opportunity to attend my first BADCamp as part of the team that represented the company at this awesome event.

On the first day I attended the Drupal Frontend Summit, a roundtable format that I had not experienced before. It was very rewarding to discuss my experience as a developer who has worked with accessibility guidelines, sharing the tools and strategies that I have used to implement such an important standard.

Lots of great sessions shared valuable knowledge that allowed me to leave BADCamp as a better developer!

My top picks:

Without a doubt, the new kid on the block was Gatsby, a piece of technology that takes the development of sites to a new level.

In my opinion, and I may be a little biased here, one of the best talks of the camp was "How to keep Drupal relevant in the Git-based and API-driven CMS era", given by Jesus Manuel Olivas. This session opened a great discussion about Drupal’s vision, touching on how it can integrate to become a fundamental piece in the scheme of modern web development technologies and strategies, allowing Drupal to focus on what it does best: managing content.

As a final note, I would like to highlight that the UC Berkeley venue was great: an awesome lounge area with coffee to keep us energized all day, pinball machines that were a great surprise, and a special mention for the waffles!

Thanks for everything! I hope to return next year, this time proposing a talk and sharing my own knowledge and experience!

Nov 05 2018

By Harold Juárez, Full Stack Developer | November 05, 2018

BADCamp 2018 was the first really big event I attended, aside from actively participating in DrupalCamp Costa Rica for three years. Kindly enough, some co-workers who had already attended shared their experience with me, which set great expectations. In addition, I was excited to sightsee in San Francisco and Berkeley.

After dedicating this year to front-end work, the BADCamp sessions left me more than satisfied, with refreshed knowledge and practices. So I would like to share my experience and the content of the sessions I participated in:

The second day was a highlight: attendees were given challenges and tools, and the dialogue tables enriched my personal experience by letting me listen to others talk about ways to improve application development.


On Friday, the Pattern Lab sessions were quite interesting, practicing the creation of themes without relying on a backend. Although I had used this tool before, the sessions gave me new knowledge to improve its implementation at work.

The potential of React + Gatsby to create static sites was explored, and I learned compelling ways to take advantage of these new tools to improve the performance of an application, using React to render the page and Drupal as an API for the data. This was covered by my co-worker Jesus in his session "How to keep Drupal relevant in the Git-based and API-driven CMS era".


On Saturday I attended an accessibility session that showed tools for people with different types of disabilities; some are paid and some free to implement on a site, depending on the needs of the specific project.

Another talk that caught my attention was about Artificial Intelligence in Drupal, using the Google Cloud Vision API to give sites image tagging plus face, logo, and explicit content detection through machine learning.

It was a fantastic experience and I am very grateful to weKnow for helping me attend. It was a great success that I hope to repeat in the near future!


Oct 23 2018

By Veronica Wheelock, Marketing | October 23, 2018

Autumn is in the air… and part of the weKnow team is heading to BADCamp 2018, each one of them excited to share experiences and our team culture, and to contribute to strengthening ties among the members of the Drupal community.


This is a very special BADCamp edition, as it marks a milestone in weKnow’s journey. Back in 2011, this was one of the first Drupal events that we attended in the USA. This year we increased our number to 8 attendees, and we proudly became one of the event’s sponsors.

BADCamp is simply the biggest DrupalCamp in the world, reporting 1,300 attendees in 2017 and featuring summits, sessions, and training for the benefit of the open source community. The event will be held at UC Berkeley from October 24th to 27th, 2018.

weKnow’s team at BADCamp is backed and led by our CTO Jesús Manuel Olivas, one of the co-maintainers of Drupal Console. He will be speaking on the hottest topics of the event: decoupled Drupal, APIs, React, and GatsbyJS. Don't miss his session, How to keep Drupal relevant in the Git-based and API-driven CMS era.

Sponsoring an event like this is the best way we have found to support the countless developers from all around the world who believe in the strong values of a collaborative, open source community.

This is weKnow’s full roster for BADCamp 18:

  1. Jesús Manuel Olivas / CTO at weKnow Inc.
  2. Omar Aguirre / Products Division & DevOps
  3. Jorge Valdez / Drupal Full Stack Developer
  4. Joseph Zamora / Drupal Frontend Developer
  5. Miguel Castillo / Drupal Backend Developer
  6. Harold Juarez / Drupal Full Stack Developer
  7. Manuel Santibañez / Drupal Full Stack Developer
  8. Heissen Lopez / Drupal Frontend Developer

We hope to meet you there!


Jun 28 2018

Drupal 8 provides the option to include an AJAX callback within our applications using the AJAX framework. Several functions come ready to use: methods to hide/show elements in the HTML document, attach content to an element, redirect a page after a submit, and so on. Sometimes, though, we need to implement something particular with custom JS code, and those out-of-the-box functions are not enough. Fortunately, we can also create our own custom responses. So, let’s start by creating a new AJAX callback for a custom form submission.

Since we are using Drupal 8, we can take advantage of Drupal Console to easily generate the necessary boilerplate code. That said, before continuing, make sure you have the latest Console version (1.6.0); if you do not have it yet, you can follow the instructions on the official docs page.

Generating the custom Module

The first step is to define the module where the code will be generated. If you don't have a custom module, you can create a new one by executing the following Drupal Console command:

drupal generate:module \
--module="example" \
--machine-name="example" \
--module-path="modules/custom" \
--description="My Awesome Module" \
--core="8.x" \
--package="Custom" \
--module-file \
--no-interaction

Creating AJAX Command

Now that we have a home for our new AJAX command, we can create the command itself. Starting in Drupal Console 1.6.0, drupal generate:ajax:command generates a custom AJAX command.

After entering the command, you will be prompted for several pieces of information needed to generate the boilerplate code for the command.

We can also specify all the options up front:

drupal generate:ajax:command  \
--module="example" \
--class="ExampleCommand" \
--method="example" \
--js-name="example" \
--no-interaction

Once complete, we can inspect what was generated.

The AJAX Command Class

The generated AJAX command class implements CommandInterface, whose one mandatory method is render():

<?php

namespace Drupal\example\Ajax;

use Drupal\Core\Ajax\CommandInterface;

/**
 * Class ExampleCommand.
 */
class ExampleCommand implements CommandInterface {

  /**
   * Render the custom AJAX command.
   *
   * @return array
   *   The command array sent to the client.
   */
  public function render() {
    return [
      'command' => 'example',
      'message' => 'My Awesome Message',
    ];
  }

}

The render() method must return an array that will be serialized and sent to the client. In this case, 'example' is the JavaScript command we will invoke later inside the *.js file. We can also define other properties on this response, for example a custom message.

The JavaScript file

Now let's take a look at the second generated file, example.js, located inside your custom module. Here, we read the message property from the response object and display the result in the browser's console. This is a really basic example, but you can implement more robust actions depending on your requirements; in fact, we could reuse AJAX functions from core to interact with our response.

(function ($, Drupal) {

  /**
   * Add the new custom command.
   */
  Drupal.AjaxCommands.prototype.example = function (ajax, response, status) {
    console.log(response.message);
  };

})(jQuery, Drupal);

While our example is pretty simple -- it just writes to the browser’s console -- you can make any command you like!

The library definition file

In addition to the above boilerplate, Drupal Console also generated a library definition file:

example-library:
 js:
   js/example.js: {}
 dependencies:
   - core/drupal.ajax

The library definition tells Drupal what JavaScript file(s) need to be loaded and where to find them relative to the module’s directory. We can also declare any required dependencies, such as core/drupal.ajax above, using the dependencies key.

Invoking our custom command

First we need to attach the library to the rendered page. There are lots of ways to do this; for this post, we want to attach it to a specific form, so we’ll define a form alter:

function example_form_alter(&$form, FormStateInterface $form_state, $form_id) {

  /**
   * Apply the form alter only to a specific form #id.
   * The form #id can be found by inspecting the markup.
   */
  if ($form['#id'] == 'custom-form') {
    /**
     * Attach the library defined in example.libraries.yml.
     */
    $form['#attached']['library'][] = "example/example-library";
  }

}

We alter the $form array by adding a new item under #attached. Since we’re attaching a library, we also use the library key. Finally, we specify our module name (example) and then our library name (example-library).

The above only attaches the necessary JavaScript. Now we need to put it to use! To do that, we need to modify the submit callback of the form to add our custom AJAX command:

/**
 * {@inheritdoc}
 */
public function exampleSubmitForm(array &$form, FormStateInterface $form_state) {
  $ajax_response = new AjaxResponse();
  $ajax_response->addCommand(new ExampleCommand());

  return $ajax_response;
}
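One piece this post doesn’t show is how the form triggers AJAX in the first place: the submit element needs an #ajax callback pointing at our method. A minimal sketch, assuming a form class in the same example module (the element names here are illustrative):

// Hypothetical snippet from the form's buildForm() method.
$form['actions']['submit'] = [
  '#type' => 'submit',
  '#value' => $this->t('Submit'),
  '#ajax' => [
    // Drupal will call this method and send its AjaxResponse
    // back to the client instead of reloading the page.
    'callback' => '::exampleSubmitForm',
  ],
];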

To add our command, we need to tell Drupal to return only AJAX content, rather than a full web page. In the form submit function we create a new AjaxResponse and call its addCommand() method to add our custom command.

Put together, our custom AJAX command will be triggered every time the exampleSubmitForm() method is called, and our custom message will appear in the browser’s console.

Wrap up

While Drupal Console makes creating a new AJAX command easy, you can also do it manually:

  1. Create a custom command class implementing CommandInterface.

  2. Add your custom actions to the Drupal.AjaxCommands.prototype object.

  3. Create a library definition specifying the location of your custom JavaScript.

  4. Include core/drupal.ajax as part of the library dependencies.

  5. Add your custom AJAX command by calling addCommand() on the AJAX response object.

Creating a custom AJAX command isn’t complex in Drupal 8. Once you add the command to the AJAX response, you can create all kinds of amazing user experiences!

May 09 2018

Some operations are time consuming, or really memory and/or CPU intensive. By performing such an operation once and caching the output, subsequent requests can be served faster. Drupal provides an easy cache API to store, retrieve, and invalidate cached data. I wrote this tutorial because I couldn’t find a step-by-step guide to adding cache metadata to render arrays!

In this tutorial we'll:

  • Get an overview of render array caches and how to use them properly.
  • Get our hands dirty with code.

Prerequisites

  • Familiarity with custom module development.
  • Knowing how to create a custom controller to process incoming requests.
  • Some knowledge of render arrays.

Overview of render arrays

Drupal uses render arrays to generate the HTML that is presented to the end user. While render arrays are a complex topic, let’s cover the basics. A render array is an associative array that represents one or more HTML elements, properties, and values. If you’re interested in learning more, see Render arrays in the official Drupal docs.

Adding cache metadata to a render array

When we have a render array, instructing Drupal to cache the results is easy: we only need to use the #cache property. But what kind of caching? Drupal 8 provides several kinds out of the box, as the sketch after this list shows:

  • max-age caches the data for a given number of seconds, expressed as an integer.
  • tags is an array of one or more cache tags identifying the data this element depends on.
  • contexts specifies one or more cache context IDs. These are converted to a final value depending on the request. For instance, 'user' is mapped to the current user's ID.
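All three can be combined on a single element. As a quick sketch before we build the example module (the values here are illustrative, not part of the tutorial code):

$build = [
  '#markup' => 'Hello world',
  '#cache' => [
    // Keep this element for at most an hour.
    'max-age' => 3600,
    // Vary the cached copy by the URL query string.
    'contexts' => ['url.query_args'],
    // Invalidate whenever node 1 changes.
    'tags' => ['node:1'],
  ],
];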

Creating the module and controller

Let's use Drupal Console to generate a module to hold our experiments:

$ drupal generate:module --machine-name=d8_cache

A module alone isn’t enough. We also need a controller to respond to incoming requests. We can use Drupal Console to generate the controller too:

$ drupal generate:controller --module=d8_cache --class=DefaultController

When creating the controller, you’ll enter a loop where you provide the three pieces of information the controller needs to define a route: the title, the method name, and the path. Let’s make one route for each of the cache types:

Title              | Method Name        | Path
cacheMaxAge        | cacheMaxAge        | /d8_cache/max-age
cacheContextsByUrl | cacheContextsByUrl | /d8_cache/contexts
cacheTags          | cacheTags          | /d8_cache/tags

Now we should have an *.info.yml, a *.routing.yml, and our controller class. Finally, let’s enable our custom module:

$ drupal module:install d8_cache

Cache “max-age”

With the module and routes created, we can now start playing with Drupal caching. In DefaultController.php, locate the cacheMaxAge() method and add the following:

public function cacheMaxAge() {
  return [
    '#markup' => t('Temporary by 10 seconds @time', ['@time' => time()]),
    '#cache' => [
      'max-age' => 10,
    ]
  ];
}

If we open a web browser and navigate to http://your_drupal_site.test/d8_cache/max-age, we see “Temporary by 10 seconds timestamp”, where timestamp is the current time as a UNIX timestamp.
“What good is that!?” you might ask. Well, if you refresh the page you’ll notice something interesting. The first time, the page will say something like “Temporary by 10 seconds 1520173780”. If we hit refresh immediately, we’ll see:

“Temporary by 10 seconds 1520173780” (the first second)

“Temporary by 10 seconds 1520173780” (in the next second)

“Temporary by 10 seconds 1520173780” (and so on)

The timestamp doesn’t change! If we wait the whole 10 seconds we specified in max-age, the cache invalidates/expires and is replaced with a new timestamp: “Temporary by 10 seconds 1520173790”.

Great, this worked like a charm!

What if we want to make it so the page never expires? Drupal provides a special constant, \Drupal\Core\Cache\Cache::PERMANENT, exactly for this case. We only need to change the value of max-age:

public function cacheMaxAge() {
  return [
    '#markup' => t('WeKnow is the coolest @time', ['@time' => time()]),
    '#cache' => [
      'max-age' => \Drupal\Core\Cache\Cache::PERMANENT,
    ]
  ];
}

And the message, for instance “weKnow is the coolest 1520173780”, will never change! Well, not “never”: we can force the page to update by clearing the Drupal cache, either under Admin > Config > Development > Performance or using Drupal Console:

$ drupal cr all

So that was max-age, one of the simplest caching strategies. What if we need something more...nuanced?

Cache “contexts”

Caching by contexts lets us specify a condition by which something remains cached. A simple example is the URL query, i.e. anything after the ? in a URL. We already defined the route earlier, so we open DefaultController.php and edit the cacheContextsByUrl() method:

public function cacheContextsByUrl() {
  return [
    '#markup' => t('WeKnow is the coolest @time', ['@time' => time()]),
    '#cache' => [
      'contexts' => ['url.query_args'],
    ]
  ];
}

The above piece of code will display a message such as “weKnow is the coolest 1520173780” and invalidate the cache whenever a query parameter in the URL is added or changed.

If we visit, for instance, http://your_drupal_site.test/d8_cache/contexts the first time, we’ll see something like “weKnow is the coolest 1520173780”. If we hit it again, the same message is displayed. But if we add a query parameter, like http://your_drupal_site.test/d8_cache/contexts?query_a=value, then the cache is invalidated and the page updates with a new timestamp: “weKnow is the coolest 1520173909”.

Sometimes, we only want to invalidate the cache based on a specific argument in the URL query. We can do that too:

public function cacheContextsByUrlParam() {
  return [
    '#markup' => t('WeKnow is the coolest @time', ['@time' => time()]),
    '#cache' => [
      'contexts' => ['url.query_args:your_query_param'],
    ]
  ];
}

Now if we visit the following URL:

 http://your_drupal_site.test/d8_cache/contexts-param?your_query_param=value

Only then does the message change: “weKnow is the coolest 1520173909”. If we visit the same URL with a different value for your_query_param, the cache is invalidated and we get a new timestamp once again:

“weKnow is the coolest 1520173910”
And so on…

The url.query_args:your_query_param value we passed to contexts in our render array instructs Drupal to vary and invalidate the cache based only on that specific URL query parameter.

If we visit:

http://your_drupal_site.test/d8_cache/contexts-param?this_is_another_query_param=value

The message is “weKnow is the coolest 1520173910” (first second)
“weKnow is the coolest 1520173910” (next second)
“weKnow is the coolest 1520173910” (after few minutes)

And so on!

Notice the message doesn’t change. This is because we set the cache to vary on the query param your_query_param, and the URL above uses a different one. Since your_query_param is not in our URL, Drupal will never invalidate the cache.

Caching by the URL query isn’t the only context available in Drupal. There are several others:

  • theme (vary by negotiated theme)
  • user.roles (vary by the combination of roles)
  • user.roles:anonymous (vary by whether the current user has the 'anonymous' role or not, i.e. "is anonymous user")
  • languages (vary by all language types: interface, content …)
  • languages:language_interface (vary by interface language — LanguageInterface::TYPE_INTERFACE)
  • languages:language_content (vary by content language — LanguageInterface::TYPE_CONTENT)
  • url (vary by the entire URL)
  • url.query_args (vary by the entire given query string)
  • url.query_args:foo (vary by the ?foo query argument)

Refer to the official Drupal 8 cache contexts documentation for more details.

Cache “tags” 

The contexts cache type is really versatile, but sometimes we need more precise control over what is and isn’t cached. For that, there are tags. Open the controller and modify the cacheTags() method to be the following:

public function cacheTags() {
  $userName = \Drupal::currentUser()->getAccountName();
  $cacheTags = User::load(\Drupal::currentUser()->id())->getCacheTags();
  return [
    '#markup' => t('WeKnow is the coolest! Do you agree @userName ?', ['@userName' => $userName]),
    '#cache' => [
      // Use $entity->getCacheTags() instead of hardcoding "user:2"
      // (where 2 is the uid) or trying to memorize each tag pattern.
      'tags' => $cacheTags,
    ]
  ];
}

Ok, now let’s log in with our username  -- this post uses “Eduardo” -- and visit:

http://your_drupal_site.test/d8_cache/tags

The above code prints “weKnow is the coolest! Do you agree Eduardo?” If we hit the page again it will say the same, as will subsequent requests.

If we edit our own username to “EduardoTelaya” and hit save, our tag-cached page changes:

“weKnow is the coolest! Do you agree EduardoTelaya?”

Why is that?

If you look closely at the method, you’ll notice we get the list of cache tags for the current user. If we use a debugger to inspect the value of $cacheTags, it will say “user:userID”, where userID is the user’s unique ID number. When we updated our user account, Drupal invalidated any cached content associated with that tag. Cache tags let us build a dependency in our cache on one or more other entities in the site. We can even define our own tags to have full control, as sketched below!
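As a sketch of that last point (the tag name my_module:featured_list is made up for illustration, not part of this tutorial), a custom tag is just a string you attach to render arrays and invalidate when your data changes:

use Drupal\Core\Cache\Cache;

// Attach a custom tag to any render array element.
$build['#cache']['tags'][] = 'my_module:featured_list';

// Later, anywhere in code, when the underlying data changes:
Cache::invalidateTags(['my_module:featured_list']);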

Tips and tricks

In the above examples we only had one #cache in each render array. Drupal allows us to specify caching at different levels of the tree, depending on need. Let’s suppose we have the following tree of render arrays:

public function cacheTree() {
  return [
    'permanent' => [
      '#markup' => 'PERMANENT: weKnow is the coolest ' . time() . '<br>',
      '#cache' => [
        'max-age' => Cache::PERMANENT,
      ],
    ],
    'message' => [
      '#markup' => 'Just a message! <br>',
      '#cache' => [
      ]
    ],
    'parent' => [
      'child_a' => [
        '#markup' => '--->Temporary by 20 seconds ' . time() . '<br>',
        '#cache' => [
          'max-age' => 20,
        ],
      ],
      'child_b' => [
        '#markup' => '--->Temporary by 10 seconds ' . time() . '<br>',
        '#cache' => [
          'max-age' => 10,
        ],
      ],
    ],
    'contexts_url' => [
      '#markup' => 'Contexts url - ' . time(),
      '#cache' => [
        'contexts' => ['url.query_args'],
      ]
    ]
  ];
}

If we visit the first time http://your_drupal_site.test/d8_cache/tree:

We get this:

PERMANENT: weKnow is the coolest 1520261602
Just a message! 
--->Temporary by 20 seconds 1520261602
--->Temporary by 10 seconds 1520261602
Contexts url - 1520261602

(Timestamps are shown for example purposes.)

If we visit the same page again in the next second, we get the same message. But once 10 seconds have passed, the cache is invalidated thanks to the render array element child_b (which was set to expire/invalidate after 10 seconds), and we get a different message:

PERMANENT: weKnow is the coolest 1520261612
Just a message! 
--->Temporary by 20 seconds 1520261612
--->Temporary by 10 seconds 1520261612
Contexts url - 1520261612

Notice how not only child_b was updated, but also the rest of the render array elements. The same will happen if you wait 20 seconds, or if you visit /d8_cache/tree?query=value, which invalidates the cache according to the URL query contexts.

This is called cache “bubbling up”, and it can affect the cacheability of the response as a whole! To avoid that, you should use the keys attribute to cache individual elements. By adding keys you protect an element from cache invalidation triggered by its sibling and child elements. Let’s add a new method and path to our code in order to add keys:

public function cacheTreeKeys() {
  return [
    'permanent' => [
      '#markup' => 'PERMANENT: weKnow is the coolest ' . time() . '<br>',
      '#cache' => [
        'max-age' => Cache::PERMANENT,
        'keys' => ['d8_cache_permanent']
      ],
    ],
    'message' => [
      '#markup' => 'Just a message! <br>',
      '#cache' => [
        'keys' => ['d8_cache_time']
      ]
    ],
    'parent' => [
      'child_a' => [
        '#markup' => '--->Temporary by 20 seconds ' . time() . '<br>',
        '#cache' => [
          'max-age' => 20,
          'keys' => ['d8_cache_child_a']
        ],
      ],
      'child_b' => [
        '#markup' => '--->Temporary by 10 seconds ' . time() . '<br>',
        '#cache' => [
          'max-age' => 10,
          'keys' => ['d8_cache_child_b']
        ],
      ],
    ],
    'contexts_url' => [
      '#markup' => 'Contexts url - ' . time(),
      '#cache' => [
        'contexts' => ['url.query_args'],
        'keys' => ['d8_cache_contexts_url']
      ]
    ]
  ];
}

If we now visit /d8_cache/tree-keys, we get:

PERMANENT: weKnow is the coolest 1520261612
Just a message! 
--->Temporary by 20 seconds 1520261612
--->Temporary by 10 seconds 1520261612
Contexts url - 1520261612

And if we wait for 10 seconds we are going to see:

PERMANENT: weKnow is the coolest 1520261612
Just a message! 
--->Temporary by 20 seconds 1520261612
--->Temporary by 10 seconds 1520261622
Contexts url - 1520261612

Notice how only “--->Temporary by 10 seconds 1520261622” gets updated while the rest of the output stays the same, thanks to the keys attribute preventing cache invalidation from spreading to the other array elements.

Download

You can download full source code for this post on Github.

Recap

In this post, we got an overview of render arrays and saw how to use three different cache types. We used max-age for simple, time-based caching. Cache contexts provide a caching strategy based on a variety of dynamic conditions. The tags cache type lets us invalidate caches based on activity on other entities, or gives us full control via custom tag names. Finally, we used cache keys to protect elements against other cache invalidation within a render array tree.

This is it! I hope you enjoyed this tutorial! Stay tuned for more!

This post was contributed by Eduardo Telaya, a former member of the weKnow team. You can find him on Twitter at @Edutrul, or speaking at Drupal events in Latin America such as Drupalcamp Costa Rica.

Mar 07 2018

By Jesus Manuel Olivas, Head of Products | March 07, 2018

The Configuration Management (CM) system is probably one of the most well-known and exciting features of Drupal 8. But wouldn't it be even more awesome to be able to install a site, export its configuration, and then re-install the site from scratch, importing the previously exported configuration?

For those who are not yet clear on what we are talking about, this post is related to fixing the infamous exception error message when importing configuration:

"Site UUID in source storage does not match the target storage."

Why would you want to be able to install your site from an existing configuration?

A couple of big reasons come to mind:

  • Automate the creation of reproducible build/artifacts from scratch at any stage (Development, QA, Production) to test, launch or deploy your site.
  • Simplify onboarding of new developers onto any project without the need to obtain a database dump. Developers will be able to spin up sites from scratch just by installing the site and importing the configuration files.

How to achieve this using Drupal Console?

Installing a site from a previously exported configuration using Drupal Console is as simple as updating your `console/config.yml` to append this configuration to the new overrides section:

application:
...
  overrides:
    config:
      skip-validate-site-uuid: true

Then execute the commands to install the site and import your previously exported configuration:

drupal site:install --force --no-interaction
drupal config:import --no-interaction

Simple and easy, right? Well, this is possible using Drupal Console starting with version 1.7.0. This functionality is not supported by Drupal core out of the box; however, providing a better user experience while using Drupal 8 is one of the goals of Drupal Console, and this is why we introduce features like the one mentioned above.

What if my site does not have Drupal Console installed?

Download Drupal Console using Composer in your site, if you do not have it already:

composer require drupal/console:~1.0 --prefer-dist --optimize-autoloader

Create a Drupal Console configuration file for your site.

drupal init --site --no-interaction

What if my site is using an old version of Drupal Console?

Update per-site installation using composer:

composer update drupal/console --with-dependencies

Update the Launcher.

drupal self-update

Can I automate the execution of the site installation and import configuration commands?

Yes, using a chain command. A chain command is a custom command that helps you automate the execution of multiple commands. It lets you define and read an external YAML file containing the names, options, and arguments of multiple commands, and executes that list in the sequence defined in the file.

For more information about chain commands, refer to the Drupal Console documentation.

This is an example of a chain command to install a site and import a previously exported configuration.

command:
  name: build
  description: 'Build site by installing and importing configuration'
commands:
  # Install site
  - command: site:install
    options:
      force: true
    arguments:
      profile: standard
  # Import configurations
  - command: config:import
  # Rebuild cache
  - command: cache:rebuild
    arguments:
        cache: all

After adding this file, you can execute a single command:

drupal build

If you have any continuous integration or continuous deployment workflow you can integrate this command as part of that workflow.

Will this work with other modules like config_split?

Yes, you can use the Drupal Console command provided by config_split. Use it to import the configuration and it will work as expected, without any issues or errors related to the UUID values.

drupal config_split:import --split=development --no-interaction

Note that you should replace `development` with the name you gave to your split.

Do I have other alternatives?

Yes, the other two well-known alternatives are:

Using config_suite module:

  • Create a new custom profile.
  • Add the drupal/config_suite dependency using composer.
  • Add the config_suite module to your custom profile and have it as a dependency on your profile_name.info.yml file.
  • Install the site using your new custom profile.
  • Export your site's configuration.

After following these steps, you will be able to reinstall your site using the custom profile and import the previously exported configuration.

Using the config_installer profile:

  • Install the site using your preferred contrib or custom profile.
  • Remove the `install_profile` key from your settings.php file.
  • Add patches to your composer.json file.
  • Add the drupal/config_installer dependency using composer.
  • Export your site's configuration.

After following these steps you will be able to reinstall your site. Note that you will be using the `config_installer` profile for any subsequent site installation, instead of the profile your site is currently using.

Read more about using the `config_installer` profile in the project's documentation.

Wrapping up

Feel free to update Drupal Console to the latest 1.7.0, try this new feature while it's hot, and provide feedback. Also, make sure you let us know which other UX/DX improvements you would like to see in the Drupal Console project.


Jan 22 2018

Despite being on the market for over a decade, to many, MongoDB still carries a mythical tone with a hint of ‘wizardry’.

The popular misconception is that MongoDB is only suitable for hip startups, and for former startups that are still considered ‘hip’ and out of the box, such as Airbnb.

Even with all the buzz and talk around MongoDB, the adoption rate remains relatively low in comparison with other ‘standard’ relational database technologies. Not many seem to understand that to be successful in the world of code you must approach everything new with an open mind.

Besides bearing an open mind, you need to incorporate an avenue to test and learn new technologies and tools. Personally, I choose to learn how to use new tools by trying to accomplish routine tasks.

In this blog post I’ll explain how to back up and restore data between different MongoDB environments, a simple yet critical task that we need to do all too often.

Basic tools for MongoDB backup and restore

First things first: we need to install the CLI tools needed to access and operate our Mongo databases. These tools are usually available in the same package that contains the mongo CLI client.

Installation on Mac is a piece of cake using Homebrew, with the following command:

$ brew install mongodb

If you are looking for a more intuitive interface to interact with your Mongo databases, I recommend RoboMongo (even though it doesn't include backup features).

Connecting to the Database

Local database

As with any regular database, to connect, you need the server, port and database name (when using local setup). If you are connecting to a remote database, you need to provide a username, password, and authentication mode.

For example, to connect to a database named meteor inside your localhost, running on port 3001, you would use the following command*.

$ mongo 127.0.0.1:3001/meteor

*The client shell and server versions don’t necessarily have to match; the client prints a warning when they differ.

Remote database

As previously mentioned, connecting to a remote MongoDB database requires more information. In this example, I'll use a server on MongoDB Atlas.

In addition to being a NoSQL database, MongoDB is also a distributed database that automatically implements replication, which is an incredibly significant feature. I still have nightmares from the first time I tried to implement that in MySQL.

To connect to the remote server, you need to provide the information of all replicas or nodes, as you can see in the following command.

$ mongo "mongodb://cluster0-shard-00-00-XXX.mongodb.net:27017,cluster0-shard-00-01-XXX.mongodb.net:27017,cluster0-shard-00-02-XXX.mongodb.net:27017/test?replicaSet=Cluster0-shard-0" --authenticationDatabase admin --ssl --username MYSUPERUSER --password MYSUPERPASS

In this case, MongoDB Atlas supports different methods of authentication; the default method uses an internal database for users. We’ll also connect over SSL, since it’s important to keep our information secure.

If everything went as expected, you now have access to a regular command line, and you can execute queries just as you do with your local database.

Backing up your Mongo database

To back up our database, local or remote, we’ll use the mongodump program.

Local Database

Depending on your connection, the order and format may differ a little, but the output folder containing your backup should be the same.

$ mongodump --host 127.0.0.1 -d meteor --port 3001 --out ~/Downloads/

After a successful execution, a new folder with the name of the database will be created inside the output folder; in this example, a meteor folder.

Inside the folder, you will find two files per collection in the database: one file with extension json, which contains your collection’s metadata about structure and definitions, and another with extension bson (b stands for binary), where the data itself is stored.
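For a database with, say, two hypothetical collections named users and links, the dump folder would look roughly like this:

~/Downloads/meteor/
├── links.bson
├── links.metadata.json
├── users.bson
└── users.metadata.json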

Remote Database

$ mongodump --host cluster0-shard-00-00-XXX.mongodb.net --port 27017 -d MYDATABASE --username MYSUPERUSER --password MYSUPERPASS --authenticationDatabase admin --ssl --out ~/Downloads/

Just as we had done earlier in the connection step, we provide username, password and authentication method, ensuring that we use SSL for our connection.

We also provide the database we want to back up. In this case only one node is required, because in theory all of them are in sync.

Restoring your Mongo Database

To import our backup we will use the mongorestore program.

Local Database

This command follows the same rules as mongodump; below you can find an example.


$ mongorestore --host 127.0.0.1:3001 -d meteor ~/Downloads/meteor/

If we want to import only one specific collection, we just need to include the extra information for that particular collection, as you can see below.

$ mongorestore --host 127.0.0.1:3001 -d meteor --collection mycollection ~/Downloads/meteor/mycollection.bson

Remote Database

Here again we need to provide all the node replicas; check the following example.

$ mongorestore --host "Cluster0-shard-0/cluster0-shard-00-00-XXX.mongodb.net:27017,cluster0-shard-00-01-XXX.mongodb.net:27017,cluster0-shard-00-02-XXX.mongodb.net:27017" -d MYDATABASE -u MYSUPERUSER -p 'MYSUPERPASS' --authenticationDatabase admin --nsExclude 'admin.system.users' --nsExclude 'admin.system.roles' --ssl ~/Downloads/meteor

You can clearly see that the fundamentals are not that far removed from any backup and restore in a SQL-based DB. I hope this guide eliminates an excuse that has been holding you back from dipping your toes into MongoDB.

Happy NoSQL queries!

Jan 11 2018

Setting up a new local environment can be challenging and really time-consuming if you're doing it from scratch. While this might not be a big deal when working as a single developer on a project, in a team-based scenario it's important to share the same infrastructure configuration. That's why we highly recommend using a tool like Docker to simplify the process.

Last summer, Jesus Manuel Olivas (Project lead) and I started working on a new project, and we had to discuss which setup we should use for the local environments. Since the project was already set up to use Lightning and BLT, we both agreed to use DrupalVM with Vagrant. Everything seemed to work great apart from some permissions conflicts, which we could easily resolve since the project only had two developers at the time.

DrupalVM is a tool for creating Drupal development environments quickly and easily. It comes with the option to use Docker instead of, or in addition to, Vagrant, but it is mostly known for and used with Vagrant.

Why We Switched to Docker

After a few weeks of development, more developers came on board the project and we started running into some issues. Vagrant was not working as expected on some machines, and we were spending way too much time researching and fixing provisioning issues. Jesus and I had to go back to the drawing board to come up with a comprehensive solution, and we decided to switch from Vagrant to Docker.

Trying Docker

Docker is a tool for building and deploying applications by packaging them into lightweight containers. A container can hold pretty much any software component along with its dependencies (executables, libraries, configuration files, etc.), and execute it in a guaranteed and repeatable runtime environment.

This makes it very easy to build your app once and deploy it anywhere - on your laptop for testing, then on different servers for live deployment, etc.

There are plenty of 'ready to use' tools to implement Docker with Drupal, Lando and Docksal just to mention a couple.

At this point we didn't want to add an extra layer or tool to the setup process, so we decided to go straight to a plain vanilla Docker configuration.

How To Implement a Basic Docker Configuration For Drupal

Installing Docker

This should be an easy step. Once the installation is complete you should have the Docker daemon running; confirm by running docker in your terminal, and you should see the list of available commands. Docker can be downloaded from the official site.

Step 1. Add the hostname

Edit your /etc/hosts file and add the new site name.

127.0.0.1 site.local 

Step 2. Add the docker-compose.yml file

Add the following file to your project root.

version: "2"

services:
  mariadb:
    image: wodby/mariadb:10.1-2.3.3
    env_file: .env
    environment:
      MYSQL_ROOT_PASSWORD: ${MYSQL_ROOT_PASSWORD}
      MYSQL_DATABASE: ${DATABASE_NAME}
      MYSQL_USER: ${DATABASE_USER}
      MYSQL_PASSWORD: ${DATABASE_PASSWORD}
    ports:
      - '3306:3306'
    volumes:
      - mysqldata:/var/lib/mysql

  php:
    image: wodby/drupal-php:7.0-2.4.3
    env_file: .env
    environment:
      PHP_SENDMAIL_PATH: /usr/sbin/sendmail -t -i -S mailhog:1025
      DB_HOST: ${DATABASE_HOST}
      DB_USER: ${DATABASE_USER}
      DB_PASSWORD: ${DATABASE_PASSWORD}
      DB_NAME: ${DATABASE_NAME}
      DB_DRIVER: mysql
    volumes:
      - ./:/var/www/html:cached
#      - ./mariadb-init:/docker-entrypoint-initdb.d

  nginx:
    image: wodby/drupal-nginx:8-1.13-2.4.2
    depends_on:
      - php
    environment:
      NGINX_STATIC_CONTENT_OPEN_FILE_CACHE: "off"
      NGINX_ERROR_LOG_LEVEL: debug
      NGINX_BACKEND_HOST: php
      NGINX_SERVER_ROOT: /var/www/html/docroot
    volumes:
      - ./:/var/www/html:cached
    labels:
      - 'traefik.backend=nginx'
      - 'traefik.port=80'
      - 'traefik.frontend.rule=Host:${TRAEFIK_HOST}'

  mailhog:
    image: mailhog/mailhog
    labels:
      - 'traefik.backend=mailhog'
      - 'traefik.port=8025'
      - 'traefik.frontend.rule=Host:mailhog.${TRAEFIK_HOST}'

  traefik:
    image: traefik
    command: -c /dev/null --web --docker --logLevel=INFO
    ports:
      - '80:80'
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
volumes:
  mysqldata:
    driver: "local"

Step 3. Add the .env file

Create a new file named .env at the project root, to provide per-environment configuration.

# ENV 
ENVIRONMENT=local 

# Database 
MYSQL_ROOT_PASSWORD=password 
DATABASE_NAME=drupal 
DATABASE_USER=drupal 
DATABASE_PASSWORD=drupal 
DATABASE_HOST=mariadb 
DATABASE_PORT=3306 

# Traefik host 
TRAEFIK_HOST=site.local

Step 4. Starting the containers

To start the containers, execute docker-compose up -d, then grab some coffee or a beer and be patient while the images are downloaded to your local computer.

Step 5. Importing a database dump (optional)

You can import a previously exported DB dump by copying the dump file into the mariadb-init directory and uncommenting the following line in your docker-compose.yml file.

- ./mariadb-init:/docker-entrypoint-initdb.d

Step 6. Checking for used ports

One common issue you'll likely run into while starting the containers is finding the ports already in use. This could mean an instance of Apache, Nginx, MySQL, or another service is already running. If you want to know what is using a port, you can run this command in your terminal:

lsof -i :<PORT_NUMBER>

Useful docker-compose commands

Starting the containers, using detached mode

docker-compose up -d

Stopping the containers.

docker-compose stop

Destroying the containers

docker-compose down [-v]

NOTE: You can pass the -v flag to destroy the shared volumes as well. Be careful this will destroy any data on the shared volumes between the container and the local machine.

Checking the logs

docker-compose logs -f <CONTAINER_NAME>

Executing CLI commands.

While working with containers, it is common to see developers ssh-ing into the machine to execute commands. To avoid this practice, you can take advantage of the docker-compose exec command.

docker-compose exec <CONTAINER_NAME> <COMMAND_NAME>

Using Composer

Drupal 8 takes real advantage of Composer: you can install and uninstall dependencies and apply patches. It's good practice to run these commands inside your container, because if the PHP version on your local machine differs from the container's, you could install dependencies that are not suitable for your container instance.

docker-compose exec --user=82 php composer <COMMAND_NAME>

Using DrupalConsole

If you want to use DrupalConsole on your project you can add an alias file to the repo at console/sites/site.yml containing the following configuration.

local:
  root: /var/www/html
  extra-options: docker-compose exec --user=82 php
  type: container

After this file is added, you will be able to run Drupal Console commands locally while actually executing them on the container:

drupal @site.local <COMMAND_NAME>

Find more information here: https://docs.drupalconsole.com/en/alias/using-site-alias.html

On the other hand, if you prefer the direct way of running the commands, you can use:

docker-compose exec --user=82 php drupal <COMMAND_NAME>

Wrapping up

The new setup worked really well on everyone’s computer, and we haven't had any more issues since the change. The project has since gone live; it was a great experience and we plan to keep using Docker for future projects.

If you have the feeling that Docker’s architecture is hard to understand and complex to get up and running, you can take advantage of the projects mentioned above (Lando, Docksal, etc.) to make it easy to start working with containers.

UPDATE:
Because the project is a Drupal site, we based our Docker configuration on the docker4drupal project by Wodby. For other projects using technologies such as Symfony, ReactJS, or MeteorJS, we create our own custom Dockerfiles and custom images.

Dec 26 2017

A few months ago I had the pleasure of starting a new journey in my professional career, joining the weKnow family. This was a natural step after collaborating in the last couple of years with Jesús and Enzo in open source projects like DrupalConsole. Right from the start, working to reach our projects’ milestones has been a really fun adventure, with lots of new knowledge and lessons learned along the way.

One of my first projects was leading the effort to rebuild weKnow’s new site. Most of you can probably relate to the fact that 'you are your own toughest client', which is why we needed to strategize intensely before deciding on an approach: we treated this project as a functional prototype for the implementation of our new workflows in future projects with our clients and partners.

In this series of blog posts, I'll share how we developed the theme for the weKnow site using Pattern Lab. Drupal 8 is no longer an island; there’s now a world of different options and tools to implement in our projects that make the development process much more efficient.

Component-Driven Theming

The adoption of component-driven theming in Drupal has increased significantly, due to its many advantages:

  • Helps break up huge, sophisticated projects into smaller pieces of software.
  • Narrows focus to the user experience and functionality of a single component at a time.
  • Eliminates having to deal with cumbersome Drupal quirks, aka “Drupalisms”.
  • Lets you begin work without depending on a Drupal installation.
  • Lets you work on the backend and frontend simultaneously.
  • Enables reusability and portability of components.

To make this work you need to define a stage at which to integrate with Drupal, and if this is your first time working with components, it can get a little complicated. That said, the objective of this blog post series is to share the lessons we learned in this process and make the path clearer for you.

Our Weapon of Choice

There are several tools that can help you start using components in your projects, each one with its own advantages.

We decided to go with Pattern Lab, a tool we liked for its concept of “Atomic Design”, which I'll expand on in detail in a bit.

Pattern Lab

Pattern Lab is a tool that facilitates the implementation of Atomic Design. It has extensive documentation and examples, supports PHP, and leverages Twig, the template engine of Drupal 8.

At its most simple, Pattern Lab is a static site generator that allows you to start building components for your Drupal website even before installing Drupal, which decreases the ramp-up time for developers and helps them jump in and start creating components right away.

Atomic Design

Atomic Design is a methodology for creating design systems, created by Brad Frost and based on five distinct levels:

  • Atoms
  • Molecules
  • Organisms
  • Templates
  • Pages

Atoms

Atoms are the smallest blocks of our components. We can think of them as the most basic HTML tags of our website, such as labels, buttons, inputs, images and even some simple components like a CTA made up of a background color, text, and a URL.

Molecules

With molecules we start grouping our atoms to create some fundamental components for our website, for example a search form that is basically the grouping of an input, a label, and a search button.

Organisms

In an organism, we are combining our molecules to build a more complex component and create distinct sections for our UI. An example could be the header region of a website, which groups molecules like the main nav and a search form, plus the main logo of the site.

Templates

In the template we put together all these pieces in one place and we can start seeing the behavior of our layout. This is the exciting moment when the look of our website really starts coming together, though at this point we still use placeholders to render our components. 

Pages

At this stage our templates are ready to use some real content, with the purpose of showing what the future website will really look like.

Choosing our base Theme

The Drupal community curates resources that can help you get on track with Pattern Lab. In our case we decided to use Particle, previously known as Pattern Lab Starter, which out of the box offered:

  • Easy and fast setup.
  • Highly configurable gulp tasks.
  • No need to have Drupal installed to start working with it.
  • Generators, taking advantage of some really cool tools like Yeoman
  • Use of dummy data using Faker.

To start working with this theme, we first had to meet the following requirements in our local machine:

  • Node v6 (for using npm).
  • PHP 5.4, 5.5, 5.6 or 7.
  • Composer.

At the moment, there is no way to use this theme as a Composer dependency in a Drupal project, but this makes absolute sense, since it is not just a Drupal theme; it can also be used as a standalone project. That's why, before starting work on this theme, we needed to download the project's repo and place it in the themes directory. After that, we could rename the folder and the configuration files of our theme, as suggested in the documentation.

Once that part was done, we needed to download the dependencies for the project. Fortunately, Particle comes with some useful commands to make this process easier. In our case, we needed to run the following commands:

npm install
Downloads all Node dependencies.

npm run setup
This is a combination of two commands: bower install, to download our frontend packages (this can also be managed by npm using yarn), and composer install, because Pattern Lab has its own PHP dependencies.

npm start
This command is the equivalent of running the default gulp task.

After this, you can access http://localhost:3050/pattern-lab/public/ and see your Pattern Lab style guide.

Particle’s commands to run the most common tasks include:

npm run compile
Compiles all source assets and creates the build of our theme.

npm run test
Validates our source code against the theme's linters.

npm run update
Updates the Node and PHP dependencies of the project.

npm run new
This is the most used command in the development process. It executes a Yeoman generator that creates a new component with scss, twig, js, and json files. After the component is created, it also updates the Pattern Lab configuration file and the libraries.yml file of our theme. In a future post we will cover this command in more detail.

That’s it for now! Stay tuned for our follow-up blog posts, where we will dive into how to implement Pattern Lab in our Drupal project.

Dec 18 2017

By Kenny Abarca, CEO | December 18, 2017

weKnow is a fully distributed company, something we proclaim loudly and proudly to our partners and potential clients when engaging with them. It’s a characteristic that gives us a competitive edge, because it highlights weKnow’s core values and the character of every individual that works here.

I decided to write this because our clients are always amazed by how seamlessly our operations and projects run. We span 12 countries and cover 6 time zones, yet integrate into their projects from kickoff to completion without a hitch. This is how we keep things running smoothly…

Happy Accident

Back in 2013, working remotely was a benefit tied to tenure and seniority. Qualifying employees got to work from home two days a week, and everything was working smoothly. However, one day we were shocked to receive an eviction notice from our landlord. It turned out he was renting the space to us without a business permit, and the authorities were shutting down his operation. At that point we had been thinking about going fully remote for some time, so we took advantage of the opportunity and made the jump to a fully distributed model.

It’s been 4 years since we started perfecting the methodology, certainly learning through our mistakes and making adjustments, especially because we moved from a local team to a globally distributed one. While the administrative complexities can be burdensome, they are easily overshadowed by the sheer quantity of talent at our disposal.

We’ve identified three pillars necessary to make a distributed team actually work:

Hiring A Distributed Team

The key driving factor that led us to global distribution was expanding our scope for talent acquisition. Not all candidates are suitable for a globally distributed system: most prospects are attracted to the concept of managing their own time and location, but are not prepared to handle the responsibilities. We often quip “with great power comes great responsibility” while talking to prospective employees.

Our personality interview includes questions related to self-motivation, strong communication skills, attention to detail, and enthusiasm, to determine if a candidate is suitable for a remote position.

Here’s an old post from Recruiter.com that best summarizes the personality types suitable for virtual work.

Communication

Lack of communication leads to conflict and mistakes. That’s why, while working remotely, everyone should maintain healthy communication with all teams, both internally (within the company) and externally (client teams).

We place a lot of emphasis on ‘raising your hand’ when help is needed; having a hero complex in a distributed system leads to lost time and ROI potential. Nobody should stay up all night fixing something that would take another, more experienced team member five minutes to address or advise on.

Managing A Distributed Team

weKnow is a horizontal organization; all decisions and challenges are addressed at the same level. This also means that all employees, regardless of seniority, have direct access to anyone in the organization, which enhances the team’s problem-solving capacity and facilitates team-wide adoption of solutions.

For instance, we standardized holidays throughout the company: whether you’re based in Argentina or France, you take the 12 holidays of the Costa Rican calendar, since that’s our baseline. Individuals can trade holiday days, but by default we all share the Costa Rican holidays.

One challenge that proved to be a struggle was finding a time tracking and resource administration tool. We tried pretty much all of them, but they either offered far more than we needed or just weren’t a fit.

We decided it was time to build our own custom-fit web application to handle time tracking, resources, skills, requests, prospects, and more, in a way that syncs with our company’s style and personality. We even made it a cool internal project where developers transitioning to a new client could help out. It was built with React, MeteorJS, and MongoDB, which was a very attractive challenge for our team because these were new technologies, and thus a learning opportunity.

One of the features I like most about ‘KeepTrack’, our tool for tracking pretty much everything at weKnow, is the skills feature, where people input their skills and rate how much experience they have with each one. That same feature also hosts our technology radar, covering the skills our team has as well as the ones we lack or should invest in for the near future.

This is already helping our capabilities team identify who is best suited for an upcoming project without having to ask our IT team. The technology radar feature is helping our team learn about technologies that are taking off so they can focus on learning those frameworks or languages.

Our Tools For Managing A Distributed Team

Here's the mix of tools we use to make this work:

I hope you find this article useful. If you already have a distributed team, this might help you improve your model, and if you are thinking about implementing one, you can certainly make good use of a tip or two.

Please comment or reach out if you have any questions or would like us to expand on a specific aspect of managing distributed teams.

Dec 14 2017

By Jesus Manuel Olivas, Head of Products | December 14, 2017

It’s been a little over a year since weKnow came to life as a rebranding of Anexus, which allowed me to join the company as a new business partner. My main goal within the company is to direct our efforts toward exploring new fields and technologies (that’s right, we are not just a Drupal shop anymore!).

As a web solution provider, having a website that accurately reflects what we do is a challenging task: our plate is usually full with client work, and it’s not uncommon to put your own website at the end of the queue. This is why last year we decided to put together a basic landing page while setting aside some time to work on the company image as part of the rebranding.

Come 2017, we had the designs ready and started working on the architecture of the site, with the intention of taking advantage of the techniques, technologies, and lessons we learned while working on our clients’ projects, which led to the successful launch of our revamped site!

During the upcoming weeks, we plan to publish a series of blog posts in which we will explain the tools we used, the modules we selected, and the general reasoning that led to the approach we took. Some of the topics that we will discuss include: 

  • Component-Driven design using Pattern Lab.
  • Local development using Docker, Docker Compose, and Traefik.
  • Using Composer to manage project dependencies.
  • Site Configuration Management.
  • Content Synchronization.
  • Continuous Integration workflow and Continuous Deployments.

Stay tuned!
 

Nov 16 2017

By Eduardo García, CTO | November 16, 2017

Before the Drupal Console existed as a project, it all began with an idea to make Drupal 8 better. Every great invention or innovation begins with an idea, and the transition from Drupal 7 to 8 came with massive changes to the fundamental operating procedures of the past. Symfony components were making a splash in Drupal core.

Jesus and David, the initiators of the Drupal Console project, came up with the idea of including the Symfony Console in Drupal core, the same way other Symfony components were being included.

Powering Through Frustration

As helpful as the Drupal Console project is nowadays, it wasn’t widely accepted by the Drupal community initially. In fact, it turned out to be a huge challenge to get anyone to listen to the idea. For Jesus and David, the primary objective of including the Symfony Console in Drupal was to have code generators, the same way the Symfony community does. Who wouldn’t want that? A way to automate the annoying redundancies that plague developers everywhere. So they decided to propose the idea to the Drupal core maintainers via the issue queue. That idea, however, was quickly dismissed.

After a few attempts to request the inclusion and to collaborate on different Drupal projects, it dawned on Jesus and David that inclusion and collaboration were not going to happen. They needed to regroup and find a better approach.

While at lunch at Drupalcamp Costa Rica, Jesus and David were casually discussing the frustrations they had encountered trying to bring innovation to Drupal and related projects, and Larry Garfield chimed in “someone needs to create a separate project that includes Symfony Console and code generation”. That sentence gave birth to the Drupal Console project as you know it today.

Building A Community

Jesus stacked up his calendar with almost every Drupal event in the U.S. The goal was to talk about the project in sessions at every Drupal community gathering he could physically attend or, at minimum, present the idea at BoFs where sessions were not possible. The code sprints helped him interact with developers and users, forming a critical source of feedback.

Along the way, he convinced me to join the project as a maintainer. I also embarked on his outreach campaign to help spread the word, but my campaign was global, because it was important to reach non-English speakers, who often feel left out of major open source projects. Currently, the Drupal Console project is available, with some variations, in the 18 languages listed below.

  • English
  • Spanish
  • Catalan
  • French
  • Korean
  • Hindi
  • Hungarian
  • Indonesian
  • Japanese
  • Marathi
  • Punjabi
  • Brazilian Portuguese
  • Romanian
  • Russian
  • Tagalog
  • Vietnamese
  • Chinese Simplified
  • Chinese Traditional

One Million Downloads! 

After four years of development, in July 2017 we reached our first million downloads across different releases. This achievement is thanks to our more than 250 contributors across the globe.

This brought a great sense of validation for deciding to stick to our guns, do the right thing and most importantly... be globally inclusive.

Oct 31 2017

By Michael Kinyanjui, Marketing | October 31, 2017

BADCamp holds a special place in the history of weKnow. Kenny Abarca and Enzo Garcia, co-founders of the company, had their first foray into the Drupal community at BADCamp 7 years ago. Enzo and Jesús met at BADCamp, and that meeting changed the trajectory of the Drupal Console as we know it. So this time around, Jesús and Omar had the ‘grueling’ task of representing the weKnow team in the Bay Area. We highly recommend placing BADCamp on your bucket list; there are no words to describe the atmosphere, except maybe… ‘the Woodstock of the Drupal calendar’?

So here are our highlights after interviewing Jesús and Omar. Besides the social events, BADCamp offers a lot of insight into innovations and the general direction of the Drupal project.

Omar goes to BADCAMP 2017

Omar pointed out that the camp focused on, and was split around, two themes that are currently abuzz in the Drupal kingdom: theming and DevOps. On theming, he was quick to point out how Twig has continued to make a massive impact on the frontend sector.

“The arrival of template engines like Twig has helped avoid ‘Drupalisms’ in how we create themes for Drupal. This is a great step for productivity during development: theming now focuses on components, allowing us to work on the theming layer independently of the Drupal backend and site building, without having to think from an integrated-architecture perspective. The complexity this posed for a big group of frontend developers without Drupal theming knowledge was one of the main reasons to use a decoupled architecture in Drupal 8; it’s definitely one of the best things Drupal 8 has to offer,” Omar added.

According to Omar, discussions around DevOps revolved mostly around enterprise development needs, ranging from workflow to testing and the use of bots to automate quality assurance.

Jesús goes to BADCAMP 2017

Jesús took the opportunity to give a talk on creating, building, testing and deploying Drupal 8. He shared tools that are helpful in the process of setting up the local development environment and how to make use of them for a successful deployment.

“Everything is Docker!”

That was Jesús’s simple reply to the question “What were your BADCamp highlights?”

Jesús added that it looked like every Drupal developer was attracted to Docker’s flexibility, and that it was quickly becoming the preferred setup for the local development environment. He based his observation on the emergence of a myriad of tools that simplify Docker setup, such as Lando, Docksal, and DrupalVM.
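As a quick sketch of why these tools caught on: with Lando, for example, a Drupal 8 local environment can be described in a tiny .lando.yml file (the project name below is illustrative):

name: my-drupal8-site
recipe: drupal8
config:
  webroot: web

Running lando start then builds and wires up the containers for you, so nobody on the team has to hand-craft a Docker Compose setup.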

“This is a natural evolution when you realize that creating and building a Drupal 8 site is more complex than in previous versions. In order to start working on a Drupal 8 site, your local environment requires some tools: a package manager such as Composer for handling the site dependencies. And speaking of theming, something similar is happening, since you will probably be using Gulp or npm,” Jesús added.
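For context, a common way to handle those dependencies at the time was to start from a community Composer project template, roughly like this (the template and the module below are examples, not a prescription):

composer create-project drupal-composer/drupal-project:8.x-dev my_site --no-interaction
cd my_site
composer require drupal/pathauto

Composer then resolves Drupal core, contributed modules, and their PHP dependencies into a single lock file that the whole team shares.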

Serverless + FaaS

As Jesús has come to learn, a lot of the nuggets of innovation are hidden deep in the hallways of Drupal events. One such nugget came from a conversation with Thom Toogood from Australia. Thom introduced him to ‘Composer as a Service’, a project he is working on, built on an open source framework that can run any CLI-driven binary program embedded in a Docker container, making it a Function as a Service.

GatsbyJS Drupal Plugin

One of the main highlights for Jesús came in the last session he attended, about a static site generator: GatsbyJS. It’s based on ReactJS and packs some awesome benefits; to name a few:

  • Takes advantage of React.js, webpack, and modern JavaScript and CSS.
  • Pre-builds pages into flat files, avoiding access to expensive server-side resources.
  • Pre-fetches page resources such as JS, CSS, and other pages, which makes navigating the site feel really fast.

The session also included a live demo of the Gatsby source Drupal plugin, which allows you to build static sites using Drupal as the data source.
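As a rough sketch of how that wiring looks (the URL is a placeholder for your own Drupal site), the source plugin is registered in gatsby-config.js, and Drupal content then becomes queryable through Gatsby’s GraphQL layer:

// gatsby-config.js
module.exports = {
  plugins: [
    {
      resolve: 'gatsby-source-drupal',
      options: {
        baseUrl: 'https://example.com', // placeholder: your Drupal site
        apiBase: 'jsonapi',             // path to the JSON API endpoint (the default)
      },
    },
  ],
};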

Interesting Tools Discovered at BADCAMP

Most Interesting Sessions of BADCAMP 2017

Oct 13 2017

By Michael Kinyanjui, Marketing | October 13, 2017

The weKnow team is excited to announce that we’ll be attending BADCamp October 18-21. BADCamp is a celebration of open source software and one of the most prominent events in the Drupal universe. We take great pride in our track record of giving back to the open source community, so we are also happy to announce that Jesus and Omar will be holding a hands-on training session on Drupal 8 module development using Drupal Console.

Hands on Drupal 8 Module Development using DrupalConsole

Over the years we’ve met many open source enthusiasts at BADCamp, so feel free to say hello… you can’t miss us in our weKnow gear!

Jul 27 2017

By Kenny Abarca, CEO | July 27, 2017

We are excited to announce our line-up for the 2017 Drupal Camp Costa Rica. As proud members of a great community in Costa Rica, weKnow is committed to growing the community by sharing information and insights. We also take this opportunity to thank our team members for consistently sharing knowledge with the Drupal community in Costa Rica, as well as around the world in our global outreach initiatives.

Here are the topics of our sessions; they range across all expertise levels as well as technologies:

All sessions will be recorded and uploaded to the web in case you can’t make it to the camp.

Mar 14 2017

By Kenny Abarca, CEO | March 14, 2017

After all the hard work we have put into building weKnow as a company with a primary focus on training, we are excited to announce that the company is reaching one of its first milestones: providing a training at a DrupalCon. This goal is becoming a reality in Baltimore, where we will be presenting Mastering Drupal 8 Development.

The training, created by Jesus Olivas and Enzo García, will provide an introduction to the most important changes for developers in Drupal 8, allowing attendees to learn by practicing while building a solid understanding of the process of writing modules for Drupal 8.

During the workshop, students will create a custom module and other components using various APIs, plugins, and hooks. By the end of the training, trainees will have a better understanding of Drupal 8 and how the introduction of Symfony components is changing the way modules should be written.

Originally the training was going to be delivered by only two trainers, but there was such an overwhelming response from attendees that we increased the number of trainers to four in order to provide the best training experience for all attendees.

Sign up now! There are still some tickets left to attend the training but they are selling out quickly.

Additionally, we also have a presentation session called “Improving your Drupal 8 Development Workflow”. Make sure you stop by and say hello.
