Feb 06 2019

If you are a programmer looking to improve your professional craft, there are many resources toward which you will be tempted to turn. Books and classes on programming languages, design patterns, performance, testing, and algorithms are some obvious places to look. Many are worth your time and investment.

Despite the job of a programmer often being couched in technical terms, you will certainly be working for and with other people, so you might also seek to improve in other ways. Communication skills, both spoken and written, are obvious candidates for improvement. Creative thinking and learning how to ask proper questions are critical when honing requirements and rooting out bugs, and time can always be managed better. These are not easily placed in the silo of “software engineering,” but are inevitable requirements of the job. For these less-technical skills, you will also find a plethora of resources claiming to help you in your quest for improvement. And again, many are worthwhile.

For all of your attempts at improvement, however, you will be tempted to remain in the non-fiction section of your favorite bookstore. This would be a mistake. You should be spending some of your time immersed in good fiction. Why fiction? Why made-up stories about imaginary characters? How will that help you be better at your job? There are at least four ways.

Exercise your imagination

Programming is as much a creative endeavor as it is a technical one, and creativity requires a functioning imagination. To quote Einstein:

Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution.

You can own a hammer, be really proficient with it, and even have years of experience using it, but it takes imagination to design a house and know when to use that hammer for that house. It takes imagination to get beyond your own limited viewpoint. A lively imagination makes it easier to draw connections and analogies between things that might not have seemed related, which is a compelling definition of creativity itself.

Your imagination works like any muscle. Use it or lose it. And just like any other kind of training, it helps to have an experienced guide. Good authors of fiction are ready to be your personal trainers.

Understanding and empathy

The best writers can craft characters so real that they feel like flesh and blood, and many of those characters will resemble people you actually know. Great writers are, first and foremost, astute observers of life, and their insight into minds and motivations can become your insight. Good fiction can help you navigate real life.

One meta-study suggests that reading fiction, even just a single story, can help improve someone’s social awareness and reactions to other people. For any difficult situation or person you come across in your profession, there has probably been a writer who has explored that exact same dynamic. The external trappings and particulars will certainly be different, but the motivations and foibles will ring true.

In one example from Jane Austen’s Mansfield Park, Sir Thomas is a father who puts too much faith in proper appearances, and after sternly talking to his son about an ill-advised scheme, the narrator of the book says, “He did not enter into any remonstrance with his other children: he was more willing to believe they felt their error than to run the risk of investigation.”

You have probably met a person like this. You might have dealt with a project manager like this, one who will limit communication rather than “run the risk of investigation.” No news is good news. Austen has a lot to teach you about how one might maneuver around this type of personality. Or, you might be better equipped to recognize such tendencies in yourself and snuff them out before they cause trouble for yourself and others.

Remember, all software problems are really people problems at their core. Software is written by people, and the requirements are determined by other people. None of the people involved in this process are automatons. Sometimes, how one system interfaces with another has more to do with the relationship between two managers than any technical considerations.

Navigating people is just as much a part of a programmer’s job as navigating an IDE. Good fiction provides good landmarks.

Truth and relevance

This is related to the previous point but deserves its own section. Good fiction can tell the truth with imaginary facts. This is opposed to much of the news today, which can lie with the right facts, either by omitting some or through misinterpretation.

Plato, in his ideal republic, wanted to kick out all of the poets because, in his mind, they did nothing but tell lies. On the other hand, Philip Sidney, in his Defence of Poesy, said that poets lie the least. The latter is closer to the truth, even though it might betray a pessimistic view of humanity.

Jane Austen’s novels are some of the most insightful reflections on human nature. Shakespeare’s plays endure because they tap into something higher than “facts”. N.N. Taleb writes in his conversation on literature:

...Fiction is a certain packaging of the truth, or higher truths. Indeed I find that there is more truth in Proust, albeit it is officially fictional, than in the babbling analyses of the New York Times that give us the illusions of understanding what’s going on.

Homer, in The Iliad, gives us a powerful portrait of the pride of men reflected in the capriciousness of his gods. And, look at how he describes anger (from the Robert Fagles translation):

...bitter gall, sweeter than dripping streams of honey, that swarms in people's chests and blinds like smoke.

That is a description of anger that rings true and sticks. And maybe, just maybe, after you have witnessed example after vivid example of the phenomenon in The Iliad, you will be better equipped to stop your own anger from blinding you like smoke.

How many times will modern pundits get things wrong, or focus on things that won’t matter in another month? How many technical books will be outdated after two years? Homer will always be right and relevant.

You also get the benefit of aspirational truths. Who doesn’t want to be a faithful friend, like Samwise Gamgee, to help shoulder the heaviest burdens of those you love? Sam is a made-up character. Literally does not exist in this mortal realm. Yet he is real. He is true.

Your acts of friendship might not save the world from unspeakable evil, but each one reaches for those lofty heights. Your acts of friendship are made a little bit nobler because you know that they do, in some way, push back the darkness.

Fictional truths give the world new depth for the reader. C.S. Lewis, in defending the idea of fairy tales, wrote:

He does not despise real woods because he has read of enchanted woods: the reading makes all real woods a little enchanted.

Likewise, to paraphrase G.K. Chesterton, fairy tales are more than true — not because they tell us dragons exist, but because they tell us dragons can be beaten.

The right words

One of the hardest problems in programming is naming things. For variables, functions, and classes, the right name can bring clarity to code like a brisk summer breeze, while the wrong name brings pain accompanied by the wailing and gnashing of teeth.

Sometimes, the difference between the right name and the wrong name is thin and small, but represents a vast distance, like the difference between “lightning” and “lightning bug,” or the difference between “right” and “write”.  

Do you know who else struggles with finding the right words? Great authors. And particularly, great poets. Samuel Taylor Coleridge once said:

Prose = words in their best order; — poetry = the best words in the best order. 

"The best words in the best order" could also be a definition of good, clean code. If you are a programmer, you are a freaking poet.

Well, maybe not, but this does mean that a subset of the fiction you read should be poetry, though any good fiction will help you increase your vocabulary. Poetry will just intensify the phenomenon. And when you increase your vocabulary, you increase your ability to think clearly and precisely.

While this still won’t necessarily make it easy to name things properly (even the best poets struggle and bleed over the page before they find what they are looking for), it might make it easier.

What to read

Notice the qualifier “good”. That’s important. There were over 200,000 new works of fiction published in 2015 alone. Life is too short to spend time reading bad books, especially when there are too many good ones to read for a single lifetime. I don’t mean to be a snob, just realistic.

Bad fiction will, at best, be a waste of your time. At worst, it can lie to you in ways that twist your expectations about reality by twisting what is good and beautiful. It can warp the lens through which you view life. The stories we tell ourselves and repeat about ourselves shape our consciousness, and so we want to tell ourselves good ones.

So how do you find good fiction? One heuristic is to let time be your filter. Read older stuff. Most of the stuff published today will not last and will not be the least bit relevant twenty years from now. But some of it will. Some will rise to the top and become part of the lasting legacy of our culture, shining brighter and brighter as the years pass by and scrub away the dross. But it's hard to know the jewels in advance, so let time do the work for you.

The other way is to listen to people you trust and get recommendations. In that spirit, here are some recommendations from myself and fellow Lullabots:

Jul 24 2017

Migrations provide an excellent opportunity to take stock of your current content model. You’re already neck deep in the underlying structures when planning for data migrations, so while you’re in there, you might as well ensure the new destination content types will serve you going forward and not present the same problems. Smooth the edges. Fill in some gaps. Get as much benefit out of the migration as you can, because you don’t want to find yourself doing another one a year from now.

This article will walk through an example of migrating part of a Drupal 7 site to Drupal 8, with an eye toward cleaning up the content model a bit. You will learn:

  • To write a custom migrate source plugin for Drupal 8 that inherits from another source plugin.
  • To take advantage of OO inheritance to pull field values from other entities with minimal code.
  • To use the Drupal 8 migrate Row object to make more values available in your migration yaml configuration.

Scenario: A music site moving from Drupal 7 to Drupal 8

Let’s say we have a large music-oriented website. It grew organically in fits and starts, so the data model resembles a haphazard field full of weeds instead of a well-trimmed garden. We want to move this Drupal 7 site to Drupal 8, and clean things up in the process, focusing first on how we store artist information.

Currently, artist information is spread out:

  • Artist taxonomy term. Contains the name of the artist and some other relevant data, like references to albums that make up their discography. It started as a taxonomy term because editors wanted to tag artists they mentioned in an article. Relevant fields:

    • field_discography: references an album content type.
       
  • Artist bio node. More detailed information about the artist, with an attached photo gallery. This content type was implemented as the site grew, so there was something more tangible for visitors to see when they clicked on an artist name. Relevant fields:
     
    • field_artist: term reference that references a single artist taxonomy term.
    • field_artist_bio_body: a formatted text field.
    • field_artist_bio_photos: a multi-value file field that references image files.
    • field_is_deceased: a boolean field to mark whether the artist is deceased or not.

Choosing the Migration’s Primary Source

With the new D8 site, we want to merge these two into a single node type. Since we are moving from one version of Drupal to another, we get to draw on some great work already completed.

First, we need to decide which entity type will be our primary source. After some analysis, we determine that we can’t use the artist_bio node because not every Artist taxonomy term is referenced by an artist_bio node. A migration based on the artist_bio node type would leave out many artists, and we can’t live with those gaps.

So the taxonomy term becomes our primary source. We won’t have an individual migration at all for the artist_bio nodes, as that data will be merged in as part of the taxonomy migration.

In addition to the migration modules included in core (migrate and migrate_drupal), we’ll also be using the migrate_plus and migrate_tools modules.
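Assuming drush is available, enabling everything at once looks something like this (a sketch using drush 8-era command names; run from the site root):

```shell
# Enable the core migrate modules plus the contrib helpers.
drush en migrate migrate_drupal migrate_plus migrate_tools -y

# migrate_tools adds commands for inspecting and running migrations.
drush migrate-status
```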

Let’s create our initial migration configuration in a custom module, at config/install/migrate_plus.migration.artists.yml.

id: artists
label: Artists
source:
  plugin: d7_taxonomy_term
  bundle: artist
destination:
  plugin: entity:node
  bundle: artist
process:
  title: name

  type:
    plugin: default_value
    default_value: artist

  field_discography:
    plugin: iterator
    source: field_discography
    process:
      target_id:
        plugin: migration
        migration: albums
        source: nid

This takes care of the initial taxonomy migration. As a source, we are using the default d7_taxonomy_term plugin that comes with Drupal. Likewise, for the destination, we are using the default fieldable entity plugin.

The fields we have under “process” are the fields found on the Artist term, though we are simply going to hard-code the node type. The field_discography mapping assumes we have another migration that handles the Album content type.

This will pull in all Artist taxonomy terms and create a node for each one. Nifty. But our needs are a bit more complicated than that. We also need to look up all the artist_bio nodes that reference Artist terms and get that data. That means we need to write our own Source plugin.

Extending the Default Taxonomy Source Plugin

Let’s create a custom source plugin that extends the d7_taxonomy_term plugin.

use Drupal\taxonomy\Plugin\migrate\source\d7\Term;
use Drupal\migrate\Row;

/**
 * Source plugin that merges artist_bio node fields into artist terms.
 *
 * @MigrateSource(
 *   id = "artist"
 * )
 */
class Artist extends Term {

  /**
   * {@inheritdoc}
   */
  public function prepareRow(Row $row) {
    if (parent::prepareRow($row)) {
      $term_id = $row->getSourceProperty('tid');

      $query = $this->select('field_data_field_artist', 'fa');
      $query->join('node', 'n', 'n.nid = fa.entity_id');
      $query->condition('n.type', 'artist_bio')
        ->condition('n.status', 1)
        ->condition('fa.field_artist_tid', $term_id);

      $artist_bio = $query->fields('n', ['nid'])
        ->execute()
        ->fetchAll();

      if (isset($artist_bio[0])) {
        foreach (array_keys($this->getFields('node', 'artist_bio')) as $field) {
          $row->setSourceProperty($field, $this->getFieldValues('node', $field, $artist_bio[0]['nid']));
        }
      }

      return TRUE;
    }
    return FALSE;
  }

}

Let’s break it down. First, we see if there is an artist_bio that references the artist term we are currently migrating.

      $query = $this->select('field_data_field_artist', 'fa');
      $query->join('node', 'n', 'n.nid = fa.entity_id');
      $query->condition('n.type', 'artist_bio')
        ->condition('n.status', 1)
        ->condition('fa.field_artist_tid', $term_id);

All major D7 entity sources extend the FieldableEntity class, which gives us access to some great helper functions so we don’t have to write our own queries. We utilize them here to pull the extra data for each row.

      if (isset($artist_bio[0])) {
        foreach (array_keys($this->getFields('node', 'artist_bio')) as $field) {
          $row->setSourceProperty($field, $this->getFieldValues('node', $field, $artist_bio[0]['nid']));
        }
      }

If we found an artist_bio that needs to be merged, we loop over all the field names of that artist_bio. We can get a list of all fields with the FieldableEntity::getFields method.

We then use the FieldableEntity::getFieldValues method to grab the values of a particular field from the artist_bio.

These field names and values are passed into the row object we are given. To do this, we use Row::setSourceProperty. We can use this method to add any arbitrary value (or set of values) to the row that we want. This has many potential uses, but for our purposes, the artist_bio field values are all we need.

Using the New Field Values in the Configuration File

We can now use the field names from the artist_bio node to finish up our migration configuration file. We add the following to our config/install/migrate_plus.migration.artists.yml:

  field_photos:
    plugin: iterator
    source: field_artist_bio_photos
    process:
      target_id:
        plugin: migration
        migration: files
        source: fid

  'body/value': field_artist_bio_body
  'body/format':
    plugin: default_value
    default_value: plain_text

  field_is_deceased: field_is_deceased

The full config file:

id: artists
label: Artists
source:
  plugin: d7_taxonomy_term
  bundle: artist
destination:
  plugin: entity:node
  bundle: artist
process:
  title: name

  type:
    plugin: default_value
    default_value: artist

  field_discography:
    plugin: iterator
    source: field_discography
    process:
      target_id:
        plugin: migration
        migration: albums
        source: nid

  field_photos:
    plugin: iterator
    source: field_artist_bio_photos
    process:
      target_id:
        plugin: migration
        migration: files
        source: fid

  'body/value': field_artist_bio_body
  'body/format':
    plugin: default_value
    default_value: plain_text

  field_is_deceased: field_is_deceased

Final Tip

When developing custom migrations with the Migrate Plus module, configuration is stored in a module’s config/install directory. This means it will only get reloaded if the module is uninstalled and then installed again. The config_devel module can help with this: it gives you a drush command to reload a module’s install configuration.
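As a concrete sketch of that workflow (command names from drush 8 and config_devel’s drush integration; mymodule is a placeholder for your module):

```shell
# Without config_devel: reinstall the module so config/install is re-read.
drush pm-uninstall mymodule -y && drush en mymodule -y

# With config_devel: re-import the module's install configuration in place.
drush config-devel-import mymodule
```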

Jul 14 2015

There are many services out there that want to talk to your application. An event happens, such as a new subscriber, and the service wants to tell you about it. Often, this functionality is implemented via a webhook pattern, where your application exposes a public url for the express purpose of receiving such communication.

Let me offer an example. I recently needed Mailchimp to send me a notification whenever a campaign was sent to one of our mailing lists. But how to test this? Mailchimp needed a URL it could access publicly, but I didn’t want to test anything on a live, public server. That would be time consuming, and probably a little dangerous. I needed to expose a public URL from my local machine.

There are a few tools that offer this functionality, but the best was ngrok. After downloading the tool, you can run it via the command line. The following commands assume a Linux or Mac computer. If you are on Windows, just drop the “./” in front of the command.

./ngrok http 80

This brings up a UI with some information.

ngrok command line ui

You get a temporary hash subdomain (03fcdb2a) that is now forwarded to your localhost. And you can see connections as they happen. Pretty cool. This is the domain I needed to put into the Mailchimp settings, so it knows where to send updates.

If your app is actually running at localhost on port 80, this is all you need to do. Chances are, however, that you have more than one development environment. How do you point to the right one?

Virtual Hosts and Virtual Machines

If you are running Apache locally as your webserver, you might have several virtual hosts set up to respond either to different port numbers or different host headers. If you have distinguished them with ports, just put a different port number in the command.

./ngrok http 5000

If your virtual hosts will only respond to a certain host header, there’s an option for that:

./ngrok http -host-header=example.local 80

If you use vagrant and virtual machines, ngrok can work for you too. Just ensure that traffic is forwarded to the local hostname of your vagrant instance:

./ngrok http -host-header=example.local example.local:80

Now we have a public URL that is forwarded to a place on our local machine. An outside service can send us data. But now we need to test how everything is working together, and ngrok provides some tools that make this easier.

Faster Development with Replay

One of the most useful features of ngrok is the web interface. It runs locally (by default at http://127.0.0.1:4040) and lets you inspect each connection you forward.

ngrok web dashboard

Here you can view both the request you receive and the response you send. The default is an easy summary of the post data, but you can view it raw, or even in binary if you want. If you’re having issues, this dashboard is one of the first places you should look, since it can even help you pinpoint problems coming from the source service.

The most useful feature of this dashboard is the replay functionality. It’d be tedious to send a test campaign every time I wanted to test the webhook from Mailchimp. Instead, I click the replay button, and ngrok will resend the same request.

Similar Tools

There are other tools, like Ultrahook and localtunnel, that offer similar functionality, but I found ngrok the best to work with for a few reasons:

  • It has no dependencies. No need to mess with Ruby gems or npm.
  • Lots of flexibility that is well documented. Add your own subdomain or custom domain, HTTP authentication, and more. Modifying the host header, as I did above, wasn’t possible with some of the other tools.
  • The user-friendly dashboard with replay functionality is specifically made for developing things like webhooks.

Bonus: Responsive Design Testing and Other Collaboration

Developing responsive websites becomes a little easier with the help of ngrok. Do you have several devices you want to test your web application on, but don’t want to push anything up to a publicly accessible server? Serve it from your local machine.

If Mailchimp and other services can reach your localhost, so can your own collection of various-sized devices. And so can your co-workers, no matter where they are located.

Jul 13 2015

Out of the box, Drupal offers the rudimentary ability to send automatic email notifications, such as Account Activation or Password Recovery emails. These are examples of what are called transactional emails. A transactional email is a message that is sent to an individual (not part of a mass-mailing), in response to a certain event. This event can be an action taken by some user, and sometimes even a lack of action. Besides the two examples already given, some more examples of transactional emails are:

  • An order summary sent to a customer.
  • “Jason has commented on your article.”
  • A listing of upcoming events for groups a user has subscribed to.

Depending on your site, this type of email can range from something that’s merely “nice to have” all the way to a feature that’s critical for your business.

For emails that are critical (say, a shipping confirmation for an ecommerce site), ensuring delivery of these emails is important. Drupal, by default, will use PHP’s mail function. This is fine for simple applications, but falls short in at least two ways:

  • Reputation. Mail you send could be flagged as spam and never reach your users, even though you see no error messages. Some email providers automatically block all email initiated by PHP mail, and your server’s IP address could end up on one or more blacklists. The latter is a problem that can require constant monitoring to ensure it doesn’t affect you.
  • Analytics. You get almost no information on who opened your mail, delivery success rate, or clicks on included links. You’re sending mail blind.

Mandrill, a service by Mailchimp, aims to help solve these issues, and with the Mandrill Drupal module, it becomes easy to integrate. After you sign up for a free Mandrill account, create your first API key.

Go to Configuration->Mandrill and enter your new API key.

Mandrill API Key text box

After entering a valid API key, you’ll be presented with a lot more options, including a “Send Test Mail” tab at the top. You’ll want to send a test email to ensure everything is working properly.
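If you want to sanity-check the key outside of Drupal too, Mandrill’s API 1.0 exposes a users/ping endpoint (per its API docs); the key below is a placeholder:

```shell
# A valid key gets a "PONG!" reply back from Mandrill.
curl -s -X POST https://mandrillapp.com/api/1.0/users/ping.json \
  -d '{"key": "YOUR_API_KEY"}'
```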

At this point, Drupal is still using PHP mail to send all of its email, so be sure to go to Configuration->Mail System and change Site-wide Default to use the MandrillMailSystem. It’s possible to be more granular with what types of emails you actually send through Mandrill, but for now, this setting will get you started, and is fine for most use cases.

Drupal Mail System Settings

Drupal will now use Mandrill to send all of its mail, and you get access to the rich set of reports so you have a better idea of what is going on. That’s all you need to do to ensure more reliable delivery of your site’s mail, with performance analytics.

As a nice bonus, each kind of email that Drupal sends has a unique identifier, and the module automatically tags emails with this identifier before forwarding them to Mandrill for final processing. These tags can be used to filter reports or to perform A/B split tests on your email content to help improve user engagement. Below, you can see how easy it is to target password recovery emails for split testing.

Mandrill Split Test Wizard

The Mandrill module comes with a sub-module called Mandrill Reporting that gives you some basic reporting from within Drupal itself, but you’ll probably want to stick with the native Mandrill interface so you are sure to get all the details.

Another included submodule is Mandrill Template, which allows different types of emails to each be wrapped in different templates. This is a more advanced use case and requires additional code and knowledge of Mailchimp’s Merge Tags to take full advantage of, but the possibilities are there.

There are similar services that solve the same problems, like Postmark and Sendgrid, and each has its own Drupal module as well. However, if you are looking for a service that includes a very attractive free tier (12,000 free emails per month!) along with a mature, active, and heavily backed module, Mandrill could be the right choice for you.

Update 7/21/2015: Shortly after publication, Mandrill changed their free tier to just 2,000 emails per month. This is still a good deal for those looking to try it out, although Sendgrid now offers more in their own free tier.

About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web
