Feb 08 2019
Feb 08

Innovation within Canadian healthcare continues to provide better care experiences for those using the system. As the population ages and strains the facilities that care for people nearing the end of life, hospitals are looking to technological solutions to ease the burden on emergency rooms and give people access to accurate and timely healthcare. Nextide partnered with uCarenet, a Toronto-based e-health company, to create an innovative health and wellness application that monitors the condition of palliative care patients for a major Canadian hospital.

The Requirements:

The hospital required an application that could gather critical patient data through forms, compile the results daily, report those results to the patient’s healthcare team and alert the team in the event the results warranted intervention. The application had to be accessible to all participants, easy to use and mimic the existing paper-based data collection mechanism the palliative care team employed. The hospital staff required administrative logins to create users and monitor the entered data in a simple, uncluttered dashboard. Doctors and nurses needed to be able to determine which patients have symptoms requiring immediate attention, and to view the underlying discrete data if required.

The Solution:

At Nextide, we look at Drupal as a platform for building applications, not purely as a feature-rich content management system for websites. We were confident that Drupal was the right choice for this application. Through careful design, our team created an interactive application that gathered patient data, compiled and weighted the results based on current and historical data, and sent the palliative care team notifications when warranted.

First, the palliative care user’s experience:

User's view when ability to enter data is open or closed

The image above shows two versions of the palliative care user’s home page. On the left is the home page when they are able to complete their daily forms. On the right, if the user has completed their forms within the last 24 hours, they are locked out of completing a new set until the 24-hour window has passed.

Once daily the palliative care users are able to fill in the three required forms.  An example of one of the forms is shown here:

Showing the Brief Pain Inventory Form

Patient data can be recorded on any device the patient or caregiver has, with Drupal storing all form data in custom entities. Themed form buttons and select lists are presented as easy-to-use clickable body diagrams, sliders and radio buttons.

Upon completion of the patient information forms, the results are compiled according to the hospital’s predefined requirements. The three forms contain at least 50 discrete data points in total. We take the data entered, compare it to the last three to five days of the patient’s data and identify trends within those fields that warrant triggering an alert. If a user is deemed to be “alerting”, we send a notification to the healthcare team and flag both the user and the fields as alerting.
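To make the idea concrete without disclosing the hospital’s actual weighting rules (which are not published here), a trend check of this kind can be sketched in a few lines of PHP. The function name, score structure, window size and thresholds below are entirely hypothetical:

/**
 * Hypothetical sketch: decide whether today's symptom scores are trending
 * upward enough to alert the care team. The field structure, window size
 * and thresholds are illustrative placeholders, not the hospital's rules.
 */
function example_symptom_trend_alert(array $todays_scores, array $history) {
  // $history holds the last three to five days of submissions, newest first.
  $window = array_slice($history, 0, 5);
  if (count($window) < 3) {
    // Not enough history to establish a trend; fall back to a hard ceiling.
    return max($todays_scores) >= 8;
  }

  // Compare today's average score against the average of the recent window.
  $today_avg = array_sum($todays_scores) / count($todays_scores);
  $baseline = 0;
  foreach ($window as $day_scores) {
    $baseline += array_sum($day_scores) / count($day_scores);
  }
  $baseline /= count($window);

  // Alert when today's average is markedly above the recent baseline,
  // or when any single score hits the ceiling.
  return ($today_avg - $baseline) >= 2.0 || max($todays_scores) >= 8;
}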

The Healthcare User’s Experience:

When a healthcare user enters the system, their home page is a simple dashboard. The dashboard presents an easy-to-read listing of patients, with each patient’s status plainly visible.

Healthcare Dashboard

The healthcare user can drill into any patient’s data for a graphical breakdown of the historical patient data. The nurse or doctor can see the trends in the patient data and take appropriate action. Healthcare users can acknowledge alerts, thereby clearing the patient’s status, and they can silence alarms for any given patient so that they do not receive alerts for 24, 48 or 72 hours.

Drilldown into stats

We generate on-the-fly graphs of patient data.  Healthcare professionals can also drill even further into each entity where they’re able to view each form’s entry and mute any alerting fields.

Alerts panel

Themed Drupal entities give the healthcare professional a quick breakdown of patient-entered values. If a field is alerting, it is highlighted and gives the user the option to silence the alarm by clicking the alerting symbol. Users can scan back and forth through daily entries. The interface is easy to use, uncluttered and responsive in design.


The Outcomes:

The initial pilot phase of the project had between 15 and 20 patients entering data at any given time.  In the first 3 months of the pilot:

  • 105 critical alerts were generated by the system and sent to the patients’ circle of care.
  • 46 of the 105 critical alerting patients received follow-up intervention.
  • 7 patients who had not triggered an alert received telephone intervention after the healthcare staff noticed trends in their data.
  • An estimated $62,000 was saved. That’s right: over the course of three months, this single application saved over sixty thousand dollars in unnecessary Emergency Room care!

Every time a patient receives proactive care, the likelihood that they will require an Emergency Room visit goes down, which in turn reduces the load on the hospital. When you consider the small pilot size and the number of interventions required, each intervention could have eliminated a potential Emergency Room visit. A resounding success!

When you look at Drupal as “just a content management system”, you lose sight of what it really is: a platform for building applications that matter. At Nextide, we have focused on Drupal since 2009. Our experience ensures that we use the most effective modules to construct sites and applications optimized for performance and functionality, and we continue to leverage Drupal in many more ways than content management. Our core strengths are the design and build of creative sites and web-based business applications that mirror the way people think and work. We can custom-build any business function or create the ideal work environment for people to perform at their best. The ability to deliver robust custom applications allows us to deliver solutions from healthcare to HR and everything in between.

Need Help?

Do you have a Healthcare initiative we can help with?

Oct 22 2018
Oct 22

NASA’s Jet Propulsion Laboratory (JPL) needed to automate an internal document approval process where any given launch of the workflow could:

  • Have a unique and varying number of approvers.
  • Abort the approval process immediately upon a single approver rejecting, even if other approvers have approved or have yet to view the document.
  • Re-route rejected documents to the initiator of the approval process.
  • Upon all assigned approvers approving the document, route the document to completion, notifying stakeholders in the process.

Using Drupal and Nextide’s Maestro workflow module, JPL was able to prototype a base workflow template to automate their process. However, the missing element was a workflow that allows on-the-fly selection of approvers and of the number of approvers, and that manages the acceptance or rejection of the document.

 

The Issue

Maestro supports the ability to dynamically assign a single unique instance of a task to one or more users. In this scenario a single task would be created with multiple users assigned to it; however, the first person to complete the task would complete it for everyone assigned. This produces a “one task, one user completes” scenario regardless of how many people were assigned to the task. JPL required the ability to assign multiple unique instances of the same approval task to a set of users, and to continue the workflow past the multiple-assignee stage only when all tasks are approved or a single task is rejected.

In a standard workflow template, the workflow administrator would logically build the template accounting for the tasks required and who is assigned to those tasks. 

Figure 1 - Typical Approval Workflow Template

To illustrate the problem, Figure 1 shows a simple (and non-functional!) approval workflow template with four approvers. The template looks like most other simple approval processes; however, JPL’s issue was that you never know how many approvers will be assigned to any given document. In the shaded area of Figure 1 you see four tasks: Approval 1 through 4. This is optimal if you always have four approvers, but JPL required a dynamic number of approvers, each with their own task. What if five approvers were required? What if there were three? Ten? For this use case, dynamic assignments and rigid templates are not the answer.

The Solution

Our solution was to have Maestro spawn a variable number of sub-approval processes, as required by the initiator. The parent process spawns as many child approval processes as required, and each child signals its completion status, accepted or rejected, to the parent.

Figure 2 - Simplified Workflow Solution Template

Figure 2 is an over-simplified representation of the workflow for JPL. The workflow template shows the parent process, which is responsible for spawning the n-number of approval sub-flows and waiting for those sub-flows to be approved or rejected before continuing.

Figure 3 - JPL Data Flow

Figure 3 illustrates how the parent process and the child communicate with each other using the Maestro API.  The parent is able to spawn sub-flows, while the sub-flows are able to communicate their approval status back to the parent.  The “Wait” task in the parent process is able to detect if all sub-flows are complete or if a sub-flow has signalled a rejection.
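To give a rough feel for what the spawning step looks like in code, here is a hedged sketch using the Maestro engine API. The child template name and the process variable names are hypothetical, and the MaestroEngine method signatures should be verified against your version of the module rather than copied verbatim:

use Drupal\maestro\Engine\MaestroEngine;

/**
 * Rough sketch only: spawn one approval sub-flow per approver chosen by
 * the initiator. 'document_approval_subflow' and the variable names are
 * made up for illustration.
 */
function example_spawn_approval_subflows($parentProcessID, array $approver_uids) {
  foreach ($approver_uids as $uid) {
    // Start a new instance of the child approval template.
    $childProcessID = MaestroEngine::newProcess('document_approval_subflow');
    if ($childProcessID !== FALSE) {
      // Tell the child who its approver is and which parent to report back
      // to, so its final step can signal approved/rejected to the parent.
      MaestroEngine::setProcessVariable('approver_uid', $uid, $childProcessID);
      MaestroEngine::setProcessVariable('parent_process_id', $parentProcessID, $childProcessID);
    }
  }
}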

The Outcome

The workflows were put into production, successfully automating an internal business approval process for JPL. The shuffling of paper, the manual emails and the complete lack of visibility into where processes stall have been eliminated. Using our workflow as a base, JPL has automated a number of other internal processes. Letting Drupal manage the content and Maestro manage the business process has proven to be a success.

Need Automation?

Do you have manual processes that are slowing your business down? Do you still have untapped areas for business improvements?

Sep 19 2018
Sep 19

Gartner recently released an interesting tech note discussing why automated business processes, online integration and the transformation of business workflow should be a focus for businesses. Gartner predicts that by 2022, 50% of digital business technology platform projects will connect events to business outcomes using event-driven intelligent business process management suite (iBPMS)-oriented frameworks - here is a link to the Gartner article.

More than 80% of organizations believe digital transformation is important or very important, and 75% say process automation is a must-do in business.i Forrester agrees, saying process improvement is required for digital transformation and for improving customer experience. Digital technology innovations in the workflow resource ecosystem have already been applied to automate many core business processes. According to AIIM, the top processes being automated include:

  • Accounts Payable/Accounts Receivable 
  • HR, Recruiting, On-Boarding 
  • Contract Management 
  • Records Management 
  • Legal 
  • Technical Documentation 

A number of these top processes are managed by, or involve, the Human Resources (HR) department. HR departments have many document-centric business processes that involve communications or actions by other employees. Human resource management is an essential part of every company: whether it’s hiring new employees, training, or ensuring that local labor laws are complied with, HR processes are vital.

But HR has usually been thought of as a highly manual department; HR staff are used to rolling up their sleeves and getting the job done themselves. All of that is changing, and they can take advantage of available solutions to optimize these processes and improve the overall process and communications. At the same time, these solutions automate the management of online records and the reporting of metrics such as:

  • How long a process, or a specific task within it, takes
  • Where the typical bottlenecks are - one may assume they know, but online metrics are very useful for management
  • How many requests come in per week or month


Onboarding

Employee onboarding is known as one of the most manual HR processes. It includes collecting documents for verification, granting network access, setting up or ordering hardware, provisioning application access, and so on. But all of this can be done automatically, using an online workflow solution.

The automated process provides an easy checklist that can be verified by the human resource management staff, the new employee, and anyone else who is part of the onboarding process. With it, documents can be collected electronically, devices can be delivered without waiting for IT staff to arrive, and tool access takes hours, not weeks.

Solution:

  • Online self-service form allows management or HR personnel to submit onboarding requests.
  • Intuitive form design and workflow design interfaces allow customization of documents and processes to enterprise needs.
  • Workflows can be tailored to job type, and can specify the stakeholders necessary at each stage of the onboarding process.
  • Onboarding requests are automatically routed for proper review and approval.
  • Orientation and training sessions can be orchestrated via the workflow, including notification/reminders to attendees.
  • New hires can be automatically sent a packet of orientation materials.
  • Online orientation forms can be included allowing new employees to submit necessary onboarding documents such as agreements, disclosures, benefit applications, etc.

Benefits:

  • Reduction in manual paperwork reduces errors and delays, driving sizable time and cost savings for the enterprise.
  • HR accelerates response times and new hire processing, building goodwill with all stakeholders.
  • Centralized tracking and reporting allows HR to have real-time overview of all onboarding workflows.
  • Workflows are automatically archived and accessible for auditing.
  • E-signature integration standardizes secure approvals.

An example of an onboarding workflow created using the Nextide Maestro workflow solution for Drupal is shown below. The process is launched by the hiring manager and passes through two levels of approval. Interactive review tasks can automatically route the process back to the hiring manager. Comments can be collected, and task assignment can be determined automatically via lookups of company organization information in a database or LDAP directory.

The solution improves tracking, reduces email and, at the same time, improves visibility and management of a critical process that can make all the difference to a new employee's first day.

HR Onboarding workflow

Once the request is approved, parallel tasks can be launched that make requests to different departments to complete the new employee onboarding request. The workflow can handle processes with dependencies and unique business rules. For example, it's easy to determine from the submitted form whether hardware is required and, if so, submit a request to purchasing. There could also be a separate sub-workflow for purchasing approval if the request includes special hardware or costs exceed a threshold amount.
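As a small illustration of that branching decision, assuming the onboarding request is stored as a Drupal node with a boolean field, the check might look something like the sketch below. The field and function names are invented, and how the result feeds the workflow (for example, via a process variable read by a branching task) depends on your template:

use Drupal\node\Entity\Node;

/**
 * Illustrative sketch: inspect the submitted onboarding request node and
 * report whether a purchasing request is needed. The field name is
 * hypothetical.
 */
function example_onboarding_needs_hardware($nid) {
  $node = Node::load($nid);
  if (!$node || !$node->hasField('field_hardware_required')) {
    return FALSE;
  }
  // Boolean field set when the hiring manager ticks "hardware required".
  return (bool) $node->get('field_hardware_required')->value;
}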

The process can include sending a new hire package to the employee, sending an introduction email to the department, or sending a welcome email on the employee's first day. Any other custom communications or reminders to the new employee or stakeholders can be automated at various stages of the workflow.

Process stakeholders can easily see where a request is by exploring the request details, which include a status bar highlighting the request's current stage in the workflow. Additionally, Maestro keeps track of when the task was assigned, when it was first clicked on by the task owner and when it was completed. These metrics are available for custom reporting and tracking.

Read more about our recent release and features, including information about our demo site built to showcase an online insurance quote process - Maestro 2.1 Release and Demo Site.

Nextide’s expertise is aimed at helping clients automate these manual systems. Our initial approach with clients is always to see if we can fix what is in place today. It can be that simple and inexpensive. A five-minute conversation with a Nextide consultant can quickly determine if we can be of assistance.

Contact Us

i Forrester 2018 White paper

Apr 10 2018
Apr 10

As Drupal module maintainers, we at Nextide need to be constantly updating our modules to add new features or patch issues.  Whether your module is available for download or is a custom module for a client site, you can't expect users to uninstall and reinstall it to pick up new features.  If you have data or configuration changes, update hooks are mandatory to learn.  This post will show how we created a new content entity in a Drupal update hook.

Our Maestro module's first release for Drupal 8 housed the core workflow engine, task console and template builder.  It was a rewrite of the core engine's capabilities and was a major undertaking.  However, as Drupal 8's core matures, and as our deployments of Maestro continue, new features, patches and bug fixes for Maestro are inevitable.  In order to bring those new features to the core product, we had to ensure that anything we added to the module was available to existing users and deployed sites via the Drupal update routine.

One of the first new features of Maestro, not included in the initial release, is the capability to show a linear bird's-eye view of the current progression through a process. This feature requires a new content entity to be created. A fresh install of Maestro would install the entity; however, for existing sites, uninstalling and reinstalling Maestro is not an option. A Drupal update hook is required to install the entity. What we found was that the available documentation describing how to create a new content entity via an update hook was nearly non-existent. Some of the top Google results showing how others have solved this issue provide outdated and even dangerous methods to solve the problem.

 

Define Your Content Entity

The first step is to define your entity. Drupal 8's coding structure requires that you place your content entity in your module's /src/Entity folder. There's a good deal of documentation on how to create a content entity for Drupal 8 on module install. Follow the documentation to create the entity's required permissions and routes based on your requirements. You can find our Maestro Process Status entity defined in code here:  https://cgit.drupalcode.org/maestro/tree/src/Entity/MaestroProcessStatus.php
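For readers who haven't built one before, a bare-bones content entity definition looks roughly like the sketch below. This is a minimal illustration only, not the MaestroProcessStatus code linked above, which defines more fields, handlers and access control:

namespace Drupal\my_module\Entity;

use Drupal\Core\Entity\ContentEntityBase;
use Drupal\Core\Entity\EntityTypeInterface;
use Drupal\Core\Field\BaseFieldDefinition;

/**
 * Minimal illustrative content entity (not the real Maestro entity).
 *
 * @ContentEntityType(
 *   id = "my_status_record",
 *   label = @Translation("My Status Record"),
 *   base_table = "my_status_record",
 *   entity_keys = {
 *     "id" = "id",
 *     "uuid" = "uuid",
 *   },
 * )
 */
class MyStatusRecord extends ContentEntityBase {

  /**
   * {@inheritdoc}
   */
  public static function baseFieldDefinitions(EntityTypeInterface $entity_type) {
    $fields = parent::baseFieldDefinitions($entity_type);

    // One example data field; a real entity would define many more.
    $fields['message'] = BaseFieldDefinition::create('string')
      ->setLabel(t('Message'))
      ->setDescription(t('A short status message.'));

    return $fields;
  }

}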

The Maestro Process Status entity code, in conjunction with the appropriate permissions, routes and access control handlers, will install the entity on module install. However, if you already have Maestro installed, configured and executing workflows, running update.php will not install this entity! Update hooks to the rescue...

Creating a Drupal 8 Update Hook

There's good documentation available on how to write update hooks, but what's missing is how to inject a new content entity for already installed modules. If you use the Google machine, you'll come across many posts and answers to this question showing the following as the (wrong) solution:

/**
  * This is an example of WHAT NOT TO DO! DON'T DO THIS!
  *
  */
function hook_update_8xxx() {
  //For the love of Drupal, do not do this.
  \Drupal::entityDefinitionUpdateManager()->applyUpdates(); //No, really, don't do this.
  //Are you still trying to do this?  Don't.
}

WRONG!!!  DON'T DO THIS!!!

This simple update hook will most certainly pick up and install your new entity. It will also apply every other module's pending entity updates along with yours! This can cause catastrophic issues when modules that have not yet been updated have their entities altered by other modules. What you need to do is tell Drupal to install the new entity explicitly with this less-than-obvious piece of code, which I will explain after showing it:

/**
 * Update 8001 - Create maestro_process_status entity.
 */
function maestro_update_8001() {
  //check if the table exists first.  If not, then create the entity.
  if(!db_table_exists('maestro_process_status')) {
    \Drupal::entityTypeManager()->clearCachedDefinitions();
    \Drupal::entityDefinitionUpdateManager()
      ->installEntityType(\Drupal::entityTypeManager()->getDefinition('maestro_process_status'));
  }
  else {
    return 'Process Status entity already exists';
  }
}

The update hook is found in the maestro.install file and I've removed some of the extra Maestro-specific code to simply show how to get your content entity recognized and installed. 

  1. We do a simple check to see if the maestro_process_status table exists.  Since content entities store their data in the database, if the table doesn't exist, our content entity is not installed. 
  2. We clear the cached definitions from the entityTypeManager.  This should force Drupal to read in all of the definitions from storage.
  3. Using the entityDefinitionUpdateManager (also used in the "wrong" example), we use the installEntityType method which takes an entity definition as an input.
  4. We pass in the maestro_process_status definition using the getDefinition method of the entityTypeManager object.

At this point, Drupal installs the entity based on the definition I showed above.  Your content entity is installed, including the database table associated with the entity.
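One side note: db_table_exists() is part of the deprecated db_*() procedural layer, so on newer Drupal 8 codebases the same existence check is typically written against the database service instead; the behaviour is identical:

// Equivalent check using the database service rather than db_table_exists().
if (!\Drupal::database()->schema()->tableExists('maestro_process_status')) {
  // ... install the entity type as shown above.
}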

Mar 21 2018
tom
Mar 21

In my last post, “Untapped Areas for Business Improvements”, I pointed out the areas where intelligent work automation holds the potential for significant returns for your business, and examined some of the more obvious impediments to why so little is done in this area.

I would like to take the discussion to the next level by reviewing the potential rewards. No one is going to make intelligent work automation a priority unless there is some ‘gold’ to be discovered. The key to this is employee productivity. And while I will dedicate most of this article to the rewards, we also need to look at what is at stake by ignoring these suggestions and doing nothing.

In a recent update to an older IDC report, the data shows that employees waste 2.5 hours each day searching for the information they need to complete a task or even do their job, and that complexity in a data-driven environment is increasing. If we can significantly reduce that lost time, there are direct benefits. A report prepared this year by Avanade shows that productivity gains go hand in hand with intelligent automation in digital business. They stress that “embracing intelligent automation will be key to organizations breaking through the productivity plateau and remaining competitive”.

If any of this resonates, then there are some key areas to examine where you can find value. But first, I want to bring our Maestro intelligent work automation tool into the conversation. Maestro for Drupal 8 is a 4th-generation workflow engine that can be applied to almost any work process. Think of it this way: Maestro can transform the use of paper documents and client-based tools into a digital business.

Let’s take ‘product development’ as a work example. Assuming that marketing initiates a request, from here there are a number of individuals and different parts of the organization that need to participate in making a ‘go/no go’ decision. They can vary from engineering, production, legal, financial, sales and IT. Data and various other forms of content need to be captured, shared and applied to a process. Maestro provides the framework to manage the process, retrieve relevant information and do all this in the most expeditious manner.

This is just one example of where work automation can be a game changer. So let’s look at the benefits that can be derived from this example. While there are many, I want to talk about the five that deliver the most value.

Consistency: Intelligent Automation greatly reduces the risk of human error through a variety of features. Integration with other applications to retrieve data or verify information produces more accurate work. Requests to clarify information or ask questions of others are managed in the workflow so as to not slow down the process. Maestro guided processes provide predictability in accuracy and timeliness.

Time Savings: Intelligent Automation can not only accelerate the process but can often eliminate steps where human intervention can cause delays. Initiators can manage variables such as time to respond so that tasks don’t go unaddressed. The integration capabilities of Maestro can access and update data from other key applications without manual intervention.

Predictability: Intelligent Automation can help where there is time sensitivity to the completion of the work cycle. This is essential where clients are involved with fully automated services or where completion dates are a requirement. Maestro manages all aspects of the work process, can escalate matters that get stalled, can reroute work if necessary and can keep the stakeholders aware of all changes or delays in the process. Initiators have access to a dashboard with a graphical display of which stage each flow is at, all in real time.

Metrics: Intelligent Automation can record specific data at every step of the work process. It can reveal problem areas in the process as well as help identify areas of improvement. Time is one such metric that can provide management with a cost analysis of the work. In the example used, some organizations apply to the Canadian SR&ED grant program, where Maestro can capture time and activity data to support such claims, saving significant effort in manually tracking CRA-required documentation.

Reduced Costs: The most significant area to reduce costs is employee productivity. Reducing the time needed to retrieve information and research variable data, and having visibility into the entire process, all contribute to labour savings. Existing Maestro clients have identified productivity gains of 20 to 50% through automation. We offer an application assessment service where we can help you prepare an ROI analysis of your workflow process.

Aggregating all of these benefits allows knowledge workers to focus on bringing value to the business without dealing with mundane and repetitive tasks. Employees feel more empowered, customer response times are lowered, and more work can be done by fewer people.

Sometimes it is difficult to visualize the ‘state of the possible’. To this end we are happy to discuss any potential requirement you may have to assess the suitability of Maestro. In addition, we are currently in the final stages of preparing a fictional demo of an online insurance quote generation system. This will be launched on our site before the end of March 2018. The demo will allow you to experience a completely automated process that is driven by Maestro. Stay tuned, as my next post will be focused on the highlights of the demo.

Nov 02 2017
Nov 02

This is part 4 of the Maestro for Drupal 8 blog series, defining and documenting the various aspects of the Maestro workflow engine. Please see Part 1 for information on Maestro's templates and tasks, Part 2 for the Maestro workflow engine's internals, and Part 3 for information on how Maestro handles logical loopback scenarios.

Expanding on my blog post about Maestro's tasks, this post clarifies the usage of some of the options in the task editor for interactive tasks. Interactive tasks present the actor assigned to a task defined in the workflow template with an interface that allows them to "interact" with the workflow and complete the task. Interactive task types are human-executed tasks, so the workflow's progression through a template is held up by incomplete (not executed) interactive tasks while the workflow waits for human intervention. The Interactive (Function*) task and the Content Type task are the two interactive task types included with Maestro. The differences between them are as follows:

  • Interactive Function task types allow the workflow administrator to configure the task to show the end user a specific task completion interface when clicked on in the task console. The task completion interface is a function written in a Drupal module.
  • Content Type task types allow the workflow administrator to configure the task to show an existing content type for editing in the task console.

(* - In the Maestro interface, the Interactive Function task type is simply called an "Interactive Task".)

For all Interactive Task Types: Assignment Details

Interactive, or human-executed, tasks are assigned in one of two ways: via a fixed value or via a variable.
 

Interactive Task Edit - Fixed value assignment

As shown in the image, when assigning by a fixed value you have the option of assigning to a user or a role out of the box. The assignments are then shown in a table listing what the task is assigned to, how it is assigned and who is assigned. Assignment by fixed value means that each time the task is created in the queue for execution, those users or roles will always be assigned. There can be no deviation from the assignment when done by fixed value unless assignments are altered manually or via the APIs.

Assignment by variable allows the administrator to assign a task to a user or role (out of the box) based on the value stored in a template variable.

Interactive Task Edit - Assignment by variable

Assignment by variable produces dynamically assigned workflows, where assignments are based on inputs to the workflow, stored in variables, which can be unique every time the workflow executes. Assignments to users or roles are supported out of the box and are done via the user account name or role name.
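As a hedged illustration of how such a variable gets its value, an earlier step in the flow (or custom code reacting to a submission) can store the chosen assignee's account name in the template variable that the interactive task is configured to read. The variable name below is hypothetical, and the setProcessVariable() signature should be checked against your Maestro release:

use Drupal\maestro\Engine\MaestroEngine;

// Hypothetical: store the account name of the chosen approver in the
// 'approver' template variable so that an interactive task assigned
// "by variable: approver" is created with that user attached.
MaestroEngine::setProcessVariable('approver', 'jsmith', $processID);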


For all Interactive Task types: Notification Details

Similar to assignments, notifications can be sent by fixed value to a role or user, or by variable to roles or users.

A portion of the Interactive Task Notification Assignments


There are three notification types:

1. Assignment
    When a task is assigned, a notification email is sent to the assigned actor or role with an assignment message.  You can customize the assignment message.

2. Reminder
    Based on the "Reminder After (days)" input field, a reminder notification email will be sent to the assigned actor or role.  The message is customizable as well.

3. Escalation
    Based on the "Escalation After (days)" input field, an escalation notification email will be sent to the assigned actor or role.  The message is also customizable.


Edit Options

Interactive (Function) tasks:

The Interactive Function task allows workflow admins and developers to create custom UIs for their workflows.  

Interactive Task General Edit Options

Out of the box, the Interactive Task has a simple set of Accept or Reject buttons. In our Form Approval workflow example (an example module included with Maestro), we have a customized modal manager-approval Interactive Task that shows the actor the form submitted by the user. The "Handler" field is where you specify the function you wish the Task Console to use to show the assigned actor their completion options.

In the image shown, the handler field is filled in with a function name. When filled in, Maestro hands off the creation of the Interactive Task's user-facing UI to the function listed in the Handler field. When left blank, Maestro defaults to a simple Accept/Reject button UI.

The "Return Path" option specifies where to redirect the user to after completion.  By default this is set to "taskconsole", which means that the task will redirect to the url "/taskconsole" after completion.

The "Task presentation" drop down gives you two options:  Modal or Full Page.  The Modal option will pop up the task's user facing UI in a modal dialog window, whereas the Full Page option takes the user to a full page rendering of the handler.


Content Type tasks:

The Content Type task, when executed from the task console, directs the assigned user to the edit page of a content type.

Content Type Task Edit Options

The editable options begin with "Content type".  In this field you enter the machine name of the content type you wish to show the user.  In our Form Approval example workflow, we've created a content type with the machine name of "approval_form" as shown in the image.

"Give this piece of content a unique identifier" option is critical for the operation of the task.  In any given workflow, you can have many types of content associated with the flow.  In fact, you can attach multiple different instances of the same content type to your workflow.  Therefore, Maestro needs to know which specific piece of content you wish to present to the user.  In our example image, we have "request" as the unique identifier.  We could have another Content Type task later in our flow that uses the same unique identifier in order for Maestro to show THIS specific piece of content to the end user.  In your workflow, you can have a second Content Type task that has a different unique identifier with the same content type, and that will inject a second unique piece of content into the workflow to manage.

The "Show a Save and Edit Later button..." option will show a "Save and Edit Later" option on the bottom of the content type.  The button allows users to fill in the content, saving it, but not completing the content type task.  The task will remain in their console until such time they save and complete the task.  Keep in mind, holding an interactive task uncompleted in a task console means the workflow after the task is stalled until completion.

"Return Path" is functionally identical to that of the Interactive tasks's Return Path option.  In our example, Maestro will redirect back to /taskconsole.

Nov 02 2017
Nov 02

This is part 3 of our series on developing a decoupled Drupal client application with Ember. If you haven't yet read the previous articles, it would be best to review Part 1 first. In this article, we are going to clean up the code to remove the hard-coded URL for the host, move the login form to a separate page and add a basic header and styling.

We currently have the host URL defined in both the adapter (app/adapters/application.js) for the Ember Data REST calls and the AJAX service that we use for authentication (app/services/ajax.js). This is clearly not a good idea, but it helped us focus on the initial goal and our simple working app.

Ember CLI ships with support for managing your application's environment. Ember CLI will set up a default environment config file at config/environment. Here, you can define an ENV object for each environment, which are currently limited to three: development, test, and production.

The ENV object contains several important keys but we will focus on:

  • APP can be used to pass flags/options to your application instance.
  • environment contains the name of the current environment (development, production or test).

You can access these environment variables in your application code by importing from your-application-name/config/environment.

Edit the emberapp/config/environment.js file and, under the APP key, add a property for API_HOST:

    APP: {
      // Here you can pass flags/options to your application instance
      // when it is created
      API_HOST: '<enter the url for your site>',
    }

You will notice in this config file that you can set properties for the three different environments. For now we are only using the development environment, and we leave you to explore further in the Ember Guides section online.

Edit our adapter and import the config file to use the APP.API_HOST property instead of the hard-coded host URL.

// app/adapters/application.js
import DS from 'ember-data';
import config from '../config/environment';

export default DS.JSONAPIAdapter.extend({
    host: config.APP.API_HOST,
    namespace: 'jsonapi',

You should still be able to retrieve the article listing from your site at localhost:4200/articles. 

Do the same now for services/ajax.js and verify that you can still log in.

// app/services/ajax.js
import Ember from 'ember';
import AjaxService from 'ember-ajax/services/ajax';
import config from '../config/environment';

export default AjaxService.extend({
    host: config.APP.API_HOST,

Add a basic menu and styling

Add the following css to app/styles/app.css

/* Add a black background color to the top navigation */
.topnav {
    background-color: #333;
    overflow: hidden;
}

/* Style the links inside the navigation bar */
.topnav a {
    float: left;
    display: block;
    color: #f2f2f2;
    text-align: center;
    padding: 14px 16px;
    text-decoration: none;
    font-size: 17px;
}

/* Change the color of links on hover */
.topnav a:hover {
    background-color: #ddd;
    color: black;
}

/* Add a color to the active/current link */
.topnav a.active {
    background-color: #4CAF50;
    color: white;
}

Now, edit templates/application.hbs and add our basic menu with links to Login and Logout that will switch depending on whether we are logged in or not. This won't work yet, as we are missing some pieces.

{{! app/templates/application.hbs }}
<h1>BasicEmber App using Drupal</h1>

<div class="topnav" id="myTopnav">
{{#link-to 'index' class="button"}}Home{{/link-to}}

{{#if session.isAuthenticated}}
  {{#link-to 'user' class="button"}}Logout{{/link-to}}
{{else}}
  {{#link-to 'user' class="button"}}Login{{/link-to}}
{{/if}}
</div>

<div style="margin-top:20px;padding:20px;">
    {{#if session.isAuthenticated}}
        <button {{action "logout"}}>Logout</button>
    {{else}}
        <form {{action 'authenticate' on='submit'}}>
            {{input value=name placeholder='Login'}}
            <div style="margin-top:5px;">{{input value=pass placeholder='Password' type='password'}}</div>
            <div style="padding-top:10px;"><button type="submit">Login</button></div>
        </form>
    {{/if}}
</div>

{{outlet}}

We need to create the /user route and we are also going to refactor our login form into a component. Let's create the default stub code using Ember CLI.

  • ember g route user
  • ember g component user-login

Components are a vital part of an Ember application. They allow you to define your own application-specific HTML tags and implement their behavior using JavaScript. An Ember component consists of a Handlebars template file and a JavaScript file under app/components. We will now move the login form HTML to the new user-login component template file.

{{! app/templates/components/user-login.hbs }}
<div style="margin-top:20px;padding:20px;">
    {{#if session.isAuthenticated}}
        <button {{action "logout"}}>Logout</button>
    {{else}}
        <form {{action 'authenticate' on='submit'}}>
            {{input value=name placeholder='Login'}}
            <div style="margin-top:5px;">{{input value=pass placeholder='Password' type='password'}}</div>
            <div style="padding-top:10px;"><button type="submit">Login</button></div>
        </form>
    {{/if}}
</div>

Edit the application.hbs file and remove the login form HTML and related template code that was moved to the user-login component template, or else we will soon be seeing it on the page twice. We want to display the login form only when we navigate to the /user route. We've added the route using Ember CLI but have not yet added anything to templates/user.hbs, so you're not currently seeing anything on the page.

Edit templates/user.hbs to render the user-login component by simply adding {{user-login}}

{{! app/templates/user.hbs }}
{{user-login}}

The login form should now be appearing, but it won't work yet: we have to move the code for the authenticate and logout actions to components/user-login.js, and then we will have a self-contained component.

// app/components/user-login.js
import Ember from 'ember';

export default Ember.Component.extend({
  ajax: Ember.inject.service(),
  session: Ember.inject.service('session'),

  actions: {

      authenticate() {

          // Use the ember-simple-auth addon authenticate action in authenticator/drupal.js
          let { name, pass } = this.getProperties('name', 'pass');
          console.log('login-page.js - call the authenticator');
          this.get('session').authenticate('authenticator:drupal', name, pass).then(function () {
              console.log('login-page.js - authentication completed successfully')
          }).catch((reason) => {
              this.set('errorMessage', reason.error || reason);
          });
      },

      logout() {
          // Use the ember-simple-auth addon invalidate action in authenticator/drupal.js
          let isAuthenticated = this.get('session.isAuthenticated');
          console.log('Logout Action - isAuthenticated:', isAuthenticated);

          this.get('session').invalidate().then(function (response) {
                  console.log('login-page component - logout promise successful response ', response);
              }
          ).catch(function (error) {
                  console.log('login-page component - logout promise Error ', error);
              }
          );
          return;

      },

  }
});

Be sure to clean up and remove the code from the application controller now - let's strip it back to the default empty controller class and verify the login/logout functionality is working.

// app/controllers/application.js
import Ember from 'ember';

export default Ember.Controller.extend({
    
});

Why is the login menu item not changing? It should change to Logout.

  • We need to inject the session service into the application controller so the template has access to the session.isAuthenticated variable.

 Updated controllers/application.js

import Ember from 'ember';

export default Ember.Controller.extend({
    session: Ember.inject.service('session'),
    
});

The last step is to remove the div wrapper and the login form from application.hbs, since we are now using the user-login component.

{{! app/templates/application.hbs }}
<h1>BasicEmber App using Drupal</h1>

<div class="topnav" id="myTopnav">
{{#link-to 'index' class="button"}}Home{{/link-to}}

{{#if session.isAuthenticated}}
  {{#link-to 'user' class="button"}}Logout{{/link-to}}
{{else}}
  {{#link-to 'user' class="button"}}Login{{/link-to}}
{{/if}}
</div>

{{outlet}}

Now the menu should toggle between Login and Logout.  Next in Part 4 of this series we will add the ability to create a new article.

Nov 02 2017
tom
Nov 02

Many organizations still struggle with the strain of manual processes that touch critical areas of the business. And these manual processes could be costlier than you think: it's not just profit that may be slipping away, but employee morale, innovation, competitiveness and much more.

By automating routine tasks you can increase workflow efficiency, which in turn can free up staff for higher-value work, driving down costs and boosting revenue. And achieving those productivity gains may be simpler, faster and less risky than you assume.

Most companies with manual work processes have been refining them for years, yet they may still not be efficient because they are not automated. So the question to ask is, “can I automate my current processes?”.

Just to be clear, let me add my definition of manual processes: these are processes where human intervention is required at every step. This does not preclude the use of technology such as electronic forms, spreadsheets and collaboration tools, but it relies on a staff member to make decisions at each step.

So what might an automated process look like versus a manual one? The automated process routes work to the right staff member based on predefined conditions. The movement of work can be time-tracked and escalated if required. Integration with your systems and databases can retrieve required information and also push data to other systems. Valuable work metrics can be captured to provide meaningful reporting. And the initiator has complete visibility of the entire workflow at every step. Think of it this way: if you can flowchart the process, then it can be automated.

Let’s look at some areas where manual processes are still likely to exist. Some of these business processes can be substantial in size as well as importance within the organization. Here are some common examples:

  • New product development
  • New employee on-boarding
  • Credit checks and approvals
  • Engineering change requests
  • Custom sales quotations
  • Product customization
  • Resource management
  • QA testing
  • Employee performance reviews

Commercially available packages, such as ERP (Enterprise Resource Planning) systems, have many key features built in based on your industry. Simply put, they are automated business processes defined by specific steps, actions, sequences and conditions, and most importantly, they are repeatable.

Yet most businesses have these ‘other’ processes, as in the examples given above. They typically have many unique attributes and, as a result, have evolved over time as manual processes aided by the injection of standard technology such as spreadsheets, email, electronic forms and other end-user tools. They may also abide by certain methodologies; for example, a ‘New Product Development’ process may follow a ‘Gated Review’ methodology. The urgency here is to make that ‘go/no go’ decision as early as possible.

But they may still have weaknesses: they are prone to errors and inefficiencies. When this happens, a logical recourse is to initiate some sort of review using either internal or external experts.

But this often leads to reapplying tools and procedures that you are already using, and may result in adding more checks and balances, which only adds to the inefficiency. There are more direct approaches to providing fixes, but first let's look at where issues arise with these systems.

There are multiple sources of issues in these manual or semi-automated processes, and they are often the problem areas that don’t get addressed or fixed. Examples include:

  • Human failure - errors aren’t identified/validated at the source, or the ‘right’ staff is unavailable (sick, away, other priorities).
  • Lack of visibility - not knowing what stage the workflow is at, at any given time.
  • Redundancy of data entry - often the same data needs to be re-entered at different times, adding effort and possible errors. Data may end up being stored in different locations and out of sync when changes are made.
  • Lack of training - less-automated systems require more staff training.
  • Key metrics aren’t recorded - a lost opportunity for reporting on resources, costs and/or material usage.
  • Exceptions - everything can run fine until there is an exception and the options aren’t clear or understood.
  • Missed deadlines - accountability for meeting required dates, critical when there are other dependencies on a specific action.
  • Conflicting priorities - employees with specific roles can get distracted by other actions and ignore important task-based operations.
  • Undefined and unmeasured activities - how long do you wait for a critical action to be completed? What are the consequences of a missed task date?
  • Resources - how does team A compare with team B doing the same work? Track activities and results versus tracking just time.
  • Missing information - can result in stalled activities.

If you have good procedures but find you are prone to these types of issues and errors, then there is help available, and there are ways to automate processes without changing what works. Today you should be able to track documents requiring additional information or a simple approval, manage exceptions with the same efficiency as any regular task, reroute work when it is stalled for any reason, and escalate critical activities before the work implodes.

Nextide has been helping clients automate these manual systems for over 10 years. Our expertise is aimed at helping clients so that these processes can be defined and automated. Our initial approach with clients is always to see if we can fix what is in place today. It can be that simple and inexpensive.

I will be adding more on this subject over the coming months, delving into more specific examples of application areas that we help automate and some of the tools used.

Contact Nextide if you want to talk about how we can assist with improving your business processes.

Nov 02 2017
Nov 02

This is part 3 of the Maestro for Drupal 8 blog series, defining and documenting the various aspects of the Maestro workflow engine. Please see Part 1 for information on Maestro's templates and tasks, and Part 2 for the Maestro workflow engine's internals. This post will help workflow administrators understand why Maestro for Drupal 8's validation engine warns about the potential for loopback conditions, known as "Regeneration".

Loopbacks and Regeneration

Since 2003, Maestro has had the concept of "Regeneration". Regeneration in Maestro on Drupal 7 was a rather nebulous concept if you didn't understand the complexities of the original engine. Most administrators would simply turn the regeneration option on, along with the option to "recreate all in-production tasks", and never worry about what it actually meant. Simply put, as humans reading a workflow diagram, we can understand when a logic condition causes a loop-back over already executed tasks. For a machine, however, detecting that we're looping back over already executed tasks produces a set of logical issues that are not easily overcome.

Unfortunately for the machine logic in the Drupal 7 Maestro engine, a loop-back over already executed tasks signalled that something was terribly wrong!  It was a safety mechanism that prevented logic traps when executing a single task multiple times.  The solution was to have an administrative option in Drupal 7 (and prior non-Drupal versions) to tell the engine that we did indeed want to re-create the task in the queue, and that we would like the workflow to re-spawn itself with a wholly new process ID and copy over all in-production tasks and data.  Regeneration was a necessary evil until recently.

We embarked on a project to simplify the engine execution of templates by trying to remove the notion of regeneration on logical loopbacks.  The obvious thing we could have done was to hide the regeneration option and just have it turned on for each task.  Problem solved!  Well, sort of.  Regeneration, while crafty in its approach, produces multiple processes linked via a linked list from the parent to its children.  The cascading of processes caused confusion to new Maestro admins and is a concept that is hard to grasp except for those of us who have lived with the logic of regeneration for nearly 15 years.

The resolution to the regeneration problem was to re-code the engine so that it recognizes a logical loopback condition and determines whether or not it should create tasks that have already been created in the queue.  "Always regenerate" is now the default behaviour in the Maestro D8 engine, but it comes with some caveats explained below.

Consider the workflow of the example Form Approval Flow provided with Maestro:

Workflow showing a regeneration (loopback) situation: the Content Type task is internally flagged as having multiple pointers.


As shown, the content type task is the first task in the task chain that has the capacity to be executed multiple times during the workflow's lifetime.  It is the first "real" task of the workflow (the Start task comes before it, but is just a placeholder) and it can be looped over again and again until the form is approved.  Maestro for D8 will now re-create the content type task as many times as required, all without ever changing the process ID!  This not only simplifies the administration of the engine, but also removes a possible point of failure if a regeneration copy of the process were to fail.  While the tasks following the Content Type task are re-executed during a loopback, it is critical that each task doesn't flag the engine as being "regenerated", which would cause the engine to constantly flag and update the existing open process's tasks as regenerated.  This too is handled by the new Maestro engine in Drupal 8.

Because this task has multiple pointers pointing to it from the Start task and from the Set Process Variable task, and the fact that it is not one of the two special task types of AND or OR (more on this later), a loopback condition is signaled in the engine.  The engine automatically handles the loopback condition, preserving the process ID and all in-production and executed tasks in the queue.

Validating the workflow: the regeneration message shown on the validity check.

When you validate the example Form Approval flow, you should see the warning message shown above.  The message reads, "This task, with two pointers to it, will cause a regeneration to happen".  This message is either confusing to you or exactly what you expect to happen anyway!  What the validation engine is telling you is that because this is not an AND or OR task, the engine will re-create the already-created task for you in the queue.  The validation engine shows this as a warning/informational message only and will still allow the template to be put into production.


Special Tasks - The AND and OR

The AND and OR tasks are the ONLY two tasks that the engine will allow two or more pointers to without signalling a loopback condition.  It is critical as a workflow designer to understand that if you wish to loop back to previously executed tasks, pointing to an AND or OR task will NOT produce a loopback scenario.  Looping back to an AND task is logically incorrect to begin with, as the AND task will never complete if one of its input pointers comes from an area of the template that falls logically after the AND.  Looping back to an OR task, while not logically incorrect, simply does not reproduce the OR.  A loopback to an OR may simply stall the workflow at that point, as the OR has already been created in the queue and does not produce a loopback regeneration condition.

In short, excluding the Start, End, OR and AND tasks, all other tasks produce a logical loopback condition and will signal the engine to re-create already created tasks in the queue for execution.  The workflow administrator's job of figuring out where logical loopbacks may occur, or which options to enable on the workflow, is now a thing of the past.  Maestro for Drupal 8 takes care of the logic and provides the workflow administrator with validation messages predicting where loopback conditions may occur.

Nov 02 2017
Nov 02

The Maestro Engine is the mechanism responsible for executing a workflow template by assigning tasks to actors, executing engine tasks and providing all of the other logic and glue functionality needed to run a workflow.  The maestro module is the core module in the Maestro ecosystem and houses the template, variable, assignment, queue and process schema.  The maestro module also provides the Maestro API, with which developers can interact with the engine to do things such as set/get process variables, start processes and move the queue along, among many other things.

As noted in the preamble for our Maestro D8 Concepts Part 1: Templates and Tasks post, there is jargon used within Maestro to define certain aspects of the engine and data.  The major terms are as follows:

Template:  The workflow pattern which you can edit via the Template Builder module.  Defines the logical pattern the engine should follow.

Tasks:  Tasks are defined in the Template Builder and are either executed by the engine or assigned to actors.  

Template Variables:  Variables defined on a template and used in the Template Builder for logical operations, task operations or task assignments.

Process:  When a template has been started (there are multiple ways to start a template), the template is used by the engine as the path for execution.  The first entity created by the engine when a template is started is the process entity.  Thus a template put into production is a process, and each process has its own unique ID called the process ID.  

Queue:  The queue is the entity that holds information for what the engine is to execute or what is presented to the task console for the assigned actors to execute.  Each task in the template becomes an entry in the queue when a template has been put into production.  Only those tasks which are ready to be executed are created in the queue table by the engine.

Process Variables:  Template variables that have been copied over into the execution process variables entity to track and store variables.  Using the Maestro API, developers can get/set values, tasks can be assigned and logic tests can be done on variable values.  Variable values can be unique from process to process, thus producing routing or logic operations that are different from process to process even if those processes are based on the same template.

Production Assignment:  Tasks are assigned to actors or roles either explicitly or by variable in the template builder.  When a template is put into production, those assignments are translated from the template and injected into the production assignments entity.  Only interactive tasks such as the interactive task, manual web task and content type task (shipped by Maestro at this point) are assignable to human actors.  Thus only those task types will have production assignments in the production assignments entity.  Engine executable tasks are simply executed by Maestro.

The Orchestrator:  The Orchestrator is the crucial piece to the Maestro puzzle.  The Orchestrator is responsible for marshalling the process to completion.  There is one and only one instance of the orchestrator that is allowed to run at any given time, thus forcing the movement of processes forward task-by-task.  It is highly recommended that the Orchestrator be run on a relatively frequent basis via the Orchestrator URL (http://site_url/orchestrator/{token}).  You can also run the Orchestrator via refreshes of the Task Console.  You can configure the Orchestrator by going to /admin/config/workflow/maestro, and setting a token to be used in the Orchestrator URL and turning on/off the execution via the Task Console.  Setting the Orchestrator to run every minute is not uncommon and allows processes to continue without the need to hit the Task Console to move the processes forward.
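
As a hypothetical illustration of the "run it every minute" recommendation (the exact scheduling mechanism is up to you and your hosting environment), a crontab entry on a Linux host could simply request the Orchestrator URL with curl, substituting your own site URL and the token you configured:

  • * * * * * curl -s "http://site_url/orchestrator/{token}" > /dev/null 2>&1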

Task Completion:  When a task completes, the software flags the task's status as being complete.  During the Orchestrator's execution, it reads the status of the task and determines if it has been completed and will flag the task as "archived" once the next task in the template has been created.  

Process Completion:  When the End task is executed, the engine will execute code to flag the process as complete.  If a process has open tasks appearing in users' Task Consoles and the process is flagged as completed, those tasks disappear as only active processes and active process' tasks are actioned upon by Maestro.  If you do not have an End task for any flows that can have multiple end points, the process will never be flagged as complete unless you flag it as such via the Maestro API.

Validation:  New for Maestro on Drupal 8, the validation engine helps ensure that templates are created properly and that the template has a higher probability of executing error free (no guarantees, as the engine has no concept of whether a user SHOULD be assigned to tasks or whether your logic is sound).  Thus when you make changes to a template in the template builder, the template is flagged as invalid and cannot be put into production until it has been validated.

Nov 02 2017
Nov 02

This is part 2 of our series on developing a Decoupled Drupal Client Application with Ember. If you haven't yet read Part 1, it would be best to review it first, as this article continues on by adding authentication and a login form to our application. Shortly, we will explore how to create a new article, but for that we will need authentication working so that we can pass in our credentials when posting the new article.

There are different authentication methods that we could implement but we are going to add Cookie based authentication, which is part of Drupal 8 core. Other authentication methods are provided via contrib for Drupal - refer to the D8 REST documentation pages and the page on Using other Authentication protocols.

The Ember addon that we will be using is Ember Simple Auth. This addon is well maintained and provides the main building blocks that allow us to implement a custom authenticate method backed by a client session store. The session store will be used to persist the account login tokens and our login state information between page refreshes of the Ember client, using local browser storage. The session store this module provides is implemented as an Ember.Service that is injected into our application components and controllers. This service is also very useful for persisting other application-related data.
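
As a small aside, here is a minimal sketch (not something this tutorial depends on) of how that injected session service can persist your own values; the component and the lastChoice key are hypothetical, but anything written under the session's data key survives a page refresh:

// Hypothetical example: stashing app-specific data in the ember-simple-auth session store.
import Ember from 'ember';

export default Ember.Component.extend({
  session: Ember.inject.service('session'),

  rememberChoice(value) {
    // Anything written under the session's 'data' key is persisted
    // (local storage by default) and survives a page refresh.
    this.get('session').set('data.lastChoice', value);
  },

  readChoice() {
    return this.get('session').get('data.lastChoice');
  }
});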

We will also need the Ember-Ajax addon, which is a wrapper around the jQuery ajax method. It's also implemented as an Ember.Service, and we will extend the ajax service to define our host URL; we also get custom error handling and can customize the endpoints or headers as well - refer to the addon's GitHub page for more info.

Ok, so let's look at the code and how we install and set up these addons using the Ember CLI.

Install the two required addons using the Ember CLI

  • ember install ember-ajax
  • ember install ember-simple-auth

Generate the ajax service that we will use to customize the provided ember-ajax service

  • ember generate service ajax

Modify the generated services/ajax.js file as shown below, but be sure to enter the correct URL for the host: property so the AJAX requests are sent to your server. We had to do the same in Part 1 for the Ember Data JSONAPI Adapter. Note: I will show you shortly how to set up an Ember app config file so there is one place to define these environment-specific values. 

import Ember from 'ember';
import AjaxService from 'ember-ajax/services/ajax';

export default AjaxService.extend({
    host: '<enter the url for your site>',

    isSuccess(status, headers, payload ) {
        let isSuccess = this._super(...arguments);
        if (isSuccess && status === 200) {
            console.log('Ajax Service success 200', status, headers, payload);
        }
        else {
            console.log('Ajax Service NOT successful', status, headers, payload);
        }
        return isSuccess;
    }
});
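
As a preview of that config approach (a rough sketch only; we keep the hard-coded host for now and clean this up in Part 3), Ember CLI apps ship with a config/environment.js whose ENV object can carry a custom key, and the service can read it instead of a literal string. The drupalHost key name below is our own invention, and 'emberapp' is the app name created in Part 1:

// config/environment.js (fragment) - add a custom key for the Drupal host.
module.exports = function(environment) {
  var ENV = {
    // ... generated settings omitted ...
    drupalHost: 'http://d8site6.dd:8083'
  };
  return ENV;
};

// app/services/ajax.js - read the host from config instead of hard-coding it.
import AjaxService from 'ember-ajax/services/ajax';
import ENV from 'emberapp/config/environment';

export default AjaxService.extend({
  host: ENV.drupalHost
});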

Now we need to add our custom authenticator for the ember-simple-auth addon. The authenticator implements the authenticate method used to log in, the restore method that restores our saved login state information between requests and client page refreshes, and the invalidate method that is called on logout to destroy our session. We will use the Ember CLI to generate the stub for us and update its registry, and then modify it. The code below is a complete replacement for the drupal.js file - let's call our authenticator drupal. 

  • ember generate authenticator drupal
// app/authenticators/drupal.js
import Ember from 'ember';
import Base from 'ember-simple-auth/authenticators/base';
import {isAjaxError, isBadRequestError, isNotFoundError, isForbiddenError} from 'ember-ajax/errors';

export default Base.extend({
 ajax: Ember.inject.service(),
 username: '',
 
 restore: function(data) {
   console.log('authenticators/drupal.js - restore action', data);
   return new Ember.RSVP.Promise(function (resolve, reject) {
     if (!Ember.isEmpty(Ember.get(data, 'name'))
        && !Ember.isEmpty(Ember.get(data, 'logout_token'))
        && !Ember.isEmpty(Ember.get(data, 'csrf_token'))
      ) {
       resolve(data);
      }
     else {
       reject();
     }
   });
 },

 authenticate(name, pass) {
   const loginData = JSON.stringify({name: name, pass: pass});
   console.log('authenticators/drupal.js - loginData', loginData);
   return this.get('ajax').request('/user/login?_format=json', {
     method: 'post',
     data: loginData,
     xhrFields: {
       withCredentials: true,
     },

   }).then(function (response) {
     console.log('authenticators/drupal.js - successful result', response);
     return { name, logout_token: response.logout_token, csrf_token: response.csrf_token };

   }).catch(function (error) {
     if(isBadRequestError(error)) {
       console.log('authenticators/drupal.js - Bad Login - 400');
       return;
     }
     if(isAjaxError(error)) {
       console.log('authenticators/drupal.js - Ajax Error');
       return;
     }
     console.log('authenticators/drupal.js - error', error);
   });
 },

 invalidate(data) {
   // This is where I add the call to user/logout
   console.log('authenticators/drupal.js - invalidate action', data);
   let logout_token = data.logout_token;
   let csrf_token = data.csrf_token;
   console.log('logout token is', logout_token);
   if(logout_token != null) {
     let result =  this.get('ajax').request('/user/logout?_format=json&token=' + logout_token + '&csrf_token=' + csrf_token, {
       method: 'post',
       xhrFields: {
         withCredentials: true,
       },
     }).then(function (response) {
       // Not executing when we get a 200 response - possibly related to: https://github.com/ember-cli/ember-ajax/issues/101
        // But we are expecting a 204 from Drupal on a successful logout so we are fine
       console.log('authenticators/drupal.js - logout successful result', response);
     }).catch(function (error) {
       console.log('authenticators/drupal.js - Logout Request Error', error);
     });
     return result;
   }
   else {
     console.log('Logout Token is not set');
     this._super(...arguments);
   }
 },
});

A few notes about the above authenticator code

  • We implemented the three methods and injected the ajax service, which is used in the authenticate and invalidate methods to log in and log out. The ajax service returns a promise that makes an async request to the server and awaits the response. The response is then tested: on success we run the then() callback, otherwise we use the returned AJAX error to report the problem in the catch() callback. The Drupal /user/login REST API returns two tokens that need to be saved in our session store. A session cookie is also returned, which the browser sends along automatically on subsequent requests to keep the app logged in. Well, it should, except we will see shortly that there is an issue - more about that in a minute. Let's finish wiring this up so we can test out the login and see the issue.
  • For both ajax requests (this.get('ajax').request), we pass in the URL for the request as well as a settings object for the jQuery AJAX function. We need the xhrFields option to set extra fields on the XHR request for cross-domain requests, since (in my case, and possibly yours) the local Ember app runs on localhost:4200 while the website is accessed at http://d8site6.dd:8083 (using Dev Desktop) - a different domain and a different port. You will have a CORS issue if either the port or the domain differs.

Our app needs a login form so the user can enter their login credentials. Modify the app/templates/application.hbs file as per below. For now we will keep it simple but once we have the login working, we will come back and improve this so the logout button will render instead of the login form.

You will notice that as soon as you save the changes to the application.hbs file, the form appears, but you will see an error in the browser console because, on submit, this form calls an 'authenticate' action which we still need to implement. 

{{! app/templates/application.hbs }}
<h1>My Sample Ember App using Drupal</h1>

<div style="margin-top:20px;padding:20px;">
  <form {{action 'authenticate' on='submit'}}>
      {{input value=name placeholder='Login'}}
      <div style="margin-top:5px;">{{input value=pass placeholder='Password' type='password'}}</div>
      <div style="padding-top:10px;"><button type="submit">Login</button></div>
  </form>
</div>

{{outlet}}

We haven't yet discussed Ember controllers, but this is where we will add our logic to handle the login form actions. Our application currently has a default route when we access http://localhost:4200, and the app renders the application.hbs template. By default, Ember will then look for a controller called application as well. Use the Ember CLI to generate the application controller and then modify it as below, which shows the code for the authenticate action.

  • ember generate controller application
import Ember from 'ember';
import {isAjaxError, isBadRequestError, isNotFoundError, isForbiddenError} from 'ember-ajax/errors';

export default Ember.Controller.extend({
    ajax: Ember.inject.service(),
    session: Ember.inject.service('session'),

    actions: {

        authenticate() {

            // Use the ember-simple-auth addon authenticate action in authenticator/drupal.js
            let { name, pass } = this.getProperties('name', 'pass');
            console.log('call the authenticator');
            var router = Ember.getOwner(this).lookup('router:main');
            this.get('session').authenticate('authenticator:drupal', name, pass).then(function () {
                console.log('authentication completed successfully');
                router.transitionTo('/');
            }).catch((reason) => {
                this.set('errorMessage', reason.error || reason);
            });

        },

    }

});

Test out the login

You will now see the login form and we can test out our app. If you're watching the browser debugger, you will see the debugging information we are writing out via the console.log statements.

Chrome Debugger showing REST login API results.

Oh no, we have an error

We have a 403 error and an error message complaining about the Access-Control-Allow-Credentials header.  Our application is sending the correct request and asking the server to allow a cross-domain request, but the server is not sending back the expected headers acknowledging to the browser that it accepts the request. The reason is Drupal: we need to enable this ability via the services.yml file and tell Drupal to set this header. Change the supportsCredentials option from its default of false to true, then be sure to clear your Drupal site cache or the change won't be picked up. The option lives in the cors.config: section (currently the last lines in the services.yml file):

    # Sets the Access-Control-Allow-Credentials header.
    supportsCredentials: true

Let's try the login now with Drupal site CORS enabled

Success! We get an HTTP 200 from the login request and the tokens are returned. If you explore the browser debugger further, under the Network tab, you can view the request and response headers and see that the Drupal site session cookie is also sent. Drupal now sets the headers correctly on the response to satisfy the browser's requirements for CORS.

Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: http://localhost:4200

Screenshot of devtools showing successful login

If the login did not work

  • Be sure you cleared the Drupal site cache after making any changes to services.yml
  • If you're using the same browser but a different tab to run the Ember app while also being logged into the Drupal site, then you will not be able to log in, because both tabs share the same session and Drupal will think you're already logged in. Log out of the Drupal site and try the Ember app login again. I recently created a Drupal issue about this, and Wim Leers has already submitted a working patch to improve the message returned.
  • If needed, purge the sessions table for the site to clear out any active or stale sessions.

Now let's add the Logout functionality

Modify the application.hbs template to test whether we are logged in by checking the session store's isAuthenticated property and, if TRUE, show a logout button that fires the logout action we will add to the application controller.

{{! app/templates/application.hbs }}
<h1>My Sample Ember App using Drupal</h1>

<div style="margin-top:20px;padding:20px;">
    {{#if session.isAuthenticated}}
        <button {{action "logout"}}>Logout</button>
    {{else}}
        <form {{action 'authenticate' on='submit'}}>
            {{input value=name placeholder='Login'}}
            <div style="margin-top:5px;">{{input value=pass placeholder='Password' type='password'}}</div>
            <div style="padding-top:10px;"><button type="submit">Login</button></div>
        </form>
    {{/if}}
</div>

{{outlet}}

Add the logout action now to the controllers/application.js file

        logout() {
            // Use the ember-simple-auth addon invalidate action in authenticator/drupal.js
            let isAuthenticated = this.get('session.isAuthenticated');
            console.log('Logout Action - isAuthenticated:', isAuthenticated);

            this.get('session').invalidate().then(function (response) {
                    console.log('logout promise successful response ', response);
                }
            ).catch(function (error) {
                    console.log('logout promise Error ', error);
                }
            );
            return;

        },

Final Testing

You should now be able to log in and, upon success, the front page login form will change to a logout button. Clicking the logout button fires the logout action in application.js, which calls the invalidate method in authenticators/drupal.js, which in turn makes the server request to /user/logout and passes the logout_token and csrf_token. I found that it was necessary to send the csrf_token as well.

Next, in Part 3 of this series, we will refactor our app, create a separate component for the login, and apply some basic theming to our app with a menu. In Part 4, we will add the ability to create a new article.


Nov 02 2017
Nov 02

Templates and tasks make up the basic building blocks of a Maestro workflow.  Maestro requires a workflow template to be created by an administrator.  When called upon to do so, Maestro will put the template into "production" and will follow the logic in the template until completion.  The definitions of in-production and template are important as they are the defining points for important jargon in Maestro.  Simply put, templates are the workflow patterns that define logic, flow and variables.  Processes are templates that are being executed which then have process variables and assigned tasks in a queue.

Once created, a workflow template allows the Maestro engine to follow a predefined set of steps in order to automate your business process.  When put into production, the template's tasks are executed by the Maestro engine or end users in your system.  This blog post defines what templates and tasks are, and some of the terms associated with them.

Templates define the logical progression of a workflow pattern from a single start point to one or more end points.  Templates are stored in Drupal 8 as config entities provided by the maestro module and are managed through the maestro_template_builder module.  A Maestro template defines a few static and non-deletable elements:

Template machine name:  The machine name of the template is initially derived from the template human-readable label, however, you can edit the machine name to suit your requirements.

Template Canvas height and width:  The height and width, in pixels, of the template as shown in the template editor.  

"initiator" template variable:  The initiator variable appears once a new template has been saved.  You are unable to remove the initiator variable.  The initiator variable is set by the workflow engine when a template is put into production, and its value is the user ID of the person who started the workflow.  The initiator variable is helpful for assigning tasks back to the individual who kicked off a process.  You are able to edit/alter the initiator's value via the Maestro API.

"entity_identifiers" variable:  The entity_identifiers variable also appears once a new template has been saved.  You are also unable to remove the entity_identifiers variable.  entity_identifiers is used to store any entities used by the workflow in a specific format.  As an example, the Content Type Task uses the entity_identifiers variable as a means to store the unique IDs of content created and also to fetch that content for later use.  The format of the variable is as follows:    type:unique_identifier:ID,type:unique_identifier:ID,...  Where 'type' is the type of entity.  For content type tasks, this is set to the actual content type machine name (e.g. article).  'unique_identifier' is used to give each piece of content a unique ID that the engine and task console use to pick which of the entities they should be acting upon.  'ID' is the actual unique ID of the entity, which in the Content Type task's case is the node ID.  While this may sound confusing, it's simply a list of entities which are used in the workflow.  As a workflow designer, you do not have to use entity_identifiers to store unique IDs -- you can create and use variables as you see fit.

Maestro workflow concepts, initiator and entity_identifiers variables: the template variable editor showing initiator, entity_identifiers and a third variable.
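
To make the entity_identifiers format concrete, here is a purely hypothetical example: a process whose Content Type task created an article node with node ID 42 under the admin-chosen unique identifier 'submission', and a second article node 57 under 'review_notes', would store the variable as article:submission:42,article:review_notes:57.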

Start Task: When a template is created, a Start task is automatically generated.  This task is a non-deletable task and always has the machine name of "start".  The workflow engine will always begin execution of a process using the 'start' task (unless you specify via an API spawned process otherwise).

End Task:  The End task is generated automatically when a template is created.  A template can have multiple End tasks, and as such the End task is deletable and can be added back into a template.

Already noted in the Template section above, the initiator and entity_identifiers variables are created by default for each template.  These variables are used primarily by the engine and tasks to store important information about what is going on in the execution of the process.  As a workflow administrator, you can create template variables that can be used by your workflow application to assign tasks to users or roles or to make logical branching determinations based on values.

You can create any number of template variables and assign them a default value.  It is advisable to set default values to avoid logic issues in your workflow when testing for specific values.  Each time your template is put into production, the variables you've created on the template are created in the process.  Process variables and their values are used by the workflow engine for assignment or logic branching.  It is up to you to determine how best to use the variables.

Tasks are used on Templates and are either assigned to actors in the workflow (called Interactive Tasks) or are executed by the Maestro engine (called Engine Tasks).  The following list of tasks are shipped with Maestro D8:

Start Task: Automatically created by the engine for each template and is non-deletable.  This task must be present for a workflow to function.

End Task: You can have any number of End tasks on your template; however, you must have at least one End task in order for your template to be validated for production usage.  The End task is responsible for ending a workflow and properly signalling the engine to close off the process and set the appropriate flags.  If you terminate a workflow branch simply by having no further tasks after it, rather than with an End task, the process will never be flagged as complete and archivable; it will appear to never complete.

And Task:  Logical AND.  This task takes multiple branches of a workflow and ANDs them together, meaning that the flow of the process will HOLD at the AND task until all tasks that point to the AND are complete before continuing execution. 

Or Task:  Logical OR.  This task takes multiple branches of a  workflow and ORs them together, meaning that the flow of the process will NOT hold at the OR task.  The OR is used to combine multiple branches of a workflow together into a single serial point of execution. 

Batch Function: The Batch Function task allows you to create a function that will be executed by the engine.  This is a non-interactive task and requires that the batch function return a TRUE in order to signal the engine that the task has completed.

Content Type Task:  The Content Type task provides an interactive task to the user to fill in a content type and have the content attach itself to the workflow via the "entity_identifiers" variable.  The Content Type task requires the administrator to attach a unique identifier to the content so that the content can be referenced in the workflow across multiple content type tasks.

If Task:  The If task provides logical branching based on the status of the last executed task preceding the IF, or based on the value of a variable.  The IF task provides a TRUE and a FALSE branch and is the mechanism used to generally cause a logical loop-back condition.

Interactive Task:  The Interactive task is a user-executed task that is generally run as a modal dialog in the Maestro Task Console.  Interactive tasks are completely customizable by the workflow developers to present whatever type of information is required to the end user.  Interactive tasks will only complete when an end user assigned to the task completes it.  The workflow will not continue until such time.

Manual Web Task:  The Manual Web task is used to redirect the user to a different Drupal page, or even an external page from the workflow application.  The redirection to a page is done in the Maestro Task Console and provides the Maestro Queue ID (queueid=xxxx) as a parameter in the URL when redirecting to the page.  It is 100% up to the manual web task's ultimate endpoint to complete the task via the Maestro API.  The workflow will not continue until the Manual Web Task's endpoint has completed the task.

Set Process Variable Task: The Set Process Variable task (SPV) is used to set the value of a process variable either through a hard-coded value, by adding or subtracting a value from the variable, or by specifying a function to fetch data with.

Maestro's API in conjunction with the power of Drupal 8's underlying structure means that if a type of task that you require is missing, one can be written.  Examples of both interactive and non-interactive tasks are shipped with Maestro. 

Nextide provides consulting and support services for all of your workflow needs.  Contact us today with your project requirements!

Nov 02 2017
Nov 02

This is the first in a series of articles that will document lessons learned while exploring using Ember as a decoupled client with Drupal.

You will need to have the Ember CLI installed and a local Drupal 8 site (local development is assumed). This initial series of articles is based on Ember 2.14 and Drupal 8.3.5, but my initial development was over 6 months ago with earlier versions of both Ember and Drupal, so this should also work if you have an earlier Ember (2.11 or so) installed.

You should read this excellent series of articles written by Preston So of Acquia on using Ember with Drupal that provides a great background and introduction to Ember and Drupal.

The Ember community has excellent documentation and guides, including how to install Ember and the Ember CLI. The Ember CLI is like drush or Drupal Console and will be used to create the initial Ember app stub code and install Ember addons.

We will be using the JSON API format, which is an emerging specification for REST APIs in JSON. JSON API was initially developed by Yehuda Katz, who is one of the creators of Ember, and thus has tight integration with Ember and Ember Data. There is an open issue and a plan to include the JSON API module as an experimental core module in Drupal 8.4, but for now you will need to install the contrib module. Also see, for reference and background, the drupal.org documentation page for the JSON API module.

In this first article, we will create and set up a basic Ember app to retrieve a listing of nodes (articles) from a default Drupal 8.3.5 install. The default node permissions allow view access for anonymous users; our next articles will explore adding authentication and pagination, and exploring layout control using Bootstrap.

Dependencies to follow along with this introduction:

  • Drupal 8.3.5 install - we used a default install (no edits to the default settings or services.yml files) to start, so we can explore the initial issues and how to resolve them.
  • Ember 2.14 is being used
  • Ember Inspector added to our browser (Chrome in our case)
  • We were using Acquia Dev Desktop, and our local site URL is http://d8site6.dd:8083

Install and enable the following Drupal modules

  • devel
  • devel_generate
  • jsonapi

drush dl devel jsonapi

drush en -y devel devel_generate jsonapi

Generate 20 initial nodes with Devel Generate

drush genc 20

Creating our Ember App

We will be using the Ember CLI to create the Ember app stub code and use its built-in development server for local work. From the root directory of your Drupal installation:

Create the initial Ember application - call it whatever you like.

  • ember new emberapp
  • This will take a few minutes as it's doing a fair amount of work, and it logs its progress to your console.

We can now change directories into the ember app and start the built-in ember server. It's best to do this in a new command window (shell) so that you still have access to the ember cli. The ember app will be automatically rebuilt as we add/edit the app source code.

  • cd emberapp 
  • ember serve
  • The application will be compiled and any errors displayed. Once built, you will see Build successful (7737ms) – Serving on http://localhost:4200

Now bring up the app in your browser and you should get the initial ember app welcome page.

Let's edit the app landing page so we can see how live editing works. You will see the application code is located under emberapp/app and has a very structured layout. Ember has formal conventions and best practices that make it easier to ramp up and understand other developers' code.

  • edit app/templates/application.hbs. Remove the {{welcome-page}} component and add our basic title for now.
  • If you still have the ember server running, you will see the application rebuild and browser refresh automatically with the affected change.
{{! app/templates/application.hbs }}
<h1>My Sample Ember App using Drupal</h1>

{{outlet}}

Next, let's work on retrieving a list of articles using the ember app. Ember is URL-driven so it always starts at the URL. We will create the route stub code using the ember cli and it will automatically update the main app/router.js and create a route handler with a default template that we can customize. You should be in the created <emberapp> directory now so the generated stub code is created as part of the new ember app.

  • ember g route articles  (note: you need to be in the emberapp directory you created)
Blaines-MacBook-Pro:app blaine$ ember g route articles
installing route
  create app/routes/articles.js
  create app/templates/articles.hbs
updating router
  add route articles
installing route-test
  create tests/unit/routes/articles-test.js

We need to define the model to represent the persistent data store. When data changes, or new data is added, the model is saved.  We will create the model using the full entity name so that it matches the JSON object type returned by Drupal. If you navigate to {siteurl}/jsonapi/node/article?_format=api_json, you can see the format of the REST API response; note that the type attribute is set to node--article.
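
For orientation, here is a heavily trimmed sketch of the shape of that JSON API response (the UUID and field values below are made up, and a real response carries many more attributes, links and relationships):

{
  "data": [
    {
      "type": "node--article",
      "id": "2f2d4c30-0000-4e6b-9f10-1234567890ab",
      "attributes": {
        "nid": 1,
        "title": "A generated article title",
        "created": 1504879200,
        "body": {
          "value": "<p>Generated body text...</p>",
          "format": "basic_html"
        }
      }
    }
  ]
}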

// app/models/node--article.js
import DS from 'ember-data';

export default DS.Model.extend({
  nid: DS.attr(),
  uuid: DS.attr(),
  title: DS.attr(),
  created: DS.attr(),
  body: DS.attr()
});

Now we have the model defined and need to update the generated route stub code we created for articles. Our changes will fetch the data to populate the model when the route is used. The Ember route API has quite a few methods but we will just need the model method to have it return all articles - update your app/routes/articles.js as below.

import Ember from 'ember';

export default Ember.Route.extend({
  model() {
    return this.get('store').findAll('node--article');    
  }
});

When the model hook fires, it will make the API request to Drupal. We need to create and modify the Ember adapters for connecting to Drupal. In Ember Data, the Adapter determines how data is persisted to a backend data store, such as the URL format and headers for a REST API.  First we will create the adapter stub code and then modify the default implementation.

  • ember g adapter application
// app/adapters/application.js
import DS from 'ember-data';

export default DS.JSONAPIAdapter.extend({
  host: 'http://d8site6.dd:8083',
  namespace: 'jsonapi',

  pathForType(type) {
      let entityPath;
      switch(type) {
        case 'node--article':
          entityPath = 'node/article';
          break;
      }
      return entityPath;
    },

    buildURL() {
      return this._super(...arguments) + '?_format=api_json';
    }

});

  • Modify the adapters/application.js as noted in the above code.
  • We need to define the host URL and namespace that will be used to build the URL in the buildURL method. Note the call to this._super(). You will see this type of call frequently in Ember code; it is how you call the parent implementation of a method (i.e. the class you are extending), so you can override methods but still access the parent's implementation.
  • Replace the host URL with your site specific URL 
  • There are several Ember Data Adapters but we will be using the JSONAPI Adapter and we extend from that base class. The adapter class has a handful of hooks that are commonly used to customize it and we need to alter the URL path used to make the REST API request. By default, it will use the data model name 'type' that we created. Although the Drupal entity type and data model is node--article, the API URL that we need is node/article so we need to override the pathForType method - reference: https://guides.emberjs.com/v2.14.0/models/customizing-adapters
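
With those overrides in place, the findAll('node--article') call from our route ends up requesting http://d8site6.dd:8083/jsonapi/node/article?_format=api_json (substitute your own host), which is the same endpoint we inspected by hand earlier.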

We are getting close; we now need to update our templates to display the articles. When we created the articles route, it created templates/articles.hbs. We will modify it to iterate over the model records, and it will render automatically into the application.hbs {{outlet}}.

<h2>List of articles</h2>
<ul>
  {{#each model as |article|}}
    <li>{{article.title}}</li>
    <p>{{article.body.value}}</p>
  {{/each}}
</ul>

 

Let's navigate in our browser to the articles route http://localhost:4200/articles

Oh no, you likely see a blank white page which is a good indication of an error. Check the browser console using the browser devtools.

Screenshot of devtools showing CORS issue

We have a CORS issue because we are making the API request from a different origin (domain and port), and this is not allowed by browsers by default for security reasons. There is nothing we can really do from the browser or client side; the server has to grant permission. This is mostly a local development issue, because you would most likely deploy the compiled production Ember app to the server running Drupal - but if not, you need to address the issue in the same way. 

Fortunately for us, as of Drupal 8.2 there is native support for CORS - https://www.drupal.org/node/2715637

  • copy the sites/default/default.services.yml to services.yml and enable the CORS service. The minimum settings to work for now are:

   # Configure Cross-Site HTTP requests (CORS).
   # Read https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS
   # for more information about the topic in general.
   # Note: By default the configuration is disabled.
   cors.config:
     enabled: true
     # Specify allowed headers, like 'x-allowed-header'.
     allowedHeaders: ['Content-Type', 'Access-Control-Allow-Headers']
     # Specify allowed request methods, specify ['*'] to allow all possible ones.
     allowedMethods: []
     # Configure requests allowed from specific origins.
     allowedOrigins: ['*']
     # Sets the Access-Control-Expose-Headers header.
     exposedHeaders: false
     # Sets the Access-Control-Max-Age header.
     maxAge: false
     # Sets the Access-Control-Allow-Credentials header.
     supportsCredentials: false

  • Once the services.yml is in place, clear your Drupal site cache and refresh the browser.

You should now be able to see the article listing.

Screenshot of ember app showing article listing

Next, in Part 2, we will explore adding authentication, and CORS will once again resurface.

Nov 02 2017
Nov 02

We've put together a Maestro overview video introducing you to Maestro for Drupal 8.  Maestro is a workflow engine that allows you to create and automate a sequence of tasks representing any business process. Our business workflow engine has existed in various forms since 2003 and through many years of refinements, it was released for Drupal 7 in 2010. 

If it can be flow-charted, then it can be automated

Now, with the significant updates for Drupal 8, Maestro has been rewritten to take advantage of the Drupal 8 core improvements and module development best practices. Maestro now provides tighter integration with native views and entity support.

Maestro is a solution to automate business workflows, which typically involve the movement of documents or forms for editing and review/approval - business processes that require conditional tests, i.e. IF this THEN that. For example:

  • if this document is approved then send it to the department manager else return to the initiator
  • or ... if this form/document requires a security review, then send it to the security team
  • or ... if the last task set the flag for Purchasing, then launch the purchasing sub-workflow

A number of condition checks (if tasks) can be incorporated through simple admin functions without any coding. Complex business processes which include parallel approvals and serial grouping of tasks with dynamic routing can be modeled. There really is no limit to how large or complex a workflow can be. All business workflows are just a series of interactive tasks, batch tasks, and conditional tasks. The module provides a number of different interactive and batch tasks and the module API makes it easy to extend and provide your own custom tasks. The visual workflow editor will automatically support your custom task types.

Example business workflows:

  • Expense Approval
  • New employee hire (procurement, application access, building access, office manager ....)
  • Product Development (marketing, engineering, sales, design, manufacturing ... )
  • Project Management
  • New Idea Submission
  • Contract Management
  • Document Management / Revision Management
  • Legal
  • Request Tracking

This video will provide an overview of how to create a workflow template (set of tasks) for a simple approval workflow and the admin features like tracking or tracing workflows. 

[embedded content]

  • Link to the maestro project page and download on drupal.org
  • Subscribe to our YouTube channel so you don't miss any videos about how to use Maestro for your workflow automation projects.
Nov 02 2017
Nov 02

The Maestro Workflow Engine for Drupal 8 is now available as a Beta download!  It has been many months of development to move Maestro out of the D7 environment to a more D8 integrated structure and we think the changes made will benefit both the end user and developer.  This post is the first of many on Maestro for D8, which will give an overview of the module and provide a starting point for those regardless of previous Maestro experience.

If you can flowchart it, we can automate it!

Maestro is a business process workflow automation engine which ships with a process template editor and end user task console.  Maestro's slogan is: "If you can flowchart it, we can automate it".  With Maestro's template editor, you can visually design a workflow which has human-interaction-required tasks and machine-executed tasks, complete with logic, looping, branching and variables.

Maestro Workflow Engine for Drupal 8: the Maestro Drupal 8 Template Editor.

Maestro's first generation release was back in 2003, when a rudimentary version of our current workflow engine was developed.  Over the intervening 14 years, Maestro has been steadily updated, and in 2011 Maestro was released as a full version module on Drupal 7.  Maestro's D7 release heralded a new beginning for the engine, with Nextide's focus for Maestro being solely centred on Drupal.  Maestro for D8 is now more integrated with Drupal and provides a rich Drupal 8 development environment, while still delivering the basic functionality out of the box.

Maestro's workflow engine is known as the "Orchestrator".  The Orchestrator is responsible for marshaling workflow processes through their templated paths.  The maestro module contains the workflow engine, the entities and the API used for managing workflow processes.  The Orchestrator is designed to run independently from the rest of Drupal in a similar fashion to Drupal Cron, thus allowing the workflow process to proceed without having to wait for a user to hit the site and bootstrap Drupal.

Entities

Maestro for D8 now uses Drupal 8 config entities for the templates and content entities for the in-production templates (called processes).  The introduction of native Drupal 8 entities for the engine means that site builders can now use Views as a means to create interfaces and reports for active processes.  Maestro ships with a few out-of-the-box views for showing outstanding and in-production tasks to help administrators with their daily routines. 

Maestro Drupal 8 configuration entity: templates are now config entities.

 

API

Maestro's API has been revamped and updated to provide a much more streamlined development experience.  A separate blog post will be used to highlight the basic API.  The maestro_engine provides the basic API for use in your own Maestro-related applications.

The Template Builder is the interface in which workflow administrators will spend a great deal of time.  As shown in the image in the "What is Maestro" section, the interface allows administrators to drag and drop tasks onto a visual workflow designer.  The Maestro D8 template builder is an SVG-based tool that works on all platforms, including iOS and Android browsers.  The editor interface has been updated to allow single-click operations for editing and moving tasks in the interface. 

Developers using the new template builder will be pleased to know that the edit screens for tasks are now full Form API driven interfaces, giving developers the ability to inject their own form fields and manage that data as they see fit.

Task editing via the Form API: the task editing interface.

Maestro for D8 has introduced the concept of validated templates.  In order for a template to be put into production, it must first be validated by the engine.  The validation routine ensures that tasks have the appropriate fields or task pointers in place to allow a template to function as error free as possible.

Ultimate end users of the workflow engine are not administrators, but rather the users in your environment that must complete assigned tasks in order for the workflow to continue.  The task console is the default application provided by Maestro to show users their tasks.  Tasks are assigned to users based on the template and the configuration provided for the task.

Maestro's Task Console.

The task console provides users with a simple interface to execute their tasks.  Maestro developers can customize the functionality, look and feel of the tasks to suit the business' requirements.  A number of APIs and hooks exist for the Maestro Engine and the task console to augment and customize the text and appearance of the console columns themselves.

Nov 01 2017
Nov 01

This is part 4 of the Maestro for Drupal 8 blog series, defining and documenting the various aspects of the Maestro workflow engine.  Please see Part 1 for information on Maestro's Templates and Tasks, Part 2 for the Maestro workflow engine's internals and Part 3 for information on how Maestro handles logical loopback scenarios.

Oct 16 2017
Oct 16

This is part 3 of our series on developing a Decoupled Drupal Client Application with Ember. If you haven't yet read the previous articles, it would be best to review Part 1 first. In this article, we are going to clean up the code to remove the hard-coded URL for the host, move the login form to a separate page, and add a basic header and styling.

We have currently defined the host URL both in the adapter (app/adapters/application.js) for the Ember Data REST calls and in the AJAX service that we use for authentication (app/services/ajax.js). This is clearly not a good idea, but it helped us focus on the initial goal and our simple working app.

Oct 06 2017
tom
Oct 06

Many organizations still struggle with the strain of manual processes that touch critical areas of the business, and these manual processes could be costlier than you think. It’s not just profit that may be slipping away, but employee morale, innovation, competitiveness and so much more.

By automating routine tasks you can increase workflow efficiency, which in turn can free up staff for higher-value work, driving down costs and boosting revenue. And it may be possible to achieve productivity gains more simply, faster, and with less risk than you might assume.

Most companies with manual work processes have been refining them for years, yet they may still not be efficient because they are not automated. So the question to ask is, “can I automate my current processes?”.

Sep 30 2017
Sep 30

This is part 3 of the Maestro for Drupal 8 blog series, defining and documenting the various aspects of the Maestro workflow engine.  Please see Part 1 for information on Maestro's Templates and Tasks, and Part 2 for the Maestro's workflow engine internals.  This post will help workflow administrators understand why Maestro for Drupal 8's validation engine warns about the potential for loopback conditions known as "Regeneration".

Sep 30 2017
Sep 30

Sep 29 2017
Sep 29

This is part 3 of the Maestro for Drupal 8 blog series, defining and documenting the various aspects of the Maestro workflow engine.  Please see Part 1 for information on Maestro's Templates and Tasks, and Part 2 for the Maestro's workflow engine internals.  This post will help workflow administrators understand why Maestro for Drupal 8's validation engine warns about the potential for loopback conditions known as "Regeneration".

Sep 06 2017
Sep 06

The Maestro Engine is the mechanism responsible for executing a workflow template by assigning tasks to actors, executing engine tasks and providing all of the other logic and glue functionality needed to run a workflow.  The maestro module is the core module in the Maestro ecosystem and houses the template, variable, assignment, queue and process schema.  The maestro module also provides the Maestro API, which developers can use to interact with the engine to do things such as setting and getting process variables, starting processes and moving the queue along, among many other things.

As noted in the preamble for our Maestro D8 Concepts Part 1: Templates and Tasks post, there is jargon used within Maestro to define certain aspects of the engine and data.  The major terms are as follows:

Template:  The workflow pattern which you can edit via the Template Builder module.  Defines the logical pattern the engine should follow.

Tasks:  Tasks are defined in the Template Builder and are either executed by the engine or assigned to actors.  

Template Variables:  Variables defined on a template and used in the Template Builder for logical operations, task operations or task assignments.

Process:  When a template has been started (there are multiple ways to start a template), the template is used by the engine as the path for execution.  The first entity created by the engine when a template is started is the process entity.  A template put into production is therefore a process, and each process has its own unique ID called the process ID.

Queue:  The queue is the entity that holds information for what the engine is to execute or what is presented to the task console for the assigned actors to execute.  Each task in the template becomes an entry in the queue when a template has been put into production.  Only those tasks which are ready to be executed are created in the queue table by the engine.

Process Variables:  Template variables that have been copied over into the execution process variables entity to track and store variables.  Using the Maestro API, developers can get/set values, tasks can be assigned and logic tests can be done on variable values.  Variable values can be unique from process to process, thus producing routing or logic operations that are different from process to process even if those processes are based on the same template.

Production Assignment:  Tasks are assigned to actors or roles either explicitly or by variable in the template builder.  When a template is put into production, those assignments are translated from the template and injected into the production assignments entity.  Only interactive tasks such as the interactive task, manual web task and content type task (shipped by Maestro at this point) are assignable to human actors.  Thus only those task types will have production assignments in the production assignments entity.  Engine executable tasks are simply executed by Maestro.

The Orchestrator:  The Orchestrator is the crucial piece to the Maestro puzzle.  The Orchestrator is responsible for marshalling the process to completion.  There is one and only one instance of the orchestrator that is allowed to run at any given time, thus forcing the movement of processes forward task-by-task.  It is highly recommended that the Orchestrator be run on a relatively frequent basis via the Orchestrator URL (http://site_url/orchestrator/{token}).  You can also run the Orchestrator via refreshes of the Task Console.  You can configure the Orchestrator by going to /admin/config/workflow/maestro, and setting a token to be used in the Orchestrator URL and turning on/off the execution via the Task Console.  Setting the Orchestrator to run every minute is not uncommon and allows processes to continue without the need to hit the Task Console to move the processes forward.

Task Completion:  When a task completes, the software flags the task's status as being complete.  During the Orchestrator's execution, it reads the status of the task and determines if it has been completed and will flag the task as "archived" once the next task in the template has been created.  

Process Completion:  When the End task is executed, the engine will execute code to flag the process as complete.  If a process has open tasks appearing in users' Task Consoles and the process is flagged as completed, those tasks disappear, as only active processes and active processes' tasks are actioned upon by Maestro.  If a flow with multiple end points is missing an End task on one of its branches, the process will never be flagged as complete unless you flag it as such via the Maestro API.

Validation:  New for Maestro on Drupal 8, the validation engine helps ensure that templates are created properly and have a higher probability of executing error-free (no guarantees, as the engine has no concept of whether a user SHOULD be assigned to tasks or whether your logic is sound).  When you make changes to a template in the template builder, the template is flagged as invalid and cannot be put into production until it has been validated.

Sep 01 2017
Sep 01

This is part 2 of our series on developing a decoupled Drupal client application with Ember. If you haven't yet read Part 1, it would be best to review it first, as this article continues on by adding authentication and a login form to our application. Shortly, we will explore how to create a new article, but for that we need authentication working so that we can pass our credentials along when posting the new article.

There are different authentication methods that we could implement, but we are going to add cookie-based authentication, which is part of Drupal 8 core. Other authentication methods are provided via contrib modules - refer to the D8 REST documentation pages and the page on Using other Authentication protocols.

The Ember addon that we will be using is Ember Simple Auth. This addon is well maintained and provides the main building blocks that allow us to implement a custom authenticator with a client-side session store. The session store persists the account login tokens and our login state between page refreshes of the Ember client using local browser storage. The session store this module provides is implemented as an Ember.Service that is injected into our application components and controllers, and it is also very useful for persisting other application-related data.

We will also need the Ember-Ajax addon, which is a wrapper around the jQuery ajax method. It is also implemented as an Ember.Service; we will extend the ajax service to define our host URL, and we also get access to custom error handling and the ability to customize endpoints or headers - refer to the addon's git page for more info.

OK, so let's look at the code and how we install and set up these addons using the Ember CLI.

Install the two required addons using the Ember CLI

  • ember install ember-ajax
  • ember install ember-simple-auth

Generate the ajax service that we will use to customize the provided ember-ajax service

  • ember generate service ajax

Modify the generated app/services/ajax.js file as shown below, making sure to set the host: property to the correct URL so the AJAX requests are sent to your server. We had to do the same in Part 1 for the Ember Data JSONAPI adapter. Note: a minimal sketch of using an Ember app config file as a single place for these environment settings follows the code below.

import Ember from 'ember';
import AjaxService from 'ember-ajax/services/ajax';

export default AjaxService.extend({
    host: '<enter the url for your site>',

    isSuccess(status, headers, payload ) {
        let isSuccess = this._super(...arguments);
        if (isSuccess && status === 200) {
            console.log('Ajax Service success 200', status, headers, payload);
        }
        else {
            console.log('Ajax Service NOT successful', status, headers, payload);
        }
        return isSuccess;
    }
});
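
One way to keep that host value in a single place, as noted above, is the Ember app's config/environment.js. The sketch below is only one possible approach and is not part of the original tutorial code: the host property name is our own choice, and 'emberapp' is the application name created in Part 1, so adjust both to suit your project.

// config/environment.js (generated by the Ember CLI) - add one property to ENV
module.exports = function(environment) {
  let ENV = {
    // ...keep the generated settings (modulePrefix, environment, rootURL, etc.) as they are...
    host: 'http://d8site6.dd:8083'   // point this at your own Drupal site URL
  };
  // ...rest of the generated file is unchanged...
  return ENV;
};

// app/services/ajax.js - read the host from config instead of hard-coding it
import Ember from 'ember';
import AjaxService from 'ember-ajax/services/ajax';
import config from 'emberapp/config/environment';

export default AjaxService.extend({
  host: config.host,
  // ...the isSuccess() hook shown above stays exactly as it is...
});

The same config.host value can also be used for the Ember Data adapter host from Part 1 so the two stay in sync.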

Now we need to add our custom authenticator for the ember-simple-auth addon. The authenticator implements the authenticate method to log in, the restore method that restores our saved login state between requests and client page refreshes, and the invalidate method that is called on logout to destroy our session. We will use the Ember CLI to generate the stub for us and update its registry, and then modify it. The code below is a complete replacement for the drupal.js file - let's call our authenticator drupal.

  • ember generate authenticator drupal
// app/authenticators/drupal.js
import Ember from 'ember';
import Base from 'ember-simple-auth/authenticators/base';
import {isAjaxError, isBadRequestError, isNotFoundError, isForbiddenError} from 'ember-ajax/errors';

export default Base.extend({
 ajax: Ember.inject.service(),
 username: '',
 
 restore: function(data) {
   console.log('authenticators/drupal.js - restore action', data);
   return new Ember.RSVP.Promise(function (resolve, reject) {
     if (!Ember.isEmpty(Ember.get(data, 'name'))
        && !Ember.isEmpty(Ember.get(data, 'logout_token'))
        && !Ember.isEmpty(Ember.get(data, 'csrf_token'))
      ) {
       resolve(data);
      }
     else {
       reject();
     }
   });
 },

 authenticate(name, pass) {
   const loginData = JSON.stringify({name: name, pass: pass});
   console.log('authenticators/drupal.js - loginData', loginData);
   return this.get('ajax').request('/user/login?_format=json', {
     method: 'post',
     data: loginData,
     xhrFields: {
       withCredentials: true,
     },

   }).then(function (response) {
     console.log('authenticators/drupal.js - successful result', response);
     return { name, logout_token: response.logout_token, csrf_token: response.csrf_token };

   }).catch(function (error) {
     if(isBadRequestError(error)) {
       console.log('authenticators/drupal.js - Bad Login - 400');
       return;
     }
     if(isAjaxError(error)) {
       console.log('authenticators/drupal.js - Ajax Error');
       return;
     }
     console.log('authenticators/drupal.js - error', error);
   });
 },

 invalidate(data) {
   // This is where I add the call to user/logout
   console.log('authenticators/drupal.js - invalidate action', data);
   let logout_token = data.logout_token;
   let csrf_token = data.csrf_token;
   console.log('logout token is', logout_token);
   if(logout_token != null) {
     let result =  this.get('ajax').request('/user/logout?_format=json&token=' + logout_token + '&csrf_token=' + csrf_token, {
       method: 'post',
       xhrFields: {
         withCredentials: true,
       },
     }).then(function (response) {
       // Not executing when we get a 200 response - possibly related to: https://github.com/ember-cli/ember-ajax/issues/101
       // But we are expecting a 204 from Drupal on a successful logout so we are fine
       console.log('authenticators/drupal.js - logout successful result', response);
     }).catch(function (error) {
       console.log('authenticators/drupal.js - Logout Request Error', error);
     });
     return result;
   }
   else {
     console.log('Logout Token is not set');
     this._super(...arguments);
   }
 },
});

A few notes about the above authenticator code

  • We implemented the three methods and injected the ajax service, which is used in the authenticate and invalidate methods to log in and log out. The ajax service returns a promise that makes an async request to the server and waits for the response. If the request succeeds we handle the response in then(); otherwise we use the returned AJAX error to report the problem in catch(). The Drupal /user/login REST API returns two tokens that need to be saved in our session store. A session cookie is also returned, which the browser sends along automatically on subsequent requests to keep the app logged in. At least it should; we will see shortly that there is an issue, but more about that in a minute. Let's finish wiring this up so we can test the login and see the issue. (A sketch of how the saved tokens can later be attached to requests follows these notes.)
  • For both ajax requests (this.get('ajax').request), we pass in the URL for the request as well as a settings object for the jQuery AJAX function. We need the xhrFields option to set withCredentials on the XHR request for cross-domain requests. In my case (and maybe yours) the local Ember app runs on localhost:4200 while the website is accessed at http://d8site6.dd:8083 (using DevDesktop), so the domain and the port are different. You will have a CORS issue if either the port or the domain differs.
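
As mentioned in the notes above, the tokens we store in the session will be needed again once we start posting content: Drupal's cookie-based REST setup expects an X-CSRF-Token header on write requests (POST/PATCH/DELETE). The snippet below is only a hedged sketch of how that could look using ember-ajax's headers property and ember-simple-auth's session data; we will wire this up for real when we actually create an article in a later part.

// app/services/ajax.js - sketch: attach the stored CSRF token to outgoing requests
import Ember from 'ember';
import AjaxService from 'ember-ajax/services/ajax';

export default AjaxService.extend({
  session: Ember.inject.service('session'),
  host: '<enter the url for your site>',

  // ember-simple-auth exposes whatever our authenticate() method resolved with
  // under session.data.authenticated, so the csrf_token we saved is available here.
  headers: Ember.computed('session.data.authenticated.csrf_token', {
    get() {
      let headers = {};
      const csrfToken = this.get('session.data.authenticated.csrf_token');
      if (csrfToken) {
        // Drupal expects this header on cookie-authenticated write requests.
        headers['X-CSRF-Token'] = csrfToken;
      }
      return headers;
    }
  })

  // ...the isSuccess() hook from earlier stays as-is...
});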

Our app needs a login form so the user can enter their login credentials. Modify the app/templates/application.hbs file as shown below. For now we will keep it simple; once we have the login working, we will come back and improve this so a logout button renders instead of the login form.

You will notice that as soon as you save the changes to application.hbs, the form appears, but you will see an error in the browser console because, on submit, this form calls an 'authenticate' action which we still need to implement.

{{! app/templates/application.hbs }}
<h1>My Sample Ember App using Drupal</h1>

<div style="margin-top:20px;padding:20px;">
  <form {{action 'authenticate' on='submit'}}>
      {{input value=name placeholder='Login'}}
      <div style="margin-top:5px;">{{input value=pass placeholder='Password' type='password'}}</div>
      <div style="padding-top:10px;"><button type="submit">Login</button></div>
  </form>
</div>

{{outlet}}

We haven't yet discussed Ember controllers, but this is where we will add our logic to handle the login form actions. Our application as it stands has a default route when we access http://localhost:4200, and the app renders the application.hbs template. By default, Ember will then look for a controller called application as well. Use the Ember CLI to generate the application controller and then modify it as shown below, which adds the code for the authenticate action.

  • ember generate controller application
// app/controllers/application.js
import Ember from 'ember';

export default Ember.Controller.extend({
    ajax: Ember.inject.service(),
    session: Ember.inject.service('session'),

    actions: {

        authenticate() {

            // Use the ember-simple-auth addon authenticate action in authenticator/drupal.js
            let { name, pass } = this.getProperties('name', 'pass');
            console.log('call the authenticator');
            var router = Ember.getOwner(this).lookup('router:main');
            this.get('session').authenticate('authenticator:drupal', name, pass).then(function () {
                console.log('authentication completed successfully')
                router.transitionTo('/');
            }).catch((reason) => {
                this.set('errorMessage', reason.error || reason);
            });

        },

    }

});

Test out the login

You will now see the login form and can test out our app. If you're watching the browser debugger, you will see the debugging information we are sending out via the console.log statements.

Chrome Debugger showing REST login API results.

Oh no, we have an error

But we have a 403 error and an error message complaining about the Access-Control-Allow-Credentials header.  Our application is sending the correct request and asking the server to allow a cross-domain request, but the server is not sending back the expected headers acknowledging to the browser that it accepts the request. The reason is Drupal: we need to enable this ability via the services.yml file so that Drupal sets this header. Find the supportsCredentials option in the cors.config: section (currently the last line in the services.yml file), set it to true, and then be sure to clear your Drupal site cache or the change won't be picked up.

    # Sets the Access-Control-Allow-Credentials header.
    supportsCredentials: true

Let's try the login now with Drupal site CORS enabled

Success - we get an HTTP 200 from the login request and the tokens are returned. If you explore the browser debugger further, under the Network tab you can view the request and response headers and see that the Drupal site session cookie is also sent. Drupal now sets the headers correctly on the response to satisfy the browser requirements for CORS.

Access-Control-Allow-Credentials: true
Access-Control-Allow-Origin: http://localhost:4200

Screenshot of devtools showing successful login

If the login did not work

  • Be sure you cleared the Drupal site cache after making any changes to services.yml
  • If you're using the same browser with different tabs to run the Ember app while also being logged into the Drupal site, you will not be able to log in, because the browser tabs share the same session and Drupal will think you're already logged in. Log out of the Drupal site and try the Ember app login again. I recently created a Drupal issue about this and Wim Leers has already submitted a working patch to improve the message returned.
  • If needed, purge the sessions table for the site and clear out any active or stale sessions.

Now let's add the Logout functionality

Modify the application.hbs template to test whether we are logged in by checking the session service's isAuthenticated property; if it is true, show a logout button that fires the logout action we will add to the application controller.

{{! app/templates/application.hbs }}
<h1>My Sample Ember App using Drupal</h1>

<div style="margin-top:20px;padding:20px;">
    {{#if session.isAuthenticated}}
        <button {{action "logout"}}>Logout</button>
    {{else}}
        <form {{action 'authenticate' on='submit'}}>
            {{input value=name placeholder='Login'}}
            <div style="margin-top:5px;">{{input value=pass placeholder='Password' type='password'}}</div>
            <div style="padding-top:10px;"><button type="submit">Login</button></div>
        </form>
    {{/if}}
</div>

{{outlet}}

Add the logout action now to the controllers/application.js file

        logout() {
            // Use the ember-simple-auth addon invalidate action in authenticator/drupal.js
            let isAuthenticated = this.get('session.isAuthenticated');
            console.log('Logout Action - isAuthenticated:', isAuthenticated);

            this.get('session').invalidate().then(function (response) {
                    console.log('logout promise successful response ', response);
                }
            ).catch(function (error) {
                    console.log('logout promise Error ', error);
                }
            );
            return;

        },

Final Testing

You should now be able to log in and, upon success, the front-page login form will change to a logout button. Clicking the logout button fires the logout action in application.js, which calls the invalidate method in authenticators/drupal.js, which in turn makes the server request to /user/logout and passes the logout token and csrf_token. I found that it was necessary to send the csrf_token as well.

Next, in Part 3 of this series, we will refactor our app, create a separate component for the login and apply some basic theming to our app with a menu. In Part 4, we will add the ability to create a new article.

Additional background references:

Aug 14 2017
Aug 14

Templates and tasks make up the basic building blocks of a Maestro workflow.  Maestro requires a workflow template to be created by an administrator.  When called upon to do so, Maestro will put the template into "production" and will follow the logic in the template until completion.  The definitions of in-production and template are important as they are the defining points for important jargon in Maestro.  Simply put, templates are the workflow patterns that define logic, flow and variables.  Processes are templates that are being executed which then have process variables and assigned tasks in a queue.

Once created, a workflow template allows the Maestro engine to follow a predefined set of steps in order to automate your business process.  When put into production, the template's tasks are executed by the Maestro engine or end users in your system.  This blog post defines what templates and tasks are, and some of the terms associated with them.

Templates define the logical progression of a workflow pattern from a single start point to one or more end points.  Templates are stored in Drupal 8 as config entities provided by the maestro module and are managed through the maestro_template_builder module.  A Maestro template defines a few static and non-deletable elements:

Template machine name:  The machine name of the template is initially derived from the template human-readable label, however, you can edit the machine name to suit your requirements.

Template Canvas height and width:  The height and width, in pixels, of the template as shown in the template editor.  

"initiator" template variable:  The initiator variable appears once a new template has been saved.  You are unable to remove the initiator variable.  The initiator variable is set by the workflow engine when a template is put into production and is set to the user ID of the person starting the workflow.  The initiator variable is helpful in using to assign tasks back to the individual who kicked off a process.  You are able to edit/alter the initiator's value via the Maestro API.

"entity_identifiers" variable:  The entity_identifiers variable also appears once a new template has been saved.  You are also unable to remove the entity_identifiers variable.  entity_identifiers is used to store any entities used by the workflow in a specific format.  As an example, the Content Type Task uses the entity_identifiers variable as a means to store the unique IDs of content created and also to fetch that content for later use.  The format of the variable is as follows:    type:unique_identifier:ID,type:unique_identifier:ID,...  Where 'type' is the type of entity.  For content type tasks, this is set as the actual content type machine name (e.g. article).  'unique_identifier' is used to give each piece of content a unique ID used in the engine and task console to pick off which of the entities it should be actioning upon.  'ID' is the actual unique ID of the entity where in the Content Type Task's case, is the node ID.  While this may sound confusing, it's simply a list of entities which are used in the workflow.  As a workflow designer, you do not have to use the entity_identifiers to store unique IDs -- you can create and use variables as you see fit.

The template variable editor showing initiator, entity_identifiers and a third variable.

Start Task: When a template is created, a Start task is automatically generated.  This task is non-deletable and always has the machine name of "start".  The workflow engine will always begin execution of a process with the 'start' task (unless you specify otherwise via an API-spawned process).

End Task:  The End task is also generated automatically when a template is created.  A template can have multiple end tasks, so unlike the start task, an end task can be deleted and added back in to a template.

Already noted in the Template section above, the initiator and entity_identifiers variables are created by default for each template.  These variables are used primarily by the engine and tasks to store important information about what is going on in the execution of the process.  As a workflow administrator, you can create template variables that can be used by your workflow application to assign tasks to users or roles or to make logical branching determinations based on values.

You can create any number of template variables and assign them a default value.  It is advisable to set default values to avoid logic issues in your workflow when testing for specific values.  Each time your template is put into production, the variables you've created on the template are created in the process.  Process variables and their values are used by the workflow engine for assignment or logic branching.  It is up to you to determine how best to use the variables.

Tasks are used on Templates and are either assigned to actors in the workflow (called Interactive Tasks) or are executed by the Maestro engine (called Engine Tasks).  The following list of tasks are shipped with Maestro D8:

Start Task: Automatically created by the engine for each template and is non-deletable.  This task must be present for a workflow to function.

End Task: You can have any number of End tasks on your template; however, you must have at least one end task in order for your template to be validated for production usage.  The end task is responsible for ending a workflow and properly signalling the engine to close off the process and set the appropriate flags.  If you terminate a workflow branch without an end task (by simply having no tasks after it), the process will never be flagged as complete and archivable; the process will appear to never complete.

And Task:  Logical AND.  This task takes multiple branches of a workflow and ANDs them together, meaning that the flow of the process will HOLD at the AND task until all tasks that point to the AND are complete before continuing execution. 

Or Task:  Logical OR.  This task takes multiple branches of a  workflow and ORs them together, meaning that the flow of the process will NOT hold at the OR task.  The OR is used to combine multiple branches of a workflow together into a single serial point of execution. 

Batch Function: The Batch Function task allows you to create a function that will be executed by the engine.  This is a non-interactive task and requires that the batch function return a TRUE in order to signal the engine that the task has completed.

Content Type Task:  The Content Type task provides an interactive task to the user to fill in a content type and have the content attach itself to the workflow via the "entity_identifiers" variable.  The Content Type task requires the administrator to attach a unique identifier to the content so that the content can be referenced in the workflow across multiple content type tasks.

If Task:  The If task provides logical branching based on the status of the last executed task preceding the IF, or based on the value of a variable.  The IF task provides a TRUE and a FALSE branch and is the mechanism generally used to create a logical loop-back condition.

Interactive Task:  The Interactive task is a user-executed task that is generally run as a modal dialog in the Maestro Task Console.  Interactive tasks are completely customizable by the workflow developers to present whatever type of information is required to the end user.  Interactive tasks will only complete when an end user assigned to the task completes it.  The workflow will not continue until such time.

Manual Web Task:  The Manual Web task is used to redirect the user to a different Drupal page, or even an external page from the workflow application.  The redirection to a page is done in the Maestro Task Console and provides the Maestro Queue ID (queueid=xxxx) as a parameter in the URL when redirecting to the page.  It is 100% up to the manual web task's ultimate endpoint to complete the task via the Maestro API.  The workflow will not continue until the Manual Web Task's endpoint has completed the task.

Set Process Variable Task: The Set Process Variable task (SPV) is used to set the value of a process variable either through a hard-coded value, by adding or subtracting a value from the variable, or by specifying a function to fetch data with.

Maestro's API in conjunction with the power of Drupal 8's underlying structure means that if a type of task that you require is missing, one can be written.  Examples of both interactive and non-interactive tasks are shipped with Maestro. 

Nextide provides consulting and support services for all of your workflow needs.  Contact us today with your project requirements!

Aug 08 2017
Aug 08

This is the first in a series of articles that will document lessons learned while exploring using Ember as a decoupled client with Drupal.

You will need to have the Ember CLI installed and a local Drupal 8 site (local development assumed). This initial series of articles is based on Ember 2.14 and Drupal 8.3.5, but my initial development was over 6 months ago with earlier versions of both, so this should work if you have an earlier Ember (2.11 or so) installed.

You should read this excellent series of articles written by Preston So of Acquia on using Ember with Drupal that provides a great background and introduction to Ember and Drupal.

The Ember community has excellent documentation and guides, including how to install Ember and the Ember CLI. The Ember CLI is like Drush or Drupal Console and will be used to create the initial Ember app stub code and install Ember addons.

We will be using the JSON API format, which is an emerging specification for REST APIs in JSON. JSON API was initially developed by Yehuda Katz, who is one of the creators of Ember.js, and thus has tight integration with Ember and Ember Data. There is an open issue and plan to include the JSON API module as an experimental core module in Drupal 8.4, but for now you will need to install the module. Also see, for reference and background, the drupal.org documentation page for the JSON API module.

In this first article, we will create and set up a basic Ember app to retrieve a listing of nodes (articles) from a default Drupal 8.3.5 install. The default node permissions allow view access for anonymous users; our next articles will explore adding authentication and pagination, and exploring layout control using Bootstrap.

Dependencies to follow along with this introduction:

  • Drupal 8.3.5 install - we used a default install (no edits to settings.yml files) to start, so we can explore the initial issues and how to resolve them.
  • Ember 2.14 is being used
  • Ember Inspector added to our browser (Chrome in our case)
  • We are using Acquia DevDesktop; the local site URL is http://d8site6.dd:8083

Install and enable the following Drupal modules

  • devel
  • devel_generate
  • jsonapi

drush dl devel jsonapi

drush en -y devel, devel_generate, jsonapi

Generate 20 initial nodes with Devel Generate

drush genc 20

Creating our Ember App

We will be using the Ember CLI to create the Ember app stub code and its built-in development server for local development. From the root directory of your Drupal installation:

Create the initial Ember application - call it whatever you like.

  • ember new emberapp
  • This will take a few minutes as it's doing a fair amount of work and logs its progress to your console.

We can now change directories into the Ember app and start the built-in ember server. It's best to do this in a new command window (shell) so that you still have access to the Ember CLI. The Ember app will be rebuilt automatically as we add or edit the app source code.

  • cd emberapp 
  • ember serve
  • The application will be compiled and any errors displayed. Once built, you will see Build successful (7737ms) – Serving on http://localhost:4200

Now bring up the app in your browser and you should get the initial ember app welcome page.

Let's edit the app landing page so we can see how live editing works. You will see the application code is located under emberapp/app and has a very structured layout. Ember has formal conventions and best practices that make it easier to ramp up and understand other developers' code.

  • edit app/templates/application.hbs. Remove the {{welcome-page}} component and add our basic title for now.
  • If you still have the ember server running, you will see the application rebuild and browser refresh automatically with the affected change.
{{! app/templates/application.hbs }}
<h1>My Sample Ember App using Drupal</h1>

{{outlet}}

Next, let's work on retrieving a list of articles using the ember app. Ember is URL-driven so it always starts at the URL. We will create the route stub code using the ember cli and it will automatically update the main app/router.js and create a route handler with a default template that we can customize. You should be in the created <emberapp> directory now so the generated stub code is created as part of the new ember app.

  • ember g route articles  (note: you need to be in the created emberapp directory)
Blaines-MacBook-Pro:app blaine$ ember g route articles
installing route
  create app/routes/articles.js
  create app/templates/articles.hbs
updating router
  add route articles
installing route-test
  create tests/unit/routes/articles-test.js

We need to define the model that represents the persistent data store. When data changes, or new data is added, the model is saved.  We will create the model using the full entity name so that it matches the type returned in the JSON object from Drupal. If you navigate to {siteurl}/jsonapi/node/article?_format=api_json, you can see the format of the REST API response and the data; note the type attribute is set to node--article.

// app/models/node--article.js
import DS from 'ember-data';

export default DS.Model.extend({
  nid: DS.attr(),
  uuid: DS.attr(),
  title: DS.attr(),
  created: DS.attr(),
  body: DS.attr()
});

Now we have the model defined and need to update the generated route stub code we created for articles. Our changes will fetch the data to populate the model when the route is used. The Ember route API has quite a few methods but we will just need the model method to have it return all articles - update your app/routes/articles.js as below.

import Ember from 'ember';

export default Ember.Route.extend({
  model() {
    return this.get('store').findAll('node--article');    
  }
});

When the model hook fires, it will make the API request to Drupal. We need to create and modify the Ember adapters for connecting to Drupal. In Ember Data, the Adapter determines how data is persisted to a backend data store, such as the URL format and headers for a REST API.  First we will create the adapter stub code and then modify the default implementation.

  • ember g adapter application
// app/adapters/application.js
import DS from 'ember-data';

export default DS.JSONAPIAdapter.extend({
  host: 'http://d8site6.dd:8083',
  namespace: 'jsonapi',

  pathForType(type) {
      let entityPath;
      switch(type) {
        case 'node--article':
          entityPath = 'node/article';
          break;
      }
      return entityPath;
    },

    buildURL() {
      return this._super(...arguments) + '?_format=api_json';
    }

});

  • Modify the adapters/application.js as noted in the above code.
  • We need to define the host URL and namespace that will be used to build the URL in the buildURL method. Note the call to the this._super() method. You will see this type of implementation frequently in Ember code; it is how you call the parent implementation of the method (i.e. the object you are extending), so you can override methods but still access the implementation of your parent class.
  • Replace the host URL with your site-specific URL.
  • There are several Ember Data adapters, but we will be using the JSONAPI Adapter and we extend from that base class. The adapter class has a handful of hooks that are commonly used to customize it, and we need to alter the URL path used to make the REST API request. By default, it will use the data model name 'type' that we created. Although the Drupal entity type and data model is node--article, the API URL that we need is node/article, so we need to override the pathForType method - reference: https://guides.emberjs.com/v2.14.0/models/customizing-adapters  (A hypothetical sketch of a detail route that reuses this same adapter follows this list.)
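
To see the adapter doing its work beyond the listing, here is a small, hypothetical sketch of a detail route that fetches a single article through the same store; the route name, the article_id dynamic segment and the router/template wiring are our own choices and not part of this tutorial's code. Drupal's JSON API addresses individual resources by UUID, so that is the id you would pass in.

// app/routes/article.js - hypothetical detail route (e.g. generated with: ember g route article)
import Ember from 'ember';

export default Ember.Route.extend({
  // With a dynamic segment such as /article/:article_id defined in app/router.js,
  // the same adapter settings (host, namespace, pathForType, buildURL) apply, so
  // this request resolves to {host}/jsonapi/node/article/{uuid}?_format=api_json.
  model(params) {
    return this.get('store').findRecord('node--article', params.article_id);
  }
});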

We are getting close; we now need to update our templates to display the articles. When we created the articles route, the CLI created templates/articles.hbs. We will modify it to iterate over the model records, and it will render automatically into the application.hbs {{outlet}}.

<h2>List of articles</h2>
<ul>
  {{#each model as |article|}}
    <li>
      {{article.title}}
      <p>{{article.body.value}}</p>
    </li>
  {{/each}}
</ul>

 

Let's navigate in our browser to the articles route http://localhost:4200/articles

Oh no, you likely see a blank white page which is a good indication of an error. Check the browser console using the browser devtools.

Screenshot of devtools showing CORS issue

We have a CORS issue because we are making the API request from a different origin (domain and port), which browsers do not allow by default for security reasons. There is nothing we can really do from the browser or client side; the server has to grant permission. This is mostly a local development issue, because you would most likely deploy the compiled production Ember app to the server running Drupal, but if not, the issue is addressed the same way.

Fortunately for us, as of Drupal 8.2 there is native support for CORS - https://www.drupal.org/node/2715637

  • copy sites/default/default.services.yml to services.yml and enable the CORS service. The minimum settings to work for now are:

   # Configure Cross-Site HTTP requests (CORS).
   # Read https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS
   # for more information about the topic in general.
   # Note: By default the configuration is disabled.
   cors.config:
     enabled: true
     # Specify allowed headers, like 'x-allowed-header'.
     allowedHeaders: ['Content-Type', 'Access-Control-Allow-Headers']
     # Specify allowed request methods, specify ['*'] to allow all possible ones.
     allowedMethods: []
     # Configure requests allowed from specific origins.
     allowedOrigins: ['*']
     # Sets the Access-Control-Expose-Headers header.
     exposedHeaders: false
     # Sets the Access-Control-Max-Age header.
     maxAge: false
     # Sets the Access-Control-Allow-Credentials header.
     supportsCredentials: false

  • Once the services.yml is in place, clear your Drupal site cache and refresh the browser.

You should now be able to see the article listing.

Screenshot of ember app showing article listing

Next time we'll explore adding authentication, and CORS will once again resurface.

Aug 01 2017
Aug 01

We've put together a Maestro overview video introducing you to Maestro for Drupal 8.  Maestro is a workflow engine that allows you to create and automate a sequence of tasks representing any business process. Our business workflow engine has existed in various forms since 2003 and through many years of refinements, it was released for Drupal 7 in 2010. 

If it can be flow-charted, then it can be automated

Now, with the significant updates for Drupal 8, Maestro has been rewritten to take advantage of the Drupal 8 core improvements and module development best practices. Maestro now provides tighter integration with native Views and entity support.

Maestro is a solution for automating business workflows, which typically include the movement of documents or forms for editing and review/approval - business processes that require conditional tests, i.e. IF this THEN that:

  • if this document is approved then send it to the department manager else return to the initiator
  • or ... if this form/document requires a security review, then send it to the security team
  • or .. if the last task set the flag for Purchasing .. then launch the purchasing sub workflow

A number of condition checks (if tasks) can be incorporated through simple admin functions without any coding. Complex business processes which include parallel approvals and serial grouping of tasks with dynamic routing can be modeled. There really is no limit to how large or complex a workflow can be. All business workflows are just a series of interactive tasks, batch tasks, and conditional tasks. The module provides a number of different interactive and batch tasks and the module API makes it easy to extend and provide your own custom tasks. The visual workflow editor will automatically support your custom task types.

Example business workflows:

  • Expense Approval
  • New employee hire (procurement, application access, building access, office manager ....)
  • Product Development (marketing, engineering, sales, design, manufacturing ... )
  • Project Management
  • New Idea Submission
  • Contract Management
  • Document Management / Revision Management
  • Legal
  • Request Tracking

This video will provide an overview of how to create a workflow template (set of tasks) for a simple approval workflow and the admin features like tracking or tracing workflows. 

[embedded content]

  • Link to the maestro project page and download on drupal.org
  • Subscribe to our YouTube channel so you don't miss any videos about how to use Maestro for your workflow automation projects.
Jul 31 2017
Jul 31

If you can flowchart it, we can automate it!

The Maestro Workflow Engine for Drupal 8

The Maestro Workflow Engine for Drupal 8 is now available as a Beta download!  It has taken many months of development to move Maestro out of the D7 environment to a more D8-integrated structure, and we think the changes made will benefit both the end user and the developer.  This post is the first of many on Maestro for D8; it gives an overview of the module and provides a starting point regardless of previous Maestro experience.

Maestro is a business process workflow automation engine which ships with a process template editor and end user task console.  Maestro's slogan is: "If you can flowchart it, we can automate it".  With Maestro's template editor, you can visually design a workflow which has human-interaction-required tasks and machine-executed tasks, complete with logic, looping, branching and variables.

Maestro Drupal 8 Template Editor

Maestro's first generation release was back in 2003 where a rudimentary version of our current workflow engine was developed.  Over the intervening 14 years, Maestro has been steadily updated and in 2011, Maestro was released as a full version module on Drupal 7.  Maestro's D7 release heralded a new beginning for the engine with Nextide's focus for Maestro being solely centred on Drupal.  Maestro for D8 is now more integrated with Drupal, provides a rich Drupal 8 development environment while still delivering the basic functionality out of the box.

Maestro's workflow engine is known as the "Orchestrator".  The Orchestrator is responsible for marshaling workflow processes through their templated paths.  The maestro module contains the workflow engine, the entities and the API used for managing workflow processes.  The Orchestrator is designed to run independently from the rest of Drupal in a similar fashion to Drupal Cron, thus allowing the workflow process to proceed without having to wait for a user to hit the site and bootstrap Drupal.

Entities

Maestro for D8 now uses Drupal 8 config entities for the templates and content entities for the in-production templates (called processes).  The introduction of native Drupal 8 entities for the engine means that site builders can now use Views as a means to create interfaces and reports for active processes.  Maestro ships with a few out-of-the-box views for showing outstanding and in-production tasks to help administrators with their daily routines. 

Templates are now Config Entities

 

API

Maestro's API has been revamped and updated to provide a much more streamlined development experience.  A separate blog post will be used to highlight the basic API.  The maestro_engine provides the basic API for use in your own Maestro-related applications.

The Template Builder is the interface where workflow administrators will spend a great deal of their time.  As shown in the image in the "What is Maestro" section, the interface allows administrators to drag-and-drop tasks on to a visual workflow designer.  The Maestro D8 template builder is an SVG-based tool that works on all platforms, including iOS and Android browsers.  The editor interface has been updated to allow single-click operations for editing and moving tasks in the interface.

Developers using the new template builder will be pleased to know that the edit screens for tasks are now full Form API driven interfaces, giving developers the ability to inject their own form fields and manage that data as they see fit.

Task Editing interface

Maestro for D8 has introduced the concept of validated templates.  In order for a template to be put into production, it must first be validated by the engine.  The validation routine ensures that tasks have the appropriate fields or task pointers in place to allow a template to function as error-free as possible.

Ultimate end users of the workflow engine are not administrators, but rather the users in your environment that must complete assigned tasks in order for the workflow to continue.  The task console is the default application provided by Maestro to show users their tasks.  Tasks are assigned to users based on the template and the configuration provided for the task.

Maestro's Task Console

The task console provides users with a simple interface to execute their tasks.  Maestro developers can customize the functionality, look and feel of the tasks to suit the business' requirements.  A number of APIs and hooks exist for the Maestro Engine and the task console to augment and customize the text and appearance of the console columns themselves.

Jun 22 2015
Jun 22

A very common use-case for Maestro is to launch a workflow in order to moderate some piece of content. You may have an expense form as a content type and you wish to have a manager review and approve it before handing it off to other departments for processing.

This post will show you how to fire off a moderation workflow after saving content with Rules.

Step 1: Create a simple test flow

I know you have a super-ultra-complex workflow, but it is best to get off the ground with a simple 1-step flow for the time being!

Step 1 - create a simple test flow

As shown, this is a simple 1 step flow to ensure that you're getting all of the mechanics to work. This is an interactive function task with the handler set to "maestro_show_message".

When you're editing the interactive function task, go to the "Optional" tab, and create a variable named "message" and type in a simple message for its value. This will be a simple placeholder so the task actually does something in the task console.

Assign the task to a user that has rights to view and edit Article content. Using the Admin user is fine for this example to avoid permission problems for the time being. You also will have to create a new process variable for this workflow.

Step 1.5 - create a node_id variable

As you can see, this variable is called "node_id" and will be used to store the created node's ID inside of the process after it has been created.

Step 2: Create a new rule

I trust that you know how to create rules. When you create the rule, ensure that it is based on the event that fires after saving new content. Create a rule condition so the rule only fires when a certain type of content is created. In my example, I'm using the default Article content type that comes out-of-the-box with Drupal.

Step 2 - create a simple rule

At this point, the rule will fire any time a new Article is created.

Step 3: Rule Actions

What's most important about this step is to get the newly created node automagically added to a Maestro workflow.

We need to do this by associating the node's ID with the node_id process variable we created in step 1. You do this by first creating an action to "Add a variable". It must be an integer value, and the variable label and variable name should be "node_id". Maestro will look for these specifically named rules variables in order to try to set its own "node_id" variable from step 1.

Next, create a second action that will "Set a data value". You will set the newly created node_id rule variable with the created content's node:nid.

Finally, you will create a third action to "Launch a Maestro Workflow". In the value drop down, select the workflow you wish to launch. In Step 1, I named ours "Rules based workflow". Check off the "Content Moderation Workflow" checkbox as that will allow Maestro to pre-populate some of the internal Maestro data elements with content linkages. The Process ID section should simply have "Process ID" as the variable label and "process_id" as the variable name.

Your Actions should look similar to this:

Step 3 - rule actions

Done!

At this point, you've created a rule that will pass off node information to Maestro and launch a workflow.

Here's a sample of what my task console looks like when I created an article.

Step 4 - task console

As shown, the task console has a flow created that has a single task. When you expand the task details, you will see the Article content type listed under the Content area. If you have permissions to view/edit, you can click on the content link to the Article and view it.
