Feb 01 2024

Last December, I wrote about AI as a tool for Drupal development – a concept that, for many, sounds like a huge leap forward, and it is. But it’s just the beginning. We’re not far from AI acting as an actual Drupal developer.

AI tools capable of doing developer-like work already exist in various forms, among them GPT-4 and Google Bard. Tools such as these can analyze code, allowing conversations where errors and concerns are discussed and solutions are recommended. The result is often a significant increase in efficiency.

AI can also be leveraged as a development tool to generate code from scratch, seek suggested revisions, receive helpful feedback, and a lot more. At some point, AI may drive the entire Drupal web development process. 

We’re not there yet, and here are five hurdles that need to be cleared.

Current Limitations to AI-Driven Web Development

1. There are limits on the amount of input AI models can handle without having to compress the conversation.

Code creation involves a lengthy back-and-forth conversation between the developer and the AI-powered Large Language Model (LLM). When the maximum input size supported by an LLM is reached, the conversation must be automatically compressed so that it can continue.

During a lengthy code-creation conversation using tools such as Bard and ChatGPT, developers are not alerted when the maximum conversation size has been reached. Instead, the conversation up to that point is automatically compressed and the process moves forward. This results in the AI "forgetting" details, which can reduce the quality of its feedback.

While the compression algorithms used by various LLM apps may not be public, we know that they essentially take chunks of content from the conversation and reduce them to a manageable size. In many cases, this might not be a problem. There is, however, the potential for details to be lost in the process – particularly in an activity such as code creation, where there's no room for ambiguity.
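The real compression algorithms are proprietary, but the general idea can be illustrated with a minimal Python sketch: once a conversation exceeds its budget, older turns are collapsed into a lossy summary and only recent turns survive verbatim. The `compress_history` function and its crude truncation-based "summary" are purely illustrative stand-ins.

```python
def compress_history(messages, max_chars, keep_recent=4):
    """Naive sketch: if the conversation exceeds the budget, collapse
    older messages into a short summary and keep only the most recent
    turns verbatim. Details in the older turns are lost."""
    total = sum(len(m) for m in messages)
    if total <= max_chars:
        return messages
    old, recent = messages[:-keep_recent], messages[-keep_recent:]
    # Stand-in for a real summarization step; this is where detail disappears.
    summary = "Summary of earlier turns: " + " ".join(m[:30] for m in old)
    return [summary] + recent

history = [f"turn {i}: " + "x" * 100 for i in range(20)]
compressed = compress_history(history, max_chars=1000)
```

Note that the first dish mentioned in "turn 0" of a long conversation would survive only in truncated, summarized form – exactly the failure shown in the essay example above.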

To illustrate this limitation, here's a simple example created with GPT-3.5-Turbo that asks for essays about food from each country:

User: Please generate essays on foods for each country of the world.  When I say continue, please generate an essay on the next country until you have generated essays for all countries.  It is okay if you can't fit the whole essay in a single response; I will say continue and then you can proceed with the current essay.  Please order the countries alphabetically and then start with the first country.

GPT: Let's start our culinary journey around the world with Afghanistan.

Afghanistan, a land of rich cultural heritage and breathtaking landscapes, offers a cuisine that reflects its history, geography, and diverse influences. Afghan cuisine is a fusion of flavors from various regions, including Central Asia, the Middle East, and South Asia. It's characterized by hearty dishes, aromatic spices, and a focus on fresh ingredients.

One of the most iconic dishes of Afghan cuisine is Kabuli Pulao...

After generating additional essays in the same conversation up to "China", I asked GPT to remember the first dish it listed in the first country's essay but it was not able to remember that far back and gave an incorrect response:

User: What was the first dish in the first country listed?

GPT: The first dish listed in the essay for the first country, which was Burkina Faso, is Tô, a thick porridge made from millet or sorghum flour, cooked with water and stirred until it reaches a smooth consistency.

One approach to avoiding the potential loss of detail due to compression is to start a new conversation. The challenge, however, is that the new conversation might not accommodate all relevant information without also requiring compression.

Another approach is to ask one-off questions in separate conversations; if the AI does not need the full context from the main conversation, asking those questions elsewhere helps avoid excess clutter, thus reducing the amount of compression needed to maintain the conversation.
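The one-off-question approach can be sketched as a small helper that opens a fresh, minimal context instead of appending to the long main thread. This is an illustrative Python sketch; `respond` is a stand-in for a real chat-completion call.

```python
def ask_one_off(respond, question, context=""):
    """Ask a side question in a fresh, minimal conversation instead of
    appending it to the long main thread, so the main conversation
    accumulates less content that would later need compression."""
    messages = []
    if context:
        messages.append(("system", context))
    messages.append(("user", question))
    return respond(messages)

# Stand-in for a real chat-completion API call.
respond = lambda messages: f"answered with {len(messages)} message(s) of context"

main_history = [("user", f"turn {i}") for i in range(50)]  # long main thread
side_answer = ask_one_off(respond, "What does hook_cron do?")
```

The side question is answered with a single message of context, while `main_history` stays untouched and stays shorter for longer.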

In the near future, it's likely that new versions of GPT and other LLMs will be able to manage larger inputs. As longer, non-compressed conversations become possible, we will move from AI assisting with code snippets to entire files and then, ultimately, to analysis and modification of entire repos, with AIs generating pull requests.

Input limitations are key among the reasons why AI is not yet positioned to step into the role of a Drupal developer. And there are others.
 

2. AI Models are prone to inaccurate guesses.

One issue with leveraging LLMs for development is their tendency to "hallucinate" – producing output that may seem correct but is factually wrong.

I’ve had this happen when asking GPT-4 to generate a snippet of code: it produced code that would be valid if certain functions and methods existed in Drupal. But they don’t. When I asked why it made up an invalid solution, it first apologized and then, when pressed, admitted it did not know and that it cannot "introspect." This makes sense, as LLMs simply guess the next best word. This behavior illustrates the benefit of techniques such as chain-of-thought, where the LLM is asked to talk through its process first rather than immediately output a solution.

Likely, what’s needed to overcome the frequency of hallucinations is a bigger model trained on more data. GPT-4 is bigger than GPT-3 and seems to be less prone to hallucinations. When GPT-4.5 or GPT-5 is released in the next few months, the occurrence of hallucinations is likely to decrease further.

3. Solutions might not match the current version of Drupal.

Another issue that I’ve encountered with GPT-4 is that solutions are sometimes suggested that would have been appropriate for Drupal 7 or Drupal 8, when I’m doing Drupal 10 work. 

This kind of error could be the result of the target Drupal version, mentioned at the beginning of a conversation, being excluded due to compression, or it could be another form of hallucination. To mitigate this, I'll periodically remind the AI that we are talking about Drupal 10 as the conversation progresses.

Sometimes, a suboptimal solution can be corrected by pointing it out to the AI. The AI might then apologize and generate an alternate version of that code snippet.

4. Judgment might simply be bad.

The fourth risk is bad judgment. The AI model might offer solutions that work, but are not necessarily good solutions that follow best practices. When a developer is leveraging AI as a tool for Drupal development, bad advice is something that can be sifted through and discounted. The picture is very different when you are looking to AI to function as a developer, because it might create code that technically works, but is it future-proof or as scalable as code written by an expert human developer?

On one level, the potential for bad judgment can be explained by the fact that LLMs are modeled loosely on the human brain, which also has the potential for mistakes and bad judgment. That said, GPT-4 shows improvements in judgment over GPT-3.5, and there is every reason to expect that judgment will continue to improve with subsequent versions.

5. And of course, security.

Currently, the most prominent AI solutions are hosted remotely by huge companies: OpenAI, Google, Microsoft, and Amazon. As such, there are inherent questions about the security of software developed by AI – particularly if sensitive data is at stake.


There is no guarantee that future AI models will not be trained on sensitive data that hackers might seek to access. At this point, developers need to ask: “If the work I’m doing was shared publicly, would that be okay?”

Given how useful AI tools are, it's worth finding ways to mitigate security risks – for example, sanitizing or encoding sensitive data before submitting it to the AI and then decoding its response. Solutions are already being created to play this role.
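One way that sanitize-then-restore round trip might work is sketched below in Python. This is an illustrative sketch, not an existing tool: secrets are swapped for opaque tokens before the prompt leaves your machine, and the tokens are swapped back in the response.

```python
def sanitize(text, secrets):
    """Replace each sensitive value with an opaque token before the
    text is sent to a hosted LLM. Returns the cleaned text and a
    token-to-secret mapping for restoring the response later."""
    mapping = {}
    for i, secret in enumerate(secrets):
        token = f"__SECRET_{i}__"
        mapping[token] = secret
        text = text.replace(secret, token)
    return text, mapping

def restore(text, mapping):
    """Swap the tokens back for the real values in the LLM's response."""
    for token, secret in mapping.items():
        text = text.replace(token, secret)
    return text

prompt = "Connect to db.example.com with password hunter2"
clean, mapping = sanitize(prompt, ["hunter2", "db.example.com"])
restored = restore(clean, mapping)  # round-trips back to the original
```

Only `clean` would ever be submitted to the remote model; the mapping never leaves the local environment.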


Fortunately, for those of us who are Drupal developers, our work is often open source. When I’m leveraging AI to write an open source Drupal module, I’m not worried that others might get access to the code. In fact, that might actually be a good thing, since future LLMs could suggest that module as a solution.

Imagine a dynamic similar to current Search Engine Optimization (SEO) strategies used to help a website come up higher in search rankings. This kind of thinking is still hypothetical, but in the future, a secondary motivation for integrating AI tools into a website might be to ensure future LLM models have information about the website baked into their training so they can suggest it to users.


 

What’s Next? Future Possibilities for AI Web Development 

Now that we’ve covered what I view as the key reasons why we are not there yet, let’s look at how GPT-5 or GPT-6 might act as a developer.


The first limitation mentioned above is the input size. It's very likely that input capacity will only increase, possibly to the point that an AI will be able to hold an entire website “in its mind.” Vast new possibilities would emerge at that point.

Let’s compare an AI-driven world to current development workflows. Current processes for a Drupal support request can be summed up as follows:

  • A client requests to make a change to their website.
  • A developer estimates the level of effort in hours and a Jira ticket is created.
  • The project manager calculates costs and coordinates with the client for approval to proceed.
  • The project manager schedules the work with the developer.
  • The development team completes the work and forwards it to QA.
  • The client reviews the work and either requests revisions or okays it for deployment.
  • The work is deployed.
  • Post-deploy QA testing is done.
  • The client confirms the work was deployed correctly.

In the future, as tools get smarter and can handle bigger inputs with better judgment, all of these steps could be handled by AI. 

Imagine a Jira ticket containing a "chat widget" with dialogue between an AI project manager – fine-tuned for managing the project and coordinating the work – an "AI developer," and an "AI QA specialist." They would collaborate like humans would, but much faster.

I've experimented with this kind of interaction, manually simulating a conversation between multiple AIs by copying and pasting prompts from one AI's conversation to another, where each AI is given unique information at the beginning of its conversation. Current AI capabilities could actually handle all of the above steps, but the quality and judgment would not be there, and the required troubleshooting and oversight would outweigh any efficiencies.
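That manual copy-and-paste relay can be automated in principle. Below is a minimal Python sketch of the pattern: each agent keeps its own history seeded with a role-specific system prompt, and messages are relayed between them. The `respond` function is a stand-in for a real LLM call; the agent names are hypothetical.

```python
def make_agent(role, system_prompt, respond):
    """Create an agent with its own private conversation history,
    seeded with a role-specific system prompt."""
    history = [("system", system_prompt)]
    def send(message):
        history.append(("user", message))
        reply = respond(role, history)
        history.append(("assistant", reply))
        return reply
    return send

# Stand-in for a real LLM call (e.g. a hosted chat-completion API).
def respond(role, history):
    last = history[-1][1]
    return f"[{role}] ack: {last}"

pm = make_agent("AI-PM", "You coordinate work on Jira tickets.", respond)
dev = make_agent("AI-Dev", "You write Drupal 10 code.", respond)

# Relay a message from the PM agent to the developer agent,
# automating what the manual copy-and-paste experiment did by hand.
msg = pm("Client wants a CSV importer module.")
reply = dev(msg)
```

Each agent sees only its own thread plus whatever is relayed to it – the same isolation the manual experiment created by keeping separate browser tabs.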

Ultimately, this kind of technological capability may raise the level of abstraction so human stakeholders can focus more on the big picture and less on micromanaging smaller details. It will still be possible to "zoom in" on details and refine them, but as AI tools improve, the probability that they will complete tasks correctly the first time will improve as well.
 

Transitioning to AI-Driven Drupal Development

At this point, an interesting next step in transitioning from human developers to AI efficiencies would be incremental automation of each step in the process, until we reach the point that we’re confident in AI’s judgment and capacity.

Here's how this might look:

  • A client submits a request.
  • The AI reads it and applies judgment concerning the level of complexity and how to best proceed.
  • A human developer would then review the AI's recommendations to determine whether the approach is correct.

It's a good idea to start experimenting with AI-driven development workflows now, even though the technology isn't there yet. By prototyping this kind of activity, its performance can be evaluated across a range of criteria as new LLM models are released, and the following questions come into focus:

  • How much code can the LLM properly understand at once?
  • How often does it hallucinate? 
  • How often does it give bad advice or have bad judgment?
  • How quickly does it generate output?
  • How well does it follow directions?
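Questions like these can be tracked systematically. Here is a small, illustrative Python harness for scoring a model against a fixed set of test cases as new versions are released; the sample cases, the `fake_model` stand-in, and the scoring scheme are all assumptions for demonstration.

```python
import time

def evaluate_model(ask, cases):
    """Run a list of (prompt, check) cases against a model and record
    pass/fail plus latency. `ask` is any callable that takes a prompt
    string and returns the model's text output."""
    results = []
    for prompt, check in cases:
        start = time.perf_counter()
        output = ask(prompt)
        elapsed = time.perf_counter() - start
        results.append({"prompt": prompt, "passed": check(output), "seconds": elapsed})
    score = sum(r["passed"] for r in results) / len(results)
    return score, results

# Stand-in model for demonstration; a real harness would call an LLM API.
fake_model = lambda prompt: "hook_form_alter" if "form" in prompt else "unknown"

cases = [
    ("Which Drupal hook modifies a form?", lambda out: "hook_form_alter" in out),
    ("Which Drupal hook runs on cron?", lambda out: "hook_cron" in out),
]
score, results = evaluate_model(fake_model, cases)
```

Rerunning the same cases against each new model release gives a concrete trend line for hallucination rate, instruction-following, and speed instead of an impression.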

The transition to AI-driven web development will be a snowball effect. We're now at the point of using AI to make work more efficient. Next steps will be toward the creation of more automation, as we move toward systems that are self-maintaining and self-upgrading.


Next-Level Security

Moving forward, remotely hosted AI models might not be the only option available. Considering that it wasn't long ago that a room full of mainframes was needed to accomplish what can now be done on a cell phone or watch, it’s realistic to envision advancements in AI technology that put self-hosted models within reach for organizations.

Rather than relying on OpenAI, for example, and having to trust that your data will not be hijacked, it might be just a matter of time before individual companies can run their own data centers and spin up their own LLMs that they can train, customize, and secure.

Security levels of different AI systems might be audited and serve as a competitive differentiator. 
 

Will AI Take Jobs from Web Developers?

Yes.

Ultimately, AI-driven efficiencies will take on much of the work that web developers are currently responsible for. That’s the downside.

The alternative to becoming obsolete is to lean into AI as Promet is doing.  Moving to AI-driven development will be a paradigm shift similar to moving from print media to the Internet; the companies and individuals that embrace AI will prosper. 
 

Interested in continuing the conversation about how AI may impact Drupal development, as well as how it can be leveraged to make your website better, right now? Let’s talk.


 

Dec 07 2023

2023 appears to be the year that Artificial Intelligence emerged from the shadows and into the mainstream, but we have barely scratched the surface of AI's potential. AI is here, and its impact on life and work is developing at an exponential pace.

As this disruptive technology is generating quick answers, streamlining processes, and creating vast new efficiencies, hundreds of possibilities for AI – ranging from healthcare diagnoses, to cybersecurity threat detection, government services, Drupal development, API development, and many, many more – are taking shape.

This post will explore how an AI large language model (LLM), such as ChatGPT, can be used for writing code – specifically, a Drupal module.

Getting Started with an LLM: Prompt Engineering

An LLM is essentially a massive, simulated brain trained on an even more massive amount of data.  LLMs such as ChatGPT are extremely good at identifying the "next best word" in a conversation to the point where human-like intelligence emerges.

Interacting with an LLM is done conversationally, beginning with a "prompt." Think of this process like writing a software program, with conversational English being the programming language.  

The first step in leveraging an LLM is to give it context. Even if the context of what is being asked appears obvious, stating it explicitly can increase the quality of ideas and insights that are returned.

For the purpose of generating code, a good example of a starting prompt could begin with: "You are an expert Drupal 10 developer..."  Since the answer is built one word at a time, an explicit prompt such as this helps to create a framework for success.  
 

What is the “Chain of Thought” Technique in AI?

Another technique called "Chain of Thought" can be very helpful to increase the quality of an AI response. The idea is to list out the steps the AI should take when thinking about a solution. This is similar to writing a software program except communication is at a higher level of abstraction.   

For example, rather than writing a prompt that reads: "As an expert chef, please give me an apple pie recipe," the Chain of Thought technique would provide deeper, step-by-step instructions such as:

"As an expert chef, please think about different, popular apple pie recipes, specifically looking at the crust and the filling. Identify the best aspects from the different recipes. Consider how they could be combined to make an even better recipe. Finally, generate a final recipe."

For more complex topics, the Chain of Thought technique can be broken into multiple steps in a dialogue with an AI tool such as ChatGPT.
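Since a Chain of Thought prompt has a regular shape – persona, explicit reasoning steps, final task – it can be assembled programmatically. The following Python helper is an illustrative sketch of that assembly, using the apple pie example above; the exact template wording is an assumption.

```python
def chain_of_thought_prompt(persona, steps, task):
    """Assemble a Chain of Thought prompt: a persona, a numbered list
    of explicit reasoning steps, then the final task itself."""
    numbered = "\n".join(f"{i}. {step}" for i, step in enumerate(steps, 1))
    return f"As {persona}, please follow these steps:\n{numbered}\nFinally: {task}"

prompt = chain_of_thought_prompt(
    "an expert chef",
    ["Think about different, popular apple pie recipes.",
     "Identify the best aspects from the different recipes.",
     "Consider how they could be combined into a better recipe."],
    "generate a final recipe.",
)
```

The same helper could assemble the Drupal module prompt shown later in this post, with each bullet becoming one reasoning step.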
 

Refining the Prompt

Because an LLM can act as a natural language program interpreter, giving it rules to follow will help it achieve the right results. Here’s an example of a prompt written for the purpose of helping to write prompts:

"You are an expert Prompt Engineer. Your goal is to craft the best possible prompt to be used by ChatGPT. Please follow the process below:"

  • "Your first response will be to ask me what the prompt should be about. I will provide my answer, but we will need to improve it through continual iterations by going through the next steps."
  • "Based on my input, you will generate two sections: a) Revised prompt (provide your rewritten prompt. It should be clear, concise, and easily understood by you), and b) Ask any relevant questions pertaining to what additional information is needed from me to improve the prompt."
  • "We will continue this iterative process with me providing additional information to you, and you refining the original prompt in the 'Revised prompt' section until I decide this process is complete."
  • "Do not actually process the prompt once it is ready. That will be done in a separate conversation."

AI for Code Creation

Now that we've covered various aspects of Prompt Engineering, let's look at using LLM prompts for code generation. Here are some tips:

  • Generate fully completed code. If the solution you're generating is relatively straightforward, it may be easier to ask the AI to generate fully completed code rather than using placeholder comments. GPT-4 has a tendency to use placeholders, which may or may not be helpful depending on the task at hand.
  • Let it know that it is okay to generate a long response. Your prompt could read something like: "It is okay if your response stops abruptly; I will say 'continue.'"
  • Be sure to give it context at the beginning, such as: "You are an expert Drupal 10 developer."
  • Use Chain of Thought to help guide it. By "holding its hand" and providing more detailed steps, the end result is more likely to work correctly.

Sample Chain of Thought Prompt for Creating a Drupal Module

  • You are an expert Drupal 10 developer. 
  • You will be writing a module called "awesome_csv_importer"
  • This module will provide a form where a CSV file can be uploaded. This CSV file will contain data for creating article nodes. Each row will contain a "title" and "body" field.
  • The author for each node should be user 1.
  • Before proceeding, think about Drupal best practices for this CSV import.
  • If there is an issue, an error should be output as a Drupal message.
  • For each file in this module, please note the file's path within the module.
  • Please generate fully completed code; do not use placeholder comments.
  • Please be sure to properly comment on your code, following Drupal code standards.
  • It is okay if your response stops abruptly; I will say 'continue.'

Here's a screenshot of the beginning of GPT-4's response to this prompt:

[Screenshot: ChatGPT prompt to create a Drupal module]

After the initial code is generated, there is often an iterative process: issues are identified by reviewing or running the code, and then the AI tool is asked to make corrections. If errors are encountered, it is sufficient to just paste the error into a follow-up message.

For more complex solutions, it may be more appropriate to generate sub-sections of the module via different conversations rather than using a single conversation to generate the whole module.

Limitations of AI for Drupal Development

  • Maximum input limitations. First, there are limits to the allowable size of the prompt. The specific size depends on the LLM model in use, but the limits are relatively large, so they should not be a concern for most tasks. It's important to note that when having a conversation with an LLM, each message sent is within the context of the entire preceding dialogue in the conversation. This means the AI can lose some context in a longer conversation, since compression techniques must be used "behind the scenes" to avoid hitting the current LLM's maximum input limit. When this is the case, it can be appropriate to start a new conversation, with an explanation of what has been covered up to that point, along with any relevant code that has been generated.
  • Scalability. When using LLM APIs such as OpenAI's, there are limitations on concurrent requests. Particularly for public-facing solutions, it may be necessary to implement queuing and caching mechanisms.
  • Hallucinations. Sometimes, LLMs appear to simply make things up. For example, I've had LLMs generate Drupal code using methods and hooks that simply do not exist, but that would be appropriate to use if they did. Often, if you point this out, the AI will acknowledge its mistake and provide an alternative solution.
  • Bad judgment. Sometimes, LLM-generated code will work but may not be implemented in the best way. For example, I once asked for code to modify a Drupal view's results but the LLM tried to alter the view's SQL query, which is a less flexible, less maintainable solution than was needed. I've also seen instances where the LLM used pure PHP solutions rather than Drupal-specific solutions. It may also use solutions from older versions of Drupal that might work currently but will fail for future versions.
  • Security concerns. Another critical factor to keep in mind when using LLMs is to never submit sensitive information like passwords. This is because your input may be used to train future LLM models, meaning hackers could, with the right prompt, obtain output containing this information. While LLMs like ChatGPT have been trained to behave lawfully, they can be tricked into straying from this by asking for "dis-allowed content" within the context of a fictional story.
  • Wrong answers. Sometimes, LLMs simply choose the wrong steps to follow, resulting in an incorrect result. Here's a simple example using math. (Incidentally, you'll notice this response is using a pirate persona. This is because I was experimenting with ChatGPT's custom instructions setting at the time.) In the below example, GPT gives the wrong answer to a simple math problem and then admits it doesn't know why it did that.


[Screenshot: ChatGPT producing a math error]
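The queuing-and-caching mitigation from the scalability point above can be sketched in a few lines of Python. This is an illustrative sketch using the standard library's `functools.lru_cache`; `call_llm` is a stand-in for a real, rate-limited API call.

```python
import functools

calls = []

def call_llm(prompt):
    """Stand-in for a real, rate-limited LLM API call."""
    calls.append(prompt)  # track how often the backend is actually hit
    return f"response to: {prompt}"

@functools.lru_cache(maxsize=256)
def cached_completion(prompt):
    # Identical prompts are served from the cache, which reduces the
    # number of concurrent requests that ever reach the API.
    return call_llm(prompt)

a = cached_completion("What is a Drupal node?")
b = cached_completion("What is a Drupal node?")  # cache hit; no second API call
```

A public-facing deployment would pair this with a work queue (e.g. a worker pool) to cap concurrency, which is omitted here for brevity.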

Prompting for the Big Picture: AI for Drupal Module Development

The above represents real-world walkthroughs and recent experiences in leveraging an LLM to develop a Drupal module. There’s much more. As another demonstration of the instant insights and information that ChatGPT can provide, I recently entered a ChatGPT prompt that read:

Can AI be leveraged to develop a module in Drupal?

Here's the answer that resulted from this prompt:


Yes, AI can be leveraged to develop modules in Drupal. Drupal is an open-source content management system (CMS) that allows developers to create and customize websites. AI technologies can be integrated into Drupal modules to enhance various aspects of a website, such as personalization, content recommendation, natural language processing, and more.

Here are some ways AI can be incorporated into Drupal modules:

Content Personalization:

  • Use AI algorithms to analyze user behavior and preferences.

  • Develop a Drupal module that dynamically displays personalized content based on user preferences, improving user engagement.

Chatbots and Conversational Interfaces:

  • Integrate AI-powered chatbots or conversational interfaces within Drupal websites.

  • Create a module that allows users to interact with the website using natural language, providing a more user-friendly experience.

Search Optimization:

  • Implement AI-driven search algorithms to improve the accuracy and relevance of search results.

  • Develop a Drupal module that integrates advanced search capabilities powered by AI.

Image and Video Recognition:

  • Utilize AI for image and video recognition within Drupal.

  • Create modules that automatically tag and categorize media content based on AI analysis.

Data Analytics and Reporting:

  • Integrate AI for data analytics to extract meaningful insights from user interactions and website performance.

  • Develop Drupal modules that provide detailed analytics and reporting using AI algorithms.

Security Enhancements:

  • Implement AI-driven security measures to identify and prevent potential threats.

  • Develop modules that use AI to enhance website security by detecting and responding to malicious activities.

Automated Content Generation:

  • Use natural language processing (NLP) algorithms to automatically generate content.

  • Develop Drupal modules that assist in creating and optimizing content based on AI-generated suggestions.

When integrating AI into Drupal modules, it's important to consider factors such as data privacy, performance, and the specific requirements of the website. Additionally, leveraging existing AI libraries and frameworks can simplify the development process. Drupal's modular architecture makes it well-suited for incorporating AI functionalities into various aspects of a website.

What's Next for AI with Drupal?

As amazing and disruptive as today's LLMs are, they are only going to become faster and more capable as research continues.

It is hard to predict the rate at which LLM solutions will evolve because there may be unexpected bottlenecks; it may be that the availability of computational resources will be the main limiting factor in how smart these LLMs can become.

However, it is safe to say that current and upcoming AI solutions will disrupt many aspects of business and technology, perhaps even more than the Internet did in the 1990s. In the same way that media companies evolved to leverage the Internet, AI will drive a similar evolution. 

While the future is not clearly mapped out, the time to embrace AI is now. 

Interested in architecting AI-powered solutions for your Drupal website? Promet is at the leading edge of exploring the potential and possibilities of AI for Drupal web development. Let’s talk!

About Drupal Sun

Drupal Sun is an Evolving Web project. It allows you to:

  • Do full-text search on all the articles in Drupal Planet (thanks to Apache Solr)
  • Facet based on tags, author, or feed
  • Flip through articles quickly (with j/k or arrow keys) to find what you're interested in
  • View the entire article text inline, or in the context of the site where it was created

See the blog post at Evolving Web

Evolving Web