Guiding LLMs to Success in Legal Due Diligence: The Power of Context and Direction

LLMs are quickly establishing themselves as valuable tools in legal workflows, particularly for tasks like M&A due diligence. Recently the team at Addleshaw Goddard put out The RAG Report, demonstrating that optimising these AI tools requires more than basic prompting ("Please do due diligence"), particularly when dealing with complex clauses such as "Exclusivity". They ultimately found that by refining prompt strategies and incorporating legal expertise, LLMs can deliver more accurate and meaningful results - something I think we all anecdotally expected, but it's great to see actual research to prove it.


Handling Complex Clauses

LLMs handle straightforward content like "Governing Law" or "Effective Date" with relative ease; however, they can stumble when interpreting nuanced provisions such as "Exclusivity". An LLM asked to find exclusivity clauses might incorrectly highlight irrelevant sections, such as those related to exclusive access rights or licensing terms.

AG addressed this challenge by developing Provision-Specific Prompts. They provided the LLM with more detailed legal context, explaining how "Exclusivity" clauses typically prevent parties from engaging with competitors. This deeper guidance helped the LLM focus on the correct clauses, demonstrating the importance of precise, context-rich prompting.
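
To make this concrete, here is a minimal sketch of what a provision-specific prompt might look like. The `call_llm` function is a stand-in for whatever model API you use, and the legal framing is my own illustration, not AG's actual prompt wording:

```python
# Illustrative provision-specific prompt for exclusivity clauses.
# `call_llm` is a placeholder for your model API of choice.

EXCLUSIVITY_PROMPT = """You are reviewing a contract for M&A due diligence.

Task: identify any Exclusivity provisions. An exclusivity clause typically
prevents one party from negotiating, dealing, or contracting with
competitors or third parties for a defined period.

Do NOT flag clauses that merely grant exclusive access rights or exclusive
licences - those are different provisions.

Contract text:
{contract_text}

Return each matching clause with its heading and a one-line justification."""


def find_exclusivity_clauses(contract_text: str, call_llm) -> str:
    """Run the provision-specific prompt against a single contract."""
    return call_llm(EXCLUSIVITY_PROMPT.format(contract_text=contract_text))
```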


Keywords and Follow-Up

To further enhance accuracy, the report covers how they introduced targeted keywords like "non-compete" into their prompts. This guided the LLM to home in on the most relevant sections, significantly improving both speed and precision.
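
As an illustration, keyword guidance can be as simple as weaving the target terms into the instruction, while still asking the model to verify each hit in context. The keyword list and helper below are my own assumptions, not AG's:

```python
# Hypothetical keyword list steering the model toward relevant sections.
EXCLUSIVITY_KEYWORDS = ["exclusivity", "exclusive dealing", "non-compete",
                        "sole supplier", "shall not negotiate"]

def keyword_guided_prompt(contract_text: str) -> str:
    # Quote each keyword and fold the list into the instruction itself.
    keywords = ", ".join(f'"{k}"' for k in EXCLUSIVITY_KEYWORDS)
    return (
        "Identify exclusivity provisions in the contract below. "
        f"Pay particular attention to sections containing terms such as {keywords}, "
        "but confirm from the surrounding text that each hit genuinely restricts "
        "a party from dealing with competitors.\n\n"
        f"Contract text:\n{contract_text}"
    )
```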

However, the real breakthrough came with Follow-Up Prompting: after receiving the initial response from the LLM, AG issued a second prompt asking the model to review and refine its findings. This iterative process led to substantially better outcomes, especially for complex provisions. For instance, using this refined strategy, AG improved exclusivity clause accuracy from 53.85% to 97.44%, a remarkable 43.59 percentage point increase.
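
Here is a sketch of that two-pass pattern, with `call_llm` again standing in for your model API and the prompt wording purely illustrative:

```python
def review_with_follow_up(contract_text: str, call_llm) -> str:
    # First pass: the standard extraction prompt.
    first_pass = call_llm(
        "Identify all exclusivity provisions in this contract:\n\n" + contract_text
    )
    # Second pass: ask the model to review and refine its own findings.
    return call_llm(
        "You previously identified these clauses as exclusivity provisions:\n\n"
        f"{first_pass}\n\n"
        "Review each finding against the contract below. Remove anything that is "
        "really an exclusive licence or access right, and add any genuine "
        "exclusivity restrictions you missed.\n\n"
        f"Contract text:\n{contract_text}"
    )
```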


Taxonomies and Cross-Referenced Clauses

Recently, when I was chatting with Graeme from NosLegal, we discussed how implementing an internal taxonomy could help transform LLM results. A taxonomy provides a structured framework that clearly defines what terms like "exclusivity risk" mean in different contract types; this structured approach guides the LLM toward the right interpretation by embedding that context into its prompts.

For example, exclusivity risk in a supplier agreement might mean the buyer is locked into sourcing from one supplier, posing a risk if the supplier can't meet demand. In an M&A deal, it might prevent the seller from negotiating with other buyers, limiting the seller's opportunities. By defining these nuances, the LLM can differentiate between contexts, enhancing accuracy.
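
A minimal sketch of how such a taxonomy might be encoded and embedded into a prompt follows; the structure and definitions are illustrative assumptions on my part, not NosLegal's actual taxonomy:

```python
# Hypothetical taxonomy: the same term carries a different,
# contract-type-specific definition.
TAXONOMY = {
    ("exclusivity risk", "supplier agreement"):
        "The buyer is locked into sourcing from a single supplier, creating "
        "risk if that supplier cannot meet demand.",
    ("exclusivity risk", "M&A deal"):
        "The seller is prevented from negotiating with other prospective "
        "buyers, limiting its opportunities during the exclusivity period.",
}

def taxonomy_prompt(term: str, contract_type: str, contract_text: str) -> str:
    # Embed the contract-type-specific definition directly into the prompt.
    definition = TAXONOMY[(term, contract_type)]
    return (
        f'In this {contract_type}, "{term}" means: {definition}\n\n'
        "Using that definition, identify any clauses that create this risk.\n\n"
        f"Contract text:\n{contract_text}"
    )
```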

In my own work, I've seen significant improvements by using cross-referenced clauses and defined terms in LLM prompts. When the model understands how related clauses interact, its performance improves dramatically. Embedding legal knowledge into the prompts shapes the LLM's understanding, leading to better, more accurate outputs.
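
For example, a prompt can surface a clause's defined terms and cross-references up front so the model sees how they interact. This is a sketch under the assumption that your pipeline has already extracted those pieces:

```python
def prompt_with_cross_references(clause: str,
                                 defined_terms: dict[str, str],
                                 referenced_clauses: dict[str, str]) -> str:
    # Present the definitions and related clauses alongside the clause itself,
    # so the model can reason about how they interact.
    terms = "\n".join(f'- "{t}": {d}' for t, d in defined_terms.items())
    refs = "\n".join(f"- Clause {n}: {text}"
                     for n, text in referenced_clauses.items())
    return (
        "Analyse the clause below. It uses these defined terms:\n"
        f"{terms}\n\n"
        "It also cross-references these clauses:\n"
        f"{refs}\n\n"
        f"Clause under review:\n{clause}\n\n"
        "Explain the obligations it creates, taking the definitions and "
        "cross-references into account."
    )
```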


The Potential of Prompt Libraries

Building on the taxonomy concept, developing Prompt Libraries could bring even more consistency to LLM-driven document reviews. For instance, a prompt library might include a carefully crafted prompt for "exclusivity risk" that outlines specific clauses to look for and the context in which they appear. By tying prompts to specific taxonomy terms like "exclusivity risk" or "termination rights," the LLM has a pre-built structure to follow.
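
A hedged sketch of what such a library might look like, with vetted templates keyed by taxonomy term; all names and wording here are illustrative:

```python
# Hypothetical prompt library: each taxonomy term maps to a vetted template
# and the keywords it relies on.
PROMPT_LIBRARY = {
    "exclusivity risk": {
        "keywords": ["non-compete", "exclusive dealing", "sole supplier"],
        "template": (
            "Identify clauses creating exclusivity risk: provisions that "
            "restrict a party from dealing or negotiating with others. "
            "Watch for terms such as {keywords}.\n\n"
            "Contract text:\n{contract_text}"
        ),
    },
    "termination rights": {
        "keywords": ["terminate", "notice period", "material breach"],
        "template": (
            "Identify each party's termination rights, including triggers "
            "and notice requirements. Watch for terms such as {keywords}.\n\n"
            "Contract text:\n{contract_text}"
        ),
    },
}

def build_prompt(term: str, contract_text: str) -> str:
    # Look up the vetted template for this taxonomy term and fill it in.
    entry = PROMPT_LIBRARY[term]
    return entry["template"].format(
        keywords=", ".join(entry["keywords"]), contract_text=contract_text
    )
```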

This approach helps to reduce variability in results and ensures the model operates with consistent context and precision, making repeated tasks more efficient. However, maintaining these libraries requires ongoing effort to update prompts as laws and regulations evolve. Integrating prompt libraries into existing legal workflows can be facilitated through training programs and collaboration tools, ensuring that all team members utilise the technology effectively.

However, I think the real value of prompt libraries is that they can serve as repositories of best practice, capturing the collective expertise of the legal team. They not only standardise AI interactions but also promote knowledge sharing between senior and junior lawyers.


The Vital Role of Human Expertise

Despite all the promises of tech over the years, human expertise remains crucial:

  • Crafting Effective Prompts: The quality of AI outputs hinges on the inputs provided. Legal professionals must design prompts that accurately reflect the complexities of legal issues.
  • Interpreting Results: AI can generate insights, but understanding their implications requires actual legal skill; as it stands, lawyers must critically analyse AI outputs and then make informed decisions.
  • Ensuring Accuracy: Legal professionals play a key role in verifying the AI's findings, correcting any inaccuracies, and providing the nuanced understanding that only human expertise can offer.
  • Continuous Improvement: By monitoring the LLM's performance and refining prompts over time, legal teams can enhance the model's effectiveness, ensuring it remains a valuable tool in their workflow.

The Findings, Summarised

  1. Detailed Prompts are Essential: Clear, well-structured prompts enable LLMs to handle complex clauses effectively.
  2. Keywords Sharpen Focus: Incorporating specific legal terms guides the LLM to relevant sections, enhancing accuracy.
  3. Follow-Up Prompts Improve Performance: Iterative prompting encourages deeper analysis, leading to better results.
  4. Taxonomies Bring Clarity: Internal taxonomies define terms in context, guiding the LLM toward correct interpretations.
  5. Prompt Libraries Ensure Consistency: Pre-built prompts tied to taxonomy terms standardise AI interactions, improving reliability.
  6. Human Expertise Remains Crucial: Legal professionals must guide AI tools, interpret results, and refine outputs to ensure accuracy.

Addleshaw Goddard's work illustrates how optimising prompts, adding keyword guidance, and employing follow-up queries can unlock the true potential of LLMs in legal due diligence. In addition, by integrating taxonomies and developing prompt libraries, we can further enhance the accuracy and efficiency of AI in legal tasks.

From my own experience with cross-referenced clauses and taxonomies, it's evident that a thoughtful, structured approach does lead to more accurate and meaningful results. That link between AI and human expertise is key to delivering the best outcomes. As we continue to refine our methods and integrate these tools into our workflows, we can really start to see the full potential of LLMs to advance the legal profession.

The integration of LLMs into legal due diligence represents a significant step forward in legal tech. By focusing on prompt optimisation and leveraging legal expertise, we can realise the real benefits of AI while maintaining the high standards expected in legal practice. Embracing these strategies will not only improve efficiency but also enhance the quality of legal services delivered.