Teaching AI to Tap Lightly: Exploring the Role of Lightly Held Ideas in Legal Tech

At 4am, tapping my two-month-old son’s stomach to soothe him wasn’t a firmly held idea; it was an instinctive attempt (maybe desperation, an hour into this...), a small, gentle action that might help, or might not. Yet it worked, and in that moment, I remembered how often small, exploratory acts lead to surprising outcomes. It made me wonder: could AI adopt a similar mindset? Could it approach tasks by offering small, tangential, or lightly held ideas, not just definitive answers, and let humans decide their value?

In parenting, these intuitive gestures often succeed where planned strategies fail, much to my equal annoyance and joy. In legal work, where nuance and creativity are paramount, lightly held ideas could open new avenues for AI to augment human expertise. While it’s often assumed that legal work is formulaic and the opposite of creative, this view overlooks the subtle judgment, contextual thinking, and ingenuity that underpin much of the profession. Instead of aiming for absolute correctness or perfection, AI could embrace exploration, surfacing related insights, edge cases, or unexpected connections that might otherwise be missed.


Moving Beyond Definitive Outputs

Generative AI systems are designed to deliver polished, authoritative outputs. They draw from immense datasets to produce definitive responses, but this “all or nothing” approach has limits. By focusing solely on high-confidence outputs, AI risks overlooking less obvious but potentially valuable connections.

Legal work, in particular, thrives on these subtleties. A lightly held idea in this context might look like a tenuous precedent that challenges conventional thinking, a clause structure from a tangentially related contract, or a weak signal in due diligence that’s worth investigating further. AI designed to explore and present these possibilities could become a far more effective tool.

So here's how this approach could be applied in legal tech:

Contract Drafting with a Twist

When drafting a contract, lightly held ideas might involve suggesting clauses or terms inspired by tangentially related contracts. An AI system could flag a clause from a different industry or jurisdiction that isn’t immediately relevant but contains innovative language worth considering. These “suggestions” wouldn’t aim to replace standard clauses but could inspire new directions.

Legal Research with Exploratory Depth 

Traditional legal research tools prioritise precision, finding the most relevant cases, statutes, or regulations. But an exploratory AI could also surface related cases with different contexts or unexpected reasoning. It may well flag a dissenting opinion from an unrelated case, simply because the reasoning parallels the issue at hand in an unconventional way.

Risk Identification in Due Diligence

When analysing documents for risks, lightly held ideas could mean surfacing low-confidence signals: clauses that might indicate risk but don’t perfectly match predefined patterns. By presenting these weak signals alongside stronger ones, the AI would allow users to decide whether to investigate further, balancing thoroughness with efficiency.
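
To make that a little more concrete, here’s a minimal sketch of what surfacing weak signals alongside strong ones could look like. Everything in it is illustrative: the risk patterns, the keyword scoring, and the thresholds are hypothetical placeholders, not a real due diligence engine.

```python
# Illustrative sketch only: scores clauses against hypothetical risk patterns
# and reports weak signals alongside strong ones instead of discarding them.

RISK_PATTERNS = {
    "change_of_control": {"change of control", "assignment", "merger"},
    "unlimited_liability": {"liability", "indemnify", "without limit"},
}

def score_clause(clause: str, keywords: set[str]) -> float:
    """Crude confidence score: fraction of pattern keywords found in the clause."""
    text = clause.lower()
    hits = sum(1 for kw in keywords if kw in text)
    return hits / len(keywords)

def flag_risks(clauses: list[str], strong: float = 0.7, weak: float = 0.3):
    """Return strong matches and lightly held, low-confidence signals together."""
    findings = []
    for clause in clauses:
        for risk, keywords in RISK_PATTERNS.items():
            score = score_clause(clause, keywords)
            if score >= strong:
                findings.append((risk, "strong signal", score, clause))
            elif score >= weak:
                # Kept rather than dropped: the user decides whether to dig deeper.
                findings.append((risk, "weak signal - worth a look", score, clause))
    return findings
```

The point isn’t the scoring method; it’s that the weak branch is surfaced and labelled rather than silently filtered out.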

Drafting Negotiation Strategies

AI tools could assist by generating a range of fallback positions on contentious clauses, including some that stretch the boundaries of typical negotiation strategies. For example, it might propose language used in entirely different contexts to see if it opens new avenues for compromise.


Designing AI for Lightly Held Ideas

Building AI that taps lightly, rather than pushes firmly, requires a shift in design philosophy.

Here’s what I see it involving:

Embracing Ambiguity

AI models need to learn to present outputs with varying levels of confidence, explicitly flagging when an idea is exploratory rather than definitive. This could include tagging suggestions with metadata such as “low-confidence” or “creative exploration.”
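
As a rough sketch of what that metadata might look like, assuming a simple in-house suggestion object rather than any particular framework:

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """A single AI output carrying its own confidence and an explicit label."""
    text: str
    confidence: float  # 0.0-1.0, however the underlying model estimates it
    rationale: str     # why this was surfaced - useful for transparency, too

    @property
    def label(self) -> str:
        # Hypothetical thresholds; the tags mirror the ones discussed above.
        if self.confidence >= 0.75:
            return "high-confidence"
        if self.confidence >= 0.4:
            return "low-confidence"
        return "creative exploration"
```

The rationale field is the same idea that matters for transparency and control below: every lightly held idea should carry an explanation of why it showed up.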

Breadth Before Depth

Instead of narrowing results to the most relevant options, AI systems could cast a wider net, deliberately including tangential or outlier ideas for consideration.
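
Here’s one sketch of what “casting a wider net” could mean for retrieval, assuming documents already have embedding vectors from whatever model you use: return the usual top matches, then deliberately append a few mid-ranked outliers instead of cutting them.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_with_breadth(query_vec, doc_vecs, docs, top_k=5, tangents=3):
    """Top results plus a handful of deliberately tangential ones."""
    ranked = sorted(
        range(len(docs)),
        key=lambda i: cosine(query_vec, doc_vecs[i]),
        reverse=True,
    )
    core = [docs[i] for i in ranked[:top_k]]
    # Instead of stopping at the head of the ranking, sample from the middle:
    # close enough to be plausibly related, far enough to be unexpected.
    middle = ranked[top_k : top_k + 20]
    step = max(1, len(middle) // tangents)
    exploratory = [docs[i] for i in middle[::step]][:tangents]
    return core, exploratory
```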

Transparency and Control

Users should have clear visibility into how and why certain ideas were included, enabling them to evaluate whether these lightly held ideas have merit in their specific context.

Incremental Presentation

AI might surface exploratory ideas gradually, as a supplement to core outputs. For instance, after delivering a polished response, the system could offer additional “out-of-the-box” suggestions for consideration.
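
In interface terms, that could be as simple as keeping the exploratory material behind an explicit second step. A hypothetical sketch:

```python
def present(core_answer: str, exploratory: list[str]) -> None:
    """Show the polished answer first; offer the lightly held ideas only on request."""
    print(core_answer)
    if exploratory and input("See some out-of-the-box suggestions? (y/n) ") == "y":
        for idea in exploratory:
            print(f"  (exploratory) {idea}")
```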


Legal work often hinges on creativity, judgement, and the ability to connect dots that aren’t immediately obvious. By enabling AI to think more flexibly, legal professionals could:

Discover New Perspectives

Lightly held ideas can introduce concepts or approaches that might not have been considered, sparking creative solutions.

Expand the Scope of Exploration

Lawyers could explore a wider range of possibilities without being bogged down by irrelevant information, as the AI surfaces only the most potentially useful tangents.

Enhance Collaboration

By presenting a range of options, AI can serve as a brainstorming partner, encouraging collaboration and discussion rather than dictating answers.


Let’s Teach AI to Tap Lightly

Lightly held ideas are about exploration, humility, and curiosity, qualities that are obviously human but rarely associated with AI. By teaching AI to tap lightly, to present possibilities without insisting on their relevance, we could unlock a new level of utility in legal tech. I feel these exploratory systems would align better with how lawyers think and work, embracing the nuances and complexities of the field.

Just as a small tap on my son’s stomach turned out to be exactly what he needed, a lightly held idea from an AI might be the spark that leads to a breakthrough. Perhaps it’s time we designed AI systems to embrace the beauty of “might work” rather than striving only for “must work.”