The Pitfalls of User-Driven Model Selection in Legal Tech


In recent demos and product releases I've noticed an odd trend: a growing push by vendors to shift model selection onto the end-user. This approach, while perhaps well-intentioned, is in my view fundamentally flawed, and it risks undermining the very efficiency and effectiveness that legal tech aims to provide.

Task Articulation and Model Selection

There's a common misconception that a user who can articulate a task well (what we'd call prompt engineering) is equally equipped to select the most appropriate AI model for that task. That isn't really the case - most people just aren't as interested in the nuts and bolts of AI as some of us are.

To put it another way, imagine you're planning a drive across Europe. Your ability to navigate the roads, understand traffic rules, and plan your route is entirely separate from knowing which engine would be best suited for your journey. You wouldn't expect the average driver to pop open the bonnet and swap out engines based on each day's driving conditions, would you? Though it'd make for an interesting Grand Tour special.

The Cognitive Burden on Users

Legal professionals are already juggling complex tasks daily. Adding the responsibility of model selection to their plate is not just unnecessary - it's counterproductive. When faced with the choice, users are likely to default to the most well-rounded model, regardless of cost or effectiveness for specific tasks. Cough GPT-4o.

This approach ignores the fact that different tasks may require different model strengths; heck, even different aspects of a single request might need different models. A model excellent for document review might not be the best choice for making sense of legal research or contract analysis. Expecting users to make these nuanced decisions for each task is unrealistic and inefficient.

The Role of Technology: Invisible Efficiency

The beauty of well-designed technology lies in its ability to make complex processes invisible to the user. Our job as technologists is to handle the intricacies behind the scenes, allowing legal professionals to focus on what they do best - y'know, practising law.

By pushing model selection to the user, we're essentially asking them to do our job. It'd be like a car manufacturer asking drivers to manually adjust fuel injection rates while driving. It's not just impractical; it's a step backwards in user experience and efficiency - just when I thought we were starting to really get user experience in legal.

Learning from Apple's Intelligent Routing

It's not as though it's hard to find an approach that's already being rolled out. Apple is handling this with their Apple Intelligence system, an approach that shows how we should think about task routing in AI applications.

Apple's system is designed to intelligently route tasks to the most appropriate endpoint based on the nature of the task itself. This means that users don't need to worry about which specific AI model or system is best suited for their needs - the technology makes that decision invisibly.

We could achieve similar results in legal tech by interpreting what's being requested in each task. Here's how this might work (a minimal code sketch follows the list):

  1. Task Analysis: Make use of language models or even existing Legal NLP solutions that can interpret the nature of legal tasks as they're input by users.
  2. Task Categorisation: Classify tasks into categories such as document review, contract analysis, summarisation, spell checking, rewriting and so on.
  3. Model Mapping: Create a dynamic mapping between task categories and the most effective AI models (or even classic tooling) for each type of task.
  4. Continuous Learning: Implement feedback loops that allow the system to learn from the results of each task, continuously refining the model selection process.
  5. Transparent Reporting: Provide clear, easy-to-interpret reasoning on why certain models were chosen, allowing users to trust the system's decisions.
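
To make steps 1-3 concrete, here's a minimal sketch in Python of what such a router could look like. Everything in it is hypothetical - the categories, the model names and the keyword-based classifier are stand-ins (in practice you'd want an LLM or a proper legal NLP pipeline doing the classification):

```python
from dataclasses import dataclass

# Hypothetical mapping from task category to the tool best suited for it.
# Note that not everything needs to be an LLM - classic tooling has a place.
CATEGORY_MODEL_MAP = {
    "document_review": "large-context-model",
    "contract_analysis": "reasoning-heavy-model",
    "summarisation": "fast-cheap-model",
    "spell_check": "classic-spellchecker",
    "rewriting": "general-purpose-model",
}

@dataclass
class RoutingDecision:
    category: str
    model: str
    reasoning: str  # surfaced to the user for transparency

def categorise(task: str) -> str:
    """Stand-in classifier. A real system would use an LLM or an
    existing legal NLP solution; naive keyword matching keeps the
    sketch self-contained."""
    keywords = {
        "review": "document_review",
        "contract": "contract_analysis",
        "summarise": "summarisation",
        "spelling": "spell_check",
    }
    task_lower = task.lower()
    for word, category in keywords.items():
        if word in task_lower:
            return category
    return "rewriting"  # safe, general-purpose default

def route(task: str) -> RoutingDecision:
    category = categorise(task)
    model = CATEGORY_MODEL_MAP[category]
    return RoutingDecision(
        category=category,
        model=model,
        reasoning=f"Classified as '{category}', routed to '{model}'.",
    )

decision = route("Summarise the key obligations in this lease")
print(decision.reasoning)
```

The important bit is the reasoning field: the routing happens invisibly, but the "why" is always there for the user who wants it.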

So instead of burdening users with model selection, legal tech should be moving towards intelligent, automated model selection. At runtime, this approach would look like (see the feedback-loop sketch after this list):

  1. Task Analysis: Automatically analysing the user's input to understand the nature of the task.
  2. Performance Metrics: Continuously monitoring and learning from the performance of different models on various types of legal tasks.
  3. Contextual Selection: Choosing the most appropriate model based on the task, user history, and current performance data.
  4. Transparent Reporting: Surfacing why each model was chosen, so users can trust the system's decisions.
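
One simple way to implement the performance-metrics loop (steps 2 and 3 above) is a bandit-style selector: keep a rolling quality score per category-and-model pair from user feedback, and mostly pick the best performer while occasionally exploring alternatives. This is one possible approach rather than the definitive one, and the model names and scoring below are purely illustrative:

```python
import random
from collections import defaultdict

# Rolling feedback scores per (category, model) pair. Feedback could be
# thumbs up/down, or whether the user kept the output unedited.
scores = defaultdict(list)

# Hypothetical candidate models per task category.
CANDIDATES = {
    "summarisation": ["fast-cheap-model", "general-purpose-model"],
}

def record_feedback(category: str, model: str, score: float) -> None:
    """Record a quality score in [0, 1] for a completed task."""
    scores[(category, model)].append(score)

def select_model(category: str, explore: float = 0.1) -> str:
    """Epsilon-greedy selection: usually pick the best-performing model,
    occasionally try an alternative so the mapping keeps learning."""
    models = CANDIDATES[category]
    if random.random() < explore:
        return random.choice(models)

    def mean_score(model: str) -> float:
        history = scores[(category, model)]
        return sum(history) / len(history) if history else 0.5  # neutral prior

    return max(models, key=mean_score)

record_feedback("summarisation", "fast-cheap-model", 0.9)
record_feedback("summarisation", "general-purpose-model", 0.6)
print(select_model("summarisation"))  # usually 'fast-cheap-model'
```

The explore parameter is the trade-off knob: too low and the system never discovers that a newer, cheaper model has caught up; too high and users see inconsistent results.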

So…

Our focus should be on simplifying and streamlining processes for our users, not complicating them. By taking responsibility for model selection and optimisation, we can ensure that legal professionals harness the full power of AI without getting bogged down in technical decisions. Sure, we can let power users make their own choices, but I think we'll find that in most cases people don't need the extra hassle.

The future of legal tech lies not in pushing more decisions onto the user, but in creating intelligent systems that make the right choices invisibly, allowing legal professionals to focus on what truly matters - delivering excellent legal services to their clients. By looking to places like Apple and adapting their approaches to our specific needs, we can create a new generation of legal tech tools that are more powerful, more intuitive, and more effective than ever before.