A few years ago, when I was working at a traditional law firm, the partners gathered us together with barely contained excitement. “Rejoice,” they announced, unveiling our new AI assistant that would make legal work faster, easier, and better. An expert was brought in to train us on dashboards and automation. Within months, her enthusiasm had curdled into frustration as lawyers either ignored the expensive tool or, worse, followed its recommendations blindly.
That’s when I realized: we weren’t learning to use AI. AI was learning to use us.
Many traditional law firms have rushed to adopt AI decision support tools for client selection, case assessment, and strategy development. The pitch is irresistible: AI reduces costs, saves time, and promises better decisions through pure logic, untainted by human bias or emotion.
These systems appear precise: evidence gets rated “strong,” “medium,” or “weak”; case outcomes receive probability scores; legal strategies are color-coded by risk level.
But this crisp certainty masks a messy reality: most of these AI assessments rely on simple scoring rules that check whether information matches predefined characteristics. It’s sophisticated pattern-matching, not wisdom, and it falls apart spectacularly with borderline cases that don’t fit the template.
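To make that concrete, here’s a minimal sketch, in Python, of the kind of template matching these assessments often reduce to. The keywords, weights, and thresholds are invented for illustration, not drawn from any real product, but the brittleness they produce is the point: anything that doesn’t match a predefined signal falls into a default bucket.

```python
# A hypothetical evidence scorer of the template-matching variety.
# Keywords, weights, and thresholds are invented for illustration only.

STRONG_SIGNALS = {"signed contract", "sworn affidavit", "video footage"}
WEAK_SIGNALS = {"hearsay", "unverified", "anonymous tip"}

def rate_evidence(description: str) -> str:
    """Return 'strong', 'medium', or 'weak' based on keyword matches."""
    text = description.lower()
    score = 2 * sum(signal in text for signal in STRONG_SIGNALS)
    score -= 2 * sum(signal in text for signal in WEAK_SIGNALS)
    if score >= 2:
        return "strong"
    if score <= -2:
        return "weak"
    return "medium"

# A borderline case exposes the problem: a credible, corroborated eyewitness
# account matches none of the predefined signals, so it lands in the default
# "medium" bucket no matter how compelling a lawyer would find it.
print(rate_evidence("Detailed eyewitness account corroborated by phone records"))
# -> "medium"
```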
And here’s the kicker: AI systems often replicate the very biases they’re supposed to eliminate. Research is finding that algorithmic recommendations in legal tech can reflect and even amplify human prejudices baked into training data. Your “objective” AI tool might carry the same blind spots as a biased partner; it’s just faster and more confident about it.
And yet: None of this means abandoning AI tools. It means building and demanding better ones.
The Default Trap
“So what?” you might think. “AI tools are just that, tools. Can’t we use their speed and efficiency while critically reviewing their suggestions?”
In theory, yes. In practice, we’re terrible at it.
Behavioral economists have documented a phenomenon called status quo bias: our powerful preference for defaults. When an AI system presents a recommendation, that recommendation becomes the path of least resistance. Questioning it requires time, cognitive effort, and the social awkwardness of overriding what feels like expert consensus.
I watched this happen repeatedly at the firm. An associate would run case details through the AI, which would spit out a legal strategy. Rather than treating it as one input among many, it became the starting point that shaped every subsequent discussion. The AI’s guess became our default, and defaults are sticky.
This wouldn’t matter if we at least recognized what was happening. But something more insidious occurs: our ability to think independently atrophies. Writer Nicholas Carr has long warned about the cognitive costs of outsourcing thinking to machines, and mounting evidence supports his concerns. Each time we defer to AI without questioning it, we get a little worse at making those judgments ourselves.
I’ve watched junior associates lose the ability to evaluate cases on their own. They’ve become skilled at operating the AI interface but struggle when asked to analyze a legal problem from scratch. The tool was supposed to make them more efficient; instead, it’s made them dependent.
Speed Without Wisdom
The real danger isn’t that AI makes mistakes. It’s that AI makes mistakes quickly, confidently, and at scale.
An attorney accepts a case evaluation without noticing the system misunderstood a crucial precedent. A partner relies on AI-generated strategy recommendations that miss a creative legal argument a human would have spotted. A firm uses AI for client intake and systematically screens out cases that don’t match historical patterns, even when those cases have merit. Each decision feels rational in the moment, backed by technology and data. But poor inputs and flawed models produce poor outputs, just faster than before.
The Better Path Forward
The problems I witnessed stemmed from how these legacy systems were designed: as replacement tools rather than enhancement tools. They positioned AI as the decision-maker with humans merely reviewing outputs, rather than keeping human judgment at the center.
Better AI legal tools exist, and they take a fundamentally different approach.
They’re built with judgment-first design, treating lawyers as the primary decision-makers and AI as a support system that enhances rather than replaces expertise. These systems make their reasoning transparent, showing how they arrived at recommendations rather than presenting black-box outputs. They include regular capability assessments to ensure lawyers maintain independent analytical skills even while using AI assistance. And they’re designed to flag edge cases and uncertainties rather than presenting false confidence.
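As a rough sketch of what that looks like in practice, imagine the recommendation arriving as a data structure rather than a verdict. The field names below are my own invention, not any vendor’s API, but they capture the idea: the reasoning and the uncertainties travel with the suggestion, and nothing is final until a lawyer records a decision and a rationale of their own.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Recommendation:
    """A hypothetical judgment-first output: the AI proposes, a lawyer decides."""
    suggestion: str                         # what the model proposes
    reasoning: List[str]                    # the steps behind it, not a black box
    uncertainties: List[str]                # edge cases flagged, not smoothed over
    lawyer_decision: Optional[str] = None   # stays empty until a human rules
    lawyer_rationale: Optional[str] = None  # the human's own reasoning, required

    def finalize(self, decision: str, rationale: str) -> None:
        # Accepting the AI's suggestion takes exactly as much articulated
        # reasoning as overriding it -- the friction that keeps judgment alive.
        if not rationale.strip():
            raise ValueError("A decision requires the lawyer's own rationale.")
        self.lawyer_decision = decision
        self.lawyer_rationale = rationale
```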
The difference is philosophical: are you building tools that make lawyers faster at being lawyers, or tools that try to replace lawyering itself?
I see this different approach playing out in immigration services, where the stakes of poor decisions are particularly high. Consider a case where an applicant’s employment history doesn’t neatly match historical approval patterns: perhaps they’ve had gaps, made career shifts, or worked in emerging fields. A traditional AI tool would flag this as “non-standard,” lowering approval probability and becoming the default recommendation. A judgment-first system does something entirely different: it surfaces the exact factors that make the case atypical, explains why precedent might or might not apply, and explicitly asks the immigration officer, “What do you see here that the algorithm misses?” The officer remains the decision-maker, armed with both AI efficiency and the cognitive space to apply nuanced expertise. The tool didn’t replace judgment; it enhanced it. That’s the difference between AI that makes professionals dependent and AI that makes them sharper.
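The contrast can be sketched in a few lines of hypothetical code (the field names and numbers are made up, not taken from any real system): the traditional tool quietly docks a score for anything non-standard, while the judgment-first tool returns the atypical factors and the question itself, leaving the decision where it belongs.

```python
def traditional_tool(case: dict) -> float:
    """Quietly lowers approval probability for anything non-standard."""
    probability = 0.8
    if case.get("employment_gaps") or case.get("emerging_field"):
        probability -= 0.3  # penalized simply for not matching past patterns
    return probability

def judgment_first_tool(case: dict) -> dict:
    """Surfaces what is atypical and hands the question back to the officer."""
    atypical = [factor for factor in ("employment_gaps", "career_shifts", "emerging_field")
                if case.get(factor)]
    return {
        "atypical_factors": atypical,
        "precedent_note": "Historical approval patterns may not apply to these factors.",
        "question_for_officer": "What do you see here that the algorithm misses?",
    }
```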
Taking Back Control
Taking back control doesn’t mean working without AI. It means using it deliberately:
Treat AI recommendations as drafts, not answers. Before accepting any AI suggestion, ask: “What would I recommend if the system weren’t here?” If you can’t answer, you’re not ready to evaluate the AI’s output.
Build in friction. Create a rule that important decisions require at least one alternative to the AI’s recommendation. Force yourself to articulate why the AI is right, rather than assuming it is.
Test regularly. Periodically work through problems without AI assistance to maintain your independent judgment. Think of it like a pilot practicing manual landings despite having autopilot.
Demand transparency. Push vendors to explain how their systems reach conclusions. If they can’t or won’t, that’s a red flag. You’re entitled to understand what’s shaping your decisions.
Stay skeptical of certainty. When AI outputs seem suspiciously confident or precise, dig deeper. Real-world problems are messy; if the answer looks too clean, something’s probably being oversimplified.
The legal professionals who thrive with AI aren’t those who defer to it blindly or reject it entirely. They’re the ones who leverage its efficiencies while maintaining sharp human judgment, and who insist on tools designed to enhance their capabilities rather than circumvent them.
Left unchecked, poorly designed AI assistants will train you to make terrible decisions. But that outcome isn’t inevitable. The future belongs to legal professionals who demand tools that genuinely enhance their expertise rather than erode it. After all, speed and convenience lose much of their appeal if they compromise the quality of justice itself.