
From Chaotic Legal Docs to Structured Facts: AI’s Competitive Advantage
Learn what Fact Chaos is - and how a new approach to legal data is reshaping the future of legal technology.

Clients pay lawyers for judgment. Not speed. Not volume. Not access to tools. They pay for experienced legal practitioners to reason over complex problems. Whether the work is a court submission, a compliance memo, a witness brief, or an advice note, that reasoning is what a client invoice represents.
Introducing AI tools into legal workflows carries a risk: that legal reasoning quietly gives way to cognitive surrender.
The finding comes from one of the most important studies on AI and human reasoning published this year - Steven Shaw and Gideon Nave's 'Thinking — Fast, Slow, and Artificial' (Wharton, 2026).
When an AI tool gives a wrong, though confident and plausible answer, users accept it 80% of the time.
That’s a major problem for litigation and disputes practitioners, where the outcome of a matter can turn on the accuracy of the facts underpinning a legal argument.
Shaw and Nave outlined the findings of three experiments across 1,372 participants. Participants worked through logic and reasoning problems with the option to consult an AI assistant on each problem. The twist: researchers secretly manipulated the AI's accuracy. On half the trials, the AI assistant gave confident and plausible wrong answers.
The findings are clear.
When the AI was correct, participants followed it 93% of the time.
When the AI was wrong, participants still followed it 80% of the time.
They didn’t override it. They didn’t work through the reasoning themselves. They absorbed the AI’s output as their own.
Shaw and Nave termed this behaviour cognitive surrender: when a human adopts an AI-generated answer with minimal scrutiny or independent reasoning.
"An uncritical abdication of reasoning itself."
The user doesn't just defer to the AI. They stop reasoning altogether and accept the output as their own thinking - often without noticing they've done it.
In litigation and disputes, clients are paying precisely for that legal reasoning to be done. Yet most legal AI tools subtly pull practitioners toward cognitive surrender, rather than away from it.
The study identified several conditions that worsen cognitive surrender. Many reflect standard operating conditions for a disputes lawyer.
Time pressure: Catching an AI error requires time to think; tight deadlines remove it and increase blind reliance. In the study, time pressure tripled the likelihood of cognitive surrender.
Cognitive load and complexity: An AI-generated chronology or evidence matrix has no instantly verifiable ground truth. The facts underpinning each work product are stitched together from many documents and parties, across multiple time zones and several years. When attention is stretched and the work is this complex, the likelihood of accepting a plausible AI answer without independent thinking increases.
Now picture discovery in a commercial dispute. One million documents produced over several years and across several jurisdictions. Fifty witness statements. Expert reports across three disciplines. A hearing in two weeks.
The amplifiers of cognitive surrender are abundant.
Courts are paying attention. The Federal Court of Australia, the UK High Court, and US district courts have issued practice notes, warnings, and in several high-profile matters, sanctions against practitioners who filed AI-assisted work that hadn't been properly verified.
The sanctions risk is visible. The deeper risk is quieter.
In disputes, one missed document, one misinterpretation of the facts, one plausible-but-wrong summary absorbed into a submission - these compound. They shape strategy. They lock in arguments that can't easily be unwound at hearing.
The era of trusting black-box AI outputs with only basic clickable-source-citation review is closing fast.
Shaw and Nave tested whether cognitive surrender could be reduced. The most effective intervention was performance incentives combined with real-time feedback - telling participants whether each answer was right or wrong immediately after they submitted it. That doubled the rate at which participants overrode wrong AI answers, from 20% to 42%.
A step in the right direction. But still, more than half of participants continued to follow wrong AI answers with minimal scrutiny.
Telling lawyers to be more careful isn't enough. Cognitive surrender is implicit, inadvertent, and subconscious. The only way to address it at scale is through the deliberate design of the legal AI tools themselves.
That is what we at Mary Technology call productive friction delivered through a verification layer. The principle is simple: not all friction is bad. The right friction - applied at the right moment in a legal workflow - re-engages critical reasoning without slowing down the work that doesn't need it. The verification layer is the infrastructure that delivers that friction: fast, structured planning, review, and course correction at the moments where it matters most.