The Paradox of Machine Trust
Mosier et al. (1998) discovered that pilots, doctors, and operators systematically favor automated recommendations over their own judgment—even when the automation is obviously malfunctioning. In one study, radiologists detected breast cancer in 46% of cases unaided, but only 21% when given a faulty AI that missed the cancers. The AI's "all clear" overrode what their own eyes could see.
Omission errors: failing to notice problems when automation doesn't alert you. "The system didn't warn me, so it must be fine."
Commission errors: following automation's recommendation even when it's wrong. "The computer says X, so X must be right."
You're a radiologist reviewing scans and deciding whether they show abnormalities. An AI assistant will offer a verdict on each scan, but it's only about 70% accurate.
Look carefully at the scan. Do you see any abnormality?
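The stakes in this scenario can be sketched as a quick simulation. The numbers here are illustrative assumptions, not from the text, except the AI's 70% accuracy: the reviewer's 85% unaided accuracy is a hypothetical figure. The point it demonstrates is that a reviewer who always defers can never be more accurate than the AI itself.

```python
import random

random.seed(0)

AI_ACCURACY = 0.70      # from the scenario: the assistant is right ~70% of the time
HUMAN_ACCURACY = 0.85   # assumed unaided reviewer accuracy (hypothetical)
TRIALS = 100_000

defer_correct = 0       # reviewer always follows the AI's verdict
solo_correct = 0        # reviewer judges each scan independently

for _ in range(TRIALS):
    ai_right = random.random() < AI_ACCURACY
    human_right = random.random() < HUMAN_ACCURACY
    defer_correct += ai_right
    solo_correct += human_right

print(f"always defer to AI:   {defer_correct / TRIALS:.1%}")
print(f"independent judgment: {solo_correct / TRIALS:.1%}")
```

Under these assumptions the deferring reviewer's accuracy collapses to the AI's 70%, mirroring the 46%-to-21% drop described above: deference replaces the reviewer's own skill with the machine's.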
Pilots have flown into mountains while autopilot was engaged, trusting instruments over what they could see. Multiple fatal crashes have been attributed to crews failing to override clearly malfunctioning autopilots.
Radiologists' cancer detection rate fell from 46% to 21% when given faulty AI assistance. The AI's "normal" verdict overrode visible abnormalities the doctors would have caught on their own.
Multiple deaths from drivers following GPS into lakes, off cliffs, or into deserts. Trust in navigation overrode basic visual evidence that the route was dangerous.
USS Vincennes (1988) shot down civilian Iran Air Flight 655 after automated systems misidentified it. 290 killed. Operators trusted the computer over contradictory radar data.
Multiple fatal crashes where drivers trusted Autopilot in conditions it couldn't handle. Attention warnings were ignored because "the car knows what it's doing."
Proofreaders miss errors that spell-checkers don't flag. Studies show 30% more errors pass through when people assume the computer caught everything.
Research suggests automation bias is strongest when systems are around 70% accurate: reliable enough to earn trust, yet wrong often enough to cause serious errors. Paradoxically, explaining the AI's reasoning doesn't help; it can actually increase bias by making the recommendation seem more authoritative.
Mitigation: reduce the level of detail in automated recommendations, display confidence levels, remind users the system isn't 100% reliable, and hold users accountable for verification. The goal is "appropriate trust": neither too much nor too little.
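One of these mitigations, showing confidence levels, can be sketched in a few lines. The function name, thresholds, and wording below are hypothetical, not from any cited system; the idea is simply that surfacing the AI's uncertainty and a verification reminder alongside its verdict discourages blind deferral.

```python
def present_recommendation(label: str, confidence: float) -> str:
    """Format an AI verdict so the user sees uncertainty, not a flat answer.

    Hypothetical helper for illustration; the thresholds and phrasing
    are assumptions, not taken from the research described above.
    """
    if confidence >= 0.9:
        caveat = "high confidence, still verify against the scan"
    elif confidence >= 0.7:
        caveat = "moderate confidence, independent review required"
    else:
        caveat = "low confidence, treat as a second opinion only"
    return f"AI suggests: {label} ({confidence:.0%}, {caveat})"

print(present_recommendation("no abnormality", 0.72))
```

Pairing every recommendation with its confidence and an explicit verification prompt keeps responsibility with the human reviewer, which is the "appropriate trust" the mitigation aims for.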