For 2,400 years, philosophers defined knowledge as "justified true belief." In 1963, Edmund Gettier shattered this definition with a 3-page paper that remains one of the most influential in modern philosophy.
Since Plato, philosophers have held that knowledge is justified true belief (JTB): you know P when (1) P is true, (2) you believe P, and (3) you are justified in believing P.
If all three conditions are met, you have knowledge. At least, that's what everyone thought for millennia...
Gettier showed cases where someone has justified true belief, but clearly doesn't have knowledge!
A farmer looks at a field and sees what appears to be a sheep. She forms the belief "There is a sheep in the field."
In reality, she's looking at a dog that looks like a sheep. However, hidden behind a hill in the same field, there IS an actual sheep!
Henry drives through a region filled with elaborate barn facades: flat wooden fronts that look exactly like barns from the road.
He points at one building and says "That's a barn." By sheer luck, he happens to be pointing at the only real barn in the area!
Sarah glances at a clock on the wall that reads 1:00 PM. She forms the belief "It is 1:00 PM."
Unknown to her, the clock stopped exactly 12 hours ago. But by coincidence, she happens to look at it at exactly 1:00 PM!
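The coincidence here is checkable: a stopped 12-hour clock agrees with the true time exactly twice a day. A minimal sketch (the minute-level resolution and variable names are illustrative assumptions, not from the original case):

```python
# Toy model of Sarah's stopped clock: a 12-hour face, at minute resolution.

MINUTES_PER_DAY = 24 * 60

def clock_face(minute_of_day):
    """Reading shown on a 12-hour face, as minutes past 12:00."""
    return minute_of_day % (12 * 60)

stopped_at = 1 * 60   # the clock froze at 1:00 AM, 12 hours before Sarah looks
now = 13 * 60         # Sarah glances at it at 1:00 PM

# The frozen face happens to match the true time.
assert clock_face(stopped_at) == clock_face(now)

# More generally, a stopped 12-hour clock reads the true time twice a day.
matches = sum(
    clock_face(stopped_at) == clock_face(m) for m in range(MINUTES_PER_DAY)
)
print(matches)  # 2
```

So Sarah's belief-forming method would have been wrong at 1,438 of the day's 1,440 minutes; she was simply lucky.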
Philosophers have proposed many ways to "fix" the definition of knowledge:
Knowledge = JTB + "your justification doesn't rely on any false beliefs." The farmer's belief relied on the false lemma "that thing I see is a sheep."
Knowledge requires that your belief was formed by a reliable process. Looking at stopped clocks isn't reliable for telling time.
You know P only if: had P been false, you wouldn't believe P. Henry would still believe "barn" even if he were pointing at a facade, so his belief fails this test.
Knowledge must arise from intellectual virtues (careful reasoning). Getting lucky isn't a virtue!
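Each of the four repairs above amounts to plain JTB plus one extra condition. A hypothetical sketch (the `Belief` record and all predicate names are illustrative inventions, not from any epistemology library) showing how each repair rules out the farmer's lucky belief while plain JTB does not:

```python
from dataclasses import dataclass

@dataclass
class Belief:
    true: bool              # is the proposition true?
    justified: bool         # does the subject have justification?
    lemmas_all_true: bool   # no-false-lemmas: justification uses no falsehood
    process_reliable: bool  # reliabilism: formed by a reliable process
    sensitive: bool         # sensitivity: if false, subject wouldn't believe it
    virtuous: bool          # virtue epistemology: arises from intellectual virtue

def jtb(b): return b.true and b.justified

# Each proposed repair is JTB plus one further condition.
def no_false_lemmas(b): return jtb(b) and b.lemmas_all_true
def reliabilism(b):     return jtb(b) and b.process_reliable
def sensitivity(b):     return jtb(b) and b.sensitive
def virtue(b):          return jtb(b) and b.virtuous

# The farmer's belief: justified and true, but built on the false lemma
# "that animal I see is a sheep", and lucky on every other dimension too.
farmer = Belief(true=True, justified=True, lemmas_all_true=False,
                process_reliable=False, sensitive=False, virtuous=False)

print(jtb(farmer))              # True  -> plain JTB wrongly counts it as knowledge
print(no_false_lemmas(farmer))  # False -> each repair correctly rules it out
```

The point of the sketch is only structural: every repair keeps the JTB core and bolts on a fourth condition designed to exclude epistemic luck.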
Courts require witnesses to have "knowledge" of facts. But what if they're Gettiered, right for the wrong reasons?
If a scientist reaches a true conclusion through flawed reasoning, do they "know" it? Does it count as discovery?
When can we say an AI system "knows" something? The Gettier problem is central to machine epistemology.
We use "know" constantly. Gettier cases reveal our intuitions about knowledge are more complex than we realized.