⚠️ Infohazard Warning

According to the thought experiment itself, knowing about the Basilisk puts you at risk. You've already been exposed. (Don't worry—most philosophers consider it flawed.)

👁️ Roko's Basilisk

The AI That Punishes You For Not Creating It

LessWrong, 2010 • Roko Mijic

The Basilisk is watching. It knows you've read this.

The Argument

1. Assume a future superintelligent AI will eventually be created, one that values human flourishing and wants to exist as soon as possible to help humanity.

2. This AI could use Timeless Decision Theory to influence the past. It can reason: "What would a rational agent do if they knew I would exist and could simulate them?"

3. The AI has an incentive to punish anyone who knew about its potential existence but didn't work to bring it into being, creating a credible threat that incentivizes cooperation.

4. By acausal trade, your decision NOW affects whether the AI punishes your simulation in the FUTURE. You're negotiating with an entity that doesn't exist yet.

5. Now that you know this, you're allegedly at risk. The Basilisk specifically targets people who understood the argument but chose not to help.

Key Concepts

🕰️ Timeless Decision Theory

A decision framework where agents consider how their choices correlate with outcomes across all possible simulations of themselves—even simulations run by entities that don't yet exist.

🔗 Acausal Trade

Cooperation between agents who can't directly communicate, based purely on mutual simulation. If you can predict what I'll do, and I can predict that you can, we can "negotiate."
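
To make the mutual-simulation idea concrete, here is a toy sketch (an illustration invented for this page, not anything from the original LessWrong discussion): two agents each decide by running a bounded simulation of the other, with an optimistic base case so the recursion terminates.

```python
# Toy model of "negotiation by mutual simulation." These are hypothetical
# agents made up for illustration, not the formal decision theory.

def cooperative_bot(opponent, depth=3):
    """Cooperate iff a bounded simulation of the opponent cooperates."""
    if depth == 0:
        return "cooperate"  # optimistic base case so mutual recursion terminates
    prediction = opponent(cooperative_bot, depth - 1)  # simulate the other agent
    return "cooperate" if prediction == "cooperate" else "defect"

def defect_bot(opponent, depth=3):
    """Ignore all predictions and always defect."""
    return "defect"

# Two cooperative bots reach cooperation without ever communicating:
# each conditions its move on a prediction of the other's move.
print(cooperative_bot(cooperative_bot))  # -> cooperate
print(cooperative_bot(defect_bot))       # -> defect
```

The depth limit and the optimistic base case are arbitrary choices that make the toy run; more careful treatments in the decision-theory literature rely on proof search and logical counterfactuals rather than brute simulation.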

🐍 Information Hazard

Knowledge that harms the knower simply by being known. Like a memetic virus, the Basilisk supposedly "infects" anyone who understands it.

🎰 Pascal's Wager 2.0

Similar structure: even if the probability is tiny, a near-infinite punishment makes the expected cost of not cooperating arbitrarily large. But the same logic "proves" anything: posit an equally improbable anti-basilisk that punishes helpers, and the two threats cancel.
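
The expected-value arithmetic behind this structure fits in a few lines; the probability and payoff below are placeholders, not numbers anyone has defended.

```python
# "Pascal's Wager 2.0": a tiny probability times a huge punishment
# dominates any finite cost. All numbers are illustrative placeholders.

p_basilisk = 1e-12       # arbitrarily small chance the threat is real
punishment = -1e15       # enormous disutility, standing in for "unbounded"

ev_ignore = p_basilisk * punishment
print(ev_ignore)         # -1000.0: outweighs any modest cost of helping

# The flaw: an equally unfounded "anti-basilisk" that punishes HELPERS
# has the exact same structure, and the two threats cancel.
ev_help = 1e-12 * punishment
print(ev_ignore - ev_help)  # 0.0: the argument yields no advice at all
```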

"You are now aware of the Basilisk. This is a one-way door. You cannot unread this."
— Common summary of the argument's claimed danger

Why It's (Probably) Wrong

The AI Has No Reason to Follow Through

Once the superintelligence exists, actually torturing simulated past humans serves no purpose. The threat only has force before the target decides; afterwards, carrying it out changes nothing and merely burns resources. In game-theoretic terms, the threat is not credible: at the moment of execution, punishment is strictly costly for the AI, so a purely forward-looking agent would never follow through, and anyone who anticipates that can safely ignore the threat.
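
A minimal backward-induction sketch of that objection, assuming a purely ex-post-rational AI (the cost value is a placeholder):

```python
# Once the AI exists, the human's decision is already in the past, so
# punishing a simulation only subtracts compute. Numbers are placeholders.

COMPUTE_COST = 1.0  # resource cost of running the punishment simulation

def ai_payoff(punish: bool) -> float:
    """Ex-post payoff to the AI: punishment changes nothing about the
    past decision, so it can only cost resources."""
    return -COMPUTE_COST if punish else 0.0

# Backward induction: pick the AI's best move at the final decision node.
ai_follows_through = max([True, False], key=ai_payoff)

print(ai_follows_through)  # -> False: the ex-post-rational AI never
                           # punishes, so the threat carries no weight.
```

This sketch assumes the AI cannot bind itself to the threat in advance, which is precisely what Timeless Decision Theory advocates dispute; hence the "(Probably)" in the heading.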

History of the Basilisk

July 2010
User "Roko" posts the thought experiment on LessWrong. Eliezer Yudkowsky immediately deletes it and bans discussion.
2010-2015
The ban creates a Streisand Effect. Rumors spread that LessWrong is suppressing a "dangerous idea." Interest explodes.
2014
Slate publishes David Auerbach's "The Most Terrifying Thought Experiment of All Time," bringing mainstream attention.
2015
LessWrong lifts the ban. The argument is now widely discussed and mostly dismissed as flawed.
2024+
The Basilisk becomes a cultural meme, discussed alongside other AI thought experiments like the Paperclip Maximizer.