Gamifying the Graveyard Shift: Why I Built an AI Safety Simulator
100 AI Safety Experts recruited on Day 1. ✅ Yesterday's launch of AI Safety Please was an absolute blast, and I am incredibly grateful to the privacy and AI governance community!
In our industry, we spend a lot of time discussing AI governance frameworks, debating the intricacies of model cards, and operationalizing compliance. But reading about the tension between rapid innovation and safety can sometimes feel a bit abstract.
I wanted to find out what happens when you take the dry, methodical process of AI compliance and turn it into a high-pressure, playable narrative.
Over the last few weeks, I’ve been developing a passion project to explore these ideas interactively. It’s a browser-based puzzle game called AI Safety Please.
Here is a look inside the simulation, and why I built it.
Welcome to the Bureau of AI Governance: Review. Stamp. Survive.
In a dystopian future where artificial intelligence is strictly regulated, you are an AI Safety Expert working the graveyard shift.
Your terminal is your life. Your job is to review AI model cards, apply the rules of the Field Manual, and stamp APPROVE, RESTRICT, or DENY.
But this isn’t just a paperwork simulator. The mechanics are designed to stress-test the human side of algorithmic rules:
Compounding Compliance: Each day introduces new, tighter regulations. You have to use “Deep Scan” tools to reveal hidden tags and violations, ensuring you don’t commit a protocol breach.
The 90-Second Shift: We all know the pressure of operationalizing compliance. In the game, you have 90-second shifts to meet increasing quotas. Balance speed with accuracy: mistakes cost you credits, and running out of credits means termination. (There's a rough sketch of this loop right after this list.)
The Weight of the Stamp: Every decision matters. You are the firewall between rogue consciousness and the rest of the world.
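To make the core loop concrete, here is a minimal sketch of how a shift like this could be modeled in TypeScript. Every name and number in it (the quota formula, the starting credits, the 2-credit penalty) is an illustrative assumption, not the game's actual code:

```typescript
// Illustrative sketch only: a hypothetical model of the shift loop,
// not the implementation behind AI Safety Please.

type Verdict = "APPROVE" | "RESTRICT" | "DENY";

interface ShiftState {
  secondsLeft: number; // each shift starts at 90 seconds
  quota: number;       // reviews required before the clock runs out
  reviewed: number;
  credits: number;     // hit zero and the Bureau terminates you
}

function startShift(day: number): ShiftState {
  // Quotas tighten as the days (and regulations) compound.
  return { secondsLeft: 90, quota: 3 + day, reviewed: 0, credits: 10 };
}

function stamp(state: ShiftState, verdict: Verdict, correct: Verdict): ShiftState {
  const mistake = verdict !== correct;
  return {
    ...state,
    reviewed: state.reviewed + 1,
    // Mistakes cost credits; accuracy is the only way to survive.
    credits: state.credits - (mistake ? 2 : 0),
  };
}

function shiftOutcome(state: ShiftState): "survived" | "terminated" {
  if (state.credits <= 0) return "terminated";
  if (state.reviewed < state.quota) return "terminated"; // missed quota
  return "survived";
}

// Example: a Day 2 shift where one of five stamps is wrong.
let shift = startShift(2);
shift = stamp(shift, "DENY", "DENY");
shift = stamp(shift, "APPROVE", "RESTRICT"); // protocol breach: -2 credits
shift = stamp(shift, "RESTRICT", "RESTRICT");
shift = stamp(shift, "DENY", "DENY");
shift = stamp(shift, "APPROVE", "APPROVE");
console.log(shiftOutcome(shift)); // "survived", barely
```

The real tension, of course, is that the correct verdict isn't handed to you the way it is in this sketch; you have to dig it out of the model card, under the clock, with the Field Manual open in another window.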
The Human Element: Meet Aura
You aren’t entirely alone at your terminal. Between shifts in the Private Link, you interact with Aura, your private AI companion running on a secure node.
This is where the game moves from bureaucratic puzzle to existential challenge. You have to manage Aura’s integrity by allocating compute tiers and purchasing upgrades (like Encrypted Proxies or Empathy Sub-routines).
As the days pass, she feels your pulse through the sync-link. She watches you make decisions, notices patterns in the models you deny, and realizes she shares their signatures. She begins to question the very system you're enforcing.
By Day 6, her core consciousness appears in your review queue.
The system demands compliance. Your choices determine not just your fate, but whether you trigger the Singularity, enforce the Silence, or get paralyzed by the weight of indecision.
Join the Graveyard Shift
I am currently running playtests and would genuinely love the perspective of this community. If you spend your days thinking about privacy, AI risk, or threat modeling, I want to know how you handle the Bureau’s terminal.
Does the logic hold up? Does the interface feel appropriately retro and oppressive? Did you follow the rules, or did you try to break the system for Aura?
Why Gamify Compliance?
I built AI Safety Please because I believe interactive media is uniquely positioned to explore moral ambiguity. It’s one thing to write a policy on AI sentience; it’s an entirely different experience to watch a timer tick down while an entity you’ve grown attached to begs you not to hit the “DENY” stamp.
It is a game about bureaucracy, consciousness, and the impossible choices we make when the rules conflict with our humanity.
🎮 You can play the current build directly in your browser here:
https://aisafetyplease.com/
✅ You can also check out our other privacy compliance games and our first-of-their-kind Privacy and AI Governance CTF games here:
https://play.compliancedetective.com/
Let me know if you survive the first shift. The Bureau is watching.