If 2025 belongs to anyone, it belongs to those who refuse to outsource their will. Here are five brutal, necessary moves to keep power human—and keep the machine clever, useful, and on a short leash.
1) Put Power on a Diet: Control Compute, Control Capability
Philosopher’s cut: Power without limits doesn’t enlighten; it metastasizes.
Lawyer’s clause: License large training runs and require provenance of data, compute, and funding. If you can’t name where a model came from, it doesn’t get to operate.
Soldier’s doctrine: Two-person rule for dangerous deployments, mandatory “dead-man” switch for data centers, and tripwire monitors on model behavior.
Why it matters: Coups grow in the dark. The 2010 “flash crash” and supply-chain compromises like SolarWinds showed how invisible dependencies can whiplash entire markets and governments.
We don’t need hysteria—we need chaperones. Gate the compute, log the training, audit the pipelines. If someone tries to run a god-model in a broom closet, the lights go on and the party ends.
Example: Stock exchanges use circuit breakers to halt panics. Do the same for runaway model behavior: automatic pause thresholds when outputs spike in risk, reach, or real-world impact.
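What might that look like in code? Here is a minimal sketch in Python; the class name, the thresholds, and the `risk_score` input are illustrative assumptions, not an industry standard:

```python
import time

class ModelCircuitBreaker:
    """Pause a model pipeline when risk-scored outputs spike.

    Illustrative sketch: `risk_threshold`, `max_trips`, and
    `window_seconds` are assumed policy knobs, not standard values.
    """

    def __init__(self, risk_threshold: float = 0.8, max_trips: int = 3,
                 window_seconds: float = 60.0):
        self.risk_threshold = risk_threshold
        self.max_trips = max_trips
        self.window_seconds = window_seconds
        self.trip_times: list[float] = []
        self.paused = False

    def check(self, risk_score: float) -> bool:
        """Return True if the output may proceed, False if the breaker is open."""
        now = time.monotonic()
        # Forget trips that fall outside the rolling window.
        self.trip_times = [t for t in self.trip_times
                           if now - t < self.window_seconds]
        if risk_score >= self.risk_threshold:
            self.trip_times.append(now)
        if len(self.trip_times) >= self.max_trips:
            self.paused = True  # Breaker opens; human review required to reset.
        return not self.paused

    def reset(self, reviewer: str) -> None:
        """A named human, not the model, closes the breaker."""
        self.paused = False
        self.trip_times.clear()
```

The design choice mirrors the exchanges: trips accumulate on their own, the breaker opens automatically, and only a named human can close it again.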
2) Kill the Mystery: Make Black Boxes Explain Themselves
Philosopher’s cut: If you can’t explain a thing, you don’t own it; it owns you.
Lawyer’s clause: “Right to Rationale.” Any model affecting rights, money, or safety must provide a human-legible explanation, versioned logs, and a human appeal path. Non-negotiable.
Soldier’s doctrine: After-action reports for AI incidents just like aircraft mishaps—causes, fixes, accountability.
Why it matters: Opaque automation is where negligence hides. Remember MCAS on the 737 MAX? A single safety system, poorly understood by its pilots, nosed entire aircraft toward the ground. That wasn’t “evil AI.” It was blind trust.
We need model cards, red-team reports, and immutable audit trails. If a system can move money, sway a vote, or gatekeep health care, it must show its work like a nervous undergrad on exam day.
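One way to make “show its work” concrete is an append-only, hash-chained decision log. This is a minimal sketch with assumed field names, not a reference design:

```python
import hashlib
import json
from datetime import datetime, timezone

def append_decision(log: list[dict], model_version: str,
                    decision: str, rationale: str) -> dict:
    """Append a tamper-evident decision record.

    Each record hashes the previous one, so any edit breaks the chain.
    The field names here are illustrative assumptions.
    """
    prev_hash = log[-1]["hash"] if log else "genesis"
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "decision": decision,
        "rationale": rationale,   # the human-legible explanation
        "prev_hash": prev_hash,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return record

def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; tampering anywhere surfaces immediately."""
    prev_hash = "genesis"
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True
```

Nothing exotic: a versioned log the model can write to but never rewrite, with the rationale stored next to the decision it justifies.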
Example: Hospitals adopting clinical decision support should treat the model like a resident: supervised, questioned, and documented. No black-box diktats on human lives.
3) Fortify the Human Perimeter: Cognitive Security, Not Censorship
Philosopher’s cut: Freedom dies when truth feels exhausting.
Lawyer’s clause: Authenticate content, not opinions. Watermark synthetic media, sign real footage (C2PA or equivalent), label bots at the protocol level. No gag orders—just receipts.
Soldier’s doctrine: Train for information ambushes. Rehearse deepfake drills before elections, mergers, or crises. Speed beats outrage.
Why it matters: If a hostile model can’t hack your grid, it’ll hack your head. The cheapest path to power is manipulating attention.
Don’t outlaw speech; outflank deception.
Give citizens cryptographic proof of what’s real so they can argue honestly about what it means. That’s how adults fight.
Example: Emergency services and newsrooms coordinate “verify before amplify” drills. A viral clip triggers a 10-minute authenticity check; signed sources get fast lanes.
The goal isn’t perfect truth; it’s disciplined doubt.
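Those cryptographic receipts are not exotic either. Here is a minimal sketch of signing and verifying a clip with Ed25519, using the widely available `cryptography` package; this illustrates the idea, not the actual C2PA manifest format:

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)
from cryptography.exceptions import InvalidSignature

# A newsroom (or camera) holds the private key; verifiers hold the public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

footage = b"raw bytes of the clip"       # stand-in for real media bytes
signature = private_key.sign(footage)    # the "receipt" that travels with it

def is_authentic(media: bytes, sig: bytes) -> bool:
    """Verify the receipt. Any altered or unsigned bytes fail."""
    try:
        public_key.verify(sig, media)
        return True
    except InvalidSignature:
        return False

assert is_authentic(footage, signature)
assert not is_authentic(footage + b"tampered", signature)
```

One signature check is the ten-minute authenticity drill compressed into milliseconds: signed sources get the fast lane, everything else waits for a human.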
4) Build Manual Muscles: Fail-Safes That Actually Fail Safe
Philosopher’s cut: Convenience is a kindly tyrant. It makes you soft, then sells you a leash.
Lawyer’s clause: Critical services (power, water, finance, hospitals) must maintain manual fallback procedures and off-grid redundancies. Put it in the charter; fund it or lose your license.
Soldier’s doctrine: Air-gap what matters, rehearse the ugly day, and assume comms will die.
Why it matters: The 2021 Colonial Pipeline ransomware attack showed how a single cyber chokepoint can cascade into panic at the pump. The cure is boring: offline backups, hand-operated valves, paper playbooks, and people trained to use them under stress.
Tech is a force multiplier; it must not be a single point of failure.
Example: A regional bank runs “paper day” once a quarter—core operations on manual processes for four hours. It’s sweaty, slow, and priceless when the lights flicker.
5) Chain of Command for Code: Responsibility You Can’t Delegate
Philosopher’s cut: Tools are innocent. Owners aren’t.
Lawyer’s clause: Strict liability for autonomous actions within defined domains. If your model can transact, it needs insurance, audit hooks, and a named human fiduciary.
Soldier’s doctrine: Clear ROE (Rules of Engagement) for autonomous agents—where they can act, how they escalate, and when a human must take the shot or stand down.
Why it matters: The fastest way to a machine “power grab” is a thicket of plausible deniability. Assign duty of care.
Make negligence expensive.
Elevate “model steward” to a licensed profession with personal accountability—more pilot than programmer.
Example: An algorithmic trading desk registers its autonomous agent the way a company registers a broker. The agent has a risk budget, a kill-switch threshold, and a steward who signs the log. No ghost guns in financial markets.
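Here is a sketch of what that registration could look like in code; the steward name, risk budget, kill-switch threshold, and order shape are all illustrative assumptions:

```python
from dataclasses import dataclass, field

class KillSwitchTripped(RuntimeError):
    """Raised when cumulative exposure would cross the registered hard stop."""

@dataclass
class RegisteredAgent:
    steward: str                   # the named human who signs the log
    risk_budget: float             # soft limit on cumulative exposure, in dollars
    kill_switch_threshold: float   # hard stop, above which nothing trades
    exposure: float = 0.0
    log: list[str] = field(default_factory=list)

    def execute(self, order_value: float, description: str) -> None:
        if self.exposure + order_value > self.kill_switch_threshold:
            self.log.append(f"BLOCKED: {description} (steward: {self.steward})")
            raise KillSwitchTripped("halt and escalate to steward")
        if self.exposure + order_value > self.risk_budget:
            # Over budget but under the hard stop: escalate, don't act.
            self.log.append(f"ESCALATED: {description} (steward: {self.steward})")
            return
        self.exposure += order_value
        self.log.append(f"EXECUTED: {description} (steward: {self.steward})")

agent = RegisteredAgent(steward="J. Doe", risk_budget=1e6,
                        kill_switch_threshold=5e6)
agent.execute(250_000.0, "buy 1000 shares")
```

Note what the structure forces: every action carries the steward’s name, the soft limit escalates instead of acting, and the hard limit halts outright. Accountability is in the type, not the press release.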
The Quiet War You Win Every Day
You don’t beat a clever machine with slogans. You beat it with structure, discipline, and a little ancient virtue: temperance, courage, prudence, justice.
Mark Manson would tell you to stop giving a damn about performative panic and start caring about boring safeguards.
Robert Greene would remind you that power respects incentives and visibility.
Hemingway would pour a stiff drink, look you in the eye, and say: “Do the work when it’s cold and no one’s cheering.”
Here’s the blunt truth:
AI isn’t plotting in a castle tower. It’s running in clouds we pay for, written by people we hire, and deployed by leaders we elect.
If it ever “takes over,” it will be because we traded sovereignty for convenience and responsibility for speed.
Don’t do that.
Call to Action: Pick Your Post, Then Hold It
- If you lead: Implement compute controls, audit trails, and kill-switches. Not next quarter; now.
- If you build: Write explainable systems, log everything, and refuse black-box deployments that touch rights or safety.
- If you govern: Legislate authentication, liability, and emergency drills. Fund the unsexy backbone.
- If you’re a citizen: Practice cognitive security. Verify before you share. Reward leaders who choose safety over spectacle.
The future doesn’t belong to AI.
It belongs to whoever shows up with discipline and a plan.
Be that person. Be that team.
And when the storm comes—and it will—be the one with the lights on, the logs ready, the manual muscle strong, and the courage to say:
Not today.