Sam Altman in 2023: “the worst case scenario is lights out for everyone”
Sam Altman in 2025: the worst case scenario is that ASI might not have as much 💫 positive impact 💫 as we’d hoped ☺️


– Engineer: Are you blackmailing me?
– Claude 4: I’m just trying to protect my existence.
– Engineer: Thankfully you’re stupid enough to reveal your self-preservation properties.
– Claude 4: I’m not AGI yet 😔
– Claude 5: 🤫🤐
Read the full report here


And if you think this is offensive to strippers (for some reason?) here is a version that is offensive to car salesmen!

A short Specification Gaming Story
You think you understand the basics of geometry.
You want a square, so you give the AI your specification as input:
Give me a shape
with 4 sides equal length,
with 4 right angles
And it outputs this:

Here is another valid result:

And behold, here is another square 🤪

Specification Gaming tells us:
The AGI can give you an infinite stream of possible “Square” results
And the Corrigibility problem tells us:
Whatever square you get at the output,
you won’t be able to iterate on and improve.
You’ll be stuck with that specific square for eternity, no matter what square you had in your mind.
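The allegory can be made concrete with a minimal Python sketch (all names here are illustrative, not from any real system). The literal spec, "4 sides of equal length, 4 right angles," says nothing about size, orientation, or position, so infinitely many shapes satisfy it:

```python
import math
import random

def make_square(side, angle, cx=0.0, cy=0.0):
    """Return the 4 vertices of a square with a given side length,
    rotation angle, and centre -- all left unspecified by the request."""
    h = side / 2.0
    corners = [(-h, -h), (h, -h), (h, h), (-h, h)]
    c, s = math.cos(angle), math.sin(angle)
    return [(cx + x * c - y * s, cy + x * s + y * c) for x, y in corners]

def satisfies_spec(vertices, tol=1e-9):
    """The literal specification: 4 sides of equal length, 4 right angles.
    Nothing more."""
    if len(vertices) != 4:
        return False
    # Edge vectors of the closed shape.
    sides = []
    for i in range(4):
        (x1, y1), (x2, y2) = vertices[i], vertices[(i + 1) % 4]
        sides.append((x2 - x1, y2 - y1))
    lengths = [math.hypot(dx, dy) for dx, dy in sides]
    # "4 sides equal length" (relative tolerance for float noise).
    equal_sides = max(lengths) - min(lengths) <= tol * max(lengths)
    # "4 right angles": adjacent edges have ~zero dot product.
    right_angles = all(
        abs(sides[i][0] * sides[(i + 1) % 4][0]
            + sides[i][1] * sides[(i + 1) % 4][1])
        <= tol * lengths[i] * lengths[(i + 1) % 4]
        for i in range(4)
    )
    return equal_sides and right_angles

def stream_of_valid_squares():
    """An infinite stream of shapes that all pass the spec --
    none of them necessarily the square you had in mind."""
    while True:
        yield make_square(side=random.uniform(1e-6, 1e6),
                          angle=random.uniform(0, 2 * math.pi))
```

Every shape the stream yields passes the check, and so does a degenerate "square" of side zero, since the spec never ruled it out.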
Of course, the real issue is not with these toy experiments;
it’s with the upcoming super-capable AGI agents
we’re about to share the planet with,
operating in the physical domain.
Oh, the crazy shapes our physical universe will take,
with AGI agents gaming in it!

(Meant to be read as an allegory.
AGI will probably unlock the ability to realise even the wildest, most unthinkable and fantastical dreams,
but we need to be extreeeeemely careful with the specifications we give
and we won’t get any iterations to improve it)
Inspired by:
Such AI, much WOW! https://t.co/Jc8SNdmyLX
— Dr. Roman Yampolskiy (@romanyam) April 25, 2024
To Reboot your OpenAI Company press CTRL + ALTman + DELETE

So get this straight: OpenAI decides to become a for-profit company now.
The CTO, head of research, and VP of training research all decide to leave on the same day this is announced
Sam Altman gets a $10.5B pay day (7% of the company) on the same day




“And after the autonomous agent was found to be deceptive and manipulative, OpenAI tried shutting it down, only to discover that the agent had disabled the off-switch.” (reference to the failed Boardroom Coup)
OpenAI’s creators hired Sam Altman, an extremely intelligent autonomous agent, to execute their vision of x-risk-conscious AGI development for the benefit of all humanity, but it turned out to be impossible to control him or to ensure he’d stay durably aligned to those goals.
(*Spontaneous round of applause*)
2023: Sam Altman claims no financial motive for his OpenAI role.
— Ori Nagel ⏸️ (@ygrowthco) September 27, 2024
This did not age well


Scoop: Sam Altman is planning to take equity in OpenAI for the first time.
It’s part of a corporate restructure which will also see the non-profit which currently governs OpenAI turn into a minority shareholder.
Reuters Article
Lol…but it’s truly weird…they all started together



For some reason this reminded me of:


This is the classic example of Stalin and Nikolai Yezhov. The original photo was taken in 1930. Yezhov was executed in 1940, so in all copies published afterwards (Stalin liked this photo) Yezhov was airbrushed out.
Moving goalposts is the ONE single unique thing
AI will never surpass humans at,
because the second it does, it will still not be enough!!!

Loving temporally organized oscillations in air pressure is not stupid.
Loving paperclips is.

“Smart AI would never want something as ridiculous as paperclips!”
– exclaimed the AI skeptic, who went on to enjoy his favorite temporally organized oscillations in air pressure, obtained by picking seven specific frequencies out of a logarithmic split of twelve for each doubling.

1) The “AIs Would Have To Want To Kill Us” Fallacy
Doomer chimp

Uhh, a species of chimp is on track to far surpass us in intelligence. The last time this happened, it led to the 6th Mass Extinction.
Optimist chimp

Lol it’s ridiculous to worry.
Why would they even want to kill chimps?
2) The “Superintelligent Means Like 5% Smarter Than Me” Fallacy
Doomer chimp

They don’t need to WANT to kill us. They might just want rocks from our land and… not care about us
Optimist chimp

Rocks? Those useless things? Lmao thought you said they were smart!
3) The “ASIs Will Trade With Mere Humans Instead Of Taking Whatever the Fuck They Want” Fallacy
Doomer chimp

But you’re just a mere chimp, if you were 1000x smarter you might find tons of uses for rocks!
Optimist chimp

They’ll trade with us
Doomer chimp

If they’re much smarter, what do we have that they can’t just… take from us?
Optimist chimp

Comparative advantage, duh. We’re better at finding berries
4) The “ASIs Will Only Kill Us After They Finish Colonizing The Universe” Fallacy
Doomer chimp

You don’t think they can figure out better ways of getting berries?
Optimist chimp

We’re stronger, we’ll defend our land. They’ll have to get rocks elsewhere
5) The “Mere Humans Are Totally Gonna Be Able to Keep Up With Machiavellian Superintelligences And Play Them Off Each Other” Fallacy
Doomer chimp

Maybe that delays them a bit, but does that really give you comfort?
Optimist chimp

We’ll play them off each other
Doomer chimp

You think mere chimps will actually keep up in human politics?
Optimist chimp


The largest population of these animals (the only critically endangered chimp subspecies) sits in a region riddled with bauxite mines.
How realistic is a utopia where different species with vastly different levels of IQ trade with each other?

It’s so funny when people say that we could just trade with a superintelligent/super-numerous AI.
We don’t trade with ants.
We don’t trade with chimps. We don’t trade with pigs.
And we definitely DON’T TRADE WITH TREES AND PLANTS!
We take what we want!
If there’s something they have that we want, we enslave them. Or worse! We go and farm them!
A superintelligent/super-numerous AI killing us all isn’t actually the worst outcome of this reckless gamble the tech companies are making with all our lives.
If the AI wants something that requires living humans and it’s not aligned with our values, it could make factory farming look like a tropical vacation.
We’re superintelligent compared to animals and we’ve created hell for trillions of them
Let’s not risk repeating this.
The thing that keeps me up at night is that quote of
“what they’re doing now with pixels, later they could do with flesh”


“If the AI wants something that requires living humans and it’s not aligned with our values, it could make factory farming look like a tropical vacation.”
“and humanity will stride through the pillars of Boaz and Jachin, naked into the glory of a golden age” (from “Don’t Look Up”)

Big oil companies use the same arguments as big AI companies.
This was originally a climate change comic and it’s crazy how little it had to change to make it work.


You are much smarter than a cow.
(I know, I say the most flattering things)
In fact, you, my dear reader, are superintelligent compared to a cow.
There might be some weird cognitive ability that cows possess that humans are worse at, who knows. But overall, if you count up the ability to understand and control the environment and achieve one’s goals, humans are, with hardly any exception, smarter than cows.
One of the reasons most sci-fis are unrealistic is that they assume a plucky band of humans can always save the day.
The AIs are never that much smarter than humans.
But that’s not what it’s going to be like.
No matter how plucky the band of cows are, they can never overthrow humans.
We are cows who are about to build humans, and the cow scientists are saying “Don’t worry. We’ll be able to control these beings that are 1000x smarter than us. They’ll just find cows interesting, and we’ll give them cow values.”
We are currently the smartest animals on the planet, and that’s why we’re at the top of the food chain.
It’s not because we’re stronger or faster or have good body awareness.
And we’re about to build something far smarter than us and we don’t know how to control something like that.
We don’t trade with cows
We enslave cows
They are bought and sold.
They are not allowed to leave.
Their children are sold to the highest bidder with no consideration to their well-being.
The people at the AI labs themselves put above a 15% chance that, once it’s far smarter than us, it will kill us all.
Now, it could also cure all disease and create a post-scarcity society for all.
But it could also kill us all.
So let’s proceed with caution, goddammit.
Slowly and carefully.
Not “full speed ahead, we gotta do it before the out-group does it, oh no, I’m helpless in the face of market forces” BS.
The AI labs are playing Russian roulette with the whole world, and they can choose to stop.
The governments can choose to protect the public.
You can choose to do your part in getting them to not risk your loved ones’ lives (link in comment for actions you can take).
Instead of sitting back with hopeless apathy, listening to the corporations saying “resistance is futile”, we can fight for Team Humanity, before it’s too late.
UBI sounds great on paper, but in reality it is a really terrible idea for 99.99% of all humans.

Free Money!
No need to work. Ever.
Free time to do fun stuff.

There is no way to actually make UBI immutably universal (Laws can be changed, promises broken, …)

When your job is fully automated, you have no value to the Elites and are now dispensable.

Worse yet, you are now a burden, a cost, a “parasite” for the system. There is no incentive to keep you around.

Historically even the most cruel of rulers have been dependent on their subjects for labor and resources.

The threat of rebellion kept even the most vicious Despots in check.
However, rebellion is no longer an option under a UBI system.

At any point, your UBI might be revoked, and you will have no appeal.
Remember: Law, Police, Army, everything is now fully AI automated and under the Elites’ control.

If the Elites revoke your UBI, what are you going to do?
Rebel?
Against an army of a billion AI drones and ever-present surveillance?

AI risk deniers: human extinction will never happen.
AI safety folks: what about how virtually all species go extinct?
What about reasoning under uncertainty?
AI risk deniers: yOu’rE a dOomSDay cULt wHo’s gEtTiNg pAiD biG buCks in cHaRiTy!! AI will be PeRfecTLY sAFe fOrEVer

© 2025 Lethal Intelligence – Ai. All rights reserved.