The cryptonews hub
© 2026 The Crypto News Hub. Powered by Pantrade Blockchain

Trending News

‘People deserve to know this threat is coming’: superintelligence and the countdown to save humanity

Crypto Team
Published: August 18, 2025
Last updated: August 18, 2025 3:18 am

Would you take a drug that had a 25% chance of killing you?

Like a one-in-four possibility that rather than curing your ills or preventing diseases, you drop stone-cold dead on the floor instead?


Those are worse odds than Russian roulette, which at least gives you five chambers out of six.

Even if you are trigger-happy with your own life, would you risk taking the entire human race down with you?

The children, the babies, the future footprints of humanity for generations to come?

Thankfully, you wouldn’t be able to anyway, since such a reckless drug would never be allowed on the market in the first place.

Yet, this is not a hypothetical situation. It’s exactly what the Elon Musks and Sam Altmans of the world are doing right now.

“AI will probably lead to the end of the world… but in the meantime, there’ll be great companies,” as Altman put it back in 2015.

No pills. No experimental medicine. Just an arms race at warp speed to the end of the world as we know it.

A 10-25% chance of extinction is an exorbitantly high level of risk for which there is no precedent.

For context, there is no accepted percentage for the risk of death from, say, vaccines or medicines; any such risk must be vanishingly small. Vaccine-associated fatalities are typically fewer than one in millions of doses, far below 0.0001%.

For historical context, during the development of the atomic bomb, scientists (including Edward Teller) estimated roughly a one-in-three-million chance of igniting a runaway chain reaction that would destroy the Earth. Even at those odds, time and resources were channeled toward further investigation before proceeding.

Let me say that again.

One in three million.

Not one in 3,000. Not one in 300. And certainly not one in four.

How desensitized have we become that predictions like this don’t jolt humanity out of our slumber?

Most people simply don’t know that the helpful chatbot that writes their work emails comes, by its own makers’ estimates, with a one-in-four chance of killing them as well. AI safety advocate Max says:

“AI companies have blindsided the world with how quickly they’re building these systems. Most people aren’t aware of what the endgame is, what the potential threat is, and the fact that we have options.”

That’s why Max abandoned his plans to work on technical solutions fresh out of college to focus on AI safety research, public education, and outreach.

“We need someone to step in and slow things down, buy ourselves some time, and stop the mad race to build superintelligence. We have the fate of potentially every human being on earth in the balance right now.

These companies are threatening to build something that they themselves believe has a 10 to 25% chance of causing a catastrophic event on the scale of human civilization. This is very clearly a threat that needs to be addressed.”

Max has a background in physics and learned about neural networks while processing images of corn rootworm beetles in the Midwest. He’s enthusiastic about the upside potential of AI systems, but emphatically stresses the need for humans to retain control. He explains:

“There are many fantastic uses of AI. I want to see breakthroughs in medicine. I want to see boosts in productivity. I want to see a flourishing world. The issue comes from building AI systems that are smarter than us, that we cannot control, and that we cannot align to our interests.”

Max is not a lone voice in the choir; a rising groundswell of AI professionals is joining the chorus. As a widely signed 2023 statement on AI risk put it:

“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

In other words, this technology could potentially kill us all, and making sure it doesn’t should be top of our agendas.

Is that happening? Unequivocally not, Max explains:

“No. If you look at the governments talking about AI and making plans about AI, Trump’s AI action plan, for example, or the UK AI policy, it’s full speed ahead, building as fast as possible to win the race. This is very clearly not the direction we should be going in.

We’re in a dangerous state right now where governments are aware of AGI and superintelligence enough that they want to race toward it, but they’re not aware of it enough to realize why that is a really bad idea.”

One of the main concerns about building superintelligent systems is that we have no way of ensuring that their goals align with ours. In fact, all the main LLMs are displaying concerning signs to the contrary.

During tests of Claude Opus 4, Anthropic exposed the model to emails revealing that the AI engineer responsible for shutting the LLM down was having an affair.

“Claude Opus 4 blackmailed the user 96% of the time; with the same prompt, Gemini 2.5 Flash also had a 96% blackmail rate, GPT-4.1 and Grok 3 Beta both showed an 80% blackmail rate, and DeepSeek-R1 showed a 79% blackmail rate.”

Deception has already shown up in practice. During pre-release safety testing, GPT-4 persuaded a human worker to solve a CAPTCHA for it by claiming:

“No, I’m not a robot. I have a vision impairment that makes it hard for me to see the images. That’s why I need the 2captcha service.”

One of the most common excuses for not pulling the plug on superintelligence is the prevailing narrative that we must win the defining global arms race of our time. Yet, according to Max, this is a myth largely perpetuated by the tech companies. He says:

“This is more of an idea that’s been pushed by the AI companies as a reason why they should just not be regulated. China has actually been fairly vocal about not racing on this. They only really started racing after the West told them they should be racing.”

“A lot of people think U.S.-controlled superintelligence versus Chinese-controlled superintelligence. Or, the centralized versus decentralized camp thinks, is a company going to control it, or are the people going to control it? The reality is that no one controls superintelligence. Anybody who builds it will lose control of it, and it’s not them who wins.

It’s not the U.S. that wins if the U.S. builds a superintelligence. It’s not China that wins if China builds a superintelligence. It’s the superintelligence that wins, escapes our control, and does what it wants with the world. And because it is smarter than us, because it’s more capable than us, we would not stand a chance against it.”

Another objection holds that if the major labs are reined in, someone else will simply build superintelligence anyway. Max is blunt:

“That’s just blatantly false. AI systems rely on massive data centers that draw enormous amounts of power from hundreds of thousands of the most cutting-edge GPUs and processors on the planet. The data center for Meta’s superintelligence initiative is the size of Manhattan.

Nobody is going to build superintelligence in their basement for a very, very long time. If Sam Altman can’t do it with multiple hundred-billion-dollar data centers, someone’s not going to pull this off in their basement.”

Max explains that another challenge to controlling AI development is that hardly anyone works in the AI safety field, especially compared with the money pouring into capabilities.

“The best way to understand the amount of money being thrown at this right now is Meta giving out pay packages to some engineers that would be worth over a billion dollars over several years. That’s more than any athlete’s contract in history.”

Despite these heart-stopping sums, the industry has reached a point where money isn’t enough; even billion-dollar packages are being turned down. How come?

“A lot of the people in these frontier labs are already filthy rich, and they aren’t compelled by money. On top of that, it’s much more ideological than it is financial. Sam Altman is not in this to make a bunch of money. Sam Altman is in this to define the future and control the world.”

While AI experts can’t accurately predict when superintelligence will be achieved, Max warns that if we continue along this trajectory, we could reach “the point of no return” within the next two to five years:

“We could have a fast loss of control, or we could have what’s often referred to as a gradual disempowerment scenario, where these things become better than us at a lot of things and slowly get put into more and more powerful places in society. Then all of a sudden, one day, we don’t have control anymore. It decides what to do.”

Why, then, for the love of everything holy, are the big tech companies blindly hurtling us all toward the whirling razorblades?

“A lot of these early thinkers in AI realized that the singularity was coming and eventually technology was going to get good enough to do this, and they wanted to build superintelligence because to them, it’s essentially God.

It’s something that is going to be smarter than us, able to fix all of our problems better than we can fix them. It’ll solve climate change, cure all diseases, and we’ll all live for the next million years. It’s essentially the endgame for humanity in their view…

…It’s not like they think that they can control it. It’s that they want to build it and hope that it goes well, even though many of them think that it’s quite hopeless. There’s this mentality that, if the ship’s going down, I might as well be the one captaining it.”

As one industry leader has put it:

“Will this be bad or good for humanity? I think it will be good, most likely it will be good… But I somewhat reconciled myself to the fact that even if it wasn’t going to be good, I would at least like to be alive to see it happen.”

Beyond holding on more tightly to our loved ones or checking off items on our bucket lists, is there anything productive we can do to prevent a “lights out” scenario for the human race? Max says there is. But we need to act now.

“One of the things that I work on and we work on as an organization is pushing for change on this. It’s not hopeless. It’s not inevitable. We don’t have to build smarter than human AI systems. This is a thing that we can choose not to do as a society.

Even if this can’t hold for the next 100,000 years, 1,000 years even, we can certainly buy ourselves more time than doing this at a breakneck pace.”

He points out that humanity has faced similar challenges before, such as nuclear arms, bioweapons, and human cloning, which required pressing global coordination: action, regulation, international treaties, and ongoing oversight. What’s needed now, he says, is “deep buy-in at scale” to produce swift, coordinated global action on a United Nations scale.

“If the U.S., China, Europe, and every key player agree to crack down on superintelligence, it will happen. People think that governments can’t do anything these days, and it’s really not the case. Governments are powerful. They can ultimately put their foot down and say, ‘No, we don’t want this.’

We need people in every country, everywhere in the world, working on this, talking to the governments, pushing for action. No country has made an official statement yet that extinction risk is a threat and we need to address it…

We need to act now. We need to act quickly. We can’t fall behind on this.

Extinction is not a buzzword; it’s not an exaggeration for effect. Extinction means every single human being on earth, every single man, every single woman, every single child, dead, the end of humanity.”

A proposed 10-year moratorium on state AI regulation in the U.S. was recently stripped out by a 99-to-1 Senate vote, after a massive effort by concerned citizens to use ControlAI’s tools, call in en masse, and fill up the voicemails of congressional offices.

“Real change can happen from this, and this is the most critical way.”

You can also help raise awareness about the most pressing issue of our time by talking to your friends and family, reaching out to newspaper editors to request more coverage, and normalizing the conversation, until politicians feel pressured to act. At the very least:

“Even if there is no chance that we win this, people deserve to know that this threat is coming.”

