How a single computer file accidentally took down 20% of the internet yesterday – in plain English

Crypto Team
Published: November 19, 2025 | Last updated: November 19, 2025, 11:57 pm

The modern internet is so dependent on a handful of infrastructure companies that a single configuration error made large parts of it totally unreachable for several hours.

Many of us work in crypto because we understand the dangers of centralization in finance, but the events of yesterday were a clear reminder that centralization at the internet’s core is just as urgent a problem to solve.


The obvious giants like Amazon, Google, and Microsoft run enormous chunks of cloud infrastructure.

But equally critical are less familiar firms: CDN providers (networks of servers that deliver websites faster around the world) such as Cloudflare, Fastly, and Akamai, cloud hosts like DigitalOcean, and DNS providers (the “address book” of the internet) such as UltraDNS and Dyn.

Most people barely know their names, yet their outages can be just as crippling, as we saw yesterday.


Yesterday’s culprit was Cloudflare, a company that routes almost 20% of all web traffic.

It now says the outage started with a small database configuration change that accidentally caused a bot-detection file to include duplicate items.

That file suddenly grew beyond a strict size limit. When Cloudflare’s servers tried to load it, they failed, and many websites that use Cloudflare began returning HTTP 5xx errors (error codes users see when a server breaks).

Here’s the simple chain:

The trouble began at 11:05 UTC when a permissions update made the system pull extra, duplicate information while building the file used to score bots.

That file normally includes about sixty items. The duplicates pushed it past a hard cap of 200. When machines across the network loaded the oversized file, the bot component failed to start, and the servers returned errors.
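
To make the chain concrete, here is a minimal Python sketch of how a hard item cap turns an oversized file into an outright failure rather than a warning. The names, numbers, and error handling are invented for illustration; this is not Cloudflare’s code.

```python
# Hypothetical illustration of a hard cap on a preloaded feature file.
# Names and numbers echo the incident description, not Cloudflare's code.

FEATURE_LIMIT = 200  # memory is pre-allocated for at most 200 features

def load_feature_file(features):
    """Refuse to load a feature file that exceeds the pre-allocated limit."""
    if len(features) > FEATURE_LIMIT:
        # Hard stop: the bot module fails to start and requests get 5xx errors.
        raise RuntimeError(
            f"feature file has {len(features)} entries, limit is {FEATURE_LIMIT}"
        )
    return features

normal_file = [f"feature_{i}" for i in range(60)]   # typical size: about sixty items
oversized_file = normal_file * 4                    # duplicates push it past the cap

load_feature_file(normal_file)                      # loads fine
try:
    load_feature_file(oversized_file)               # raises -> module down -> HTTP 5xx
except RuntimeError as err:
    print(err)
```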

According to Cloudflare, both the current and older server paths were affected. One returned 5xx errors. The other assigned a bot score of zero, which could have falsely flagged traffic for customers who block based on bot score (Cloudflare’s bot vs. human detection).
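
To see why a score of zero matters, consider a hypothetical customer rule of the kind the article describes; the threshold and function names below are invented for illustration.

```python
# Hypothetical customer rule: block traffic that scores "very likely a bot".
# If the degraded path reports 0 for every request, all traffic falls below
# the threshold and real visitors are blocked along with the bots.

BOT_SCORE_THRESHOLD = 30  # illustrative cutoff; higher scores mean "more human"

def should_block(bot_score):
    return bot_score < BOT_SCORE_THRESHOLD

print(should_block(85))  # normal human visitor -> False (allowed)
print(should_block(0))   # degraded path reports 0 -> True (falsely blocked)
```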

Diagnosis was tricky because the bad file was rebuilt every five minutes from a database cluster being updated piece by piece.

If the system pulled from an updated piece, the file was bad. If not, it was good. The network would recover, then fail again, as versions switched.
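
A rough simulation of that flapping, assuming (purely for illustration) that each five-minute rebuild queries a random part of the cluster and that updated parts produce the bad file:

```python
import random

def rebuild_feature_file(fraction_updated):
    """Each rebuild queries one part of the cluster; updated parts yield the bad file."""
    return "bad (oversized)" if random.random() < fraction_updated else "good"

# As the database rollout progresses, more rebuilds come out bad, so the
# network recovers and fails again in an on-off pattern.
for minute in range(0, 60, 5):
    print(f"{minute:02d} min: {rebuild_feature_file(minute / 60)}")
```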

According to Cloudflare, this on-off pattern initially looked like a possible DDoS, especially since a third-party status page also failed around the same time. Focus shifted once teams linked errors to the bot-detection configuration.

By 13:05 UTC, Cloudflare applied a bypass for Workers KV (its key-value storage service) and Cloudflare Access (its zero-trust authentication service), routing around the failing behavior to cut impact.

The main fix came when teams stopped generating and distributing new bot files, pushed a known good file, and restarted core servers.

Cloudflare says core traffic began flowing by 14:30, and all downstream services recovered by 17:06.

Cloudflare’s systems enforce strict limits to keep performance predictable. That helps avoid runaway resource use, but it also means a malformed internal file can trigger a hard stop instead of a graceful fallback.

Because bot detection sits on the main path for many services, one module’s failure cascaded into the CDN, security features, Turnstile (CAPTCHA alternative), Workers KV, Access, and dashboard logins. Cloudflare also noted extra latency as debugging tools consumed CPU while adding context to errors.

On the database side, a narrow permissions tweak had wide effects.

The change made the system “see” more tables than before. The job that builds the bot-detection file did not filter tightly enough, so it grabbed duplicate column names and expanded the file beyond the 200-item cap.
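
A hedged sketch of that failure mode: a build job that lists column metadata without filtering on the database name or de-duplicating will silently double in size when the same tables become visible under a second name. The schema, database names, and rows below are invented for illustration.

```python
# Invented metadata rows: (database, table, column).
# After the permissions change, the same columns appear under a second
# database name, so an unfiltered listing returns each one twice.
visible_columns = (
    [("default", "bot_features", f"feature_{i}") for i in range(60)]
    + [("replica", "bot_features", f"feature_{i}") for i in range(60)]  # newly visible
)

def build_features_loose(rows):
    """The buggy pattern: keep every column name the metadata query returns."""
    return [column for _db, _table, column in rows]

def build_features_strict(rows):
    """A tighter build: restrict to one database and de-duplicate."""
    return sorted({column for db, _table, column in rows if db == "default"})

print(len(build_features_loose(visible_columns)))   # 120 entries, duplicates included
print(len(build_features_strict(visible_columns)))  # 60 unique entries
```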

The loading error then triggered server failures and 5xx responses on affected paths.

Impact varied by product. Core CDN and security services threw server errors.

Workers KV saw elevated 5xx rates because requests to its gateway passed through the failing path. Cloudflare Access had authentication failures until the 13:05 bypass, and dashboard logins broke when Turnstile could not load.

Cloudflare Email Security temporarily lost an IP reputation source, reducing spam detection accuracy for a period, though the company said there was no critical customer impact. After the good file was restored, a backlog of login attempts briefly strained internal APIs before normalizing.

The database change landed at 11:05 UTC. First customer-facing errors appeared around 11:20–11:28.

Teams opened an incident at 11:35 and applied the Workers KV and Access bypass at 13:05. They stopped creating and spreading new files around 14:24, pushed a known good file, saw global recovery by 14:30, and marked full restoration at 17:06.

According to Cloudflare, automated tests flagged anomalies at 11:31, and manual investigation began at 11:32, which explains the pivot from suspected attack to configuration rollback within two hours.

A five-minute rebuild cycle repeatedly reintroduced bad files as different database pieces updated.

A 200-item cap protects memory use, and a typical count near sixty left comfortable headroom, until the duplicate entries arrived.

The cap worked as designed, but the lack of a tolerant “safe load” for internal files turned a bad config into a crash instead of a soft failure with a fallback model. According to Cloudflare, that’s a key area to harden.
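
One common way to get that softer failure, sketched here in Python under the assumption of a simple in-memory config holder (not Cloudflare’s design), is to validate each new file before swapping it in and to keep serving the last known good version when validation fails:

```python
class SafeConfigLoader:
    """Keep serving the last known good config when a new candidate fails validation."""

    def __init__(self, limit=200):
        self.limit = limit
        self.current = None  # last known good feature list

    def try_update(self, candidate):
        # Reject oversized or duplicate-laden files instead of crashing on them.
        if len(candidate) > self.limit or len(set(candidate)) != len(candidate):
            return False          # keep running on the previous version
        self.current = list(candidate)
        return True

loader = SafeConfigLoader()
good = [f"feature_{i}" for i in range(60)]
loader.try_update(good)              # accepted and served
ok = loader.try_update(good * 4)     # oversized file rejected, no crash
print(ok, loader.current == good)    # False True -> still on the good file
```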

Cloudflare says it will harden how internal configuration is validated, add more global kill switches for feature pipelines, stop error reporting from consuming large CPU during incidents, review error handling across modules, and improve how configuration is distributed.

source
