
Cloudflare outage on December 5, 2025

782 points | 2 months ago | blog.cloudflare.com
mixedbit2 months ago

This is an architectural problem. The Lua bug, the longer global outage last week, and a long list of earlier such outages only uncover the problem with the architecture underneath. The original distributed, decentralized web architecture, with heterogeneous endpoints managed by a myriad of organisations, is much more resistant to this kind of global outage. Homogeneous systems like Cloudflare will continue to cause global outages. Rust won't help, people will always make mistakes, also in Rust. A robust architecture addresses this by not allowing a single mistake to bring down a myriad of unrelated services at once.

tobyjsullivan2 months ago

I’m not sure I share this sentiment.

First, let’s set aside the separate question of whether monopolies are bad. They are not good but that’s not the issue here.

As to architecture:

Cloudflare has had some outages recently. However, what’s their uptime over the longer term? If an individual site took on the infra challenges themselves, would they achieve better? I don’t think so.

But there’s a more interesting argument in favour of the status quo.

Assuming Cloudflare’s uptime is above average, outages affecting everything at once are actually better for the average internet user.

It might not be intuitive but think about it.

How many Internet services does someone depend on to accomplish something such as their work over a given hour? Maybe 10 directly, and another 100 indirectly? (Make up your own answer, but it’s probably quite a few).

If everything goes offline for one hour per year at the same time, then a person is blocked and unproductive for an hour per year.

On the other hand, if each service experiences the same hour per year of downtime but at different times, then the person is likely to be blocked for closer to 100 hours per year.
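
To make the arithmetic concrete, here's a rough back-of-the-envelope Python simulation (hypothetical numbers: 100 services, each down one hour per year, and a task that needs all of them):

    import random

    SERVICES = 100        # services a task depends on, directly and indirectly
    HOURS_PER_YEAR = 8760
    DOWN_HOURS_EACH = 1   # each service is down one hour per year

    def blocked_hours(correlated):
        """Hours per year in which at least one needed service is down."""
        if correlated:
            # Every service shares the same outage window.
            return DOWN_HOURS_EACH
        # Independent outages: each service picks its own random down hour.
        down_hours = {random.randrange(HOURS_PER_YEAR) for _ in range(SERVICES)}
        return len(down_hours)

    random.seed(0)
    print("correlated:", blocked_hours(True))    # 1 hour blocked
    print("independent:", blocked_hours(False))  # ~100 hours blocked, minus collisions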

It’s not really a bad end-user experience that every service uses Cloudflare. It’s more so a question of why Cloudflare’s stability seems to be going downhill.

And that’s a fair question. Because if their reliability is below average, then the value prop evaporates.

ccakes2 months ago

> If an individual site took on the infra challenges themselves, would they achieve better? I don’t think so.

The point is that it doesn’t matter. A single site going down has a very small chance of impacting a large number of users. Cloudflare going down breaks an appreciable portion of the internet.

If Jim’s Big Blog only maintains 95% uptime, most people won’t care. If BofA were at 95%.. actually same. Most of the world aren’t BofA customers.

If Cloudflare is at 99.95% then the world suffers

esrauch2 months ago

I'm not sure I follow the argument. If literally every individual site had an uncorrelated 99% uptime, that's still less available than a centralized 99.9% uptime. The "entire Internet" is much less available in the former setup.

It's like saying that Chipotle having X% chance of tainted food is worse than local burrito places having 2*X% chance of tainted food. It's true in the lens that each individual event affects more people, but if you removed that Chipotle and replaced with all local, the total amount of illness is still strictly higher, it's just tons of small events that are harder to write news articles about.
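
The aggregate version of that, with made-up availability figures (99% per independent site vs one shared 99.9% layer):

    sites = 1000
    hours_per_year = 8760

    # Decentralized: each site independently manages 99.0% uptime.
    decentralized_down = sites * hours_per_year * 0.01   # 87,600 site-hours down per year

    # Centralized: every site shares one 99.9%-uptime layer (ignoring origin failures).
    centralized_down = sites * hours_per_year * 0.001    # 8,760 site-hours down per year

    print(decentralized_down, centralized_down)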

+1
psychoslave2 months ago
Akronymus2 months ago

Also what about individual sites having 99% uptime while behind CF with an uncorrelated uptime of 99.9%?

Just because CF is up doesn't mean the site is.
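
If the two failure modes really are uncorrelated, the availabilities multiply, so the fronted site is strictly worse than either number on its own (illustrative figures):

    origin_uptime = 0.99   # the site itself
    proxy_uptime = 0.999   # CF in front, assumed uncorrelated

    print(round(origin_uptime * proxy_uptime, 4))  # 0.989, i.e. below both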

johncolanduoni2 months ago

Look at it as a user (or even operator) of one individual service that isn’t redundant or safety-critical: if choice A has 1/2 the downtime of choice B, you can’t justify choosing choice B by virtue of choice A’s instability.

moqmar2 months ago

That is exactly why you don't see Windows being used anymore in big corporations. /s

shermantanktop2 months ago

Maybe the world can just live without the internet for a few hours.

There are likely emergency services dependent on Cloudflare at this point, so I’m only semi-serious.

+1
p-e-w2 months ago
locknitpicker2 months ago

> Maybe the world can just live without the internet for a few hours.

The world can also live a few hours without sewers, water supply, food, cars, air travel, etc.

But "can" and "should" are different words.

raincole2 months ago

> A single site going down has a very small chance of impacting a large number of users

How? If Github is down how many people are affected? Google?

> Jim’s Big Blog only maintains 95% uptime, most people won’t care

Yeah, and in the world with Cloudflare people don't care if Jim's Blog is down either. So Cloudflare doesn't make things worse.

dns_snek2 months ago

Terrible examples. GitHub and Google aren't just websites that one would place behind Cloudflare to try to improve their uptime (by caching, reducing load on the origin server, shielding from DDoS attacks). They're their own big tech companies running complex services at a scale comparable to Cloudflare's.

chii2 months ago

> If Cloudflare is at 99.95% then the world suffers

If the world suffers, those doing the "suffering" need to push that complaint/cost back up the chain - to the website operator, which would push the complaint/cost up to Cloudflare.

The fact that nobody did - or just verbally complained without action - is evidence that they didn't really suffer.

In the meantime, BofA saved the cost of making their site 99.95% uptime themselves (presumably Cloudflare does it cheaper than they could individually). So the entire system became more efficient as a result.

yfw2 months ago

They didn't really suffer, or they don't have a choice?

locknitpicker2 months ago

> The fact that nobody did - or just verbally complained without action - is evidence that they didn't really suffer.

What an utterly clueless claim. You're literally posting in a thread with nearly 500 posts of people complaining. Taking action takes time. A business just doesn't switch cloud providers overnight.

I can tell you in no uncertain terms that there are businesses impacted by Cloudflare's frequent outages that started work shedding their dependency on Cloudflare's services. And it's not just because of these outages.

hectormalot2 months ago

> On the other hand, if each service experiences the same hour per year of downtime but at different times, then the person is likely to be blocked for closer to 100 hours per year.

I think the parent post made a different argument:

- Centralizing most of the dependency on Cloudflare results in a major outage when something happens at Cloudflare; it is fragile because Cloudflare becomes a single point of failure. Like: Oh, Cloudflare is down... oh, none of my SaaS services work anymore.

- In a world where this is not the case, we might see more outages, but they would be smaller and more contained. Like: oh, Figma is down? Fine, let me pick up another task and come back to Figma once it's back up. It's also easier to work around by having alternative providers as a fallback, as they are less likely to share the same failure point.

As a result, I don't think you'll be blocked 100 hours a year in scenario 2. You may observe 100 non-blocking inconveniences per year, vs a completely blocking Cloudflare outage.

And in observed uptime, I'm not even sure these providers ever won. We're running all our auxiliary services on a decent Hetzner box with a LB. Say what you want, but that uptime is looking pretty good compared to any services relying on AWS (Oct 20, 15 hours), Cloudflare (Dec 5 (half hour), Nov 18 (3 hours)). Easier to reason about as well. Our clients are much more forgiving when we go down due to Azure/GCP/AWS/Cloudflare vs our own setup though...

dfex2 months ago

> If everything goes offline for one hour per year at the same time, then a person is blocked and unproductive for an hour per year.
> On the other hand, if each service experiences the same hour per year of downtime but at different times, then the person is likely to be blocked for closer to 100 hours per year.

Putting Cloudflare in front of a site doesn't mean that site's backend suddenly never goes down. Availability will now be worse - you'll have Cloudflare outages* affecting all the sites they proxy for, along with individual site back-end failures which will of course still happen.

* which are still pretty rare

randmeerkat2 months ago

> If an individual site took on the infra challenges themselves, would they achieve better? I don’t think so.

I’m tired of this sentiment. Imagine if people said, why develop your own cloud offering? Can you really do better than VMWare..?

Innovation in technology has only happened because people dared to do better, rather than giving up before they started…

fallous2 months ago

"My architecture depends upon a single point of failure" is a great way to get laughed out of a design meeting. Outsourcing that single point of failure doesn't cure my design of that flaw, especially when that architecture's intended use-case is to provide redundancy and fault-tolerance.

The problem with pursuing efficiency as the primary value prop is that you will necessarily end up with a brittle result.

locknitpicker2 months ago

> "My architecture depends upon a single point of failure" is a great way to get laughed out of a design meeting.

This is a simplistic opinion. Claiming services like Cloudflare are modeled as single points of failure is like complaining that your use of electricity to power servers is a single point of failure. Cloudflare sells a global network of highly reliable edge servers running services like caching, firewall, image processing, etc. And, more importantly, it acts as a global firewall that protects services against global distributed attacks. Until a couple of months ago, it was unthinkable to casual observers that Cloudflare was such an utterly unreliable mess.

+1
fallous2 months ago
+1
kortilla2 months ago
kjgkjhfkjf2 months ago

That's an interesting point, but in many (most?) cases productivity doesn't depend on all services being available at the same time. If one service goes down, you can usually be productive by using an alternative (e.g. if HN is down you go to Reddit, if email isn't working you catch up with Slack).

sema4hacker2 months ago

If HN, Reddit, email, Slack and everything else is down for a day, I think my productivity would actually go up, not down.

zqna2 months ago

During 1st Cloudflare outage StackOverflow was down too.

tobyjsullivan2 months ago

Many (I’d speculate most) workflows involve moving and referencing data across multiple applications. For example, read from a spreadsheet while writing a notion page, then send a link in Slack. If any one app is down, the task is blocked.

Software development is a rare exception to this. We’re often writing from scratch (same with designers, and some other creatives). But these are definitely the exception compared to the broader workforce.

Same concept applies for any app that’s built on top of multiple third-party vendors (increasingly common for critical dependencies of SaaS)

geysersam2 months ago

On the other hand, if one site is down you might have alternatives. Or, you can do something different until the site you needed is up again. Your argument that simultaneous downtime is more efficient than uncoordinated downtime because tasks usually rely on multiple sites being online simultaneously is an interesting one. Whether or not that's true is an empirical question, but I lean toward thinking it's not true. Things failing simultaneously tends to have worse consequences.

nialse2 months ago

Paraphrasing: We are setting aside the actual issue and looking for a different angle.

To me this reads as a form of misdirection, intentional or not. A monopolist has little reason to care about downstream effects, since customers have nowhere else to turn. Framing this as roll your own versus Cloudflare rather than as a monoculture CDN environment versus a diverse CDN ecosystem feels off.

That said, the core problem is not the monopoly itself but its enablers: the collective impulse to align with whatever the group is already doing, the desire to belong and to appear to act the "right way", meaning the way everyone else behaves. There are a gazillion ways of doing CDNs; why are we not using them? Why the focus on one single dominant player?

citizen-stig2 months ago

> Why the focus on one single dominant player?

I don’t know the answer to all the questions. But here I think it is just a way to avoid responsibility. If someone chooses CDN “number 3” and it goes down, business people *might* put the blame on this person for not choosing “the best”. I am not saying it is the right approach; I have just seen it happen too many times.

nialse2 months ago

True. Nobody ever got fired for choosing IBM/Microsoft/Oracle/Cisco/etc. Likely an effect of stakeholder (executives/MBAs) brand recognition.

wat100002 months ago

That’s fine if it’s just some random office workers. What if every airline goes down at the same time because they all rely on the same backend providers? What if every power generator shuts off? “Everything goes down simultaneously” is not, in general, something to aim for.

tazjin2 months ago

That is literally how a large fraction of airlines work. It's called Amadeus, and it did have a big global outage not too long ago.

wat100002 months ago

Which should be a good example of why this should be avoided.

atmosx2 months ago

Cloudflare doesn’t have a good track record. It’s the third party that has caused more outages for us than any other third-party service in the last four years.

Nextgrid2 months ago

> If an individual site took on the infra challenges themselves, would they achieve better? I don’t think so.

I disagree; most people need only a subset of Cloudflare's features. Operating just that subset avoids the risk of the other moving parts (that you don't need anyway) ruining your day.

Cloudflare is also a business and has its own priorities like releasing new features; this is detrimental to you because you won't benefit from said feature if you don't need it, yet still incur the risk of the deployment going wrong like we saw today. Operating your own stack would minimize such changes and allow you to schedule them to a maintenance window to limit the impact should it go wrong.

The only feature Cloudflare (or its competitors) offers that can't be done cost-effectively yourself is volumetric DDoS protection where an attacker just fills your pipe with junk traffic - there's no way out of this beyond just having a bigger pipe, which isn't reasonable for any business short of an ISP or infrastructure provider.

Arainach2 months ago

>The only feature Cloudflare (or its competitors) offers that can't be done cost-effectively yourself is volumetric DDoS protection

.... And thanks to AI everyone needs that all the time now since putting a site on the Internet means an eternal DDoS attack.

smsm422 months ago

> If an individual site took on the infra challenges themselves, would they achieve better? I don’t think so.

That's the wrong way of looking at it, though. For 99.99% of individual sites, I wouldn't care if they were down for weeks. Even if I use this site, there are very few sites that I need to use daily. For the rest of them, if one of them randomly goes down I would probably never know or notice, because I didn't need it then. However, when a single-point-of-failure provider like Cloudflare goes down, you bet I notice. I must notice, because my work will be affected, my CI/CD pipelines will start failing, my newsfeeds will stop; I will notice it in dozens of places - because everybody uses it. The aggregated failures per unit of time may be fewer, but the impact of each failure is way, way bigger, and the probability of it impacting me approaches certainty.

So for me, as an average internet user, it would be much better if the whole world didn't go down at once, even if instances of particular things going down were more frequent - provided they are randomly distributed in time and not concentrated. If just one thing goes down, I can do another thing. If everything goes down, I can only sit and twiddle my thumbs until it's back up.

sunrunner2 months ago

> If everything goes offline for one hour per year at the same time, then a person is blocked and unproductive for an hour per year.

This doesn’t guarantee availability of those N services themselves though, surely? N services with a slightly lower availability target than N+1 with a slightly higher value?

More importantly, I’d say that this only works for non-critical infrastructure, and also assumes that the cost of bringing that same infrastructure back is constant or at least linear or less.

The 2025 Iberian Peninsula outage seems to show that’s not always the case.

clickety_clack2 months ago

If you’re using 10 services and 1 goes down, there’s a 9/10 chance you’re not using it and you can switch to work on something else. If all 10 go down you are actually blocked for an hour. Even 5 years ago, I can’t recall ever being actually impacted by an outage to the extent that I was like “well, might as well just go get something to eat because everything is down”.

lxgr2 months ago

> If everything goes offline for one hour per year at the same time, then a person is blocked and unproductive for an hour per year.

The consequence of some services being offline is much, much worse than a person (or a billion) being bored in front of a screen.

Sure, it’s arguably not Cloudflare’s fault that these services are cloud-dependent in the first place, but even if service just degrades somewhat gracefully in the ideal case, that’s a lot of global clustering of a lot of exceptional system behavior.

Or another analogy: every person probably passes out for a few minutes at one point or another in their life. Yet I wouldn’t want to imagine what would happen if everybody got that over with at the very same time, without warning…

tonyhb2 months ago

Cloudbleed. It’s been a fun time.

embedding-shape2 months ago

> Cloudflare has had some outages recently. However, what’s their uptime over the longer term? If an individual site took on the infra challenges themselves, would they achieve better? I don’t think so.

Why is that the only option? Cloudflare could offer solutions that let people run their software themselves, after paying some license fee. Or there could be many companies people use instead, instead of everyone flocking to one because of cargoculting "You need a CDN like Cloudflare before you launch your startup bro".

Moto74512 months ago

What you’re suggesting is not trivial. Otherwise we wouldn’t use various CDNs. To do what Cloudflare does your starting point is “be multiple region/multiple cloud from launch” which is non-trivial especially when you’re finding product-market fit. A better poor man’s CDN is object storage through your cloud of choice serving HTTP traffic. Cloudflare also offers layers of security and other creature comforts. Ignoring the extras they offer, if you build what they offer you have effectively made a startup within a startup.

Cloudflare isn’t the only game in town either. Akamai, Google, AWS, etc all have good solutions. I’ve used all of these at jobs I’ve worked at and the only poor choice has been to not use one at all.

tobyjsullivan2 months ago

What do you think Cloudflare’s core business is? Because I think it’s two things:

1. DDoS protection

2. Plug n’ Play DNS and TLS (termination)

Neither of those make sense for self-hosted.

Edit: If it’s unclear, #2 doesn’t make sense because if you self-host, it’s no longer plug n’ play. The existing alternatives already serve that case equally well (even better!).

stingraycharles2 months ago

Cloudflare Zero-Trust is also very core to their enterprise business.

chamomeal2 months ago

When I’m working from home and the internet goes down, I don’t care. My poor private-equity owned corporation, think of the lost productivity!!

But if I was trying to buy insulin at 11 pm before benefits expire, or translate something at a busy train station in a foreign country, or submit my take-home exam, I would be freeeaaaking out.

The cloudflare-supported internet does a whole lot of important, time-critical stuff.

gerdesj2 months ago

All of my company's hosted web sites have way better uptimes and availability than CF but we are utterly tiny in comparison.

With only some mild blushing, you could describe us as "artisanal" compared to the industrial monstrosities, such as Cloudflare.

Time and time again we get these sorts of issues with the massive cloudy chonks and they are largely due to the sort of tribalism that used to be enshrined in the phrase: "no one ever got fired for buying IBM".

We see the dash to the cloud and the shoddy state of in-house corporate IT as a result. "We don't need in-house knowledge, we have 'MS Copilot 365 office thing' that looks after itself and now it's intelligent - yay \o/"

Until I can't, I'm keeping it as artisanal as I can for me and my customers.

foobarkey2 months ago

Sorry for the downvotes, but this is true: many times, with some basic HA, you get better uptime than the big cloud boys. Yes, their stack and tech is fancier, but we also need to factor in how much CF messes with it vs self-hosted. Anyway, the self-hosted wisdom is RIP these days and I mostly just run CF Pages / KV :)

gerdesj2 months ago

That sounds a bit complicated and rather weird.

I still have my webby assets up with way longer uptimes than the biggies.

I don't need internet points. A satisfied PO is what I want to see as a business owner.

WD-422 months ago

In other words, the consolidation on Cloudflare and AWS makes the web less stable. I agree.

amazingman2 months ago

Usually I am allergic to pithy, vaguely dogmatic summaries like this but you're right. We have traded "some sites are down some of the time" for "most sites are down some of the time". Sure the "some" is eliding an order of magnitude or two, but this framing remains directionally correct.

PullJosh2 months ago

Does relying on larger players result in better overall uptime for smaller players? AWS is providing me better uptime than if I assembled something myself because I am less resourced and less talented than that massive team.

If so, is it a good or bad trade to have more overall uptime but when things go down it all goes down together?

+1
VorpalWay2 months ago
+1
Aeolun2 months ago
Lamprey2 months ago

When only one thing goes down, it's easier to compensate with something else, even for people who are doing critical work but who can't fix IT problems themselves. It means there are ways the non-technical workforce can figure out to keep working, even if the organization doesn't have on-site IT.

Also, if you need to switchover to backup systems for everything at once, then either the backup has to be the same for everything and very easily implementable remotely - which to me seems unlikely for specialty systems, like hospital systems, or for the old tech that so many organizations still rely on (and remember the CrowdStrike BSODs that had to be fixed individually and in person and so took forever to fix?) - or you're gonna need a LOT of well-trained IT people, paid to be on standby constantly, if you want to fix the problems quickly, on account of they can't be everywhere at once.

If the problems are more spread out over time, then you don't need to have quite so many IT people constantly on standby. Saves a lot of $$$, I'd think.

And if problems are smaller and more spread out over time, then an organization can learn how to deal with them regularly, as opposed to potentially beginning to feel and behave as though the problem will never actually happen. And if they DO fuck up their preparedness/response, the consequences are likely less severe.

UltraSane2 months ago

AWS and Cloudflare can recover from outages faster because they can bring dozens (hundreds?) of people to help, often the ones who wrote the software and designed the architecture. Outages at smaller companies I've worked for have often lasted multiple days, up to an exchange server outage that lasted 2 weeks.

3rodents2 months ago

Would you rather be attacked by 1,000 wasps or 1 dog? A thousand paper cuts or one light stabbing? Global outages are bad but the choice isn’t global pain vs local pleasure. Local and global both bring pain, with different, complicated tradeoffs.

Cloudflare is down and hundreds of well paid engineers spring into action to resolve the issue. Your server goes down and you can’t get ahold of your Server Person because they’re at a cabin deep in the woods.

Lamprey2 months ago

It's not "1,000 wasps or 1 dog", it's "1,000 dogs at once" or "1 dog at a time, 1,000 different times". A rare but huge and coordinated siege, or a steady and predictable background radiation of small issues.

The latter is easier to handle, easier to fix, and much more survivable if you do fuck it up a bit. It gives you some leeway to learn from mistakes.

If you make a mistake during the 1000 dog siege, or if you don't have enough guards on standby and ready to go just in case of this rare event, you're just cooked.

philipallstar2 months ago

I don't quite see how this maps onto the situation. The "1000 dog siege" also was resolved very quickly and transparently, so I would say it's actually better than even one of the "1 dog at once"s.

+1
rolisz2 months ago
psunavy032 months ago

If you've allowed your Server Person to be a single point of failure out innawoods, that's an organizational problem, not a technological one.

Two is one and one is none.

gblargg2 months ago

Why would there be a centralized outage of decentralized services? The proper comparison seems to be attacked by a dog or a single wasp.

jchw2 months ago

In most cases we actually get both local and global pain, since most people are running servers behind Cloudflare.

delusional2 months ago

What you've identified here is a core part of what the banking sector calls the "risk based approach". Risk in that case is defined as the product of the chance of something happening and the impact of it happening. With this understanding we can make the same argument you're making, a little more clearly.

Cloudflare is really good at what they do, they employ good engineering talent, and they understand the problem. That lowers the chance of anything bad happening. On the other hand, they achieve that by unifying the infrastructure for a large part of the internet, raising the impact.

The website operator herself might be worse at implementing and maintaining the system, which would raise the chance of an outage. Conversely, it would also only affect her website, lowering the impact.

I don't think there's anything to dispute in that description. The discussion then is whether Cloudflare's good engineering lowers the chance of an outage happening more than it raises the impact. In other words, the thing we can disagree about is the scaling factors; the core of the argument seems reasonable to me.
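
A minimal sketch of that framing with made-up numbers, just to show where the disagreement over scaling factors lives:

    def expected_impact(outages_per_year, users_affected, hours_per_outage):
        """Risk as chance times impact: expected user-hours of downtime per year."""
        return outages_per_year * users_affected * hours_per_outage

    # Purely illustrative figures, not real measurements.
    shared_provider = expected_impact(2, 50_000_000, 0.5)   # rare, huge blast radius
    self_hosted_site = expected_impact(12, 50_000, 2.0)     # frequent, tiny blast radius

    print(shared_provider, self_hosted_site)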

JumpCrisscross2 months ago

> Homogeneous systems like Cloudflare will continue to cause global outages

But the distributed system is vulnerable to DDOS.

Is there an architecture that maintains the advantages of both systems? (Distributed resilience with a high-volume failsafe.)

NicoJuicy2 months ago

You should really check Cloudflare.

There is not a single company that makes their infrastructure as globally available as Cloudflare does.

Additionally, Cloudflare's downtime seems to be objectively lower than that of the others.

Now, it took 25 minutes for 28% of the network.

While being the only ones to fix a global vulnerability.

There is a reason other clouds wouldn't touch the responsiveness and innovation that Cloudflare brings.

ivanjermakov2 months ago

Robust architecture that is serving 80M requests/second worldwide?

My answer would be that no one product should get this big.

rekrsiv2 months ago

On the other hand, as long as the entire internet goes down when Cloudflare goes down, I'll be able to host everything there without ever getting flack from anyone.

johncolanduoni2 months ago

Actually, maybe 1 hour downtime for ~ the whole internet every month is a public good provided by Cloudflare. For everyone that doesn’t get paged, that is.

terminalshort2 months ago

It's not as simple as that. What will result in more downtime, dependency on a single centralized service or not being behind Cloudflare? Clearly it's the latter or companies wouldn't be behind Cloudflare. Sure, the outages are more widespread now than they used to be, but for any given service the total downtime is typically much lower than before centralization towards major cloud providers and CDNs.

Klonoar2 months ago

> Rust won't help, people will always make mistakes, also in Rust.

They don't just use Rust for "protection", they use it first and foremost for performance. They have ballpark-to-matching C++ performance with a realistic ability to avoid a myriad of bugs by default. This isn't new.

You're playing armchair quarterback with nothing to really offer.

UltraSane2 months ago

They badly need smaller blast radius and to use more chaos engineering tools.

cbsmith2 months ago

I find this sentiment amusing when I consider the vast outages of the "good ol' days".

What's changed is a) our second-by-second dependency on the Internet and b) news/coverage.

theoldgreybeard2 months ago

Notwithstanding that most people using Cloudflare aren't even benefiting from what it actually provides. They just use it...because reasons.

lxgr2 months ago

Not too long ago, critical avionics were programmed by different software developers and the software was run on different hardware architectures, produced by different manufacturers. These heterogeneous systems produced combined control outputs via a quorum architecture – all in a single airplane.

Now half of the global economy runs on the same service provider, it seems…

psychoslave2 months ago

That's a reflection of social organisation. Pushing for hierarchical organisation with a few key centralising nodes will also impact business and technological decisions.

See also https://en.wikipedia.org/wiki/Conway%27s_law

steelblueskies2 months ago

Reductionist, but it's a backup problem.

Data matters? Have multiple copies, not all in the same place.

This is really no different, yet we don't have those redundancies in play.

Host, and paths.

Every other take is ultimately just shuffling justifications around a lack of backups that was accepted, for cost savings, as the least bad option for everyone.

m00dy2 months ago

Obviously Rust is the answer to this kind of problem. But if you are Cloudflare and have an important company at a global scale, you need to set high standards for your Rust code. Developers should dance and celebrate at the end of the day if their code compiles in Rust.

chickensong2 months ago

You're not wrong, but where's the robust architecture you're referring to? The reality of providing reliable services on the internet is far beyond the capabilities of most organizations.

coderjames2 months ago

I think it might be the organizational architecture that needs to change.

> However, we have never before applied a killswitch to a rule with an action of “execute”.

> This is a straightforward error in the code, which had existed undetected for many years

So they shipped an untested configuration change that triggered untested code straight to production. This is "tell me you have no tests without telling me you have no tests" level of facepalm. I work on safety-critical software where if we had this type of quality escape both internal auditors and external regulators would be breathing down our necks wondering how our engineering process failed and let this through. They need to rearchitect their org to put greater emphasis on verification and software quality assurance.

jonhess2 months ago

Yeah, redundancy and efficiency are opposites. As engineers, we always chase efficiency, but resilience and redundancy are related.

rossjudson2 months ago

You have a heterogeneous, fault-free architecture for the Cloudflare problem set? Interesting! Tell us more.

cyanydeez2 months ago

Bro, but how do we make shareholder value if we don't monopolize and enshittify everything

w10-12 months ago

Kudos to Cloudflare for clarity and diligence.

When talking of their earlier Lua code:

> we have never before applied a killswitch to a rule with an action of “execute”.

I was surprised that a rules-based system was not tested completely, perhaps because the Lua code is legacy relative to the newer Rust implementation?

It tracks with what I've seen elsewhere: quality engineering can't keep up with the production engineering. It's just that I think of Cloudflare as an infrastructure place, where that shouldn't be true.

I had a manager who came from defense electronics in the 1980's. He said in that context, the quality engineering team was always in charge, and always more skilled. For him, software is backwards.

zwnow2 months ago

"Kudos"? This is like the South Park episode in which the oil company guy just excuses himself while the company just continues to fuck up over and over again. There's nothing to praise, this shouldn't happen twice in a month. Its inexcusable.

vpShane2 months ago

twice in a month _so far_

hinkley2 months ago

We still have two holidays and associated vacations and vacation brain to go. And then the January hangover.

Every company that has ignored my following advice has experienced a day-for-day slip in first-quarter scheduling. And that advice is: not much work gets done between Dec 15 and Jan 15. You can rely on a week’s worth; more than that is optimistic. People are taking it easy, and they need to verify things with someone who is on vacation, so they are blocked. And when that person gets back, it’s two days until their own vacation, so it’s a crap shoot.

NB: there’s work happening on Jan 10, for certain, but it’s not getting finished until the 15th. People are often still cleaning up after bad decisions they made during the holidays and the subsequent hangover.

Bengalilol2 months ago

Those AI agents are coding fast, or am I missing some obvious concept here?

kordlessagain2 months ago

reaching for that _one 9 of uptime_

ifwinterco2 months ago

It's weird reading these reports because they don't seem to test anything at all (or at least there's very little mention of testing).

Canary deployment, testing environments, unit tests, integration tests, anything really?

It sounds like they test by merging directly to production but surely they don't

chippiewill2 months ago

The problem is that Cloudflare do incremental rollouts and loads of testing for _code_. But they don't do the same thing for configuration - they globally push out changes because they want rapid response.

It's still a bit silly though; their claimed reasoning probably doesn't really stack up for most of their config changes. I don't see it as that likely that a 0.1% -> 1% -> 10% -> 100% rollout over a period of 10 minutes would be a catastrophically bad idea for them for _most_ changes.

And to their credit, it does seem they want to change that.
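
To make that concrete, a rough sketch of what a staged config rollout with an automated health gate could look like (hypothetical helper functions and thresholds, nothing Cloudflare-specific):

    import time

    STAGES = [0.001, 0.01, 0.10, 1.00]  # 0.1% -> 1% -> 10% -> 100% of the fleet
    BAKE_SECONDS = 150                  # roughly 10 minutes across the four stages
    ERROR_BUDGET = 0.001                # abort if the 5xx ratio exceeds 0.1%

    def staged_rollout(change, push_to_fraction, current_error_rate, revert):
        """Push `change` stage by stage, reverting if the error rate spikes."""
        for fraction in STAGES:
            push_to_fraction(change, fraction)
            time.sleep(BAKE_SECONDS)
            if current_error_rate() > ERROR_BUDGET:
                revert(change)
                raise RuntimeError(f"rolled back at {fraction:.1%}: error budget exceeded")
        return "fully rolled out"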

ifwinterco2 months ago

Yeah to me it doesn't make any sense - configuration changes are just as likely to break stuff (as they've discovered the hard way) and both of these issues could have been found in a testing environment before being deployed to production

Dumbledumb2 months ago

In the post they described that they observed errors happening in their testing env, but decided to ignore them because they were rolling out a security fix. I am sure there is more nuance to this, but I don’t know whether that makes it better or worse.

misswaterfairy2 months ago

> but decided to ignore because they were rolling out a security fix.

A key part of secure systems is availability...

It really looks like vibe-coding.

braiamp2 months ago

This is funny, considering that someone who worked in the defense industry (on a guided missile system) found a memory leak in one of their products at the time. They told him that they knew about it, but that it was timed just right with the range the system would be used at, so it didn't matter.

Etheryte2 months ago

This paraphrased urban legend has nothing to do with quality engineering though? As described, it's designed to the spec and working as intended.

mikkupikku2 months ago

It tracks with my experience in software quality engineering. Asked to find problems with something already working well in the field. Dutifully find bugs/etc. Get told that it's working though so nobody will change anything. In dysfunctional companies, which is probably most of them, quality engineering exists to cover asses, not to actually guide development.

+2
colechristensen2 months ago
sally_glance2 months ago

Having observed an average of two management rotations at most of the clients our company is working for, this comes as absolutely no surprise to me. Engineering is acting perfectly reasonably, optimizing for cost and time within the constraints they were given. Constraints are updated at a (marketing or investor pleasure) whim without consulting engineering; cue disaster. Not even surprising to me anymore...

mopsi2 months ago

... until the extended-range version is ordered and no one remembers to fix the leak. :]

hinkley2 months ago

Ariane 5 happens.

wizzwizz42 months ago

They will remember, because it'll have been measured and documented, rigorously.

+5
SketchySeaBeast2 months ago
hinkley2 months ago

Was this one measured and documented rigorously?

Well obviously not, because the front fell off. That’s a dead giveaway.

runlaszlorun2 months ago

My hunch is that we do the same with memory leaks or other bugs in web applications where the time of a request is short.

cpncrunch2 months ago

I've noticed that in recent months, even apart from these outages, cloudflare has been contributing to a general degradation and shittification of the internet. I'm seeing a lot more "prove you're human", "checking to make sure you're human", and there is normally at the very least a delay of a few seconds before the site loads.

I don't think this is really helping the site owners. I suspect it's mainly about AI extortion:

https://blog.cloudflare.com/introducing-pay-per-crawl/

james2doyle2 months ago

You call it extortion of the AI companies, but isn’t stealing/crawling/hammering a site to scrape their content to resell just as nefarious? I would say Cloudflare is giving these site owners an option to protect their content and as a byproduct, reduce their own costs of subsidizing their thieves. They can choose to turn off the crawl protection. If they aren't, that tells you that they want it, doesn’t it?

cpncrunch2 months ago

>You call it extortion of the AI companies, but isn’t stealing/crawling/hammering a site to scrape their content to resell just as nefarious?

You can easily block ChatGPT and most other AI scrapers if you want:

https://habeasdata.neocities.org/ai-bots

james2doyle2 months ago

This is just using robots.txt and asking "pretty please, don’t scrape me".

Here is an article (from TODAY) about the case where Perplexity is being accused of ignoring robots.txt: https://www.theverge.com/news/839006/new-york-times-perplexi...

If you think a robots.txt is the answer to stopping the billion-dollar AI machine from scraping you, I don’t know what to say.

+1
Aeolun2 months ago
+1
cpncrunch2 months ago
jacobgkau2 months ago

I'm guessing you don't manage any production web servers?

robots.txt isn't even respected by all of the American companies. Chinese ones (which often also use what are essentially botnets in Latin America and the rest of the world to evade detection) certainly don't care about anything short of dropping their packets.

+2
cpncrunch2 months ago
dingnuts2 months ago

[dead]

mplewis2 months ago

No you cannot! I blocked all of the user agents on a community wiki I run, and the traffic came back hours later masquerading as Firefox and Chrome. They just fucking lie to you and continue vacuuming your CPU.

+1
cpncrunch2 months ago
Sohcahtoa822 months ago

How are you this naive? Do you really think scrapers give a damn about your robots.txt?

cpncrunch2 months ago

The legitimate ones do, which is what I was referring to. Obviously there are bastard ones as well.

chrneu2 months ago

this is the equivalent of asking people not to speed on your street.

literalAardvark2 months ago

Tell me you don't run a site without telling me you don't run a site

cpncrunch2 months ago

Tell me you make incorrect assumptions without specifically saying so. (Yes, you're incorrect).

bobbob19212 months ago

I’ve been seeing more of those prove-you’re-human pages as well, but I generally assume they are there to combat a DDoS or other type of attack (or maybe AI/bots). I remember how annoying it was combating DDoS attacks, or hacked sites, before Cloudflare existed. I also remember how annoying captchas were, everywhere. Cloudflare is not perfect, but on net I think it’s been a great improvement.

cpncrunch2 months ago

OVH provides DDoS protection without that nonsense, for free. AWS does it too, for a fee.

gblargg2 months ago

There are more and more sites I can't even visit because of this "prove you're human" check, because it's not compatible with older web browsers, even though the website it's blocking is.

_kidlike2 months ago

the two things are unrelated...

The pay-per-crawl thing is about them thinking ahead about post-AI business/revenue models.

The way AI happened, it removed a big chunk of revenue from news companies, blogs, etc. Because lots of people go to AI instead of reaching the actual 3rd party website.

AI currently gets the content for free from the 3rd party websites, but they have revenue from their users.

So Cloudflare is proposing that AI companies should be paying for their crawling. Cloudflare's solution would give the lost revenue back where it belongs, just through a different mechanism.

The ugly side of the story is that there was already an existing open-source solution for this, called L402 (L402.org).

Cloudflare wants to be the first to take a piece of the pie, but instead of using the open-source version, they forked it internally and published it as their own service, which is Cloudflare-specific.

To be completely fair, L402 requires you to solve the payment mechanism itself, which for Cloudflare is easy because they already deal with payments.

stef252 months ago

> I've noticed that in recent months, even apart from these outages, cloudflare has been contributing to a general degradation and shittification of the internet. I'm seeing a lot more "prove you're human", "checking to make sure you're human", and there is normally at the very least a delay of a few seconds before the site loads.

Good to know I'm not the only one

chamomeal2 months ago

Feel like that’s the fault of LLMs, not cloudflare

cpncrunch2 months ago

Looking into this more, it does indeed seem to be a cloudflare problem. It looks like cloudflare made a significant error in their bot fingerprinting, and Perplexity wasn't actually bypassing robots.txt.

https://www.perplexity.ai/hub/blog/agents-or-bots-making-sen...

To be honest I find cloudflare a much more scammy company than Perplexity. I had a DDoS attack a few years ago which originated from their network, and they had zero interest in it.

pmdr2 months ago

In my experience it's been in recent years, not months.

NooneAtAll32 months ago

it can't even spy on us silently, damn

jacobgkau2 months ago

I noticed this outage last night (Cloudflare 500s on a few unrelated websites). As usual, when I went to Cloudflare's status page, nothing about the outage was present; the only thing there was a notice about the pre-planned maintenance work they were doing for the security issue, reporting that everything was being routed around it successfully.

cnnlives2652 months ago

This is the case with just about every status page I’ve ever seen. It takes them a while to realize there’s really a problem and then to update the page. One day these things will be automated, but until then, I wouldn’t expect more of Cloudflare than any other provider.

What’s more concerning to me is that now we’ve had AWS, Azure, and Cloudflare (and Cloudflare twice) go down recently. My gut says:

1. developers and IT are using LLMs in some part of the process, which will not be 100% reliable.

2. Current culture of I have (some personal activity or problem) or we don’t have staff, AI will replace me, f-this.

3. Pandemic after effects.

4. Political climate / war / drugs; all are intermingled.

mikkupikku2 months ago

Management doesn't like when things like this are automated. They want to "manage" the outage/production/etc numbers before letting them out.

kbolino2 months ago

There's no sweet spot I've found. I don't work for Cloudflare but when I did have a status indicator to maintain, you could never please everyone. Users would complain when our system was up but a dependent system was down, saying that our status indicator was a lie. "Fixing" that by marking our system as down or degraded whenever a dependent system was down led to the status indicator being not green regularly, causing us to unfairly develop a reputation as unreliable (most broken dependencies had limited blast radius). The juice no longer seemed worth the squeeze and we gave up on automated status indicators.

+1
jacobgkau2 months ago
noname1202 months ago

> whenever a dependent system was down led to the status indicator being not green regularly, causing us to unfairly develop a reputation as unreliable (most broken dependencies had limited blast radius)

You are responsible for your dependencies, unless they are specific integrations. Either switch to more reliable dependencies or add redundancy so that you can switch between providers when any one of them is down.

naniwaduni2 months ago

The headline status doesn't have to be "worst of all systems". Pick a key indicator, and as long as it doesn't look like it's all green regardless of whether you're up or down, users will imagine that "green headline, red subsystems" means whatever they're observing, even if that makes the status display utterly uninterpretable from an outside perspective.

Yeri2 months ago

100% — will never be automated :)

hnuser1234562 months ago

Still room for someone to claim the niche of the Porsche horsepower method in outage reporting - underpromise, overdeliver.

TechniKris2 months ago

Thing is, these things are automated... Internally.

Which makes it feel that much more special when a service provides open access to all of the infrastructure diagnostics, like e.g. https://status.ppy.sh/

rezonant2 months ago

Nice! Didn't know you could make a Datadog dashboard public like that!

colechristensen2 months ago

>It takes them a while to realize there’s really a problem and then to update the page.

Not really, they're just lying. I mean, yes, of course they aren't oracles who discover complex problems in the instant of the first failure, but nah, they know full well when there are problems and significantly underreport them, to the extent that they are less "smoke alarms" and more "your house has burned down and the ashes are still smoldering" alarms. Incidents are intentionally underreported. It's bad enough that there ought to be legislation and civil penalties for large providers who fail to report known issues promptly.

sugerman2 months ago

Those are complex and tenuous explanations for events that have occurred since long before all of your reasons came into existence.

mrb2 months ago

Only way to change that is to shame them for it: "Cloudflare is so incompetent at detecting and managing outages that even their simple status page is unable to be accurate"

If enough high-ranked customers report this feedback...

matteocontrini2 months ago

The status page was updated 6 minutes after the first internal alert was triggered (08:50 -> 08:56:26 UTC); I wouldn't say that is too long.

Scaevolus2 months ago

> Disabling this was done using our global configuration system. This system does not use gradual rollouts but rather propagates changes within seconds to the entire network and is under review following the outage we recently experienced on November 18.

> As soon as the change propagated to our network, code execution in our FL1 proxy reached a bug in our rules module which led to the following LUA exception:

They really need to figure out a way to correlate global configuration changes to the errors they trigger as fast as possible.

> as part of this rollout, we identified an increase in errors in one of our internal tools which we use to test and improve new WAF rules

Warning signs like this are how you know that something might be wrong!

testplzignore2 months ago

> They really need to figure out a way to correlate global configuration changes to the errors they trigger as fast as possible.

This is what jumped out at me as the biggest problem. A wild west deployment process is a valid (but questionable) business decision, but if you do that then you need smart people in place to troubleshoot and make quick rollback decisions.

Their timeline:

> 08:47: Configuration change deployed and propagated to the network

> 08:48: Change fully propagated

> 08:50: Automated alerts

> 09:11: Configuration change reverted and propagation start

> 09:12: Revert fully propagated, all traffic restored

2 minutes for their automated alerts to fire is terrible. For a system that is expected to have no downtime, they should have been alerted to the spike in 500 errors within seconds before the changes even fully propagated. Ideally the rollback would have been automated, but even if it is manual, the dude pressing the deploy button should have had realtime metrics on a second display with his finger hovering over the rollback button.

Ok, so they want to take the approach of roll forward instead of immediate rollback. Again, that's a valid approach, but you need to be prepared. At 08:48, they would have had tens of millions of "init.lua:314: attempt to index field 'execute'" messages being logged per second. Exact line of code. Not a complex issue. They should have had engineers reading that code and piecing this together by 08:49. The change you just deployed was to disable an "execute" rule. Put two and two together. Initiate rollback by 08:50.

How disconnected are the teams that do deployments vs the teams that understand the code? How many minutes were they scratching their butts wondering "what is init.lua"? Are they deploying while their best engineers are sleeping?
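
For what it's worth, the guard described above doesn't have to be fancy. A sketch with hypothetical helpers and illustrative thresholds only:

    import time

    BASELINE_5XX = 0.001   # assumed normal 5xx ratio
    SPIKE_FACTOR = 10      # treat a 10x jump right after a push as deployment-related
    CHECK_EVERY = 5        # seconds between checks
    WATCH_WINDOW = 120     # watch for two minutes after propagation

    def watch_and_revert(change_id, current_5xx_ratio, revert):
        """Poll the global 5xx ratio after a config push and auto-revert on a spike."""
        deadline = time.time() + WATCH_WINDOW
        while time.time() < deadline:
            if current_5xx_ratio() > BASELINE_5XX * SPIKE_FACTOR:
                revert(change_id)
                return "reverted"
            time.sleep(CHECK_EVERY)
        return "healthy"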

bostik2 months ago

> 2 minutes for their automated alerts to fire is terrible

I take exception to that, to be honest. It's not desirable or ideal, but calling it "terrible" is a bit ... well, sorry to use the word ... entitled. For context, I have experience running a betting exchange. A system where it's common for a notable fraction of transactions in a medium-volume event to take place within a window of less than 30 seconds.

Vast majority of current monitoring systems are built on Prometheus. (Well okay, these days it's more likely something Prom-compatible but more reliable.) That implies collection via recurring scrapes. A supposedly "high" frequency online service monitoring system does a scrape every 30 seconds. Well known reliability engineering practices state that you need a minimum of two consecutive telemetry points to detect any given event - because we're talking about a distributed system and network is not a reliable transport. That in turn means that with near-perfect reliability the maximum time window before you can detect something failing is the time it takes to perform three scrapes: thing A might have failed a second after the last scrape, so two consecutive failures will show up only after a delay of just-a-hair-shy-of-three scraping cycle windows.

At Cloudflare's scale, I would not be surprised if they require three consecutive events to trigger an alert.

As for my history? The betting exchange monitoring was tuned to run scrapes at 10-second intervals. That still meant that the first time an alert fired for something failing could have been effectively 30 seconds after the failure manifested.

Two minutes for something that does not run primarily financial transactions is a pretty decent alerting window.
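
The arithmetic, spelled out (scrape interval and consecutive-sample requirement are the two knobs; numbers are illustrative):

    def worst_case_detection_seconds(scrape_interval, consecutive_samples):
        """The failure can land just after a scrape, so you wait almost one extra
        interval, then need `consecutive_samples` more scrapes before alerting."""
        return scrape_interval * (consecutive_samples + 1)

    print(worst_case_detection_seconds(30, 2))  # 90s: the "three scraping cycles" case
    print(worst_case_detection_seconds(10, 2))  # 30s: the betting-exchange setup
    print(worst_case_detection_seconds(30, 3))  # 120s: roughly a two-minute alert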

dotancohen2 months ago

Prometheus compatible but more reliable? Sell it to me!

+1
bostik2 months ago
yearolinuxdsktp2 months ago

Critical high-level stats such as errors should be scraped more frequently than every 30 seconds. It’s important to have scraping intervals at multiple time granularities; a small set of the most critical stats should be scraped closer to every 10s or 15s.

Prometheus has an unaddressed flaw [0], where rate() ranges must be at least 2x the scrape interval. This means that if you scrape at 30-second intervals, your rate charts won’t reflect a change until a minute after.

[0] - https://github.com/prometheus/prometheus/issues/3746

rossjudson2 months ago

"Scrape" intervals (and the plumbing through to analysis intervals) are chosen precisely because of the denoising function aggregation provides.

Most scaled analysis systems provide precise control over the type of aggregation used within the analyzed time slices. There are many possibilities, and different purposes for each.

High frequency events are often collected into distributions and the individual timestamps are thrown away.

parchley2 months ago

> At Cloudflare's scale, I would not be surprised if they require three consecutive events to trigger an alert.

Sorry but that’s a method you use if you serve 100 requests per second, not when you are at Cloudflare scale. Cloudflare easily have big enough volume that this problem would trigger an instant change in a monitorable failure rate.

rossjudson2 months ago

Let's say you have 10 million servers. No matter what you are deploying, some of those servers are going to be in a bad state. Some are going to be in hw ops/repair. Some are going to be mid-deployment for something else. A regional issue may be happening. You may have inconsistencies in network performance. Bad workloads somewhere/anywhere can be causing a constant level of error traffic.

At scale there's no such thing as "instant". There is distribution of progress over time.

The failure is an event. Collection of events takes times (at scale, going through store and forward layers). Your "monitorable failure rate" is over an interval. You must measure for that interval. And then you are going to emit another event.

Global config systems are a tradeoff. They're not inherently bad; they have both strengths and weaknesses. Really bad: non-zero possibility of system collapse. Bad: can progress very quickly towards global outages. Good: faults are detected quickly, response decision-making is easy, and mitigation is fast.

Hyperscale is not just "a very large number of small simple systems".

Denoising alerts is a fact of life for SRE...and is a survival skill.

morpheos1372 months ago

I see lots of people complaining about this down time but in actuality is it really that big a deal to have 30 minutes of down time or whatever. It's not like anything behind cloudflare is "mission critical" in the sense that lives are at stake or even a huge amount of money is at stake. In many developed countries the electric power service has local down times on occasion. That's more important than not being able to load a website. I agree if CF is offering a certain standard of reliability and not meeting it then they should offer prorated refunds for the unexpected down time but otherwise I am not seeing what the big deal is here.

ljm2 months ago

> It's not like anything behind cloudflare is "mission critical" in the sense that lives are at stake or even a huge amount of money is at stake.

This is far too dismissive of how disruptive the downtime can be and it sets the bar way too low for a company so deeply entangled in global internet infrastructure.

I don’t think you can make such an assertion with any degree of credibility.

odie55332 months ago

> It's not like anything behind cloudflare is "mission critical" in the sense that lives are at stake or even a huge amount of money is at stake.

Yes, there are lots of mission critical systems that use cloudflare and lives and huge amounts of money are at stake.

+1
morpheos1372 months ago
bombcar2 months ago

30 minutes of downtime is fine for most things, including Amazon.

30 minutes of unplanned downtime for infrastructure is unacceptable; but we’re tending to accept it. AWS or Cloudflare have positioned themselves as The Internet so they need to be held to a higher standard.

moritonal2 months ago

I am confident there are at least a few hospitals, GP offices, or ticketing systems that interact directly or indirectly with Cloudflare. They've sold themselves as a major defence in security.

therein2 months ago

> about this down time but in actuality is it really that big a deal to have 30 minutes of down time or whatever. It's not like anything behind cloudflare is "mission critical" in the sense that lives are at stake or even a huge amount of money is at stake.

This reads like sarcasm. But I guess it is not. Yes, you are a CDN, a major one at that. 30 minutes of downtime or "whatever" is not acceptable. I worked on traffic teams at social networks that saw themselves as that mission critical. CF is absolutely that critical, and there are definitely lives at stake.

philipwhiuk2 months ago

> Warning signs like this are how you know that something might be wrong!

Yes, as they explain it's the rollback that was triggered due to seeing these errors that broke stuff.

Scaevolus2 months ago

They saw errors and decided to do a second rollout to disable the component generating errors, causing a major outage.

JesseJames2102 months ago

That was their first mistake. If your deployment does not behave the way you expect it to (or even gives you a bad smell), roll back; that's how it used to be... when I was a kid... lol. Or I don't know, maybe load test before you deploy...?

8cvor6j844qw_d62 months ago

Would be nice if the outage dashboards were directly linked to this instead of whatever they have now.

bombcar2 months ago

“ Uh...it's probably not a problem...probably...but I'm showing a small discrepancy in...well, no, it's well within acceptable bounds again. Sustaining sequence. Nothing you need to worry about, Gordon. Go ahead.“

shadowgovt2 months ago

"Hey, this change is making the 'check engine' light turn on all the time. No problem; I just grabbed some pliers and crushed the bulb."

8note2 months ago

They aren't a panacea though; internal tools like that can be super noisy on errors, and be broken more often than they're working.

lionkor2 months ago

Cloudflare is now below 99.9% uptime, for anyone keeping track. I reckon my home PC is at least 99.9%.

ryandvm2 months ago

Indeed. AWS too.

I feel like the cloud hosting companies have lost the plot. "They can provide better uptime than us" is the entire rationale that a lot of small companies have when choosing to run everything in the cloud.

If they cost more AND they're less reliable, what exactly is the reason to not self host?

toomuchtodo2 months ago

> If they cost more AND they're less reliable, what exactly is the reason to not self host?

Shifting liability. You're paying someone else for it to be their problem, and if everyone does it, no one will take flak for continuing to do so. What is the average tenure of a CIO or decision maker electing to move to or remain at a cloud provider? This is why you get picked to talk on stage at cloud provider conferences.

(have been in the meetings where these decisions are made)

XCSme2 months ago

Plus, when you self-host, you can likely fix the issue yourself in a couple of hours max, instead of waiting indefinitely for a fix or support that might never come.

bombcar2 months ago

These global cloud outages aren’t the real issue; they affect everyone and get fixed.

What is killer is when there is a KNOWN issue that affects YOU but basically only you so why bother fixing it!

+2
XCSme2 months ago
chippiewill2 months ago

Capex vs Opex and scale-out.

For a start-up it's much easier to just pay the Cloud tax than it is to hire people with the appropriate skill sets to manage hardware or to front the cost.

Larger companies on the other hand? Yeah, I don't see the reason to not self host.

markus_zhang2 months ago

TBF, it depends on the number of outages locally. In my area it is one outage every thunderstorm/snowstorm, so unfortunately the uptime of my laptop, even with the help of a large, portable battery charging station (which can charge multiple laptops at the same time), is not great.

I sometimes fancy that I could just take cash, go into the woods, build a small solar array, collect & cleanse river water, and buy a Starlink console.

roguecoder2 months ago

Costco had a deal on solid-state UPS & solar panels a while back that I was happy to partake of

SoftTalker2 months ago

Yeah, I'd guess I average a power drop once a month or so at home. Never calculated the nines of uptime average, but it's not that infrequent.

I know when I need to reset the clock on my microwave oven.

lionkor2 months ago

99.9 is like 9 hours of downtime a year.

DANmode2 months ago

Far more achievable pricing and logistics than even ten years ago.

odie55332 months ago

When a piece of hardware goes or a careless backup process fails, downtime of a self-hosted service can be measured in days or weeks.

pclmulqdq2 months ago

Have 2 of them and your users only see correlated failures.

hashstring2 months ago

Where/how are you keeping track of this? What is their current uptime percentage?

ivanjermakov2 months ago

1 - downtime/period. I suspect period is 1 year. 99.9% is 8.76 hours of downtime a year.
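
For anyone who wants to sanity-check the arithmetic, a minimal sketch (assuming the one-year period mentioned above; illustrative only):

  fn downtime_budget_hours(uptime_pct: f64) -> f64 {
      // hours in a non-leap year times the allowed failure fraction
      365.0 * 24.0 * (1.0 - uptime_pct / 100.0)
  }

  fn main() {
      for pct in [99.9, 99.99, 99.999] {
          println!("{pct}% uptime allows {:.2} hours of downtime per year",
                   downtime_budget_hours(pct));
      }
  }

So the "9 hours" figure above is 99.9% rounded up (the exact number is 8.76 hours), and 99.99% leaves only about 53 minutes per year.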

lionkor2 months ago

Exactly this. Just keep a note every time they have a big outage, then take the last year and see if it's over 9 hours; that's a good measure.

tripplyons2 months ago

Do they include uptime guarantees in any contracts?

hashstring2 months ago

See 1.1 and 1.2; you get some credit back for when they fail to deliver.

https://www.cloudflare.com/business-sla/

chickensong2 months ago

That's a pretty silly comparison though.

lionkor2 months ago

Yes, but it should also never happen that CF drops below 99.99; that's extremely doable for them (and it is literally the point of using a CDN).

uyzstvqs2 months ago

What I'm missing here is a test environment. Gradual or not, why are they deploying straight to prod? At Cloudflare's scale, there should be a dedicated room in Cloudflare HQ with a fully isolated, model-scale deployment of their entire system. All changes should go there first, with tests run for every possible scenario.

Only after that do you use gradual deployment, with a big red oopsie button which immediately rolls the changes back. Languages with strong type systems won't save you, good procedure will.

tetha2 months ago

This is kinda what I'm thinking. We're absolutely not at the scale Cloudflare is at.

But we run software and configuration changes through three tiers - first stage for the dev-team only, second stage with internal customers and other teams depending on it for integration and internal usage -- and finally production. Some teams have also split production into different rings depending on the criticality of the customers and the number of customers.

This has led to a bunch of discussions early on, because teams with simpler software and very good testing usually push through dev and testing with little or no problem. And that's fine. If you have a track record of good changes, there is little reason to artificially prolong deployment in dev and test just because. If you want to, just go through it in minutes.

But after a few spicy production incidents, even the better and faster teams understood and accepted that once technical velocity exists, actual velocity is a choice, or a throttle if you want an analogy.

If you do good, by all means, promote from test to prod within minutes. If you fuck up production several times in a row and start threatening SLAs, slow down, spend more resources on manual testing and improving automated testing, give changes time to simmer in the internally productive environment, spend more time between promotions from production ring to production ring.

And this is on top of considerations of e.g. change risk. Some frontend-only application can move much faster than the PostgreSQL team, because one rollback is a container restart, and the other could be a multi-hour recovery from backups.

bombcar2 months ago

They have millions of "free" subscribers; said subscribers should be the guinea pigs for rollouts; paying (read: big) subscribers can get the breaking changes later.

beardedetim2 months ago

This feels like such a valid solution and is how past $dayjobs released things: send to the free users, roll out to paying users once that's proven to not blow up.

sznio2 months ago

If your target is availability, that's correct.

If your target is security, then _assuming your patch is actually valid_ you're giving better security coverage to free customers than to your paying ones.

Cloudflare is both, and their tradeoffs seem to be set on maximizing security at cost of availability. And it makes sense. A fully unavailable system is perfectly secure.

ectospheno2 months ago

Free tier doesn’t get WAF. We kept working.

bsdpqwz2 months ago

Their December 3rd blog about React states:

"These new protections are included in both the Cloudflare Free Managed Ruleset (available to all Free customers) ..... "

Having some burn-in time in the free tier before it hits the whole network would have been good?!

znkr2 months ago

I am sure they have this. What tends to happen is that the gradual rollout system becomes too slow for some rare, low latency rollout requirements, so a config system is introduced that fulfills the requirements. For example, let’s say you have a gradual rollout for binaries (slow) and configuration (fast). Over time, the fast rollout of the configuration system will cause outages, so it’s slowed down. Then a requirement pops up for which the config system is too slow and someone identifies a global system with no gradual rollout (e.g. a database) to be used as the solution. That solution will be compliant with all the processes that have been introduced to the letter, because so far nobody has thought of using a single database row for global configuration yet. Add new processes whenever this happens and at some point everything will be too slow and taking on more risk becomes necessary to stay competitive. So processes are adjusted. Repeat forever.

eviks2 months ago

> Languages with strong type systems won't save you, good procedure will.

One of the items in the list of procedures is to use types to encode rules of your system.

vouwfietsman2 months ago

> Languages with strong type systems won't save you

Neither will seatbelts if you drive into the ocean, or helmets if you drink poison. I'm not sure what your point is.

djmips2 months ago

I think you strengthened their point.

vouwfietsman2 months ago

I don't think I did. The implication is that using languages with strong types (as discussed in the article) is not a solution. That's rubbish. It's at least part of the solution.

teleforce2 months ago

The Internet's packet-switching-based architecture was originally designed to withstand this type of outage [1].

Some people even go further by speculating that the original military DARPA network, the precursor to the modern Internet, was designed to ensure the continuity of command and control (C&C) of US military operations in the potential event of an all-out nuclear attack during the Cold War.

This is the time for Internet researchers to redefine Internet applications and operations. The local-first paradigm is the first step in the right direction (pardon the pun) [2].

[1] The Real Internet Architecture: Past, Present, and Future Evolution:

https://press.princeton.edu/books/paperback/9780691255804/th...

[2] Local-first software You own your data, in spite of the cloud:

https://www.inkandswitch.com/essay/local-first/

liampulles2 months ago

The lesson presented by the last few big outages is that entropy is, in fact, inescapable. The comprehensibility of a system cannot keep up with its growing and aging complexity forever. The rate of unknown unknowns will increase.

The good news is that a more decentralized internet with human brain scoped components is better for innovation, progress, and freedom anyway.

agentifysh2 months ago

Yet my dedicated server has been up since 2015 with zero downtime.

I don't think this is an entropy issue; it's human error bubbling up, and Cloudflare charges a premium for it.

My faith in Cloudflare is shook for sure; two major outages weeks apart, and this won't be the last.

ectospheno2 months ago

Which 2015 kernel are you running?

PKop2 months ago

Why is the stability of your dedicated server a counterpoint to the claim that cloud behemoths can't keep up with their increasing entropy? It seems more like a supporting argument for OP at best, a non sequitur at worst.

bongodongobob2 months ago

Yeah, because it's not complex. It's 1 server. Get back to us when your 100k-server homelab data center that does a million different things has 10 years of uptime.

samdoesnothing2 months ago

With all due respect, your dedicated server is not quite as complex as Cloudflare...

venturecruelty2 months ago

Eppur si muove ("and yet it moves"). A random server serving things is exactly what the internet was supposed to be: a decentralized network of nodes.

samdoesnothing2 months ago

I agree, I was just commenting that your single server being simpler is less affected by entropy :)

hnthrowaway03282 months ago

I'm not sure how decentralization helps though. People in a bazaar are going to care even less about sharing shadow knowledge. Linux IMO succeeds not because of the bazaar but because of Linus.

venturecruelty2 months ago

Decentralization is resilience; that's why the internet even works at all. That was the entire point of it, in fact.

marcosdumay2 months ago

You don't keep a bazaar running with shadow knowledge. Either the important things are published or it doesn't run.

liampulles2 months ago

What is the shadow knowledge in this case?

thenthenthen2 months ago

Also curious, since the bazaar seems to be where one acquires shadow knowledge (grey market items, support structures for unregistered people etc.). See Chungking Mansions in Hong Kong for a practical example.

miyuru2 months ago

What's going on with Cloudflare's software team?

I have seen similar bugs in the Cloudflare API recently as well.

There is an endpoint for a feature that is available only to enterprise users, but the check for whether the user is on an enterprise plan is done at the last step.

archon8102 months ago

I recently ran into an issue with a Cloudflare API feature that, if you want to roll it back, requires contacting the support team, because there's no way to roll it back with the API or GUI. Even when the exact issue was pointed out, it took multiple days to change the setting, and to my knowledge there's still no API fix available.

https://www.answeroverflow.com/m/1234405297787764816

652 months ago

My guess? Code written by AI

markus_zhang2 months ago

TBF they are still hiring a lot of eng people from US/UK/EU:

https://www.cloudflare.com/careers/jobs/?department=Engineer...

rurban2 months ago

No, the original author left a long time ago. And nobody understands some uncovered parts anymore.

system22 months ago

100%. Upper managements try to cut costs and hire remote bullshitters.

venturecruelty2 months ago

Agreed in re cost cutting, but there's no need to disparage those of us who don't want to sit in traffic for two hours every day.

system22 months ago

I work remotely 100% too. I don't go to any office. That doesn't change the fact that most remote people are just using AI and bullshitting. Yes they are bullshitters. Don't need to be super soft about it, it is not like an LGBTQ+ subject. Many remote workers are shitty. There, I said it again. Most remote workers are shitty.

LelouBil2 months ago

Can you elaborate? I'm not sure what you mean by "at the last step"

Etheryte2 months ago

I'm not sure which endpoint gp meant, but as I understood it, as an example, imagine a three-way handshake that's only available to enterprise users. Instead of failing a regular user on the first step, they allow steps one and two, but then do the check on step three and fail there.

miyuru2 months ago

The API endpoint I am talking about needs an external verification. They allow the external verification to be done before checking whether the user is on the enterprise plan or not.

The feature is only available to enterprise plans, so it should not even allow the external verification.
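
For what it's worth, the ordering being described boils down to something like this: check the entitlement before doing any externally visible work (hypothetical names and handler, not Cloudflare's actual API):

  enum Plan { Free, Pro, Enterprise }

  struct ApiError(&'static str);

  // Placeholder for the external verification step described above.
  fn start_external_verification(account: &str) -> Result<(), ApiError> {
      println!("external verification started for {account}");
      Ok(())
  }

  // Plan-gated endpoint: the entitlement check comes first, so non-enterprise
  // accounts never trigger the external verification at all.
  fn gated_endpoint(plan: Plan, account: &str) -> Result<(), ApiError> {
      if !matches!(plan, Plan::Enterprise) {
          return Err(ApiError("feature requires an Enterprise plan"));
      }
      start_external_verification(account)
  }

  fn main() {
      assert!(gated_endpoint(Plan::Free, "acct_123").is_err()); // blocked up front
      assert!(gated_endpoint(Plan::Enterprise, "acct_456").is_ok());
  }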

flaminHotSpeedo2 months ago

What's the culture like at Cloudflare re: ops/deployment safety?

They saw errors related to a deployment, and because it was related to a security issue instead of rolling it back they decided to make another deployment with global blast radius instead?

Not only did they fail to apply the deployment safety 101 lesson of "when in doubt, roll back" but they also failed to assess the risk related to the same deployment system that caused their 11/18 outage.

Pure speculation, but to me it sounds like there's more to the story; this sounds like the sort of cowboy decision a team makes when they've either already broken all the rules or weren't following them in the first place.

dkyc2 months ago

One thing to keep in mind when judging what's 'appropriate' is that Cloudflare was effectively responding to an ongoing security incident outside of their control (the React Server RCE vulnerability). Part of Cloudflare's value proposition is being quick to react to such threats. That changes the equation a bit: any hour you wait longer to deploy, your customers are actively getting hacked through a known high-severity vulnerability.

In this case it's not just a matter of 'hold back for another day to make sure it's done right', like when adding a new feature to a normal SaaS application. In Cloudflare's case moving slower also comes with a real cost.

That isn't to say it didn't work out badly this time, just that the calculation is a bit different.

flaminHotSpeedo2 months ago

To clarify, I'm not trying to imply that I definitely wouldn't have made the same decision, or that cowboy decisions aren't ever the right call.

However, this preliminary report doesn't really justify the decision to use the same deployment system responsible for the 11/18 outage. Deployment safety should have been the focus of this report, not the technical details. The question I want answered isn't "are there bugs in Cloudflare's systems"; it's "has Cloudflare learned from its recent mistakes to respond appropriately to events".

vlovich1232 months ago

> doesn't really justify the decision to use the same deployment system responsible for the 11/18 outage

There’s no other deployment system available. There's a single system for config deployment, and it's all that was available, as they haven't yet finished the progressive rollout implementation.

locknitpicker2 months ago

> There’s no other deployment system available.

Hindsight is always 20/20, but I don't know how that sort of oversight could happen in an organization whose business model rides on reliability. Small shops understand the importance of safeguards such as progressive deployments or one-box-style deployments with a baking period, so why not the likes of Cloudflare? Don't they have anyone on their payroll who warns about the risks of global deployments without safeguards?

flaminHotSpeedo2 months ago

There was another deployment system available. The progressive one used to roll out the initial change, which presumably rolls back sanely too.

+1
edoceo2 months ago
dkyc2 months ago

The 11/18 outage was 2.5 weeks ago. Any learning and changes they made as a result of that probably haven't made their way to production yet.

Particularly if we're asking them to be careful and deliberate about deployments, it's hard to ask them to fast-track this.

Already__Taken2 months ago

The CVE isn't a zero-day, though; how come Cloudflare weren't at the table for early disclosure?

flaminHotSpeedo2 months ago

Do you have a public source about an embargo period for this one? I wasn't able to find one

+1
Pharaoh22 months ago
charcircuit2 months ago

Considering there were patched libraries at the time of disclosure, those libraries' authors must have been informed ahead of time.

ascorbic2 months ago

Cloudflare did have early access, and had mitigation in place from the start. The changes that were being rolled out were in response to ongoing attempts to bypass those.

Disclosure: I work at Cloudflare, but not on the WAF

cowsandmilk2 months ago

Cloudflare had already decided this was a rule that could be rolled out using their gradual deployment system. They did not view it as being so urgent that it required immediate global roll out.

udev40962 months ago

[flagged]

toomuchtodo2 months ago

Indeed, but it is what it is. Cloudflare comes out of my budget, and even with downtime, it's better than not paying them. Do I want to deal with what Cloudflare offers myself? I do not; I have higher-value work to focus on. I want to pay someone else to deal with this, and just like when cloud providers are down, it'll be back up eventually. Grab a coffee or beer and hang; we aren't saving lives, we're just building websites. This is not laziness or nihilism, but simply being rational and pragmatic.

+1
locknitpicker2 months ago
liampulles2 months ago

Rollback is a reliable strategy when the rollback process is well understood. If a rollback process is not well understood and well exercised, then it is a risk in itself.

I'm not sure of the nature of the rollback process in this case, but leaning on ill-founded assumptions is a bad practice. I do agree that a global rollout is a problem.

newsoftheday2 months ago

Rollback carries with it the contextual understanding of complete atomicity; otherwise it's slightly better than a yeet. It's similar to backups that are untested.

marcosdumay2 months ago

Complete atomicity carries with it the idea that the world is frozen, and any data only needs to change when you allow it to.

That's to say, it's an incredibly good idea when you can physically implement it. It's not something that everybody can do.

newsoftheday2 months ago

No, complete atomicity doesn't require a frozen state, it requires common sense and fail-proof, fool-proof guarantees derived from assurances gained from testing.

There is another name for rolling forward, it's called tripping up.

programd2 months ago

Global rollout of security code on a timeframe of seconds is part of Cloudflare's value proposition.

In this case they got unlucky with an incident before they finished work on planned changes from the last incident.

flaminHotSpeedo2 months ago

That's entirely incorrect. For starters, they didn't get unlucky. They made a choice to use the same system they knew was sketchy (which they almost certainly knew was sketchy even before 11/18)

And on top of that, Cloudflare's value proposition is "we're smart enough to know that instantaneous global deployments are a bad idea, so trust us to manage services for you so you don't have to rely on in house folks who might not know better"

crote2 months ago

> They saw errors related to a deployment, and because it was related to a security issue instead of rolling it back they decided to make another deployment with global blast radius instead?

Note that the two deployments were of different components.

Basically, imagine the following scenario: A patch for a critical vulnerability gets released, during rollout you get a few reports of it causing the screensaver to show a corrupt video buffer instead, you roll out a GPO to use a blank screensaver instead of the intended corporate branding, a crash in a script parsing the GPOs on this new value prevents users from logging in.

There's no direct technical link between the two issues. A mitigation of the first one merely exposed a latent bug in the second one. In hindsight it is easy to say that the right approach is obviously to roll back, but in practice a roll forward is often the better choice - both from an ops perspective and from a safety perspective.

Given the above scenario, how many people are genuinely willing to do a full rollback, file a ticket with Microsoft, and hope they'll get around to fixing it some time soon? I think in practice the vast majority of us will just look for a suitable temporary workaround instead.

lukeasrodgers2 months ago

Roll back is not always the right answer. I can’t speak to its appropriateness in this particular situation of course, but sometimes “roll forward” is the better solution.

flaminHotSpeedo2 months ago

Like the other poster said, roll back should be the right answer the vast majority of the time. But it's also important to recognize that roll forward should be a replacement for the deployment you decided not to roll back, not a parallel deployment through another system.

I won't say never, but a situation where the right answer to avoid a rollback (that it sounds like was technically fine to do, just undesirable from a security/business perspective) is a parallel deployment through a radioactive, global blast radius, near instantaneous deployment system that is under intense scrutiny after another recent outage should be about as probable as a bowl of petunias in orbit

crote2 months ago

Is a roll back even possible at Cloudflare's size?

With small deployments it usually isn't too difficult to re-deploy a previous commit. But once you get big enough, you've got enough developers that half a dozen PRs will have been merged between the start of the incident and now. How viable is it to stop the world, undo everything, and start from scratch any time a deployment causes the tiniest issue?

Realistically, the best you're going to get is merging a revert of the problematic changeset, but with the intervening merges that's still going to bring the system into a novel state. You're rolling forwards, not backwards.

+1
jamesog2 months ago
newsoftheday2 months ago

If companies like Cloudflare haven't figured out how to do reliable rollbacks, there seems little hope for any of us.

yuliyp2 months ago

I'd presume they have the ability to deploy a previous artifact vs only tip-of-master.

gabrielhidasy2 months ago

That will depend on how you structure your deployments. At some large tech companies, thousands of little changes are made every hour, while deployments happen in n-day cycles. A cut-off point in time is made, the first 'green' commit after that is picked for the current deployment, and if that fails in an unexpected way you just deploy the last binary back, fix (and test) whatever broke, and either try again or just abandon the release if the next cut is already close by.

echelon2 months ago

You want to build a world where roll back is 95% the right thing to do. So that it almost always works and you don't even have to think about it.

During an incident, the incident lead should be able to say to your team's on call: "can you roll back? If so, roll back" and the oncall engineer should know if it's okay. By default it should be if you're writing code mindfully.

Certain well-understood migrations are the only cases where roll back might not be acceptable.

Always keep your services in "roll back able", "graceful fail", "fail open" state.

This requires tremendous engineering consciousness across the entire org. Every team must be a diligent custodian of this. And even then, it will sometimes break down.

Never make code changes you can't roll back from without reason and without informing the team. Service calls, data write formats, etc.

I've been in the line of billion dollar transaction value services for most of my career. And unfortunately I've been in billion dollar outages.

drysart2 months ago

"Fail open" state would have been improper here, as the system being impacted was a security-critical system: firewall rules.

It is absolutely the wrong approach to "fail open" when you can't run security-critical operations.

echelon2 months ago

Cloudflare is supposed to protect me from occasional ddos, not take my business offline entirely.

This can be architected in such a way that if one rules engine crashes, other systems are not impacted and other rules, cached rules, heuristics, global policies, etc. continue to function and provide shielding.

You can't ask for Cloudflare to turn on a dime and implement this in this manner. Their infra is probably very sensibly architected by great engineers. But there are always holes, especially when moving fast, migrating systems, etc. And there's probably room for more resiliency.

this_user2 months ago

The question is perhaps what the shape and status of their tech stack is. Obviously, they are running at massive scale, and they have grown extremely aggressively over the years. What's more, especially over the last few years, they have been adding new product after new product. How much tech debt have they accumulated with that "move fast" approach that is now starting to rear its head?

sandeepkd2 months ago

I think this is probably a bigger root cause and is going to show up in different ways in the future. The mere act of adding new products to an existing architecture/system is bound to create knowledge silos around operations and tech debt. There is a good reason why big companies keep smart people on their payroll just to change a couple of lines after a week of debate.

ignoramous2 months ago

> this sounds like the sort of cowboy decision

Ouch. Harsh, given that Cloudflare is being over-honest (down to admitting they disabled the internal tool) and the outage's relatively limited impact (time-wise and in number of customers). It was just an unfortunate latent bug: Nov 18 was Rust's unwrap, Dec 5 it's Lua's turn with its dynamic typing.

Now, the real cowboy decision I want to see is Cloudflare [0] running a company-wide Rust/Lua code-review with Codex / Claude...

cf TFA:

  if rule_result.action == "execute" then
    rule_result.execute.results = ruleset_results[tonumber(rule_result.execute.results_index)]
  end

> This code expects that, if the ruleset has action="execute", the "rule_result.execute" object will exist ... error in the [Lua] code, which had existed undetected for many years ... prevented by languages with strong type systems. In our replacement [FL2 proxy] ... code written in Rust ... the error did not occur.

[0] https://news.ycombinator.com/item?id=44159166
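
For what it's worth, a rough sketch of what that claim looks like in practice: if the `execute` payload is modeled as an `Option`, the compiler forces the missing-value case to be handled (illustrative types only, not Cloudflare's actual FL2 code):

  struct ExecuteAction {
      results_index: usize,
      results: Option<Vec<String>>,
  }

  struct RuleResult {
      action: String,
      // In the Lua version this can silently be nil; here its absence is explicit.
      execute: Option<ExecuteAction>,
  }

  fn attach_results(rule_result: &mut RuleResult, ruleset_results: &[Vec<String>]) {
      if rule_result.action == "execute" {
          // The compiler won't let us touch `execute` without acknowledging
          // that it may be absent, e.g. for a killswitched rule.
          if let Some(exec) = rule_result.execute.as_mut() {
              exec.results = ruleset_results.get(exec.results_index).cloned();
          }
      }
  }

  fn main() {
      // The case from the incident: action says "execute" but the payload is absent.
      let mut killswitched = RuleResult { action: "execute".into(), execute: None };
      attach_results(&mut killswitched, &[]);
      assert!(killswitched.execute.is_none()); // handled instead of blowing up
  }
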
otterley2 months ago

From the post:

“We have spoken directly with hundreds of customers following that incident and shared our plans to make changes to prevent single updates from causing widespread impact like this. We believe these changes would have helped prevent the impact of today’s incident but, unfortunately, we have not finished deploying them yet.

“We know it is disappointing that this work has not been completed yet. It remains our first priority across the organization.”

NicoJuicy2 months ago

Where I work, all teams were notified about the React CVE.

Cloudflare made it less of an expedite.

rvz2 months ago

> Not only did they fail to apply the deployment safety 101 lesson of "when in doubt, roll back" but they also failed to assess the risk related to the same deployment system that caused their 11/18 outage.

Also there seems to be insufficient testing before deployment with very junior level mistakes.

> As soon as the change propagated to our network, code execution in our FL1 proxy reached a bug in our rules module which led to the following LUA exception:

Where was the testing for this one? If ANY exception happened during the rules checking, the deployment should fail and roll back. Instead, they didn't assess that as a likely risk and pressed on with the deployment "fix".

I guess those at Cloudflare are not learning anything from the previous disaster.

deadbabe2 months ago

As usual, Cloudflare is the man in the arena.

samrus2 months ago

There are other men in the arena who aren't tripping on their own feet

usrnm2 months ago

Like who? Which large tech company doesn't have outages?

+1
k8sToGo2 months ago
+1
nish__2 months ago
k__2 months ago

"tripping on their own feet" == "not rolling back"

nine_k2 months ago

> more to the story

From a more tinfoil-wearing angle, it may not even be a regular deployment, given the idea of Cloudflare being "the largest MitM attack in history". ("Maybe not even by Cloudflare but by NSA", would say some conspiracy theorists, which is, of course, completely bonkers: NSA is supposed to employ engineers who never let such blunders blow their cover.)

NoSalt2 months ago

Ooh ... I want to be on a cowboy decision making team!!!

paradite2 months ago

The deployment pattern from Cloudflare looks insane to me.

I've worked at one of the top fintech firms, whenever we do a config change or deployment, we are supposed to have rollback plan ready and monitor key dashboards for 15-30 minutes.

The dashboards need to be prepared beforehand on systems and key business metrics that would be affected by the deployment and reviewed by teammates.

I've never seen a downtime longer than 1 minute while I was there, because you get a spike on the dashboard immediately when something goes wrong.

For the entire system to be down for 10+ minutes due to a bad config change or deployment is just beyond me.

vlovich1232 months ago

That is also true at Cloudflare, for what it's worth. However, the company is so big that there are so many different products all shipping at the same time that it can be hard to correlate an issue with your release, especially since there's a 5-minute lag (if I recall correctly) in the monitoring dashboards to gather all the telemetry from thousands of servers worldwide.

Comparing the difficulty of running the world's internet traffic across hundreds of customer products to your fintech experience is like saying "I can lift 10 pounds. I don't know why these guys are struggling to lift 500 pounds".

paradite2 months ago

The fintech company I worked at does handle millions of QPS and has thousands of servers. It is on the same order of magnitude, or at least 0.1x the scale, not to mention the complexity of business logic involving monetary transactions.

If there's indeed a 5-minute lag in Cloudflare's monitoring dashboards, I honestly think that's a pretty big concern.

For example, a simple curl script hitting your top 100 customers' homepages every 30 seconds would have raised warnings and notifications within a minute. If you stagger deployments at 5-minute intervals, you could have identified the issue and initiated the rollback within 2 minutes and completed it within 3 minutes.

autoexec2 months ago

> However, the company is so big that there’s so many different products all shipping at the same time it can be hard to correlate it to your release

This kind of thing would be more understandable for a company without hundreds of billions of dollars, and for one that hasn't centralized so much of the internet. If a company has grown too large and complex to be well managed and effective, and it's starting to look like a liability for large numbers of people, there are obvious solutions for that.

evanelias2 months ago

What "hundreds of billions of dollars"? Cloudflare's annual revenue is around $2 billion, and they are not yet profitable.

+1
froober2 months ago
+1
autoexec2 months ago
pulkitsh12342 months ago

Genuinely curious: how do you actually implement detection systems for a large-scale global infrastructure that work with a < 1 minute SLO? Given cost is no constraint.

+1
autoexec2 months ago
vlovich1232 months ago

Can you name a major cloud provider that doesn’t have major outages?

If this were purely a money problem it would have been solved ages ago. It’s a difficult problem to solve. Also, they’re the youngest of the major cloud providers and have a fraction of the resources that Google, Amazon, and Microsoft have.

+1
autoexec2 months ago
theplatman2 months ago

With all due respect, engineers in finance can’t allow for outages like this because then you are losing massive amounts of money and potentially going out of business.

dehrmann2 months ago

Cloudflare is orders of magnitude larger than any fintech. Rollouts likely take much longer, and having a human monitoring a dashboard doesn't scale.

notepad0x902 months ago

That means they engineered their systems incorrectly, then? Precisely because they are much bigger, they should be more resilient. You know who's bigger than Cloudflare? Tier-1 ISPs. If they had an outage the whole internet would know about it, and they do have outages, except they don't cascade into a global mess like this.

Just speculating based on my experience: it's more likely than not that they refused to invest in fail-safe architectures for cost reasons. Control-plane and data-plane should be separate, a react patch shouldn't affect traffic forwarding.

Forget manual rollbacks; there should be automated reversion to a known working state.

vlovich1232 months ago

> Control-plane and data-plane should be separate

They are separate.

> a react patch shouldn't affect traffic forwarding.

If you can’t even bother to read the blog post maybe you shouldn’t be so confident in your own analysis of what should and shouldn’t have happened?

This was a configuration change to increase the buffered body size from 256 KB to 1 MiB.

The ability to be so wrong in so few words with such confidence is impressive but you may want to take more of a curiosity first approach rather than reaction first.

+2
notepad0x902 months ago
cowsandmilk2 months ago

> Rollouts likely take much longer

Cloudflare’s own post says the configuration change that resulted in the outage rolled out in seconds.

paradite2 months ago

The blog post said the rollout of the config change took 1 minute.

markus_zhang2 months ago

My guess is that CF has so many external customers that they need to move fast and try not to break things. My hunch is that their culture always favors moving fast. As long as they are not breaking too many things, customers won't leave them.

paradite2 months ago

There is nothing wrong with moving fast and deploying fast.

I'm more talking about how slow it was to detect the issue caused by the config change, and perform the rollback of the config change. It took 20 minutes.

linhns2 months ago

I think everyone favors moving fast. We humans want to see results of our action early.

theideaofcoffee2 months ago

Same; my time at an F100 e-commerce retailer showed me the same. Every change-control board justification needed an explicit back-out/restoration plan with exact steps to be taken, what was being monitored to ensure the plan was held to, contacts for the groups anticipated to be affected, and emergency numbers/rooms for quick conferences if something did in fact happen.

The process was pretty tight, almost no revenue-affecting outages from what I can remember because it was such a collaborative effort (even though the board presentation seemed a bit spiky and confrontational at the time, everyone was working together).

prdonahue2 months ago

And you moved at a glacial pace compared to Cloudflare. There are tradeoffs.

theideaofcoffee2 months ago

Yes, of course, I want the organization that inserted itself into handling 20% of the world's internet traffic to move fast and break things. Like breaking the internet on a bi-weekly basis. Yep, great tradeoff there.

Give me a break.

jimmydorry2 months ago

While you're taking your break, exploits gain traction in the wild, and one of the value propositions for using a service provider like Cloudflare is catching and mitigating these exploits as fast as possible. From the OP, this outage was related to handling a nasty RCE.

wvenable2 months ago

But if your job is to mitigate attacks/issues, then things can be very broken while you're being slow to mitigate them.

JeremyNT2 months ago

Lest we forget, they initially rose to prominence by being cheaper than the existing solutions, not better, and I suppose this is a tradeoff a lot of their customers are willing to make.

lljk_kennedy2 months ago

This sounds just as bad as yolo-merges, just on the other end of the spectrum.

nova220332 months ago
draw_down2 months ago

[dead]

ferat2 months ago

Today, after the Cloudflare outage, I noticed that almost all upload routes for my applications were being blocked.

After some investigation, I realized that none of these routes were passing the Cloudflare OWASP ruleset. The reported anomalies total 50, exceeding the pre-configured maximum of 40 (Medium).

Despite being simple image or video uploads, the WAF is generating anomalies that make no sense, such as the following:

Cloudflare OWASP Core Ruleset Score (+5)

933100: PHP Injection Attack: PHP Open Tag Found

Cloudflare OWASP Core Ruleset Score (+5)

933180: PHP Injection Attack: Variable Function Call Found

For now, I’ve had to raise the OWASP Anomaly Score Threshold to 60 and enable the JS Challenge, but I believe something is wrong with the WAF after today’s outage.

This issue has still not been resolved as of this moment.

rachr2 months ago

Time for Cloudflare to start using the BOFH excuse generator. https://bofh.d00t.org/

Smalltalker-802 months ago

So Cloudflare: (1) did a last-minute, untested change to their change ("turning off our WAF rule testing tool"), and (2) did an immediate global roll-out instead of a staged one. It seems they would have enough learning cases by now to never do that again...

xnorswap2 months ago

My understanding, paraphrased: "In order to gradually roll out one change, we had to globally push a different configuration change, which broke everything at once".

But a more important takeaway:

> This type of code error is prevented by languages with strong type systems

jsnell2 months ago

That's a bizarre takeaway for them to suggest, when they had exactly the same kind of bug with Rust like three weeks ago. (In both cases they had code implicitly expecting results to be available. When the results weren't available, they terminated processing of the request with an exception-like mechanism. And then they had the upstream services fail closed, despite the failing requests being to optional sidecars rather than on the critical query path.)

littlestymaar2 months ago

In fairness, the previous bug (with the Rust unwrap) should never have happened: someone explicitly called the panicking function, the review didn't catch it and the CI didn't catch it.

It required a significant organizational failure to happen. These happen but they ought to be rarer than your average bug (unless your organization is fundamentally malfunctioning, that is)

greatgib2 months ago

The issue would also not have happened if someone had written the right code and tests, and the review or CI had caught it...

marcosdumay2 months ago

It's different to expect somebody to write the correct program every time than to expect somebody not to call the "break_my_system" procedure that has warnings all over it telling people it's there for quick learning-to-use examples or other things you'll never run.

Hamuko2 months ago

Yeah, my first thought was that had they used Rust, maybe we would've seen them point out a rule_result.unwrap() as the issue.

pdimitar2 months ago

To be precise, the previous problem with Rust was because somebody copped out and used a temporary escape hatch function that absolutely has no place in production code.

It was mostly an amateur mistake. Not Rust's fault. Rust could never gain adoption if it didn't have a few escape hatches.

"Damned if they do, damned if they don't" kind of situation.

There are even lints for the usage of the `unwrap` and `expect` functions.

As the other sibling comment points out, the previous Cloudflare problem was an acute and extensive organizational failure.

zozbot2342 months ago

You can make an argument that .unwrap() should have no place in production code, but .expect("invariant violated: etc. etc.") very much has its place. When the system is in an unpredicted and not-designed-for state it is supposed to shut down promptly, because this makes it easier to troubleshoot the root cause failure whereas not doing so may have even worse consequences.
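
As a concrete illustration of the three idioms being debated (nothing Cloudflare-specific, just a toy config lookup):

  use std::collections::HashMap;

  fn buffer_kb(config: &HashMap<String, u32>, key: &str) -> u32 {
      // 1. config.get(key).unwrap() panics with a generic message; fine for
      //    throwaway code, risky on a proxy's hot path.
      // 2. config.get(key).copied().expect("invariant violated: key seeded at startup")
      //    also panics, but the message names the broken invariant, which helps triage.
      // 3. Degrading gracefully: the caller decides what a safe default is.
      config.get(key).copied().unwrap_or(256)
  }

  fn main() {
      let config = HashMap::from([("waf_buffer_kb".to_string(), 1024)]);
      assert_eq!(buffer_kb(&config, "waf_buffer_kb"), 1024);
      assert_eq!(buffer_kb(&config, "missing"), 256); // no panic, falls back
  }

Which of the three is right depends on whether failing closed or failing open is the safer behaviour for that particular component, which is really the argument elsewhere in this thread.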

pdimitar2 months ago

I don't disagree but you might as well also manually send an error to f.ex. Sentry and just halt processing of the request.

Though that really depends. In companies where k8s is used the app will be brought back up immediately anyway.

debugnik2 months ago

Prevented unless they assert the wrong invariant at runtime like they did last time.

skywhopper2 months ago

This is the exact same type of error that happened in their Rust code last time. Strong type systems don’t protect you from lazy programming.

inejge2 months ago

It's not remotely the same type of error -- error non-handling is very visible in the Rust code, while the Lua code shows the happy path, with no indication that it could explode at runtime.

Perhaps it's the similar way of not testing the possible error path, which is an organizational problem.

jakub_g2 months ago

The interesting part:

After rolling out a bad ruleset update, they tried a killswitch (rolled out immediately to 100%) which was a code path never executed before:

> However, we have never before applied a killswitch to a rule with an action of “execute”. When the killswitch was applied, the code correctly skipped the evaluation of the execute action, and didn’t evaluate the sub-ruleset pointed to by it. However, an error was then encountered while processing the overall results of evaluating the ruleset

> a straightforward error in the code, which had existed undetected for many years

8cvor6j844qw_d62 months ago

> have never before applied a killswitch to a rule with an action of “execute”

One might think a company on the scale of Cloudflare would have a suite of comprehensive tests to cover various scenarios.

hnthrowaway03282 months ago

I kinda think most companies out there are like that. Moving fast is the motto I heard the most.

They are probably OK with occasional breaks as long as customers don't mind.

robryan2 months ago

Yeah, the example they gave does feel like pretty isolated unit-test territory, or at least an integration test on a subset of the system that could be run in isolation.

8cvor6j844qw_d62 months ago

Are there some underlying factors that resulted in the recent outages (e.g., new processes, layoffs, etc.), or is it just a series of pure coincidences?

Elucalidavah2 months ago

Sounds like their "FL1 -> FL2" transition is involved in both.

Someone12342 months ago

It was involved in the previous one, but not in this latest one. All FL2 did was prevent the outage from being even more widespread than it was. None of this had anything to do with the migration.

tetha2 months ago

If FL2 didn't have the outage, and FL1 did, the pace of the migration did have an impact.

Though this is showing the problem with these things: Migrating faster could have reduced the impact of this outage, while increasing the impact of the last outage. Migrating slower could have reduced the impact of the last outage, while increasing the impact of this outage.

This is a hard problem: How fast do you rip old working infrastructure out and risk finding new problems in the new stack, yet, how long do you tolerate shortcomings of the old stack that caused you to build the new stack?

venturecruelty2 months ago

I'm sure everything slowly falling apart all at the same time is due to some strange coincidence, and not the regular and steady firing of thousands of people.

rr8082 months ago

AI

gernigg2 months ago

[flagged]

aeyes2 months ago

How hard can it be for a company with 1,000 engineers to create a canary region before blasting their centralized changes out to everyone?

Every change is a deployment, even if it's config. Treat it as such.

Also, you should know that a strongly typed language won't save you from every type of problem. And especially not if you allow things like unwrap().

It is just mind boggling that they very obviously have completely untested code which proxies requests for all their customers. If you don't want to write the tests then at least fuzz it.

gkoz2 months ago

I sometimes feel we'd be better off without all the paternalistic kitchen-sink features. The solid, properly engineered features used intentionally aren't causing these outages.

ilkkao2 months ago

Agreed, I don't really like Cloudflare trying to magically fix every web exploit there is in frameworks my site has never used.

nish__2 months ago

Honestly. This feels outside of their domain.

theideaofcoffee2 months ago

I've been downvoted enough on my comments on this blog post that I'm hesitant to add anything else, but here I agree with you. They're trying to be everything to everyone. Where does the accountability of their customers, who are responsible for running, you know, up-to-date packages, come in? Like, don't you take just a little bit of pride in your work, continually watching CVE lists and exploits and putting a minimum of effort toward patching your own shit, rather than pawning it off on a vendor? I simply can't understand the mindset.

venturecruelty2 months ago

The good news is that you can have that right now. Just don't use Cloudflare.

egorfine2 months ago

> provides customers with protection against malicious payloads, allowing them to be detected and blocked. To do this, Cloudflare’s proxy buffers HTTP request body content in memory for analysis.

I have mixed feelings about this.

On one hand, I absolutely don't want a CDN to look inside my payloads and decide what's good for me or not. Today it's protection, tomorrow it's censorship.

At the same time, this is exactly what Cloudflare is good for: protecting sites from malicious requests.

udev40962 months ago

We need a decentralized DDoS mitigation network based on incentives: donate X amount of bandwidth, get Y amount of protection from other peers. Yes, we'd have to do TLS inspection on every end for effective L7 mitigation, but at least some filtering can be done without decrypting any packets.

mewpmewp22 months ago

How would that work for latency / reliability of the requests?

seanparsons2 months ago

"This type of code error is prevented by languages with strong type systems. In our replacement for this code in our new FL2 proxy, which is written in Rust, the error did not occur." It's starting to sound like a broken record at this point, languages are still seen as equal and as a result, interchangeable.

lapcat2 months ago

> This is a straightforward error in the code, which had existed undetected for many years. This type of code error is prevented by languages with strong type systems. In our replacement for this code in our new FL2 proxy, which is written in Rust, the error did not occur.

Cloudflare deployed code that was literally never tested, not even once, neither manually nor by unit test; otherwise the straightforward error would have been detected immediately. And their implied solution seems to be not testing their code when written, or even adding 100% code coverage after the fact, but rather relying on a programming language to bail them out and cover up their failure to test.

JohnMakin2 months ago

Large scale infrastructure changes are often by nature completely untestable. The system is too large, there are too many moving parts to replicate with any kind of sane testing, so often, you do find out in prod, which is why robust and fast rollback procedures are usually desirable and implemented.

lapcat2 months ago

> Large scale infrastructure changes are often by nature completely untestable.

You're changing the subject here and shifting focus from the specific to the vague. The two postmortems after the recent major Cloudflare outages both listed straightforward errors in source code that could have been tested and detected.

Theoretical outages could theoretically have other causes, but these two specific outages had specific causes that we know.

> which is why robust and fast rollback procedures are usually desirable and implemented.

Yes, nobody is arguing against that. It's a red herring with regard to my point about source code testing.

JohnMakin2 months ago

I am not changing any subject. These are glue-logic scripts connecting massive pieces of infra together, spanning what is likely several teams and orgs over the course of many years. It is impossible to blurt out something like "well, source code testing" for something like this, when the source code inputs are not possibly testable outside the scale of the larger system. They're often completely unknowable as well.

With all due respect, it sounds like you have not worked on these types of systems, but out of curiosity - what type of test do you think would have prevented this?

+1
lapcat2 months ago
roguecoder2 months ago

Akamai manages it.

winddude2 months ago

They don't; Akamai has had several outages as well, just no one notices. Akamai is way, way smaller than Cloudflare: 20% of internet traffic passes through CF networks; not sure it's even measurable for Akamai.

+1
andrewf2 months ago
resonious2 months ago

> This type of code error is prevented by languages with strong type systems.

True, as long as you don't call unwrap!

IshKebab2 months ago

That's a different kind of error. And even then unwrap is opt-in whereas this is opt-out if you're lucky.

Kind of funny that we get something showing the benefits of Rust so soon after everyone was ragging on about unwrap anyway!

0x9e3779b62 months ago

Apart from Cloudflare's config system working too well at propagating failure modes:

the code quality on a very mission-critical path powering "half the internets" could've been better.

I'm not sure if Lua LSP / linting tools would've caught the issue (I've also never used Lua myself), but tools and methods exist to test mission-critical dynamically typed code.

A company with a genuinely impressive concentration of talent was expected to think about fuzzing this legacy crap somehow.

As for the `.unwrap()` related incident: normally code like this should never pass review.

You just (almost) never unwrap in production code.

I'd start with code quality tooling and, more importantly, the related processes, before even thinking about architecture changes.

Changing the architecture in a global sense, when it has in general served for years with 99.99(9)% uptime, is not an obviously smart thing to do.

The architecture is doing great; it's just the impact that has been devastating because of the scale.

Everyone makes errors and that's fine, but there are ways not to roll shit into prod (reference to the famous meme pic where bugs do that).

scottlamb2 months ago

There's a lot of bad karma in this discussion. It's hard to run large services. Careful when you set a precedent of pillorying after an outage. It could be you next!

Yes, this is the second time in a month. Were folks expecting that to have been enough time for them to have made sweeping technical and organization changes? I say no—this doesn't mean they aren't trying or haven't learned any lessons from the last outage. It's a bit too soon to say that.

I see this event primarily as another example of the #1 class of major outages: bad rapid global configuration change. (The last CloudFlare outage was too, but I'm not just talking about CloudFlare. Google has had many many such outages. There was an inexplicable multi-year gap between recognizing this and having a good, widely available staged config rollout system for teams to drop into their systems.) Stuff like DoS attack configurations needs to roll out globally quickly. But they really need make it not quite this quick. Imagine they deployed to one server for one minute, one region for one minute on success, then everywhere on success. Then this would have been a tiny blip rather than a huge deal.

(It can be a bit hard to define "success" when you're doing something like blocking bad requests that may even be a majority of traffic during a DDoS attack, but noticing 100% 5xx errors for 38% of your users due to a parsing bug is doable!)
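
A rough sketch of the staged shape described above (hypothetical stage names; a real system would key "success" off error-rate telemetry rather than a boolean):

  use std::time::Duration;

  fn rollout(
      stages: &[(&str, Duration)],
      apply: impl Fn(&str),
      healthy: impl Fn(&str) -> bool,
  ) -> Result<(), String> {
      for &(stage, soak) in stages {
          apply(stage);
          std::thread::sleep(soak); // let telemetry catch up before widening
          if !healthy(stage) {
              apply("rollback"); // stop widening and undo: a blip, not a global outage
              return Err(format!("aborted after unhealthy stage: {stage}"));
          }
      }
      Ok(())
  }

  fn main() {
      // One server, then one region, then everywhere, each with a one-minute soak.
      let stages = [
          ("canary-server", Duration::from_secs(60)),
          ("canary-region", Duration::from_secs(60)),
          ("global", Duration::from_secs(60)),
      ];
      let outcome = rollout(&stages, |target| println!("deploying to {target}"),
                            |_| true); // pretend the error-rate check passed
      println!("{outcome:?}");
  }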

As for the specific bug: meh. They should have had 100% branch coverage on something as critical (and likely small) as the parsing for this config. Arguably a statically typed language would have helped (but the `.unwrap()` error in the previous outage is a bit of a counterargument to that). But it just wouldn't have mattered that much if they caught it before global rollout.

iLoveOncall2 months ago

The most surprising thing from this article is that Cloudflare handles only around 85M TPS.

blibble2 months ago

it can't really be that small, can it?

that's maybe half a rack of load

nish__2 months ago

Given the number of lua scripts they seem to be running, it has to take more than half a rack.

hrimfaxi2 months ago

Having their changes fully propagate within 1 minute is pretty fantastic.

denysvitali2 months ago

This is most likely a strong requirement for such a big-scale deployment of DDoS protection and detection, which explains their architectural choices (ClickHouse & co) and the need for super-low-latency config changes.

Since attackers might rotate IPs more frequently than once per minute, this effectively means that the whole fleet of servers should be able to react quickly to decisions made centrally.

chatmasta2 months ago

The coolest part of Cloudflare’s architecture is that every server is the same… which presumably makes deployment a straightforward task.

jamesog2 months ago

The bad change wasn't even a deployment as such, just an entry in the global KV store https://blog.cloudflare.com/introducing-quicksilver-configur...

Actual deployments take hours to propagate worldwide.

(Disclosure: former Cloudflare SRE)

reassess_blind2 months ago

Why wasn’t the rollback fixed within the second minute after they saw the 500s?

rany_2 months ago

> As part of our ongoing work to protect customers using React against a critical vulnerability, CVE-2025-55182, we started rolling out an increase to our buffer size to 1MB, the default limit allowed by Next.js applications.

Why would increasing the buffer size help with that security vulnerability? Is it just a performance optimization?

boxed2 months ago

I think the buffer size is the limit on what they check for malicious data, so the old 128k would mean it would be trivial to circumvent by just having 128k of OK data and then putting the exploit after it.
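
If that's the case, the failure mode is roughly the following (a toy sketch, not Cloudflare's scanner; the function and signature matching are made up for illustration):

    // Toy scanner: only the first `limit` bytes of the body are inspected,
    // so anything an attacker places after that boundary goes unseen.
    fn body_looks_malicious(body: &[u8], limit: usize, signature: &[u8]) -> bool {
        let inspected = &body[..body.len().min(limit)];
        inspected.windows(signature.len()).any(|w| w == signature)
    }

    fn main() {
        let signature = b"EXPLOIT";
        let mut body = vec![b'A'; 128 * 1024]; // 128 KiB of harmless padding
        body.extend_from_slice(signature);     // payload hidden past the old limit

        assert!(!body_looks_malicious(&body, 128 * 1024, signature));  // missed
        assert!(body_looks_malicious(&body, 1024 * 1024, signature));  // caught at 1 MB
    }

Which would explain why raising the inspected buffer to the framework's own body limit matters for the WAF rule to be effective.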

whs2 months ago

I got curious and checked AWS WAF. Apparently the AWS WAF default limit for CloudFront is 16KB and the max is 64KB.

redslazer2 months ago

If the request data is larger than the limit it doesn’t get processed by the Cloudflare system. By increasing buffer size they process (and therefore protect) more requests.

rvz2 months ago

> Instead, it was triggered by changes being made to our body parsing logic while attempting to detect and mitigate an industry-wide vulnerability disclosed this week in React Server Components.

Doesn't Cloudflare rigorously test their changes before deployment to make sure that this does not happen again? This better not have been used to cover for the fact that they are using AI to fix issues like this one.

There had better not be any vibe coders or AI agents touching such critical pieces of infrastructure at all, and I expected Cloudflare to learn from the previous outage very quickly.

But this is quite a pattern; we might need to consider putting their unreliability next to GitHub's (which goes down every week).

eviks2 months ago

> This first change was being rolled out using our gradual deployment system.

So they are aware of some basic mitigation tactics guarding against errors

> This system does not perform gradual rollouts,

They just choose to YOLO

> Typical actions are “block”, “log”, or “skip”. Another type of action is “execute”,

> However, we have never before applied a killswitch to a rule with an action of “execute”.

Do they do no testing? This isn't even fuzzing with “infinite” variations, just a limited list of actions (a sketch of what exhaustive handling could look like follows at the end of this comment).

> existed undetected for many years. This type of code error is prevented by languages with strong type systems.

So this solution is also well known, just ignored for years, because "if it’s not broken, don’t fix it?", right?
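
For the action-handling gap specifically, a typed representation would at least force the killswitch path to say what it does for every kind of action. A toy sketch under those assumptions (hypothetical types, not Cloudflare's code):

    // Modelling rule actions as an enum: forgetting to handle a variant in the
    // killswitch path becomes a compile error rather than a production surprise.
    enum RuleAction {
        Block,
        Log,
        Skip,
        Execute, // in reality this would reference a child ruleset to run
    }

    fn killswitch_effect(action: &RuleAction) -> &'static str {
        // The compiler rejects this match if any variant is left unhandled.
        match action {
            RuleAction::Block | RuleAction::Log | RuleAction::Skip => "disable the rule",
            RuleAction::Execute => "disable the rule and skip its child ruleset",
        }
    }

    fn main() {
        for action in [RuleAction::Block, RuleAction::Log, RuleAction::Skip, RuleAction::Execute] {
            println!("{}", killswitch_effect(&action));
        }
    }

That doesn't replace testing the untried combination, but it makes the "we never applied a killswitch to an execute rule" case visible at compile time rather than in production.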

mxpxrocks102 months ago

First, what Cloudflare does is hard and I want to start with that.

That being said, I think it’s worth a discussion. How many of the last 3 outages were because of JGC (the former CTO) retiring and Dane taking over?

Did JGC have a steady hand that’s missing? Or was it just time for outages that would have happened anyway?

Dane has maintained a culture of transparency which is fantastic, but did something get injected in the culture leading towards these issues? Will it become more or less stable since JGC left?

Curious for anyone with some insight or opinions.

(Also, if it wasn’t clear - huge Cloudflare fan and sending lots of good vibes to the team)

cinericius2 months ago

Looking at Dane's career history on LinkedIn, it appears that he has only ever been in product and some variety of manager, and his degree is in 'Engineering Management System'. It's an odd choice given that the previous two CTOs (Lee and John) were extremely technical and how core technology is to Cloudflare.

As with any organisation where the CTO is not technical, there will be someone who the 'CTO' has to ask to understand technical situations. In my opinion, that person being asked is the real CTO, for any given situation.

jokoon2 months ago

I still don't understand what Cloudflare's business model is, yet they manage to make news.

I don't see how their main product can be DDoS protection when Cloudflare itself goes down for some reason.

This company makes zero sense to me.

OneDeuxTriSeiGo2 months ago

Cloudflare protects against DDoS but also various other forms of malicious traffic (bots, low-reputation IP users, etc.), and often with a DDoS or similar attack it's better to have the site go down from time to time than for the attackers to hammer the servers behind Cloudflare and waste massive amounts of resources.

i.e. it's the difference between "the site goes down for a few hours every few months" and "an attacker slammed your site and, through on-demand scaling or serverless cloud fees, blew your entire infrastructure budget for the year."

Doubly so when your service is part of a larger platform and attacks on your service risk harming the reputation of the larger platform.

mr_windfrog2 months ago

If I'm remembering correctly, there was another outage around 10 days ago.

It still surprises me that there are basically no free alternatives comparable to Cloudflare. Putting everything on CF creates a pretty serious single point of failure.

It's strange that in most industries you have at least two major players, like Coke vs. Pepsi or Nike vs. Adidas. But in the CDN/edge space, there doesn't seem to be a real free competitor that matches Cloudflare's feature set.

It feels very unhealthy for the ecosystem. Does anyone know why this is the case?

halapro2 months ago

I reckon AWS is "free enough" for most of its users, but it's not as easy nor as safe for the common user.

mr_windfrog2 months ago

Totally agree, AWS's free tier is great for many users, but it can definitely be tricky and risky for the average person.

Bender2 months ago

Suggestion for Cloudflare: Create an early adopter option for free accounts.

Benefit: Earliest uptake of new features and security patches.

Drawback: Higher risk of outages.

I think this should be possible since they already differentiate between free, pro, and enterprise accounts. I do not know how the routing for that works, but I bet they could do this. Think crowd-sourced beta testers. It would also be a perk for anyone whose PCI audit or FedRAMP posture prioritizes security over uptime.

LelouBil2 months ago

I would for sure enable this, my personal server can handle being unreachable for a few hours in exchange for (potentially) interesting features.

rfmoz2 months ago

They do in some way, because the LaLiga blocking problems in Spain don't affect the paid accounts (= large websites).

Another suggestion is to do it during the night shift in every country; right now they only take the US night into account.

ectospheno2 months ago

If that meant free tier had WAF then sure, I’d enable that.

kune2 months ago

The interesting aspect of the Cloudflare report, which is not clarified, is how they came to the risk assessment that it is OK to roll out a change non-gradually and globally without testing the procedure first. The only justification I can see is that the React/Next.js remote command execution vulnerabilities are actively exploited. But if this is the case they should say so.

qouteall2 months ago

It's (at least) the second time Cloudflare has been bitten by React. Last time a useEffect caused an incident.

https://blog.cloudflare.com/deep-dive-into-cloudflares-sept-...

markus_zhang2 months ago

I wonder if anyone internal could share a bit about the culture. I'm mostly interested in the following part:

If someone messes up royally, is there someone who says "if you break the build/whatever super critical, then your ass is the grass and I'm the lawn mower"?

_pdp_2 months ago

So no static compiler checks and apparently no fuzzers used to ensure these rules work as intended?

perching_aix2 months ago

Such tooling exists for Lua? Didn't know.

_pdp_2 months ago

They exist if you are worth 70 billion.

egnehots2 months ago

From a customer perspective, I think there should be an option:

- prioritize security: get patches ASAP

- prioritize availability: get patches after a cooldown period

Because ultimately, it's a tradeoff that cannot be handled by Cloudflare. It depends on your business, your threat model.

AtNightWeCode2 months ago

Not missing working with Lua in proxies. I think this is no big thing; they rolled back the change fairly quickly. Still bad, but that outage in mid-November was worse, since it was many bad decisions stacking up and it took too long to resolve.

blinded2 months ago

They bypassed the gradual rollout system in order to meet a deadline for a CVE. They put security above availability, which is a tough tradeoff. Is there a non-prod environment where that one-off WAF testing tool change could have been tested?

bradly2 months ago

Dang… I don’t even use React and it still brings down my sites. Good beats I guess.

stego-tech2 months ago

The problem that irks me isn’t that Cloudflare is having outages (everyone does and will at some point, no matter how many 9’s your SLA states), it’s that the internet is so damn centralized that a Cloudflare issue can take out a continent-sized chunk of the internet. Kudos to them on their success story, but oh my god that’s way too many eggs in one basket in general.

roguecoder2 months ago

I notice that this is the kind of thing that solid sociable tests ought to have caught. I am very curious how testable that code is (random procedural if-statements don't inspire high confidence.)

j452 months ago

Curious if there isn't a way to ingest the incoming traffic at scale, but route it to a secondary infrastructure to make sure it's resolving correctly, before pushing it to production?

away0x01ct2 months ago

1.1.1.1 domain test server, whether a relay or endpoints including /cdn-cgi/trace is WAF testing error, for 500 HTTP network & Cloudflare managed R-W-X permissions

denysvitali2 months ago

Ironically, this time around the issue was in the proxy they're going to phase out (and replace with the Rust one).

I truly believe they're really going to make resilience their #1 priority now, and acknowledging the release process errors that they didn't acknowledge for a while (according to other HN comments) is the first step towards this.

HugOps. Although bad for reputation, I think these incidents will help them shape (and prioritize!) resilience efforts more than ever.

At the same time, I can't think of a company more transparent than CloudFlare when it comes to these kinds of things. I also understand the urgency behind this change: CloudFlare acted (too) fast to mitigate the React vulnerability and this is the result.

Say what you want, but I'd prefer to trust a Cloudflare that admits and acts upon its fuckups, rather than one trying to cover them up or downplay them like some other major cloud providers.

@eastdakota: ignore the negative comments here, transparency is a very good strategy and this article shows a good plan to avoid further problems

iLoveOncall2 months ago

> I truly believe they're really going to make resilience their #1 priority now

I hope that was their #1 priority from the very start given the services they sell...

Anyway, people always tend to overthink those black-swan events. Yes, 2 happened in quick succession, but what is the average frequency overall? Insignificant.

roguecoder2 months ago

This is Cloudflare. They've repeatedly broken DNS for years.

Looking across the errors, it points to some underlying practices: a lack of systems metaphors, modularity, and testability, and a reliance on super-generic configuration instead of software with enforced semantics.

denysvitali2 months ago

I think they have to strike a balance between being extremely fast (reacting to vulnerabilities and DDoS attacks) and still being resilient. I don't think it's an easy situation.

trashburger2 months ago

I would very much like for him not to ignore the negativity, given that, you know, they are breaking the entire fucking Internet every time something like this happens.

denysvitali2 months ago

This is the kind of comment I wish he would ignore.

You can be angry - but that doesn't help anyone. They fucked up, yes, they admitted it and they provided plans on how to address that.

I don't think they do these things on purpose. Of course given their good market penetration they end up disrupting a lot of customers - and they should focus on slow rollouts - but I also believe that in a DDOS protection system (or WAF) you don't want or have the luxury to wait for days until your rule is applied.

beanjuiceII2 months ago

I hope he doesn't ignore it; the internet has been forgiving enough toward Cloudflare's string of failures. It's getting pretty old, and it creates a ton of chaos. I work with life-saving devices, and being impacted in any way in data monitoring has a huge impact in many ways. "Sorry ma'am, we can't give you your child's T1D readings on your follow app because our provider decided to break everything in pursuit of some React bug" has a great ring to it.

Anon10962 months ago

Cloudflare and other cloud infra providers are only providing primitives to use, in this case WAF. They have target uptimes and it's never 100%. It's up to the people actually making end user services (like your medical devices) to judge whether that is enough and if not to design your service around it.

(and also, rolling your own version of WAF is probably not the right answer if you need better uptime. It's exceedingly unlikely a medical devices company will beat CF at this game.)

+1
esseph2 months ago
nish__2 months ago

Maybe not on purpose but there's such a thing as negligence.

fidotron2 months ago

> HugOps

This childish nonsense needs to end.

Ops are heavily rewarded because they're supposed to be responsible. If they're not then the associated rewards for it need to stop as well.

denysvitali2 months ago

I have never seen an Ops team being rewarded for avoiding incidents (by focusing on tech debt reduction); instead they get the opposite - blamed when things go wrong.

I think it's human nature (it's hard to realize something is going well until it breaks), but it still has a very negative psychological effect. I can barely imagine the stress the team is going through right now.

fidotron2 months ago

> I have never seen an Ops team being rewarded for avoiding incidents

That's why their salaries are so high.

+1
denysvitali2 months ago
+1
esseph2 months ago
agoodusername632 months ago

news to me.

esseph2 months ago

Ops has never been "rewarded" at any org I've ever been at or heard about, including physical infra companies.

da_grift_shift2 months ago

[ Removed by Reddit ]

denysvitali2 months ago

Wow. The three comments below parent really show how toxic HN has become.

beanjuiceII2 months ago

being angry about something doesn't make it toxic, people have a right to be upset

denysvitali2 months ago

The comment, before the edit, was what I would consider toxic. No wonder it has been edited.

It's fine to be upset, and especially rightfully so after the second outage in less than 30 days, but this doesn't justify toxicity.

nish__2 months ago

Is it crazy to anyone else that they deploy every 5 minutes? And that it's not just config updates, but actual code changes with this "execute" action.

roguecoder2 months ago

No: I've been at plenty of places where we get to continuous deployment, where any given change is deployed on demand.

What is wild is that they are deploying without first testing in a staging environment.

kccqzy2 months ago

Config updates are not so clear cut from code changes.

Once I worked with a team in the anti-abuse space where the policy was that code deployments must happen over 5 days while config updates could take a few minutes. Then an engineer on the team argued that deploying new Python code doesn't count as a code change because the CPython interpreter did not change; it didn't even restart. And indeed, given how dynamic Python is, it is totally possible to import new Python modules that did not exist when the interpreter process was launched.

nish__2 months ago

lol man... If your "config" is Turing-complete, that's a code change. Full stop. Bro's just lazy.

kccqzy2 months ago

Yeah sure. Now tell the people who are designing config languages to make them both powerful and non-Turing-complete. You see, it is a very hard problem for a powerful language to be non-Turing-complete.

guluarte2 months ago

Is it just me, or are critical software bugs becoming more and more common?

snafeau2 months ago

A lot of these kinds of bugs feel like they could be caught by a simple review bot like Greptile... I wonder if Cloudflare uses an equivalent tool internally?

nkmnz2 months ago

What makes greptile a better choice compared to claude code or codex, in your opinion?

roguecoder2 months ago

That has not been my experience with those tools.

Super-procedural code in particular is too complex for humans to follow, much less AI.

nish__2 months ago

Any bot that runs an AI model should not be called "simple".

antiloper2 months ago

Make faster websites:

> we started rolling out an increase to our buffer size to 1MB, the default limit allowed by Next.js applications.

Why is the Next.js limit 1 MB? It's not enough for uploading user-generated content (photographs, scanned invoices), but a 1 MB request body for even multiple JSON API calls is ridiculous. These frameworks need to at least provide some pushback against unoptimized development, even if it's just a lower default request body limit. Otherwise all web applications will become as slow as the MS Office suite or Reddit.

ramon1562 months ago

The update was to raise it to 3MB (10MB on paid plans).

AmazingTurtle2 months ago

a) They serialize tons of data into requests. b) Headers, mostly cookies. They are a thing, and they are being abused all over the world by newbies.

dznodes2 months ago

When should we just give up on Cloudflare? Seems like this just keeps happening. Like some kind of backdoor triggered willy nilly, Hmmm?

venturecruelty2 months ago

Now. Right now. Seriously, stop using this terrible service. We also need to change the narrative that step 1 in every tutorial is "sign up for Cloudflare". This is partly a culture problem.

system22 months ago

Is it just me, or did Cloudflare outages increase since LLM "engineers" were hired remotely? Do you think there is a correlation?

roguecoder2 months ago

They've always been flakey. At least these only impacted their own customers instead of taking down the internet.

dzonga2 months ago

Before, Cloudflare suffered an outage due to React's useEffect; now it's React again, while trying to mitigate security issues around React Server Components.

At some point they'll have to admit this React thing ain't working and just use classic server-rendered pages, since their dashboards are simple toggle controls.

RA_Fisher2 months ago

As a reliability statistician (and web user!), I'd love to see Cloudflare investing in reliability statistics. :)

mmmlinux2 months ago

Messing around on a Friday? Brave.

chickensong2 months ago

You don't really want security updates waiting around on a luxury schedule.

roguecoder2 months ago

Or overworked.

We can deploy on Fridays. We don't, because we aren't donating our time to the shareholders.

orphea2 months ago

If you're afraid of deploying on Friday, you're doing it wrong.

ken472 months ago

I have to wonder if there is a relation to the rising prevalence of coding LLMs.

MagicMoonlight2 months ago

If you had a 99.99% availability requirement, they would have already cost you a fortune.

borplk2 months ago

Every time they screw up they write an elaborate postmortem and pat themselves on the back. Don't get me wrong, better to have the postmortem than not. But at this point it seems like the only thing they are good at is writing incident postmortem blog posts.

lofaszvanitt2 months ago

How come crumbling, rotten-to-the-core Lua is still used? :D

dwa35922 months ago

I am not sure if it's just me, or if there have been too many outages this year to count. Is it the AI slop making it into production?

rurban2 months ago

Oh oh, looks like agentzh's code. A living legend

kachapopopow2 months ago

why does this seem oddly familiar (fail-closed logic)

rubatuga2 months ago

Honestly, a lot of these problems are because they don't test in a staging environment. Isn't this software engineering basics?

dev1ycan2 months ago

Stop vibe coding on critical infrastructure :)

sandos2 months ago

Is it just me, or should they have just reverted instead of making _another_ change as a result of the first one?

ALSO, it is very, very weird that they had not caught this seemingly obvious bug in proxy buffer size handling. It suggests that change nr 2, done in "reactive" mode to change nr 1 that broke things, HAD NOT BEEN TESTED AT ALL! Which is the core reason they should never have deployed it, but rather reverted to a known good state, then tested BOTH changes combined.

nish__2 months ago

No love lost, no love found.

arjie2 months ago

Classic. Things always get worse before they get better. I remember when Netflix was going through their annus horribilis, and AWS before that, and Twitter before that, and so on. Everyone goes through this. Good luck to you guys getting to FL2 quickly enough that this class of error reduces.

jgalt2122 months ago

I do kind of like how they are blaming React for this.

rudedogg2 months ago

I’m really sick of constantly seeing cloudflare, and their bullshit captchas. Please, look at how much grief they’re causing trying to be the gateway to the internet. Don’t give them this power

fidotron2 months ago

> This change was being rolled out using our gradual deployment system, and, as part of this rollout, we identified an increase in errors in one of our internal tools which we use to test and improve new WAF rules. As this was an internal tool, and the fix being rolled out was a security improvement, we decided to disable the tool for the time being as it was not required to serve or protect customer traffic.

Come on.

This PM raises more questions than it answers, such as why exactly China would have been immune.

skywhopper2 months ago

China is probably a completely separate partition of their network.

fidotron2 months ago

One that doesn't get proactive security rollouts, it would seem.

skywhopper2 months ago

I assume it was next on the checklist, or assigned to a different ops team.

roguecoder2 months ago

The deploys are very unlikely to be managed from the same system.

da_grift_shift2 months ago

It's not an outage, it's an Availability Incident™.

https://blog.cloudflare.com/5-december-2025-outage/#what-abo...

perching_aix2 months ago

You jest, but recently I also felt compelled to stop using the word (planned) outage where I work, because it legitimately creates confusion around the (expected) character of impact.

An outage is the nuclear-wasteland situation, which, given modern architectural choices, is rather challenging to manifest. Avoiding the word is face-saving, but also more correct.

aw16211072 months ago

From earlier in the very same blog post (emphasis added):

> This system does not perform gradual rollouts, but rather propagates changes within seconds to the entire fleet of servers in our network and is under review following the outage we experienced on November 18.

Uptrenda2 months ago

Can't believe one shitty website can take down most of the mainstream web.

gotekom9522 months ago

"They let the internet down"

blibble2 months ago

amateur level stuff again

theoldgreybeard2 months ago

This is total amateur shit. Completely unacceptable for something as critical as Cloudflare.

jchip3032 months ago

[dead]

alwaysroot2 months ago

[flagged]

dreamcompiler2 months ago

[flagged]

kosolam2 months ago

Some nonsense again. The level of negligence there is astounding. This is frightening, because this entity is daily exposed to a large portion of our personal data going over the wire, as well as business data. It's just a matter of time before a disaster occurs. Some regulatory body must take this into its hands right now.

kosolam2 months ago

Got furiously downvoted but no counter arguments presented. Says it all.

websiteapi2 months ago

I wonder why they cannot do a partial rollout. Like with the other outage, they had to do a global rollout.

usrnm2 months ago

I really don't see how it would've helped. In Go or Rust you'd just get a panic, which is in no way different.

denysvitali2 months ago

The article mentions that this Lua-based proxy is the old generation one, which is going to be replaced by the Rust based one (FL2) and that didn't fail on this scenario.

So, if anything, their efforts towards a typed language were justified. They just didn't manage to migrate everything in time before this incident - which is ironically a good thing, since this incident was caused mostly by a rushed change in response to an actively exploited vulnerability.

websiteapi2 months ago

Yes, but as the article states, why are they doing fast global rollouts?

denysvitali2 months ago

I think (and would love to be corrected) that this is the nature of their service. They probably push multiple config changes per minute to mitigate DDoS attacks. For sure the proxies have a local list of IPs that are blacklisted for a period of time.

For DDoS protection you can't really rely on multi-hour rollouts.
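
A speculative sketch of that last point, purely as illustration (this guesses at the design; all the names are made up): a per-server blocklist whose entries expire is only useful if new entries propagate faster than attackers rotate source IPs.

    use std::collections::HashMap;
    use std::net::IpAddr;
    use std::time::{Duration, Instant};

    // Guess at the shape of a per-server blocklist fed by central config pushes:
    // entries expire, so the fleet must receive new ones faster than attackers
    // rotate source IPs.
    struct Blocklist {
        entries: HashMap<IpAddr, Instant>, // IP -> expiry time
        ttl: Duration,
    }

    impl Blocklist {
        fn new(ttl: Duration) -> Self {
            Self { entries: HashMap::new(), ttl }
        }

        // Called whenever a centrally pushed update names this IP.
        fn block(&mut self, ip: IpAddr) {
            self.entries.insert(ip, Instant::now() + self.ttl);
        }

        fn is_blocked(&self, ip: &IpAddr) -> bool {
            self.entries
                .get(ip)
                .map(|expiry| Instant::now() < *expiry)
                .unwrap_or(false)
        }
    }

    fn main() {
        let mut list = Blocklist::new(Duration::from_secs(60));
        let attacker: IpAddr = "203.0.113.7".parse().expect("valid literal");
        list.block(attacker);
        assert!(list.is_blocked(&attacker));
    }

With 60-second entries like these, a config pipeline that takes hours to converge would leave most of the fleet permanently behind the attack.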

barbazoo2 months ago

> Customers that did not have the configuration above applied were not impacted. Customer traffic served by our China network was also not impacted.

Interesting.

flaminHotSpeedo2 months ago

They kinda buried the lede there, 28% failure rate for 100% of customers isn't the same as 100% failure rate for 28% of customers

jpeter2 months ago

Unwrap() strikes again

dap2 months ago

I guess you’re being facetious but for those who didn’t click through:

> This type of code error is prevented by languages with strong type systems. In our replacement for this code in our new FL2 proxy, which is written in Rust, the error did not occur.

skywhopper2 months ago

That bit may be true, but the underlying error of a null reference that caused a panic was exactly the same in both incidents.

roguecoder2 months ago

Yep: it is wild for them to claim that a strongly-typed language would have saved them when it didn't.

Relying on language features instead of writing code well will always eventually backfire.

dap2 months ago

You're right that you have to "write code well" to prevent this sort of thing. It's also true that Rust's language features, if you use them, can make this sort of mistake a compile-time error rather than something that only blows up at runtime under the wrong conditions. The problem with their last outage was that somebody explicitly opted out of the tool provided by the language. As you say, that's "not writing code well". But I think you're dismissing the value of the language feature in helping you write code well.
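
To make that concrete, a toy example (not Cloudflare's code) of the difference between handling the missing case and opting out of it:

    // The "value might be missing" case has to be handled explicitly...
    fn buffer_limit(configured: Option<u32>) -> u32 {
        match configured {
            Some(limit) => limit,
            None => 128 * 1024, // explicit fallback when the field is absent
        }
    }

    fn main() {
        assert_eq!(buffer_limit(None), 128 * 1024);
        // ...unless you opt out: `configured.unwrap()` also compiles, but it
        // panics at runtime when the value is None -- that's the opt-out
        // being described above.
    }

The language can't stop someone from writing the unwrap, but it does make the dangerous choice visible and greppable in review, which a silent nil access never is.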

rossjudson2 months ago

Relying on code ninja ego backfires way sooner, and way more often.

https://security.googleblog.com/2025/11/rust-in-android-move...

throwawaymaths2 months ago

this time in lua. cloudflare can't catch a break

RoyTyrell2 months ago

Or they're not thoroughly testing changes before pushing them out. As I've seen some others say, CloudFlare at this point should be considered critical infrastructure. Maybe not like power but dang close.

esseph2 months ago

My power goes out every Wednesday around noon and normally if the weather is bad. In a major US metro.

I hope cloudflare is far more resilient than local power.

gcau2 months ago

The 'rewrite it in lua' crowd are oddly silent now.

barbazoo2 months ago

How do you know?

infrcg2 months ago

[flagged]

jcmfernandes2 months ago

Did you really go through the trouble of creating an account just to spit trash? Damn!

lexoj2 months ago

Anyone know why Lua? Or is it perhaps a Redis script in Lua?

lexoj2 months ago

Figured it out; it's probably an nginx Lua module.

rvz2 months ago

Time to use boring languages such as Java and Go.