Kernel bugs hide for 2 years on average. Some hide for 20

281 points | 2 days ago | pebblebed.com
Fiveplus2 days ago

Before the "rewrite it in Rust" comments take over the thread:

It is worth noting that the class of bugs described here (logic errors in highly concurrent state machines, incorrect hardware assumptions) wouldn't necessarily be caught by the borrow checker. Rust is fantastic for memory safety, but it will not stop you from misunderstanding the spec of a network card or writing a race condition in unsafe logic that interacts with DMA.

That said, if we eliminated the 70% of bugs that are memory safety issues, the signal-to-noise ratio for finding these deep logic bugs would improve dramatically. We spend so much time tracing segfaults that we miss the subtle corruption bugs.

aw16211071 day ago

> It is worth noting that the class of bugs described here (logic errors in highly concurrent state machines, incorrect hardware assumptions)

While the bugs you describe are indeed things that aren't directly addressed by Rust's borrow checker, I think the article covers more ground than your comment implies.

For example, a significant portion (most?) of the article is simply analyzing the gathered data, like grouping bugs by subsystem:

    Subsystem        Bug Count  Avg Lifetime
    drivers/can      446        4.2 years
    networking/sctp  279        4.0 years
    networking/ipv4  1,661      3.6 years
    usb              2,505      3.5 years
    tty              1,033      3.5 years
    netfilter        1,181      2.9 years
    networking       6,079      2.9 years
    memory           2,459      1.8 years
    gpu              5,212      1.4 years
    bpf              959        1.1 years

Or by type:

    Bug Type         Count  Avg Lifetime  Median
    race-condition   1,188  5.1 years     2.6 years
    integer-overflow 298    3.9 years     2.2 years
    use-after-free   2,963  3.2 years     1.4 years
    memory-leak      2,846  3.1 years     1.4 years
    buffer-overflow  399    3.1 years     1.5 years
    refcount         2,209  2.8 years     1.3 years
    null-deref       4,931  2.2 years     0.7 years
    deadlock         1,683  2.2 years     0.8 years

And the section describing common patterns for long-lived bugs (10+ years) lists the following:

> 1. Reference counting errors

> 2. Missing NULL checks after dereference

> 3. Integer overflow in size calculations

> 4. Race conditions in state machines

All of which cover more ground than listed in your comment.

Furthermore, the 19-year-old bug case study is a refcounting error not related to highly concurrent state machines or hardware assumptions.

johncolanduoni1 day ago

It depends what they mean by some of these: are the state machine race conditions logic races (which Rust won’t trivially solve) or data races? If they are data races, are they the kind of ones that Rust will catch (missing atomics/synchronization) or the ones it won’t (bad atomic orderings, etc.).

It’s also worth noting that Rust doesn’t prevent integer overflow, and it doesn’t panic on it by default in release builds. Instead, the safety model assumes you’ll catch the overflowed number when you use it to index something (a constant source of bugs in unsafe code).
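
To make that concrete, here is a minimal sketch of the default behavior (my example, using standard cargo profiles; the behavior is governed by the `overflow-checks` profile flag, not by any safety guarantee):

    fn add_one(len: u32) -> u32 {
        // Panics in a default debug build; silently wraps in a default
        // release build (controlled by the `overflow-checks` profile flag).
        len + 1
    }

    fn main() {
        let n = add_one(u32::MAX);
        println!("{n}"); // prints 0 under default release settings

        // Checked arithmetic makes the failure mode explicit:
        assert_eq!(u32::MAX.checked_add(1), None);
    }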

I’m bullish about Rust in the kernel, but it will not solve all of the kinds of race conditions you see in that kind of context.

aw16211071 day ago

> are the state machine race conditions logic races (which Rust won’t trivially solve) or data races? If they are data races, are they the kind of ones that Rust will catch (missing atomics/synchronization) or the ones it won’t (bad atomic orderings, etc.).

The example given looks like a generalized example:

    spin_lock(&lock);
    if (state == READY) {
        spin_unlock(&lock);
        // window here where another thread can change state
        do_operation();  // assumes state is still READY
    }

So I don't think you can draw strong conclusions from it.

> I’m bullish about Rust in the kernel, but it will not solve all of the kinds of race conditions you see in that kind of context.

Sure, all I'm trying to say is that "the class of bugs described here" covers more than what was listed in the parentheses.

jiggawatts1 day ago

The default Mutex struct in Rust makes it impossible to modify the data it protects without holding the lock.

"Each mutex has a type parameter which represents the data that it is protecting. The data can only be accessed through the RAII guards returned from lock and try_lock, which guarantees that the data is only ever accessed when the mutex is locked."

Even if used with more complex operations, the RAII approach means that the example you provided is much less likely to happen.
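
To make the contrast with the C snippet upthread concrete, a minimal sketch (mine, not from the article or the kernel bindings):

    use std::sync::Mutex;

    #[derive(PartialEq)]
    enum State { Ready, Busy }

    fn do_operation(state: &mut State) {
        // The &mut borrow comes out of the guard, so the lock is
        // necessarily still held while this runs.
        *state = State::Busy;
    }

    fn main() {
        let state = Mutex::new(State::Ready);

        // The data is only reachable through the guard returned by lock(),
        // so the "unlock, then act on stale state" window from the C
        // example can't be written without an explicit drop(guard) first.
        let mut guard = state.lock().unwrap();
        if *guard == State::Ready {
            do_operation(&mut guard);
        }
    } // guard dropped here, lock released

Of course you can still drop the guard early and re-acquire it later, which reintroduces the same check-then-act logic bug, so this narrows the class of mistakes rather than eliminating it.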

materielle1 day ago

I don’t think that the parent comment is saying all of the bugs would have been prevented by using Rust.

But in the listed categories, I’m equally skeptical that none of them would have benefited from Rust even a bit.

johncolanduoni23 hours ago

That’s not my point - just that “state machine races” is too broad a category to say much about how Rust would or wouldn’t help.

RealityVoid23 hours ago

Why doesn't it surprise me that the CAN bus driver bugs have the longest average lifetime?

apaprocki1 day ago

> Furthermore, the 19-year-old bug case study is a refcounting error

It always surprised me how the top-of-the-line analyzers, whether commercial or OSS, never really implemented C-style reference count checking. Maybe someone out there has written something that works well, but I haven’t seen it.

johncolanduoni1 day ago

This is, I think, an under-appreciated aspect, both for detractors and boosters. I take a lot more “risks” with Rust, in terms of not thinking deeply about “normal” memory safety and prioritizing structuring my code to make the logic more obviously correct. In C++, modeling things so that the memory safety is super-straightforward is paramount - you’ll almost never see me store a std::string_view anywhere, for example. In Rust I just put &str wherever I please; if I make a mistake, I’ll know when I compile.

anon-39882 days ago

> It is worth noting that the class of bugs described here (logic errors in highly concurrent state machines, incorrect hardware assumptions) wouldn't necessarily be caught by the borrow checker. Rust is fantastic for memory safety, but it will not stop you from misunderstanding the spec of a network card or writing a race condition in unsafe logic that interacts with DMA.

Rust is not just about memory safety. It also has algebraic data types, RAII, among other things, which will greatly help in catching these kinds of silly logic bugs.
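
As a toy sketch of the ADT point (hypothetical device states, nothing from the article): leaving out a catch-all arm means that adding a new state later forces every match over the enum to be revisited.

    enum LinkState {
        Down,
        Negotiating,
        Up { speed_mbps: u32 },
    }

    fn describe(s: &LinkState) -> String {
        // No catch-all arm: if someone later adds a variant (say,
        // `Suspended`), this match stops compiling until it is handled.
        match s {
            LinkState::Down => "link down".to_string(),
            LinkState::Negotiating => "negotiating".to_string(),
            LinkState::Up { speed_mbps } => format!("up at {speed_mbps} Mb/s"),
        }
    }

    fn main() {
        let states = [
            LinkState::Down,
            LinkState::Negotiating,
            LinkState::Up { speed_mbps: 1000 },
        ];
        for s in &states {
            println!("{}", describe(s));
        }
    }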

JuniperMesos1 day ago

Yeah, Rust gives you much better tools to write highly concurrent state machines than C does, and most of those tools are in the type system and not the borrow checker per se. This is exactly what the Typestate pattern (https://docs.rust-embedded.org/book/static-guarantees/typest...) is good at modeling.
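
For readers who haven't seen it, a stripped-down sketch of the pattern (hypothetical `Idle`/`Ready` types, not taken from the linked book):

    // Each state is its own type; a transition consumes the old state and
    // returns the new one, so calling do_operation() on a device that
    // isn't ready is a compile error instead of a latent logic bug.
    struct Idle;
    struct Ready;

    impl Idle {
        fn prepare(self) -> Ready {
            Ready
        }
    }

    impl Ready {
        fn do_operation(self) -> Idle {
            // ...do the actual work here...
            Idle
        }
    }

    fn main() {
        let dev = Idle;
        // dev.do_operation(); // does not compile: no such method on `Idle`
        let dev = dev.prepare();
        let _dev = dev.do_operation();
    }

It doesn't, by itself, help when the real state can change behind your back (hardware, another CPU) - which is the part being argued about above - but it does rule out driving the state machine in the wrong order from within one thread of execution.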

the84721 day ago

The concurrent state machine example looks like a locking error? If the assumption is that it shouldn't change in the meantime, doesn't it mean the lock should continue to be held? In that case rust locks can help, because they can embed the data, which means you can't even touch it if it's not held.

john01dav22 hours ago

Rust has more features than just the borrow checker. For example, it has a more featured type system than C or C++, which a good developer can use to detect some logic mistakes at compile time. This doesn't eliminate bugs, but it can catch some very early.

wordisside21 hours ago

But unsafe Rust, which is generally more often used in low-level code, is more difficult than C and C++.

https://rust-unofficial.github.io/too-many-lists/

https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/

aw162110721 hours ago

> But unsafe Rust, which is generally more often used in low-level code, is more difficult than C and C++.

I think "is" is a bit too strong. "Can be", sure, but I'm rather skeptical that all uses of unsafe Rust will be more difficult than writing equivalent C/C++ code.

wordisside21 hours ago

[flagged]

kubb1 day ago

It’s hilarious that you feel the need to preemptively take control of the narrative in anticipation of the Rust people that you fear so much.

Is this an irrational fear, I wonder? Reminds me of methods used in the political discourse.

Bridged77561 day ago

People who make that kind of remark should be called out and shunned. The Rust community is tired of discrimination and being the butt of jokes. All the other inferior languages prey on its minority status, despite Rust being able to solve all their problems. I take offense at these remarks; I don't want my kids to grow up as Rustaceans in such a caustic society.

irishcoffee1 day ago

> It’s hilarious that you feel the need to preemptively take control of the narrative in anticipation of the Rust people that you fear so much.

> Is this an irrational fear, I wonder? Reminds me of methods used in the political discourse.

In a sad sort of way, I think it's hilarious that HN users have been so completely conditioned to expect Rust evangelism any time a topic like this comes up that they wanted to get ahead of it.

Not sure who it says more about, but it sure does say a whole lot.

kubb1 day ago

I don’t think evangelism is necessary anymore. Rust adoption is now a matter of time.

BobbyTables212 hours ago

I’ve seen too many embedded drivers written by well known companies not use spinlocks for data shared with an ISR.

At one point, I found serious bugs (crashing our product) that had existed for over 15 years. (And that was 10 years ago).

Rust may not be perfect, but it gives me hope that some classes of stupidity will either be avoided or made visible (like every function being unsafe because the author was a complete idiot).

pjc501 day ago

> race condition in unsafe logic that interacts with DMA

It's worth noting that if you write memory safe code but mis-program a DMA transfer, or trigger a bug in a PCIe device, it's possible for the hardware to give you memory-safety problems by splatting invalid data over a region that's supposed to contain something else.

mgaunard1 day ago

I don't think 70% of bugs are memory safety issues.

In my experience it's closer to 5%.

cogman101 day ago

I believe this is where that fact comes from [1]

Basically, 70% of high severity bugs are memory safety issues.

[1] https://www.chromium.org/Home/chromium-security/memory-safet...

saagarjha1 day ago

High severity security issues.

mgaunard1 day ago

Right, which is a measure which is heavily biased towards memory safety bugs.

stonemetal1220 hours ago

Using the data provided, memory safety issues (use-after-free, memory-leak, buffer-overflow, null-deref) account for 67% of their bugs. If we include refcount, it is just over 80%.
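
Spelling out the arithmetic from the bug-type table above:

    use-after-free + memory-leak + buffer-overflow + null-deref
      = 2,963 + 2,846 + 399 + 4,931 = 11,139 of 16,517 total  (~67%)
    adding refcount (2,209)         = 13,348 of 16,517 total  (~81%)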

IshKebab1 day ago

70% of security vulnerabilities are due to memory safety. Not all bugs.

tester7561 day ago

That's the figure that Microsoft and Google found in their code bases.

redeeman1 day ago

probably quite a bit less than 5%, however, they tend to be quite serious when they happen

mgaunard1 day ago

Only serious if you care about protecting from malicious actors running code on the same host.

redeeman19 hours ago

you don't? I would imagine people that run, for example, a browser would have quite an interest in that

nibman1 day ago

[dead]

ramon1561 day ago

You're fighting air

marcosdumay20 hours ago

Eh... Removing concurrency bugs is one of the main selling points for Rust. And algebraic types are a real boost for situations where you have lots of assumptions.

keybored1 day ago

No other top-level comments have since mentioned Rust[1] and TFA mentions neither Rust nor topics like memory safety. It’s just plain bugs.

The Rust phantom zealotry is unfortunately real.

[1] Aha, but the chilling effect of dismissing RIR comments before they are even posted...

staticassertion1 day ago

Yes, I saw this last night and was confused because only one comment mentioned Rust, and it was deleted I think. I nearly replied "you're about to prompt 1,000 rust replies with this" and here's what I woke up to lol

IshKebab1 day ago

Rust has other features that help prevent logic errors. It's not just C plus a borrow checker.

paulddraper1 day ago

Rust would prevent a number of bugs, as it can model state machine guarantees as well.

Rewriting it all in Rust is extremely expensive, so it won't be done (soon).

wiz21c1 day ago

Expensive because of: 1/ a rewrite is never easy, 2/ Rust is specifically tough for kernel/close-to-kernel code (because it catches errors and forces you to think about them for real, and because it makes some constructs (linked lists) really hard to implement)?

IshKebab1 day ago

Both I'd say. Rust imposes more constraints on the structure of code than most languages. The borrow checker really likes ownership trees whereas most languages allow any ownership graph no matter how spaghetti it is.

As far as I know that's why Microsoft rewrote TypeScript in Go instead of Rust.

wiz21c7 hours ago

I've been using rust for several years now and I like the way you explain the essence of the issue: tree instead of spaghetti :-)

However: https://www.reddit.com/r/typescript/comments/wbkfsh/which_pr...

so it looks like it's not written in Go :-)

lynx9723 hours ago

Thanks for raising this. It feels like evangelists paint a picture of Rust basically being magic which squashes all bugs. My personal experience is rather different. When I gave Rust a whirl a few years ago, I happened to play with mio for some reason I can't remember. Had some basic PoC code which didn't work as expected. So while not being a Rust expert, I am still too much of a fan of the scratch-your-own-itch philosophy, so I started to read the mio source code. And after 5 minutes, I found the logic bug. Submitted a PR and moved on. But what stayed with me was this insight: if someone like me can casually find and fix a Rust library bug, propaganda is probably doing more work than expected. The Rust craze feels a bit like Java. Just because a language baby-sits the developer doesn't automatically mean better quality. At the end of the day, the dev needs to juggle the development process. Sure, tools are useful, but overstating safety is likely a route better avoided.

nibman1 day ago

[dead]

DobarDabar1 day ago

[dead]

gjfr1 day ago

Interesting! We did a similar analysis on Content Security Policy bugs in Chrome and Firefox some time ago, where the average bug-to-report time was around 3 years and 1 year, respectively. https://www.usenix.org/conference/usenixsecurity23/presentat...

Our bug dataset was way smaller, though, as we unfortunately had to pinpoint all bug introductions ourselves. It's nice to see the Linux project uses proper "Fixes: " tags.

staticassertion19 hours ago

> It's nice to see the Linux project uses proper "Fixes: " tags.

Sort of. They often don't.

giamma1 day ago

Is the intention of the author to use the number of years bugs stay "hidden" as a metric of the quality of the kernel codebase, or of the performance of the maintainers? I am asking because at some point the article says "We're getting faster".

IMHO the fact that a bug hides for years can also be an indication that such a bug had low severity/low priority, and therefore that the overall quality is very good. Unless the time represents how long it takes to reproduce and resolve a known bug, but in that case I would not say that the "bug hides" in the kernel.

cogman101 day ago

> IMHO a fact that a bug hides for years can also be indication that such bug had low severity/low priority

Not really true. A lot of very severe bugs have lurked for years and even decades. Heartbleed comes to mind.

The reason these bugs often lurk for so long is that they very often don't cause a panic, which is why they can be really tricky to find.

For example, use-after-free bugs are really dangerous. However, in most code, it's a pretty safe bet that nothing dangerous happens when the use-after-free is triggered, especially if the pointer is used shortly after the free and dies shortly after it. In many cases, the erroneous read or write doesn't break anything.

The same is true of the race condition problems (which are some of the longest-lived bugs). In a lot of cases, you won't know you have a race condition because the contention on the lock is low, so the race isn't exposed. And even when it is, it can be very tricky to reproduce, as the race isn't likely to play out the same way twice.

turtletontine1 day ago

> …lurked for years and even decades. Heartbleed comes to mind.

I don’t know much about Heartbleed, but Wikipedia says:

> Heartbleed is a security bug… It was introduced into the software in 2012 and publicly disclosed in April 2014.

Two years doesn’t sound like “years or even decades” to me? But again, I don’t know much about Heartbleed so I may be missing something. It does say it was also patched in 2014, not just discovered then.

cogman1024 hours ago

This may just be me misremembering, but as I recall, the Heartbleed bug was ultimately down to a very complex macro system which supported multiple very old architectures. The bug, IIRC, was in the interaction between that old macro system and the new code, which is what made it hard to recognize as a bug.

Part of the resolution to the problem, I believe, was that they ended up removing a fair number of unsupported platforms. It also ended up spawning alternatives to OpenSSL, like BoringSSL, which tried to remove as much as possible to guard against this very kind of bug.

mrguyorama22 hours ago

Maybe you are thinking of ShellShock

https://en.wikipedia.org/wiki/Shellshock_(software_bug)

The bug was introduced into the code in 1989, and only found and exploited in 2014.

staticassertion1 day ago

> IMHO a fact that a bug hides for years can also be indication that such bug had low severity/low priority and therefore that the overall quality is very good.

It doesn't seem to indicate that. It indicates the bug just isn't in tested code or isn't reached often. It could still be a very severe bug.

The issue with longer lived bugs is that someone could have been leveraging it for longer.

galangalalgol1 day ago

Worst case is that it doesn't even cause correctness issues in normal use, only when misused in a way that is unlikely to happen unintentionally.

staticassertion1 day ago

I guess because I work in security the "unintentionally" doesn't matter much to me.

jackfranklyn1 day ago

The state machine race pattern resonates beyond kernel work. I've seen similar bugs hide for years in application code - transaction state edge cases that only trigger under specific sequences of user actions that nobody tests for.

The lifetimes are fascinating. Race conditions averaging 5.1 years vs null-deref at 2.2 years makes intuitive sense - the former needs specific timing to manifest, while the latter will crash obviously once you hit the code path. The ones that need rare conditions to trigger are the ones that survive longest.

pixl9722 hours ago

>hide for years in application code

Yea, it's pretty common. We had a customer years ago that was having a rare and random application crash under load. We never could figure out where it was from. Quite some time later a batch load interface was added to the app, and at the rate it pushed data in, the crash could be triggered reliably.

Often it's something else that's added or changed in the application that eventually makes the bug stand out.

MarleTangible1 day ago

One of the iOS 26 Core Audio bugs (CVE-2025-31200) is about keeping two different arrays synchronized with each other, and the mistaken assumptions made by trusting dimensional information which could be coming from the user.

https://youtu.be/nTO3TRBW00E

NewsaHackO2 days ago

It may be just my system, but the times look like hyperlinks but aren't for some reason. It is especially disappointing that the commit hashes don't link to the actual commit in the kernel repo.

Telaneo2 days ago

They're <strong> tags with color:#79635c on hover in the CSS. A really weird style choice for sure, but semantically they aren't meant to be links at all.

NewsaHackO1 day ago

I know, I am saying they should be links, as it is what one would expect from an article like this.

michaelcampbell23 hours ago

Thank goodness for reader mode. The transparent background where the text is with the wiggly line background is... challenging.

silver_sun1 day ago

Their section on "Dataset limitations" says that the study "Only captures bugs with Fixes: tags (~28% of fix commits)."

Just worth noting that it is a significant extrapolation from only "28%" of fix commits to assume that the average is 2 years.

tremon1 day ago

Why? A sample size of 28% is positively huge compared to what most statistical studies have to work with. The accuracy of an extrapolation is mostly determined by underlying sampling bias, not the amount of data. If you have any basis to suggest that capturing "only bugs with fixes tags" creates a skewed sample, that would be grounds to distrust the extrapolation, but simply claiming "it's only 28%" does not make it worth noting.

ValdikSS1 day ago

The grsecurity project has fixed many security bugs but did not contribute them back, as they're profiting from selling the patchset.

It's not uncommon for the bugs they found to be rediscovered 6-7 years later.

https://xcancel.com/spendergrsec

staticassertion19 hours ago

This implies (or states, hard to say) that they don't upstream specifically in order to profit. That is nonsense.

1. Tons of bugs are reported upstream by grsecurity historically.

2. Tons of critical security mitigations in the kernel were outright invented by that team. ASLR, SMAP, SMEP, NX, etc.

3. They were completely FOSS until very recently.

4. They have always maintained that they are entirely willing to upstream patches but that it's a lot of work and would require funding. Upstream has always been extremely hostile towards attempts to take small pieces of Grsecurity and upstream them.

woliveirajr1 day ago

But the patchset should use the same license as the original code, shouldn't it?

__bjoernd1 day ago

> as they're profiting from selling the patchset

Profiting from selling their patchset is not the whole story, though. grsec was public and free for a long time and there were many effects at play preventing the kernel from adopting it.

sedatk2 days ago

Firefox bugs stay in the open for that long.

steveklabnik2 days ago

One of my favorite Firefox bugs was one I don’t quite remember the details of, but it went something like this:

“There’s a crash while using this config file.” Something more complex than that, but ultimately a crash of some kind.

Years later, like 20 years later, the bug was closed. You see, they re-wrote the config parser in Rust, and now this is fixed.

That’s cool but it’s not the part I remember. The part I always think about is, imagine responding to the bug right after it was opened with “sorry, we need to go off and write our own programming language before this bug is fixed. Don’t worry, we’ll be back, it’s just gonna take some time.”

Nobody would believe you. And yet, it’s what happened.

nurettin1 day ago

To be fair, any rewrite could have fixed it, didn't have to wait for Rust.

Yossarrian221 day ago

No, Graydon Hoare took one look at the config code, went “fuck this” and decided to create a new language instead.

Xss31 day ago

But that take ruins all the intrigue of their comment... But you're spot on. They fabricated a story.

steveklabnik1 day ago

I didn’t say otherwise. Rust is not the point here.

mmooss1 day ago

All software has long-lived bugs. None are bug-free at any point in their existence, so it's almost inevitable. Have you seen Windows' bug tracker?

The anti-Firefox mob really is striving to take shots at it.

The point of the article isn't a criticism of Linux, but an analysis that leads to more productive code review.

redleader5521 hours ago

I don't think the problem is the kernel. Kernel bugs stay hidden because no one runs recent kernels.

My Pixel 8 runs a kernel from the 6.1 stable series, which was released more than 4 years ago. Yes, fixes get backported to it, but the new features in 6.2->6.19 stay unused on that hardware. All the major distros suffer from the same problem; most people are not running them in production.

Most hyperscalers are running old kernel versions onto which they do backports. If you go to Linux conferences you hear folks from big companies mentioning 4.xx, 3.xx kernels, in 2025.

dpc_012341 day ago

Might be obvious, but there are definitely a lot of biases in the data here. It's unavoidable. E.g. many bugs will not be detected, but they will be removed when the code is rewritten, so code that is refactored more often will show a lower age for fixed bugs. Components/subsystems that are heavily used will surface bugs faster. Some subsystems by their very nature can tolerate bugs more, while some by necessity will need to be more correct (like bpf).

a3w1 day ago

The kernel this speaks of is probably Linux. Does Windows have a similar turnaround time?

pixl9722 hours ago

I mean, yea.

Here is a device driver bug that was around for 11 years.

https://www.bitdefender.com/en-us/blog/hotforsecurity/google...

MORPHOICES1 day ago

Deep bugs, particularly in kernels, can go unnoticed for years, according to analyses I keep seeing. Decades at times.

That seems frightening at first. However, the more I consider it, the more it seems... predictable.

The mental model that I find useful:

Users discover surface bugs.

Deep bugs only appear in infrequent combinations.

For some bugs to show up, new context is required.

I've observed a few patterns:

Undefined behavior-related bugs are permanently hidden.

Logic errors are less important than uncommon hardware or timing conditions.

Long before they can be exploited, security flaws frequently exist.

I'm curious what other people think of this:

Do persistent bugs indicate stability or failure?

What typically leads to their discovery?

To what extent do you trust "well-tested" code?

saagarjha1 day ago

> Undefined behavior-related bugs are permanently hidden.

No, they are often found and fixed.

fsflover1 day ago

> To what extent do you trust "well-tested" code?

I don't, which is why I use Qubes OS providing security through compartmentalization.

hun31 day ago

Then the question becomes: to what extent do you trust Xen and Qubes RPC?

fsflover23 hours ago

I do have to somewhat trust Xen, but Qubes' isolation relies on hardware virtualization (VT-d), which statistically has far fewer security issues than Xen itself. Most Xen advisories do not affect Qubes: https://www.qubes-os.org/security/xsa/

sureglymop1 day ago

Only tangentially related but maybe someone here can help me.

I have a server which has many peripherals and multiple GPUs. Now, I can use vfio and vfio-pci to memory-map and access their registers in user space. My question is, how could I start with kernel driver development? And I specifically mean the dev setup.

Would it be a good idea to use vfio, with or without a VM, to write and test drivers? How best to debug, reload, and test changes to the code of an existing driver?

zkmon1 day ago

A bug is a piece of code that doesn't agree with requirements or architecture. The misalignment can not be attributed to code alone.

calebm18 hours ago

It's interesting to consider that the same phenomenon may also hold true for humanity's psychological software.

GaryBluto21 hours ago

What's with the odd scribbles in the background?

kmavm19 hours ago

It's an easter egg on the website that usually goes unnoticed. It's our first time on the front page of HN, so it's a little overutilized right now. Capital-C clears it.

eab-24 hours ago

I'd find this article a bit more compelling if it were used to find newly introduced bugs, instead of just using a holdout set.

Adrian-ChatLocl12 hours ago

Still probably a lot better than Windows.

ryukoposting24 hours ago

This is fascinating stuff, especially the per-subsystem data. I've worked with CAN in several different professional and amateur settings, and I'm not surprised to see it near the bottom of this list. That's not a dig against the kernel or the folks who work on it... more of a heavy sigh about the state of the industries that use CAN.

On a related note, I'm seeing a correlation between "level of hoopla" and a "level of attention/maintenance." While it's hard to distinguish that correlation from "level of use," the fact that CAN is so far down the list suggests to me that hoopla matters; it's everywhere but nobody talks about it. If a kernel bug takes down someone's datacenter, boy are we gonna hear about it. But if a kernel bug makes a DeviceNet widget freak out in a factory somewhere? Probably not going to make the front page of HN, let alone CNN.

pixl9722 hours ago

There is a general rule with bugs: the more devices they are on, the more apt they are to trigger.

A CAN driver on 10,000 machines total, running relatively fixed applications, is either going to trigger the bug right off the bat and then get worked around, or trigger the bug so rarely it won't be recognized as a kernel issue.

General purpose systems running millions and millions of units with different workloads are an evolutionary breeding ground for finding bugs and exploits.

jmyeet1 day ago

The lesson here is that people have an unrealistic view of how complex it is to write correct and safe multithreaded code on multi-core, multi-thread, asymmetric-core, out-of-order processors. This is no shade at kernel developers. Rather, I direct this at people who seem to think you can just create a thread pool in C++ and solve all your concurrency problems.

One criticism of Rust (and, no, I'm not saying "rewrite it in Rust", to be clear) is that the borrow checker can be hard to use whereas many C++ engineers (in particular, for some reason) seem to argue that it's easier to write in C++. I have two things to say about that:

1. It's not easier in C++. Nothing is. C++ simply allows you to make mistakes without telling you. Getting things correct in C++ is just as difficult as in any other language, if not more so due to the language complexity; and

2. The Rust borrow checker isn't hard or difficult to use. What you're doing is hard and difficult to do correctly.

This is why I favor cooperative multitasking and using battle-tested concurrency abstractions whenever possible. For example, the cooperative async-await of Hack and the model of a single thread responding to a request then discarding everything in PHP/Hack is virtually ideal (IMHO) for serving Web traffic.

I remember reading about Google's work on various C++ tooling, including Valgrind, and that they exposed concurrency bugs in their own code that had lain dormant for up to a decade. That's Google, with thousands of engineers and some very talented engineers at that.

marcosdumay19 hours ago

> The Rust borrow checker isn't hard or difficult to use. What you're doing is hard and difficult to do correctly.

There are entire classes of structures that, no, aren't hard to do properly, but that the borrow checker makes artificially hard due to design limitations that are known to be sub-optimal.

No, doubly linked lists and partially editable data structures aren't inherently hard. It's a Rust limitation that a piece of code can't take enough ownership of them to edit them safely.

wordisside24 hours ago

> It's not easier in C++. Nothing is.

The implementations of sort in Rust are filled with unsafe.[0]

Another example is that of doubly linked lists.[1] It is possible to implement a doubly linked list correctly in C++ without much trouble. In Rust, it can be significantly more challenging.

In C++, pointers are allowed to alias if their types are, roughly speaking, compatible. In Rust, there are stricter rules, and getting those rules wrong in an unsafe block, or in code outside unsafe blocks that code inside unsafe blocks relies on, will result in broken memory safety.
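
A tiny sketch of the kind of rule difference being referred to (my example; it compiles, but Miri rejects it as undefined behavior under Rust's aliasing model, i.e. Stacked/Tree Borrows):

    fn main() {
        let mut x = 0u32;
        let p: *mut u32 = &mut x;

        unsafe {
            let a = &mut *p; // first &mut to x
            let b = &mut *p; // second &mut to the same place while `a` is live
            *b += 1;
            *a += 1; // UB in Rust: `a` was invalidated when `b` was created
        }

        // The equivalent pair of (same-typed, non-restrict) uint32_t*
        // pointers would be fine in C or C++.
        println!("{x}");
    }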

This has been discussed by others.[2]

Based on that, do you agree that there are algorithms and data structures that are significantly easier to implement efficiently and correctly in C++ than in Rust? And thus that you are being completely wrong in your claim?

[0] https://github.com/rust-lang/rust/blob/main/library/core/src...

[1] https://rust-unofficial.github.io/too-many-lists/

[2] https://chadaustin.me/2024/10/intrusive-linked-list-in-rust/

aw162110721 hours ago

> The implementations of sort in Rust are filled with unsafe.

Strictly speaking, the mere presence of `unsafe` says nothing on its own about whether "it" is easier in C++. Not only does `unsafe` on its own say nothing about the "difficulty" of the code it contains, but that is just one factor of one side of a comparison - very much insufficient for a complete conclusion.

Furthermore, "just" writing a sorting algorithm is pretty straightforwards both in Rust and C++; it's the more interesting properties that tend to make for equally interesting implementations, and one would need to procure Rust and C++ implementations with equivalent properties, preferably from the same author(s), for a proper comparison.

Past research has shown that Rust's current sorting algorithms have different properties than C++ implementations from the time (e.g., the "X safety" results in [0]), so if nothing substantial has changed since then there's going to be some work to do for a proper comparison.

Edit: forgot to add the reference [0]: https://github.com/Voultapher/sort-research-rs/blob/main/wri...

wordisside21 hours ago

[flagged]

wordisside24 hours ago

Also, the Linux kernel developers turned off strict aliasing in the C compilers they use, because they found strict aliasing too difficult. Yet unsafe Rust has more difficult rules than even strict aliasing in C and C++.

aw162110721 hours ago

> Also, the Linux kernel developers turned off strict aliasing in the C compilers they use, because they found strict aliasing too difficult.

I'm not sure "they found strict aliasing too difficult" is an entirely correct characterization? From this rather (in)famous email from Linus [0]:

    The fact is, using a union to do type punning is the traditional AND
    STANDARD way to do type punning in gcc. In fact, it is the
    *documented* way to do it for gcc, when you are a f*cking moron and
    use "-fstrict-aliasing" and need to undo the braindamage that that
    piece of garbage C standard imposes.

    [snip]

    This is why we use -fwrapv, -fno-strict-aliasing etc. The standard
    simply is not *important*, when it is in direct conflict with reality
    and reliable code generation.

    The *fact* is that gcc documents type punning through unions as the
    "right way". You may disagree with that, but putting some theoretical
    standards language over the *explicit* and long-time documentation of
    the main compiler we use is pure and utter bullshit.

[0]: https://lkml.org/lkml/2018/6/5/769

wordisside21 hours ago

[flagged]

eulgro2 days ago

From the stats we see that most bugs effectively come from the limitations of the language.

Impressive results on the model; I'm surprised they improved it with very simple heuristics. Hopefully this tool will be made available to the kernel developers and integrated into the workflow.

snvzz2 days ago

Millions of lines of code, all running in supervisor mode.

One bug is all it takes to compromise the entire system.

The monolithic UNIX kernel was a good design in the 60s; Today, we should know better[0][1].

0. https://sel4.systems/

1. https://genode.org/

tlb1 day ago

My conclusion is that microkernels offer some protection from random reboots, but not much against hacking.

Say the USB system runs in its own isolated process. Great, but if someone pwns the USB process they can change disk contents, intercept and inject keystrokes, etc. You can usually leverage that into a whole system compromise.

Same with most subsystems: GPU, network, file system process compromises are all easily leveraged to pwn the whole system.

bawolff2 days ago

Year of HURD on the desktop?

calvinmorrison1 day ago

Highly unrealistic rewrite disease

josefx1 day ago

Of course by now processor manufacturers decided that blowing holes into the CPUs security model to make it go faster was the way to go. So your micro kernel is stuck on a hardware security model that looks like swiss cheese and smells like Surströmming.

__bjoernd1 day ago

How are SEL4 and Genode going for you in your day-to-day compute usage?

windowssuperfi2 days ago

Yeah, cause Windows is amazing. Or maybe macOS? Ignore their FreeBSD parts, of course.

DowsingSpoon2 days ago

Yes. As far as kernels go, NT was pretty damn good.

So is Mach, by the way, if you can afford the microkernel performance overhead.

johncolanduoni1 day ago

Mach is not a very good microkernel at all, because the overhead is much higher than necessary. The L4 family’s IPC design is substantially more efficient, and that’s why they’re used in actual systems. Fuchsia/Zircon have improved on the model further.

Someone will of course bring up XNU, but the microkernel aspect of it died when they smashed the FreeBSD kernel into the codebase. DriverKit has brought some userspace drivers back, but they use shared memory for all the heavy lifting.

heavyset_go1 day ago

XNU monolith-ized itself over time, even over some microkernel-esque boundaries.

dundarious2 days ago

If you include all the drivers too (which surely makes the comparison more accurate), is that still the case?

cosmic_cheese2 days ago

Apple at least has been making a concerted effort to kick more macOS/iOS functionality out into userland in the past several years.

pjmlp1 day ago

Just like Windows since Vista.

lifetimerubyist2 days ago

NT is actually a pretty good kernel. NTFS and the userland is what is shit.

IcePic1 day ago

I think NTFS gets a bit of crap from the OS above it adding limitations. If you read up on what NTFS allows, it is far better than what Windows and Explorer let you do with it.

edoceo1 day ago

Userland peaked in Windows 2000

speed_spread2 days ago

NTFS is a beast of a filesystem and has been nothing but solid for 25+ years. The performance grievances ignore the guarantees that NTFS offers vs many antiquated POSIX filesystems.

esseph2 days ago

Imagine if no one outside a select circle ever got to examine the code.

immibis2 days ago

Everything is open source if you're skilled with Ghidra.

We call AI models "open source" if you can download the binary and not the source. Why not programs?

KK7NIL2 days ago

> We call AI models "open source" if you can download the binary and not the source.

Who's "we"? There's been quite a lot of pushback on this naming scheme from the OSS community, with many preferring the term "open weights".

serf2 days ago

>We call AI models "open source" if you can download the binary and not the source. Why not programs?

the weights of a model aren't equivalent to the binary output of source code, no matter how you try to stretch the metaphor.

>why not

because we aren't beholden to change all definitions and concepts because some guy at some corp said so.

immibis1 day ago

Unless that corp is OSI, right?

heavyset_go1 day ago

Binaries and AI models can be inscrutable. They're meant to be interpreted by machines.

We want human readable, comprehensible, reproducible and maintainable sources at minimum when we say open source.

dspillett24 hours ago

North Korea is called a “democratic people's republic”. Just because one thing that really isn't <whatever> is called <whatever> by the people in control of it doesn't mean that it is, or that incorrectly calling other things <whatever> is correct.

burnt-resistor1 day ago

Speaking of nasty kernel bugs although on another platform, there's a nasty one in either Microsoft's Win 11 nwifi.sys handling of deadlock conditions or Qualcomm's QCNCM865 FastConnect 7800 WCN785x driver that panics because of a watchdog failure in nwifi!MP6SendNBLInternal+0x4b guarded by a deadlocked ndis!NdisAcquireRWLockRead+0x8b. It "BSODs" the system rather than doing something sane like dropping a packet or retransmitting.

Am I the only unreasonable maniac who wants a very long-term stable, seL4-like capability-based, ubiquitous, formally-verified μkernel that rarely/never crashes completely* because drivers are just partially-elevated programs sprinkled with transaction guards and rollback code for critical multiple resource access coordination patterns? (I miss hacking on MINIX 2.)

* And never need to reboot or interrupt server/user desktop activities because the core μkernel basically never changes since it's tiny and proven correct.

YouAreWRONGtoo1 day ago

[dead]

maximgeorge1 day ago

[dead]