Recent: Copy Fail - https://news.ycombinator.com/item?id=47952181 - April 2026 (466 comments)
For context, the author of the linked post, Sam James, is a Gentoo developer.
Anyway, this is a disaster. It was extremely irresponsible to share the exploit with the world before the distributions shipped the fix. Who knows how many shared hosting providers were hacked with this.
It's also worrying that it seems there's no communication between the kernel security team and distribution maintainers. One would hope that the former would notify the latter, but apparently it's the responsibility of whoever finds the vulnerability.
> Note that for Linux kernel vulnerabilities, unless the reporter chooses to bring it to the linux-distros ML, there is no heads-up to distributions.
Why would they imply it is incumbent on the reporter to liaise with distributions? That seems to assume a high level of familiarity with the Linux project. Vulnerability reporters shouldn’t be responsible for directly working with every downstream consumer of the Linux kernel; what’s the limiting principle there? Should the reporter also be directly talking to all device manufacturers that use Linux on their machines?
IMO reporter did more than enough by responsibly disclosing it to linux and waiting for a patch to land.
Aren’t there people in the linux project itself with authority over and responsibility for security vulnerabilities? One would think they would be the ones notifying downstream distros…
Especially since the reporter is explicitly asked not to notify the distro teams first.
https://docs.kernel.org/process/security-bugs.html
> As such, the kernel security team strongly recommends that as a reporter of a potential security issue you DO NOT contact the “linux-distros” mailing list UNTIL a fix is accepted by the affected code’s maintainers and you have read the distros wiki page above and you fully understand the requirements that contacting “linux-distros” will impose on you and the kernel community.
The kernel team has been at odds with the CVE process and the oss-security community about this stuff for many, many years now. It's a big part of why the kernel team established a CNA and started flooding CVE notifications; they don't believe that security problems are different than non-security problems, and refuse to establish norms or policies based on the idea that they are.
It's such a bizarre viewpoint. I wonder when Linus will see sense.
IMO it's pretty obviously not a view that they seriously hold, it's just one of those technical justifications people come up with to avoid admitting something they don't want to admit - in this case that Linux has a poor security track record.
The reporter took time to check and mention on their website specific distributions Ubuntu/RHEL/SUSE. One would have thought reporting to security teams of at least those would be responsible.
“One” would have thought? Can you point to a written policy that says that’s how it should be?
No, nor can I point to a written policy that states one should cover one’s mouth when they cough.
Everyone involved here failed to do the right thing, and hiding behind the lack of written words is weak sauce.
The tenets of decency don’t need to be written down.
> If you can't write it down, why would you expect it to be universal and enforceable?
and this is the problem. It used to be the case that if you were smart enough to find an exploit you were also smart enough to realise what would happen if you irresponsibly disclosed it. I guess these tools have made that pattern no longer apply.
different cultures have different views on disclosing vulnerabilities to distros before the public?
There is little difference in culture here. Nearly all open source work is done in English.
Sure, maybe it's not a _requirement_, but now we're all in more pain because the reporters are more interested in Fame than Safe Remediation.
No, you're in more pain, but other defenders with different postures benefit from having faster and fuller disclosure.
The reporter made a website explicitly calling out Ubuntu, RedHat, Amazon, and SUSE but didn’t notify them, and you think that’s reasonable? That they might not have known those distributions are downstream from the kernel team?
If you notify the kernel and they ship a fix, it seems reasonable to expect that they will communicate the fix to the distros.
I see this as an organizational failure of the Linux ecosystem. There should be better communication between distro and kernel development.
What is the heuristic for who should get the heads up? Should they notify Amazon but not Google simply because they named Amazon Linux in the report? Seems to me the answer to my first question gets messy fast.
it's trivial to find out how to report a security issue like this to Linux distros.
Google search: https://share.google/aimode/eihDKXZJy94Z5lC1p
and it's beyond me how one could not think of doing this and instead expose everyone and their neighbor to this exploit up front.
I'm certain this is even a felony in some jurisdictions, and rightfully so.
Agree it's not a good look for these folks, notwithstanding that disclosure is mostly theater.
Stop blaming the reporter. Start asking kernel to fix their process. Linux kernel is no longer a toy project, it has full time employees employed by various companies. They should have handled notifying distributions. Not some rando.
Just for what it's worth, I just pushed an eBPF-based workaround for people who are running kernels in which AF_ALG is linked directly into the kernel and not as a module: https://github.com/Dabbleam/CVE-2026-31431-mitigation
I am running this in production right now and it mitigates the attack, with no unexpected side-effects as far as I can see.
`nosuid` and probably `nodev` should IMO be the default filesystem mount options. `/dev` is already a special devtmpfs and the initrd minimal /dev can just explicitly mount the initrd tmpfs rootfs with `dev` and `suid` if necessary.
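As a sketch of what that could look like in `/etc/fstab` (the devices and mount points below are placeholders, not a recommendation for any specific distro):

```
# /etc/fstab — illustrative entries only; UUIDs and mount points are examples
UUID=xxxx-root   /              ext4  defaults                      0 1
UUID=xxxx-home   /home          ext4  defaults,nosuid,nodev         0 2
UUID=xxxx-data   /mnt/external  ext4  defaults,nosuid,nodev,noexec  0 2
```

An already-mounted filesystem can be hardened on the fly with `mount -o remount,nosuid,nodev /mnt/external`.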
Letting SUID binaries just "exist" anywhere is a stupendous security issue. What if you mount some external storage medium, how are you to verify that none of the SUID binaries on that block device are malicious.
Additionally, this exploit appears to only work if the user executing the SUID binary can also read the SUID binary. There's no reason for non-root users to have read on a SUID binary.
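For illustration (the search path is an assumption), GNU find can flag SUID binaries that are also world-readable, since `-perm -4004` requires at least the setuid bit plus the other-read bit:

```shell
# List SUID binaries under /usr/bin that any user can also read.
# -perm -4004: setuid bit (4000) AND other-read (0004) both set.
find /usr/bin -type f -perm -4004 2>/dev/null
```

A read-protected SUID binary would instead carry a mode like 4711 (`stat -c '%a %n' /usr/bin/passwd` shows the octal mode).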
NixOS does this correctly. There are no SUID binaries in the normal package installation directory `/nix/store`, and since packages don't leak outside of it, `nosuid` can safely be used on all other mountpoints. The exception is a single-purpose `/run/wrappers.$hash` directory that safely contains nothing but execute-only SUID wrappers.
While I hate suid as much as the next person, it's really not the problem here.
The bug that is being exploited gives you basically arbitrary page cache poisoning. At that point it's already game over. Patching a suid program is maybe the easiest way to get a root shell from that but far from the only.
Without read permissions you cannot execute the binary, that would not make any sense.
To execute the binary it needs to be read from disk and loaded into memory.
In fact if you have read permissions but not executable permissions on a specific binary then you can still execute it by calling the linker directly /bin/ld.so.1 /path/to/binary (the linker will read and load the binary and then jump to the entry point without an exec() call)
loader
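A quick sketch of that trick (the loader path is an assumption; on x86-64 glibc systems it is commonly `/lib64/ld-linux-x86-64.so.2`, and it varies by distro and architecture):

```shell
# Copy a binary and strip its execute bits, keeping it readable.
cp /bin/true ./true-copy
chmod 0644 ./true-copy

# Direct exec now fails, but handing the file to the dynamic loader
# still runs it: the loader reads and maps the binary itself and then
# jumps to the entry point, so no exec() on the file is needed.
/lib64/ld-linux-x86-64.so.2 ./true-copy && echo "ran without +x"
```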
Neither was NixOS.
https://discourse.nixos.org/t/is-nixos-affected-by-copy-fail...
Hey Xint Code / tylerni7 <https://news.ycombinator.com/threads?id=tylerni7>, maybe you should improve your disclosure process as well? Maybe make it mandatory for users of your tool?
they disclosed 30 days after the patch was merged in the thing they reported to.
it's the same disclosure policy as google's project zero, and several other major players, so you should probably be trying to ping a lot more people
reporters should not be responsible for finding out and individually reporting to every downstream consumer. blame the kernel security team, who is in a much better position to coordinate notifications to individual distro security teams.
The Bleeping Computer link below mentions a potential remedy until a patch is ready.
https://www.bleepingcomputer.com/news/security/new-linux-cop...
This workaround only applies to kernels with the impacted code compiled as a module. RHEL, Fedora, and Gentoo (we use a modified Fedora config) all are configured to build this in directly. Without a patch or config change (as Sam from Gentoo was alluding to), those distributions remain vulnerable.
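One way to check which case your running kernel is in (paths are assumptions; some distros expose the config at `/proc/config.gz` rather than under `/boot`):

```shell
# "=m" means the AEAD algif code is a loadable module (a modprobe
# blacklist suffices); "=y" means it is built in, so the
# initcall_blacklist boot parameter is needed instead.
grep 'CONFIG_CRYPTO_USER_API_AEAD=' "/boot/config-$(uname -r)" 2>/dev/null \
  || zcat /proc/config.gz 2>/dev/null | grep 'CONFIG_CRYPTO_USER_API_AEAD='
```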
There was some discussion on the GitHub issues about workarounds to disable it, even though it is baked in.
https://github.com/theori-io/copy-fail-CVE-2026-31431/issues...
https://github.com/theori-io/copy-fail-CVE-2026-31431/issues...
This worked as a mitigation on distros with the module compiled into the kernel: https://gist.github.com/m3nu/c19269ef4fd6fa53b03eb388f77464d...
Basically: sudo grubby --update-kernel=ALL --args=initcall_blacklist=algif_aead_init
sudo reboot
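After rebooting, a hypothetical sanity check that the parameter actually reached the kernel:

```shell
# The booted kernel's command line should now carry the blacklist argument.
grep -q 'initcall_blacklist=algif_aead_init' /proc/cmdline \
  && echo "algif_aead init blacklisted" \
  || echo "parameter not active"
```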
F44 is safe as the kernel is greater than 6.18.22
The potential remedy doesn't work on RedHat and derivatives because the affected code is not a module there but statically compiled in.
This was not disclosed to stagex, and I expect the same is true for a lot of Linux distros. Thankfully we were already on kernel 7.0, so not impacted.
Ubuntu has patches out, tested before and after patching.
`initcall_blacklist` is a thing.
huh, somehow seeing people not using AI for their work is a wow moment which i cherish a lot these days
i have no problem with disclosing a vulnerability 30 days after its patched in the thing you reported to. (in fact, for those unaware, this is the same policy that google's project zero uses: "90+30" https://projectzero.google/vulnerability-disclosure-policy.h...)
the real problem is:
>It's also worrying that it seems there's no communication between the kernel security team and distribution maintainers.
the reporter should not be the one responsible for reporting separately to every single downstream of the thing they found a vuln in.
what should be happening, as you allude to, is a communication channel between the kernel security team and distribution maintainers. they are in a much better position to coordinate and communicate with the maintainers than random reporters are.
the minute the patch landed in the kernel, a notification should have gone out from the kernel team to a curated list of distro security folk that communicated the importance of the patch, and that the public disclosure would be in 30 days.
If the maintainers were unresponsive, sure -- but it seems slightly hard to buy that a responsible reporter trying to make a big splash and a good impression wouldn't first check "did this make it out to the distros?" before making sysadmin's days real shitty. At which point, they should have realized that a mistake was made.
it's an industry standard disclosure process. 90 days after reporting, or 30 days after the patch lands, the vuln is disclosed.
the linux kernel team is in a 10000% better position to communicate to and coordinate their downstreams. it seems completely backwards to me to suggest that the reporter should be responsible for figuring out every possible downstream and opening up separate reports to each of them.
the kernel team should have a process/channel to say "this is important! disclosure is in 30 days" that is received by distro security teams. because this is not the first or last time the kernel will have a local privilege escalation. hoping that every reporter, forever in the future, will take the onus on themselves is a recipe for disappointment.
The disclosure was more about marketing than security. From the disclosure page:
> Is your software AI-era safe?
> Copy Fail was surfaced by Xint Code about an hour of scan time against the Linux crypto/ subsystem. [...]
> [Try Xint Code]
More chaos makes their product seem even more attractive.
Your advertising for them on HN would help them too, I bet.
Does it? Now that I see their name again in this context they're blacklisted for life.
hope you are also blacklisting google's project zero, and practically every other major player in the vulnerability reporting space, as all use roughly the same bog standard 90+30 policy.
this was a failure of the kernel security team, and their stance on communicating security issues with their downstreams.
Same. They do become famous, but not in a wholly positive way.
If they want to be seen as responsible rather than opportunistic, then yeah, they should do a proper coordinated disclosure.
Sure, they have no legal obligation to disclose, but we all also have no legal obligation to buy their services. Blacklisting bad actors like this is the right move to discourage this kind of behavior.
Unfortunately this is correct. As a security researcher I have set millions in profit on fire by reporting vulns to projects that offer no bounties instead of selling to the highest bidder. I keep doing it because it is the right thing to do, but I would not blame someone who needs to feed their family for making a different choice.
We must get public funds to reward ethical disclosure of big impact vulns like this.
> are free to sell 0day for profit.
This is not true in many jurisdictions.
I'm pretty sure they have a legal obligation in most jurisdictions not to sell 0days for profit.
And they absolutely have a moral obligation to do things in a way to minimize damage and impact to other people's systems. (I'm not saying "responsible disclosure" is the correct way to do that, but hoarding vulnerabilities and exploits and selling them to the highest bidder certainly isn't.)
This is how society needs to work.
> Researchers are under no obligation to engage in coordinated disclosure and are free to sell 0day for profit.
Uh... no? If you mean legally, some people might, depending on jurisdiction. But also, ethically? yes, researchers are ethically obligated to disclose responsibly.
> Just fyi.
...
> Be glad it was disclosed at all. Be glad a patch was available prior to release.
I am glad that a patch was available. Equally I can be glad that the linux community is strong enough to respond quickly, while also being angry that this person behaves unethically.
Likewise, when people in my industry behave poorly, or unethically; I'm now the person ethically obligated to both point it out, and condemn it. Not to become an apologist demanding I should be happy watching bad things happen, when much of the fallout could have been prevented with a bit less incompetence and ignorance.
mmmmmm, no it would seem like they are absolutely under a social obligation to not do that.
Sure, but doing so makes you a scumsucking shitbag.
So does justifying said scumsucking behavior.
They should have a legal obligation to engage in coordinated/responsible disclosure, and it should be a crime to sell or disclose a 0day to anyone other than a state-designated security organization or the vendor/provider.
If it won’t be handled through criminal law then it’ll be handled through civil litigation: Anyone who was exploited as a result of this disclosure should sue the discloser for contributing to the damage they’ve suffered.
Same. I did not know who they were, but now they have been named and shamed. Not every publicity is good.
Yes, exactly. Name and shame.
> It was extremely irresponsible
As a user and admin I disagree. Makes one appreciate what a masterful bit of lexical-engineering “Responsible” Disclosure is, kinda like “Secure” (from me, not for me) Boot — “Responsible” Disclosure is 100% about reputation-management for the various corporation/foundation middleman entities sitting between me and my computer.
Those groups don't care that my individual computer is vulnerable but about nobody being able to say “RHEL is vulnerable” or “Ubuntu is vulnerable”. The vulnerability exists for me either way, and I'd rather have the chance to know about it and minimize risk than to be surprised by the fix and hope nothing bad happened in that meantime.
Immediate public disclosure is the only choice that isn't irresponsible as far as I'm concerned.
So if I found a vulnerability that lets hackers withdraw all the money in your account without a trail on where the money went, you'd be fine with them disclosing it to the public at the same time as the bank learns about it?
Even when there is no known use case of the attack (other than the security researcher's)?
> The vulnerability exists for me either way, and I'd rather have the chance to know about it and minimize risk
By the time you hear about it, the money could be gone because 1000 hackers heard about it from the researcher before you did.
> than to be surprised by the fix and hope nothing bad happened in that meantime.
Hope is not a good strategy here.
Yep, I'd be fine with that. My bank has insurance, and my money would be returned.
The banks cost of insurance goes up, cost of running an account goes up, how do we correct for this? offer worse accounts to customers...
Seeing your other (rightfully flagged) reply I want to tell you as a neutral party that yes this is missing the point of the analogy. You're basically saying "I would simply hit the brakes on the trolley". It's not that they're so hubristic they think it's impossible to legitimately disagree with their argument, it's that mentioning insurance is sidestepping their argument entirely. You're not addressing the general idea of getting hacked and suffering the consequences of the hack.
[flagged]
Respectfully, I don't think they're missing the point. Banking, as an institution, has its flaws, but deposit insurance isn't one of them. These vulnerabilities exist whether or not they follow specific disclosure rituals, and systems should be deployed with defense-in-depth so that one privilege-escalation flaw is a recoverable event. Inventing tortured counterfactual analogies doesn't change the basic thrust of the poster's point: the account is insured, so getting drained by an attacker is not a fatal problem. Of course people should still take steps to prevent that from happening, but that doesn't mean prevention is (or should be) the only cure.
"I, personally am not affected, and I don't care about anyone else so therefore there are no consequences"
> Immediate public disclosure is the only choice that isn't irresponsible as far as I'm concerned.
No, it's really not.
High severity vulnerabilities are responsibly handled by quietly neutralising them with subtle patches that do not reveal the vulnerability, waiting for those patches to distribute. Then patching or removing the root cause of the vulnerability (at which point opportunists will start to notice), and finally publicly disclosing it when there are already good mitigations in place.
Example: Spectre/Meltdown mitigations.
I've been asked to use this approach myself when reaching out to maintainers. Sometimes it's possible to directly fix the vulnerability as a "side effect" by making a legitimate adjacent change.
“The choice that maximizes potential damage isn’t irresponsible, because it means I can mitigate my own systems immediately.”
That’s what you’re saying here.
They're literally just restating the argument for full disclosure security. This is one of the oldest debates in information security.
Um, yes, everyone is expected to upgrade and reboot on a moment's notice. No policy or norm you come up with will change that.
(This bug does not technically require a reboot to mitigate).
What the heck is up with people today.
Using quotes around something where you’re actually doing a strawman paraphrase of another commenter you disagree with is bad form.
With immediate disclosure the provider can decide to shut down while it is fixed. Or to notify users and make it their decision. Or to be prepared with a diversified infra and switch over to a non-vulnerable path, e.g., BSDs are not affected by CopyFail
Those groups care about whether millions of computers are vulnerable, likely including your computer. If "immediate public disclosure" was done in all cases every vuln would be exploited and patches would be much lower quality. Shortening the disclosure timeline might be a good idea, 90 days is starting to feel long.
Millions of computers are still vulnerable. Not-knowing about it doesn't mean the vuln isn't there :p
The Venn diagram of mainstream distros and individual Linux users is virtually a circle.
Ubuntu/RHEL is vulnerable and so are most Linux users by extension.
The Linux kernel is not usable as a security boundary, so anyone who wants to do "shared hosting" and not be hacked needs to use something else, like gVisor or firecracker VMs
The only important system that uses it as a security boundary is Android and there is mitigated by the fact that APKs need user approval, plus strict SELinux and seccomp policy plus the GrapheneOS hardening, and in this case the mitigations succeeded (https://discuss.grapheneos.org/d/35110-grapheneos-is-protect...)
A LOT of websites are tenants on WHM/CPanel hosts. Not to mention how many agencies use it for their clients Wordpress sites.
I'm quite sure there are many application hosting providers which rely on container runtimes such as runC (the default runtime of containerd/Docker) and a shared kernel between users.
> Who knows how many shared hosting providers were hacked with this.
I'd consider a shared hoster which allows users to run their own (native) code and doesn't use VMs for tenant isolation extremely irresponsible in 2026.
This is probably more common than you think. VMs are expensive, both in resources and cost (if you’re using something commercial). OS-level isolation (shared kernel, cgroups, namespaces) is used pervasively
Without taking a position on the disclosure mechanics: any hosting provider hacked with this was already playing to lose. It is not OK to run competing untrusted tenant workloads under a single shared kernel. Kernel LPEs are not rare. This was a particularly simple and portable one, but the underlying raw capability is a CNE commodity.
> Kernel LPEs are not rare. This was a particularly simple and portable one, but the underlying raw capability is a CNE commodity.
I absolutely 100% agree with this and I'm glad to see somebody saying it. Any system that is one LPE away from being compromised is already insecure.
Expecting people to do the right thing is a fundamental issue here. Why would you ever expect for all of vulnerabilities to be disclosed privately? There's very little actual incentive to do this.
I'm honestly unaware of what systems could be put in place to prevent this but expecting people to always do the right thing is fantasy level thinking. I mean I bet the disclosers thought they were doing the right thing, hence why it's a bad thing to rely on.
edit: spelling/grammar.
When the exploit is an advertisement for an exploit detection company, not doing the right thing is a bad look
The worst thing would be to exploit or sell it for profit. Instead of that, publicizing the exploit is closer to neutral–good in my books, that did trigger a really quick reaction from the different actors to patch their kernels and systems
Imagine how much quicker the distros would have reacted if they were given a heads up a month ago. But, sure, I guess kudos to this company for not being actively criminal, and merely bumblingly incompetent and overly eager to get their marketing pitch out the door.
I can accept (and welcome) disclosure before there are patches.
But publishing a working exploit together with the disclosure before patches are available is really really irresponsible, maybe even criminal.
And no, the proposed mitigations don't help with half of the distributions out there...
AIUI the exploit was fairly low-effort once you knew the vulnerability. So publishing one probably didn't change the landscape much.
> maybe even criminal
What’s your theory here? What crime?
What does that have to do with this comment thread?
Copying from the comment I was replying to:
> But publishing a working exploit together with the disclosure before patches are available is really really irresponsible, maybe even criminal
If you wanted to somehow make coordinated disclosure into a legal framework, that would be an interesting and complex project.
But it’s not the law anywhere I’m aware of today, and I’d not support it becoming a law.
You know companies are allowed to pay people to find vulns, and pay people bug bounties?
Instead of that, you’d rather make the law compel free individuals to limit their speech, or to hand over their work to big companies privately, so big companies can save money?
That doesn’t sound like a nice future, if it’s even enforceable at all.
There is an alternative mitigation you can use which blacklists the function calls when the affected code is not built as a kernel module.
Patches were available for nearly a month.
Not true, if there’s any evidence of the exploit being used in the wild, it’s much more responsible to release immediately.
Considering that the patches have been available for a while, someone surely reversed what they were for and was actually exploiting this in the wild.
In the age of AI, I’d argue that “responsible disclosure” is dead. Arguably even in closed source projects. Just ask Claude to do a diff between the previous version and to see whether anything fixed in there could have had security implications.
We’re not there yet, but very soon the only way to responsibly disclose a vulnerability will be immediately.
“Made it into the wild?” Patches landed a month ago. Should they also wait until my linksys router from 2018 has a patch ready?
Fedora is patched.
only for versions 6.19.12 & 6.18.22. Older versions (which are used in distributions) are not ready yet.
Why don't all these distro maintainers add their own back doors, and mine crypto off our machines without our knowledge? Surely, there is some legal fine print they can add that would let them do that. There is very little incentive for them to maintain these systems, given how thankless and underpaid the work is.
Why wouldn't the linux security team notify the main linux distributions?
Well, how do you define main Linux distros? Isn’t the next smaller one not receiving the info always complaining?
Isn't there already a distro security list for this purpose?
Partly they already have enough on their plate. It's up to the reporter to pick how to handle the disclosure, and unless a specific maintainer chooses to handle it, the Linux security team clearly says they won't.
Partly they have a strong belief that all kernel bugs are vulnerabilities and all vulnerabilities are just bugs; sometimes taken to the extreme in both ways (on one hand this case where the vulnerability is almost ignored; on the other hand, I saw cases where a VM panic that could be triggered only by a misbehaving host—which could just choose to stop executing the VM—was given a CVE).
You know the linux kernel is a free software project right? If you think “somebody should” do a thing but you aren’t prepared to do it yourself then you should maybe ask for a full refund.
I think it’s reasonable to expect folks in the security community who go to the trouble of creating a website detailing security vulnerabilities in specific listed software to pre-notify the security teams of that software. The CopyFail website calls out Ubuntu and Red Hat specifically, but apparently the author of the site did not inform them of the issue?
But even if you think making unethical decisions in personal self interest is something no one should be criticized for, surely the Linux kernel team ought to have some process for notifying the top distributions of an upcoming LPE, just out of practicality.
In what sense do you believe that the reporter did not notify the security team of the relevant software? The vulnerability is in the kernel. Reporter responsibly disclosed using the kernel’s security report mechanism and waited until a patch was ready.
Distros are downstream of kernel, that doesn’t entitle them to expect to be contacted directly by every security reporter. That’s not on them. Distros that are big enough should be plugged into the linux security team for notifications.
Security researchers cannot be held responsible for broken lines of communication within the org charts of projects that they study. They’re providing a valuable public service already, how much more do you want?
AWS and GCP are downstream another level. Should the reporter also have worked with them? And their customers? And the customers of their customers?
IMO this whole discussion seems like people are annoyed by the security researchers doing god’s work and wish they didn’t exist or think that they should be fully subservient to the projects and companies they are helping for free. The bugs were there before the researchers revealed them!!
> that doesn’t entitle them to expect to be contacted directly by the reporter
Yes it does. That's how it's always been done and distros can ship a fix well before it ends up in a kernel release.
> expecting people to always do the right thing is fantasy level thinking.
Most people in tech think like the techie in this comic strip.
https://xkcd.com/538/
Who knows how many attackers had found this vulnerability and had already been using it prior to this research finding it?
Argument from uncertainty is not a good way to reason about this.
I could equally ask: "Who knows how many attackers learned about this vulnerability from this disclosure, and used it before the distributions fixed it?"
Yes, you could. Thats the core of my point: there is no Right way to handle vulnerability disclosure. There are many competing factors, most of them have major elements of uncertainty because you can’t know who knows what or how various projects or stakeholders will react.
So maybe folks should take a break from the kind of armchair quarterbacking that this was “incredibly irresponsible”, as was done upthread, or that the researchers should be blacklisted for life, as a parallel commenter stated.
well now everyone does, so the irresponsible disclosure makes it significantly worse.
It’s your opinion that it’s irresponsible and that it makes something worse.
I asked a question and you replied with a statement. Your statement didn’t frame itself as an opinion but as fact.
The hilarious bit is that the idea that they needed to coordinate is clearly broken even in just this example. They did give prior notice to the Linux developers, who issued a patch. And they’re still getting raked over the coals in this comment page by armchair quarterbacks who have decided they needed to coordinate with specific distros. If they’d coordinated with those distros, somebody would have a pet distro that didn’t make the cut and they’d be pissed about that.
There are risks no matter how they do it, and there will be people who are pissed no matter how they do it. Security researchers don’t owe anybody a specific methodology.
The public disclosure page has a big blue "Get the exploit" button.
It's an advertisement for an unpatched critical exploit and apparently some kind of infosec company.
The title on this post was changed to imply that only the Gentoo developer was left out - which I could believe.
> Who knows how many shared hosting providers were hacked with this.
None? Because nobody* does hosting using Linux users as a security boundary. It's not the 90s.
* Standard HN disclaimer for people that think that some retro shell box with 10 users disproves "nobody": nobody does not literally mean exactly 0 people in this context.
> Anyway, this is a disaster. It was extremely irresponsible to share the exploit with the world before the distributions shipped the fix. Who knows how many shared hosting providers were hacked with this.
Maybe it is irresponsible how little attention we pay to software security. Maybe, software developers of all kind should spend an entire year not developing any features at all, but fix all the tech debt of 30 years instead.
Yes, that sounds revolutionary, but I do not see an alternative in an age where AI agents are all you need to find kernel bugs of this scale.
> It was extremely irresponsible to share the exploit with the world before the distributions shipped the fix.
Yes, this was clearly a marketing stunt to promote Xint code.
I, for one, will never use Xint code and will advise everyone to never use it. To anyone working there: enjoy your 15 minutes, I hope this backfires right in your face.
>> Anyway, this is a disaster. It was extremely irresponsible to share the exploit with the world before the distributions shipped the fix.
Maybe a decade of corporations with revenue in the billions, paying peanuts and coffee money, for critical vulnerability disclosures made it....
Counterpoint. End users have a right to mitigate this issue on their systems.
It is a really really bad look for Linux, puts a bit of water on all hype around switching from Windows.
It does? The disclosure even says the concern for single user systems is very low. If someone has access to your single user system, remote or otherwise, you’ve already lost on the sort of device people would be switching from windows to Linux on.
Someone like an AI coding agent perhaps? This is the type of thing prompt injection was made for.
No OS is perfect. The awkward rollout for this bug fix is proof of that.
Root access does not typically add anything interesting, for a desktop system. All the valuable stuff is already owned by the single user.
Imagine an ignorant response like this from Apple? One of the most short sighted comments I've seen on HN in some time. And the double down! A true master class in misunderstanding the issue and the entire FOSS ecosystem in two sentences.
As opposed to all other operating systems with no CVEs ever?
What happens if someone does the exploit in WSL?
Hype around switching from Windows servers?
>> puts a bit of water on all hype around switching from Windows.
Said no one ever...present post excluded :-))
You clearly have no idea how often windows has unpatched privesc exploits.
[dead]