Also https://www.washingtonpost.com/technology/2025/09/05/anthrop..., https://www.reuters.com/sustainability/boards-policy-regulat...
To be very clear on this point - this is not related to model training.
It’s important in the fair use assessment to understand that the training itself is fair use; the issue at hand here is the pirating of the books, which is what Anthropic “whoopsied” into when acquiring the training data.
Buying used copies of books, scanning them, and training on it is fine.
Rainbows End was prescient in many ways.
> Buying used copies of books, scanning them, and training on it is fine.
But nobody was ever going to do that, not when there are billions in VC dollars at stake for whoever moves fastest. Everybody will simply risk the fine, which tends not to be anywhere close to large enough to have a deterrent effect in the future.
That is like saying Uber would have not had any problems if they just entered into a licensing contract with taxi medallion holders. It was faster to just put unlicensed taxis on the streets and use investor money to pay fines and lobby for favorable legislation. In the same way, it was faster for Anthropic to load up their models with un-DRM'd PDFs and ePUBs from wherever instead of licensing them publisher by publisher.
> It was faster to just put unlicensed taxis on the streets and use investor money to pay fines and lobby for favorable legislation
And thank god they did. There was no perfectly legal channel to fix the taxi cartel. Now you don't even have to use Uber in many of these places, because taxis had to compete - they would otherwise never have stopped pulling the "credit card reader is broken" scam or deliberately taking long routes, and never have adopted tech that makes them more accountable for these things and makes it harder to racially profile passengers. (They would infamously pretend not to see you if they didn't want to give you service, back when you had to hail them with an IRL gesture instead of an app.)
I don't know that it's such a great thing in the end. Uber/Lyft is 50-100% more expensive now than taxis were before. They're entrenched in different ways.
And it’s still shitty. Uber/Bolt is like on par with 90s taxis. At least here there was a short attempt to make things better in the early 2010s, with nicer cars and trying to force drivers to be nicer. But then it was „disrupted“.
In NYC, Vegas, and a few other places I take taxis because they're dense and work well there.
Uber was a godsend for everyone living outside of like 4 metro areas in the US.
It helped that they started in places like San Francisco, where the taxi cartel was so absurdly terrible that you'd win fans just by showing up.
I lived in SF when Uber started. We used to call Veteran's Cab because they were the only company that wouldn't ditch on the way to pick you up, but it was completely normal to wait more than an hour for a cab in the dark hinterlands of 24th and Dolores or the industrial wasteland of 2nd and Folsom. An hour during which you had to be ready to jump as soon as the car arrived. Everybody had at least one black-car driver's cell number for downtown use because if they happened to be free, you could at least get picked up.
Uber would have had a religious following of fanpersons even if all they'd done was an estimated pickup time that was accurate to within 20 minutes.
Where I am, the taxi from the airport is about $5 more expensive during off peak, but it can be $20 cheaper during peak hours. I always take the taxi since it's right there, but I usually check the price on Lyft or Uber just to compare.
I know how much my ride will be and I know it doesn't vary based on what happens along the way.
Not at any airport I've been to recently. I've never seen lines of taxis waiting at any airport in the last few years. There are empty taxi slots. People hail the taxi using an app and then wait for it to show up. Just like Lyft/Uber.
In India, most taxis I ran across at the airport were 50% more expensive - after haggling!
I mean, that seems pretty unfair, no, giving one set of transportation companies an arbitrary advantage over another? This sort of thing is exactly why Uber started in the first place: because taxis had unfair monopolistic advantages for no particular reason, and gave customers a poor experience, because they knew they didn't have to do better to keep their jobs.
I have no idea what I'm going to get with those taxis waiting immediately outside the exit door. Even in my home country, at the airport next to my city, I have no idea. I know exactly what I'm getting with an Uber/Lyft, every time. That's valuable to me.
I was just in another country a couple months ago, and when trying to leave the airport, I was confused where I'd need to go in order to get an Uber. I foolishly gave up and went for one of those "conveniently-waiting" taxis, where I was quoted a price up-front, in my home currency, that I later (after doing the currency conversion on the Uber price) realized was a ripoff. The driver also aggressively tried to get me to instead rent his "friend's car" rather than take me to the rental car place like I asked. And honestly I consider that lucky: he didn't try to kidnap me or threaten me in any way, but I was tense during the whole ride, wondering if something bad was going to happen.
That sort of thing isn't an anomaly; it happens all the time to tourists in many countries.
> plus Uber Lyft initially losing money on rides to capture market share before they eventually had to actually breakeven?
That's typically considered to be somewhere between assholish and straight up illegal in most civilized economies.
Gas is priced lower when accounting for inflation, isn’t it?
They're always more common in metro areas of the US. You must be from a relatively rural area and don't get out of it much.
That said, uh, the point of taking a taxi to or from the airport was just not having to park at the airport, which generally costs a lot of money, and in certain areas it's a little sketchy whether or not your car will get cracked open while you're away.
What?
I strongly prefer to take traditional taxis, but I also comparison shop and Lyft is almost always 20-40% cheaper than a cab ride.
Where I live Ubers are WAY cheaper than taxis, even if you go back years and years.
“Entrenched” because that’s how consumers prefer to spend their money?
That's probably due to general inflation...
So are most things from 20 years ago. Inflation is acting as the majority of those increases I’d wager.
Here in Australia there's a never-ending stream of complaints about taxis managing to bill passengers extraordinary amounts. From taking a route that deliberately includes a highway leg that's expensive to correct (screws tourists), to demanding higher fares, to card skimming, to outright just not displaying the taxi licence so you can't complain and have no idea which driver was being creepy.
Uber at least has fixed rates from what was displayed and there are logs of which driver was doing dodgy stuff.
The supposed 'taxi cartel' were just (some) scummy operators ... not really a cartel. Fast forward to today => you are paying more for what is essentially a very similar service (because it literally turned into a monopoly due to network effects), and the money ends up in the pocket of some corporate douche, not even the people doing the actual work.
This is the business model: get more money out of customers (because no real alternative) and the drivers (because zero negotiating power). Not to mention that they actually got to that position by literally operating at a loss for over a decade (because venture money). Textbook anti-competitive practices.
However, the idea itself (that is, having an app to order a taxi) is spectacular. It's also something a high-school kid could make in a month in his garage. The actual strength of the business model is the network effects and the anti-competitive practices, not the app or anything having to do with service quality.
This is true ... except that it is a simplistically naive way of looking at things, because this is just one form (out of many) of anti-competitive practices. It is essentially high-school-level elementary basics of anti-trust. In actual reality there is quite a bit more to it than that.
For instance: monopolies often don't actually limit supply. You only make it so customers can't choose an alternative and set prices accordingly (that is, higher than they would have been if there were real alternatives). Big-tech companies do this all the time. Collusion is also not required; it's only one form (today virtually unheard of, or very rare) of how it may happen. For instance: big-tech companies often don't actually encroach on core parts of the business of other big-tech companies. Google, Microsoft, Apple, and Uber are all totally different businesses with little competitive overlap. They are not doing this because of outright collusion. It's live and let live. Why compete with them when they are leaving us alone in our corner? Also: trying to compete is expensive (for them), it's risky, and it may hurt them in other ways.

This is one of the dirty little secrets: established companies don't (really) want to compete with other big companies. They all just want to protect what's theirs and keep it that way. If you don't believe me, have a look at the (publicly available) emails from execs that are public record. Anti-competitive thinking through and through.
In the classical economic sense, Lyft/Uber should be competing to drive prices down to razor thin margins for the facilitator service. Is that happening? Or are they pocketing fat margins?
And it wasn't much of a cartel in NYC before, anyways. Most subways stops in Brooklyn had a black car nearby if you knew how to look for them.
Could you tell me why you think that?
In NYC, prior to Uber entering the market, taxi medallions changed hands for up to $1mm. Prices were fixed by the TLC.
If those aren't strong indications of a cartel, I don’t know what is.
> And thank god they did. There was no perfectly legal channel to fix the taxi cartel
And instead Uber offloaded everything onto gig workers and society. And still lost 20 billion dollars in the process (price dumping isn't cheap).
It’s by design. America is all about using you up as an asset, then discarding you when you are no longer productive and generating economic benefits.
I always laugh when Americans poke fun at Europeans… we have it much better over here. I assure you of that.
My employer is a lot more dependable than the US government.
If you trust the overlord you didn't choose more than the one you did, then you might want to rethink your career.
But that's the thing, isn't it? Universal healthcare isn't magic. It's paid for by taxes. Yet Uber claimed its drivers were independent contractors who had to pay for everything: taxes, medical, insurance, car depreciation, etc.
What about the problem of sexual assault by drivers?
https://www.nytimes.com/2025/08/06/business/uber-sexual-assa...
The comment to which I replied said Uber was better than taxis. The article I referenced details why that might not be the case, when it comes to passenger safety.
Very few cartels actually existed to justify free range regulatory erasure.
> But nobody was ever going to do that
Didn't Google have a long standing project to do just that?
From TFA
The Google Books project also faced a copyright lawsuit, which was eventually decided in favor of Google.
After contacting major publishers about possibly licensing their books, [former head of the Google Books project] bought physical books in bulk from distributors and retailers, according to court documents. He then hired outside organizations to dissemble the books, scan them and create digital copies that could be used to train the company’s A.I. technologies.
Judge Alsup ruled that this approach was fair use under the law. But he also found the company’s previous approach — downloading and storing books from shadow libraries like Library Genesis and Pirate Library Mirror — was illegal.
That wasn't done as a play for venture capital. The Google Books project began before eBooks existed; in the 2000s, they spent money on all kinds of projects that had no real strategy for monetization. I remember Google Books being a valuable resource as it digitized books that were out of print. Back when they actually cared about making information available widely.
Disassemble*
This lawsuit also makes sure that the only parties that can now train an AI on good enough training material are:
- Anthropic
- Any Chinese company that does not care about copyright laws
What is the cost of buying and scanning books?
Copyright law needs to be fixed, and its ridiculous hundred-year term chopped away.
Reminds me of when Facebook told the EU that they did not have the technology to merge FB and WhatsApp accounts when they bought WhatsApp.
That's not really the point, though, is it? Now Anthropic can afford to buy books and get them scanned. They likely didn't have the money or time to do that before.
And even if they didn't use the illegally-obtained work to train any of the models they released, of course they used them to train unreleased prototypes and to make progress at improving their models and training methods.
By engaging in illegal activity, they advanced their business faster and more cheaply than they otherwise would have been able to. With this settlement, other new AI companies will see it on the record that they could face penalties if they do this, and will have to go the slower, more expensive route -- if they can even afford to do so.
It might not make it impossible, but it makes the moat around the current incumbents just that much wider.
> As part of the settlement, Anthropic said that it did not use any pirated works to build A.I. technologies that were publicly released.
Oh so now we're at "just trust me bro" levels of absurdity
It's been done.
’Twould wax yet more marvellous to ye beholders.
Crazy to think we've been helping train AI through captchas long before the "click all squares containing" ones.
"stop spam. read books." is a very ironic phrase to look back on considering the amount of spam on the internet that LLMs have enabled
Anthropic literally did exactly this to train its models, according to the lawsuit. The lawsuit found that Anthropic didn't even use the pirated books to train its model. So there is that.
The lawsuit didn't find anything, Anthropic claimed this as part of the settlement. Companies settle without admission of wrongdoing all the time, to the extent that it can be bargained for.
If I'm reading this right, yes, the training was fair use, but I was responding (unclearly) to the claim that the pirated books weren't used to train commercially released LLMs. The judge complained that it wasn't clear what was actually used; from the June order https://fingfx.thomsonreuters.com/gfx/legaldocs/jnvwbgqlzpw/... [pdf]:
> Notably, in its motion, Anthropic argues that pirating initial copies of Authors’ books and millions of other books was justified because all those copies were at least reasonably necessary for training LLMs — and yet Anthropic has resisted putting into the record what copies or even sets of copies were in fact used for training LLMs.
> We know that Anthropic has more information about what it in fact copied for training LLMs (or not). Anthropic earlier produced a spreadsheet that showed the composition of various data mixes used for training various LLMs — yet it clawed back that spreadsheet in April. A discovery dispute regarding that spreadsheet remains pending.
I'm "team Anthropic" if we're stack ranking the major American labs pumping out SOTA models by ethics or whatever, but there is no universe in which a company like them operating in this competitive environment didn't pirate the books.
Makes sense why Effective Altruism is so popular. Commit crime, make billions, give back when dead, live guilt free?
Except for Google at least.
Anthropic started scanning books in February 2024. I don't think these lawsuits had been filed by then - as far as I can tell that was in August 2024: https://www.courtlistener.com/docket/69058235/bartz-v-anthro...
> But nobody was ever going to do that, not when there are billions in VC dollars at stake for whoever moves fastest.
Anthropic did. That was the part of their operation that they didn't get in trouble for, but the news spun it as "Anthropic destroyed millions of books to make AI".
Sir. These were carpoolers, just sharing a ride to their new online friends' B&B.
Lawyer: "Sir. These were carpoolers, just sharing a ride to their new online friends' B&B."
Judge: "But this app facilitated them."
Lawyer: "Well, you presume so-called genuine carpoolers are not facilitated? The manufacturers of their cell phones, the telecom operators, their employers or the bar where they met, or the bus company at whose bus stop they met, they all facilitated their carpooling behavior."
Judge: "But your company profits from this coordination!"
Lawyer: "Well we pay taxes, just like the manufacturer of the cell phone, the telecom operator, their employers, the bus company or the bar... But let's ignore that, what you -representing the government (which in turn supposedly represents the people)- are really after is money or power. As a judge you are not responsible for setting up the economy, or micromanaging the development of apps, so its not your fault that the government didn't create this application before our company did. In a sense you are lucky that we created the app given that the government did not create this application in a timely fashion!"
Judge: "How so?"
Lawyer: "If the population had created this app they would have started thinking about where the proceeds should go. They would have gotten concerned about the centralization of power (financial and intelligence). They would have searched for ways to decentralize and secure their app. They would have eventually gotten cryptographers involved. In that world, no substantial income would be generated, your fleet of taxi's would be threatened as well, and you wouldn't even have the juicy intel we occasionally share either!"
This conversation almost never takes place, since it only needs to take place once, after which a naive judge has learned how the cookie crumbles. Most judges have lost this naivety before even becoming a judge. They learn this indirectly when small "annoyances" threaten the scheme (one could say the official taxi fleet was an earlier such scheme).
Sure, but that’s mostly because the sheer convenience of the illegal way is so much higher, and carries zero startup cost.
The same could be said of grand larceny. The difference would seem to be a mix of social norms and, more notably for this conversation, very different consequences.
Oh I wasn’t saying the two crimes are comparable in their own terms. But specifically the statements made by the comment I responded to apply to larceny as well as to piracy.
Ah yes, the "I wouldn't have paid for it anyway, so I'm entitled to it for free" argument...
Not sure it is realistic or easier to physically steal 500k books.
I get what you are going for, but my point was that a dataset existed, and the only way it could be compiled was illegally.
> But nobody was ever going to do that
If this is a choice between risking to pay 1.5 billion or just paying 15 mil safely, they might.
Option 1: $183B valuation, $1.5B settlement.
Option 2: near-$0 valuation, $15M purchasing cost.
To an investor, that just looks like a pretty good deal, I reckon. It's just the cost of doing business - which in my opinion is exactly what is wrong with practices like these.
But that isn't how tax deductions work. Since taxes are always a fraction of income, a deduction can never save you more money than you already paid out to get the deduction in the first place. If you have a 10% tax rate, your options are:
A) Make 100M, pay 10M in taxes
or
B) Make 100M, pay 10M in lawsuit settlements, pay 9M in taxes
You come out ahead every time by not paying the settlement in the first place.
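To put rough numbers on it, here's a minimal sketch using the hypothetical 10% rate and 100M income from the example above (and assuming the settlement is even deductible at all):

    # Hypothetical figures from the example above: 10% tax rate, 100M income.
    TAX_RATE = 0.10
    income = 100_000_000
    settlement = 10_000_000

    # Option A: no settlement to pay.
    net_a = income - income * TAX_RATE  # 90M kept

    # Option B: pay the settlement, then deduct it from taxable income.
    net_b = income - settlement - (income - settlement) * TAX_RATE  # 81M kept

    # The deduction refunds only TAX_RATE * settlement (1M here), never the full 10M.
    print(net_a, net_b)  # 90000000.0 81000000.0

The deduction softens the blow by exactly the tax rate times the settlement; it never comes close to making the settlement free.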
> I think a free society needs to let people break the rules if they are willing to pay the cost
so you don't think super rich people should be bound by laws at all?
Unless you made the cost proportional to (maybe expontial to) somebody's wealth, you would be creating a completely lawless class who would wreak havoc on society.
I agree to some extent, but there is a slippery slope to “no rules apply to the rich”.
I do agree that in the case of victimless crimes, having some ability to compensate for damages instead of outright banning the thing means that we can enact many massively net-positive scenarios.
Of course, most crimes aren’t victimless and that’s where the negative reactions are coming from (eg company pollutes the commons to extract a profit).
> What's actually wrong with this?
It's because they did not choose to pay for the books; they were forced to pay and they would not have done so if the lawsuit had not fallen this way.
If you are not sure why this is different from "they paid for pirated books (as if it were a transaction)", then this may reflect a lack of awareness of how fair exchange and trust both function in a society.
Should I be allowed to walk into the Louvre, steal the Mona Lisa, then pay $10,000 once caught? Should I be allowed to do this if I am employed by Stealing The Mona Lisa, LLC?
> They paid $1.5B for a bunch of pirated books.
They didn't pay, they settled. And considering flesh-and-blood people get sued for tens of thousands per download when there isn't a profit motive, that's a bargain.
> The settlement should reflect society's belief of the cost or deterrent.
No, it reflects the maximum amount the lawyers believe they can get out of them.
> This might be controversial, but I think a free society needs to let people break the rules if they are willing to pay the cost.
So how much should a politician need to pay to legally murder their opponent? Are you okay with your ex killing you for a $5000 fine?
> Imagine if you couldn't speed in a car.
Speed enough and you lose your license, no need to imagine.
Why does this company get away with it, while warez groups get raided by SWAT teams, labeled a "criminal enterprise" or "crime gang", and sentenced to decades in jail? Why does the law not apply when you are rich?
> The settlement should reflect society's belief of the cost or deterrent
Settlements have nothing to do with either of those things. Settlement has to do with what the plaintiff believes is good enough for the cost that will avoid the uncertainty of trial. This is a civil case, "society" doesn't really come into play here. (And you can't "settle" a criminal case; closest analogue would be a plea deal.)
If the trial went forward to a guilty verdict, then the fines would represent society's belief of cost or deterrent. But we didn't get to see that happen.
It's not about money. It's about time.
What you describe is in fact what Waymo has had, or chosen, to deal with. They didn't go for an end run around regulations related to vehicles on public roads. They committed to driverless vehicles and worked with local governments to roll it out as quickly as regulators were willing to allow.
Uber could have made the same decision and worked with regulators to be allowed into markets one at a time. It was an intentional choice to lean on the fact that Uber drivers blended into traffic and could hide in plain sight until Uber had enough market share and customer base to give them leverage.
That doesn't really feel like the same thing to me.
With Uber you had a company that wanted to enter an existing market but couldn't due to legally-granted monopolies on taxi service. And given that existing market, you can be sure that the incumbents would lobby to keep Uber locked out.
With Waymo you have a new technology that has a computer driving the car autonomously. There isn't really any directly-incumbent party with a vested (conflict of) interest to argue against it. Waymo is a kind of taxi, though, so presumably existing taxi operators -- and the likes of Uber and Lyft -- could argue against it in order to protect their advantages. But ironically Uber and Lyft "softened" those regulatory bars already, so it might not have been worth it to try.
At any rate, the regulatory and safety concerns are also very different between the two.
I think I am also just a little more sympathetic to early Uber, given how terrible and cartel-like taxi service was in the past. But I would not at all be sympathetic toward Waymo putting driverless cars on the streets without regulatory approval and oversight, especially if people got injured or killed.
My understanding is that regulations for Waymo were much more strict because they billed themselves from the beginning as fully self-driving and wanted to operate on public streets.
My assumption is that they could have found ways to work around that by technically having someone in the driver's seat, for example, but maybe I'm wrong there!
I think the difference between Waymo and Uber is risk level. Maybe Waymo would like to skirt regulations but they won't be allowed to by citizens and officials alike.
Waymo could likely have done something similar to Tesla. Pay a licensed driver to sit behind the wheel and claim the car only has driver assist. That likely would have worked long enough to gain traction and leverage to pressure a green light for full driverless mode.
Exactly. Well said.
Actually, NL is training a GPT on only materials they bought fairly.
It won't be a ChatGPT or coding model, of course; that's not what they're going for. But it'll be interesting to see its quality, as it's all fairly and honestly done. Transparently.
What's wild is that $1.5B sounds huge… until you compare it to the potential upside of owning the dominant AI model trained on everything
Google did.
Anthropic also did specifically this, spent millions on it
Anthropic bought books, cut the spine off and scanned them with sheet fed scanners.
Not to mention that Uber doing well is exactly what would give them leverage to even have a discussion with Taxi medallion owners.
Otherwise, of course they would tell them to just pound sand.
> Rainbows End was prescient in many ways.
Agreed. Great book for those looking for a read: https://www.goodreads.com/book/show/102439.Rainbows_End
The author, Vernor Vinge, is also responsible for popularizing the term 'singularity'.
RIP to the legend. He has a lot of really fun ideas spread across his books.
I didn't realize Vernor Vinge had passed away... Sad TIL
I got to meet him once too! Unexpectedly met him at a Media Lab demo day. I was trying to play it cool though and didn't gush to him around how he's one of my favorite authors. I regret not doing so now.
Cookie monster is his strongest work. It has a VIBE.
Reminds me of permutation city
One of my favorites
Interesting. I love Vernor Vinge’s books. Except Rainbows End. It was such a disappointment after many of the others.
“Marooned in Real Time” remains my fav.
I think the jury is still out on how fair use applies to AI. Fair use was not designed for what we have now.
I could read a book, but it's highly unlikely I could regurgitate it, much less months or years later. An LLM, however, can. While we can say "training is like reading", it's also not like reading at all due to permanent perfect recall.
Not only does an LLM have perfect recall, it also has the ability to distribute plagiarized ideas at a scale no human can. There's a lot of questions to be answered about where fair use starts/ends for these LLM products.
Fair use wasn't designed for AI, but AI doesn't change the motivations and goals behind copyright. We should be returning back to the roots - why do we have copyright in the first place, what were the goals and the intent behind it, and how does AI affect them?
The way this technology is being used clearly violates the intent behind copyright law, it undermines its goals and results in harm that it was designed to prevent. I believe that doing this without extensive public discussion and consensus is anti-democratic.
We always end up discussing concrete implementation details of how copyright is currently enforced, never the concept itself. Is there a good word for this? Reification?
I don't know the word but it's similar to arguing morality or public policy from the current status of the law.
> but AI doesn't change the motivations and goals behind copyright
That's the point they're making.
The person I responded to? Yes, I'm agreeing with them, just adding my own thoughts. Maybe I could've worded that better :)
> Not only does an LLM have perfect recall
This has not been my experience. These days they are pretty good at googling though.
They do not have perfect recall unless you provide them a passage in the current context and then ask them to quote it.
The 'lossy encyclopedia' analogy is quite apt
I find the LLM on Google's search regularly regurgitates StackOverflow and Quora answers practically verbatim.
> I think the jury is still out on how fair use applies to AI.
The judge presiding over this case has already issued a ruling to the effect that training an LLM like Anthropic's AI with legally acquired material is in fact fair use. So unless someone comes up with some novel claims that weren't already attempted, claims that a different form of AI is significantly different from a copyright perspective from an LLM or tries their hand in a different circuit to get a split decision, the "jury" is pretty much settled on how fair use applies to AI. Legally acquired material used to train LLMs is fair use. Illegally obtaining copies of material is not fair use, and the transformative nature of LLMs don't retroactively make it fair use.
> I could read a book, but its highly unlikely I could regurgitate it, much less months or years later.
And even if one could, it would be illegal to do. Always found this argument for AI data laundering weird.
Has anyone actually made the argument that having an AI regurgitate a word for word copy of an otherwise copyrighted work is fair use? Or have they made the argument that training the AI is transformative and fair use, and using that AI to generate works that are similar but not duplications of the copyrighted work is fair use?
A xerox machine can reproduce an exact copy of a book if you ask it to, but that doesn't make a xerox machine inherently a copyright violation, nor does it make every use of a xerox machine a violation of copyright, even when the inputs are materials under copyright. So far the judge in this case has ruled that training an AI is sufficiently transformative, and that using legally acquired works for that purpose is not a violation of copyright. That outcome seems entirely unsurprising given the years of case law around copyright and technology that can duplicate copyrighted works. See the aforementioned xerox machines, but also CD ripping, DVRs, VHS recording of TV shows, audio cassette recording, emulators, the Java API lawsuit and also the Google Books lawsuit.
But there is a difference between “illegal to regurgitate it” and “illegal to remember it”. IIRC, in this case that settled, the judge had ruled on “remember” (fair use) but not on the other.
One more fundamental difference. I can't read all of the books and then copy my brain.
Which is one of the fundamental things in how copyright is handled: copying in general, or performing multiple times. So I can accept the argument that training a model one time and then using a singular instance of that model is analogous to human learning.
But when you get to running multiple copies of the model, we are clearly past that.
To be even more clear - this is a settlement, it does not establish precedent, nor admit wrongdoing. This does not establish that training is fair use, nor that scanning books is fine. That's somebody else's battle.
Right, the settlement doesn't.
However, the judge already ruled on the only important piece of this legal proceeding:
> Alsup ruled in June that Anthropic made fair use of the authors' work to train Claude...
The ruling also doesn’t establish precedent, because it is a trial court ruling, which is never binding precedent, and under normal circumstances can’t even be cited as persuasive precedent, and the settlement ensures there will be no appellate ruling.
On top of that this was just one case in the US. It's honestly a bit ridiculous how some Americans seem to believe that when one random judge from their country rules something that instantly turns into an international treaty that every country on Earth must accept.
> I thought it was precedentual within its circuit until an appellate says otherwise?
No, trial court decisions are never binding precedent, if they are “published” decisions, they may generally be cited as persuasive precedent. Appellate decisions (Circuit Courts in the federal system) are binding on the trial courts subordinate to that appellate court (and even on panels of the same appellate court) until reversed by the same court sitting en banc or by a higher court (the US Supreme Court in the federal system.)
I suspect that ruling legally gets wiped off the books by the settlement since the case gets dismissed, no?
Even if the ruling legally remains in place after the settlement, district court rulings are at most persuasive precedent and not binding precedent in future cases, even ones handled by the same court. In the US federal court system, only appellate rulings at either the circuit court of appeals level or the Supreme Court level are binding precedent within their respective jurisdictions.
That ruling does not get wiped off, you're right it is persuasive precedent, and it certainly can be cited in other cases, even if it's non-binding. It will be useful. District court rulings are used all the time as cites in novel applications of law like this.
Which is very important for e.g. the NYT lawsuit against OpenAI. Basically there’s now precedent that training AI models on text and them producing output is not copyright infringement.
Judge Alsup’s ruling is not binding precedent, no.
> Buying used copies of books
It remains deranged.
Everyone has more than a right to freely read everything that is stored in a library.
(Edit: in fact initially I wrote 'is supposed to' in place of 'has more than a right to' - meaning that "knowledge is there, we made it available: you are supposed to access it, with the fullest encouragement").
Well great so the Internet Archive is off the hook then.
Also, at least so far, we don't call computers "someone".
> Archive is off the hook then
Probably so, because with "library" I did not mean the "building". It is the decision of the society to make knowledge available.
> we don't call computers "someone"
We do instead, for this purpose. Why should we not? Anything that can read fits the set.
--
Edit: Come up with the arguments, sniper.
Moderation,
there is an asymmetry between agreement and disagreement: the latter requires arguments.
"Sneering and leaving" is antisocial, and that is underlying most of downvoting.
Stop this deficient, unproductive, and disruptive culture.
> Everyone has more than a right to freely read everything that is stored in a library.
Every human has the right to read those books.
And now, this is obvious, but it seems to be frequently missed - an LLM is not a human, and does not have such rights.
By US law, according to Authors Guild v. Google[1] on the Google book-scanning project, scanning books for indexes is fair use.
Additionally:
> Every human has the right to read those books.
Since when?
I strongly disagree - knowledge should be free.
I don't think the author's arrangement of the words should be free to reproduce (ie, I think some degree of copyright protection is ethical) but if I want to use a tool to help me understand the knowledge in a book then I should be able to.
[1] https://en.wikipedia.org/wiki/Authors_Guild,_Inc._v._Google,....
I don't pay OpenAI and I use their model via ChatGPT frequently.
By this logic one shouldn't be able to research for a newspaper article at a library.
> vacuum up the commons
A vacuum removes what it sucks in. The commons are still as available as they ever were, and the AI gives one more avenue of access.
> for-profit
I presume you (people do) have exploited the knowledge that society has made in principle, and largely in practice, freely accessible to build a profession, which is now for-profit: you will charge parties for the skills that available knowledge has given you.
The "profit" part is not the problem.
That is an elision for "public knowledge". Of course there are nuances. In the case of books, there is little doubt: printed for sale is literally named "published".
(The "for sale" side does not limit the purpose to sales only, before somebody wants to attack that.)
And weights
Ok the corporation (or group of humans) that builds the LLM.
Maybe we should give machines rights, then.
> scanning books for indexes is fair use.
An LLM isn't an index.
> knowledge should be free
Knowledge costs money to gain/research.
Are you saying people who do the most valuable work of pushing the boundaries of human knowledge should not be fairly compensated for their work?
Scanning books for indexes is fair use. Very notably providing access to those books to the public for free was not fair use...
> this is obvious
I think it is obvious instead that readers employed by humans fit the principle.
> rights
Societally, it is more of a duty. Knowledge is made available because we must harness it.
Huh?
I think he implies that because one can borrow hypothetically any book for free from a library, one could use them for legal training purposes, so the requirement of having your own copy should be moot
Libraries are neither anarchist free for alls nor are they operating under licensing terms with regards to physical books.
They're merely doing what anyone is allowed to with the books that they own, loaning them out, because copyright law doesn't prohibit that, so no license is needed.
There are book scanners that don't require cutting the spine, though Anthropic doesn't seem to have used that approach.
That's what they did. They also destroyed books worth millions in the process.
They didn't think it would be a good idea to re-bind them and distribute them to libraries or someone in need.
Nah, that's just if you want archival-quality scans. "Good enough for OCR" is a much lower bar.
I wonder what Aaron Swartz would think if he lived to see the era of libgen.
He died (2013) after libgen was created (2008).
I had no idea libgen was that old, thanks!
Yeah but did he die before anybody actually knew about it?
To be fair, he might have been rather preoccupied at that time.
Anna's Archive includes all of libgen and a lot more: https://en.wikipedia.org/wiki/Anna%27s_Archive
Recent MEGATHREAD on status of libgen and alternatives
https://www.reddit.com/r/libgen/comments/1n4vjud/megathread_...
Life pro tip: the Wikipedia pages for Libgen and Scihub contain up-to-date current links in the right sidebar. Only for the purpose of information and documentation, of course.
There are mirrors on its Wikipedia page: https://en.wikipedia.org/wiki/Library_Genesis
libgen.help is frequently updated
I believe that there's a reddit sub that keeps people up to date with what URLs are, or are not, functioning at any given point in time
Didn't he get in trouble for contributing to sci-hub before he died?
He got into trouble for breaking into an unsecured network closet at MIT and using MIT credentials to download a bunch of copyrighted content.
The whole incident is written up in detail, https://swartz-report.mit.edu/ by Hal Abelson (who wrote SICP among other things). It is a well-researched document.
Swartz, like many of us, saw pay-for-access journals as an affront. I believe he wanted to "liberate" the content of these articles so that more people could read them.
Information may want to be free, but sometimes it takes a revolutionary to liberate it.
I think, legally, nobody knows why he was downloading the content, to the point where he had to come back to his hidden laptop to swap out hard drives full of papers.
But also, prior to that he had written the Guerilla Open Access Manifesto, so it wasn't great optics to be caught doing that.
Google scanned many books quite a while ago, probably way more than LibGen. Are they good to use them for training?
If they legally purchased them, I don't see why not. IIRC they did borrow from libraries, so probably not every book in Google Books.
Anthropic legally purchased the books it used to train its model according to the judge. And the judge said that was fine. Anthropic also downloaded books from a pirate site and the judge said that was bad -- even though the judge also said they didn't use those books for training....
They litigated this a while ago and my understanding was that they were able to claim fair use, but I'm no expert.
What I'm wondering is if they, or others, have trained models on pirated content that has flowed through their networks?
Books.Google.Com was deemed fair use because it only shows previews, not full downloads. The Internet Archive is still under litigation, IIRC; besides having owned a physical copy of every book they ever scanned (and keeping a copy in their warehouses), they let people read the whole thing.
I’m surprised Google hasn’t hit its competitors harder with the fact that they actually got permission to scan books from its partner libraries while Facebook and OpenAI just torrented books2/books3, but I guess they have an aligned incentive to benefit from a legal framework that doesn’t look too closely at how you went about collecting source material
I imagine the problem there is they primarily scanned library books so I doubt they have the same copyright protections here as if they bought them
All those books were loaned by a library or purchased.
Yes, the ruling was a massive win for generative AI companies.
The settlement was a smart decision by Anthropic to remove a huge uncertainty. $1.5B is not small, but it won't stop them or slow them significantly.
> It’s important in the fair use assessment to understand that the training itself is fair use
IIUC this is very far from settled, at least in US law.
Yes, but if you are predisposed for some reason to think that Anthropic "won" this case, then you're going to believe all sorts of things.
I keep thinking: if they bought ebooks, would that be fine, or is this required to be paper books? If it doesn't work with ebooks, the world is going to be a nightmare.
The Librareome project was about simply scanning books, not training AI with them. And it was a matter of trying to stop corporations from literally destroying the physical books in the process. I don't know that this is applicable.
> It’s important in the fair use assessment to understand that the training itself is fair use
Is this completely settled legally? It is not obvious to me it would be so
It is not.
It should not be fine to train on them, because you are creating derivative works, exactly like when you deal with music.
> Buying used copies of books, scanning them, and training on it is fine.
Awesome, so I just need enough perceptrons to overfit every possible copyrighted work then?
Meta did pirate basically all the books in Anna's Archive, but if I remember correctly they just whispered a sorry and that was that. Why are they not also asked to pay?
It's not settled whether AI training is fair use.
Nevertheless, a crime is a crime.
I'm so over this shift in America's business model.
Original Silicon Valley model, and generally the engine of American innovation/growth/wealth equality for 200 years: Come up with a cool technology, build it in your garage, get people to fund it and sell it because it's a better mousetrap.
New model: Still come up with a cool idea, still get it funded and sold, but the idea involves committing crime at a staggering scale (Uber, Google, AirBnB, all AI companies, long list here), and then paying your way out of the consequences later.
Look some of these laws may have sucked, but having billionaires organize a private entity that systematically breaks them and gets off with a slap on the wrist, is not the solution. For one thing, if innovation requires breaking the law, only the rich will be able to innovate because only they can pay their way out of the law. For another, obviously no one should be able to pay their way out of following the law! This is basic "foundations of society" stuff that the vast majority of humans agree on in terms of what feels fair and just, and what doesn't.
Go to a country which has really serious corruption problems, like is really high on the corruption index, and ask the people there what they think about it. I mean I live in one and have visited many others so I can tell you, they all hate it. It not only makes them unhappy, it fills them with hopelessness about their future. They don't believe that anything can ever get better, they don't believe they can succeed by being good, they believe their own life is doomed to an unappealing fate because of when and where they were born, and they have no agency to change it. 25 years ago they all wanted to move to America, because the absence of that crushing level of corruption was what "the land of opportunity" meant. Now not so much, because America is becoming more like their country.
This timeline ends poorly for all of us, even the corrupt rich who profit from it, because in the future America will be more like a Latin American banana republic where they won't be able to leave their compounds for fear of getting Luigi'ed. We normal people get poverty, they get fear and death, everyone loses. The social contract is collapsing in front of our eyes.
You said it in one word - it’s corruption.
Not creative destruction. But pure corruption.
I agree with you, except that you’re too positive. The United States is already a banana republic.
The federal courts are a joke - the supreme court now has at least one justice whose craven corruption is notorious — openly accepting material value (ie bribes) from various parties. The district courts are being stuffed with Trump appointees with the obvious problems that go with that.
The congress is supine. Obviously they cannot act in any meaningful capacity.
We don’t have street level corruption today. But we’ve fired half the civil service, so I doubt that will continue.
It's bad but I think it's important to recognize how much worse it can get. Otherwise why would you work to save anything? I'm "positive" because I come from the US and I now live in an actual banana republic and I see firsthand how much worse things will get in America if the trajectory doesn't change.
Imagine a future where election results are casually and publicly nullified if the people with the guns don't like the result, and no one can do anything about it. Or where you can start a business but if it succeeds and you don't have the right family name it'll be taken from you and you'll be stripped of all you own and possibly put in prison for a while. That's reality in some countries, the US is not there yet, but those are the stakes we're playing for here, and why change needs to happen.
You realize that 1,500 people were just pardoned for storming federal buildings, trying to kill elected officials, and trying to overturn an election?
Right now, the President is sending federal troops and occupying cities and just bombed a ship in Venezuela
Welcome to the age of grift.
> and generally the engine of American innovation/growth/wealth equality for 200 years: Come up with a cool technology, build it in your garage, get people to fund it and sell it because it's a better mousetrap.
So exactly when was there “wealth equality” in the US? Are you glossing over that whole segregation, redlining, era of the US?
And America was built on slavery and genocide.
Honest question: what's with the penchant some people have to turn every conversation into a referendum on how horrible America is?
You realize there are countries that are even worse to their citizens right? Like I'm really asking, why do so many people online seek to eliminate all conversation that isn't a simple and un-nuanced condemnation of America?
I am able to have criticisms of America while also thinking there are good things about it and that there are also worse places, but some people seem incapable of holding those three ideas in their heads simultaneously. Especially the idea that there actually are countries worse than the US, they just can't fathom that it seems, or don't consider it a fact that should receive any attention.
Womens suffrage was also not part of the deal in 1776.
You can't grab pirated stuff and then hope fair use magically sanitizes it
Yes, but the cat is out of the bag now. Welcome to the era of every piece of creative work coming with an EULA that you cannot train on it. It will be like clearing samples.
Many already did this years ago for game resources on iClone, Unity, and UE.
There are also a lot of usage rules that now make many games unfeasible.
We dug into the private markets seeking less Faustian terms, but found just as many legal submarines in wait... "AI" Plagiarism driven projects are just late to the party. =3
This is excellent news because it means that folks who pay for printed books and scan them also can train with their content. It's been said already that we've already trained on "the entire (public) internet." Printed books still hold a wealth of knowledge that could be useful in training models. And cheap, otherwise unwanted copies make great fodder for "destructive" scanning where you cut the spine off and feed it to a page scanner. There are online services that offer just that.
Do they actually need to scan the book?
Or can they buy the book, and then use the pirated copy?
Has it been decided that training models is fair use? Has it been decided in all jurisdictions?
I don't believe that's true. Most work I've read on fair use suggests it has to be a small amount, selectively used, substantially transformed, and not compete with content creators. These AIs' training is the opposite of all that. I was surprised by a ruling like this, but Alsup is a unique judge.
Additionally, sharing copyrighted works without permission... the data sets or data lakes... is its own tort. You're guilty just for sharing copies, before even training. Some copyrighted works are also commercial, copyrighted with a ban on others' commercial use, or patented. Some are NDA'd but leaked by third parties. Sources like Common Crawl probably have plenty of such content.
Additionally, there are often contractual terms of use on accessing the content. Even Singapore's and others' laws allowing training on copyrighted content usually require that you lawfully accessed that content in the first place. The terms of use are the weakest link there.
I'd like to see these two issues turned by law into a copyright exception that no contract can override. It needs to specifically allow sharing scraped, publicly-visible content. Anything you can just view or download which the copyright owner put up. The law might impose or allow limits on daily scraping quantity, volume, etc to avoid damage scrapers are doing.
The RIAA should step in and get the money that publishers deserve. Talking millions per book and extra to make sure the pirates learned their lesson. And prison for the management.
> pirating of the books is the issue
I have an author friend who felt like this was just adding insult to injury.
So not only had his work been consumed into this machine that is being used to threaten his day job as a court reporter, not only was that done without seeking his permission in any way, but they didn’t even pay for a single copy.
Really embodies raising your middle finger to the little guy while you steamroll him.
Exactly this. It's only us peons who will be prosecuted under the current copyright laws. The rich and well connected will base their entire business on blatant theft and will get away with it.
Okay, so the blame for the offense was laundered.
It is related to scalable model training, however. Chopping the spine off books and putting the pages in an automated scanner is not scalable. And don't forget about the cost of 1) finding, 2) purchasing, 3) processing, and 4) recycling that volume of books.
I guess companies will pay for the cheapest copies for liability and then use the pirated dumps. Or just pretend that someone lent the books to them.
> Chopping the spine off books and putting the pages in an automated scanner is not scalable.
That's how Google Books, the Internet Archive, and Amazon (their book preview feature) operated before ebooks were common. It's not scalable-in-a-garage but perfectly scalable for a commercial operation.
We hem and haw about metaphorical "book burning" so much we forget that books themselves are not actually precious.
The books that are destroyed in scanning are a small minority compared to the millions discarded by libraries every year for simply being too old or unpopular.
The real power comes from the purging of knowledge from institutions that can keep that knowledge alive. Facts, ideas and histories can all be incinerated.
Well, the famous 1933-05-10 book burning did destroy the only copies of a lot of LGBT medical research, and destroying the last copy of various works was a stated intent of Nazi book burnings.
No, that's not how Google Books did it. https://en.wikipedia.org/wiki/Google_Books#Scanning_of_books
I remember them having a 3D page unwarping tech they built as well so they could photograph rare and antique books without hacking them apart.
I don't think Google Books scanner chopped off the spine. https://linearbookscanner.org/ is the open design they released.
Oh I didn't know that. That's wild
> the training itself is fair use
Sure, training by itself isn't worth anything.
Distributing and collecting payment for the usage of a trained model which may violate copyright, etc. that's still an open legal question and worth billions as well.
Then shouldn’t they be liable for at least 25 times this amount?
Wdym Rainbows End was prescient?
There's a scene early on where libraries are being destructively shredded, with the shreds scanned and reconstructed as digital versions.
Paying $3,000 for pirating a ~$30 book seems disproportionate.
I feel like proportionality is related also to the scale. If a student pirates a textbook, I’d agree that 100x is excessive, but this is a corporation handsomely profiting off of mass piracy.
It’s crazy to imagine, but there was surely a document or Slack message thread discussing where to get thousands of books, and they just decided to pirate them and that was OK. This was entirely a decision based on ease or cost, not based on the assumption it was legal. Piracy can result in jail time IIRC, so honestly it’s lucky that the employee who suggested this, or took the action, avoided direct legal liability.
Oh, and I’m pretty sure other companies (Meta) are in litigation over this issue, and the publishers knew that a settlement below the full legal limit would limit future revenue.
> handsomely profiting
Well actively generating revenue at least.
Profits are still hard to come by.
Investment is debt lol. Maybe you can make the argument that you're increasing the equity value but you do have to eventually prove you're able to make money right? Maybe you don't, this system is pretty messed up after all.
What a fascinating software project someone had the opportunity to work on.
Not if 100 companies did it and they all got away.
This is to teach a lesson because you cannot prosecute all thieves.
Yale Law Journal actually writes about this, the goal is to deter crime because in most cases damages cannot be recovered or the criminal will never be caught in the first place.
Even if the goal is to deter crime, we still have a principle of proportionate punishment. We don't cut people's hands off for petty theft, and we don't execute people for exceeding the speed limit, even though both would be pretty effective deterrents.
If in most cases damages cannot be recovered or the criminal will never be caught in the first place, then what is the lesson being taught? Doesn't that just create a moral hazard where you "randomly" choose who to penalize?
The message being you’ll likely get away with it?
With the per-item limit for "willful infringement" being $150,000, it's a bargain.
And a low end of $750/item.
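For a rough sense of scale, a back-of-the-envelope sketch using the ~500k-book figure mentioned upthread (the actual certified class size may differ):

    # US statutory damages per infringed work vs. the actual settlement.
    works = 500_000                   # approximate book count cited upthread
    minimum = works * 750             # 375,000,000   - statutory floor
    willful_max = works * 150_000     # 75,000,000,000 - willful-infringement ceiling
    per_work = 1_500_000_000 / works  # 3,000 per book under the settlement
    print(minimum, willful_max, per_work)

So the $1.5B lands well above the statutory floor but two orders of magnitude below the theoretical willful maximum.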
Were you not around when people were getting sued for running Napster?
Fines should be disproportionate at this scale. So it discourages other businesses from doing the same thing.
So they’re creating monopolies? The existing players were allowed to do it, but anyone that tries to do it now will be hit with a 1.5B fine?
As long as they haven't been bullied into the corporate equivalent of suicide by the "justice" system, it's not disproportionate considering what happened to Aaron Swartz.
If anything it's too little based on precedent.
Realistically it will be $30 per book and $2,970 for the lawyers
That's not how class actions work. Ever.
In this specific case the settlement caps the lawyer fees at 25%, and even that is subject to the court's approval. In addition they will ask for $250k total ($50k / plaintiff) for the lead plaintiffs, also subject to the court's approval.
25% of 1.5B?
Well it's willful infringement so a court would be entitled to add a punitive multiplier anyway. But this is something Anthropic agreed to, if that wasn't clear.
Thanks for the reminder that what the Internet Archive did in its case would have been legal if it was in service of an LLM.
I like the IA as much as anyone else, but surely there's a significant difference between distributing literal word for word exact copies of copyrighted material and distributing statistical indexes about copyrighted material right?
Many things become legal when the perpetrator has money.
The golden rule:
He who has the gold makes the rules
This is a good soundbite but doesn't make sense. The Internet Archive had to pay for redistributing copyrighted materials. Anthropic just paid too. (Note: redistributing != training)
LLMs are turning out to be a real get-out-of-legal-responsibilities card, hey?
> It’s important in the fair use assessment to understand that the training itself is fair use,
I think that this is a distinction many people miss.
If you take all the works of Shakespeare and reduce them to tokens and vectors, is it Shakespeare, or is it factual information about Shakespeare? It is the latter, and as much as organizations like the MLB might want to be able to copyright a fact, you simply cannot do that.
Take this one step further. If you buy the work and vectorize it, that's fine. But if you feed in the vectors for Harry Potter so many times that the model can reproduce half of the book, it becomes a problem when it spits out that copy.
And what about all the other stuff that LLMs spit out? Who owns that? Well, at present, no one. If you train a monkey or an elephant to paint, you can't copyright that work because they aren't human, and neither is an LLM.
If you use an LLM to generate your code at work, can you leave with that code when you quit? Does GPL3 or something like the Elastic Search license even apply if there is no copyright?
I suspect we're going to be talking about court cases a lot for the next few years.
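To make the distinction concrete, here is a toy sketch of what "reducing a text to tokens" can mean. This is purely illustrative: the whitespace tokenizer and pair counts below are hypothetical stand-ins for real tokenizers (e.g. BPE) and learned model weights, not anyone's actual pipeline.

    from collections import Counter

    text = "to be or not to be that is the question"

    # 1. Tokenize: the prose becomes a sequence of integer ids.
    vocab = {}
    ids = []
    for word in text.split():
        vocab.setdefault(word, len(vocab))
        ids.append(vocab[word])
    print(ids)  # [0, 1, 2, 3, 0, 1, 4, 5, 6, 7]

    # 2. What a model retains is statistics derived from many such
    #    sequences -- e.g. which tokens tend to follow which. These are
    #    facts about the text rather than the text itself, though with
    #    enough capacity such statistics can still reconstruct passages,
    #    which is the Harry Potter problem described above.
    pair_counts = Counter(zip(ids, ids[1:]))
    print(pair_counts.most_common(2))  # ("to", "be") occurs twice

The point being: the ids and counts are derived facts, but nothing about the representation by itself guarantees the original can't be recovered from it.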
Yes. Someone on this post mentioned that Switzerland allows downloading copyrighted material but not distributing it.
So things get even darker, because what counts as distribution can have a really vague definition, and maybe the AI companies will follow the law only just barely, for the sake of not getting hit with a lawsuit like this again. But I wonder if all this case did was compensate the authors this one time. I doubt we will see a meaningful change in AI companies' attitudes toward fair use and, essentially, exploiting authors.
I feel like they would try to use as much legalspeak as possible to extract as much from authors (legally) without compensating them, which I feel is unethical, but sadly the law doesn't work on ethics.
Switzerland has five main collecting societies: ProLitteris for literature and visual arts, the SSA (Société Suisse des Auteurs) for dramatic works, the SUISA for music, Suissimage for audiovisual works, and SWISSPERFORM for related rights like those of performers and broadcasters. These non-profit societies manage copyright and related rights on behalf of their members, collecting and distributing royalties from users of their works.
Note that the law specifically regulates software differently, so what you cannot do is just willy nilly pirate games and software.
What distribution means in this case is defined in the swiss law. However swiss law as a whole is in some ways vague, to leave a lot up to interpretation by the judiciary.
> compensate the authors this one time.
I would assume it would compensate the publisher. Authors often hand ownership to the publisher; there would be obvious exceptions for authors who do well.
> And what about all the other stuff that LLMs spit out? Who owns that? Well, at present, no one. If you train a monkey or an elephant to paint, you can't copyright that work because they aren't human, and neither is an LLM.
This seems too cute by half, courts are generally far more common sense than that in applying the law.
This is like saying using `rails generate model:example` results in a bunch of code that isn't yours, because the tool generated it according to your specifications.
The example is a real legal case afaik, or perhaps paraphrased from one (don’t think it was a monkey - an ape? An elephant?).
I’d guess the legal scenario for `rails generate` is that you have a license to the template code (by way of how the tool is licensed) and the template code was written by a human so licensable by them and then minimally modified by the tool.
I think you're thinking of this case [1], it was a monkey, it wasn't a painting but a selfie. A painting would have only made the no-copyright argument stronger.
[1] https://en.wikipedia.org/wiki/Monkey_selfie_copyright_disput...
I don’t think the code you get from rails generate is yours. Certainly not by way of copyright, which protects original works of authorship and so if it’s not original, it’s not copyrightable, and yes it’s been decided in US courts that non-human-authorship doesn’t count as creative.
> courts are generally far more common sense than that in applying the law.
'The Board’s decision was later upheld by the U.S. District Court for the District of Columbia, which rejected the applicant’s contention that the AI system itself should be acknowledged as the author, with any copyrights vesting in the AI’s owner. The court further held that the CO did not act arbitrarily or capriciously in denying the application, reiterating the requirement that copyright law requires human authorship and that copyright protection does not extend to works “generated by new forms of technology operating absent any guiding human hand, as plaintiff urges here.”' From: https://www.whitefordlaw.com/news-events/client-alert-can-wo...
The court is using common sense when it comes to the law. It is very explicit and always has been... That word "human" has some long standing sticky legal meaning (as opposed to things that were "property").
The question is going to be how much human intellectual input there was I think. I don't think it will take much - you can write the crappiest novel on earth that is complete random drivel and you still have copyright on it.
So to me, if you are doing literally any human review, edits, or control over the AI, then I think you'll retain copyright. There may be a risk that if somebody can show they could produce exactly the same thing from a generic prompt with no interaction, then you may be in trouble, but let's face it, should you have copyright at that point?
This is, however, why I favor stopping slightly short of full agentic development at this point. I want the human watching each step and an audit trail of the human interaction in doing it. Sure I might only get to 5x development speed instead of 10x or 20x but that is already such an enormous step up from where we were a year ago that I am quite OK with that for now.
> If you take all the works of Shakespeare and reduce them to tokens and vectors, is it Shakespeare, or is it factual information about Shakespeare?
To rephrase the question:
Is a PDF of the complete works of Shakespeare Shakespeare, or is it factual information about Shakespeare?
Reencoding human-readable information into a form that's difficult for humans to read without machine assistance is nothing new.
Like most things in law, the answers are going to come down to intent and outcome. If you distribute the PDF to other people with the intent that they can read the copyrighted works of an author, then you have distributed that author's content in violation of copyright. If on the other hand, you encrypted the entire contents of that PDF, threw away the encryption key and the published prints of the PDF as artwork of binary code, that's probably going to fall on the side of "fair use" even though the entire copyrighted work is input to and contained in your final output. Though you might get into some legal hot water if you promoted your work using the author's name, but that's more of a trademark issue than a copyright issue.
I mean, sort of. The issue is that the compression is novel. So anything post tokenization could arguably be considered value add and not necessarily derivative work.
I guess they must delete all models since they acquired the source illegally and benefitted from it, right? Otherwise it just encourages others to keep going and pay the fines later.
In a prior ruling, the court stated that Anthropic didn't train on the books subject to this settlement. The record is that Anthropic scanned physical books and used those for training. The pirated books were being held in a general purpose library and were not, according to the record, used in training.
So how did they profit off the pirated books?
But then how was the settlement amount determined, if nobody read those books and there was no financial loss?
That is something which is extremely difficult to prove from either side.
It is 500,000 books in total, so did they really scan all those books instead of using the pirated versions? Even when they did not have much money in the early phases of the model race?
The 500,000 number is the number of books that are part of the settlement. If they downloaded all of LibGen and the other sources, it was more like 7 million or more. But it is a lot of work to determine which books can legitimately be part of the lawsuit. For example, if any of the books in the download weren't copyrighted (think self-published), or weren't protected under US copyright law (maybe a book only published in Venezuela), or it isn't clear who owns the copyright, then that copyright owner cannot be part of the class. So it seems like the 500,000 number is basically the smaller set of books for which the lawyers for the plaintiffs felt they could most easily prove standing.
> Buying used copies of books, scanning them, and training on it is fine.
Buying used copies of books, scanning them, and printing them and selling them: not fair use
Buying used copies of books, scanning them, and making merchandise and selling it: not fair use
The idea that training models is considered fair use just because you bought the work is naive. Fair use is not a law to leave open usage as long as it doesn’t fit a given description. It’s a law that specifically allows certain usages like criticism, comment, news reporting, teaching, scholarship, or research. Training AI models for purposes other than purely academic fits into none of these.
Are "fantasy name generators" of the sort you find all over the place online fair use if the weighting of their generators is based on statistical information about names in fantasy novels? I would think most people would agree they're fair use, or if not in so many words, I think those people would find it pretty unfair for WotC to go around suing sites for running D&D character name generators.
Or let's talk about another form of buying copyrighted / protected content and selling the results of transforming it: emulators. The Connectix Virtual Game Station was the impetus for one of the most important lawsuits about emulation, and the ruling held that even though writing an emulator inherently involves copying copyrighted code, the result is sufficiently transformative and falls under fair use.
Buying used copies of books, scanning them, training an employee with the scans: fair use.
Unless legislation changes, model training is pretty much analogous to that. Now of course if the employee in question - or the LLM - regurgitates a copyrighted piece verbatim, that is a violation and would be treated accordingly in either case.
> Buying used copies of books, scanning them, training an employee with the scans: fair use.
Does this still hold true if multiple employees are "trained" from scanned copies at the same time?
Simultaneously I guess that would violate copyright, which is an interesting point. Maybe there's a case to be made there with model training.
Regardless, the issue could be resolved by buying as many copies as you have concurrent model training instances. It isn't really an issue with training on copyrighted work, just a matter of how you do so.
Computers aren't people. And analogies aren't laws.
Yes, but the law doesn’t exist, so until it catches up, analogies are all the legal system has to work with.
The purpose and character of AI models is transformative, and the effect of the model on the copyrighted works used in the model is largely negligible. That's what makes the use of copyrighted works in creating them fair use.
It fits the most basic fair use: reading them. Current "training" can be considered a gross form of reading.
Settlement Terms (from the case pdf)
1. A Settlement Fund of at least $1.5 Billion: Anthropic has agreed to pay a minimum of $1.5 billion into a non-reversionary fund for the class members. With an estimated 500,000 copyrighted works in the class, this would amount to an approximate gross payment of $3,000 per work. If the final list of works exceeds 500,000, Anthropic will add $3,000 for each additional work.
2. Destruction of Datasets: Anthropic has committed to destroying the datasets it acquired from LibGen and PiLiMi, subject to any legal preservation requirements.
3. Limited Release of Claims: The settlement releases Anthropic only from past claims of infringement related to the works on the official "Works List" up to August 25, 2025. It does not cover any potential future infringements or any claims, past or future, related to infringing outputs generated by Anthropic's AI models.
Don't forget: NO LEGAL PRECEDENT! Which means anybody suing has to start all over. You only settle in this scenario/point if you think you'll lose.
Edit: I'll get ratio'd for this, but it's the exact same thing Google did in its lawsuit with Epic. They delayed while the public and courts focused on Apple (oohh, EVIL Apple); Apple lost, and Google settled at a disadvantage before they had a legal judgment that couldn't be challenged later.
> You only settle in this scenario/point if you think you'll lose.
Or because you already got the judgement you wanted. Remember, Anthropic's training of the AI was determined to be fair use for all the legally acquired items, which Anthropic claims is their current acquisition model anyway. If we assume that's true for the sake of argument, there's no point in fighting a battle on the remaining part unless they have something to gain by it. Since they're not doing that anymore, they don't gain, and they run a very high risk of losing more. From a purely PR perspective, this is the right move.
I thought the courts decided against Google in Google vs Epic? It was even appealed and upheld. Are you thinking of another case? https://en.m.wikipedia.org/wiki/Epic_Games_v._Google
There is already a mountain of legal precedent that you can't just download copyrighted work. That's what this lawsuit is about. Just because one of the parties is Anthropic doesn't mean this is some new AI thing.
A full case is many more years of suits and appeals with high risks, so its natural to settle which obviously means no precedent
Or, if you think your competition, also caught up in the same quagmire, stands to lose more by battling for longer than you did?
A valid touché! I still think Google went with delaying tactics as public and other pressures forced Apple's case forward at greater velocity. (Edit: implicit "and then caved when Apple lost"... because they're the same case)
Won't Facebook just get sued for the same thing now, and maybe that will set precedent?
I thought Meta had been sued and forgiven, as it was imperative that they do it to make money, and faced no charge.
This is what is confusing me here. I did not really follow any case, but as far as I remember, Meta seems to have gotten away with pirating books, while Anthropic needs to pay $1.5B?
So they can also keep models trained on the datasets? That seems pretty big too, unless the half life of models is so low it doesn't matter.
It's a separate suit being waged against Meta and OpenAI etc.
There's piracy, then there's making available a model to the public which can regurgitate copyrighted works or emulate them. The latter is still unsettled
So... it would be a lot cheaper to just buy all of the books?
Yes, much.
And they actually went and did that afterwards. They just pirated them first.
Where can I find source that says Anthropic bought the pirated books afterwards? I haven't seen this in any official document.
Also, do we know if the newer models were trained without the pirated books?
Thanks for the link.
Among several places where judge mentions Anthropic buying legit copies of books it pirated, probably this sentence is most relevant: "That Anthropic later bought a copy of a book it earlier stole off the internet will not absolve it of liability for the theft but it may affect the extent of statutory damages."
But the document does not say Anthropic bought EVERY book it pirated. Other sections in the document also don't explicitly say that EVERY pirated book was later purchased.
I stopped using Claude when this case came to light. If the newer Claude models don't use pirated books, I can resume using it.
When you say, "I'm pretty sure we do...", do you mean that pirated books were used, or were they not used?
What is the HN term for this? "Bootstrapping" your start up? Or is it "growth-hacking" it?
Bookstrapping
The latter (I know you're joking, but...)
Bootstrapping in the startup world refers to starting a startup using only personal resources instead of using investors. Anthropic definitely had investors.
That might be practically impossible given the number of rights holders worldwide
The permission to buy them was already settled by Google Books in the 00's.
They did, but only after they pirated the books to begin with.
Few. This settlement potentially weakens all challenges to the use of copyrighted works in training LLM's. I'd be shocked if behind closed doors there wasn't some give and take on the matter between Executives/investors.
A settlement means the claimants no longer have a claim, which means if they're also part of, say, the New York Times affiliated lawsuit, they have to withdraw. A neat way of kneecapping a country-wide decision that LLM training on copyrighted material is subject to punitive measures, don't you think?
That's not even remotely true. Page 4 of the settlement describes released claims which only relate to the pirating of books. Again, the amount of misinformation and misunderstanding I see in copyright related threads here ASTOUNDS.
I'm not sure how your confusion about what's going on is being projected onto me. What about "also"? What about "adjacent"?
> In my experience and training in a fintech corp: accepting a settlement in any suit weakens your defense, but prevents a judgement and future claims for the same claims from the same claimants (a la double jeopardy). So, again, at minimum, this prevents an actual judgement. Which likely would be positive for the NYT (and adjacent) cases.
Okay? I'm an IP litigator and you clearly have no idea what you're talking about. The only thing left to try in this case was the book library piracy. Alsup's fair use decision is just as relevant, is not mooted by the settlement, and will be cited by anyone that thinks it's favorable to them.
Thank you. I assumed it would be quicker to find the link to the case PDF here, but your summary is appreciated!
Indeed, it is not only the payout, but also the destruction of the datasets. Although the article does quote:
> “Anthropic says it did not even use these pirated works,” he said. “If some other generative A.I. company took data from pirated source and used it to train on and commercialized it, the potential liability is enormous. It will shake the industry — no doubt in my mind.”
Even if true, I wonder how many cases we will see in the near future.
Only 500,000 copyrighted works?
I was under the impression they had downloaded millions of books.
Individual authors had to join the class action lawsuit, sadly. They were not all automatically registered for each violation.
I’m an author, can I get in on this?
I had the same question.
It looks like you'll be able to search this site if the settlement is approved:
> https://www.anthropiccopyrightsettlement.com/
If your work is there, you qualify for a slice of the settlement. If not, you're outta luck.
I didn't see a way to search for my book there, but there definitely is an author intake form.
This site references Meta, but the training corpus probably has some overlap? Maybe?
https://www.theatlantic.com/technology/archive/2025/03/searc...
I'm an author. Can I get anthropic stock instead?
If you are an author here are a couple of relevant links:
You can search LibGen by author to see if your work is included. I believe this would make you a member of the class: https://www.theatlantic.com/technology/archive/2025/03/searc...
If you are a member of the class (or think you are) you can submit your contact information to the plaintiff's attorneys here: https://www.anthropiccopyrightsettlement.com/
Wild - I searched my name out of curiosity and my PhD research papers turned up. Worth submitting my contact details I guess
That may depend on whether there is copyright on that work.
Thank you! I hadn't even thought that I could be affected, but I have written some programming books, and some of them show up on libgen. I've submitted my contact info, maybe something will come out of this...
Thank you for posting this!
I suspected my work was in the dataset and it looks like it is! I reached out via the form.
Good luck! Hope you get a payout!
It’s pretty incredible that the vast majority of authors will make more money for their books from this settlement than they ever have from selling their books.
God bless capitalistic America.
wow, I found 8 of my books!
8 works x $3k/work = $24k is not a bad payout!
I can't help but feel like this is a huge win for Chinese AI. Western companies are going to be limited in the amount of data they can collect and train on, and Chinese (or any foreign AI) is going to have access to much more and much better data.
The West can end the endless pain and legal hurdles to innovation by limiting copyright. They can do it if there is the will to open the gates of information to everyone. The duration of 70 years after the death of the author, or 90 years for companies, is excessively long. It should be ~25 years. For software it should be 10 years.
And if AI companies want recent stuff, they need to pay the owners.
However, the West wants to infinitely enrich the lucky old people and companies who benefited from the lax regulations at the start of the 20th century. Their people chose not to let the current generations acquire equivalent wealth, at least not without the old hags getting their cut too.
The vast majority of books don't generate any profits past the first few years, so I prefer Lawrence Lessig's proposal of copyright renewal at five-year intervals with a fee. Under this scheme, most books would enter the public domain after five years.
https://www.econlib.org/library/Columns/y2003/Lessigcopyrigh...
Lessig: Not for this length of time, no. Copyright shouldn’t be anywhere close to what it is right now. In my book I proposed a system where you’d have to renew after every five years and you get a maximum term of 75 years. I thought that was pretty radical at the time. The Economist, after the Eldred decision, came out with a proposal—let’s go back to 14 years, renewable to 28 years. Nobody needs more than 14 years to earn the return back from whatever they produced.
Won't people just wait 5 years to buy the book?
Lessig’s proposal is excellent. A long time ago I wrote 10 books for publishers like McGraw-Hill, J Wiley, Springer-Verlag, etc.
For many reasons I switched to writing using a Creative Commons license using Lulu, LeanPub, and my own web site for distribution. This has been a win for me economically, it feels good to add to the commons, and it is fun.
I think western companies will be just fine -- Anthropic is settling because they illegally pirated books from LibGen back in 2021 and subsequently trained models on them. They realized this was an issue internally and pivoted to buying books en masse and scanning them into digital formats, destroying the original copies in the process (they actually hired a former lead in the Google Books project to help them in this endeavor!). And a federal judge ruled a couple months ago that training on these legally-acquired scanned copies does not constitute fair use -- that the LLM training process is sufficiently transformative.
So the data/copyright issue that you might be worried about is actually completely solved already! Anthropic is just paying a settlement here for the illegal pirating that they did way in the past. Anthropic is allowed to train on books that they legally acquire.
And sure, Chinese AI companies could probably scrape from LibGen just like Anthropic did without getting in hot water, and potentially access a bit more data that way for cheap, but it doesn't really seem like the buying/scanning process really costs that much in the grand scheme of things. And Anthropic likely already has legally acquired most of the useful texts on LibGen and scanned them into its internal library anyways.
(Furthermore, the scanning setup might actually give Anthropic an advantage, as they're able to digitize more niche texts that might be hard to find outside of print form)
>And a federal judge ruled a couple months ago that training on these legally-acquired scanned copies does not constitute fair use -- that the LLM training process is sufficiently transformative.
You mean does constitute fair use?
It's easier for one company to digitize and sell/share than for many companies to do it individually.
Western companies will be fine but sharing data in ways that would be illegal in the US does help other companies outside the US.
This isn’t a race to the bottom. They could have bought these books instead of pirating them.
It's naive to think Chinese models have a free pass. Local censorship, language/data biases, and export restrictions cut both ways.
True enough, but training on synthetic data now seems to be pushing SOTA.
Good, if AI is such a great thing, why wouldn't we want the 1.4+ billion people of China to have it also?
But most marginal training of Anthropic, OpenAI and Google models is done on LLM paraphrased user data on those platforms. That user data is proprietary and obviously way more valuable than random books.
Wait so they raised all that money just to give it to publishers?
Can only imagine the pitch: yes, please give us billions of dollars. We are going to make a huge investment, like paying off our lawsuits.
From the article:
> Although the payment is enormous, it is small compared with the amount of money that Anthropic has raised in recent years. This month, the start-up announced that it had agreed to a deal that brings an additional $13 billion into Anthropic’s coffers. The start-up has raised a total of more than $27 billion since its founding in 2021.
Maybe small compared to the money raised, but it is in fact enormous compared to the money earned. Their revenue was under $1b last year and they projected themselves as likely to make $2b this year. This payout equals their average yearly revenue of the last two years.
I thought they were projecting 10B, and said a few months ago they had already grown from a 1B to a 4B run rate.
Here is an article that discusses why those numbers are misleading[1]. From a high level, "run rate" numbers are typically taking a monthly revenue number and multiplying it by 12 and that just isn't an accurate way to report revenue for reasons outlined in that article. When it comes to actual projections for annual revenue, they have said $2b is the most likely outcome for their 2025 annual revenue.
Personally I believe that in the ideal scenario (for the fed govt.) these firms will develop the tech. The fed will then turn around and want those lawsuits to win, effectively gutting the firms financially and putting the tech in the hands of the public sector.
You never know, it's a game of interests and incentives. One thing is for sure: does the fed want the private sector to own and control a technology of this kind? Nope.
But what are the profits? 1.5B is a huge amount no matter what, especially if you're committing to destroying the datasets as well. That implies you basically paid 1.5B for a few years of additional training data, a huge price.
maybe I’m bad at math but paying >5% of your capital raised for a single fine doesn’t seem great from a business perspective
If they are going to be making Billions in net income every year going forward, as many years as analysts can make projections for, and using these works allowed them to GTM faster/quicker/gain advantage against competitors, then it is quite great from a business prospective.
If it allowed them to move faster than their competition, I imagine management would consider it money well spent. They are expected to spend absurd amounts of money to get ahead. They were never expected to spend money efficiently if it meant taking additional months/years to get results.
In that case, this was one of the most expensive no-ops in history.
It's VC money, I don't think anyone believes it's real money
If it weren't, why are we taking it as legal tender? I certainly wouldn't mind being paid in VC money
Yeah it does, cost of materials is way more than that if they were building something physical like a new widget or something. Same idea, they paid for their raw materials.
The money they don't pay out in settlements goes to Nvidia.
You're joking, but that's actually a good pitch. There was a significant legal issue hanging over their heads, with some risk of a potentially business-ending judgment down the line. This makes it go away, which makes the company a safer, more valuable investment. Both in absolute terms and compared to peers who didn't settle.
It just resolves their liability with regards to books they purported they did not even train the models on, which is all that was left in this case after summary judgment. Sure the potential liability was company ending, but it's all a stupid business decision when it is ultimately for books they did not even train on.
It basically does nothing for them besides that. Given the split decisions so far, I'm not sure what value the Alsup decision is going to bring to the industry, moving forward, when it's in the context of books that Anthropic physically purchased. The other AI cases are generally not fact patterns where the LLM was trained with copyrighted materials that the AI company legally purchased copies of.
Isn't that how the whole system operates? Everyone is a conduit to allow rich people to enrich themselves further. The amount and quality of opportunities any individual receives are proportional to how well it serves existing capital.
So long as there is an excuse to justify money flows, that's fine, big capital doesn't really care about the excuse; so long as the excuse is just persuasive enough to satisfy the regulators and the judges.
Money flows happen independently, then later, people try to come up with good narratives. This is exactly what happened in this case. They paid the authors a lot of money as a settlement and agreed on a narrative which works for both sets of people; that training was fine, it's the pirating which was a problem...
It's likely why they settled; they preferred to pay a lot of money and agree on some false narrative which works for both groups rather than setting a precedent that AI training on copyrighted material is illegal; that would be the biggest loss for them.
> Isn't that how the whole system operates? Everyone is a conduit to allow rich people to enrich themselves further. The amount and quality of opportunities any individual receives are proportional to how well it serves existing capital.
Yes, and FWIW that's very succinctly stated.
Sort of.
Some individuals in society find a way through that and figure out a way to strategically achieve their goals. Rare though.
They wanted to move fast and break things. No one made them.
After their recent change in tune to retain data for longer and to train on our data, I deleted my account.
Try to do that. There is no easy way to delete your account. You need to reach out to their support via email. Incredibly obnoxious dark pattern. I hate OpenAI, but everything with Anthropic also smells fishy.
We need more and better players. I hope that XAi will give them all some good competition, but I have my doubts.
X doesn't smell fishy to you??
I think it's fair to call out a dark pattern for account deletion (which, for better or worse, is common practice) - but the data training and data retention thing can both be disabled...I was much more surprised that they DIDN'T train on data as long as they did, when every other LLM provider was sucking in as much data as they could (OpenAI, Google, Meta, and xAI - although Meta gets a pass for providing the open-weight models in my head).
Anthropic has made AI safety a central pillar of their ethos and have shared a lot of information about what they're doing to responsibly train models...personally I found a lot of corporate-speak on this topic from OpenAI, but very little information.
Their logo is a butt hole.
Everything talks about settlement to the 'authors'; is that meant to be shorthand for copyright holders? Because there are a lot of academic works in that library where the publisher holds exclusive copyright and the author holds nothing.
By extension, if the big publishers are getting $3000 per article, that could be a fairly significant windfall.
Dunno if this matters, but I thought the copyright always remains with the creator/author, but they end up assigning the rights contractually. At least generally for books. Movies will be copyrighted by the studio.
Kinda like how patents will state the human “inventor” but Apple or whichever corp is assigned the rights.
very unsurprisingly, the New York Times is going to frame this as a win for "the little guy" when in reality it's just multi-billion dollar publishers, with a long, rich history of their own exploitative practices, hanging on for dear life against generative AI
This is sad for open source AI, piracy for the purpose of model training should also be fair use because otherwise only the big companies who can afford to pay off publishers like Anthropic will be able to do so. There is no way to buy billions of books just for model training, it simply can't happen.
Fair use isn't about how you access the material, its about what you can do with it after you legally access it. If you don't legally access it, the question of fair use is moot.
Hence, "should"
It’s the sign of a healthy economy when we respect the creation of content.
IP protections are in the US Constitution. Has the US been in decline since the late 1700s?
This implies training models is some sort of right.
No, it implies that having the power to train AI models exclusively consolidated into a handful of extremely powerful companies is bad.
It implies that people want everyone to do this when it's clear no one should do it. I'm not exactly a fan of "this isn't profitable for small businesses to steal from so we should make it so everyone should steal".
> only big companies benefit from our current copyright regime
You’ve never authored, created, or published something? Never worked for a company that sells something protected by copyright?
It isn't black and white, you can be against some aspects of copyright and be for some others.
It’s very funny when people declare “piracy isn’t stealing”, as if the metaphor of piracy is all about singing and drinking.
Copying and distributing works isn’t identical to theft (deliberately depriving someone of their property), but you’re enjoying someone’s work without compensating them, so it isn’t totally unlike depriving them of something.
I guess it depends how you feel about refusing to pay a window washer. Or indeed you not being paid by your employer. It isn’t theft, but someone is clearly stiffing someone else.
As for only big companies benefitting from the copyright regime… seems like an ideological assumption. I know plenty of authors and they are quite happy having legal protections around their work which means they can earn from their labour.
It’s shifts like that, going from a copyleft to a copyright crowd, that make me increasingly doubt that HN is as authentic as it was years ago. Another weird one is the socialism lean instead of the more libertarian ideals of many commentators. I think it might be a generational issue, and being 50+ years old makes me an old timer!
Isn’t that already the case, with the capacity required to train these models?
That's true. Those handful of companies shouldn't get to do it either.
Curious: are you trolling? Or do you really think doing math shouldn’t be a priori allowed to humans?
Do you seriously think it's a matter of doing math in a vacuum? The issue is pirating books, using them as inputs to math which is then resold as an LLM without compensation to the authors who have clearly copyrighted material. I can't tell if _you're_ trolling.
> The issue is pirating books, using them as inputs to math which is then resold as an LLM without compensation to the authors who have clearly copyrighted material.
No one seems to be able to explain what exactly the issue is here. How are authors harmed by LLMs (where such harm is often used to understand the range of copyright in lawsuits)? I don't see anyone replacing authors and their works, such as JK Rowling being harmed just because people can output Harry Potter esque texts. And if that's the case, well, fan fiction has been around for a long time, with no LLMs in its writing.
No. It means model training is transformative enough to be fair use. They should just be asked to pay them back plus reimbursement/punishment, say pay 10x the price of the pirated books
I wonder how much it would cost to buy every book that you'd want to train a model.
500,000 x $20 = $10 million
Obviously there would be handling costs + scanning costs, so that’s the floor.
Maybe $20 million total? Plus, of course, the time it would take to execute.
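A quick back-of-envelope in Python, using only the rough numbers assumed in this thread (the per-book purchase and scanning costs are guesses, not reported figures), with the settlement's $3,000-per-work rate for comparison:

    BOOKS = 500_000
    PRICE_PER_BOOK = 20          # $ per used copy (guess from above)
    SCAN_PER_BOOK = 20           # $ handling + scanning (guess from above)
    SETTLEMENT_PER_WORK = 3_000  # $ per work under the settlement

    purchase = BOOKS * PRICE_PER_BOOK                        # $10,000,000
    buy_and_scan = BOOKS * (PRICE_PER_BOOK + SCAN_PER_BOOK)  # $20,000,000
    settlement = BOOKS * SETTLEMENT_PER_WORK                 # $1,500,000,000

    print(f"buy:        ${purchase:,}")
    print(f"buy + scan: ${buy_and_scan:,}")
    print(f"settlement: ${settlement:,}")  # ~75x the buy-and-scan floor

Even doubling every guess, the legal-acquisition route comes out two orders of magnitude cheaper than what the settlement cost.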
The real expense is in the data centers/hardware.
The cost of the books is negligible in comparison.
Somewhere a gritty warehouse in a developing country is receiving shipping containers of old books, massive teams manually flipping each page as a 2nd hand Canon digicam takes a pic of each page, to be OCR’d by the same AI being trained.
Once the book is done, 99% of them go into the furnace at the district heating boiler next door. The other 1% back to a developed country for resale.
I don't know if I agree with it, but you could argue that if a model was built for purely academic purposes, and then used for purely academic purposes, it could meet requirements for fair use.
Setting aside whether or not I think it should be fair use, you’re only going to be training a new foundation model these days if you have billions of dollars to spend on the endeavor anyway. Nobody is training Llama 5 in their garage.
Millions, not billions, as DeepSeek and others have shown.
(Half joking but) I wonder if musicians need to worry if they learned to play by listening to cassette mixtapes.
This is a settlement. It does not set a precedent nor even admit to wrongdoing.
> otherwise only the big companies who can afford to pay off publishers like Anthropic will be able to do so
Only well funded companies can afford to hire a lot of expensive engineers and train AI models on hundreds of thousands of expensive GPUs, too.
Something tells me many the grassroots LLM training people are less concerned about legality of their source training set than the big companies anyway.
I wish the hn rules were more flexible because I would write the best comment to you right now.
After the book publishers burned Google Books' Library of Alexandria, they are now making it impossible to train an LLM unless you engage in the medieval process of manually buying paper copies of works just to scan and destroy them...
If they wanted a copyright-free world, maybe they should publish all their models as copyright-free as well. But they are not doing that, are they?
don't come up with logics defending the proletarian /s
There are nondestructive methods of scanning. I bought an edge scanner to scan collectible public domain books for Project Gutenberg.
For recent books, they could buy digital versions and use them for training, though.
I was wondering about this, but digital versions are typically DRM-encumbered and are actually a license (not a true purchase) whose terms probably don't allow this. The court's decision was that training is fair use, but in practice, it seems many avenues are blocked.
It reminds me of the theoretically public beaches that are blocked off by privately owned land.
DRM is irrelevant. That's only if you want to efficiently extract the text.
If you point a camera at an ebook reader, with a little motor to tap "next" on the screen, that's still easier than scanning physical books.
The reason companies aren't using ebooks is that all the publishers and ebook companies make you click through a license stating that "this book is for personal use" (paraphrased).
Is this legal: scan billions of pirated books, train an LLM on them, and generate a billion public domain books with it, so that nobody ever needs copyrighted books anymore?
Also, if there is a software library with an annoying Stallman-style license, can one use an LLM to generate a compatible library but in the public domain or with a commercial license? So that nobody needs to respect software licenses anymore? Can we also generate a free Photoshop, Linux kernel, and Windows this way?
See kids? It's okay to steal if you steal more money than the fine costs.
They're paying $3000 per book. It would've been a lot cheaper to buy the books (which is what they actually did end up doing too).
I'm sure the lesson they learned isn't to pay for the books in the future, but to not get caught in the future.
In the business where all the value is in data, all they lost is a bit of money.
That’s not enough punitive damages for the crime (which is egregious and absolutely deplorable).
I wonder how many of the people saying this about copyright infringement today were complaining about the ridiculously harsh enforcement of it 10-15 years ago. They've just paid $1.5B for torrenting.
That metaphor doesn't really work. It's a settlement, not a punishment, and this is payment, not a fine. Legally it's more like "The store wasn't open, so I took the items from the lot and paid them later".
It's not the way we expect people to do business under normal circumstances, but in new markets with new products? I guess I don't see much actually wrong with this. Authors still get paid a price they were willing to accept, and Anthropic didn't need to wait years to come to an agreement (again, publishers weren't actually selling what AI companies needed to buy!) before training their LLMs.
The Silicon Valley dream: If you’re not getting sued left and right by people with every right to, you didn’t disrupt hard enough.
It will be interesting to see how this impacts the lawsuits against OpenAI, Meta, and Microsoft. Will they quickly try to settle for billions as well?
It’s not precedent setting but surely it’ll have an impact.
I’m sure this’ll be misreported and wilfully misinterpreted because of the current fractious state of the AI discourse, but given the lawsuit was to do with piracy, not the copyright-compliance of LLMs, and in any case, given they settled out of court, thus presumably admit no wrongdoing, conveniently no legal precedent is established either way.
I would not be surprised if investors made their last round of funding contingent on settling this matter out of court precisely to ensure no precedents are set.
Anthropic certainly seems to be hoping that their competitors will have to face some consequences too:
>During a deposition, a founder of Anthropic, Ben Mann, testified that he also downloaded the Library Genesis data set when he was working for OpenAI in 2019 and assumed this was “fair use” of the material.
Per the NYT article, Anthropic started buying physical books in bulk and scanning them for their training data, and they assert that no pirated materials were ever used in public models. I wonder if OpenAI can say the same.
Maybe, though this lawsuit is different in respect to the piracy issue. Anthropic is paying the settlement because they pirated the books, not because training on copyrighted books isn’t fair use which isn’t necessarily true with the other cases.
Didn't Meta do the exact same thing?
https://www.tomshardware.com/tech-industry/artificial-intell...
That was my first thought. While not legal precedent, it does sort of open the floodgates for others.
One thing that comes to mind is...
Is there a way to make your content on the web "licensed" in a way where it is only free for human consumption?
I.e., effectively making the use of AI crawlers piracy, and thus subject to the same kind of penalties as here?
Yes to the first part. Put your site behind a login wall that requires users to sign a contract to that effect before serving them the content... get a lawyer to write that contract. Don't rely on copyright.
I'm not sure to what extent you can specify damages like these in a contract, ask the lawyer who is writing it.
Contracts generally require an exchange of consideration (something of value, like money).
If you put a “contract” on your website that users click through without paying you or exchanging value with you and then you try to collect damages from them according to your contract, it’s not going to get you anywhere.
The consideration the viewer received was access to your private documents.
The consideration you received was a promise to refrain from using those documents to train AI.
I'm not a lawyer, but by my understanding of contract law consideration is trivially fulfilled here.
I'd argue you don't actually want this! You're suggesting companies should be able to make web scraping illegal.
That curl script you use to automate some task could become infringing.
>I'd argue you don't actually want this! You're suggesting companies should be able to make web scraping illegal.
At this point, we do need some laws regulating excessive scraping. We can't have the internet grind to a halt over everyone trying to drain it of information.
The GP was talking about web scraping, not "excessive web scraping". It's an important difference.
Maybe some kind of captcha like system could be devised that could be considered a security measure under the DMCA and not allowed to be circumvented. Make the same content available under a licence fee through an API.
DMCA is a US thing, and people in other countries don't have to follow it.
No. Neither legally nor technically possible.
Ummm.. terms and conditions?
I'm sure one can try, but copyright has all kinds of oddities and carve-outs that make this complicated. IANAL, but I'm fairly certain that, for example, if you tried putting in your content license "Free for all uses public and private, except academia, screw that ivory tower..." that's a sentiment you can express but universities are under no obligation legally to respect your wish to not have your work included in a course presentation on "wild things people put in licenses." Similarly, since the court has found that training an LLM on works is transformative, a license that says "You may use this for other things but not to train an LLM" couldn't be any more enforceable than a musician saying "You may listen to my work as a whole unit but God help you if I find out you sampled it into any of that awful 'rap music' I keep hearing about..."
The purpose of the copyright protections is to promote "sciences and useful arts," and the public utility of allowing academia to investigate all works(1) exceeds the benefits of letting authors declare their works unponderable to the academic community.
(1) And yet, textbooks are copyrighted and the copyright is honored; I'm not sure why the academic fair-use exception doesn't allow scholars to just copy around textbooks without paying their authors.
So… when can I expect my cheque?
Seriously, how will this money propagate to the authors (if at all) or will it just stay with the publishers?
Maybe I would think differently if I were a book author, but I can't help but think that this is ugly but actually quite good for humanity in some perverse sense. I will never, ever read 99.9% of these books, presumably, but I will use Claude.
I feel like there could be a business opportunity for authors here: selling their books to LLM companies. For the LLM companies, it could be cheaper than a lawsuit, and the authors get paid.
I hope this leads to the big AI companies pushing for copyright reform that makes access to DRM-free digital content better for everyone.
I wonder who will be the first country to make an exception to copyright law for model training libraries to attract tax revenue like Ireland did for tech companies in the EU. Japan is part of the way there, but you couldn't do a common crawl type thing. You could even make it a library of congress type of setup.
This is already a thing in several places.
The EU has copyright exemptions for AI training. You don't need to respect opt-outs if you are doing research.
South Korea and Japan have some exemptions too, I think?
Singapore has very strong copyright exemptions for AI training. You can completely ignore opt-outs legally, even if doing it commercially.
Just search up "TDM laws globally".
So could they have library genesis on a local server and other pirate sources and use that for training data then? That is the level I'm speaking of, much like common crawl and the reddit archive
As long as you're not distributing, it's legal in Switzerland to download copyrighted material. (Switzerland was on the naughty US/MPAA list for a while, might still be)
Is it distribution, though, if someone trains a model in Switzerland by downloading copyrighted material, training AI on it, and then distributing the model...
Or what if they're not even distributing the model, but rather distributing the outputs of the LLM (a closed-source LLM like Anthropic's)?
I am genuinely curious whether there is some gray area that might be exploited by AI companies, as I am pretty sure that they don't want to pay 1.5B dollars yet still want to exploit the works of authors (let's call a spade a spade).
Using copyrighted material to train AI is a legal grey zone. The NYT vs OpenAI case is litigating this. The Anthropic settlement here is about how the material was obtained. If OpenAI wins their case and Switzerland rules the same way, I don't think there would be a problem.
This might (I think) go down as one of the most influential court cases, then.
We really are getting at some metaphysical/philosophical questions, and maybe we will one day arrive at a question that just can't be answered (I think this is pretty close, right?). Then AI companies would do things freely without being held accountable, since sure, you could take it to the courts, but how would you come to a decision...?
Another question, though:
So let's say that the NYT vs OpenAI case is going on; while they are litigating, could OpenAI still continue doing the same thing?
How do legal penalties and settlements work internationally? Are entities in other countries somehow barred from filing similar suits with more penalties?
This was a very tactical decision by Anthropic. They have just received Series F funding, and they can now afford to settle this lawsuit.
OpenAI and Google will follow soon now that the precedent has been set, and will likely pay more.
It will be a net win for Anthropic.
As a published author who had works in the training data, can I take my settlement payout in the form of Claude Code API credits?
TBH I'm just going to plow all that money back into Anthropic... might as well cut out the middleman.
I wonder if Anthropic's lawyers have enough of a sense of humor to take you up on that if you sent them an email asking...
Illegal with a fine is legal with a fee.
Silicon Valley's unofficial motto for the last 15 years
I think that one under-discussed effect of settlements like this is the additional tax on experimentation. The largest players can absorb a $1.5B hit or negotiate licensing at scale. Smaller labs and startups, which often drive breakthroughs, may not survive the compliance burden.
That could push the industry toward consolidation: fewer independent experiments, more centralized R&D inside big tech. I feel this might slow the pace of unexpected innovations and increase dependence on incumbents.
This def. raises the question: how do we balance fair compensation for creators with keeping the door open for innovation?
"That could push the industry toward consolidation"
Based on history this is not a possibility but a certainty.
The larger players - who grew because of limited regulations - will start supporting stricter regulation and compliance structures in order to increase the barrier of entry with the excuse of "Oh we learned our lesson, you are right". The hypocrisy is crazy but it makes sense from a capitalistic perspective.
That is part of the "first mover advantage". Sometimes that means operating and experimenting in grey zones before they become regulated.
The European, and especially German, approach of regulating pre-emptively might be more fair, but apparently it also stifles innovation, as we can observe: almost no significant players from Europe or Germany.
> the law allowed the company to train A.I. technologies using the books because this transformed them into something new.
Unless, of course, the transformation malfunctions and you get the good old verbatim source, with many examples compiled in similar lawsuits.
This notably wasn't one of the allegations levied against Anthropic, as Claude was accompanied by software that filtered any infringing outputs. From the relevant opinion finding Anthropic's use of the books to be fair use:
> When each LLM was put into a public-facing version of Claude, it was complemented by other software that filtered user inputs to the LLM and filtered outputs from the LLM back to the user. As a result, Authors do not allege that any infringing copy of their works was or would ever be provided to users by the Claude service.
(from Bartz v. Anthropic in the Northern District of California)
Wooo, I sure could use $3k right now and I've got something in the pirate libraries they scraped. Nice.
I get the "Welcome to nginx!" error when I try to visit the archive.ph site, here is an archive.org version: https://web.archive.org/web/20250906042130/http://web.archiv...
This settlement highlights the growing pains of the AI industry as it scales rapidly. While $1.5B is significant, it's a fraction of Anthropic's valuation and funding. It underscores the need for better governance in AI development to avoid future legal pitfalls. Interesting to see how this affects their competition with OpenAI.
"$3,000 per work" seems like an incredibly good deal to license a book.
If the case is about piracy rather than use (which is I think the case?), wouldn’t the comparison be to buying all the books? $3000 each would be a pretty bad deal for that.
$3000 per work isn't a bad price. It seems insulting to the copyright holder.
It's better to ask for forgiveness than for permission.
Taken right from the VC's handbook.
What about OpenAI and Meta? Are they going to face similar lawsuits?
That's the worst AI news I've ever read.
Even mighty AI with billions must kneel to the copyright industry. We are forever doomed. Human culture will never be free from the grasp of rent seeking.
So this is a straight-up victory for Anthropic, right?
They pay out (relative) chump change as a penalty for explicitly pirating a bunch of ebooks, and in return they get a ruling that they can train on copyrighted works forever, for the purchase price of the book (not the price that would be needed to secure the rights!)
I thought the opposite - they set a precedent indicating that reproduction of a copyrighted text by an LLM is infringement. If authors refuse to sell to them (via legal terms indicating LLMs aren't allowed), it's infringement. No?
I'd be curious to hear from a legal professional...
From a systems design perspective, $3,000 per book makes this approach completely unscalable compared to web scraping. It's like choosing between an O(n) and an O(n²) algorithm - legally compliant data acquisition has fundamentally different scaling characteristics than the 'move fast and break things' approach most labs took initially.
Isn't a flat price per book quite plainly O(n)? If not, what's n?
More of a large difference in constant factor, like a galactic algorithm for data trawling.
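To make that concrete, here's a toy sketch: both strategies are linear in the number of books; only the per-item constant differs. The per-book prices below are invented for illustration, not actual figures.

    # Toy cost model: both acquisition strategies are O(n) in the number
    # of books; what differs is the per-item constant factor.

    def licensed_cost(n_books: int, per_book: float = 3_000.0) -> float:
        """Settlement-style flat price per work: linear, large constant."""
        return n_books * per_book

    def scraped_cost(n_books: int, per_book: float = 0.01) -> float:
        """Bulk downloading: still linear, tiny constant."""
        return n_books * per_book

    n = 500_000
    print(f"licensed: ${licensed_cost(n):,.0f}")  # licensed: $1,500,000,000
    print(f"scraped:  ${scraped_cost(n):,.0f}")   # scraped:  $5,000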
I don't know if anyone has actually read the article or the ruling, but this is about pirating books.
Anthropic went back and bought->scanned->destroyed physical copies of them afterward... but they pirated them first, and that's what this settlement is about.
The judge also said:
> “The training use was a fair use,” he wrote. “The technology at issue was among the most transformative many of us will see in our lifetimes.”
So you don't need to pay $3,000 per book you train on unless you pirate them.
I agree; this is very gray, IMO. E.g., books in India have cheap EEE editions compared to the ones in the US/Europe, so a lab could pre-process the data in India and then compile it in the US. Does that save them from piracy rules and reduce cost as well?
I mean relative to the cost of pre-training, books are going to be cheap even if you buy them in the US (as demonstrated by the fact Anthropic bought them after)
For post-training, other data sources (like human feedback and/or examples) are way more expensive than books
Any models trained on the ill-gotten data should now be public domain.
It is a very good deal for them: they did not have to acquire the books, they had them in a very convenient format (no digitization), they saved tons of time (5+ years), they got access to rare books, and the LLM is not considered a derived work, when it clearly is one.
It doesn't set precedent, but the message to other AI companies is clear: if you're going to bet your model on gray-area data, have a few billion handy for settlements
So the article notes that Anthropic states they never publicly released a frontier model trained on the downloaded copyrighted material. So were Claude 2 and 3 trained only on legally purchased and scanned books, or do they now use a different training pipeline that does not rely on books at all?
I assumed they were literally just lying.
it sounds like the former
OT: Is anybody else seeing that Datadome is blocking their IP?
I haven't had this in a while, but I always hate it when I'm blocked by Cloudflare/Datadome/etc.
What about the neural networks already fed with those books? If the court chooses to protect the writers, those models should be deleted and retrained with all of this material removed.
I do not believe authors will see any of this money. I will change my mind when I see an email or check.
I don’t understand how training an LLM on a book and then selling its contents via subscriptions is fine but using a probabilistic OCR to read a book and then selling its contents is a crime that deserves jail time.
It's not a crime. It is a civil lawsuit.
(Sorry, meta question: how do you get the "Also <link> <link>..." text to appear below the title and above the comment box in a submission? The text field on the "submit" page creates a user comment when the "url" field is also filled. I am missing something.)
I wonder how many authors will see real money out of this (if any). The techbros prayed to the new king of America with the best currency they had, money, so the king may intervene, as he has many times.
They also agreed to destroy the pirated books. I wonder how large of a portion of their training data comes from these shadow libraries, and if AI labs in countries that have made it clear they won't enforce anti-piracy laws against AI companies will get a substantial advantage by continuing to use shadow libraries.
Perhaps they'll quickly rent the whole contents of a few physical libraries and then scan them all
They already, prior to this lawsuit, prior to serving public models, replaced this data set with one they made by scanning purchased books. Destroying the data set they aren't even using should have approximately zero effect.
So if a startup wants to buy book PDFs legally to use for AI purposes, any suggestions on how to do that?
Reach out to the publishers or resellers (like Amazon, for instance).
Give them this order: "I want to buy all your books as ePub."
Pay and fetch the stuff.
That's all.
For e-books, there will usually be a license agreement that prohibits any kind of nonstandard use.
That's why Anthropic had to scan physical books.
It's the concentration of power and monopolies driving this trend of ignoring fines and punishments. The fine system was not designed for these monstrous beasts. Legal codes were designed to deter the common man from wrongdoing. They did not anticipate technological superpowers doing winner-take-all in a highly connected world and growing beyond the control of law. Basically, it's the law of the jungle for these companies. Law and punishment will never have any effect on them, as long as they can grab enough market share and customers. Same as any mafia.
We are entering a world filled with corporate mafias that are above the law (because the damage the law can inflict on them is insignificant). These mafias will grip the world by providing the essential services that make up the future. The state will become much weaker, as policymakers can be bought by lobbying and punishments can be offset by VC funding.
It is all part of the playbook.
You or I would go to jail.
How did Meta get away without a scratch?
> "The technology at issue was among the most transformative many of us will see in our lifetimes"
A judge making a ruling based on his opinion of how transformative a technology will be doesn't inspire confidence. There's an equivocation on the word "transformative" here: not just transformative in the fair-use sense, but transformative as in world-changing, impactful, revolutionary. The latter shouldn't matter in a case like this.
> Companies and individuals who willfully infringe on copyright can face significantly higher damages — up to $150,000 per work
Settling for 2% is a steal.
> "In June, the District Court issued a landmark ruling on A.I. development and copyright law, finding that Anthropic's approach to training A.I. models constitutes fair use," Aparna Sridhar, Anthropic's deputy general counsel, said in a statement.
This is the highest-order bit, not the $1.5B in settlement. Anthropic's guilty of pirating.
The printing press, audio recording, movies, radio, and television were also transformative. They did not get rid of copyright; they actually brought it into being.
I feel it is insane that authors do not receive some sort of standard compensation for each training use. Say a few hundred to a few thousand dollars, depending on the complexity of their work.
Why would they earn more from models reading their works than I would pay to read it?
Same reason why the enterprise edition is more expensive than personal. Companies have more money to give and usually use it to generate profit. Individuals do not.
Because the ones doing the training are profiting from it. AI is not a human with limited time, and it is owned by a company, not a natural person.
I might accept the argument comparing it to a human when it is a full legal person and cutting power to it or deleting it is treated as murder. Before that, it is just bullshit.
And fundamentally, the reason for copyright to exist is to support creators and encourage them to create more. In a world where massively funded companies can freely exploit their work, and in many cases even fully substitute for it, that principle has failed.
>They paid for the books after getting caught, the other companies are paying for the copyrighted training materials
Are they paying reasonable compensation? Say, like streaming services, movie theatres, and radio and TV stations do. As a whole, their model is much closer to those than to individuals buying books, CDs, or DVDs...
You might even consider a Theatrical License or Public Performance License. Those are paid even if you have memorized the thing...
LLMs are just bad technology that requires a massive amount of input, so the authors cannot be compensated enough for it. And I fully believe they should be, and a lot more than the single copy of their work that the entirely ill-fitting first-sale doctrine provides.
> If I buy a book, learn something, and then profit from it, should I also be paying more than the original price to read the book?
Depends on how you do it. Clearly, reading the book word for word is different from making a podcast about your interpretation of the book.
So that's about 10% of their latest Series F?
Why are they paying $3,000 per book? Does anyone think these authors sell their books for that amount?
Copies of these books are for sale for much less than that - very very few books demand a price that high.
They're paying much more than the actual damages because US copyright law comes with statutory damages for infringement of registered works on top of actual damages, between $200 and $150,000 per work. And the two sides negotiated this as a fair settlement to reduce the risk of an unfavourable outcome.
If you acquire something illegally of course the judgement against you has to be much higher than the legal price. Why would anyone purchase anything if the worst thing that could happen to you for stealing it was just paying the retail price?
They are not paying for reading the book, they are paying for redistributing the book in perpetuity presumably.
Nope, the settlement specifically excludes actions after Aug 25th 2025 (not perpetuity), and it specifically excludes the output of LLMs (not one form of redistribution).
Meanwhile it's not alleged that they redistributed the books in any form except as the output of LLMs (not any other form of redistribution).
This looks to be almost entirely a settlement for pirating the books. It does also cover the act of training the LLMs on the books, but since the district court already found that to be fair use it's unlikely to have been a major factor in the amount.
Does anyone know which models were trained on the pirated books? I would like to avoid using those models.
This shouldn't be allowed to be settled outside of court.
Anyone have a link to the class action? I published a book and would love to know if I'm in the class.
Deep Research on Claude, perhaps, for some irony, if you will.
I thought $1.5B was the penalty for one torrent, not for a couple million torrents.
At least if you're a regular citizen.
Make sure to grab the mother-of-all-torrents I guess if you're going to go that path. That way you get more bang for your 1.5B penalty.
$150,000 statutory damages for willful infringement.
So they only paid for 10k books?
A million torrents would cost $1,500 each.
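For concreteness, the implied per-work arithmetic (settlement and statutory figures as reported above; the work counts are back-of-the-envelope):

    settlement = 1_500_000_000   # reported settlement, USD
    max_statutory = 150_000      # per-work ceiling for willful infringement

    print(settlement / max_statutory)  # 10000.0 -> the statutory max covers only ~10k works
    print(settlement / 1_000_000)      # 1500.0  -> $1,500 each if spread over a million works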
(Everyone say it with me)
That's a weird way for Anthropic to announce they're going out of business.
So if you buy the content legally and fine tune using it that's fair use?
Yes. Or download it legally (e.g. web content not behind a paywall).
For legal observers, Judge William Haskell Alsup’s razor-sharp distinction between usage and acquisition is a landmark precedent: it secures fair use for transformative generative AI while preserving compensation for copyright holders. In a just world, this balance would elevate him to the highest court of the land, but we are far from a just world.
I wrote a book, can I get my 1 dollar cheque?
You can check, see: https://news.ycombinator.com/item?id=45144261
Reminder that just recently, Anthropic raised a $13 billion Series F at a $183 billion post-money valuation.
In March, they were worth $61.5 billion
In six months they've created $120 billion in value. That's almost 700 million dollars per day. Avoiding being slowed down by even a few days is worth a billion dollar payout when you are on this trajectory. This lawsuit, and any lawsuit AI model companies are likely to get, will be a rounding error at the end of the fiscal year.
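Checking that figure under the numbers above (the day count is approximate):

    gain = 183e9 - 61.5e9      # valuation growth since March, USD
    per_day = gain / 183       # roughly six months of days
    print(f"${per_day / 1e6:.0f}M per day")  # ~$664M per day, i.e. "almost 700 million"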
They know that superintelligent AI is far larger than money, and even so, the money they'll make on the way there is hefty enough for copyright law to not be an issue.
Here's some money, now piss off and let us get back to taking everyone else's.
Same racket the media cartels and patent trolls have been forcing for 40-50 years.
This weirdly seems like it's the best mechanism to buy this much data.
Imagine going to 500k publishers to buy it individually; $3k per book is way cheaper. The copyright system is turning into a data marketplace in front of our eyes.
I suspect you could acquire and scan every readily purchasable book for much less than $3k each. Scanhouse for instance charges $0.15 per page for regular unbound (disassembled) books, plus $0.25 for supervised OCR, plus another dollar if the formatting is especially complex; this comes out to maybe $200-300 for a typical book. Acquiring, shipping, and disposing of them all would of course cost more, but not thousands more.
The main cost of doing this would be the time - even if you bought up all the available scanning capacity it would probably take months. In the meantime your competition who just torrented everything would have more high-quality training data than you. There are probably also a fair number of books in libgen which are out of print and difficult to find used.
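A back-of-the-envelope sketch of the per-book digitization cost at those quoted rates (the page count and the share of complex pages are my own assumptions):

    PER_PAGE_SCAN = 0.15      # unbound (disassembled) book, per page
    PER_PAGE_OCR = 0.25       # supervised OCR, per page
    PER_PAGE_COMPLEX = 1.00   # surcharge for especially complex formatting

    def book_cost(pages: int = 400, complex_share: float = 0.25) -> float:
        """Estimated cost to scan and OCR a single purchased book."""
        base = pages * (PER_PAGE_SCAN + PER_PAGE_OCR)
        surcharge = pages * complex_share * PER_PAGE_COMPLEX
        return base + surcharge

    print(f"${book_cost():.0f} per typical book")  # $260 at these assumptions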
It's a tiny amount of data relatively speaking. Much more expensive per token than almost any data source imaginable
Isn't this basically what Spotify did originally?
Wait, I'm a published author. Where's my check?
The court has to give preliminary approval to the settlement first. After that there should be a notice period during which the lawyers will attempt to reach out and tell you what you need to do to receive your money. (Not a lawyer, not legal advice).
You can follow the case here: https://www.courtlistener.com/docket/69058235/bartz-v-anthro...
You can see the motion for settlement (what the news article is about) here: https://storage.courtlistener.com/recap/gov.uscourts.cand.43...
Thank you very much. There seems to be a lot of friction in this seemingly simple process…
For what it's worth, the friction exists for a reason: conflicts of interest.
The lawyers suing Anthropic here will probably walk away with several hundred million dollars - they have won the lottery.
If they managed to extract twice as much money from Anthropic for the class, they'd walk away with roughly twice as much... but winning the lottery twice isn't actually much better than winning it once. Meanwhile, $4,500 is a lot more than $2,250 (the latter is a reasonable estimate of how much class members will get per work after the lawyers' cut). This risks the lawyers settling for less than is in their clients' best interests so that they can reliably get rich.
Personally (not a lawyer or anything) I think this settlement seems very fair, and I expect the court will approve it. But there's definitely been plenty of class actions in the past where lawyers really did screw over the class and (try to) settle for less than they should have to avoid risking going to trial.
Interesting. Maybe there should be an easier way to file class-action lawsuits and collect on them, in a cheaper and more efficient manner.
Smart move: now that they're an established player, and now that they have a few billion of investors' money to spend, they cement a precedent that stealing IP to train your models is a billion-dollar offense.
What a formidable moat against newcomers, definitely worth the price!
Do they even have that much cash on hand?
They just raised $13B, so yes
https://www.anthropic.com/news/anthropic-raises-series-f-at-...
“Agrees” is a funny old word
... in one economy, and for specific authors and publishers. But the offence is global in its impact on authors worldwide, and the consequences for other IPR regimes remain to be seen.
5 days ago GamersNexus did a piece on Meta having the same problems, but resolving them differently:
https://www.youtube.com/watch?v=sdtBgB7iS8c
Somehow excuses like "we torrented it, but we configured low seeding", "the temptation was too strong because there was money to be made", "we tried getting a license, but then ignored it", and more ludicrous ones actually worked.
Internal Meta emails seemed to show people knowing about the blatant breach of copyright, and yet Meta won the case.
I guess there are tiers of laws even between billionaire companies.
This will be paid to rights holders, not authors. Published authors sign away the rights to financial exploitation of their books under the terms of the contracts they are offered. I expect some authors will sue publishers in turn. This has happened before, when authors realised that they were not getting paid royalties on sales of ebooks.
Was the latest funding round for paying off fines?
This is exactly what could impede LLM training datasets in the Western world, which will mechanically lead to "richer" LLM training datasets in countries where IP law does not wall off that data from training.
But then, the countries with the freedom to put everything into the training dataset would have to distribute the weights for free in IP-walled countries (because the models would otherwise be plainly "illegal" and "blocked" there, unless free as in free beer, I guess); basically, only the DeepSeek model could work.
If powerful LLM hardware becomes somewhat affordable (look at Nvidia's massive push on LLM-specific hardware), "local" companies may run those foreign-trained LLM models at reasonable speed, but "here".
I'm wondering: if they had purchased all the books in the pirate stash, in physical or DRM-free ebook form, would they have stayed out of trouble? Use the stash because it's already digitized and accessible, and give money to the publishers.
It would take time, sure, to compile the lists and make bulk orders, but wouldn't it be cheaper in the end than the settlement?
This is a good opportunity to ask: is it true that Anthropic can demand indemnification from users for actions, related to the use of Claude, that end up with the company being sued? Even for a mere accusation, the user must cover the bills for lawyers and proceedings. Anthropic also takes control of the legal process and can handle it as it pleases, settling or not, with the user footing the bill. Without limit. Whether the user is an individual or an organization doesn't matter.
Sounds harsh, if true. It would make Claude practical only for hobby projects, where you keep everything produced with it entirely to yourself (be it information, a product using Claude, or a product made by using Claude). Difficult to believe; I hope I heard it wrong.
Why only Anthropic?
On a related note: when I listen to Suno, when I create "Epic Power Metal", the singer is very often indistinguishable from the famous Hansi Kürsch of Blind Guardian.
https://en.wikipedia.org/wiki/Hansi_K%C3%BCrsch
I'm not sure if he even knows, but those are almost certainly his tracks they trained on.
$1.5B is nothing but a slap on the wrist for the big gold-rush companies.
It's less than 1% of Anthropic's valuation -- a valuation utterly dependent on all the hoovering up of others' copyrighted works.
AFAICT, if this settlement signals that the typical AI foundation model company's massive-scale commercial theft doesn't result in judgments that wipe out the company (and its execs), then we have confirmation that it's a free-for-all for all the other AI gold-rush companies.
Then making deals to license rights, in sell-it-to-us-or-we'll-just-take-it-anyway deals, becomes only a routine and optional corporate cost reduction exercise, but not anything the execs will lose sleep over if it's inconvenient.
> It's less than 1% Anthropic's valuation
The settlement is real money though. Valuation is imaginary.
There are alternatives to wiping out the company that could be fair. For example, a judgment awarding shares of the company, or a share of future revenue, rather than a one-time payoff.
Writers were the true “foundational” piece of LLMs, anyway.
If this is an economist's idea of fair, where is the market?
If someone breaks into my house and steals my valuables, without my consent, then giving me stock in their burglary business isn't much of a deterrent to them and other burglars.
Deterrence/prevention is my real goal, not the possibly of a token settlement from whatever bastard rips me off.
We need the analogue of laws and police, or the analogue of homeowner has a shotgun.
I don't much like the idea of settling in stock, but I also think you're looking for criminal law here. Civil law, and this is a civil suit, is far more concerned with making damaged parties whole than acting as a deterrent.
I understand that intentional copyright infringement is a crime in the US; you just need to convince the DOJ to prosecute Anthropic for it...
A terrible precedent that guarantees China a win in the AI race
Nobody is winning the AI race.
Because everyone is expecting AGI now and it's not happening with our current tech.
Let us not forget that this one is the good, ethical AI company. The one founded by splinter AI safety cultists who thought that OpenAI wasn't deep enough in the safety cult for their liking. And here they are, keeping the humans safe. By robbing them.
Because it turns out that nobody in the whole safety cult cares a whit for the human mind, the human experience, human art. Maybe for something they call "human values" in some abstract thought experiment, but never for any human decency. No, the human mind is just ones and zeros, just like a computer, no soul and no spark, to people in the cult. The cult thinks that an LLM reading a book is just the same mechanically as a human reading it.
Your brain is just emergence, your honor. Fair use. Blah blah Dennett Hofstadter Yudkowsky.
Do you feel safe?
I don’t think that copyright is related to safety at all. Copyright is something that doesn’t really exist, except as a social agreement, while safety is something that exists whether or not society believes it does.
We are gods, we are very smart, we make our own rules, says the cult. We define what safety is, not you. Safety is protecting you from imagined monsters for your own good, not protecting you from getting robbed by us. And anyways, property rights are just made up, man, a social construct.
But investors, please give us billions of dollars worth of that imaginary social agreement. We need it so that we can build the inevitable future. We're gonna do it more ethically than those guys over there.
What a swindle.
"Laws against infiltrating the IRS don't really exist, except as a social agreement, while Lord Xenu is someone who exists whether or not society believes he does."
That's what you sound like, to people not in the safety cult.
I can see a price hike incoming.
It was coming regardless of the case results.
Now how about Meta and their questionable means of acquiring tons of content?
Maybe it's time to get some Llama models copied before an overzealous court rules badly.
Honestly, this is a steal for Anthropic.
Ha, this gave me a ripping good laugh.
> A trial was scheduled to begin in December to determine how much Anthropic owed for the alleged piracy, with potential damages ranging into the hundreds of billions of dollars.
Anthropic knew this trial could have totally bankrupted them if they had maintained their innocence and continued to fight the case.
But of course, there's too much money on the line: even though settling means effectively conceding they profited off pirated books, Anthropic knew there was no way they could be sure of winning that case, and it was not worth taking the risk.
> The pivotal fair-use question is still being debated in other AI copyright cases. Another San Francisco judge hearing a similar ongoing lawsuit against Meta ruled shortly after Alsup's decision that using copyrighted work without permission to train AI would be unlawful in "many circumstances."
The first of many.
If it was a sure thing, then the rights holders wouldn't have accepted a settlement deal for a measly couple billion. Both sides are happier to avoid risking losing the suit.
Also, knowing how pro-corporate the legal system is, piercing the corporate veil and going after everyone holding the stock would have been unlikely. So getting $1.5 billion out of them was likely the reasonable move. Otherwise Anthropic could have just burned through all its money and flipped whatever was left to someone else, at an uncertain price and on an uncertain timeline.
Wait, DID they admit guilt? A lot of times companies settle without admitting guilt.
They would only be wiped out if the court awarded the maximum statutory damages (or close to it). There was never any chance of that happening.
I'm excited for the moment when these models can use copyrighted work in a fair-use way that pays out to authors, the way Spotify does when you listen to a song. Why? Because authors receiving royalties when their works get used in some prompt would likely make them far more accepting of LLMs.
Passing the cost on to consumers of generated content, since companies would now need to pay royalties on the back end, should also increase the cost of generating slop and hopefully push back against that trend.
This shouldn't just be books, but all written content, like scholarly journals and essays, news articles and blogs, etc.
I realize this is just wishful thinking, but there's got to be some nugget of aspirational desire to pay it forward.
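A minimal sketch of what such a Spotify-style pro-rata pool could look like (all names and numbers here are invented for illustration):

    from collections import Counter

    def distribute_pool(pool_usd: float, usage: Counter) -> dict:
        """Split a fixed royalty pool among authors in proportion to usage."""
        total = sum(usage.values())
        return {author: pool_usd * n / total for author, n in usage.items()}

    usage = Counter({"author_a": 120, "author_b": 60, "author_c": 20})
    print(distribute_pool(10_000.0, usage))
    # {'author_a': 6000.0, 'author_b': 3000.0, 'author_c': 1000.0}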
Great. Which rich person is going to jail for breaking the law?
This isn't a criminal case so zero people of any financial position would end up in prison.
Well their seed funder SBF went to jail, but not for bankrolling this particular theft. He did a theft of his own. Still, SBF and the Anthropic guys got their "ethics" from the same shitty blogs, and it shows.
No one, rich or poor, goes to jail for downloading books.
If I walked into a store and stole $1000 of books I would go to jail. If a tech company steals countless thousands of dollars worth of books, someone should go to jail.
Stealing physical goods is not the same as downloading copyrighted material.
Are you sure? I think in some jurisdictions they would, according to the law.
Tell that to Aaron Swartz.
Swartz wasn’t charged only for downloading copyrighted material, he was also charged with wire fraud and breaking and entering.
I guess this settlement could be a landmark moment. $1.5 billion is a staggering figure, and I hope it sends a clear signal that AI companies can't just treat creative work as free training data.
I mean the ruling does in fact find that treating this particular kind of creative work qualifies as fair use.
All the AI companies are still using books as training data. They're just finding the cheapest scanned copies they can get their hands on to cover their asses.
I'm gonna say one thing. If you agree that something was unfairly taken from book authors, then the same thing was taken from people publishing on the web, and on a larger scale.
Book authors may see some settlement checks down the line. So might newspapers and other parties that can organize and throw enough $$$ at the problem. But I'll eat my hat if your average blogger ever sees a single cent.
The blogger’s content was freely available, this fine is for piracy.
This is not a fine, it's a settlement to recompense authors.
More broadly, I think that's a goofy argument. The books were "freely available" too. Just because something is out there, doesn't necessarily mean you can use it however you want, and that's the crux of the debate.
It's not the crux of this case. This is a settlement based on the judge's ruling that the books had been illegally downloaded. The same judge said that the training itself was not the problem – it was downloading the pirated books. It will be tough to argue that loading a public website is an illegal download.
But you can use copyrighted works for transformative works under the fair-use doctrine, and training was ruled to be fair use in the previous ruling.
Books aren't hosted publicly online free for anyone to access. The court seems to think buying a book and scanning it is fair use. Just using pirated books is forbidden. Blogs weren't accessed via pirating.
The settlement was for downloading the pirated books, not training from them. Unless they're paywalled it would be hard to argue the same for a blog.
It seems weird that there was legal culpability for downloading pirated books but not for training on them. At the very least, there is a transitive dependency between the two acts.
Other people have said that Anthropic bought the books later on, but I haven't found any official records for that. Where would I find that?
Also, does anyone know which Anthropic models were NOT trained on the pirated books. I want to avoid such models.
As far as anyone knows, no models were trained on the illegally downloaded books.
The following document indicates otherwise.
https://storage.courtlistener.com/recap/gov.uscourts.cand.43....
"Similarly, different sets or “subsets” or “parts of” or “portions” of the collections sourced from Books3, LibGen, and PiLiMi were used to train different LLMs..." Page 5
"In sum, the copies of books pirated or purchased-and-destructively-scanned were placed into a central “research library” or “generalized data area,” sets or subsets were copied again to create training copies for data mixes, the training copies were successively copied to be cleaned, tokenized, and compressed into any given trained LLM, and once trained an LLM did not output through Claude to the public any further copies." Page 7
The phrase "Finally, once Anthropic decided a copy of a pirated or scanned book in the library would not be used for training at all or ever again, Anthropic still retained that work as a “hard resource” for other uses or future uses" implies to me Anthropic excluded certain books from training, not that they excluded all the pirated books from training.