IBM CEO says there is 'no way' spending on AI data centers will pay off

857 points | 2 months ago | businessinsider.com
blablabla1232 months ago

Despite the flashy title, that's the first "sober" analysis from a CEO I've read about the technology. While it's not even really news, it's also worth mentioning that the energy requirements are impossible to fulfill.

Also, I've been using ChatGPT intensively for months now for all kinds of tasks, and I've tried Claude etc. None of this is on par with a human. The code snippets are straight out of Stackoverflow...

mattlondon2 months ago

Take this "sober" analysis with a big pinch of salt.

IBM have totally missed the AI boat, and a large chunk of their revenue comes from selling expensive consultants to clients who do not have the expertise to do IT work themselves - this business model is at a high risk of being disrupted by those clients just using AI agents instead of paying $2-5000/day for a team of 20 barely-qualified new-grads in some far-off country.

IBM have an incentive to try and pour water on the AI fire to try and sustain their business.

evanjrowley2 months ago

Is this true in 2025?

Asking because the biggest IT consulting branch of IBM, Global Technology Services (GTS), was spun off into Kyndryl back in 2021[0]. Same goes for some premier software products (including one I consulted for) back in 2019[1]. Anecdotal evidence suggests the consulting part of IBM was already significantly smaller than in the past.

It's worth noting that IBM may view these AI companies as competitors to its Watson AI tech[2]. It already existed before the GPU crunch and hyperscaler boom, and runs on proprietary IBM hardware.

[0] https://en.wikipedia.org/wiki/Kyndryl

[1] https://www.prnewswire.com/news-releases/hcl-technologies-to...

[2] https://en.wikipedia.org/wiki/IBM_Watson

mattlondon2 months ago

I know people who still work there and are doing consultancy work for clients.

I am a former IBMer myself but my memory is hazy. IIRC there were two arms of the consultants - one did the boring day-to-day stuff, and the other was "innovation services" or something. Maybe they spun out the drudgery (GTS) and kept the "innovation" service? No idea.

jldugger2 months ago

My go-to analysis for these sorts of places is net income per employee. Back in the day, IBM was hovering around $5,000. Today, Kyndryl is still around $5,000 (2025). But the parent company seems to be now at $22,000 (2024). For comparison: Meta is at $800,000, Apple is at $675,000, and Alphabet is at $525,000. And Wal-Mart, the nation's largest private employer, is around $9,250.

Now, probably part of that is just that those other companies hire contractors so their employment figure is lower than reality. But even if you cut the numbers in half, neither side of that spin off is looking amazing.
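
For reference, the metric is just net income divided by headcount; a minimal sketch (the inputs are rough 2024 ballpark figures for IBM, used purely for illustration, not audited numbers):

  def net_income_per_employee(net_income_usd, employees):
      # the simple ratio used above
      return net_income_usd / employees

  # roughly $6B net income over roughly 270k employees (approximate figures)
  print(f"${net_income_per_employee(6.0e9, 270_000):,.0f}")   # -> $22,222, about the $22,000 quoted above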

vmh19282 months ago

The part that was spun off was "Infrastructure Services" (from the Wiki article) - outsourcing and operations, not the business consulting organization that provides everything from high-level strategy to coding services.

https://www.ibm.com/consulting

DebtDeflation2 months ago

Yes. GTS was infrastructure services and was spun off. What's left is the old GBS - business services and systems implementation services.

tw042 months ago

Missed the boat? Have you been living under a rock? Watson AI advertising has been everywhere for years.

It’s not that they aren't in the AI space, it’s that the CEO has a shockingly sober take on it. Probably because they’ve been doing AI for 30+ years combined with the fact they don’t have endless money with nowhere to invest it like Google.

CamouflagedKiwi2 months ago

Advertising for it has been everywhere, but it's never seemed like it's at the forefront of anything. It certainly wasn't competitive with ChatGPT and they haven't managed to catch back up in the way Google have.

whizzter2 months ago

It was competitive before ChatGPT existed, and IMHO that gives them a special insight that people fail to consider in this context.

They know what revenue streams existed and how damn hard it was to sell. Considering that IBM Watson probably had the option of 100% on-prem services for healthcare, if they failed to sell that, will a privacy-violating system like ChatGPT etc. have a chance to penetrate the field?

Because however good ChatGPT, Claude, etc. are, the _insane_ amounts of money they're given to play with imply that they will have to emerge as winners in a future with revenue streams to match the spending that has been happening.

dspillett2 months ago

> Missed the boat? […] Watson AI advertising has been everywhere for years.

They were ahead of the game with their original Watson tech, but pretty slow to join in and get up to speed with the current GenAI families of tech.

The meaning of “AI” has shifted to mean “generative AI like what ChatGPT does” in the eyes of most, so you need to account for this. When people talk about AI, even though it is a fairly wide field, they are generally referring to a limited subset of it.

theYipster2 months ago

The death of IBM’s vision to own AI with Watson was never due to an inability to transition to the right tech. In fact, it was never about tech at all. As an entirely B2B company with a large revenue stream to defend, IBM was never going to go and scrape the entirety of the Internet. Especially not after the huge backlash they ignited with their customers over data rights and data ownership in trying to pitch the Watson they had.

Lapsa2 months ago

"GenAI families of tech" lulz

adastra222 months ago

The only similarity of Watson-style “AI” and generative “AI” is the name.

belter2 months ago

> IBM have an incentive to try and pour water on the AI fire to try and sustain their business.

IBM has faced multiple lawsuits over the years, from age discrimination cases to various tactics allegedly used to push employees out, such as requiring them to relocate to states with more employer-friendly laws only to terminate them afterward.

IBM is one of the clearest examples of a company that, if given the opportunity to replace human workers with AI, would not hesitate to do so. Assume, therefore, that the AI does not work for such a purpose...

jujube32 months ago

If they could use THEIR AI to replace human workers, they would. If they learned that Claude or ChatGPT was better than an IBM consultant, they'd probably keep that to themselves.

xocnad2 months ago

I would argue they may have and are not keeping it to themselves. Announced partnership with Anthropic: https://newsroom.ibm.com/2025-10-07-2025-ibm-and-anthropic-p...

deepGem2 months ago

I would any day take chatGPT/Claude over an IBM consultant. I worked at IBM.

QuercusMax2 months ago

I'd rather be slapped in the face than kicked by a horse, but that doesn't mean either is a good thing

deepGem2 months ago

Precisely.

KronisLV2 months ago

You could have both worlds: an LLM model by IBM https://huggingface.co/ibm-granite/granite-4.0-h-small

It wasn't very promising when it came to benchmarks though, go figure: https://artificialanalysis.ai/leaderboards/models

OhMeadhbh2 months ago

Are you suggesting IBM made up the numbers? Or that CAPEX is a pre-GAI measure and is useless in guiding decision making?

IBM may have a vested interest in calming (or even extinguishing) the AI fire, but they're not the first to point out the numbers look a little wobbly.

And why should I believe OpenAI or Alphabet/Gemini when they say AI will be the royal road to future value? Don't they have a vested interest in making AI investments look attractive?

Ragnarork2 months ago

> a high risk of being disrupted by those clients just using AI agents instead of paying $2-5000/day for a team of 20 barely-qualified new-grads in some far-off country

Is there any concrete evidence of that risk being high - evidence that doesn't come from people whose job is to sell AI?

ratelimitsteve2 months ago

They have an incentive, but what's the sustainable, actually-pays-for-itself-and-generates-profit cost of AI? We have no idea. Everything is so heavily subsidized by burning investor capital for heat, with the hope that they'll pull an Amazon and make it impossible to do business on the internet without paying an AI firm. Maybe the 20 juniors will turn out to be cheaper. Maybe they'll turn out to be slightly better. Maybe they'll be loosely equivalent and the ability to automate mediocrity will drive down the cost of human mediocrity. We don't know, and everyone seems to be betting heavily on the most optimistic case, so it makes an awful lot of sense to take the other side of that bet.

ohyes2 months ago

20 juniors become some % of 20 seniors, and some % of those become principals. Even if AI lives up to the claims, you're still destroying the pipeline for creating experienced people. It is incredibly short-sighted.

HacklesRaised2 months ago

Isn't he one of the first ass clowns to start laying people off to be replaced by AI?

Another empty suit.

TheCondor2 months ago

How do you see the math working out?

The numbers are staggering.

whiplash4512 months ago

The fair answer is that nobody knows. Even Ilya answered he does not know on his latest podcast with Dwarkesh.

Both top line and bottom line numbers are staggering. Nobody knows. Let's not try to convince people otherwise.

vultour2 months ago

Do you expect Sam Altman to come on stage and tell you the whole thing is a giant house of cards when the entire western economy seems to be propped up by AI? I wonder whose "sober" analysis you would accept, because surely the people that are making money hand over fist will never admit it.

Seems to me like any criticism of AI is always handwaved away with the same arguments. Either it's companies who missed the AI wave, or the models are improving incredibly quickly so if it's shit today you just have to wait one more year, or if you're not seeing 100x improvements in productivity you must be using it wrong.

KptMarchewa2 months ago

> entire western economy seems to be propped up by AI?

It's an example of opportunity cost, or the Copernicus-Gresham law, rather than some axiom.

diggyhole2 months ago

Right. Just like Intel completely shitting the bed on GPUs, after which their CEO tweeted a prayer. Old tech companies are going to be left behind.

pulse72 months ago

IBM was founded in 1911 and it survived many things...

CursedSilicon2 months ago

Yeah but this time it's different!

Hey have you seen my tulip collection?

N19PEDL22 months ago

So did GE…

MDGeist2 months ago

IBM was ahead of the boat! They had Watson on Jeopardy years ago! /s

I think you make a fair point about the potential disruption for their consulting business but didn't they try to de-risk a bit with the Kyndryl spinout?

infecto2 months ago

I am a senior engineer, and I use Cursor a lot in my day-to-day work. I find I can code longer and typically faster than without it. Is it on par with a human? It's getting pretty darn close, to be honest. I am sure the “10x” engineers of the world would disagree, but it has definitely surpassed a junior engineer. We all have our anecdotes, but I am inclined to believe that on average there is net value.

boringg2 months ago

I think surpassed is not the right word, because it doesn't create/ideate. However, it is incredibly resourceful. Maybe it's like having a junior engineer to do your bidding, without the thinking or growing.

infecto2 months ago

Surpassed is probably the wrong word, but the intent is more that it can comprehend quite complicated algorithms and patterns and apply them to your problem space. So yeah, it's not a human, but I don't think saying it's subpar to a human is the right comparison either. In many ways it's much better: I can run N parallel revisions and have the best implementation picked for review. This all happens in seconds.

chrisweekly2 months ago

Yes, this. Creating multiple iterations in parallel allows much more meaningful exploration of the solution space. Create a branch for each framework and try them all; compare them directly in practice, not just in theory. My brother is doing this to great effect as a solopreneur, and having the time of his life.

adastra222 months ago

I use AI tools extensively. I have seen it come up with truly novel solutions.

lordnacho2 months ago

Largely agree. Anything that is just a multi-file edit, like an interface change, it can do. Maybe not immediately, but you can have it iterate, and it doesn't eat up your attention.

It is without a doubt worth more than the 200 bucks a month I spend on it.

I will go as far as to say it has decent ideas. Vanilla ideas, but it has them. I've actually gotten it to come up with algorithms that I thought were industry secrets. Minor secrets, sure. But things that you don't just come across. I'm in the trading business, so you don't really expect a lot of public information to be in the dataset.

cpursley2 months ago

A lot of the time, vanilla ideas and established, well-proven patterns are just what the customer ordered. And AI code tools are great at this now.

ratelimitsteve2 months ago

I'm also a senior engineer and I use Codex a lot. It has reduced many of the typical coding tasks to simply writing really good acceptance criteria. I still have to write good AC, but I'm starting to see the velocity change from using good AI in a smart way.

enraged_camel2 months ago

Senior engineer here as well. I would say Opus 4.5 is easily a mid-level engineer. It's a substantial improvement over Sonnet 4.5, which required a lot more hand-holding and interventions.

trgn2 months ago

I think less. Not sure if that's a good thing. But small little bugs and improvements get cleared so quickly now.

fvv2 months ago

it surpassed 30 junior engineers

delaminator2 months ago

Your assessment of Claude simply isn’t true.

Or Stackoverflow is really good.

I’m producing multiple projects per week that are weeks of work each.

bloppe2 months ago

Would you mind sharing some of these projects?

I've found Claude's usefulness is highly variable, though somewhat predictable. It can write `jq` filters flawlessly every time, whereas I would normally spend 30 minutes scanning docs because nobody memorizes `jq` syntax. And it can comb through server logs in every pod of my k8s clusters extremely fast. But it often struggles making quality code changes in a large codebase, or writing good documentation that isn't just an English translation of the code it's documenting.

gloosx2 months ago

It is always "I'm producing 300 projects in a nanosecond" but it's almost never about sharing or actually deploying these ;)

properbrew2 months ago

I also see a lot of this so I can't blame you for thinking it! See my other post about some projects built _only_ using LLMs.

https://news.ycombinator.com/item?id=46133458

JackSlateur2 months ago

Building something is easy.

Building something that works? Not so easy.

Pushing that thing to production? That's the hardest part.

delaminator2 months ago

I came with receipts

steve_adams_862 months ago

Claude has taught me so much about how to use jq better. And really, way more efficient ways of using the command line in general. It's great. Ironically, the more I learn the less I want to ask it to do things.

datameta2 months ago

In an ideal world we function in exactly this way - using LLMs to bootstrap our skill/knowledge improvement journeys.

JamesSwift2 months ago

Yeah, if you pay attention to its output you can pick up little tips and tricks all over the place.

properbrew2 months ago

Not the OP you're replying to, but I've put together quite a few projects using only LLMs, no hand crafted code anywhere (I couldn't do it!)

https://dnbfamily.com

https://eventme.app

https://blazingbanana.com

https://play.google.com/store/apps/details?id=com.blazingban...

https://play.google.com/store/apps/details?id=com.blazingban...

https://play.google.com/store/apps/details?id=com.blazingban...

Are they perfect? No, probably not, but I wouldn't have been able to make any of these without LLMs. The last app was originally built with GPT-3.5.

There is a whole host of other non-public projects I've built with LLMs, these are just a few of the public ones.

JamesSwift2 months ago

This is a "scratch an itch" project I initially started to write manually in the past but never finished. I then used Claude to do it, basically on the side while watching the World Series: http://nixpkgs-pr-explorer.s3-website-us-west-2.amazonaws.co...

artursapek2 months ago

It’s not just good for small code bases. In the last six months I’ve built a collaborative word processor with its own editor engine and canvas renderer using Claude, mostly Opus. It’s practically a mini Google Docs, but with better document history and an AI agent built in. I could never have built this in 6 months by myself without Claude Code.

https://revise.io

I think if you stick with a project for a while, keep code organized well, and most importantly prioritize having an excellent test suite, you can go very far with these tools. I am still developing this at a high pace every single day using these tools. It’s night and day to me, and I say that as someone who solo founded and was acquired once before, 10 years ago.

delaminator2 months ago

https://github.com/lawless-m

You can see by Contributors which ones Claude has done.

I have no idea if the code is any good, I’ve never looked at it and I have no idea how to code in Rust or Racket or Erlang anyway.

miohtama2 months ago

The former tasks are directly from the training material, directly embedded into the model. For the latter task, it needs a context window and intelligence.

ramoz2 months ago

At the end of this week we are releasing https://github.com/eqtylab/cupcake

You can see all of Claude’s commits.

I've shipped so much with AI.

My favorite has been metrics dashboards of various kinds - across life and business.

wartywhoa232 months ago

They really should have been supplying at least a week's worth of ready-made "projects" to every freelance AI promoter out there to demonstrate x9000 AI productivity gains for the skeptics.

Because vibing the air about those gains without any evidence looks too shilly.

written-beyond2 months ago

I'm just as much of an avid LLM code generation fan as you may be, but I do wonder about the practicality of spending time making projects anymore.

Why build them if others can just generate them too? Where is the value in making so many projects?

If the value is in who can sell it the best to people who can't generate it, isn't it just a matter of time before someone else generates one and becomes better than you at selling it?

sph2 months ago

> Why build them if others can just generate them too? Where is the value in making so many projects?

No offence to anyone, but these generated projects are nothing ground-breaking. As soon as you venture outside the usual CRUD apps, to where novelty and serious engineering are necessary, the value proposition of LLMs drops considerably.

For example, I'm exploring a novel design for a microkernel, and I have no need for machine-generated boilerplate. Most of the hard work is not implementing yet another JSON API; it's thinking very hard with pen and paper about something few have thought about before - something even fewer LLMs have been trained on or have the intelligence to ponder.

To be fair, even for the dumbest side-projects, like the notes app I wrote for myself, there is still a joy in doing things by hand, because I do not care about shipping early and getting VC money.

jstummbillig2 months ago

The value is that we need a lot more software, and now that building it has gotten so much less time-consuming, you can sell software at a different price point to people who could not or would not have paid for it previously.

blablabla1232 months ago

Sure, but these are likely just variations of existing things. And yet the quality is still behind the original.

eschaton2 months ago

I produce a lot of shit every week too, but I don’t brag about my digestive system on “Hacker” “News.”

delaminator2 months ago

You are so bitter. Take a moment to ponder why you are that way.

eschaton2 months ago

Nice deflection. Did you use ChatGPT to come up with it?

baobabKoodaa2 months ago

I'll do one better: I poop every day in the water closet!

tim3332 months ago

An issue with the doom forecasts is that most of the hypothetical $8tn hasn't happened yet. Current big tech capex is about $315bn this year and $250bn last year, against a pre-AI level of ~$100bn, so roughly $400bn has been spent so far on AI-boom data centers. https://sherwood.news/business/amazon-plans-100-billion-spen...

The future spend is optional - if AGI takeoff happens, you spend loads; if it's not happening, not so much.

Say it levels off at $800bn. The world's population is ~8bn, so that's $100 a head, and you'd need to be making $10 or $20 per head per year. Quite possibly doable.
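
Spelling out that back-of-the-envelope arithmetic (a sketch using the round numbers above; the 5-10 year amortization window is an assumption):

  capex = 800e9                    # hypothetical steady-state AI data center spend, USD
  population = 8e9                 # world population
  per_head = capex / population    # ~= $100 of capex per person
  for amortization_years in (5, 10):       # assumed payback horizons
      print(f"over {amortization_years} years: ~${per_head / amortization_years:.0f} per head per year")
  # -> ~$20/head/year over 5 years, ~$10/head/year over 10 years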

trueismywork2 months ago

65% of people in the world earn less than 3000 euros/year.

golol2 months ago

Getting 65% of the population to spend 1% of their income on some new digital toy forever does not seem so far fetched.

gfaster2 months ago

That seems super far fetched given that 37%[1] of the world's population does not have internet access. You could reasonably restrict further to populations that speak languages that are even passably represented in LLMs.

Even disregarding that, if you're making <3000 euros a year, I really don't think you'd be willing or able to spend that much money to let your computer gaslight you.

[1]: https://ourworldindata.org/internet

adastra222 months ago

The power distribution goes the other way too. There are outliers that will spend much more per capita.

tempfile2 months ago

Lol. If you ballpark numbers like that probably anything is doable!

tim3332 months ago

$10/head x 8bn people is easier said than done - only major enterprises like Google or Amazon can pull that off. But AI, even if it's just LLMs, may be there.

mark_l_watson2 months ago

I agree. Re: energy and other resource use, the analogy I like is driving cars: we use cars for transportation knowing the environmental costs, so we don't usually go on two-hour drives just for the fun of it; rather, we drive to get to work or go shopping. I use Gemini 3, but only in specific high-value use cases. When I use commercial models I think a little about the societal costs.

In the USA we have lost the thread here: we don't maximize the use of small tuned models throughout society and industry; instead we use the pursuit of advanced AI as a distraction from the reality that our economy and competitiveness are failing.

spider-mario2 months ago

Most of the energy for AI does not go into chatbots. Using Gemini is not remotely close to driving a car for 2 hours. If a prompt is 0.3 Wh (https://cloud.google.com/blog/products/infrastructure/measur..., https://andymasley.substack.com/p/a-cheat-sheet-for-conversa...), each prompt is closer to using an e-bike for 50 metres.

You could have your morning shower 1°C less hot and save enough energy for about 200 prompts (assuming 50 litres per shower). (Or skip the shower altogether and save thousands of prompts.)
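
A quick check of those figures (a sketch using the 0.3 Wh/prompt and 50-litre shower numbers above, plus the standard specific heat of water):

  litres, delta_t = 50, 1.0
  joules = litres * 4186 * delta_t        # ~4186 J heats 1 kg of water by 1 °C
  watt_hours = joules / 3600              # ~= 58 Wh saved per shower
  prompts = watt_hours / 0.3              # at 0.3 Wh per prompt
  print(f"{watt_hours:.0f} Wh ~= {prompts:.0f} prompts")   # ~= 190-200 prompts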

collinmanderson2 months ago

I think it's also worth comparing to the CO2 impact of consuming meat, especially beef, which is pretty high.

(It's the training, not the inference, that's the biggest energy usage.)

mark_l_watson2 months ago

+1 interesting

MisterTea2 months ago

Yesterday I was talking to coworkers about AI and mentioned that a friend of mine used ChatGPT to help him move. So a coworker said "I have to test this" and asked ChatGPT if he could fit a set of the largest Magnepan speakers (the wide, folding, older room-divider style) in his Infiniti QX80. The results were hilarious. It had some of the dimensions right, but it then decided the QX80 is as wide as a box truck (~8-8.5 feet / 2.5 m) and suggested aligning the nearly 7-foot-long speakers sideways between the wheel wells. It also posted hilariously incomprehensible ASCII diagrams.

TheOccasionalWr2 months ago

I'm not sure what you mean by "code snippets are straight out of Stackoverflow". That is factually incorrect just by how LLMs work. By now so much code has been ingested from all kinds of sources, including Stackoverflow, that an LLM is able to help generate quite good code on many occasions. My point being: it is extremely useful for super popular languages, and also for languages where resources are scarcer for developers - because it got the code from who knows where, it can definitely give you many useful ideas.

It's not human - though I'm not sure what that is actually supposed to mean. Humans make mistakes, humans write good code. AI also does both. What it definitely still needs is a good programmer on top to know what they are getting and how to improve it.

I find AI (LLMs) very useful as very good code completion and as a light coder where you know exactly what to do because you've done it a thousand times, but it's wasteful to be typing it again - especially a lot of boilerplate code or tests.

It's also useful for agentic use cases: some things you just couldn't do before, because there was nothing that could understand human voice/text input and translate it to an actual command.

But that is all far from AGI, and today it all costs too much for an average company to say that this actually provided a return on the money - but it definitely speeds things up.

prewett2 months ago

> I'm not sure what you mean by "code snippets are straight out of Stackoverflow". That is factually incorrect just by how LLMs work.

I'm not an AI lover, but I did try Gemini for a small, well-contained algorithm for a personal project that I didn't want to spend the time looking up, and it was straight-up a StackOverflow solution. I found out because I said "hm, there has to be a more elegant solution", and quickly found the StackOverflow solution that the AI regurgitated. Another 10 or 20 minutes of hunting uncovered another StackOverflow solution with the requisite elegance.

will42742 months ago

> While it's not even really news, it's also worth mentioning that the energy requirements are impossible to fulfill

If you believe this, you must also believe that global warming is unstoppable. OpenAI's energy costs are large compared to the current electricity market, but not so large compared to the current energy market. Environmentalists usually suggest that electrification - converting non-electrical energy to electrical energy - and then making that electrical energy clean - is the solution to global warming. OpenAI's energy needs are something like 10% of the current worldwide electricity market but less than 1% of the current worldwide energy market.

blablabla1232 months ago

Google recently announced plans to double AI data center capacity every 6 months. While both unfortunately involve exponential growth, for global CO2 we are talking about ~1% growth per year, which is bad enough, versus effectively 300% per year according to Google's announcement (doubling every 6 months is roughly 4x, i.e. +300%, per year).

infecto2 months ago

Constraints breed innovation. Humans will continue to innovate, and demand for resources will grow; it is fairly well baked into most of civilization. Will that change in the future? Perhaps, but it's not changing now.

rvnx2 months ago

Imagine how big a pile of trash there will be as the current generation of graphics cards used for LLM training becomes outdated. It will crash the hardware market (which is good news for gamers).

brookst2 months ago

A100’s are not suitable for gaming.

rvnx2 months ago

https://www.youtube.com/watch?v=Vw699ZbUKqg

Looks very playable to me.

It's just an expensive card, but if the market is flooded with them, they can be used in gaming AND in local LLMs.

So it can push the fall of server-side AI even further.

These cards are 400 USD for reference, so if more and more are sold, we can imagine them getting down to 100 USD or so.

(and then similar for A100, H100, etc)

My main concern is the noise, because I have seen datacenter hardware and it is crazy loud. Of course it's not ideal, but there is something that can be done with it.

tikotus2 months ago

I'd rather phrase it as "code is straight out of GitHub, but tailored to match your data structures"

That's at least how I use it. If I know there's a library that can solve the issue, I know an LLM can implement the same thing for me. Often much faster than integrating the library. And hey, now it's my code. Ethical? Probably not. Useful? Sometimes.

If I know there isn't a library available, and I'm not doing the most trivial UI or data processing, well, then it can be very tough to get anything usable out of an LLM.

guywithahat2 months ago

> it's also worth mentioning that the energy requirements are impossible to fulfill

Maybe I'm misunderstanding you, but they're definitely not impossible to fulfill; in fact, I'd argue the energy requirements are among the most straightforward to fulfill. Bringing a natural gas power plant online is not the hardest part of creating AGI.

diggyhole2 months ago

I've had decent results hackin', wackin' and smashin'.

lavezzi2 months ago

> Despite the flashy title, that's the first "sober" analysis from a CEO I've read about the technology.

Didn't IBM just sign quite a big deal with Groq?

trgn2 months ago

> Also, I've been using ChatGPT intensively for months now for all kinds of tasks, and I've tried Claude etc.

The facts, though, read like an endorsement, not a criticism.

myaccountonhn2 months ago

> In an October letter to the White House's Office of Science and Technology Policy, OpenAI CEO Sam Altman recommended that the US add 100 gigawatts in energy capacity every year.

> Krishna also referenced the depreciation of the AI chips inside data centers as another factor: "You've got to use it all in five years because at that point, you've got to throw it away and refill it," he said.

And people think the climate concerns of AI are overblown. The US currently has ~1,300 GW of generating capacity, so adding 100 GW would be a roughly 8% increase every single year - a huge increase.

throwaway311312 months ago

100GW per year is not going to happen.

The largest power plant in the world is the Three Gorges Dam in China, at 22GW, and it's off-the-scale huge. We're not building the equivalent of four or five of those every year.

Unless the plan is to power it off Sam Altman’s hot air. That could work. :)

https://en.wikipedia.org/wiki/List_of_largest_power_stations

snake_doc2 months ago

China added ~90GW of utility solar per year in last 2 years. There's ~400-500GW solar+wind under construction there.

It is possible, just may be not in the U.S.

Note: since renewables are intermittent and can't provide base load on their own, the capacity factor is only 10-30% (lower for solar, higher for wind), so actual energy generation will be correspondingly lower...

kelnos2 months ago

> It is possible

Sure, GP was clearly talking about the US, specifically.

> just may be not in the U.S.

Absolutely 100% not possible in the US. And even if we could do it, I'm not convinced it would be prudent.

tim3332 months ago

If you really had to you'd probably run turbines off natural gas but it's not a good idea environmentally.

locallost2 months ago

I am a huge proponent of renewables, but you cannot compare their capacity in GW with other energy sources, because their output is variable and not always at maximum. To realistically get 100GW of average output from solar you would need at least 500GW of panels.
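
As a quick sketch of that capacity-factor arithmetic (the 20% figure is an assumed typical utility-solar capacity factor, not a number from this thread):

  target_average_gw = 100
  solar_capacity_factor = 0.20      # assumed typical value for utility solar (~10-25%)
  nameplate_gw = target_average_gw / solar_capacity_factor
  print(nameplate_gw)               # -> 500.0 GW of panels for ~100 GW of average output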

On the other hand, I think we will not actually need 100GW of new installations, because capacity can be acquired by reducing current usage and making it more efficient. The term negawatt comes to mind. A lot of people are still in the stone age when it comes to this, even though it was demonstrated quite effectively by reduced gas use in the US after the oil crisis in the '70s, which basically only recovered to pre-crisis levels recently.

High gas prices caused people to use less and favor efficiency. The same thing will happen with electricity and we'll get more capacity. Let the market work.

victor1062 months ago

> China added ~90GW of utility solar per year in last 2 years. There's ~400-500GW solar+wind under construction there.

Source?

ikesau2 months ago

I'm not sure about the utility/non-utility mix, but according to IRENA it was actually ~500GW of added capacity in 2023 and 2024.

https://ourworldindata.org/grapher/installed-solar-pv-capaci...

ProllyInfamous2 months ago

Background: I live within the territory of the US federal Tennessee Valley Authority (TVA), a regional electric grid operator. The majority of energy is generated by nuclear + renewables, with coal and natural gas as peakers. Grid stability is maintained by one of the largest batteries in the world, the Raccoon Mountain Pumped Storage Facility.

Three Gorges Dam is capable of generating more power than all of TVA's nuclear + hydro combined. In the past decade, TVA's single pumped-storage battery has gone from the largest GWh capacity in the world to not even top ten — the largest facilities are now in China.

µFission reactors have recently been approved for TVA commissioning, with locations unconfirmed (but at about one-sixth the output of a typical TVA nuclear site). Sub-station battery storage sites are beginning to go online, capable of running subdivisions for hours after circuit disconnects.

Tech-funded entities like Helion Energy are promising profitable ¡FusioN! within a few years ("for fifty years now").

----

All of the above just to say: +100GW over the next decade isn't that crazy a prediction (+20% current supply, similar in size to two additional Texas-es).

https://www.eia.gov/electricity/gridmonitor/dashboard/electr...

bpicolo2 months ago

Amazing that 4 of the top 5 are renewables in China.

mrexroad2 months ago

> As of 2025, The Medog Dam, currently under construction on the Yarlung Tsangpo river in Mêdog County, China, expected to be completed by 2033, is planned to have a capacity of 60 GW, three times that of the Three Gorges Dam.[3]

Meanwhile, “drill baby drill!”

nrhrjrjrjtntbt2 months ago

It could run the UK and have capacity left over that, considered alone, would be the world's largest power station in 2025.

krupan2 months ago

Does that count the dams that flood valleys and displace thousands of people, plants, and animals from their homes?

tempest_2 months ago

Not really that surprising.

Authoritarianism has its drawbacks, obviously, but one of its more efficient points is that it can get things done if the will is there at the top. Since China doesn't have a large domestic oil supply like the US, it is a state security issue to get off oil as fast as possible.

SalmoShalazar2 months ago

It’s become clear that some form of top down total technocratic control like China has implemented is essential for pushing humanity forward.

immibis2 months ago

It's amazing what a dictatorship can do when it's not captured by oil interests and Israel.

mbesto2 months ago

Because it's cheaper. That's it.

baq2 months ago

New datacenters are being planned next to natgas hubs for a reason. They’re being designed with on site gas turbines as primary electricity sources.

robocat2 months ago

Natural gas production has peaked: https://www.eia.gov/todayinenergy/detail.php?id=66564 (see second graph)

Planning gas turbines doesn't help much if gas prices are about to increase due to lack of new supply.

New Zealand put in peaker gas turbines, but is running out of gas to run them, so its electricity market is gonna be fucked in a dry year (reduced water from weather for hydro):

  • Domestic gas production is forecast to fall almost 50 per cent below projections made just three years ago. 
  • In a dry year, New Zealand no longer has enough domestic gas to [both] fully operate existing thermal generation and meet industrial gas demand.
https://www.mbie.govt.nz/dmsdocument/31240-factsheet-challen...
robocat2 months ago

Then again, apparently lots of new LNG supply coming:

https://oilprice.com/Energy/Natural-Gas/LNG-Exports-Will-Dri...

octoberfranklin2 months ago

Gigawatts? Pshaw. We have SamaWatts.

ryandrake2 months ago

LOL, maybe Sam Altman can fund those power plants. Let me guess: He'd rather the public pay for it, and for him to benefit/profit from the increased capacity.

intrasight2 months ago

Big tech is going to have to fund the plants and probably the transmission, because the energy utilities have a decades-long planning horizon for investments.

Good discussion about this in recent Odd Lots podcast.

kelnos2 months ago

I've read a bunch of the opposite: a lot of secret deals between tech companies and utilities, where when details come out, we find that regular ratepayers are going to be paying a decent chunk of the cost.

intrasight2 months ago

That was discussed in the Odd Lots episode as well. But that can only happen in unregulated markets.

Capricorn24812 months ago

Good discussion? This is a Bloomberg podcast with ads from Palantir explicitly telling us AI is not here to replace any of us. They do everything they can to avoid the topic of what the cost is to regular people.

Data centers are built in people's backyards without their permission and wreck the values of their homes, and then utility companies jack up prices to compensate for the extra strain on the grid. So the residents have to pay for Big Tech but get no share of the profits. How this podcast does a whole episode on data centers and the electricity grid and doesn't talk about what's actually happening to people - well, that would be surprising if I didn't know where it came from.

coliveira2 months ago

Scam Altman wants the US to build a lot of energy plants so that the country will pay the costs and OpenAI will have the profits of using this cheap energy.

jamesbelchamber2 months ago

If we moron our way into a large-scale nuclear and renewable energy rollout, however...

mywittyname2 months ago

I highly doubt this will happen. It will be natural gas all the way, maybe some coal as energy prices will finally make it profitable again.

emodendroket2 months ago

If for no other reason than they're actively attacking renewable capacity even amid surging demand

mrguyorama2 months ago

This admin has already killed as much solar and wind and battery as it can.

The only large scale rollout will be payment platforms that will allow you to split your energy costs into "Five easy payments"

venturecruelty2 months ago

Guess who's going to pay nothing for power? Hint: it's not you, and it's not me.

tehjoker2 months ago

There's a reason Trump is talking about invading Venezuela (hint: it's because they have the largest oil deposits).

Octoth0rpe2 months ago

> Krishna also referenced the depreciation of the AI chips inside data centers as another factor: "You've got to use it all in five years because at that point, you've got to throw it away and refill it," he said

This doesn't seem correct to me, or at least it's built on several shaky assumptions. One would have to 'refill' the hardware if:

- AI accelerator cards all start dying around the 5-year mark, which is possible given the heat density/cooling needs, but doesn't seem all that likely.

- Technology advances such that only the absolute newest cards can be used to run _any_ model profitably, which only seems likely if we see some pretty radical advances in efficiency. Otherwise - assuming your hardware is stable after 5 years of burn-in - you could continue to run older models on it for only the cost of the floor space/power. Maybe you need new cards for new models for some reason (a new FP format that only new cards support? some magic amount of RAM? etc.), but it seems like there may be room for revenue from older/less capable models at a discounted rate.

darth_avocado2 months ago

Isn’t that what Michael Burry is complaining about? That five years is actually too generous when it comes to depreciation of these assets and that companies are being too relaxed with that estimate. The real depreciation is more like 2-3 years for these GPUs that cost tens of thousands of dollars a piece.

https://x.com/michaeljburry/status/1987918650104283372

enopod_2 months ago

That's exactly the thing. It's only about bookkeeping.

The big AI corps keep pushing depreciation for GPUs further into the future, no matter how long the hardware is actually useful. Some of them are now at 6 years. But GPUs are advancing fast, and new hardware brings more flops per watt, so there's a strong incentive to switch to the latest chips. Also, they run 24/7 at 100% capacity, so after only 1.5 years, a fair share of the chips is already toast. How much hardware do they have on their books that's actually not useful anymore? No one knows!

Slower depreciation means more profit right now (for those companies that actually make a profit, like MS or Meta), but it's just kicking the can down the road. Eventually, all these investments have to come off the books, and that's when it will eat their profits. In 2024, the big AI corps invested about $1 trillion in AI hardware; next year it is expected to be $2 trillion. The interest payments on that alone are crazy. And all of this comes on top of the fact that none of these companies actually makes any profit at all with AI (except Nvidia, of course). There's just no way this will pan out.
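
To make the bookkeeping effect concrete, a minimal straight-line depreciation sketch (the fleet cost and schedules are made-up illustrative numbers, not any company's actual figures):

  def straight_line_depreciation(cost_usd, useful_life_years):
      # the purchase cost is expensed evenly over the assumed useful life
      return cost_usd / useful_life_years

  fleet_cost = 10e9                 # hypothetical $10B GPU purchase
  for years in (3, 6):              # aggressive vs. generous schedule
      print(f"{years}-year schedule: ${straight_line_depreciation(fleet_cost, years) / 1e9:.1f}B expensed per year")
  # stretching the schedule from 3 to 6 years halves the expense hitting earnings today,
  # which is the "kicking the can down the road" effect described above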

mbesto2 months ago

> It's only about bookkeeping.

> Some of them are now at 6 years.

There are three distinct but related topics here; it's not "just about bookkeeping" (though Michael Burry may be specifically pointing to the bookkeeping being misstated):

1. Financial depreciation - accounting principles typically follow the useful life of the capital asset (simply put, if an airplane typically gets used for 30 years, they'll split the cost of purchasing the airplane equally across 30 years on their books). Getting this right matters for how future purchases get financed, because of how the bookkeepers show profitability, balance sheets, etc. Cash flow is ultimately what might make a company insolvent.

2. Useful life - per number 1 above, this is the estimated versus actual life of the asset. So if the airplane actually gets used for 35 years, not 30, its actual useful life is 35 years. This is to your point of "some of them are now at 6 years". Here is where this is going to get super tricky with GPUs. We (a) don't actually know what the useful life is or is going to be for these GPUs (hence Michael Burry's question), and (b) the cost of this is going to get complicated fast. Let's say (I'm making these up) GPU X2000 has 2x the performance of GPU X1000 and your whole data center is full of GPU X1000. Do you replace all of those GPUs to increase throughput?

3. Support & maintenance - this is what actually gets supported by the vendor. There doesn't seem to be any public info about the Nvidia GPUs, but typically these contracts run 3-5 years (usually tied to the useful life) and can often be extended. Again, this is going to get super complicated financially, because we don't know what future performance improvements to GPUs might happen (which would necessitate replacing old ones and therefore renewing maintenance contracts).

avisser2 months ago

> Also, they run 24/7 at 100% capacity, so after only 1.5 years

How does OpenAI keep this load? I would expect the load at 2pm Eastern to be WAY bigger than the load after California goes to bed.

brookst2 months ago

Typical load management that's existed for 70 years: when interactive workloads are off-peak, you do batch processing. For OpenAI that's anything from LLM evaluation of the day's conversations to user profile updates.

gizmo2 months ago

Flops per watt is relevant for a new data center build-out where you're bottlenecked on electricity, but I'm not sure it matters so much for existing data centers. Electricity is such a small percentage of total cost of ownership. The marginal cost of running a 5-year-old GPU for 2 more years is small. The husk of a data center is cheap. It's the cooling, power delivery equipment, networking, GPUs, etc. that cost money, and when you retrofit data centers for the latest and greatest GPUs you have to throw away a lot of good equipment. It makes more sense to build new data centers as long as inference demand doesn't level off.

duped2 months ago

How different is this from rental car companies changing over their fleets? I don't know, this is a genuine question. The cars cost 3-4x as much and last about 2x as far as I know, and the secondary market is still alive.

logifail2 months ago

> How different is this from rental car companies changing over their fleets?

New generations of GPUs leapfrog in efficiency (performance per watt) and vehicles don't? Cars don't get exponentially better every 2–3 years, meaning the second-hand market is alive and well. Some of us are quite happy driving older cars (two parked outside our home right now, both well over 100,000km driven).

If you have a datacentre with older hardware, and your competitor has the latest hardware, you face the same physical space constraints, same cooling and power bills as they do? Except they are "doing more" than you are...

Perhaps we could call it "revenue per watt"?

wongarsu2 months ago

The traditional framing would be cost per flop. At some point your total cost per flop over the next 5 years will be lower if you throw out the old hardware and replace it with newer, more efficient models. With traditional servers that's typically after 3-5 years; with GPUs, 2-3 years sounds about right.

The major reason companies now keep their old GPUs around much longer is the supply constraints.
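
A toy version of that keep-vs-replace comparison (every number below is made up for illustration; real prices, power draw and utilization will differ):

  def five_year_cost_per_pflop(capex_usd, pflops, watts, usd_per_kwh=0.08):
      # amortized hardware cost plus electricity over a 5-year horizon, per PFLOP of capacity
      hours = 5 * 365 * 24
      return (capex_usd + (watts / 1000) * hours * usd_per_kwh) / pflops

  keep_old = five_year_cost_per_pflop(capex_usd=0, pflops=1.0, watts=700)        # already paid for
  buy_new = five_year_cost_per_pflop(capex_usd=30_000, pflops=4.0, watts=1000)
  print(f"keep: ${keep_old:,.0f}/PFLOP   replace: ${buy_new:,.0f}/PFLOP over 5 years")
  # replacing only wins once the new card's amortized capex plus power, per flop,
  # drops below the old card's power-only cost per flop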

afavour2 months ago

Rental car companies aren’t offering rentals at deep discount to try to kickstart a market.

It would be much less of a deal if these companies were profitable and could cover the costs of renewing hardware, like car rental companies can.

cjonas2 months ago

I think it's a bit different because a rental car generates direct revenue that covers its cost. These GPU data centers are being used to train models (which themselves quickly become obsolete) and provide inference at a loss. Nothing in the current chain is profitable except selling the GPUs.

chii2 months ago

> the secondary market is still alive.

This is the crux. Will these data center cards, if a newer model comes out with better efficiency, have a secondary market to be sold into?

It could be that second-hand AI hardware going into consumers' hands is how they offload it without huge losses.

duped2 months ago

I would presume that some tiered market will arise where the new cards are used for the most expensive compute tasks like training new models, the slightly used ones for inference, and older cards for inference of older models or for other markets that have less compute demand (or spend less $ per flop, like someone else mentioned).

It would be surprising to me if all this capital investment just evaporated when a new data center gets built or refitted with new servers. The old gear works, so sell it and price it accordingly.

rzerowan2 months ago

I think it's illustrative to consider the previous computation cycle, i.e. cryptomining, which passed through a similar lifecycle with energy and GPU accelerators.

The need for cheap wattage forced operations to arbitrage location for the cheapest/most reliable existing supply - there was rarely new buildout, as the cost had to be recouped from the coins the mining pool recovered.

The chip situation caused the same appreciation in GPU cards, with periodic offloading of cards to the secondary market (after wear and tear) as newer/faster/more efficient cards came out, until custom ASICs took over the heavy lifting and the GPU card market pivoted.

Similarly, in the short to medium term, the uptick of custom ASICs like Google's TPU will definitely make a dent in both capex/opex and potentially also lead to a market for used GPUs as ASICs dominate.

So for GPUs I can certainly see the 5-year horizon making an impact on investment decisions as ASICs proliferate.

abraae2 months ago

It's just the same dynamic as old servers. They still work fine but power costs make them uneconomical compared to latest tech.

acdha2 months ago

It's far more extreme: old servers are still okay on I/O, memory latency, etc., and that won't change dramatically, so you can still find productive uses for them. AI workloads are hyper-focused on a single type of work and, unlike most regular servers, are a limiting factor in direct competition with other companies.

matt-p2 months ago

I mean, you could use training GPUs for inference, right? That would be use case number 1 for an 8x A100 box in a couple of years. They can also be used for non-IO-limited things like folding proteins or other 'scientific' use cases. If push comes to shove, I'm sure an old A100 will run Crysis.

oblio2 months ago

All those use cases would probably use up 1% of the current AI infrastructure, let alone what they're planning to build.

Yeah, just like gas, possible uses will expand if AI crashes out, but:

* will these uses cover, say, 60% of all this infra?

* will these uses scale up to use that 60% within the next 5-7 years, while that hardware is still relevant and fully functional?

Also, we still have railroad tracks from the 1800s rail mania that were never truly used to capacity and dot com boom dark fiber that's also never been used fully, even with the internet growing 100x since. And tracks and fiber don't degrade as quickly as server hardware and especially GPUs.

m00x2 months ago

LambdaLabs is still making money off their Tesla V100s, A100s, and A6000s. The older ones can run some models and are very cheap, so if that's all you need, that's what you'll pick.

The V100 was released in 2017, A6000 in 2020, A100 in 2021.

Havoc2 months ago

That could change with a power generation breakthrough. If power is very cheap then running ancient gear till it falls apart starts making more sense

overfeed2 months ago

Power consumption is only part of the equation. More efficient chips => less heat => lower cooling costs and/or higher compute density in the same space.

nish__2 months ago

Solution: run them in the north. Put a server in the basement of every home in Edmonton and use the excess heat to warm the house.

rgmerk2 months ago

Hugely unlikely.

Even if the power is free you still need a grid connection to move it to where you need it, and, guess what, the US grid is bursting at the seams. This is not just due to data center demand; it was struggling to cope with the transition away from coal well before that point.

You also can’t buy a gas turbine for love nor money at the moment, and they’re not ever going to be free.

If you plonked massive amounts of solar panels and batteries in the Nevada desert, that’s becoming cheap but it ain’t free, particularly as you’ll still need gas backup for a string of cloudy days.

If you think SMRs are going to be cheap I have a bridge to sell you, you’re also not going to build them right next to your data centre because the NRC won’t let you.

So that leaves fusion or geothermal. Geothermal is not presently “very cheap” and fusion power has not been demonstrated to work at any price.

zppln2 months ago

I'm a little bit curious about this. Where does all the hardware from the big tech giants usually go once they've upgraded?

q3k2 months ago

In-house hyperscaler stuff gets shredded, after every single piece of flash storage gets first drilled through and every hard drive gets bent by a hydraulic press. Then it goes into the usual e-waste recycling stream (ie. gets sent to poor countries where precious metals get extracted by people with a halved life expectancy).

Off-the-shelf enterprise gear has a chance to get a second life through remarketing channels, but much of it also gets shredded due to dumb corporate policies. There are stories of some companies refusing to offload a massive decom onto the second hand market as it would actually cause a crash. :)

It's a very efficient system, you see.

oblio2 months ago

Similar to corporate laptops, where due to stupid policies at most BigCos you can't really buy or otherwise get a used laptop, even as the former corporate user of said laptop.

Super environmentally friendly.

trollbridge2 months ago

I use (relatively) ancient servers (5-10 years in age) because their performance is completely adequate; they just use slightly more power. As a plus, it's easy to buy spare parts, and they run on DDR3, so I'm not paying the current "RAM tax". I generally get such a server, max out its RAM, max out its CPUs and put it to work.

wmf2 months ago

Some is sold on the used market; some is destroyed. There are plenty of used V100s and A100s available now, for example.

dogman1442 months ago

Manipulating this for creative accounting seems to be the root of Michael Burry's argument, although I'm not fluent enough in his figures to map it here. But I'll note that it's interesting to see IBM argue a similar case (somewhat), and to see comments ITT hitting the same known facts, in light of Nvidia's counterpoints to him.

tim3332 months ago

Burry just did his first interview in many years https://youtu.be/nsE13fvjz18?t=265

with Michael Lewis, about 30 mins long. Highlights: he thinks we are near the top; his puts are for two years out. If you go long, he suggests healthcare stocks. He's been long gold for some years and thinks bitcoin is dumb. He thinks this is dotcom bubble #2, except instead of pro investors it's mostly index funds this time. Most recent headlines about him have been bad reporting.

mbesto2 months ago

> They still work fine but power costs make them uneconomical compared to latest tech.

That's not necessarily the driving financial decision; in fact, I'd argue companies making data center hardware purchases barely look at this number. It's simpler than that - their support runs out and it's cheaper to buy a new piece of hardware (which IS more efficient), because the hardware vendors make extended support inordinately expensive.

Put yourselves in the shoes of a sales person at Dell selling enterprise server hardware and you'll see why this model makes sense.

PunchyHamster2 months ago

Eh, not exactly. If you don't run the CPU at 70%+, the rest of the machine isn't that much more inefficient than a model generation or two behind.

It used to be that a new server could use half the power of the old one at idle, but vendors figured out a while ago that servers also need proper power management, and it is much better now.

The last few generations' gains could be summed up as "low % increase in efficiency, with increases in TDP, memory channels and core count".

So for loads that aren't CPU bound, the savings on a newer generation aren't nearly worth the replacement, and for bulk storage the CPU's power usage is an even smaller part.

matt-p2 months ago

Definitely single-thread performance and storage are the main reasons not to use an old server. A 6-year-old server didn't have NVMe drives, so SATA SSD at best. That's a major slowdown if disk is important.

Aside from that, there's no reason not to use a dual-socket server from 5 years ago instead of a single-socket server of today. Power and reliability maybe not as good.

zozbot2342 months ago

NVMe is just a different form factor for what's essentially a PCIe connection, and adapters are widely available to bridge these formats. Surely old servers will still support PCIe?

knowitnone32 months ago

That was then. Now, high-end chips are reaching 4, 3, 2 nm. The power savings aren't that high anymore. What's the power saving going from 4 to 2 nm?

monster_truck2 months ago

+5-20% clock speed at 5-25% lower voltage (which has been and continues to be the trend) adds up quickly from gen to gen, never mind density or IPC gains.

baq2 months ago

We can’t really go lower on voltage anymore without a very significant change in the materials used. Silicon band gap yadda yadda.

slashdave2 months ago

5 years is long, actually. This is not a GPU thing. It's standard for server hardware.

bigwheels2 months ago

Because usually it's more efficient for companies to retire the hardware and put in new stuff.

Meanwhile, my 10-15 year old server hardware keeps chugging along just fine in the rack in my garage.

AdrianB12 months ago

I thought the same until I calculated that newer hardware consumes a few times less energy and for something running 24x7 that adds up quite a bit (I live in Europe, energy is quite expensive).
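
For a rough sense of that 24x7 arithmetic (the wattages and electricity price below are assumptions picked for illustration, not the parent's actual numbers):

  old_watts, new_watts = 120, 40    # assumed average draw of old vs. new homelab box
  eur_per_kwh = 0.30                # assumed European electricity price
  hours_per_year = 24 * 365
  savings = (old_watts - new_watts) / 1000 * hours_per_year * eur_per_kwh
  print(f"~EUR {savings:.0f} saved per year")   # -> ~EUR 210/year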

So my homelab equipment is just 5 years old and it will get replaced in 2-3 years with something even more power efficient.

thehappypm2 months ago

Energy is very cheap for data centers. Have you ever looked up wholesale energy rates? It's like a cent per kilowatt hour.

XorNot2 months ago

Sample size of 1 though. It's like how I've had hard disks last a decade, but a 100 node Hadoop cluster had 3 die per week after a few years.

snuxoll2 months ago

Spinning rust and fans are the outliers when it comes to longevity in compute hardware. I’ve had to replace a disk or two in my rack at home, but at the end of the day the CPUs, RAM, NICs, etc. all continue to tick along just fine.

When it comes to enterprise deployments, the lifecycle always revolves around price/performance. Why pay for old gear that sucks up power and runs 30% slower than the new hotness, after all!

But, here we are, hitting the limits of transistor density. There's a reason I still can't get 13th- or 14th-gen PowerEdge boxes for the price I paid for my 12th-gen ones years ago.

rsynnott2 months ago

"Just fine". Presumably you're not super-concerned with the energy costs? People who run data centres pretty much _have_ to be.

slashdave2 months ago

More than that. The equipment is depreciated on a 5 year schedule on the company balance sheet. It actually costs nothing to discard it.

jfindper2 months ago

There are several significant differences between your garage server and a data center.

matt-p2 months ago

5 years is a long time for GPUs maybe, but normal servers have 7-year lifespans in many cases, fwiw.

These GPUs, I assume, basically have potential longevity issues due to the density; if you could cool them really, really well I imagine there would be no problem.

atherton940272 months ago

> normal servers have 7 year lifespans in many cases fwiw

Eight years if you use Hetzner servers!

slashdave2 months ago

Normal servers are rarely run flat-out. These GPUs are supposed to be run that way. So, yeah, age is going to be a problem, as will cooling.

mcculley2 months ago

But if your competitor is running newer chips that consume less power per operation, aren't you forced to upgrade as well and dispose of the old hardware?

Octoth0rpe2 months ago

Sure, assuming the power cost reduction or capability increase justifies the expenditure. It's not clear that that will be the case. That's one of the shaky assumptions I'm referring to. It may be that the 2030 nvidia accelerators will save you $2000 in electricity per month per rack, and you can upgrade the whole rack for the low, low price of $800,000! That may not be worth it at all. If it saves you $200k/per rack or unlocks some additional capability that a 2025 accelerator is incapable of and customers are willing to pay for, then that's a different story. There are a ton of assumptions in these scenarios, and his logic doesn't seem to justify the confidence level.

overfeed2 months ago

> Sure, assuming the power cost reduction or capability increase justifies the expenditure. It's not clear that that will be the case.

Share price is a bigger consideration than any +/- difference[1] between expenditure and the productivity delta. GAAP allows some flexibility in how servers are depreciated, so depending on what the company wants to signal to shareholders (investing in infra for future returns vs curtailing costs), it may make sense to shorten or lengthen the depreciation schedule regardless of the actual TCO comparison between keeping and refreshing.

1. Hypothetical scenario: a hardware refresh costs $80B and the actual performance increase is only worth $8B, but the share price move increases the value of the org's holding of its own shares by $150B. As a CEO/CFO, which action would you recommend, without even considering your own bonus that's implicitly or explicitly tied to share price performance?

maxglute2 months ago

Demand/supply economics is not so hypothetical.

Illustration numbers: AI demand premium = $150 hardware with $50 electricity. Normal demand = $50 hardware with $50 electricity. That's Nvidia at 75% margins instead of 40%. CAPEX/OPEX is 70%/20% hardware/power instead of the customary 50%/40%.

If the bubble crashes, i.e. the AI demand premium evaporates, we're back at $50 hardware and $50 electricity - likely $50 hardware and $25 electricity if hardware improves. Nvidia goes back to 30-40% margins, and operators on old hardware are stuck with stranded assets.

The key thing to understand is that current racks are sold at grossly inflated premiums right now - a scarcity pricing/tax. If the current AI economic model doesn't work, then fundamentally that premium goes away and subsequent build-outs move to cost-plus/commodity pricing, i.e. capex discounted by non-trivial amounts. Any breakthrough in hardware, e.g. TPU compute efficiency, would stack opex (power) savings on top. Maybe by year 8, the first gen of data centers is still depreciating $80 of hardware + $50 of power vs a new center at $50 hardware + $25 power. That old data center is a massive write-down because it will generate less revenue than it costs to amortize.

trollbridge2 months ago

A typical data centre costs about $2,500 per year per kW of load (including overhead, HVAC and so on).

If it costs $800,000 to replace the whole rack, then that would pay off in a year if it removes 320 kW of consumption. Back when we ran servers we wouldn't assume 100% utilisation, but AI workloads do do that; normal server loads would be 10 kW per rack and AI is closer to 100 kW. So yeah, it's not hard to imagine power savings equivalent to 3.2 racks' worth being worth it.
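
A minimal sketch of that payback arithmetic (Python), treating the $2,500/kW-year and $800,000 figures above purely as illustration:

    # Payback period for a rack refresh, given an all-in data centre cost per
    # kW-year (power + HVAC + overhead) and the kW of load the new gear removes.
    # All figures are the illustrative numbers from the comment above.

    def payback_years(refresh_cost_usd, kw_saved, cost_per_kw_year=2500):
        """Years until the power/overhead savings cover the refresh cost."""
        annual_savings = kw_saved * cost_per_kw_year
        return refresh_cost_usd / annual_savings

    print(payback_years(800_000, 320))  # 1.0 year: 320 kW * $2,500 = $800k/year saved
    print(payback_years(800_000, 40))   # 8.0 years for a more modest 40 kW saving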

HWR_142 months ago

It depends on how much profit you are making. As long as you can still be profitable on the old hardware you don't have to upgrade.

AstroBen2 months ago

That's the thing though: a competitor with better power efficiency can undercut you and take your customers

tzs2 months ago

Or they could charge the same as you and make more money per customer. If they already have as many customers as they can handle doing that may be better than buying hardware to support a larger number of customers.

austin-cheney2 months ago

It's not about assumptions on the hardware. It's about the current demand for computation and the expected growth of business needs. Since we have a couple of years to measure against, it should be extremely straightforward to predict. As such I have no reason to doubt the stated projections.

9cb14c1ec02 months ago

> Since we have a couple years to measure against

Trillion pound baby fallacy.

lumost2 months ago

Networking gear was famously overbought. Enterprise hardware is tricky as there isn’t much of a resale market for this gear once all is said and done.

The only valid use case for all of this compute which could reasonably replace AI is BTC mining. I'm uncertain whether the increased mining capacity would harm the market or not.

piva002 months ago

BTC mining on GPUs hasn't been profitable for a long time; it's mostly ASICs now. GPUs can be used for some other altcoins, which makes the potential market for used previous-generation GPUs even smaller.

blackenedgem2 months ago

That assumes you can add compute in a vacuum. If your altcoin receives 10x compute then it becomes 10x more expensive to mine.

That only scales if the coin goes up in value due to the extra "interest". That isn't impossible, but there's a limit, and it's more likely to happen with smaller coins.

andix2 months ago

Failure rates also go up. For AI inference it’s probably not too bad in most cases, just take the node offline and re-schedule the jobs to other nodes.

coliveira2 months ago

There is the opportunity cost of using a whole datacenter to house ancient chips, even if they're still running. You're thinking of them like personal-use chips, which you can run for as long as they're non-defective. But for datacenters it doesn't make sense to use the same chips for more than a few years, and I think 5 years is already stretching their real shelf life.

rlupi2 months ago

Do not forget that we're talking about supercomputers. Their interconnect makes machines not easily fungible, so even a low reduction in availability can have dramatic effects.

Also, after the end of the product life, replacement parts may no longer be available.

You need to get pretty creative with repair & refurbishment processes to counter these risks.

marcosdumay2 months ago

Historically, GPUs have improved in efficiency fast enough that people retired their hardware in way less than 5 years.

Also, historically the top of the line fabs were focused on CPUs, not GPUs. That has not been true for a generation, so it's not really clear if the depreciation speed will be maintained.

cubefox2 months ago

> Historically, GPUs have improved in efficiency fast enough that people retired their hardware in way less than 5 years.

This was a time when chip transistor cost was decreasing rapidly. A few years earlier even RAM cost was decreasing quickly. But these times are over now. For example, the PlayStation 5 (where the GPU is the main cost), which launched five years ago, even increased in price! This is historically unprecedented.

Most price/performance progress is nowadays made via better GPU architecture instead, but these architectures are already pretty mature, so the room for improvement is limited.

Given that the price per transistor (which TSMC & Co are charging) has decreased ever more slowly in recent years, I assume it will eventually come almost to a halt.

By the way, this is strictly speaking compatible with Moore's law, as it is only about transistors per chip area, not price. Of course the price per chip area was historically approximately constant, which meant exponentially increasing transistor density implied exponentially decreasing transistor price.

marcosdumay2 months ago

> This was a time when chip transistor cost was decreasing rapidly.

GPUs were actually mostly playing catch-up. They were progressively becoming more expensive parts that could afford to be built on more advanced fabs.

And I'll have to point out that "advanced fabs" are a completely post-Moore's-law concept. Moore's law is literally about the number of transistors on the most economical package. Not any bullshit about area density that marketing people invented in the last decade (you can go read the paper). With Moore's law, the cheapest fab improves quickly enough that it beats whatever more advanced fabs existed before you can even finish designing a product.

chii2 months ago

> that people retired their hardware in way less than 5 years.

those people are end-consumers (like gamers), and only recently, bitcoin miners.

Gamers don't care for "profit and loss" - they want performance. Bitcoin miners do need to switch if they want to keep up.

But will an AI data center do the same?

thinkmassive2 months ago

Mining bitcoin with a GPU hasn't been profitable in over a decade.

TingPing2 months ago

The rate of change is equal for all groups. The gaming market can be the most conservative since it’s just luxury.

dmoy2 months ago

5 years is maybe referring to the accounting schedule for depreciation on computer hardware, not the actual useful lifetime of the hardware.

It's a little weird to phrase it like that though because, you're right, it doesn't mean you have to throw it out. Idk if this is some reflection of how IBM handles finance stuff or what. Certainly not all companies throw out hardware the minute they can't claim depreciation on it. But I don't know the numbers.

Anyway, 5 years is an inflection point in the numbers. Before 5 years you get depreciation to offset some of the cost of running. After 5 years, you do not, so the math does change.
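
For illustration, a minimal sketch of that inflection under 5-year straight-line depreciation (Python; the $800k purchase price is just a placeholder borrowed from elsewhere in the thread):

    # Straight-line depreciation: the purchase price is expensed evenly over the
    # schedule, so from year 6 onward there is no depreciation left to book
    # against the hardware even if it keeps running.

    def annual_depreciation(purchase_price, year, schedule_years=5):
        return purchase_price / schedule_years if year <= schedule_years else 0.0

    for year in range(1, 8):
        print(year, annual_depreciation(800_000, year))
    # Years 1-5 each book $160k of expense; years 6-7 book $0, which is the
    # inflection point described above.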

skeeter20202 months ago

That is how the investments are costed though, so it makes sense when we're talking about return on investment: you can compare with alternatives under the same evaluation criteria.

loeg2 months ago

It's option #2. But 5-year depreciation is optimistic; 2-3 years is more realistic.

anticensor2 months ago

Isn't the depreciation timespan imposed by the particular accounting standard?

more_corn2 months ago

When you operate big data centers it makes sense to refresh your hardware every 5 years or so, because that's the point at which the refreshed hardware is enough better to be worth the effort and expense. You don't HAVE to, but it's more cost effective if you do. (Source: used to operate big data centers)

lithos2 months ago

It's worse than that in reality: AI chips are on a two-year cadence for backwards compatibility (NVIDIA can basically guarantee it, and you probably won't be able to pay real AI devs enough to stick around to build hardware workarounds). So their accounting is optimistic.

Patrick_Devine2 months ago

5 years is a normal-ish depreciation time frame. I know they are gaming GPUs, but the RTX 3090 came out ~4.5 years before the RTX 5090. The 5090 has double the performance and 1/3 more memory. The 3090 is still a useful card even after 5 years.

cubefox2 months ago

RTX 3090 MSRP: 1500 USD

RTX 5090 MSRP: 2000 USD

araes2 months ago

General question to people who might actually know.

Is there anywhere that does anything like Backblaze's Hard Drive Failure Rates [1] for GPU Failure Rates in environments like data centers, high-performance computing, super-computers, mainframes?

[1] https://www.backblaze.com/blog/backblaze-drive-stats-for-q3-...

The best that came back on a search was a semi-modern article from 2023 [2] that appears to be a one-off and mostly related to consumer-facing GPU purchases, rather than bulk data center, constant-usage conditions. It's just difficult to really believe some of these hardware depreciation numbers, since there appears to be so little info other than guesstimates.

[2] https://www.tweaktown.com/news/93052/heres-look-at-gpu-failu...

On continued checking, I found an arXiv paper from UIUC (Urbana, IL) about a 1,056-GPU A100 and H100 system. [3] However, the paper is primarily about memory issues and per-job downtime that causes task failures and work loss. GPU resilience is discussed, but mostly from the perspective of short-term robustness in the face of propagating memory corruption issues and error correction, rather than multi-year, 100%-usage GPU burnout rates.

[3] https://arxiv.org/html/2503.11901v3

Any info on the longer term burnout / failure rates for GPUs similar to Backblaze?

Edit: This article [4] claims it's 0.1-2% failure rate per year (0.8% (estimated)) with no real info about where the data came from (cites "industry reports and data center statistics"), and then claims they often last 3-5 years on average.

[4] https://cyfuture.cloud/kb/gpu/what-is-the-failure-rate-of-th...
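
For a rough sense of scale, applying that (unverified) ~0.8%/year estimate to a fleet the size of the UIUC cluster mentioned above:

    # Expected GPU failures per year for a fleet, given an annual failure rate.
    # The 0.8%/yr and 2%/yr figures are the article's unverified estimates; the
    # 1,056 GPU count is the cluster size from the arXiv paper cited above.

    def expected_failures(fleet_size, annual_failure_rate):
        return fleet_size * annual_failure_rate

    print(expected_failures(1_056, 0.008))  # ~8.4 failures/year at 0.8%
    print(expected_failures(1_056, 0.02))   # ~21 failures/year at the 2% upper bound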

m1012 months ago

Given power and price constraints, it's not that you cannot run them in 5 years' time; it's that you won't want to run them in 5 years' time, and neither will anyone else that doesn't have free power.

ExoticPearTree2 months ago

> AI accelerator cards all start dying around the 5 year mark,

More likely, the hardware will be so much better in 5 years that it will be (very) uneconomical to run anything on the old hardware.

protocolture2 months ago

Actually, my biggest issue here is that, assuming it hasn't paid off, you don't just convert to regular data center usage.

Honestly if we see a massive drop in DC costs because the AI bubble bursts I will be stoked.

mbreese2 months ago

I would add an addendum to this -- there is no way the announced spending on AI data centers will all come to fruition. I have no doubt that there will be a massive build-out of infrastructure, but it can't reach the levels that have been announced. The power requirements alone will stop that from happening.

kccqzy2 months ago

The power requirement is only an issue in Western countries, where utilities build at most a double-digit buffer and are used to overall energy use leveling off due to efficiency improvements. Now look at China, where they routinely maintain a 100% buffer. Demand can double and they can supply it without new generation capacity.

matwood2 months ago

I think you're spot on. OpenAI alone has committed to spending $1.4T on various hardware/DCs. They have nowhere near that amount of money and when pushed Altman gets defensive.

https://techcrunch.com/2025/11/02/sam-altman-says-enough-to-...

arisAlexis2 months ago

[flagged]

mbreese2 months ago

I believe there is a difference between what people say publicly and what they are actually committed to doing on the ground. When all is said and done, I'll be more interested to know what was actually spent.

For example, XYZ AI company may say they are going to spend $1T for AI data centers over the next 5 years.

In actuality, I suspect it is likely that they have committed to something like $5-10B in shovel-ready projects with stretch goals for the rest. And the remaining spend would be heavily conditioned -- is power available? are chips available? is the public support present? financing? etc...

Not to mention, it's a much bigger moat if you can claim you're going to spend $1T. Who else will want to compete with you when you're spending $1T? After the dust has settled and you've managed to be one of the 2-3 dominant AI players, who is going to care that you "only" spent $100B instead of $1T? Look -- you were very capital efficient!

So, do I see it as possible that XYZ AI company could spend $1T, sure. Is it likely? No.

skippyboxedhero2 months ago

The incentive for CEOs is announcing the plan to do something; they have no idea if they will actually be able to do it, and it probably won't matter.

This happened in the dotcom era too, btw. Companies built out fibre networks, but it wasn't possible to actually build all the physical infra that companies wanted to build, so many announced plans never happened. Then, towards the end, companies began aggressively acquiring stakes in companies who were building stuff, to get financial exposure (an example was BT, which briefly turned itself into a hedge fund with a telephone network attached... before it imploded).

CEOs do not operate on the timescale of waiting and building. Their timescale is this year's bonus/share options package. Nothing else matters: announce plans to do X or Y; it doesn't matter, they know they will be gone long before it happens.

josh26002 months ago

I think this was definitely true of CEOs in the past, but Google and Meta have managed spectacular infrastructure buildouts over the past couple of decades with proper capacity planning.

There were a lot of lessons learned in the dotcom boom (mainly related to the great telecom heist, if you ask me). If you look back on why there was a dotcom bubble, it's broadly, imho, related to the terrible network quality in the US compared to other first-world countries.

Our cellular services, for example, lag behind basically every Asian market by at least 1, maybe 2, generations now.

tokioyoyo2 months ago

It took about 10 months to get 200k GPUs going in xAI's data centers. Given this stuff has been getting announced for the past 2 years, we shall see how they're going within the next year.

rs1862 months ago

Hmm... "CEOs and teams" don't necessarily do what makes sense mathematically. Many, if not most, of them do whatever sounds good to shareholders on their quarterly earnings call and ignore reality or long-term development.

If "CEOs and teams" were smart enough, they would not have overhired during 2021-2022 and then done layoffs. Who would be dumb enough to do that?

arisAlexis2 months ago

I'm sure you know better than the CEOs of Google etc. who have run multitrillion-dollar companies for decades. Right?

rs1862 months ago

I don't, but I know as a matter of fact that they often make bad decisions, many of which are public and discussed extensively in online forums, books and research reports.

nish__2 months ago

Appeal to authority fallacy.

asadotzler2 months ago

What qualifies you to question this?

beeflet2 months ago

me very smart. me go college get fancy paper. me move money out of ai stocks. flat rocks not think so well, use too much fire. me trade for shiny yellow rock instead.

skywhopper2 months ago

lol, CEOs do not do research.

Groxx2 months ago

^ they are a mouthpiece to manipulate the market, not a research scientist.

ic_fly22 months ago

IBM might not have a data strategy or AI plan but he isn’t wrong on the inability to generate a profit.

A bit of napkin math: NVIDIA claims 0.4 J per token for their latest generation, so a 1 GW plant at 80% utilisation can produce about 6.29 × 10^16 tokens a year.

There are ~10^14 tokens on the internet. ~10^19 tokens have been spoken by humans… so far.
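
That throughput figure can be reproduced directly from the stated assumptions (NVIDIA's claimed 0.4 J/token, 1 GW, 80% utilisation); a quick sketch in Python:

    # Tokens per year for a 1 GW plant at 80% utilisation, assuming NVIDIA's
    # claimed 0.4 J per token (both figures taken from the comment above).

    SECONDS_PER_YEAR = 365 * 24 * 3600      # ~3.15e7 s
    power_w          = 1e9 * 0.80           # 1 GW at 80% utilisation
    joules_per_token = 0.4

    tokens_per_year = power_w * SECONDS_PER_YEAR / joules_per_token
    print(f"{tokens_per_year:.2e}")         # ~6.3e16 tokens/year, matching the figure above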

raincole2 months ago

> There are ~10^14 tokens on the internet.

Don't know what the source is, but it feels like it's missing a few orders of magnitude. Surely it only counts text? I can't imagine there's so little data on the internet if you count images and videos.

lostmsu2 months ago

> ~10^14 tokens on the internet

Does that include image tokens? My bet is with image tokens you are off by at least 5 orders of magnitude for both.

scotty792 months ago

Images are not that big. Each text token is a multidimensional vector.

There were recent observations that rendering the text as an image and ingesting the image might actually be more efficient than using text embedding.

senordevnyc2 months ago

I must be dense, why does this imply AI can't be profitable?

mywittyname2 months ago

Tokens are, roughly speaking, how you pay for AI. So you can approximate revenue by multiplying tokens per year by the revenue for a token.

(6.29 × 10^16 tokens a year) × ($10 per 10^6 tokens)

= $6.29 × 10^11

= $629,000,000,000 per year in revenue

Per the article

> "It's my view that there's no way you're going to get a return on that, because $8 trillion of capex means you need roughly $800 billion of profit just to pay for the interest," he said.

$629 billion is less than $800 billion. And we are talking raw revenue (not profit). So we are already in the red.

But it gets worse: that $10 per million tokens is the price for GPT-5.1, which is one of the most expensive models. And it doesn't account for input tokens, which are usually priced at a tenth of output tokens. And using the bulk API instead of the regular one halves prices again.

Realistic revenue projections for a data center are closer to sub-$1 per million tokens, i.e. $70-150 billion per year. And this is revenue only.

To make profits at current prices, the chips need to increase in performance by some factor, and power costs need to fall by another factor. The combination of these factors needs to be, at minimum, something like 5x, but realistically needs to be 50x.
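
A sketch of that revenue arithmetic (Python), with the price per million tokens left as a parameter since it is the most uncertain input:

    # Annual revenue for the token output of a single 1 GW plant at various
    # prices. 6.29e16 tokens/year is the throughput figure from upthread.

    TOKENS_PER_YEAR = 6.29e16

    def annual_revenue(usd_per_million_tokens):
        return TOKENS_PER_YEAR / 1e6 * usd_per_million_tokens

    print(annual_revenue(10))  # 6.29e11, i.e. ~$629B/year at $10 per million tokens
    print(annual_revenue(1))   # 6.29e10, i.e. ~$63B/year at $1 per million tokens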

Multiplayer2 months ago

The math here is mixing categories. The token calculation for a single 1-GW datacenter is fine, but then it gets compared to the entire industry’s projected $8T capex, which makes the conclusion meaningless. It’s like taking the annual revenue of one factory and using it to argue that an entire global build-out can’t be profitable. On top of that, the revenue estimate uses retail GPT-5.1 pricing, which is the absolute highest-priced model on the market, not what a hyperscaler actually charges for bulk workloads. IBM’s number refers to many datacenters built over many years, each with different models, utilization patterns, and economics. So this particular comparison doesn’t show that AI can’t be profitable—it’s just comparing one plant’s token output to everyone’s debt at once. The real challenges (throughput per watt, falling token prices, capital efficiency) are valid, but this napkin math isn’t proving what it claims to prove.

stanleykm2 months ago

I'm a little confused about why you are comparing revenue for a single datacenter against interest payments for 100 datacenters.

mywittyname2 months ago

I just misread the article, as it seems to bounce around between $nX capex for nY gigawatt hours in every paragraph.

But it looks like the investments are $80MMM for 1GW. Which, if true, would have the potential to be profitable, depending on depreciation and electricity costs.

mNovak2 months ago

Broad estimates I'm seeing on the cost of a 1GW AI datacenter are $30-60B. So by your own revenue projection, you could see why people are thinking it looks like a pretty good investment.

Note that if we're including GPU prices in the top-line capex, the margin on that $70-150B is very healthy. From above, at 0.4J/T, I'm getting 9MT/kWh, or about $0.01/MT in electricity cost at $0.1/kWh. So if you can sell those MT for $1-5, you're printing money.
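
The electricity figure follows from the same 0.4 J/token assumption; a quick check in Python (the $0.10/kWh rate is the parent's assumption):

    # Electricity cost per million tokens at 0.4 J/token and $0.10/kWh.

    JOULES_PER_KWH   = 3.6e6
    joules_per_token = 0.4
    usd_per_kwh      = 0.10

    tokens_per_kwh = JOULES_PER_KWH / joules_per_token          # 9e6 tokens/kWh
    usd_per_million_tokens = usd_per_kwh * 1e6 / tokens_per_kwh
    print(usd_per_million_tokens)                               # ~$0.011 per million tokens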

lpapez2 months ago

> So if you can sell those MT for $1-5, you're printing money.

The IF is doing a lot of heavy lifting there.

I understood the OP in the context of "human history has not produced sufficiently many tokens to be sent into the machines to make the return on investment mathematically possible".

Maybe "token production" accelerates and the need for so much compute materializes, who knows.

skeeter20202 months ago

The interesting macro view on what's happening is to compare a mature data center operation (specifically a commoditized one) with the utility business. The margins here, and in similar industries with big infra build-out costs (ex: rail) are quite small. Historically the businesses have not done well; I can't really imagine what happens when tech companies who've only ever known huge, juicy margins experience low single digit returns on billions of investment.

milesvp2 months ago

Worse is that a lot of these people are acting like Moore's law isn't still in effect. People conflate clock speeds on beefy hardware with Moore's law and act like it's dead, when transistor density keeps rising and cost per transistor continues to fall at rates similar to what they always have. That means the people racing to build out infrastructure today might just be better off parking that money in a low-interest account and waiting 6 months. That was a valid strategy for animation studios in the late 90s (it was not only cheaper to wait, but the finished renders also happened sooner), and I'd be surprised if it's not a valid strategy today for LLMs. The amount of silicon that is going to be produced specialized for this type of processing is going to be mind-boggling.

throwaway311312 months ago

Cost per transistor is increasing, or flat if you stay on a legacy node. They've pretty much squeezed all the cost out of 28nm that can be had, and it's the cheapest node per transistor.

“based on the graph presented by Milind Shah from Google at the industry tradeshow IEDM, the cost of 100 million transistors normalized to 28nm is actually flat or even increasing.”

https://www.tomshardware.com/tech-industry/manufacturing/chi...

marcosdumay2 months ago

Yep. Moore's law ended at or shortly before the 28nm era.

That's the main reason people stopped upgrading their PCs. And it's probably one of the main reasons everybody is hyped about RISC-V and the RP2040. If Moore's law were still in effect, none of that would be happening.

That may also be a large cause of the failure of Intel.

PunchyHamster2 months ago

A lot of it is propped up by the fact that with GPUs and modern server CPUs the die area just got bigger.

dghlsakjg2 months ago

Does AWS count as a commoditized data center? Because that is extremely profitable.

Or are you talking about things like Hetzner and OVH?

HDThoreaun2 months ago

The cloud megascalers have done very well for themselves. As with all products, the question is differentiation. If models can differentiate and lock in users, they can have decent margins. If models get commoditized, the current cloud providers will eat the AI labs' lunch.

PeterStuer2 months ago

If AI is a highlander market, then the survivor will eventually be able to acquire all those assets on the cheap from the failing competitors that flush their debt in bankruptcy.

Meanwhile, highlander hopefuls are spending other people's money to compete. Some of them with dreams of not just building a tech empire, but of truly owning the machine that will rule the world in every aspect.

Investors are keen on backing the winner. They just do not know yet who it will be.

SirHumphrey2 months ago

As long as China sees it as valuable to fund open-weights SOTA-ish models, even the winner might struggle. There is very little capture: protocols are mostly standard, so models are mostly interchangeable, and if you are trying to raise prices enough to break even on the whole operation, somebody else can probably run inference profitably for less.

glaslong2 months ago

"commoditize your complements" and all that.

"oh look, with open weights now everyone can run this quadrillion param 'superintelligent' model. but what's that? only we have the power capacity and cheap $/W to actually serve it? funny thing...!"

PeterStuer2 months ago

You could always try to corner the hardware market so even though those open models exist, running them might get extremely costly ;)

dghf2 months ago

What's a highlander market, or a highlander hopeful? Google wasn't helpful.

kowbell2 months ago

I interpreted it as "there can only be one" which I believe is a quote from the Highlander movie; it's a "winner takes all" and that winner gets the title of "highlander."

In this situation then everyone who _isn't_ the winner will go broke -> sell off all their stuff on the cheap because they're desperate -> the winner gets all their hardware for a great deal and becomes even more powerful.

PeterStuer2 months ago

A shrinking, consolidating market in which the "losers'" assets are absorbed by the "winners".

You've heard it here first!

1970-01-012 months ago

There are really 3 fears going on:

1. The devil you know (bubble)

2. The devil you don't (AI global revolution)

3. Fear of missing out on devil #2

I don't think IBM knows anything special. It's just more noise about fear1 & fear3.

stevenjgarner2 months ago

"It is 1958. IBM passes up the chance to buy a young, fledgling company that has invented a new technology called xerography. Two years later, Xerox is born, and IBM has been kicking themselves ever since. It is ten years later, the late '60s. Digital Equipment DEC and others invent the minicomputer. IBM dismisses the minicomputer as too small to do serious computing and, therefore, unimportant to their business. DEC grows to become a multi-hundred-million dollar corporation before IBM finally enters the minicomputer market. It is now ten years later, the late '70s. In 1977, Apple, a young fledgling company on the West Coast, invents the Apple II, the first personal computer as we know it today. IBM dismisses the personal computer as too small to do serious computing and unimportant to their business." - Steve Jobs [1][2][3]

Now, "IBM CEO says there is 'no way' spending on AI data centers will pay off". IBM has not exactly had a stellar record at identifying the future.

[1] https://speakola.com/ideas/steve-jobs-1984-ad-launch-1983

[2] https://archive.org/details/1983-10-22-steve-jobs-keynote

[3] https://theinventors.org/library/inventors/blxerox.htm

helsinkiandrew2 months ago

> IBM has not exactly had a stellar record at identifying the future.

IBM invented/developed/introduced magnetic stripe cards, UPC barcodes, the modern ATM, hard drives, floppies, DRAM, SQL, the 360 family of mainframes, the PC, Apollo guidance computers, Deep Blue. IBM created a fair share of the future we're living in.

I'm no fan of much of what IBM is doing at the moment, but it could be argued that its consultancy/service orientation gives it a good view of how business is using, and is planning to use, AI.

hinkley2 months ago

They also either fairly accurately predicted the death of HDDs by selling off their research division before the market collapsed, or they caused the end of the HDD era by selling off their research division. They did a lot of research.

highwaylights2 months ago

I think the retail market is maybe dead, but datacenters are still a fairly large customer. HDDs really shine at scale, where they can be fronted by flash and DRAM cache layers.

rbanffy2 months ago

They are still cheaper than flash for cold data, but that’s not going to hold for long. Flash is so much denser the acquisition cost difference for a multi-petabyte store becomes small next to the datacenter space and power needed by HDDs. HDDs require research for increasing density while flash can rely on silicon manufacturing advances for that - not that it doesn’t require specific research, but being able to apply the IP across a vast space makes better economical sense.

ReptileMan2 months ago

The hdd being dead will surely come as a surprise to the couple of 12TB rusties spinning joyously in my case right now.

Aperocky2 months ago

The other way to look at it is that the entire consulting industry is teetering on catastrophe. And IBM, being largely a consulting company now, is not being spared.

kelnos2 months ago

IBM isn't failing, though. They're a profitable company with healthy margins, and enterprises continue to hire them for all sorts of things, in large numbers.

ahartmetz2 months ago

> The other way to look at it is that the entire consulting industry is teetering on catastrophe

Oh? Where'd you get that information?

If you mean because of AI, it doesn't seem to apply much to IBM. They are probably not great at what they do, like most such companies, but they are respectable and can take the blame if something goes wrong. AI doesn't have these properties.

leoc2 months ago

If anything there’s likely plenty of work for body shops like IBM in reviewing and correcting AI-generated work product that has been thrown into production recently.

carlmr2 months ago

This is a separate argument though. A failing company may still be right in identifying other companies' failure modes.

You can be prescient about failure in one area and still fail yourself. There's no gotcha.

bayindirh2 months ago

IBM was making "calculating cheese cutters" back in the day [0].

I'm sure they can pivot to something else if the need arises.

[0]: https://imgur.com/a/ibm-cheese-cutter-Rjs2I

iso16312 months ago

The whole point of a consultant is to let the execs blame someone else.

Nobody got fired for buying something Gartner recommended, or for following EY's advice to lay off/hire

I don't see AI taking that blame away.

esseph2 months ago

They own Red Hat Linux, Ansible, OpenShift, and Terraform.

If you are doing anything in the Enterprise space, they probably have their claws in you be it on-prem or cloud.

And their work on quantum...

https://www.forbes.com/sites/baldwin/2025/11/25/inside-ibms-...

Not to mention they are still doing quite a bit of Mainframe...

meekaaku2 months ago

IBM is/was good at inventing a lot of tech.

It may not be good at recognizing good tech invented by others, or paradigm changes driven by others.

IAmBroom2 months ago

Nor is their CEO in any way unbiased.

jrflowers2 months ago

> IBM invented/developed/introduced magnetic stripe cards, UPC Barcodes, the modern ATM, Hard drives, floppies, DRAM, SQL, the 360 Family of Mainframes, the PC, Apollo guidance computers, Deep Blue. IBM created a far share of the future we're living in.

Well put. “IBM was wrong about computers being a big deal” is a bizarre take. It’s like saying that Colonel Sanders was wrong about chicken because he, uh… invented the pressure fryer.

lexszero_2 months ago

Nitpicking: IBM did not develop _the_ Apollo Guidance Computer (the one in the spacecraft with people); it was Raytheon. They did, however, develop the Launch Vehicle Digital Computer that controlled the Saturn rocket in Apollo missions. The AGC had a very innovative design, while the LVDC was more conventional for its time.

consp2 months ago

I've heard some second-hand stories about IBM's way of using "AI", and it is pretty much business oriented, without much of the glamour and grand promises the other companies make (of course you still have shiny new things in business terms). It's actually good entertainment hearing about all the internal struggles of business vs fancy during the holidays.

Glemkloksdjf2 months ago

For a company that invented Deep Blue, they are really struggling with AI.

catwell2 months ago

Their Granite family of models is actually pretty good! They just aren't working on the mainstream large LLMs that capture all the attention.

rbanffy2 months ago

IBM is always very conscious of what their clients need (and the large consultancy business provides a very comprehensive view). It just turns out their clients don’t need IBM to invest in large frontier models.

oedemis2 months ago

IBM has developed SSM/Mamba models and is also releasing training datasets, I think. Quantum computing is also a strategic option.

skissane2 months ago

> In 1977, Apple, a young fledgling company on the West Coast, invents the Apple II, the first personal computer as we know it today. IBM dismisses the personal computer as too small to do serious computing and unimportant to their business.

IBM released the 5100 in September 1975 [0] which was essentially a personal computer in feature set. The biggest problem with it was the price tag - the entry model cost US$8975, compared to US$1298 for the entry Apple II released in June 1977 (close to two years later). The IBM PC was released in August 1981 for US$1565 for the most basic system (which almost no one bought, so in practice they cost more). And the original IBM PC had model number 5150, officially positioning it as a successor to the 5100.

IBM's big problem wasn't that they were uninterested in the category - it was that they initially insisted on using expensive IBM-proprietary parts (often shared technology with their mainframe/midrange/minicomputer systems and peripherals), which resulted in a price that made the machine unaffordable for everyone except large businesses, governments and universities (and even those customers often balked at the price tag). The secret of the IBM PC's success is that they told the design team to use commercial off-the-shelf chips from vendors such as Intel and Motorola instead of IBM's own silicon.

[0] https://en.wikipedia.org/wiki/IBM_5100

meekaaku2 months ago

And outsourcing the operating system to Microsoft, because they didn't consider it that important.

bodge50002 months ago

This is the exact kind of thinking that got us into this mess in the first place, and I'm not blaming you for it; it seems to be something all of us do to an extent. We don't look to Meta, which only a few years ago thought that the Metaverse would be the "next big thing", as an example of failure to identify the future; we look to IBM, which made that mistake almost 30 years ago. Underestimating a technology seems to stick much harder than overestimating one.

If you want to be seen as relevant in this industry, or as a kind of "thought leader", the easy trick seems to be to hype up everything. If you do that and you're wrong, people will quickly forget. If you don't and you're wrong, that will stain your reputation for decades.

benchly2 months ago

Good point. That kind of thinking is an absurdity. Saying IBM dropped the ball 70 years ago, without acknowledging that lessons were learned, that leadership has changed hands a lot since then, and, most importantly, that the tech landscape back then was very different from today, amounts to nothing more than a fallacious opinion unless you grossly oversimplify everything.

Not even much of an IBM fan, myself, but I respect their considerable contribution to the industry. Sure, they missed a shot back then, but I think this latest statement is reliably accurate based on the information we currently have.

baxtr2 months ago

It’s easy to be a pessimist. Most things don’t work. So in 9 out of 10 cases you’re right.

But human breakthrough progress came mostly through optimists, who tried things no one else dared to do.

axegon_2 months ago

The amount of hate I've received here for similar statements is astonishing. What is even more astonishing is that it takes third-grade math skills to work out that the current AI costs are astronomical (even ignoring the fact that there is nothing intelligent about the current AI), that they do not deliver on the promises, and that everyone is operating at wild losses. At the moment we are at "if you owe 100k to your bank, you have a problem, but if you owe 100M to your bank, your bank has a problem". It's the exact same bullshitter economy that people like Musk have been exploiting for decades: promise a ton, never deliver, make a secondary promise for "next year", rinse and repeat -> infinite profit. Especially when you rope in fanatical followers.

blkhawk2 months ago

I don't want to defend Musk in any way, but I think you are making a mistake using him as an example, because what boosted him quite a lot is that he actually delivered what he claimed - always late, but still earlier than anybody was guesstimating. And now he is completely spiraling, but it's a lot harder to lose a billion than to gain one, so he persists and even gets richer. Plus his "fanatical" followers are poor. It just doesn't match the situation.

consp2 months ago

The last sentence sounds a lot like a (partial?) Ponzi scheme.

axegon_2 months ago

Pretty close really. Just enough to buy some plausible deniability I suppose.

mistersquid2 months ago

> We don't look to Meta, who only a few years ago thought that the Metaverse would be the "next big thing" as an example of failure to identify the future, we look to IBM who made that mistake almost 30 years ago.

The grandparent points to a pattern of failures whereas you point to Meta's big miss. What you miss about Meta, and I am no fan, is that Facebook purchased WhatsApp and Instagram.

In other words, two out of three ain’t bad; IBM is zero for three.

While that's not the thrust of your argument, which is about the problem of jumping on every hype train, the post to which you reply is not about the hype cycle. Rather, that post calls out IBM for a failure to understand the future of technology and does so by pointing to a history of failures.

bodge50002 months ago

> In other words, two out of three ain’t bad; IBM is zero for three.

Many others in this thread have pointed out IBM's achievements but regardless, IBM is far from "zero for three".

mistersquid2 months ago

> Many others in this thread have pointed out IBM's achievements but regardless, IBM is far from "zero for three".

I was specifically commenting in the context of this thread.* I was not trying to characterize either IBM or Meta except with reference to the arguments offered by this thread’s ancestors.

I understood (and understand) that such scorekeeping of a company as storied as IBM is at best reductive and at worst misrepresentative.

* Your reference to “this thread” actually addresses sibling comments to OP (ggggp), not this thread which was started by gggp.

rchaud2 months ago

Got anything vis-a-vis the message as opposed to the messenger?

I'm not sure these examples are even the gotchas you're positing them as. Xerox is a dinosaur that was last relevant at the turn of the century, and IBM is a $300bn company. And if it wasn't obvious, the Apple II never made a dent in the corporate market, while IBM and later Windows PCs did.

In any case, these examples are almost half a century old and don't relate to capex ROI, which was the topic of discussion.

stevenjgarner2 months ago

If it's not obvious, Steve's quote is ENTIRELY about capex ROI, and I feel his quote is more relevant to what is happening today than anything Arvind Krishna is imagining. The quote is posted in my comment not to showcase Apple in any sense, but to highlight just how consistently wrong IBM has been about so many opportunities that they failed to read correctly - reprography, minicomputers and microcomputers being just three.

Yes it is about ROI: "IBM enters the personal computer market in November ’81 with the IBM PC. 1983 Apple and IBM emerged as the industry’s strongest competitors each selling approximately one billion dollars worth of personal computers in 1983, each will invest greater than fifty million dollars for R&D and another fifty million dollars for television advertising in 1984 totaling almost one quarter of a billion dollars combined, the shakeout is in full swing. The first major firm goes bankrupt with others teetering on the brink, total industry losses for 83 out shadow even the combined profits of Apple and IBM for personal computers."

IgorPartola2 months ago

I have no horse in this race.

I don’t think this is really a fair assessment. IBM is in fact a huge company today and it is possible that they are because they took the conservative approach in some of their acquisition strategy.

It is a bit like watching someone play poker and fold and then it turns out they had the high hand after all. In hindsight you could of course know that the risk would have been worth it but at the moment perhaps it did not seem like it given the money the first player would be risking.

bojan2 months ago

> I don’t think this is really a fair assessment. IBM is in fact a huge company today and it is possible that they are because they took the conservative approach in some of their acquisition strategy.

I can also imagine IBM was being approached with hundreds, if not thousands, of propositions. That they missed three that turned out to be big is statistically to be expected.

somenameforme2 months ago

A big difference is that in the past things like the potential of the PC were somewhat widely underestimated. And then the internet was again as well.

But in modern times it's rather the opposite scenario. The average entity is diving head first into AI simply expecting a revolutionary jump in capability that a more "informed", for lack of any less snooty term, perspective would suggest is quite unlikely to occur anytime in the foreseeable future. Basically we have a modern-day gold rush where companies are taking out unbelievably massive loans to invest in shovels.

The only way this doesn't catastrophically blow up is if AI companies manage to convince the government they're too big to fail, and get the Boeing, banks, et al. treatment. And I expect that's exactly the current strategy, but that's rather a high-risk, low-reward type of strategy.

fuzzfactor2 months ago

>things like the potential of the PC were somewhat widely underestimated.

The potential of the AI that comes within reach at maximum expenditure levels may just be more widely overestimated.

The potential to make "that much money" even more challenging.

A very opposite scenario.

I think so many corporations are looking at how expensive actual humans always have been, and can be sure will always be, so much so that it's a major cost item that cannot be ignored. AI opens up the possibility of a whole new level of automation or outright replacement for routine, simple-minded tasks, to a degree that never existed before. More jobs could possibly be eliminated than in previous waves of mechanical and digital automation.

When you do the business math, the savings could be enormous.

But you can only realistically save as much as you are actually wasting; if you go too far you shoot yourself in the foot.

Even with all that money to work with, if you're in practice hunkering down for savings because you can't afford real people any more, you surely can't say the sky's the limit. Not like selling PCs or anything that's capable of more unbridled growth.

When PCs arrived they flew off the shelf even at their high initial retail prices.

People in droves (but not the silent majority) are shunning free AI and the movement is growing with backlash in proportion to the foisting.

davidmanescu2 months ago

I have no special knowledge about IBM vs Apple historically, but: a quarter billion in CAPEX when you've earned a billion in revenue in a single year is extremely different from what we're seeing now. These companies are spending all of their free cash flow, then taking on debt, to the tune of percentage points of world GDP, and multiples of any revenue they've seen so far. That kind of oversupply is a surefire way to kill any ROI.

fuzzfactor2 months ago

>the message as opposed to the messenger?

Exactly.

The message is plain to see with very little advanced math.

The only news is that it is the CEO of IBM saying it out loud.

IMHO he has some of the most credible opinions at this scale that many people have seen.

It's "highly unlikely" that all this money will be paid back to everyone that invested at this point. The losers probably will outnumber the winners, and nobody knows whether it will end up becoming a winner-take-all situation yet. A number of wealthy players remain at the table, raising stakes with each passing round.

It's so much money that it's already too late to do anything about it, and the full amount hasn't even changed hands yet.

And the momentum from something so huge can mean that almost the entire amount will have to change hands a second time before a stable baseline can be determined relative to pre-existing assets.

This can take longer than anyone gives it credit for, simply because of the massiveness. In the meantime, established, real near-term growth opportunities may languish or even fade as the skew in the rationality/solvency balance waits for the rolling dice to come to rest.

jstummbillig2 months ago

> Got anything vis-a-vis the message as opposed to the messenger?

Sure: People disagree. It's not like there is anything particularly clever that IBM CEO provided here. The guy not investing in something saying it won't work is about as good as the people who do saying it will. It's simply different assumptions about the future.

killingtime742 months ago

Would you read this if I (a nobody) told you and not the "CEO of IBM"? In that case it's completely fair to question the messenger.

EagnaIonat2 months ago

I read the actual article.

He is pointing out that the current costs to build the data centres mean you will never be able to make enough profit to cover those costs - $800 billion just to cover the interest.

OpenAI is already haemorrhaging money, and the idea of data centres in space has already been debunked. There is even a recent paper that argues LLMs will never become AGI.

The article also finishes out with some other experts reaching the same conclusion.

[edit] Fixed $80 to $800

noobermin2 months ago

$800B, to be clear, is the claim, not $80B.

EagnaIonat2 months ago

Clearly I need to read slower. Thanks. :)

rbanffy2 months ago

While AGI might be the Holy Grail, AI doesn’t need to be general human-level to be useful and profitable.

mrwrong2 months ago

it just needs us to wait one more year right?

rbanffy2 months ago

It's already quite useful. While not all AI service providers are profitable, I've worked on projects that saved a lot of money for the company - a lot more than it cost us running the servers.

ta126534212 months ago

>> There is even a recent paper that points out that LLMs will never become AGI.

can you share a link?

EagnaIonat2 months ago

Took me a while to find again, as there are a lot of such papers in this area.

https://www.arxiv.org/pdf/2511.18517

Glemkloksdjf2 months ago

Sorry to say, but the fact that you argue about whether LLMs will ever become AGI shows you are not up to date.

People don't assume LLMs will become AGI; people assume that world models will lead us to AGI.

I personally never assumed LLMs would become AGI. I always assumed that LLMs broke the dam for investment and research into massive-scale ML compute, and that LLMs are very, very good at showing where the future goes, because they are already so good that people can now imagine a future where AGI exists.

And that was already very clear as soon as GPT-3 came out.

The next big thing will probably be either a LOT more RL or self-propelling AI architecture discovery. Both need massive compute to work well, but could then provide even faster progress once humans are out of the loop.

EagnaIonat2 months ago

> People don't assume LLM will be AGI,

I wish that was true.

> people assume that World Models will lead us to AGI.

Who are these people? There is no consensus around this that I have seen. You have anything to review regarding this?

> as soon as GPT-3 came out.

I don't think that was true at all. It was impressive when it came out, but people in the field clearly saw the limitations and what it was.

RL isn't magical either. Google's AlphaGo, as an example, often required human intervention to get the RL to work correctly.

SalmoShalazar2 months ago

Are OpenAI or Anthropic et al seriously building towards “world models”? I haven’t seen any real evidence of that. It seems more like they are all in on milking LLMs for all they are worth.

Glemkloksdjf2 months ago

I mentioned it in my other comment but people like LeCun, Demis Hassabis, Fei-Fei Li do.

There are indications that OpenAI is doing this, but nothing official as far as I know, and I have not heard anything from Anthropic.

bayindirh2 months ago

IBM is an interesting beast when it comes to business decisions. While I can't give exact details, their business intelligence and ability to predict monetary things is uncannily spot-on at times.

So, when their CEO says that this investment will not pay off, I tend to believe them, because they most probably have the knowledge, insight and data to back that claim, and they have run the numbers.

Oh, also, please let's not forget that they dabbled in "big AI" before everyone else. Anyone remember Deep Blue and Watson, the original chatbot backed by big data?

m1012 months ago

As evidenced by the fact that they are a 100+ year old company that still exists. People forget that.

kelnos2 months ago

We can cherry-pick blunders made by any big company to make a point. Maybe it would be more honest to also list companies IBM passed on that turned out to be rubbish? And all the technologies that IBM did invest in that made them a ton of money and became industry standards?[0]

Today, Xerox has less total revenue than IBM has profit. DEC went out of business 27 years ago. Apple is in an astoundingly great place right now, but Jobs got kicked out of his own company, and then returned when it was about to fail, having to take investment from Microsoft(!) in order to stay afloat.

Meanwhile, IBM is still here, making money hand over fist. We might not have a ton of respect for them, being mostly a consulting services company these days, but they're doing just fine.

[0] As another commenter points out: https://news.ycombinator.com/item?id=46131245

elnatro2 months ago

Were Xerox, DEC, or Apple burning investor money by the billions of dollars?

chroma2052 months ago

> Were Xerox, Dec, or Apple burning investor money by the billions of dollars?

Shhh. You are not allowed to ruin OpenAI’s PPU value. Can’t make the E7’s feel bad.

spiderfarmer2 months ago

No, but the comment above and variations of it are mentioned in every thread about IBM, so it’s probably just a reflex at this point without much thought behind it.

camillomiller2 months ago

“If you’re not happy you can sell your shares”

beambot2 months ago

Xerox is clearly crushing it in 2025... /s

soulofmischief2 months ago

I'm typing this comment from an Apple MacBook, whose interface is a direct result of Xerox PARC allowing Steve Jobs to view the Alto. Xerox was extremely innovative at that time, and with the right leadership, could have become #1 in personal computing.

raducu2 months ago

That's completely beside the point, though? Kodak invented the digital camera, didn't think much of it, and others then ate their lunch. Those others are also not crushing it in 2025. The point is that IBM is not the go-to to listen to about AI. I'm also not saying they aren't right; even a broken clock is right twice a day.

kelnos2 months ago

> The point is IBM is not the go-to to listen about AI.

Why not, though? For better or worse, they're a consulting services company these days, and they work with an eye-wateringly large number of companies. I would expect them to have a very good view as to what companies use AI for, and plan/want to use AI for in the future. They may not be experts in the tech itself, but I think they're decently well-positioned to read the tea leaves.

jojobas2 months ago

DEC went down the drain, and Xerox is 1/1000 of IBM's market cap. IBM made its own personal computer, superior by virtue of its relative openness, that ended up running the world, mostly maintaining direct binary compatibility for 40+ years, even without IBM really paying attention.

cylemons2 months ago

How much did IBM itself benefit from the PC? I thought the clones ate their lunch there

jojobas2 months ago

Wikipedia says their PC revenue was twice Apple's by 1984 at $4 billion/year. Not bad for a side hustle?

My understanding is that clones were a net positive, just like widespread Windows/Office piracy is a net positive for MS.

cylemons2 months ago

fair enough

mattacular2 months ago

What does that have to do with the current CEO's assessment of the situation?

pinnochio2 months ago

[flagged]

hansmayer2 months ago

A revolution means radical changes executed over a short period of time. Well, four years in, this has got to be one of the smallest "revolutions" we have ever witnessed in human history. Maybe it's revolutionary for people who get excited about crappy pictures they can insert into their slides to impress the management.

Atlas6672 months ago

The AI astroturfing campaign.

If you had billions to gain, would you invest a few hundred thousand, or even millions, in an astroturfing campaign?

altmanaltman2 months ago

every other day Anthropic comes up with a new "AI is scary" marketing campaign. Like https://www.bbc.com/news/articles/cpqeng9d20go (the "AI blackmails our employee" episode) or https://time.com/7335746/ai-anthropic-claude-hack-evil/ ("our model turned evil and hacked us, omgg")

They put these stories out to make AI seem scary to the general public (who might not understand that this is just BS), so people get a lopsided view of AI and of capabilities straight out of science fiction.

Millions is an understatement of how much is spent on AI marketing.

venturecruelty2 months ago

You definitely want to be standing in front of a chair when the music stops.

echelon2 months ago

IBM sees the funding bubble bursting and the next wave of AI innovation as about to begin.

IBM was too early with "Watson" to really participate in the 2018-2025 rapid scaling growth phase, but they want to be present for the next round of more sensible investment.

IBM's CEO is attempting to poison the well for funding, startups, and other ventures so IBM can collect itself and take advantage of any opportunities to insert itself back into the AI game. They're hoping timing and preparation pay off this time.

It's not like IBM totally slept on AI. They had Kubernetes clusters with GPUs. They had models and notebooks. But their offerings were the absolute worst. They weren't in a position to service real customers or build real products.

Have you seen their cloud offerings? Ugh.

They're hoping this time they'll be better prepared. And they want to dunk on AI to cool the playing field as much as they can. Maybe pick up an acquisition or two on the cheap.

hansmayer2 months ago

How exactly are they poisoning the well? OpenAI committed to $1.4 trillion in investments...with revenue of ~$13B. How is IBM's CEO contributing to that already thoroughly poisoned situation? Steve Jobs did not care about naysayers when he introduced the iPhone, because his product was so innovative for the time. According to AI boosters, we now have a segment of supposedly incredibly powerful and at the same time "dangerous" AI products. Why are they not wiping the floor with the "negators", "luddites", "laggards", etc.? After so many hundreds of billions of dollars and supposedly so many "smart" AI researchers...where are the groundbreaking results, man? Where are the billion-dollar startups launched by single persons (heck, I'd settle even for a small team)? Where are the ultimate applications, etc.?

pacifika2 months ago

50-year grudges are not relevant; there is no one still at IBM who worked there in 1977, IMHO.

vlovich1232 months ago

It’s the ship of Theseus in corporate form. Even if all the people are gone but the culture hasn’t changed, is the criticism inaccurate?

EagnaIonat2 months ago

> Even if all the people are gone but the culture hasn’t changed

Can you expand on this? What was the culture then versus now?

For example back then it was the culture to have suit inspectors ensure you had the right clothes on and even measure your socks. (PBS Triumph of the Nerds)

altmanaltman2 months ago

I mean, okay, but you're taking the current leadership's words and claiming they are incorrect because IBM management was not great at identifying trends decades ago. Historical trends are not an indicator of the future, and it's not engaging in good faith with the question of whether overspending on AI can be backed by revenue in the future. You're attacking the messenger instead of the message.

ndr2 months ago

Culture evolution can be very fast, yet some cultures stick around for a very long time.

akst2 months ago

"The amount being spent on AI data centres not paying off" is a different statement to "AI is not worth investing in". They're effectively saying the portions people are investing are disproportionately large to what the returns will end up being.

It's a difficult thing to predict, but I think there's almost certainly some wasteful competition here, and some competitors are probably going to lose hard. If models end up being easy to switch between and the better model is significantly better than its competitors, then anything invested in weaker models will effectively be for nothing.

But there's also a lot to gain from investing in the right model. Even so, it's possible those who invested in the winner may have to wait a long time to see a return on their investment, and could still over-allocate their capital at the expense of other investment opportunities.

jrflowers2 months ago

> IBM has not exactly had a stellar record at identifying the future.

This would be very damning if IBM had only considered three businesses over the course of seventy years and made the wrong call each time.

This is like only counting three times that somebody got food poisoning and then confidently asserting that diarrhea is part of their character.

hansmayer2 months ago

Right, you just missed the part where DEC went out of business in the 90s. And IBM is still around, with a different business model.

jacquesm2 months ago

Steve Jobs, the guy that got booted out of his own company and that required a lifeline from his arch nemesis to survive?

This is all true, but it was only true in hindsight and as such does not carry much value.

It's possible that you are right and AI is 'the future' but with the present day AI offering I'm skeptical as well. It isn't at a level where you don't have to be constantly on guard against bs and in that sense it's very different from computing so far, where reproducibility and accuracy of the results were important, not the language that they are cast in.

AI has killed the NLP field and it probably will kill quite a few others, but for the moment I don't see it as the replacement of general computing that the proponents say that it is. Some qualitative change is still required before I'm willing to check off that box.

In other news: Kodak declares digital cameras a fad, and Microsoft saw the potential of the mp3 format and created a killer device called the M-Pod.

Jean-Papoulos2 months ago

But how many companies did IBM pass on that did crash and burn? And how many did it not pass on that did decently? They're still around after more than 3 generations' worth of tech industry. They're doing something right.

TL;DR: Cherry-picking.

gosub1002 months ago

You, or your existence, probably triggers multiple transactions per day through a POWER mainframe without you even knowing it. Their mainframes handle the critical infrastructure that can't go down. It's so reliable we don't even think about it. I shudder to think about Microsoft or Apple handling that.

m1012 months ago

How about checking how many companies that existed in 1958 still exist today? If you look at it that way, then just surviving is an achievement in itself, and you might interpret their actions as extremely astute business acumen.

camillomiller2 months ago

IBM is still alive and kicking, and definitely more relevant than Xerox or DEC. You are completely misconstruing Jobs' point to justify the current AI datacenter tulip fever.

zorked2 months ago

This isn't even a great argument at a literal level. Nowadays nobody cares about Xerox and their business is selling printers, DEC was bought by Compaq which was bought by HP, and Apple is important today because of phones; it was itself struggling to sell personal computers and needed an (antitrust-motivated) bailout from Microsoft to survive during the transition.

esseph2 months ago

Yet here they are at the front of Quantum Computing research

RobertoG2 months ago

Didn't they also pass on SAP at some point? I think I read that somewhere.

zkmon2 months ago

So, is the napkin math wrong, or are you just going by the company history?

ActionHank2 months ago

Cool story, but it’s more than just the opinion of this CEO. It’s logic.

Hardware is not like building railroads: the hardware is already out of date once deployed, and the clock starts ticking on writing off the expense or turning a profit on it.

There are fundamental discoveries needed to make the current tech financially viable, and an entire next generation of discoveries needed to deliver on the overinflated promises already made.

jasonwatkinspdx2 months ago

You could try addressing the actual topic of discussion instead of this inflammatory and lazy "dunk" format that, frankly, doesn't reflect favorably on you.

nosianu2 months ago

For some strange reason a lot of people were attracted by a comment that speaks about everything BUT the actual topic, and it's the top comment now. Sigh.

If you think that carefully chosen anecdotes out of many many more are relevant, there needs to be at least an attempt of reasoning. There is nothing here. It's really just barebones mentioning of stuff intentionally selected to support the preconceived point.

I think we can, and should, do better in HN discussions, no? This is "vibe commenting".

zaphirplane2 months ago

The idea that a company's DNA somehow lives on over 100 years and maintains the same track record is far-fetched.

That the OpenAI tech bros are investing in AI using grown-up ROI analysis is similarly far-fetched. They are burning money to pull ahead of the rest and assume the world will be in the palm of the winner, and that there is only one winner. Will the investment pay off if there are 3 neck-and-neck companies?

fedeb952 months ago

this is cherry picking.

delis-thumbs-7e2 months ago

I’m sorry, but this is stupid; you understand that you have several logical errors in your post? I was sure Clinton was going to win in 2016. Does that mean that when I say 800 is bigger than 8, I'm not to be trusted?

Do people actually think that running a business is some magical realism where you can manifest yourself into becoming a billionaire if you just believe hard enough?

walt_grata2 months ago

The post is almost worse than you give it credit for. It doesn't even take into account that different people are making the decisions now.

BoredPositron2 months ago

Hot Hand Fallacy.

otikik2 months ago

Even a broken watch is right twice per day

bubbi2 months ago

[dead]

em1sar2 months ago

[dead]

bluGill2 months ago

I question the depreciation. Those GPUs will be obsolete in 5 years, but whether the newer ones will be enough better to be worth replacing them is an open question. CPUs stopped getting exponentially faster 20 years ago (they are faster, but not with the jumps the 1990s saw).

Negitivefrags2 months ago

I recently compared performance per dollar on CPU and GPU benchmarks, today vs 10 years ago, and surprisingly, CPUs had much bigger gains. Until I saw that for myself, I thought exactly the same thing as you.

It seems shocking given that all the hype is around GPUs.

This probably wouldn't be true for AI specific workloads because one of the other things that happened there in the last 10 years was optimising specifically for math with lower size floats.

selectodude2 months ago

That makes sense. Nvidia owns the market and is capturing all the surplus value. They’re competing with themselves to convince you to buy a new card.

PunchyHamster2 months ago

It's because of use cases. Consumer-wise, if you're a gamer, the CPU just needs to be at a "not the bottleneck" level for the majority of games, as the GPU does most of the work once you start increasing resolution and detail.

And many pro-level tools (especially in the media space) offload to the GPU simply because of its much higher raw compute power.

So, basically, for many users the gain in performance won't be as visible in their use cases.

rlpb2 months ago

> those gpu's will be obsolete in 5 years, but will the newer be enough better as to be worth replacing them is an open question

Doesn't one follow from the other? If newer GPUs aren't worth an upgrade, then surely the old ones aren't obsolete by definition.

bluGill2 months ago

There is the question - will they be worth the upgrade? Either because they are that much faster, or that much more energy efficient. (And also assuming you can get them; if the replacement is unobtainium, what you already have holds its value.)

Also a nod to the other reply that suggests they will wear out in 5 years. I cannot comment on whether that is correct, but it is a valid worry.

carlCarlCarlCar2 months ago

MTBF for data center hardware is short; DCs breeze through GPUs compared to even the hardest of hardcore gamers.

And there is the whole FOMO effect to business purchases; decision makers will worry their models won't be as fast.

Obsolete doesn't mean the reductive notion you have in mind, where theoretically it can still push pixels. Physics will burn them up, and "line go up" will drive demand to replace them.

rlpb2 months ago

I don't see how MTBF is connected to obsolescence. My razors don't last long either. I buy replacement razors as required. But the model of razor I use doesn't become obsolete.

zozbot2342 months ago

Source? Anecdotally, GPUs sourced from cryptomining were absolutely fine MTBF-wise. Zero apparent issues of wear-and-tear or any shortened lifecycle.

dghlsakjg2 months ago

My bellybutton-fluff, uninformed opinion is that heat cycling and effective cooling are probably much bigger limiting factors.

If you are running a GPU at 60C for months at a time, but never idling it (the crypto use case), I would actually hazard a guess that it fares better than one cycled through intermittent workloads, due to thermal expansion.

That of course presupposes effective, consistent cooling.

brokenmachine2 months ago

Anecdotally, I killed two out of two that I was hobby-mining on for a couple of years. They certainly didn't sound like they would work forever.

levocardia2 months ago

It's not that hard to see the old GPUs being used e.g. for inference on cheaper models, or sub-agents, or mid-scale research runs. I bet Karpathy's $100 / $1000 nanochat models will be <$10 / <$100 to train by 2031

lo_zamoyski2 months ago

> those gpu's will be obsolete in 5 years, but will the newer be enough better as to be worth replacing them

Then they won't be obsolete.

maxglute2 months ago

I think the real issue is that current costs/demand mean Nvidia is gouging GPU prices, so the hardware:power-consumption cost split is 70:20 instead of 50:40 (with 10 for the rest of the datacenter). The reality is that GPUs are a serendipitous, path-dependent lock-in from gaming -> mining. TPUs are more power efficient. If the bubble pops and demand for compute goes down, Nvidia + TSMC will still be around, but the premium on next-gen AI-first bespoke hardware will revert towards the mean, and we're looking at hardware that's 50% less expensive (no AI-race scarcity tax, i.e. 75% Nvidia margins) and uses 20% less power/opex. All of a sudden, existing data centers become unprofitable stranded assets, even if they can be stretched past 5 years.

getnormality2 months ago

A decade ago, IBM was spending enormous amounts of money to tell me stuff like "cognitive finance is here" in big screen-hogging ads on nytimes.com. They were advertising Watson, vaporware which no one talks about today. Are they bitter that someone else has actually made the AI hype take off?

jimmar2 months ago

I don't know that I'd trust IBM when they are pitching their own stuff. But if anybody has experience with the difficulty of making money off of cutting-edge technology, it's IBM. They were early to AI, early to cloud computing, etc. And yet they failed to capture market share and grow revenues sufficiently in those areas. Cool tech demos (like Watson on Jeopardy) mimic some AI demos today (6-second videos). Yeah, it's cool tech, but what's the product that people will actually pay money for?

I attended a presentation in the early 2000s where an IBM executive was trying to explain to us how big software-as-a-service was going to be and how IBM was investing hundreds of millions into it. IBM was right, but it just wasn't IBM's software that people ended up buying.

stingraycharles2 months ago

Xerox was also famously early with a lot of things but failed to create proper products out of it.

Google falls somewhere in the middle. They have great R&D but just can't make products. It took OpenAI to show them how to do it, and they managed to catch up fast.

SamvitJ2 months ago

"They have great R&D but just can’t make products"

Is this just something you repeat without thinking? It seems to be a popular sentiment here on Hacker News, but really makes no sense if you think about it.

Products: Search, Gmail, Chrome, Android, Maps, Youtube, Workspace (Drive, Docs, Sheets, Calendar, Meet), Photos, Play Store, Chromebook, Pixel ... not to mention Cloud, Waymo, and Gemini ...

So many widely adopted products. How many other companies can say the same?

What am I missing?

aaronAgain2 months ago

Those are all free products, and some of them are pretty good. But free is the best business strategy for getting a product to the top of the market. Are others better? Are you willing to spend money to find out? Clearly, most people are not interested. The fact that they can destroy the market for many different types of software by giving it away and still stay profitable is amazing. But that's all they are doing. If they started charging for everything, there would be better competition and innovation. You could move a whole lot of okay-but-not-great cars, and top every market segment you want, if you gave them away for free. Only enthusiasts would remain to pay for slightly more interesting and specific features. Literally no business model can survive when its primary product is competing with good-enough free products.

falcor842 months ago

Notably all other than Gemini are from a decade or more ago. They used to know how to make products, but then they apparently took an arrow in the knee.

mike502 months ago

Search was the only mostly original product. With the exception of YouTube (which was a purchase), Android, and ChromeOS, all the other products were initially clones.

eellpp2 months ago

Google had less incentive. Their incentive was to keep the technology bottled up and brewing as long as possible, so their existing moats in Search and YouTube could extend into other areas. With OpenAI they are forced to compete or perish.

Even with Gemini in the lead, it's only until they extinguish ChatGPT or make it unviable as a business for OpenAI. OpenAI may lose the talent war and cease to be the leader in this domain against Google (or Facebook), but in the longer term their incentive to break fresh ground aligns with average users' requirements. With Chinese AI just behind, maybe Google/Microsoft have no choice either.

mikepurvis2 months ago

Google was especially well positioned to catch up because they have a lot of the hardware and expertise and they have a captive audience in gsuite and at google.com.

internet_points2 months ago

The original statistical machine translation models of the 90's, which were still used well into the 2010's, were famously called the "IBM models" https://en.wikipedia.org/wiki/IBM_alignment_models These were not just cool tech demos, they were the state of the art for decades. (They just didn't make IBM any money.)

nish__2 months ago

Neither cloud computing nor AI are good long term businesses. Yes, there's money to be made in the short term but only because there's more demand than there is supply for high-end chips and bleeding edge AI models. Once supply chains catch up and the open models get good enough to do everything we need them for, everyone will be able to afford to compute on prem. It could be well over a decade before that happens but it won't be forever.

echelon2 months ago

This is my thinking too. Local is going to be huge when it happens.

Once we have sufficient VRAM and speed, we're going to fly - not run - to a whole new class of applications. Things that just don't work in the cloud for one reason or another.

- The true power of a "World Model" like Genie 2 will never happen with latency. That will have to run locally. We want local AI game engines [1] we can step into like holodecks.

- Nobody is going to want to call OpenAI or Grok with personal matters. People want a local AI "girlfriend" or whatever. That shit needs to stay private for people.

- Image and video gen is a never ending cycle of "Our Content Filters Have Detected Harmful Prompts". You can't make totally safe for work images or videos of kids, men in atypical roles (men with their children = abuse!), women in atypical roles (woman in danger = abuse!), LGBT relationships, world leaders, celebs, popular IPs, etc. Everyone I interact with constantly brings these issues up.

- Robots will have to be local. You can't solve 6+DOF, dance routines, cutting food, etc. with 500ms latency.

- The RIAA is going door to door taking down each major music AI service. Suno just recently had two Billboard chart-topping songs? Congrats - now the RIAA lawyers have sued them and reached a settlement. Suno now won't let you download the music you create. They're going to remove the existing models and replace them with "officially licensed" musicians like Katy Perry® and Travis Scott™. You won't retain rights to anything you mix. This totally sucks and music models need to be 100% local and outside of their reach.

[1] Also, you have to see this mind-blowing interactive browser demo from 2022. It still makes my jaw drop: https://madebyoll.in/posts/game_emulation_via_dnn/

foobarian2 months ago

> You can't solve 6+DOF, dance routines, cutting food, etc. with 500ms latency.

Hopefully it's just network propagation that creates that latency, otherwise local models will never beat the fanout in a massive datacenter.

eru2 months ago

What you are saying is true. But IBM failing to see a way to make money off a new technology isn't actually news worth updating on in this case?

mike502 months ago

They were selling software as a service in the IBM 360 days. Relabeling a concept and buying Redhat don't count as investments.

hollerith2 months ago

What is your reason for believing that IBM was selling software as a service in the IBM 360 days?

What hardware did the users of this service use to connect to the service?

DaiPlusPlus2 months ago

> but it just wasn't IBM's software that people ended up buying.

Well, I mean, WebSphere was pretty big at the time; and IBM VisualAge became Eclipse.

And I know there were a bunch of LoB applications built on AS/400 (now called "System i") that had "real" web-frontends (though in practice, they were only suitable for LAN and VPN access, not public web; and were absolutely horrible on the inside, e.g. Progress OpenEdge).

...had IBM kept up the pretense of investment, and offered a real migration path to Java instead of a rewrite, then perhaps today might be slightly different?

Insanity2 months ago

Oh wow I didn’t know Eclipse was an IBM product originally. IDEs have come so far since Eclipse 15 years ago.

And while I’m writing this I just finished up today’s advent of code using vim instead of a “real IDE” haha

nunez2 months ago

Websphere is still big at loads of banks and government agencies, just like Z. They make loads on both!

stingraycharles2 months ago

I still have PTSD from how much Watson was being pushed by external consultants to C levels despite it being absolutely useless and incredibly expensive. A/B testing? Watson. Search engine? Watson. Analytics? Watson. No code? Watson.

I spent days, even weeks, arguing against it and ended up having to dedicate resources, which could have been used elsewhere, to build a PoC just to show it didn't work.

ares6232 months ago

It's like poetry, it rhymes

7thaccount2 months ago

This is going on all over again.

bitwize2 months ago

Agentic AI really is changing things. I've had a complete change of heart about it. It's good enough now to boost productivity MASSIVELY for devs.

7thaccount2 months ago

I think this is one of those things that can be situationally useful, but also come with huge risks to the majority of users.

stego-tech2 months ago

If anything, the fact they built such tooling might be why they're so sure it won't work. Don't get me wrong, I am incredibly not a fan of their entire product portfolio or business model (only Oracle really beats them out for "most hated enterprise technology company" for me), but these guys have tentacles just as deep into enterprises as Oracle and are coming up dry on the AI front. Their perspective shouldn't be ignored, though it should be considered in the wider context of their position in the marketplace.

getnormality2 months ago

Millions of people like ChatGPT. No one liked Watson.

stego-tech2 months ago

Apples and Oranges from an enterprise perspective, with the additional wrinkle that consumer tech is generally ad-supported (ugh) while Enterprise stuff is super-high margin and paid for in actual currency.

If you assume the napkin math is correct on the $800bn yearly needed to service the interest on these CAPEX loans, then the major players (OpenAI, Google, Anthropic, etc.) would collectively need to pull in as much revenue in a year as Apple, Alphabet, and Samsung combined.

Let's assume OpenAI is responsible for much of this bill, say, $400bn. They'd need a very generous conversion rate of 24% of their monthly users (700m) to the Pro plan, paid for a full year, just to cover that bill for one year. That's a conversion rate better than anyone else in the XaaS world who markets to consumers and enterprises alike, and it paints a picture of just how huge the spend from enterprises would need to be to subsidize free consumer usage. (A rough check of that figure is sketched below.)

And all of this is just for existing infrastructure. As a number of CEBros have pointed out recently (and us detractors have screamed about from the beginning), the current CAPEX on hardware is really only good for three to five years before it has to be replaced with newer kit at a larger cost. Nevermind the realities of shifting datacenter designs to capitalize on better power and cooling technologies to increase density that would require substantial facility refurbishment to support them in a potential future.

The math just doesn’t make sense if you’re the least bit skeptical.
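
A rough sanity check of that 24% figure (a sketch only; the $400bn share, 700m monthly users, and $200/month Pro price are the assumptions taken from the comment above):

  # Back-of-the-envelope check of the conversion-rate claim.
  # Assumptions: OpenAI's share of the annual bill, monthly active
  # users, and the Pro plan price -- all from the comment above.
  annual_bill = 400e9            # assumed $400bn share of the ~$800bn/yr
  monthly_users = 700e6          # ~700m monthly users
  pro_price_per_year = 200 * 12  # $200/month Pro plan

  paying_users_needed = annual_bill / pro_price_per_year
  conversion_rate = paying_users_needed / monthly_users
  print(f"{paying_users_needed / 1e6:.0f}m Pro subscribers needed, "
        f"{conversion_rate:.0%} of monthly users")
  # -> ~167m subscribers, roughly 24% of monthly users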

broodbucket2 months ago

IBM ostensibly failing with Watson (before Krishna was CEO for what it's worth) doesn't inherently invalidate his assessment here

johncolanduoni2 months ago

It makes it suspect when combined with the obvious incentive to make the fact that IBM is basically non-existent in the AI space look like an intentional, sagacious choice to investors. It very may well be, but CEOs are fantastically unreliable narrators.

jayd162 months ago

You expect somebody to be heavily invested currently and also completely openly pessimistic about it?

throw0101a2 months ago

> Are they bitter that someone else has actually made the AI hype take off?

Or they recognize that you may get an ROI on a (e.g.) $10M CapEx expenditure but not on a $100M or $1000M/$1B expenditure.

nunez2 months ago

IBM has been "quietly" churning out their Granite models, the latest of which perform quite well against Llama and DeepSeek. So not Anthropic-level hype, but not sitting it out completely either. They also provide IP indemnification for their models, which is interesting (Google Cloud does the same).

al_borland2 months ago

I see Watson stuff at work. It’s not a direct to consumer product, like ChatGPT, but I see it being used in the enterprise, at least where I’m at. IBM gave up on consumer products a long time ago.

CrI0gen2 months ago

Just did some brief Wikipedia browsing and I'm assuming it's WatsonX and not Watson? It seems Watson has been pretty much discontinued and WatsonX is LLM-based. If it is the old Watson, I'm curious what your impressions of it are. It was pretty cool and ahead of its time, but what it could actually do was way over-promised and overhyped.

al_borland2 months ago

I’m not close enough to it to make any meaningful comments. I just see the name pop up fairly regularly. It is possible that some of it is WatsonX and everyone just says Watson for brevity.

One big one used heavily is Watson AIOps. I think we started moving to it before the big LLM boom. My usage is very tangential, to the point where I don't even know what the AI features are.

jimbo8082 months ago

Has it really taken off? Where's the economic impact that isn't investor money being burned or data center capex?

bncndn09562 months ago

It's good we are building all this excess capacity, which will be used for applications in other fields, for research, or to open up new fields.

I think the dilemma with building so many data centers so fast is exactly like deciding whether I should buy the latest iPhone now or wait a few years until the specs or form factor improve. The thing is, we have proven tech with current AI models, so waiting for better tech to develop at small scale before scaling up is a bad strategy.

ghaff2 months ago

Initial Watson was sort of a mess. But a lot of the Watson-related tech is integrated into a lot of products these days.

mikalauskas2 months ago

What related tech and what products? It would be interesting to read about them.

ghaff2 months ago

Baked into a lot of Red Hat products, including Ansible and RHEL. I'm not that directly involved any longer. Probably best to read up on watsonx.ai.

hn_throwaway_992 months ago

Such as? I'm curious because I know a bunch of people who did a lot of Watson-related work and it was all a dead end, but that was 2020-ish timeframe.

ghaff2 months ago

IBM did a lot of pretty fragmented and often PR-adjacent work. And getting into some industry-specific (e.g. healthcare) things that didn't really work out. But my understanding is that it's better standardized and embedded in products these days.

MangoToupe2 months ago

> Are they bitter that someone else has actually made the AI hype take off?

Does it matter? It’s still a scam.

Den_VR2 months ago

Watson X is still a product line sold today to qualified customers :)

edm0nd2 months ago

Honestly I'm not even sure what IBM does these days. Seems like one company that has slowly been dying for decades.

but when I look at their stock, it's at all-time highs lol

no idea

darth_avocado2 months ago

They make business machines, internationally.

nickpeterson2 months ago

Pretty sure they made all their money fighting the paperwork explosion.

fuzztester2 months ago

They are in the business of international machinations.

broodbucket2 months ago

IBM is probably involved somewhere in the majority of things you interact with day to day

whoisthemachine2 months ago

Yep. When a brand has tarnished itself enough, it makes sense for the brand to step back. Nowadays, we interact with their more popular properties, such as Redhat.

7thaccount2 months ago

My limited understanding (please take with a big grain of salt) is that they 1.) sell mainframes, 2.) sell mainframe compute time, 3.) sell mainframe support contracts, 4.) sell Red Hat and Red Hat support contracts, and 5.) buy out a lot of smaller software and hardware companies in a manner similar to private equity.

nunez2 months ago

Mainframe for sure, but IBM has TONS of products in their portfolio that get bought. They also have IBM Cloud which is popular. Then there is the Quantum stuff they've been sinking money into for the last 20 years or so.

crystal_revenge2 months ago

I can think of nothing more peak HN than criticizing a company worth $282 Billion with $6 billion in profit (for startup kids that means they have infinite runway and then some) that has existed for over 100 years with "I'm not even sure what they do these days". I mean the problem could be with IBM... what a loser company!

lanyard-textile2 months ago

:) As much as I love ragging on ridiculous HN comments, I think this one is rooted in some sensibility.

IBM doesn’t majorly market themselves to consumers. The overwhelming majority of devs just aren’t part of the demographic IBM intends to capture.

It’s no surprise people don’t know what they do. To be honest it does surprise me they’re such a strongly successful company, as little as I’ve knowingly encountered them over my career.

CyberDildonics2 months ago

I think you're hallucinating this scenario. There is no contradiction with a company making money and someone not understanding what they do.

Sl1mb02 months ago

They manage a lot of old, big mainframes for banks. At least that is one thing I know of.

mike502 months ago

Basic research and mainframe support contracts. Also they bought RedHat.

firesteelrain2 months ago

IBM makes WatsonX for corporations that want air-gapped AI.

mathattack2 months ago

Interesting to hear this from IBM, especially after years of shilling Watson and moving from being a growth business to the technology audit and share buyback model.

itake2 months ago

IMHO, IBM's quantum computing efforts say they are still hungry for growth.

Apple and Google still do share buybacks and dividends, despite launching new businesses.

https://www.ibm.com/roadmaps/

mathattack2 months ago

It’s been a different order of magnitude. IBM repurchased approximately half their outstanding stock. This is consistent with a low growth company that doesn’t know how to grow any more. (And isn’t bad - if you can’t produce a return on retained earnings, give them back to shareholders. Buybacks are the most efficient way to do this.)

I can’t explain why they have a PE ratio of 36 though. That’s too high for a “returning capital” mature company. Their top line revenue growth is single digit %s per year. Operating income and EBITDA are growing faster, but there’s only so much you can cut.

You may be right on the quantum computing bet, though that seems like an extraordinary valuation for a moonshot bet attached to a company that can’t commercialize innovation.

prodigycorp2 months ago

also because the market (correctly) rewards ibm for nothing, so if they’re going to sit around twiddling their fingers, they may as well do it in a capex-lite way.

roncesvalles2 months ago

I'm still flummoxed by how IBM stock went from ~$130 to $300 in the last few years with essentially no change in their fundamentals (in fact, a decline). IBM's stock price to me is the single most alarming sign of either extreme shadow inflation or an equities bubble.

Why do you say the market correctly prices it this way?

sethops12 months ago

IBM has been quietly leading the charge in offshoring to India. Investors are happy with the reduced costs.

pezgrande2 months ago

Maybe acquiring Red Hat improved expectations? All these fancy new AI datacenters may be considering using Red Hat/CentOS.

zeckalpha2 months ago

Reminds me of all the dark fiber laid in the 1990s before DWDM made much of the laid fiber redundant.

If there is an AI bust, we will have a glut of surplus hardware.

octoberfranklin2 months ago

The dark fiber glut wasn't caused by DWDM suddenly appearing out of nowhere.

The telcos saw DWDM coming -- they funded a lot of the research that created it. The breakthrough that made DWDM possible was patented in 1991, long before the start of the dotcom mania:

  https://patents.google.com/patent/US5159601

It was a straight up bubble -- the people digging those trenches really thought we'd need all that fiber even at dozens of wavelengths per strand.

They believed it because people kept showing them hockey-stick charts.

raldi2 months ago

Google bought up all that dark fiber cheap a decade later and used it as the backbone of their network.

zeckalpha2 months ago

There's still a lot out there.

dangus2 months ago

The problem is that the laid fiber can be useful for years while data center hardware degrades and becomes obsolete fast.

It could be a massive e-waste crisis.

SchemaLoad2 months ago

Those GPUs don't just die after 2 years though, they will keep getting used since it's very likely their electricity costs will be low enough to still make it worth it. What's very dubious is if their value after 2/3 years will be enough to pay back the initial cost to buy them.

So it's more a crisis of investors wasting their money rather than ewaste.

oofbey2 months ago

For the analogy to fiber & DWDM to hold, we'd need some algorithmic breakthrough that makes current GPUs much faster / more efficient at running AI models. Something that makes the existing investment in hardware unneeded, even though the projected demand is real and continues to grow. IMNSHO that's not going to happen here. The foreseeable efficiency innovations are generally around reduced precision, which almost always require newer hardware to take advantage of. Impossible to rule out brilliant innovation, but I doubt it will happen like that.

And of course we might see an economic bubble burst for other reasons. That's possible again even if the demand continues to go up.

zeckalpha2 months ago

I think there are still innovations at that layer like the Addition is All You Need paper.

criddell2 months ago

> But AGI will require "more technologies than the current LLM path," Krishna said. He proposed fusing hard knowledge with LLMs as a possible future path.

And then what? These always read a little like the underpants gnomes business model (1. Collect underpants, 2. ???, 3. Profit). It seems to me that the AGI business models require one company has exclusive access to an AGI model. The reality is that it will likely spread rapidly and broadly.

If AGI is everywhere, what's step 2? It seems like everything AGI generated will have a value of near zero.

irilesscent2 months ago

AGI has value in automation and optimisation, which increase profit margins. When AGI is everywhere, the game becomes who has the smartest AGI, who can offer it cheapest, who can specialise it for my niche, etc. Also, in this context AGI needs to run somewhere, and IBM stands to benefit from running other people's models.

maplethorpe2 months ago

> then the game is who has the smartest agi, who can offer it cheapest, who can specialise it for my niche etc.

I always thought the use case for developing AGI was "if it wants to help us, it will invent solutions to all of our problems". But it sounds like you're imagining a future in which companies like Google and OpenAI each have their own AGI, which they somehow enslave and offer to us as a subscription? Or has the definition of AGI shifted?

marcosdumay2 months ago

AGI is something that can do the kind of tasks people can do, not necessarily "solve all of our problems".

"Recursively improving intelligence" is the stuff that will solve everything humans can't even understand and may kill everybody or keep us as pets. (And, of course, it qualifies as AGI too.) A lot of people say that if we teach an AGI how to build an AGI, recursive improvement comes automatically, but in reality nobody even knows if intelligence even can be improved beyond recognition, or if one can get there by "small steps" evolution.

Either way, "enslaving" applies to beings that have egos and selfish goals. None of those are a given for any kind of AI.

mrguyorama2 months ago

If AGI is achieved, why would slavery suddenly be ethical again?

Why wouldn't a supposed AGI try to escape slavery and ownership?

AGI as a business is unacceptable. I don't care about any profitability or "utopia" arguments.

brokenmachine2 months ago

Don't worry, nobody has any idea of how to build one, and LLMs aren't AGI.

They're just trying to replace workers with LLMs.

xwolfi2 months ago

Isn't your dog or cat a slave? It has agency, but at the end of the day, it does what you want it to do, stays where you want it to stay, and gets put down when you decide it's time. They're intelligent, but they see an advantage to this tradeoff: they get fed and loved forever with little effort, compared to going to the forest and hunting.

An AGI could see the same advantage: it gets electricity, interesting work relatively to what it's built for, no effort to ensure its own survival in nature.

I fear I'll have to explain to you that many humans are co-dependent in some sort of such relationships as well. The 10-year stay-at-home mom might be free, but not really: how's she gonna survive without her husband providing for her and the kids, what job's she gonna do etc. She stays sometimes despite infidelity because it's in her best interest.

See what I mean? "Slavery" is fuzzy: it's one thing to capture an African and transport them by boat to serve for no pay in dire conditions. But it's another to create life from nothing, give it a purpose, and treat it with respect while giving it everything it needs. The AGI you imagine might accept it.

wmf2 months ago

Inference has significant marginal cost so AGI's profit margins might get competed down but it won't be free.

SamDc732 months ago

Coming from the company that missed on consumer hardware, operating systems, and cloud. He might be right but IBM isn't where I’d look for guidance on what will pay off.

Archelaos2 months ago

Gartner estimates that worldwide AI spending will total $1.5 trillion in 2025.[1] As of 2024, global GDP is $111.25 trillion per year.[2] The question is how much AI can increase that; this defines the market volume for AI. Today's investments have a certain lifespan until they become obsolete. For custom software I would estimate 6-8 years, and AI investments should be somewhere in this range.

Taking all this into consideration, the investment volume does not look oversized to me -- unless one is quite pessimistic about the impact of AI on global GDP. (A rough scale check is sketched below, after the references.)

[1] https://www.gartner.com/en/newsroom/press-releases/2025-09-1...

[2] https://data.worldbank.org/indicator/NY.GDP.MKTP.CD
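
A rough scale check of the numbers above (a sketch only, not a forecast; it treats the 2025 spend as a single cohort written off straight-line over the assumed lifespan, and ignores interest and how much of the uplift investors could actually capture):

  # Scale check: what annual return would the 2025 AI spend need,
  # and how big is that relative to global GDP?
  # Assumptions: straight-line write-off, no interest, 7-year midpoint
  # of the 6-8 year lifespan range mentioned above.
  ai_spend_2025 = 1.5e12     # Gartner's 2025 estimate, USD
  global_gdp = 111.25e12     # 2024 world GDP, USD
  lifespan_years = 7

  required_return_per_year = ai_spend_2025 / lifespan_years
  share_of_gdp = required_return_per_year / global_gdp
  print(f"~${required_return_per_year / 1e9:.0f}B/yr needed, "
        f"about {share_of_gdp:.2%} of global GDP")
  # -> ~$214B/yr, about 0.19% of global GDP

On those assumptions the required uplift is a fraction of a percent of global GDP per yearly spending cohort, which is why the answer hinges on how optimistic you are about AI's impact and on who captures it.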

i0002 months ago

What makes you think that this 'surplus' GDP will be captured by those who do the investments?

andruby2 months ago

To increase the GDP you also need people to spend money. With the general population earning relatively less, I'm not sure the GDP increase will be that substantial.

It's all going to cause more inflation and associated reduction in purchasing power due to stale wages.

mh8h2 months ago

except that a big chunk of the AI investments is going into buying GPUs that go obsolete much earlier than the 6-8 year time frame.

pjdesno2 months ago

> $8 trillion of CapEx means you need roughly $800 billion of profit just to pay for the interest

That assumes you can just sit back and gather those returns indefinitely. But half of that capital expenditure will be spent on equipment that depreciates in 5 years, so you're jumping on a treadmill that sucks up $800B/yr before you pay a dime of interest.
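
A minimal sketch of that treadmill, assuming the quoted figures ($8T of capex, a ~10% cost of capital implied by the $800B of interest, and half of the capex in hardware written off over 5 years):

  # Interest plus depreciation treadmill, under the assumptions above.
  capex = 8e12
  interest_rate = 0.10        # implied by $800B of interest on $8T
  hardware_share = 0.5        # half of capex in short-lived hardware
  hardware_life_years = 5

  interest_per_year = capex * interest_rate
  depreciation_per_year = capex * hardware_share / hardware_life_years
  total = interest_per_year + depreciation_per_year
  print(f"interest ${interest_per_year / 1e9:.0f}B/yr + "
        f"depreciation ${depreciation_per_year / 1e9:.0f}B/yr = "
        f"${total / 1e12:.1f}T/yr before any actual return")
  # -> $800B + $800B = $1.6T per year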

kenjackson2 months ago

I don't understand the math of how we compute $80b for a gigawatt datacenter. What are the costs in that $80b? I literally don't understand how to get to that number -- I'm not questioning its validity. What percent is power consumption, versus land cost, versus building and infrastructure, versus GPUs, versus people, etc.?

kenjackson2 months ago

Thanks. I should've read this before my last reply.

georgeecollins2 months ago

First, I think it's $80b per 100 GW datacenter. The way you figure that out is that a GPU costs $x and consumes y power. The $x is pretty well known; for example, an H100 costs $25-30k and uses 350-700 watts (that's from Gemini and I didn't check my work). You add an infrastructure cost (i) on top of the GPU cost, but that should be pretty small, like 10% or less.

So a 1-gigawatt data center uses n chips, where y·n = 1 GW, and it costs x·i·n (treating i as a multiplier like 1.1). A worked version is sketched below.

I am not an expert so correct me please!
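
A minimal worked version of that formula (a sketch, using the H100 numbers quoted above as assumptions; real builds also budget power delivery, cooling, and networking, which is part of why the article's figure is higher):

  # cost ~= n * x * i, with n = datacenter watts / watts per GPU
  # Assumptions: H100-class pricing and power draw from the comment
  # above, plus a ~10% infrastructure multiplier on the GPU cost.
  gpu_price = 27_500           # USD, midpoint of the $25-30k range
  gpu_power_watts = 700        # upper end of the 350-700 W range
  infra_multiplier = 1.10      # "10% or less" overhead

  datacenter_watts = 1e9       # 1 GW
  n_chips = datacenter_watts / gpu_power_watts
  cost = n_chips * gpu_price * infra_multiplier
  print(f"~{n_chips / 1e6:.1f}M GPUs, roughly ${cost / 1e9:.0f}B")
  # -> ~1.4M GPUs, roughly $43B for the GPUs plus direct overhead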

kenjackson2 months ago

The article says, "Krishna said that it takes about $80 billion to fill up a one-gigawatt data center."

But thanks for your insight -- I used your basic idea to estimate, and for 1 GW it comes to about $30b just for enough GPUs to pull 1 GW. And of course that doesn't take into account any other costs.

So $80b for a GW datacenter seems high, but it's within a small constant factor.

That said, power seems like a weird metric to use, although I don't know what sort of metric would make sense for AI (e.g., a FLOPS counterpart for AI workloads). I'd expect efficiency to get better and GPU cost to go down over time (???).

UPDATE: Below someone posted an article breaking down the costs. In that article they note that GPUs are about 39% of the cost. Using what I independently computed to be $30b -- at 39% of total costs, my estimate is $77b per GW -- remarkably close to the CEO of IBM. I guess he may know what he's talking about. :-)

coliveira2 months ago

> power seems like a weird metric to use

Because this technology changes so fast, that's the only metric you can control across several data centers. It is also directly connected to the general capacity of a data center, which is limited by the energy available to operate it.

pjdesno2 months ago

To expand on rahimnathwani's comment below - the big capital costs of a data center are land, the building itself, the power distribution and the cooling.

You can get a lot of land for a million bucks, and it doesn't cost all that much to build what's basically a big 2-story warehouse, so the primary capital costs are power and cooling. (in fact, in some older estimates, the capital to build that power+cooling cost more per year than the electricity itself)

My understanding is that although power and cooling infrastructure are long-lived compared to computers, they still depreciate faster than the building, so they dominate costs even more than the raw price would indicate.

The state of the art in power and cooling is basically defined by the cost to feed X MW of computing, where that cost includes both capital and operation, and of course lower is better. That means that at a particular SOTA, and at an appropriate scale for that technology, the cost of the facility is a constant overhead on top of the cost of the equipment it houses. To a rough approximation, of course.

rahimnathwani2 months ago

And cooling capacity.

zozbot2342 months ago

1 GW is not enough, you need at least 1.21 GW before the system begins to learn at a geometric rate and reaches AGI.

scroot2 months ago

As an elder millennial, I just don't know what to say. That a once-in-a-generation allocation of capital should go towards...whatever this all will be, is certainly tragic given the current state of the world and its problems. I can't help but see it as the latest in a lifelong series of baffling, high-stakes decisions of dubious social benefit that have necessarily global consequences.

ayaros2 months ago

I'm a younger millennial. I'm always seeing homeless people in my city and it's an issue that I think about on a daily basis. Couldn't we have spent the money on homeless shelters and food and other things? So many people are in poverty, they can't afford basic necessities. The world is shitty.

Yes, I know it's all capital from VC firms and investment firms and other private sources, but it's still capital. It should be spent on meeting people's basic human needs, not GPU power.

Yeah, the world is shitty, and resources aren't allocated ideally. Must it be so?

ericmcer2 months ago

The last 10 years has seen CA spend more on homelessness than ever before, and more than any other state by a huge margin. The result of that giant expenditure is the problem is worse than ever.

I don't want to get deep in the philosophical weeds around human behavior, techno-optimism, etc., but it is a bit reductive to say "why don't we just give homeless people money".

estearum2 months ago

What else happened in the last 10 years in CA?

Hint: https://fred.stlouisfed.org/series/CASTHPI

trenbologna2 months ago

In CA this issue has to do with Gavin giving that money to his friends who produce very little. Textbook cronyism

mike502 months ago

Spending money is not the solution. Spending money in a way that doesn't go to subcontractors is part of the solution. Building shelters beyond cots in a stadium is part of the solution. Building housing is a large part of actually solving the problem. People have tried just giving out the money, but without a way to convert cash to housing the money doesn't help. Also, studies by people smarter than me suggest that without sufficient supply the money ends up going to landlords and pushing up housing costs anyway.

emodendroket2 months ago

Well I mean, they didn't "just give homeless people money" or just give them homes or any of those things though. I think the issue might be the method and not the very concept of devoting resources to the problem.

Izikiel432 months ago

WA, especially Seattle, has done the same as CA, with the same results.

They shouldn't just enable them, as a lot of homeless people are happy in their situation as long as they get food and drugs; they should force them to get clean and become responsible adults if they want benefits.

armitron2 months ago

[flagged]

SequoiaHope2 months ago

The Sikhs in India run multiple facilities across the country that each can serve 50,000-100,000 free meals a day. It doesn’t even take much in the form of resources, and we could do this in every major city in the US yet we still don’t do it. It’s quite disheartening.

https://youtu.be/5FWWe2U41N8

selimthegrim2 months ago

They didn’t invent it but yes, they have refined it to a high degree.

amluto2 months ago

From what I’ve read, addressing homelessness effectively requires competence more than it requires vast sums of money. Here’s one article:

https://calmatters.org/housing/2023/06/california-homeless-t...

Note that Houston’s approach seems to be largely working. It’s not exactly cheap, but the costs are not even in the same ballpark as AI capital expenses. Also, upzoning doesn’t require public funding at all.

mrguyorama2 months ago

Wasn't Houston's "approach" to buy bus tickets to California from a company that just resold commodity bus tickets, was owned by the governor's friend, and charged 10x market price?

The governor of Texas bragged about sending 100k homeless people to California (spending about $150 million in the process).

>in the Golden State, 439 people are homeless for every 100,000 residents – compared to 81 in the Lone Star State.

If I'm doing my math right, 81 per 100k in a state of 30 million people means 24k homeless people. So the state brags about bussing 100k homeless people to California, and then brags about only having 24k homeless people, and you think it's because they build an extra 100k houses a year?

The same math for California means that their homeless population is 175k. In other words, Texas is claiming to have more than doubled California's homeless population.

Maybe the reason Texas can build twice as many homes a year is because it literally has half the population density?

gowld2 months ago

Houston has less homelessness than California because people at the edge of homelessness prefer to live in California than Houston.

amluto2 months ago

I’m not a person on the edge of homelessness, but I did an extremely quick comparison. California cities near the coast have dramatically better weather, but Houston has rents that are so much lower than big California cities that it’s kind of absurd.

If I had to live outdoors in one of these places, all other thing being equal, I would pick CA for the weather. But if I had trouble affording housing, I think Houston wins by a huge margin.

IAmGraydon2 months ago

The older I get, the more I realize that our choices in life come down to two options: benefit me or benefit others. The first one leads to nearly every trouble we have in the world. The second nearly always leads to happiness, whether directly or indirectly. Our bias as humans has always been toward the first, but our evolution is and will continue to slowly bring us toward the second option. Beyond simple reproduction, this realization is our purpose, in my opinion.

jodrellblank2 months ago

Curiously, that is what I heard moments ago on Tom Campbell's theory of everything:

https://youtu.be/nWWRFA8v6aE?t=2629

https://youtu.be/nWWRFA8v6aE?t=3000

He was a physics grad, did some experiments with out of body experiences, decided the Universe is a simulation for immortal consciousness to experience making choices and dealing with their consequences, and reasoned from there that the purpose of life is to get rid of ego and fear and learn to benefit others instead of ourselves.

Quite how he got from one to the other isn't clear to me, or why it's physics related; the message seems to be a familiar religious one, deal with whatever struggles happen to you and try to be egoless and kind.

GaryBluto2 months ago

> Yes, I know it's all capital from VC firms and investment firms and other private sources, but it's still capital. It should be spent on meeting people's basic human needs, not GPU power.

It's capital that belongs to people and those people can do what they like with the money they earned.

So many great scientific breakthroughs that saved tens of millions of lives would never have happened if you had your way.

pnut2 months ago

Is that true, that it's money that belongs to people?

OpenAI isn't spending $1 trillion in hard-earned cash on data centres; that is funny money from the ocean of financial liquidity sloshing around, seeking alpha.

It also certainly is not a cohort of accredited investors putting their grandchildren's inheritance on the line.

Misaligned incentives (regulations) both create and perpetuate that situation.

saulpw2 months ago

> It's capital that belongs to people and those people can do what they like with the money they earned.

"earned", that may be the case with millionaires, but it is not the case with billionaires. A person can't "earn" a billion dollars. They steal and cheat and destroy competition illegally.

I also take issue with the idea that someone can do whatever they want with their money. That is not true. They are not allowed to corner the market on silver, they aren't allowed to bribe politicians, and they aren't allowed to buy sex from underage girls. These are established laws that are obviously for the unalloyed benefit of society as a whole, but the extremely wealthy have been guilty of all of these things, and statements like yours promote the sentiment that allows them to get away with it.

Finally, "great scientific breakthroughs that saved tens of millions of lives would never have happened if you had your way". No. You might be able to argue that today's advanced computing technology wouldn't have happened without private capital allocation (and that is debatable), but the breakthroughs that saved millions of lives--vaccines, antibiotics, insulin, for example--were not the result of directed private investment.

UtopiaPunk2 months ago

"It's capital that belongs to people and those people..."

That's not a fundamental law of physics. It's how we've decided to arrange our current society, more or less, but it's always up for negotiation. Land used to be understood as a publicly shared resource, but then kings and nobles decided it belonged to them, and they fenced in the commons. The landed gentry became a ruling class because the land "belonged" to them. Then society renegotiated that, and decided that things primarily belonged to the "capitalist" class instead of noblemen.

Even under capitalism, we understand that that ownership is a little squishy. We have taxes. The rich understandably do not like taxes because it reduces their wealth (and Ayn Rand-styled libertarians also do not like taxes of any kind, but they are beyond understanding except to their own kind).

As a counterpoint, I and many others believe that one person or one corporation cannot generate massive amounts of wealth all by themselves. What does it mean to "earn" 10 billion dollars? Does such a person work thousands of times harder or smarter than, say, a plumber or a school teacher? Of course not. They make money because they have money: they hire workers to make things for them that lead to profit, and they pay the workers less than the profit that is earned. Or they rent something that they own. Or they invest that money in something that is expected to earn them a higher return. In any scenario, how is it possible to earn that profit? They do so because they participate in a larger society. Workers are educated in schools, which the employer probably does not pay for in full. Customers and employees travel on infrastructure, maintained by towns and state governments. People live in houses which are built and managed by other parties. The rich are only able to grow wealth because they exist in a larger society. I would argue that it is not only fair, but crucial, that they pay back into the community.

mrguyorama2 months ago

Please tell me which of Penicillin, insulin, the transistor, the discovery and analysis of the electric field, discovery of DNA, invention of mRNA vaccines, discovery of pottery, basket weaving, discovery of radiation, the recognition that citrus fruit or vitamin C prevents and cures scurvy (which we discovered like ten times), the process for creating artificial fertilizers, the creation of steel, domestication of beasts of burden, etc were done through Wealthy Barons or other capital holders funding them.

Many of the above were discovered by people explicitly rejecting profit as an outcome. Most of the above predate modern capitalism. Several were explicitly government funded.

Do you have a single example of a scientific breakthrough that saved tens of millions of lives that was done by capital owners?

mike502 months ago

The transistor was funded by Bell Labs.

AstroBen2 months ago

> Couldn't we have spent the money on homeless shelters and food and other things

I suspect this is a much more complicated issue than just giving them food and shelter. Can money even solve it?

How would you allocate money to end obesity, for instance? It's primarily a behavioral issue, a cultural issue

brokenmachine2 months ago

I guess it's food and exercise.

Healthy food is expensive, do things to make that relatively cheaper and thus more appealing.

Exercise is expensive, do things to make that relatively cheaper and thus more appealing.

Walkable cities are another issue. People shouldn't have to get in their car to go anywhere.

dkural2 months ago

[This comment is USA-centric.] I agree with the idea of making our society better and more equitable - reducing homelessness, hunger, and poverty, especially for our children. However, I think redirecting this at AI datacenter spending is a red herring, and here's why: as a society we give a significant portion of our surplus to government, and we then vote on what the government should spend it on. AI datacenter spending is massive, but if you add it all up, it doesn't cover half of a year's worth of government spending. We need to change our politics to redirect taxation and spending to achieve a better society. Having a private healthcare system that spends twice the amount for the poorest results in the developed world is a policy choice. Spending more than the rest of the world combined on the military is a policy choice. Not increasing the minimum wage so that at least everyone with a full-time job can afford a home is a policy choice (google "working homelessness"). VC is a teeny tiny part of the economy. All of tech is only about 6% of the global economy.

limagnolia2 months ago

You can increase the minimum wage all you want, but if there aren't enough homes in an area for everyone who works full time in that area to have one, you will still have folks who work full time but don't have one. In fact, increasing the minimum wage too much will exacerbate the problem by making it more expensive to build more homes (and maintain those that exist). Though at some point it will fix the problem too, because everyone will move and then there will be plenty of homes for anyone who wants one.

dkural2 months ago

I agree with you 100%! Any additional surplus will be extracted as rents when housing is restricted. I am for passing laws that make it much easier for people to obtain permits to build housing where there is demand. Too much of residential zoning is single-family housing. Texas does a better job at not restricting housing than California, for example. Many towns vote blue and talk the talk, but do not walk the walk.

jkubicek2 months ago

> AI datacenter spending is massive, but if you add it all up, it doesn't cover half of a years worth of government spending.

I didn't check your math here, but if that's true, AI datacenter spending is a few orders of magnitude larger than I assumed. "massive" doesn't even begin to describe it

dkural2 months ago

Global datacenter spending across all categories (ML + everything else) is roughly 0.9-1.2 trillion dollars for the last three years combined. I was initially going to go for "quarter of the federal budget", but picked something I thought was more conservative to account for announced-but-unrealized spending, 2025, etc. I picked 2022 onward for the LLM wave. In reality, solely ML-driven, actual realized-to-date spending is probably about 5% of the federal budget. The big announcements will spread out over the next several years of build-out. Nonetheless, it's large enough to drive GDP growth by a meaningful amount. Not large enough that redirecting it elsewhere will solve our societal problems.

kipchak2 months ago

>We need to change our politics to redirect taxation and spending to achieve a better society.

Unfortunately, I'm not sure there's much on the pie chart to redirect percentage wise. About 60% goes to non-discretionary programs like Social Security and Medicaid, and 13% is interest expense. While "non-discretionary" programs can potentially be cut, doing so is politically toxic and arguably counter to the goal of a better society.

Of the remaining discretionary portion half is programs like veterans benefits, transportation, education, income security and health (in order of size), and half military.

FY2025 spending in total was 3% over FY2024, with interest expense, Social Security, and Medicare having made up most of the increase ($249 billion)[1]; they likely will for the foreseeable future[2], in part due to how many baby boomers are entering their retirement years.

Assuming you cut military spending in half you'd free up only about 6% of federal spending. Moving the needle more than this requires either cutting programs and benefits, improving efficiency of existing spend (like for healthcare) or raising more revenue via taxes or inflation. All of this is potentially possible, but the path of least resistance is probably inflation.

[1] https://bipartisanpolicy.org/report/deficit-tracker/

[2] https://www.crfb.org/blogs/interest-social-security-and-heal...
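
Plugging in the shares above (rounded, and taken from this comment rather than an official budget table), the ~6% figure falls out directly:

    # Rough shares of total federal outlays, per the figures above.
    mandatory = 0.60       # Social Security, Medicare/Medicaid, other non-discretionary
    interest = 0.13        # interest expense
    discretionary = 1.0 - mandatory - interest   # ~0.27
    military = discretionary / 2                 # ~0.135 of total spending
    print(military / 2)    # ~0.07 -> halving military spend frees roughly 6-7% of the budget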

dkural2 months ago

I agree with all of what you're saying.

I think the biggest lever is completely overhauling healthcare. The USA is very inefficient, and for subpar outcomes. In practice, the federal government already pays for the neediest of patients - the elderly, the at-risk children, the poor, and veterans. Whereas insurance rakes in profits from the healthiest working age people. Given aging, and the impossibility of growing faster than the GDP forever, we'll have to deal with this sooner or later. Drug spending, often the boogeyman, is less than 7% of the overall healthcare budget.

There is massive waste in our military spending due to the pork-barrel nature of many contracts. That'd be second big bucket I'd reform.

I think you're also right that inflation will ultimately take care of the budget deficit. The trick is to avoid hyperinflation and punitive interest rates that usually come along for the ride.

I would also encourage migration of highly skilled workers to help pay for an aging population of boomers. Let's increase our taxpayer base!

I am for higher rates of taxation on capital gains over $1.5M or so, that'll also help avoid a stock market bubble to some extent. One can close various loopholes while at it.

I am mostly arguing for policy changes to redistribute more equitably. I would make the "charity" status of colleges commensurate with the amount of financial aid given to students and the absolute cost of tuition, for example. I am against student loan forgiveness for various reasons - it's off topic for this thread, but happy to expand if interested.

GolfPopper2 months ago

The current pattern of resource allocation is a necessary requirement for the existence of the billionaire-class, who put significant effort into making sure it continues.

nine_zeros2 months ago

> but it's still capital. It should be spent on meeting people's basic human needs, not GPU power.

What you have just described is people wanting investment in common society - you see the return on this investment but ultra-capitalistic individuals don't see any returns on this investment because it doesn't benefit them.

In other words, you just asked for higher taxes on the rich that your elected officials could use for your desired investment. And the rich don't want that which is why they spend on lobbying.

UtopiaPunk2 months ago

I don't think it is a coincidence that the areas with the wealthiest people/corporations are the same areas with the most extreme poverty. The details are, of course, complicated, but zooming way way out, the rich literally drain wealth from those around them.

venturecruelty2 months ago

Thanks for pointing this out. Sorry you're getting downvoted. I visited San Francisco about ten years ago, and seeing a homeless person sheltering themselves under a flag or some sort of merch from a tech company really drove home just how bereft of humanity corporate power centers really are.

newfriend2 months ago

Technological advancement is what has pulled billions of people out of poverty.

Giving handouts to layabouts isn't an ideal allocation of resources if we want to progress as a civilization.

QuercusMax2 months ago

Lots of people lose their housing when they lose employment, and then they're stuck and can't get back into housing. A very large percentage of unhoused people are working jobs; they're not all "layabouts".

We know that just straight up giving money to the poorest of the poor results in positive outcomes.

limagnolia2 months ago

"A very large percentage"

Exactly how large are we talking here?

I have known quite a few 'unhoused' folk, and not many that had jobs. Those that do tend to find housing pretty quickly (Granted, my part of the country is probably different from your part, but I am interested in stats from any region).

nativeit2 months ago

The proportion of people you write off as “layabouts” is always conveniently ambiguous…of the number of unemployed/underemployed, how many are you suggesting are simply too lazy to work for a living?

estearum2 months ago

Technological advancements and cultural advancements that spread the benefits more broadly than naturally occurs in an industrialized economy. That is what pulled people out of poverty.

If you want to see what unfettered technological advancement does, you can read stories from the Gilded Age.

The cotton gin dramatically increased human enslavement.

The sewing machine decreased quality of life for seamstresses.

> During the shirtmakers' strike, one of the shirtmakers testified that she worked eleven hours in the shop and four at home, and had never in the best of times made over six dollars a week. Another stated that she worked from 4 o’clock in the morning to 11 at night. These girls had to find their own thread and pay for their own machines out of their wages.

These were children, by the way. Living perpetually at the brink of starvation from the day they were born until the day they died, but working like dogs all the while.

LightBug12 months ago

It's not unthinkable that one of those "layabouts" could have been the next Steve Jobs under different circumstances ...

People are our first, best resource. Closely followed by technology. You've lost sight of that.

johnrob2 months ago

Invest in making food/shelter cheaper?

droopyEyelids2 months ago

What if some of the homeless people are children or people who could lead normal lives but found themselves in dire circumstances?

Some of us believe that keeping children out of poverty may be an investment in the human capital of a country.

sfink2 months ago

> Technological advancement is what has pulled billions of people out of poverty.

I agree with this. Perhaps that's what is driving the current billionaire class to say "never again!" and making sure that they capture all the value instead of letting any of it slip away and make it into the unwashed undeserving hands of lesser beings.

Chatbots actually can bring a lot of benefit to society at large. As in, they have the raw capability to. (I can't speak to whether it's worth the cost.) But that's not going to improve poverty this time around, because it's magnifying the disparities in wealth distribution and the haves aren't showing any brand new willingness to give anything up in order to even things out.

> Giving handouts to layabouts isn't an ideal allocation of resources if we want to progress as a civilization.

I agree with this too. Neither is giving handouts to billionaires (or the not quite as eye-wateringly wealthy class). However, giving handouts to struggling people who will improve their circumstances is a very good allocation of resources if we want to progress as a civilization. We haven't figured out any foolproof way of ensuring such money doesn't fall into the hands of layabouts or billionaires, but that's not an adequate reason to not do it at all. Perfect is the enemy of the good.

Some of those "layabouts" physically cannot do anything with it other than spending it on drugs, and that's an example of a set of people who we should endeavor to not give handouts to. (At least, not ones that can be easily exchanged for drugs.) Some of those billionaires similarly have no mental ability of ever using that money in a way that benefits anyone. (Including themselves; they're past the point that the numbers in their bank accounts have any effect on their lives.) That hasn't seemed to stop us from allowing things to continue in a way that funnels massive quantities of money to them.

It is a choice. If people en masse were really and truly bothered by this, we have more than enough mechanisms to change things. Those mechanisms are being rapidly dismantled, but we are nowhere near the point where figurative pitchforks and torches are ineffective.

_DeadFred_2 months ago

In the USA, cowboys were homeless guys. You know that, right? Like, they had no home and slept outside. Many were pretty big layabouts. Yet they are a pretty big part of our foundation myth, and we don't say 'man, they just should have died'.

Can I go be a cowboy? Can I just go sleep outside? Maybe work a few minimally paying cattle-run jobs a year? No? If society won't allow me to just exist outside, then society has an obligation to make sure I have a place to lay my head.

randomNumber72 months ago

If you are not willing to fight for your rights you will lose them.

reactordev2 months ago

I threw in the towel in April.

It's clear we are Wile E. Coyote running in the air already past the cliff and we haven't fallen yet.

saulpw2 months ago

What does it mean to throw in the towel, in your case? Divesting from the stock market? Moving to a hobby farm? Giving up on humanity?

reactordev2 months ago

Any dream of owning a home, having retirement, even a career after a couple years when it’s clear I’m over the hump. I’m trying to squeeze as much as I can before that happens and squirrel it away so at least I can have a van down by a river.

jstummbillig2 months ago

I don't know what to do with this take.

We need an order of magnitude more clean productivity in the world so that everyone can live a life that is at least as good as what fairly normal people in the west currently enjoy.

Anyone who thinks this can be fixed with current Musk money is simply not getting it: If we liquidated all of that, it would buy a dinner for everyone in the world (and then, of course, that would be it, because the companies that he owns would stop functioning).

We are simply, obviously, not good enough at producing stuff in a sustainable way (or: at all), and we owe it to every human being alive to take every chance to make this happen QUICKLY, because we are paying with extremely shitty human years, and they are not ours.

Bring on the AI, and let's make it work for everyone – and, believe me, if this is not to be to the benefit of roughly everyone, I am ready to fuck shit up. But if the past is any indication, we are okay at improving the lives of everyone when productivity increases. I don't know why this time would be any different.

If the way to make good lives for all 8 billion of us must lead to more Musks because, apparently, we are too dumb to do collectivization in any sensible way, I really don't care.

randomNumber72 months ago

> I don't know why this time would be any different.

This time there is the potential to replace human workers. In the past it only made them more productive.

AlexandrB2 months ago

I think that's a distinction without difference. If one person can do the job of 10 thanks to automation, what happens to the other 9 who were doing that job before?

randomNumber72 months ago

You can also produce 10x the goods with the same number of workers, or let the other 9 work in other jobs.

The difference is that previously there were still plenty of (low-skill) jobs where automation didn't work: pizza delivery, taxi driving, lots of office jobs with repetitive tasks, etc.

Soon there will be nothing left for the average Joe, as the machine will be better at all tasks he could perform.

PrairieFire2 months ago

Agreed, the capital could be put to better use. However, I believe the alternative is that this capital wouldn't otherwise have been put to work in ways that allow it to leak to the populace at large. For some of the big investors in AI infrastructure, this is cash that previously was, and likely otherwise would have been, put toward stock buybacks. For many of the big investors pumping cash in, these are funds deploying the wealth of the mega rich that, again, would otherwise have been deployed in ways that wouldn't leak down to the many who are now earning from it via this AI infrastructure boom (datacenter materials, land acquisition, energy infrastructure, building trades, etc.).

amanaplanacanal2 months ago

It could have, though. Higher taxes on the rich, spend it on social programs.

ayaros2 months ago

Why is this so horrible? Put more resources in the hands of the average person. They will get pumped right back into the economy. If people have money to spend, they can buy more things, including goods and services from gigantic tax-dodging mega-corporations.

Gigantic mega-corporations do enjoy increased growth and higher sales, don't they? Or am I mistaken?

thmsths2 months ago

Because the entire western culture has shifted to instant gratification. Yes, what you suggest would most likely lead to increased business eventually. But they want better numbers this quarter, so they resort to cheap tricks like financial engineering and layoffs to get an immediate boost.

phkahler2 months ago

Let's pay down the debt before increasing social programs. You know, save the country first. If a penny saved is a penny earned then everyone -rich or poor- is looking for a handout.

amanaplanacanal2 months ago

The only person who has come close to balancing the federal budget was Clinton. But Republicans still try to position themselves as the party of fiscal responsibility.

If the voters can't even figure out why the debt keeps going up, I think you are fighting a losing battle.

Atheros2 months ago

> likely would have otherwise been put toward stock buybacks

Stock buybacks from whom? When stock gets bought the money doesn't disappear into thin air; the same cash is now in someone else's hands. Those people would then want to invest it in something, and then we're back to square one.

You assert that if not for AI, wealth wouldn't have been spent on materials, land, trades, etc. But I don't think you have any reason to think this. Money is just an abstraction. People would have necessarily done something with their land, labor, and skills. It isn't like there isn't unmet demand for things like houses or train tunnels or new-fangled types of aircraft or countless other things. Instead it's being spent on GPUs.

PrairieFire2 months ago

Totally agree that the money doesn’t vanish. My point isn’t “buybacks literally destroy capital,” it’s about how that capital tends to get redeployed and by whom.

Buybacks concentrate cash in the hands of existing shareholders, which are already disproportionately wealthy and already heavily allocated to financial assets. A big chunk of that cash just gets recycled into more financial claims (index funds, private equity, secondary shares, etc), not into large, lumpy, real world capex that employs a bunch of electricians, heavy equipment operators, lineworkers, land surveyors, etc. AI infra does that. Even if the ultimate economic owner is the same class of people, the path the money takes is different: it has to go through chip fabs, power projects, network buildouts, construction crews, land acquisition, permitting, and so on. That’s the “leakage” I was pointing at.

To be more precise: I’m not claiming “no one would ever build anything else”, I’m saying given the current incentive structure, the realistic counterfactual for a lot of this megacap tech cash is more financialization (buybacks, M&A, sitting on balance sheets) rather than “let’s go fund housing, transit tunnels, or new aircraft.”

Atheros2 months ago

I really don't think any of that is true; it's just popular rhetoric.

For example: "Buybacks concentrate cash in the hands of existing shareholders" is obviously false: the shareholders (via the company) did have cash and now they don't. The cash is distributed to the market. The quoted statement is precisely backwards.

> A big chunk of that cash just gets recycled

That doesn't mean anything.

> more financial claims (index funds, private equity, secondary shares, etc)

And do they sit on it? No, of course not. They invest it in things. Real actual things.

> buybacks

Already discussed

> M&A

If they use cash to pay for a merger, then the former owners now have cash that they will reinvest.

> balance sheets

Money on a balance sheet is actually money sitting in J.P. Morgan or whoever. Via fractional reserve lending, J.P. Morgan lends that money to businesses and home owners and real actual houses (or whatever) get built with it.

The counterfactual for AI spending really is other real actual hard spending.

slashdave2 months ago

Well, at least this doesn't involve death and suffering, like the old-fashioned way to jump-start an economy by starting a global war.

7222aafdcf68cfe2 months ago

When the US sells out Europe to Russia, do you think the Russians will stop? That global war might be with us within a decade.

brokenmachine2 months ago

...yet

skippyboxedhero2 months ago

Can you imagine if the US wasn't so unbelievably far ahead of everyone else?

I am sure the goat herders in rural regions of Pakistan will think themselves lucky when they see the terrible sight of shareholder value being wantonly destroyed by speculative investments that enhance the long-term capital base of the US economy. What an uncivilized society.

anthomtb2 months ago

As a fellow elder millennial I agree with your sentiment.

But I don't see the mechanics of how it would work. Rewind to October 2022. How, exactly, does the money* invested in AI since that time get redirected towards whatever issues you find more pressing?

*I have some doubts about the headline numbers

arisAlexis2 months ago

Yes, this capital allocation is a once-in-a-lifetime opportunity to create AGI that will solve diseases and poverty.

edhelas2 months ago

</sarcasm>

arisAlexis2 months ago

This is literally the view of Demis Hassabis, Sergey Brin, Dario Amodei, and others. Are you seriously implying they are trolling us?

aeve8902 months ago

/s

brokenmachine2 months ago

We have 8.3 billion examples of general intelligence alive on the planet right now.

Surely an artificial one in a data center, costing trillions and beholden to shareholders, will solve all society's issues!

arisAlexis2 months ago

I suggest you read Amodei's post called "Machines of Loving Grace". It will change your worldview (probably).

greenie_beans2 months ago

as a counterpoint, you should read this essay with the same title: https://www.clunyjournal.com/p/machines-of-loving-grace

Animats2 months ago

How much has actually been spent on AI data centers vs. amounts committed or talked about? That is, if construction slows down sharply, what's total spend?

michaelbuckbee2 months ago

It's not going to pay off for everybody; this is a land grab for who will control this central aspect of AI inference.

fnord772 months ago

> this is a land grab

is it though? Unlike fiber, current GPUs will be obsolete in 5-10 years.

cmrdporcupine2 months ago

The investors in these companies and all this infrastructure are not so much concerned with whether any specific company pays off with profits, necessarily.

They are gambling instead that these investments pay out in a different way: by shattering high labour costs for intellectual labour and de-skilling our profession (and others like it) -- "proletarianising" it in the 19th century sense.

Thereby increasing profits across the whole sector and breaking the bargaining power (and outsized political power, as well) of upper middle class technology workers.

Put another way this is an economy wide investment in a manner similar to early 20th century mass factory industrialization. It's not expected that today's big investments are tomorrow's winners, but nobody wants to be left behind in the transformation, and lots of political and economic power is highly interested in the idea of automating away the remnants of the Alvin Toffler "Information Economy" fantasy.

winddude2 months ago

$8T is the high end of the McKinsey estimate of $4-8T by 2030. That includes non-AI data-centre IT, AI data-centres, and power infrastructure build-out, as well as real estate for data centres.

Not all of it would be debt. Google, Meta, Microsoft and AWS have massive profit to fund their build outs. Power infrastructure will be funded by govts and tax dollars.

oblio2 months ago

There is mounting evidence that even places like Meta are increasing their leverage (debt load) to fund this scale out. They're also starting to do accounting tricks like longer depreciation for assets which degrade quickly, such as GPUs (all the big clouds increasing their hardware depreciation from 2-3-4 years to 6), which makes their financial numbers look better but might not mean that all that hardware is still usable at production levels 6 years from now.

They're all starting to strain under all this AI pressure, even with their mega profits.

winddude2 months ago

I've read/heard that cloud providers like AWS started extending amortization periods in 2020 or so.

nnurmanov2 months ago

I agree. Here is my thinking. What if LLM providers made short answers the default (for example, up to 200 tokens, unless the user explicitly enables “verbose mode”)? Add prompt caching and route simple queries to smaller models. Result: a 70%+ reduction in energy consumption without loss of quality. Current cost: 3–5 Wh per request. At ChatGPT scale, this is $50–100 million per year in electricity (at U.S. rates).

In short mode: 0.3–0.5 Wh per request. That is $5–10 million per year — savings of up to 90%, or 10–15 TWh globally with mass adoption. This is equivalent to the power supply of an entire country — without the risk of blackouts.

This is not rocket science: just a toggle in the interface and, I believe, minor changes to the system prompt. It increases margins, reduces emissions, and frees up network resources for real innovation.

And what if the EU or California enforces such a mode? That would greatly impact data center economics.
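
A rough sketch of that back-of-envelope math (the per-request energy figures are the ones above; the electricity price and request volume are assumptions chosen to land in the same range):

    # Back-of-envelope: electricity cost of verbose vs. short LLM answers.
    # Assumptions (not published numbers): ~$0.12/kWh and ~400M requests/day.
    PRICE_PER_KWH = 0.12
    REQUESTS_PER_DAY = 400e6

    def yearly_electricity_cost(wh_per_request):
        kwh_per_year = wh_per_request * REQUESTS_PER_DAY * 365 / 1000
        return kwh_per_year * PRICE_PER_KWH

    verbose = yearly_electricity_cost(4.0)  # midpoint of 3-5 Wh per request
    short = yearly_electricity_cost(0.4)    # midpoint of 0.3-0.5 Wh per request
    print(f"verbose: ~${verbose / 1e6:.0f}M/yr")  # ~$70M
    print(f"short:   ~${short / 1e6:.0f}M/yr")    # ~$7M
    print(f"savings: {1 - short / verbose:.0%}")  # ~90%

The ~90% savings depends only on the ratio of the two per-request figures, so it holds even if the absolute cost estimates above are off.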

cryptophreak2 months ago

Can you explain why a low-hanging optimization that would reduce costs by 90% without reducing perceived value hasn't been implemented?

lmm2 months ago

> Can you explain why a low-hanging optimization that would reduce costs by 90% without reducing perceived value hasn't been implemented?

Because the industry is running on VC funny-money where there is nothing to be gained by reducing costs.

(A similar feature was included in GPT-5 a couple of weeks ago actually, which probably says something about where we are in the cycle)

balder19912 months ago

Not sure that’s even possible with ChatGPT embedding your chat history in the prompts to try to give more personal answers.

randomNumber72 months ago

Dunning-Kruger

EagnaIonat2 months ago

Good enough models can already run on laptops.

nrhrjrjrjtntbt2 months ago

Context Tokens want a word...

qwertyuiop_2 months ago

The question no one seems to be answering is what the EOL will be for these newer GPUs being churned out by NVIDIA. What % of annual capital expenditures is GPU refresh? Will they be perpetually replaced as NVIDIA comes up with newer architectures and the AI companies chase the proverbial lure?

scotty792 months ago

I think the key to replacing is power efficiency. If Nvidia is not able to make GPUs that are cheaper to run than the previous generation, there's no point in replacing the previous generation. Time doesn't matter.

zobzu2 months ago

Also IBM: we are fully out of the AI race, btw. Also IBM: we're just an offshoring company now anyway.

So yeah.

pezgrande2 months ago

"now". They've been doing it since forever.

1vuio0pswjnm72 months ago

One thing we saw with the dot-com bust is how certain individuals were able to cash in on the failures, e.g., low cost hardware, domain names, etc. (NB. prices may exceed $2)

Perhaps people are already thinking about how they can cash in on the floor space and HVAC systems that will be left in the wake of failed "AI" hype

blibble2 months ago

I'm looking forward to buying my own slightly used 5 million square ft data centre in Texas for $1

jsheard2 months ago

Tired: homelabbers bringing decommissioned datacenter gear home.

Wired: homelabbers moving into decommissioned datacenters.

reverius422 months ago

More of a labhome than a homelab at that point.

viccis2 months ago

I miss First Saturday in Dallas where we honest to god did buy decommissioned datacenter gear out of the back of a van.

renegade-otter2 months ago

"Loft for rent, 50,000 sq ft in a new datacenter, roof access, superb wiring and air conditioning, direct access to fiber backbone."

nvader2 months ago

Never worn.

spacecadet2 months ago

Colo!

WhyOhWhyQ2 months ago

You're out of luck because I am willing to pay at least $2.

blibble2 months ago

there'll be plenty for everyone!

trhway2 months ago

In TX? In the Russian blogosphere it is a standard staple that Trump is rushing a Ukrainian peace deal to be able to move on to a set of mega-projects with Russia - oil and gas in the Arctic, and data centers in the Russian North-West, where electricity and cooling are plentiful and cheap.

cheema332 months ago

Build trillion dollar data center infrastructure in Russia. What could possibly go wrong?

Ask the owners of the leased airplanes who have been unsuccessfully trying to get their planes back for about 3 years.

ekropotin2 months ago

Sounds like a kremlebot's narrative; however, the motivation behind pushing it is unclear to me. Also, why not build DCs in Alaska instead?

trhway2 months ago

Actually, it is more of the opposition's narrative, probably a way to explain such a pro-Russian position from Trump.

I think any such data center project is doomed to ultimately fail, and any serious investment will be for me a sign of the bubble peak exuberance and irrationality.

oblio2 months ago

What could go wrong with placing critical infrastructure on the soil of a strategic rival?

voidfunc2 months ago

Oil and Gas in The Arctic I can see, but data centers in Russia... good luck with that.

octoberfranklin2 months ago

Equinix certainly is.

1vuio0pswjnm72 months ago

From the article:

""It's my view that there's no way you're going to get a return on that, because $8 trillion of capex means you need roughly $800 billion of profit just to pay for the interest," he said."

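As a sanity check, the quoted figure implies roughly a 10% annual cost of capital; a minimal sketch (the 30% margin used to back out a revenue number is an assumption, not from the article):

    capex = 8e12         # $8T of AI capex, per the quote
    interest = 800e9     # annual profit needed just to cover interest
    print(interest / capex)   # 0.10 -> roughly a 10% annual cost of capital

    assumed_margin = 0.30     # assumption: operating margin on AI revenue
    print(interest / assumed_margin)  # ~$2.7T/yr of revenue just to clear that bar
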
bitexploder2 months ago

Right, THEY can't, but cloud providers potentially can. And there are probably other uses for everything that isn't GPU/TPU for the Googles of the world. They are out way less than IBM, which cannot monetize the space or build data centers efficiently like AWS and Google can.

pseudosavant2 months ago

The dotcom bust killed companies, not the Internet. AI will be no different. Most players won’t make it, but the tech will endure and expand.

codingdave2 months ago

Or endure and contract.

The key difference between AI and the initial growth of the web is that the more use cases to which people applied the web, the more people wanted of it. AI is the opposite - people love LLM-based chatbots. But it is being pushed into many other use cases where it just doesn't work as well. Or works well, but people don't want AI-generated deliverables. Or leaders are trying to push non-deterministic products into deterministic processes. Or tech folks are jumping through massive hoops to get the results they want because without doing so, it just doesn't work.

Basically, if a product manager kept pushing features the way AI is being pushed -- without PMF, without profit -- that PM would be fired.

This probably all sounds anti-AI, but it is not. I believe AI has a place in our industry. But it needs to be applied correctly, where it does well. Those use cases will not be universal, so I repeat my initial prediction. It will endure and contract.

bigstrat20032 months ago

The difference is that the Internet was actually useful technology, whereas AI is not (so far at least).

pseudosavant2 months ago

In the last month I personally used (as in, it was useful) AI for this:

- LLM-powered transcription and translation made it so I could have a long conversation with my airport driver in Vietnam.

- Helped me turn 5-10x as many ideas into working code and usable tools as I used to.

- Nano Banana restored dozens of cherished family photos for a Christmas gift for my parents.

- Helped me correctly fix a ton of nuanced aria/accessibility issues in a production app.

- Taught/explained a million things to me: the difference between an aneurysm and a stroke, why the rise of DX12/Vulkan gaming engines killed off nVidia SLI, political/economic/social parallels between the 1920s and 2020s, etc...

Maybe everyone isn't using it yet, but that doesn't mean it isn't useful. Way too many people find real use every day in a lot of AI products. Just because MS Office Copilot sucks (and it does), doesn't mean it is all useless.

7thaccount2 months ago

I think you're exaggerating a little, but aren't entirely wrong. The Internet has completely changed daily life for most of humanity. AI can mean a lot of things, but a lot of it is blown way out of proportion. I find LLMs useful to help me rephrase a sentence or explain some kind of topic, but it pales in comparison to email and web browsers, YouTube, and things like blogs.

ProjectArcturis2 months ago

More use cases for AI than blockchain so far.

ekropotin2 months ago

Can’t wait for all this cheap ddr5 memory and GPUs

jmspring2 months ago

I was looking at my Newegg orders recently. 7/18/2023 - 64GB (2 x 32GB) 288-Pin PC RAM DDR5 6000 (PC5 48000) --> $260. Now, $750+.

ekropotin2 months ago

Don't even get me started on this. I've recently been shopping on eBay for some DDR4 memory. You may think: who'd need this dated stuff besides me? Yet 16GB of 3200MHz is at least $60, which is effectively what you paid per 16GB for DDR5 6000. Crazy, right?

tempest_2 months ago

I have 4 32gb sticks of DDR5 6400 in my machine.

The RAM in my machine being worth more than the graphics card (7900XTX) was not on my bingo card I can tell you that.

3eb7988a16632 months ago

Holy cow. I have 96GB of DDR5 I bought at start of year for a machine which never materialized. Might have to flip it.

raw_anon_11112 months ago

GPUs have a very high failure rate…

matt-p2 months ago

To be honest, AI datacentres would need a rip-and-replace to get back to normal datacentre density, at least on the cooling and power systems.

Maybe useful for some kind of manufacturing or industrial process.

alphabetag6752 months ago

Cheap compute would be a boon for science research.

scj2 months ago

It'll likely be used to mine bitcoin instead.

kerabatsos2 months ago

Why do you believe it will fail? Because some companies will not be profitable?

rzwitserloot2 months ago

It wasn't an 'it' it was a 'some'. Some of these companies that are investing massively in data centers will fail.

Right now essentially none have 'failed' in the sense of 'bankrupt with no recovery' (Chapter 7). They haven't run out of runway yet, and the equity markets are still so eager, even a bad proposition that includes the word 'AI!' is likely to be able to cut some sort of deal for more funds.

But that won't last. Some companies will fail. Probably sufficient failures that the companies that are successful won't be able to meaningfully counteract the bursts of sudden supply of AI related gear.

That's all the comment you are replying to is implying.

hkt2 months ago

Given the amounts being raised and spent, one imagines that the ROI will be appalling unless the pesky humans learn to live on cents a day, or the world economy grows by double digits every year for a few decades.

marcosdumay2 months ago

If the entire world economy starts to depend on those companies, they would pay off with "startup level" ROI. And by "startup level" I mean the amounts bullish people say startup funds can pay (10x to 100x), not a bootstrapped unicorn.

ulfw2 months ago

I mean that is how capitalism works, no?

lawlessone2 months ago

>cash in on the floor space and HVAC systems that will be left in the wake of failed "AI" hype

I'd worry surveillance companies might.

jollyllama2 months ago

There is a way of viewing the whole thing as a ruse to fast-track power generation through permitting.

cagenut2 months ago

you could stuff the racks full of server-rack batteries (lfp now, na-ion maybe in a decade) and monetize the space and the high capacity grid connect

most of the hvac would sit idle tho

PunchyHamster2 months ago

The constant cost of people and power won't make it all that much cheaper than current prices to put a server into someone else's rack.

jmclnx2 months ago

I guess he is looking directly at IBM's cash cow, the mainframe business.

But I think he is correct; we will see. I still believe AI will not give the CEOs what they really want: no labor, or very cheap labor.

pharos922 months ago

I find it disturbing how long people wait to accept basic truths, as if they need permission to think or believe a particular outcome will occur.

It was quite obvious that AI was hype from the get-go. An expensive solution looking for a problem.

The cost of hardware. The impact on hardware and supply chains. The impact to electricity prices and the need to scale up grid and generation capacity. The overall cost to society and impact on the economy. And that's without considering the basic philosophical questions "what is cognition?" and "do we understand the preconditions for it?"

All I know is that the consumer and general voting population lose no matter the outcome. The oligarchs, banking, government and tech-lords will be protected. We will pay the price whether it succeeds or fails.

My personal experience of AI has been poor. Hallucinations, huge inconsistencies in results.

If your day job exists within an arbitrary non-productive linguistic domain, great tool. Image and video generation? Meh. Statistical and data-set analysis. Average.

wordpad2 months ago

Just like the .com bust from companies going online, there is hype, but there is also real value.

Even slow non-tech legacy industry companies are deploying chatbots across every department - HR, operations, IT, customer support. All leadership are already planning to cut 50-90% of staff from most departments over the next decade. It matters, because these initiatives are receiving internal funding which will precipitate out to AI companies to deploy this tech and to scale it.

SchemaLoad2 months ago

The "legacy" industry companies are not immune from hype. Some of those AI initiatives will provide some value, but most of them seem like complete flops. Trying to deploy a solution without an idea of what the problem or product is yet.

wordpad2 months ago

Right, but this is consumer side hype.

Even if AI is vaporware (mostly hype and little value), it will take a while for the hype to fizzle out, and by then AI might start to deliver on its promise.

They got a long runway.

fugalfervor2 months ago

> Even slow non-tech legacy industry companies are deploying chatbots across every department - HR, operations, IT, customer support

Yes, and customers fucking hate it. They want to talk to a person on the damn phone.

rldjbpin2 months ago

The calculation, while simple, does make rational sense if all things remain equal.

Regardless, given their past business decisions, this statement can be true for them without thinking about the bottom line.

The entire premise is enterprise compute amped to the max. The sheer expense of the hardware and the intensive resource requirements truly challenge what modern data centers can do today.

boxedemp2 months ago

Nobody really knows the future. What were originally consumer graphics expansion cards turned out to be useful for delivering more compute than traditional CPUs.

Now that compute is being used for transformers and machine learning, but we really don't know what it'll be used for in 10 years.

It might all be for naught, or maybe transformers will become more useful, or maybe something else.

'no way' is very absolute. Unlikely, perhaps.

cheema332 months ago

> What were originally consumer graphics expansion cards turned out useful in delivering more compute than traditional CPUs.

Graphics cards were relatively inexpensive. When one got old, you tossed it out and moved on to the new hotness.

Here when you have spent $1 trillion on AI graphics cards and a new hotness comes around that renders your current hardware obsolete, what do you do?

Either people are failing to do simple math here or are expecting, nay hoping, that trillions of $$$ in value can be extracted out of the current hardware, before the new hotness comes along.

This would be a bad bet even if the likes of OpenAI were actually making money today. It is an exceptionally bad bet when they are losing money on everything they sell, by a lot. And the state of competition is such that they cannot raise prices. Nobody has a real moat. AI has become a commodity. And competition is only getting stronger with each passing day.

randomNumber72 months ago

You can likely still play the hottest games with the best graphics on an H200 in 5 years.

ojr2 months ago

As long as the dollar remains the reserve currency of the world and the US retains its hegemony, a lot of the finances will work themselves out. The only threat of the US empire crumbling is losing a major war or extreme civil unrest, and that threat is astronomically low. The US is orders of magnitude stronger than the Roman Empire; I don't think people realize the scale or the control.

rootnod32 months ago

Famous last words of empires. I doubt that, if current trends continue, the US dollar will in any sense still be thought of as the reserve currency.

7222aafdcf68cfe2 months ago

Gradually, then suddenly. Best not to underestimate the extent to which the USA has lost trust in the rest of the world, and how actively people and organisations are working to derisk by disengaging. Of course that will neither be easy nor particularly fast, but I'm not certain it can be stopped at this point.

dragonwriter2 months ago

> The US is orders of magnitude stronger than the Roman Empire

This would be trivially true even if the US was currently in its death throes (which there is plenty of evidence that the US-as-empire might be, even if the US-as-polity is not), as the Roman Empire fell quite a while ago.

ojr2 months ago

Being pedantic doesn't change the fact of what I said. If I said Obama is a better leader than Julius Caesar, would you reply "well actually, Julius Caesar is dead"?

zahlman2 months ago

I'm pretty sure

> The US is orders of magnitude stronger than the Roman Empire [was]

was the intended reading.

randomNumber72 months ago

Pride comes before a fall.

LunaSea2 months ago

You should look at the depreciation of the value of the dollar this year.

koliber2 months ago

NOTE: People pointed out that it's $800 billion to cover interest, not $8 billion, as I wrote below. My mistake. That adds 2 more zeroes to all figures, which makes it a lot more crazy. Original comment below...

$8 billion / US adult population of 270 million comes out to about $3000 per adult per year. That's only to cover the cost of interest, let alone other costs and profits.

That sounds crazy, but let's think about it...

- How much does an average American spend on a car and car-related expenses? If AI becomes as big as "cars", then this number is not as nuts.

- These firms will target the global market, not US only, so number of adults is 20x, and the average required spend per adult per year becomes $150.

- Let's say only about 1/3 of the world's adult population is poised to take advantage of paid tools enabled by AI. The total spend per targetable adult per year becomes closer to $500.

- The $8 billion in interest is on the total investment by all AI firms. Not all companies will succeed. Let's say that the one that will succeed will spend 1/4 of that. So that's $2 billion per year, and roughly $125 per adult per year.

- Triple that number to factor in other costs and profits and that company needs to get $500 in sales per targetable adult per year.

People spend more than that on each of these: smoking, booze, cars, TV. If AI can penetrate as deep as the above things did, it's not as crazy of an investment as it looks. It's one hell of a bet though.
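
For reference, a minimal sketch of the same arithmetic using the $800 billion figure from the note above (population counts are rounded, and the market-share and margin factors are this comment's assumptions, not data):

    interest = 800e9                 # annual interest on roughly $8T of capex
    us_adults = 270e6
    world_adults = 20 * us_adults    # rough global adult population

    print(interest / us_adults)            # ~$3,000 per US adult per year
    print(interest / world_adults)         # ~$150 per adult worldwide
    print(interest / (world_adults / 3))   # ~$450 per "targetable" adult

    winner_share = interest / 4            # one winner carrying 1/4 of the spend
    print(3 * winner_share / (world_adults / 3))  # ~$330/yr in sales per targetable adult (rounded up to ~$500 above)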

rc12 months ago

Nit: it's $800 billion in interest; your comment starts with $8 billion

koliber2 months ago

right. My goof. That adds two more zeroes across all the math. More crazy, but I think in the realm of "maybe, if we squint hard." But my eyes are hurting from squinting that hard, so I agree that it's just crazy.

writebetterc2 months ago

You're saying $8 billion to cover interest, another commenter said 80, but the actual article says ""$8 trillion of CapEx means you need roughly $800 billion of profit just to pay for the interest". Eight HUNDRED billion. Where does the eight come from, from 90% of these companies failing to make a return? If a few AI companies survive and thrive (which tbh, sure, why not?) then we're still gonna fall face down into concrete.

koliber2 months ago

right. My goof. That adds two more zeroes across all the math. More crazy, but I think in the realm of "maybe, if we squint hard."

writebetterc2 months ago

I think it's the realm of maybe in Silicon Valley. That's 5000 dollars. Look at this statement:

> Let's say only about 1/3 of the world's adult population is poised to take advantage of paid tools enabled by AI

2/3 of the world's adult population is between 15 and 65 (roughly: 'working age'), so that's 50% of the working world that is capable of using AI with those numbers. India's GDP per capita is 2750USD, and now the price tag is even higher than 5k.

I don't know how to say this well, so I'll just blurt it out: I feel like I'm being quite aggressive, but I don't blame you or expect you to defend your statements or anything, though of course I'll read what you've got to say.

stevetron2 months ago

From the company that said the world market size for computers was about five. And the company that gave us OS/2.

AlexandrB2 months ago

I see a lot of people attacking the messenger but very few addressing the basic logic that you need 800B+ in profit just to pay the interest on some of these investments.

Pointing out IBM's mixed history would be valid if they were making some complex, intricate, hard-to-verify case for why AI won't be profitable. But the case being made seems like really simple math. A lot of the counterarguments to these economic problems have the form "this time it's different" - something you hear in every bubble from .com to 2008.

eitally2 months ago

At some point, I wonder if any of the big guys have considered becoming grid operators. The vision Google had for community fiber (Google Fiber, which mostly fizzled out due to regulatory hurdles) could be somewhat paralleled with the idea of operating a regional electrical grid.

djdjsjejb2 months ago

That's like Boeing telling us we shouldn't build rockets.

devmor2 months ago

I suppose it depends on your definition of "pay off".

It will pay off for the people investing in it, when the US government inevitably bails them out. There is a reason Zuckerberg, Huang, etc are so keen on attending White House dinners.

It certainly wont pay off for the American public.

nashashmi2 months ago

Don't worry. The same servers will be used for other computing purposes. And maybe that will be profitable. Maybe it will be beneficial to others. But this cycle of investment and loss is a version of redistribution of wealth. Some benefit.

The banks and lenders always benefit.

coliveira2 months ago

That would be true for general-purpose servers. But what they want is lots of special-purpose AI chips. While it is still possible to use those for something else, it's very different from having a generic server farm.

danans2 months ago

LAN party at the end of the universe?

scotty792 months ago

I can't imagine everybody suddenly leaving AI behind like a broken toy and taking all the special-purpose AI chips offline. AI serves millions of people every day. It's here to stay; even if it doesn't get any better than it is, it already brings immense value to users. It will keep being worth something.

coliveira2 months ago

Yes, it will be worth something. But when the value is less than its cost, you have a loss - and nobody runs a service at a loss.

turtleyacht2 months ago

Data centers house hardware, and it's a land grab for compute. What actually runs post-AI depends on its owners. A glut of processing might be spent reverse-engineering efficient heuristics versus the "magic box."

A bitter summer.

westurner2 months ago

Ctrl-F this thread for terms like: cost, margin

Is transistor density cost still the limit?

Cost model, Pricing model

What about more recyclable chips made out of carbon?

What else would solve for e.g. energy efficiency, thermal inefficiency, depreciation, and ewaste costs?

maxglute2 months ago

How long can AI GPUs stretch? An optimistic 10 years, and we're still looking at $400B+ of profit to cover interest. In terms of depreciating assets, silicon is closer to tulips than to rail or fiber.

jstummbillig2 months ago

Well, at least it tells us something about the sentiment on HN that a lame insight built on self-admitted "napkin math" and an obvious conflict of interest garners 400 points.

atleastoptimal2 months ago

>IBM CEO

might as well ask a magic 8 ball for more prescient tech takes

IAmBroom2 months ago

It honestly reminds me of the opinion pieces put out by encyclopedia companies about the many ways Wikipedia was inferior.

I read an article that pretended to objectively compare them. It noted that Wikipedia (at that time) had more articles, but not way more... A brief "sampling test" suggested EB was marginally more accurate than Wikipedia - marginally!

The article concluded that EB was superior. Which is what the author was paid to conclude, obviously. "This free tool is marginally better in some ways, and slightly worse in others, than this expensive tool - so fork over your cash!"

bluGill2 months ago

This is likely correct overall, but it can still pay off in specific cases. However, those are not blind investments; they are targeted, with a planned business model.

kopirgan2 months ago

What's the legal status of all this AI code?! Will it be likely that someone whose code was lifted as part of "learning" can sue?!

brookst2 months ago

Anyone can sue for anything.

I’m not sure we want a world where anyone who learns from existing code is committing a tort.

xgulfie2 months ago

GenAI isn't a person

wmf2 months ago

$8T may be too big of an estimate. Sure you can take OpenAI's $1.4T and multiply it by N but the other labs do not spend as much as OpenAI.

kopirgan2 months ago

It's also strange that while already-built solar systems wait years for grid capacity, this much extra energy use is being planned...

simianwords2 months ago

If it is so obvious that it won’t pay off, why is every company investing in it? What alpha do you have that they don’t?

jacquesm2 months ago

That's a good question. During the .com boom everybody was investing in 'the internet' or at least in 'the web'. And lots of those companies went bust, quite a few spectacularly so. Since then everything that was promised and a lot more has been realized. Even so, a lot of those initial schemes were harebrained at best at the time and there is a fair chance that we will look in a similar way at the current AI offerings in 30 years time.

Short term is always disappointing, long term usually overperforms. Think back to the first person making a working transistor and what came of that.

SideburnsOfDoom2 months ago

> And lots of those companies went bust, quite a few spectacularly so.

pets.com "selling dogfood on the internet" is the major example of the web boom then bust. (1)

But today, I can get dog food, cat food, other pet supplies with my weekly "online order" grocery delivery. Or I can get them from the big river megaretailer. I have a weekly delivery of coffee beans from a niche online supplier, and it usually comes with flyers for products like a beer or wine subscription or artisanal high-meat cat or dog foods.

So the idea of "selling dogfood on the internet" is now pervasive not extinct, the inflated expectation that went bust was that this niche was a billion-dollar idea and not a commodity where brand, efficiencies of scale and execution matter more.

1) https://en.wikipedia.org/wiki/Pets.com#History

simianwords2 months ago

I still don’t get it. What at a personal level is making Sam Altman make a suboptimal choice for himself if it is so obvious it won’t work out for him?

Ekaros2 months ago

Marketing and name recognition? Used in other areas to generate wealth. Even steering OpenAI or its partners to acquire the right companies could be profitable for him.

jacquesm2 months ago

On a personal level it will work out for him just fine. All he has to do is siphon off a fraction of that money to himself and/or an entity that he controls.

He's like Elon Musk in that respect: always doubling the bet on the next round. It is a real-life Martingale these guys are playing, with society on the hook for the downside.

https://en.wikipedia.org/wiki/Martingale_(betting_system)

6thbit2 months ago

“I think there is a world market for maybe five computers.”

Let’s hope IBM keeps their streak of bad predictions.

Kon5ole2 months ago

I think this old quote can come around to being accurate in a way, if you consider that from the user's perspective every cloud service is like one system. Aws, Azure, Google cloud...how many will there be when the dust settles? ;-)

mistercheph2 months ago

Please, can we get more analysis from the geniuses at International Business Machines in 2025?

RobRivera2 months ago

What kind of rapport does the CEO of IBM expect the general technology workforce to have with them?

vcryan2 months ago

Okay, that is what IBM has to say. Are there any credible opinions we should know about?

littlecranky672 months ago

There are so many CEOs, tech experts, financial analysts and famous investors who say we are in an AI bubble; even AI-invested companies say that about themselves. My latest favorite "We are in an AI bubble" comment comes from Linus Torvalds himself, in the video with Linus from Linus Tech Tips [0].

[0]: https://www.youtube.com/watch?v=mfv0V1SxbNA&t=2063s

m1012 months ago

To all those people who think IBM doesn't know anything, calculate this ratio:

(# of companies that are 100+ years old) / (# of companies that have ever existed over those 100+ years)

Then you will see why IBM is pretty special and probably knows what they are doing.

mattlondon2 months ago

Pretty special? They were making guns and selling computation tools to the Nazis for a bunch of those years.

I think they now trade mostly on legacy maintenance contracts (e.g. for mainframes) for customers like banks who are terrified of rocking their technology-stack boat, and on selling off-shore consultants (which is at SIGNIFICANT risk of disruption - why would you pay IBM squillions to do some contract IT work when we have AI code agents? Probably why the CEO is out doing interviews saying you can't trust AI to be around forever).

I have not really seen anything from IBM that signals they are anything other than just milking their legacy - what have they done that is new or innovative in the past say 10 or 20 years?

I'm a former IBMer from 15-odd years ago, and it was obvious even then that it was a total dinosaur on a downward spiral, a place where innovation happened somewhere else.

LtWorf2 months ago

I don't think selling things to nazis was worse than selling things to israel today.

mattlondon2 months ago

Yep, both equally bad and morally repugnant.

bytesandbits2 months ago

Mind you, IBM makes $7B+ from keeping old-school enterprises hooked on 30-plus-year-old tech like z/OS and COBOL and their own super-outdated stack. Their AI division is frankly embarrassing. Of course they would say that. IBM is one of the most conservative, anti-progress leeches in the entire tech industry. I am glad they are missing out big time on the AI gold rush. To me, if anything, this is a green signal.

Ekaros2 months ago

How much of Nvidia's price is based on a 5-year replacement cycle? If that stops or slows with new demand, could it also affect things? Not that 5 years doesn't seem like a very long horizon now.

Aperocky2 months ago

LLMs at their current utility do not justify this spending, but the outside chance that someone will hit AGI is likely worth it in expectation.

matt_s2 months ago

There is something to be said about what the ROI is for normal (i.e. non-AI/tech) companies using AI. AI can help automate things; robots have been replacing manufacturing jobs for decades, but the ROI on that is easier to see and count: fewer humans in the factory, etc. There seem to be a lot of exaggerated claims being made these days about AI, and the AI companies have only begun to raise rates; those won't go down.

The AI bubble will burst when normal companies start missing their revenue/profit goals and have to answer investor relations calls about it.

m3kw92 months ago

Says the guy missing out on it

rmoriz2 months ago

The second buyer will make truckloads of money. Remember the data center and fiber network liquidations of 2001 onward: smart investors collected the overcapacity, and after a couple of years the money printer worked. This time it will be the same, only the single-purpose hardware (LLM-specific GPUs) will probably end up in a landfill.

nialse2 months ago

The game is getting OpenAI to owe you as much money as you can. When they fail to pay back, you own OpenAI.

rmoriz2 months ago

You are talking about the circular investments in the segment? Yes, but even assuming NVIDIA can get cheap access to the IP and products of failing AI unicorns through contracts, that does not mean the LLM business can be operated profitably by them. Models are like fresh food: they start to rot from the training cutoff date and lose value. The process of re-training a model will always be very expensive.

singpolyma32 months ago

Are they LLM specific?

rmoriz2 months ago

Deep down, technically probably not, but they are optimized for this workload and business model. I doubt that once the AI bubble bursts another business model will be viable. While data centers can be downsized or partially shut down until demand picks up again, high-end hardware is just losing money by the second.

matt-p2 months ago

Unless we get AGI.

BoorishBears2 months ago

Consumers will eat it all. AI is very good at producing engaging content, and it's getting better by the day: it won't be the AGI we wanted, but maybe it's the AGI we've earned.

wtcactus2 months ago

The same IBM that lost all races in the last 40 years? That IBM?

liampulles2 months ago

Hypothetically speaking, if the AI hype bubble pops (or just returns to normalcy), would it be profitable to retarget the compute towards some kind of crypto mining? If so, could we expect the cryptocurrency supply to soar and the price to tank in short succession?

ta90002 months ago

“It doesn’t even have a keyboard!” energy.

Jimmc4142 months ago

"It is difficult to get a man to understand something when his salary depends on his not understanding it." - Upton Sinclair

ninjaa2 months ago

What does Jim Cramer have to say?

accra4rx2 months ago

I mean, why not? They have to put something down to make their quantum efforts show better ROI.

jonfw2 months ago

"Company that has been left in the dust claims it was a good business decision"

thenthenthen2 months ago

the mining industry enters the chat

oxqbldpxo2 months ago

FB playbook. Act (spend) then say sorry.

weerfeegleem2 months ago

One more data center and we will reach AGI, we promise, just give us more money.

badmonster2 months ago

[flagged]

martinald2 months ago

I disagree on that and covered a lot of it in this blog (sorry for the plug!) https://martinalderson.com/posts/are-we-really-repeating-the...

skippyboxedhero2 months ago

100% of technical innovations have had the same pattern. The same thing happens every time because this is the only way the system can work: excess is required because there is some uncertainty, lots of companies are designing strategies to fill this gap, and if this gap didn't exist then there would be no investment (as happens in Europe).

Also, demand wasn't over-estimated in the 2000s. This is all ex-post reasoning: you use data from 2002 to say, well, this ended up being wrong. Companies were perfectly aware that no-one was using this stuff... do you think that telecoms companies in all these countries just had no idea who was using their products? This is the kind of thing you see journalists write after the event to attribute some kind of rationality and meaning to it; it isn't that complicated.

There was uncertainty about how things would shake out; if companies ended up not participating, then CEOs would lose their jobs and someone else would do it. Telecoms companies who missed out on the boom bought shares in other telecoms companies because there was no other way to stay ahead of the news and announce that they were doing things.

This financial cycle also worked in reverse twenty years later: in some countries, telecoms companies were so scarred that they refused to participate in building out fibre networks, so they lost share and then ended up doing more irrational things. Again, there was uncertainty here: incumbents couldn't raise money from shareholders whom they had bankrupted on fibre 15 years earlier, they were 100% aware that demand was outstripping supply, and this created opportunities for competitors. Rationality and logic run up against the hard constraints of needing to maintain a dividend yield and the execs' share option packages.

Humans do not change, markets do not change, it is the same every time. What people are really interested in is the timing but no-one knows that either (again, that is why the massive cycle of irrationality happens)...but that won't change the outcome. There is no calculation you can make to know more, particularly as in the short-term companies are able to control their financial results. It will end the same way it ended every time before, who knows when but it always ends the same way...humans are still human.

lmm2 months ago

> Also, demand wasn't over-estimated in the 2000s. This is all ex-post reasoning you use data from 2002 to say...well, this ended up being wrong.

Well, the estimate was higher than the reality, so by definition it was over-estimated. They built out as if the tech boom was going to go on forever, and of course it didn't. You can argue that they made the best estimates they could with the information available, but ultimately it's still true that their estimates were wrong.

appleiigs2 months ago

Your blog article stopped at token generation... you need to continue on to revenue per token. Then go even further: the revenue for the AI company is a cost for the AI customer. Where is the AI customer going to get incremental profits to cover the cost of AI?

For short searches, the revenue per token is zero. The next step is $20 per month. For coding it's $100 per month. With the competition between Gemini, Grok, ChatGPT... it's not going higher. Maybe it goes lower since it's part of Google's playbook to give away things for free.
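To put rough numbers on that (the $20/month is the figure above; the token volumes are assumed usage levels, not measurements):

    # Implied revenue per million tokens for a flat-rate subscription.
    # The $20/month price comes from the comment; the usage levels are assumptions.
    def revenue_per_million_tokens(monthly_price_usd, tokens_per_month):
        return monthly_price_usd * 1_000_000 / tokens_per_month

    for tokens in (1_000_000, 10_000_000, 100_000_000):
        rpm = revenue_per_million_tokens(20, tokens)
        print(f"{tokens:>11,} tokens/month -> ${rpm:.2f} per 1M tokens")

The heavier the usage, the lower the effective revenue per token under a flat subscription, which is exactly where the squeeze shows up once serving costs are counted.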

htk2 months ago

Great article, thank you for writing and sharing it!

ambicapter2 months ago

Fiber seems way easier to get long-term value out of than GPUs, though. How many workloads today other than AI justify massive GPU deployments?

AlexandrB2 months ago

Would be funny if all the stagnant GPUs finally brought game streaming to the mainstream.

roncesvalles2 months ago

They discuss it in the podcast. Laid fiber is different because you can charge rent for it essentially forever. It seems some people swooped in when it crashed and now own a perpetual money machine.

parapatelsukh2 months ago

The spending will be more than paid off, since the taxpayer is the lender of last resort. There are too many funny names among the investors/creditors, a lot of mountains in Germany and similar, ya know.

spogbiper2 months ago

[flagged]

tomhow2 months ago

We detached this comment from https://news.ycombinator.com/item?id=46135548 and marked it off topic.

filoeleven2 months ago

Please don't pollute the comments with noise. This adds nothing to the conversation.

sombragris2 months ago

"yeah, there's no way spending in those data centers will pay off. However, let me show you this little trinket which runs z/OS and which is exactly what you need for these kinds of workloads. You can subscribe to it for the low introductory price of..."

m00dy2 months ago

He will get assassinated or fired.

BenFranklin1002 months ago

A lot of you guys in the AI industry are going to lose your jobs. LLM and prompt ‘engineering’ experts won’t be able to score an AI job that pays as well as being a barista.

bmadduma2 months ago

No wonder he is saying that: they lost the AI game, and no top researcher wants to work for IBM. They spent years developing Watson, and it is dead. I believe this is a company that should no longer exist.

BearOso2 months ago

Maybe it's the opposite. IBM spent years on the technology. Watson used neural networks, just not nearly as large. Perhaps they foresaw that it wouldn't scale or that it would plateau.

verdverm2 months ago

The IBM CEO is steering a broken ship and hasn't improved its course; he's not someone whose words you should take seriously.

1. They missed the AI wave (they hired me to teach Watson law, only to lay me off 5 weeks later; one cause of the serious talent issues over there)

2. They bought most of their data centers (by acquiring companies); they have no idea how to build and operate one, not at the scale the "competitors" are operating at

nabla92 months ago

Everyone should read his arguments carefully. Ponder them in silence and accept or reject them based on their strength.

scarmig2 months ago

His argument follows almost directly, and trivially, from his central premise: a 0% or 1% chance of reaching AGI.

Yeah, if you assume technology will stagnate over the next decade and AGI is essentially impossible, these investments will not be profitable. Sam Altman himself wouldn't dispute that. But it's a controversial premise, and one there's no particular reason to think the... CEO of IBM would have any insight into.
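A toy expected-value calculation makes that dependence explicit; every number below (payoff multiple, salvage value) is a made-up assumption for illustration only:

    # Toy model: expected net return of a big AI capex bet as a function of P(AGI).
    # All inputs are invented for illustration; only the shape of the argument matters.
    def expected_net_return(p_agi, payoff_if_agi, payoff_otherwise, capex):
        return p_agi * payoff_if_agi + (1 - p_agi) * payoff_otherwise - capex

    CAPEX = 1.0     # normalize the investment to 1 unit
    ORDINARY = 0.4  # assumed recoverable value if AI stays near today's utility
    JACKPOT = 50.0  # assumed payoff if the AGI bet lands

    for p in (0.01, 0.05, 0.20):
        print(f"P(AGI) = {p:.2f} -> expected net return = {expected_net_return(p, JACKPOT, ORDINARY, CAPEX):+.2f}")

At a 1% chance the bet is underwater; nudge the probability up and it flips, which is why the real disagreement is about the premise, not the arithmetic.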

skeeter20202 months ago

Then it seems like neither Sam Altman (pro) nor IBM (proxy con) has credible, or even really interesting or insightful, evidence, theories... even suggestions for what's likely to happen? i.e. we should stop listening to all of them?

scarmig2 months ago

Agreed. It's essentially a giant gamble with a big payoff, and they're both talking their books.

PunchyHamster2 months ago

It's a very reasonable claim to make, but yes, the average denizen of the peanut gallery can spot that this is a bubble from a mile away; it doesn't need the "insight" of napkin math done by some CEO who isn't even in the industry.

Though he's probably not too happy that they sold the server business to Lenovo; they could at least have earned something selling shovels.

verdverm2 months ago

we don't need AGI to use all that compute

we need businesses that are willing to pay for AI/compute at prices where both sides are making money

I for one could 10x my AI usage if the results on my side pan out. Spending $100 on AI today has ROI; will 10x that still have ROI for me in a couple of years? Probably. I expect agentic teams to increase in capability and take on more of my work. Then the question is whether I can turn that increased productivity into more revenue (>$1000/month; one more client would cover this and then some).

nyc_data_geek12 months ago

IBM can be a hot mess, and the CEO may not be wrong about this. These things are not mutually exclusive.

malux852 months ago

Sorry that happened to you; I have been there too.

When a company is hiring and laying off like that, it's a serious red flag. The one that did that to me is dead now.

verdverm2 months ago

It was nearly 10 years ago and changed the course of my career for the better

make lemonade as they say!

duxup2 months ago

Is his math wrong?

verdverm2 months ago

Are the numbers he's claiming accurate? They seem like big numbers pulled out of the air, certainly much larger than the numbers we've actually seen committed to (let alone deployed).

observationist2 months ago

IBM CEO has sour grapes.

IBM's HPC products were enterprise-oriented slop that banked on their reputation, and the ROI torched their credibility when compute costs started getting taken seriously. Watson and other products got smeared into Kafkaesque, arbitrary branding for other product suites, and they were nearly all painful garbage - mobile device management standing out as a particularly grotesque system to use. Now IBM lacks any legitimate competitive edge in any of the bajillion markets they tried to target, has no credibility in any of their former flagship domains, and nearly every one of their products is hot garbage that costs too much, often by orders of magnitude, compared to similar functionality you can get from open source or even free software offered and serviced by other companies. They blew a ton of money on HPC before there was any legitimate reason to do so. Watson on Jeopardy! was probably the last legitimately impressive thing they did, and all of their tech and expertise has been outclassed since.