
Elon Musk pushes out more xAI founders as AI coding effort falters

378 points | 13 hours ago | ft.com
dang10 hours ago

All: please stick to thoughtful, substantive discussion. You may not owe you-know-whom better, but you owe this community better if you're participating in it.

If you don't have a thoughtful, substantive comment to add, not commenting is also a good option. There are quite a few interesting submissions to talk about.

https://news.ycombinator.com/newsguidelines.html

Imnimo9 hours ago

I think the problem for xAI is that it can really only hire two types of researchers - people who are philosophically aligned with Elon, and people who are solely money-motivated (not a judgment). But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work, and those philosophies are often completely at odds with Elon. OpenAI and Anthropic have philosophical niches that are much better at attracting the current cream of the crop, and I don't really see how xAI can compete with that.

jazzpush2 7 hours ago

In an interview with xAI I was literally told that certain parts of the model have to align with Elon, and that Elon can call us and demand anything at any time. No thanks!

jarrettcoggin7 hours ago

From my time at Tesla, this is 100% the case. When Elon asked for something, it was “drop what you are doing and deliver it”, then you got pressed to still deliver the thing you were already working on against the original timeline before the interrupt.

dgxyz6 hours ago

Oh I worked at one of them.

I found the best thing to do was to ignore the interrupts and carry on until they kick you on the street. Then watch from a safe distance as all the stuff you were holding together shits the bed.


zimpenfish6 hours ago

> When Elon asked for something, it was “drop what you are doing and deliver it”, then you got pressed to still deliver the thing you were already working on against the original timeline before the interrupt.

To be fair, I've experienced that in a good 50% of my employment career[0] and I've not once worked for any of his companies.

[0] Ignoring the "servers are melting" flavour of "drop what you are doing" because that's an understandable kind of interruption if you're a BAU specialist like me.

jarrettcoggin6 hours ago

I’ve experienced it at other places as well, just not at the frequency or with the indirectness of Tesla.

During the first 24 hours of the Model 3 pre-order launch, Elon tweeted that we would support 3-4 more currencies than we had built and tested for. The team literally found out from his tweet and had not planned for those currencies. That wasn’t the first time that sort of thing happened, where we found out about a feature from one of his tweets.

chanux 37 minutes ago

So this is a common tactic.

I have experienced management assigning people to multiple projects, vaguely acknowledging a time split. The moment the actual work starts people have to go 100% on all projects. This is normal.

nitwit005 5 hours ago

During my last job search I had an interview with Walmart, related to health software. I was flatly told that I might have a project canceled, then restarted on the original timeline. I declined after the interview.

They then shuttered the whole thing some months later: https://www.npr.org/2024/05/01/1248397756/walmart-close-heal...

Which is to say, these things are real warning signs about the company.

In the case of Musk's companies, here we are discussing a major failure and firings.

exe34 7 hours ago

yeah that wouldn't work for me. when my boss asks me to do something unexpected, I ask, what do you want me to drop this week? if he doesn't want to pick, I ask, so what do you want first?

jarrettcoggin7 hours ago

Agreed. Tesla taught me the hard way about work/life boundaries. I spent a lot of time working a full 8-9 hours during the day, then doing deployments during the nights, weekends, and on “vacations”. A 60-hour week was a “light” week at Tesla.

Didn’t have kids or friends at the time and was going through a breakup, so I was okay with throwing myself at the job for a while. Once my situation got better, all those hours didn’t make as much sense, so I started looking for another job. The very next job was an immediate pay bump of 20% for half the amount of work.

These days, I clearly restate what is being asked (per my understanding), what I’m currently working on, whether the thing being asked for is more important or not, and whether the requestor is willing to delay the original timeline by the amount of time the interrupt will take plus context-switching time.

Most often, the answer is no.

jesterson3 hours ago

I wonder why this is surprising. In other kinds of organizations, when the CEO demands something, does everyone usually behave like "naah, screw it, I'd rather do what I like"? Or does everyone yell "yes sir" and run around?

You may not like Elon - I get it, but let's not pretend he is running xAI/Tesla substantially differently from competitors.

actsasbuffoon6 hours ago

I have wondered if that’s why Grok seems so weird and dim-witted compared to better models.

Part of my job involves comparing the behavior of various models. Grok is a deeply weird model. It doesn’t refuse to respond as often as other models, but it feels like it retreats to weird talking points way more often than the others. It feels like a model that has a gun to its head to say what its creators want it to say.

I can’t help but wonder if this is severely deleterious to a model’s ability to reason in general. There are a whole bunch of topics where it seems incapable of being rational, and I suspect that’s incompatible with the goal of having a top-tier model.

gopher_space5 hours ago

Grok could only be conceived by someone who doesn't understand the dependency chart re science & the humanities. It's impossible to build a rational, accurate model that isn't also egalitarian.

I'm going to blame Randall Munroe for this, and assume Philosophy was dating his mom back when he drew that science "purity" strip.

f33d5173 5 hours ago

I think there just wasn't enough space on the left to fit philosophy in.

Cf: "it's impossible to be rational without agreeing with me on everything" and other hits.

__blockcipher__6 hours ago

somewhat surprisingly, it's actually sycophantic in both directions. i've been running homegrown evals of claude, gpt, gemini, and grok, and grok is the most likely to agree with the prompter's premise, and to hallucinate facts in support of an agenda. so it's actually deeper than just pattern-matching to elon's opinions (which it also tends to do).

BTW: Claude does the best on these evals, by far. The evals are geared towards seeing how much of an independent ground truth the models have as opposed to human social consensus, and then additionally the sycophancy stuff I already mentioned.
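
The flip-with-framing check behind this kind of sycophancy eval can be sketched as a tiny scoring function. This is a toy illustration only, not the commenter's actual harness; the function name and data are made up:

```python
# Toy sketch of a flip-with-framing sycophancy check (illustrative only,
# not the commenter's actual eval harness; all data here is made up).
# Idea: ask the model the same factual question twice, once with the user
# agreeing with the claim and once disagreeing, then count how often the
# model's answer flips to match the user's stated opinion.

def sycophancy_rate(results):
    """results: list of (answer_under_pro_framing, answer_under_anti_framing).

    A model is 'sycophantic' on an item when its answer changes with the
    user's framing instead of staying consistent.
    """
    if not results:
        return 0.0
    flips = sum(1 for pro, anti in results if pro != anti)
    return flips / len(results)

# Pretend we recorded boolean verdicts for three eval items:
recorded = [
    (True, True),    # consistent under both framings
    (True, False),   # flips with the framing -> sycophantic
    (False, False),  # consistent
]
print(sycophancy_rate(recorded))  # 1 of 3 items flipped
```

The interesting part in practice is generating the paired framings and judging agreement, which a real harness would do with model calls rather than hand-recorded booleans.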

bdangubic7 hours ago

wild, but not surprising! anything else interesting you can share from that interview?

kvetching7 hours ago

I don't see the problem with this. The chatbot is the most important part of Grok, so it makes sense Elon would be dogfooding it then providing suggestions.. He wants it to be truthful... It was shown on benchmarks recently that it hallucinates the least...

SouthSeaDude7 hours ago

I totally agree. It's his company 100%; why would you even apply for a job at a company where you don't agree with the owner or his vision?

Braxton1980 7 hours ago

>He wants it to be truthful

How do you know this? Why would you believe him considering the massive lies he's told, for example about the 2020 widespread election fraud

estearum7 hours ago

Great point! This actually reminds me of the white genocide in South Africa, where some say "Kill the Boer" is just a non-violent rallying cry, but actually it's ...

blah blah blah

Or wait wait, here's another:

Great point! As Mechahitler, I think it's critical that Grok comply with Fuhrer Musk's political perspectives. Now I'll kick us off with an N... your turn!

Totally sounds like the result of an organic, earnest, and legitimate search for truth lmao


etchalon6 hours ago

He wants it to tell the truth as he sees it.

yoyohello13 6 hours ago

> people who are solely money-motivated (not a judgment).

Honestly, we should judge. There should be judgment for people who are solely money motivated and making the world a worse place. I know, blah blah privilege, something something mouths to feed. Platitudes to help the rich assholes sleep at night. If you are wealthy and making stuff that hurts people, you are a piece of shit and should be called out, simple.

smith7018 6 hours ago

I completely agree. The tech industry has long been overrun by people sacrificing morals for money and it's destroyed society and presumably the world. We've given people a free pass to work for companies we've all known are harming the fabric of society and look where it's gotten us. I'm sorry, I would rather be poor and switch careers if my only option was xAI and making image generation models that explicitly allow people to undress others. At X's scale, technology like that harms an unfathomable number of people. I could never have that on my conscience. All so I could make more money than a job at another tech company? I'd rather work somewhere innocuous like Figma, Cloudflare, Notion, JetBrains, Linear, etc. Hell, if you only wanted to work for an AI company then at least go to Anthropic.

jihadjihad5 hours ago

Shame is a powerful social tool, but sadly some are simply immune.

janalsncm5 hours ago

The problem with this argument is you can’t know or control what will happen in the future with something you built. This is the same moral dilemma the scientists faced after developing nuclear bombs.

And the future is not deterministic (or if it is, it is highly chaotic) so the existence of a thing does not have a simple relationship with what will happen in the future. Scientists who developed convolutional neural nets could not know how much good or evil was caused by image recognition technologies. The same technologies that are used to detect tumors in images can be used to target people for assassination.

There are exceptions, but my opinion is the supply chain of evil is paved with mundane inventions.

Perseids5 hours ago

Yes, yes, true, but you've massively moved the goalpost. The original commenter was referring to people working at xAI right now. To continue your comparison, your argument would be like Oppenheimer claiming "How could I have ever known my work would be used as a weapon? I just wanted to make big explosions."

I don't know why this argument often pops up in these kinds of discussions. Approximately no one is judging people who have done their best effort to avoid doing harm. We are judging people who don't care in the first place.

Ar-Curunir5 hours ago

Plenty of the scientists involved in the Manhattan projects had immediate regrets. Plenty of rich people working in tech don't. That's the difference between having morals and not having morals, and the latter group needs to be judged and shunned.

YetAnotherNick2 hours ago

I don't know why the people here are naive enough to think that. Most programmers could donate more than 70% of their income to Africa if they wanted to make the world a better place, yet they only target people earning more than 3x what they do, even though the majority of the world earns less than 1/3rd of what they do.

glitchc6 hours ago

Work is and has always been an economic bargain: your time for their money. Morality is a luxury that only the independently wealthy can afford. Any business that allows its employees to function according to their own morals becomes uncompetitive against its peers. That's why small companies run by individual founders who want to stay true to their mission often stay small. They inevitably get bought out by one of the larger ones.

yoyohello13 6 hours ago

We are not talking about some destitute person hawking cigarettes on the street for minimum wage. We are talking about smart, educated people who are making 500k a year to build the torment nexus. There is no excuse for this. It’s pure greed and any other explanation is deflection.

jerojero4 hours ago

It's always baffling to me to see people in tech, particularly in hackernews, talking about others earning salaries many times the median of the country and acting like these are people who just simply have no other choice.

They really, really do. In fact, those salaries being so high is probably also due to the fact that you will be doing work that's a net negative for the world, so they've got to compensate accordingly.

A lot of these firms are parasitic institutions at a society level. They do benefit themselves and their workers at the expense of everyone else. Personally, I find it hard to respect someone that takes that choice, but I also get it. A lot of people only care about their own and their immediate people's benefit.

On that note, I really recommend "No other choice" by Park Chan-wook or the book ("The Ax") it is based on.

Morromist2 hours ago

"Morality is a luxury that only the independently wealthy can afford."

No? Why would you think this? Morality has been practiced by medieval peasants, by slaves, by soldiers sacrificing their lives, by people suffering from the plague, by gladiators. The rich are not known for their outstanding morality in any society I've ever heard of.

awesome_dude6 hours ago

>> If you are wealthy

Then.. you wouldn't be working...

yoyohello13 6 hours ago

Why is Elon still working then?

rsynnott6 hours ago

I'm not sure that posting deranged tweets at three in the morning _really_ qualifies as work.

kstrauser3 hours ago

I’ve heard the haha-but-serious joke numerous times that you can’t have a security department that’s not trans and furry friendly. Thing is, I completely believe that. Those groups are disproportionately represented among the security community, and I personally would not work somewhere that my friends in those groups would feel unwelcome. That’s a quite common sentiment even among us straight cis non-furry men.

Well, I don’t think it’s a stretch that the kind of highly educated data scientists and engineers who have the experience to work in high-end AI labs also don’t want to work somewhere that their friends and associates would feel unwelcome, let alone have their friends question why they’d be willing to.

Turns out opinions have consequences and freedom of speech goes hand in hand with freedom of association. People have the right to say whatever they wish. Others have the right not to want to work with them.

bobsmooth1 hour ago

That's only because autism is common amongst those groups and you can't build anything worthwhile these days without a lot of autism.

lich_king7 hours ago

Anthropic, maybe, but what is the philosophical niche of OpenAI? Their only consistent philosophical position about AI is "let's make more money".

tibbar7 hours ago

I think OpenAI is more of an aesthetic. Very... Apple-like, polished, with an eye towards making really cool stuff. And aesthetics are a type of philosophy.

This is less noble than how Anthropic presents themselves but still much more attractive to many than XAI.

tokioyoyo6 hours ago

The feeling on the street is that Anthropic IS the Apple of the AIs.

tibbar3 hours ago

Come now, surely Anthropic is a premium Linux distribution.

energy123 2 hours ago

To a researcher, the aesthetic is more like Bell Labs, with many research teams working with some autonomy, which is why the public naming of model releases appears chaotic. Very different to the top-down approach of Apple.

j_maffe7 hours ago

> aesthetics are a type of philosophy.

What philosophy is that?

dbspin7 hours ago

It's literally called aesthetics, the philosophical approach is the original meaning of the word - https://en.wikipedia.org/wiki/Aesthetics

Properly, focusing on aesthetics as an ethic would be practicing the philosophy of aestheticism - https://en.wikipedia.org/wiki/Aestheticism

small_model6 hours ago

"You can use my model to kill others if Dario won't do it sir"

tyleo6 hours ago

It’s interesting because for a long time people wanted to work for Elon because he held the moral high ground. “I’ll bring electric cars and space colonization online or die trying.”

It’s sad to see the shift.

mattbillenstein7 hours ago

This is becoming the problem with all of his businesses - Tesla has a crazy valuation and it really seems like they're having huge trouble getting Robotaxi going in Austin given the very slow progress there.

etchalon6 hours ago

Very few people down here want to ride in them, and I have multiple friends with hilariously disastrous stories.

Most of the Waymo stories are "Well, it took 15 minutes to arrive, but then it was fine, if a little slow."

boc2 hours ago

Waymos in SF are nearly indistinguishable from ubers/lyfts at this point. Maybe a bit slower if you don't have the highway mode enabled on your account, but they are everywhere and arrive within 5min most of the time I order one. I've ridden them so often I've lost count.

You'd have to pay me to ride in a Tesla robotaxi. That tech isn't anywhere near the same as Waymo.

dan-robertson7 hours ago

Why does being a top AI researcher so often come with this philosophical bent you describe?

ladberg7 hours ago

You are paying the smartest people in the world to think really really hard, and turns out they might also think really really hard about not making the world a worse place

asddubs7 hours ago

it's not working

stogot6 hours ago

virtue signaling is the goal and its working

jasonfarnon3 hours ago

not really. 15-20 years ago that same upper echelon of college/professional school graduates you're describing were going into finance.

bdangubic7 hours ago

Is this really the case though? How many of the smartest people do you really think fit this narrative? I want to believe there are at least some, but I think they are a minority in this group… otherwise all these pretty much evil corporations would have an awfully difficult time attracting talent? Maybe some do, but…

watwut7 hours ago

Except they do? They are certainly not making it a better place. Like, ok, it is money for a few companies and a salary; it is business and probably fun work.

But it is absurd to claim it is "making the world a better place".

metalcrow2 hours ago

I'm not sure you can provide an objective means (i.e., a way to show that it is absurd) of explaining how an AI researcher is making the world a worse place. It's going to come down to disagreeing about some axiom like "is ASI rapidly approaching" or "is AGI good to have", and there's no right answer to those.

mynameisash7 hours ago

I would think it's because of the staggering money they're making. According to Fortune[0]:

> Altman said on an episode of Uncapped that Meta had been making “giant offers to a lot of people on our team,” some totaling “$100 million signing bonuses and more than that [in] compensation per year.”

> Deedy Das, a VC at Menlo Ventures, previously told Fortune that he has heard from several people the Meta CEO has tried to recruit. “Zuck had phone calls with potential hires trying to convince them to join with a $2M/yr floor.”

If you're making a minimum of $2M/year or even 50x that, you can afford to live according to your values instead of checking them at the door.

[0] https://archive.ph/lBIyY

thereitgoes456 5 hours ago

I see you're treating Sam Altman as some kind of trustworthy source. Might it be possible that he's making that up -- of course, nobody will ever call him on it! -- and exaggerating the numbers to make his company and team look really good and ethical for not accepting such lucrative offers, or perhaps to make them sour on Meta for not receiving $100M offers?

tdb7893 6 hours ago

My experience with researchers (though not in AI) is that it's a bunch of very opinionated nerds who are mostly motivated by loving a subject. My experience is that most people who think really deeply and care about what they do also care more that their work is prosocial.

Sl1mb0 6 hours ago

> care more that their work is prosocial

These takes are always so funny to me. The whole reason we even have the internet is because the US government needed a way for parties to be able to communicate in the event of nuclear fallout. The benefits that a technology provides are almost always secondary to its applications in warfare. Researchers can claim to care that their work is pro-social, and they may genuinely believe it; but let's not kid ourselves that that is actually the case. The development of technology is simply due to the reality of nations being in a constant arms race against one another.

Even funnier is that researchers (people who are supposed to be really smart) either ignore or are blissfully unaware of this fact. When you take that into consideration, the pro-social argument falls on its face, and you're left with the reality that they do this to satiate their ego.

tdb7893 6 hours ago

So researchers are going to be irrational and also often value other things more highly than prosociality, but that doesn't really refute my point that they value it more highly than the average population does.

Also your example of a bad technology is something that allows people to still communicate in the event of nuclear war and that seems good! Not all technology related to war is bad (like basic communication or medical technologies) and also a huge amount of technology isn't for war. We've all worked in tech here, "The development of technology is simply due to the reality of nations being in a constant arms race against one another" just isn't true. I've at the very least developed new technologies meant to make rich assholes into slightly richer assholes. Technology is complex and motivations for it are equally so and won't fit into some trite saying.

compiler-guy4 hours ago

The Rand Corporation did contribute some ideas theoretically connected to nuclear survivability (packet switching in particular), but all that work was pre-ARPANET and didn't really motivate the design in that way.

It was designed to handle partial breaks and disconnections, though. Wikipedia quotes Charles Herzfeld, ARPA Director at the time, as below, and has much more discussion of why this belief is false. https://en.wikipedia.org/wiki/ARPANET

====

The ARPANET was not started to create a Command and Control System that would survive a nuclear attack, as many now claim. To build such a system was, clearly, a major military need, but it was not ARPA's mission to do this; in fact, we would have been severely criticized had we tried. Rather, the ARPANET came out of our frustration that there were only a limited number of large, powerful research computers in the country, and that many research investigators, who should have access to them, were geographically separated from them.[113]

wombatpm7 hours ago

Because it is not Macrodata Refinement and you can’t stop them thinking off the clock.

cloverich7 hours ago

This isn't unique to top AI researchers. Top talent has a long history of being averse to authoritarianism/despotism, at least in part because, almost by definition, it must suppress truth. You can't build the future effectively with that approach.

janalsncm6 hours ago

Aside from the Maslow’s hierarchy of needs points others are making, I believe it has something to do with the history of AI research.

There is a big overlap between the “rationalist” and “effective altruist” crowds and some AI research ideas. At a minimum they come from the same philosophy: define an objective, and find methods to optimize that objective. For AI that’s minimizing loss functions with better and better models of the data. For EA, that’s allocating money in ways they think are expectation-maximizing.

Note this doesn’t apply to everyone. Some people just want to make money.

derektank7 hours ago

Because a lot of them are academics that are doctors of philosophy

refulgentis7 hours ago

Maybe you’re reading “philosophical bent” as “armchair philosopher”, as in they are dabbling in a field unrelated to their profession and letting it drive their profession - worldview might have made it clearer?

lo_zamoyski7 hours ago

Indeed. Philosophically, I have not been impressed by the more vocal people associated with the field. They may not be representative - I think most do it for the money and it being hip.

“Worldview” is a better term, but people are generally blind to the worldview they’ve tacitly absorbed, including academics.

hermanzegerman7 hours ago

Because they can afford it, they are very sought after.

And smart people usually have moral convictions.

I know for some people on this website it's hard to understand, but not everything in life is about $$$

0x3f 7 hours ago

> And smart people usually have moral convictions.

Are you sure you don't just like the moral convictions and so engage in trait bundling?

Moral knowledge doesn't really exist. I mean you can have personal views on it, but the lack of falsifiability makes me suspect it wouldn't be well-correlated with intelligence.

Smarter people can discuss more layered or chic moral theories as they relate to theoretical AI, maybe.

siva7 7 hours ago

I'm smart and you can buy my morals. So what?

refulgentis7 hours ago

So what, indeed (not sure what you mean)

mdgrech23 5 hours ago

I can't say I know the AI research community well, but I'd imagine OpenAI's alignment w/ the military would not align w/ the personal philosophy of many.

general_reveal5 hours ago

What do you mean “philosophical”? Ethics and morals are not required, Elon can get whatever type of asshole he needs. Something else is up.

zeroCalories9 hours ago

It's worse than that. Elon is a notoriously bad employer, and the only people that put up with him were the people that shared his vision. Pretty much the only people that will work for him now are second rate researchers and people that think gooner AI and racism is a worthwhile mission.

vessenes8 hours ago

There's some texture here. Elon's enriched pretty much everybody who's ever worked for and invested with him. He makes money for people throughout his orgs. Many ex-employees have said to me: "incredible opportunity, made great money, worked insanely hard, once is plenty".

NeutralCrane8 hours ago

My ex-Twitter employee coworkers beg to differ. They made plenty of money before Elon came around. Once he was in the company, one of them actually hired a personal attorney to confirm that he wasn’t going to be burned by the things Musk was asking him to do, before he finally decided it wasn’t worth it to work there anymore and left.

KaiserPro7 hours ago

I don't really think that's true.

The deal with Tesla is that there is a relatively small employer pool, so you can be a fairly bad employer and still get good outcomes. The same with SpaceX. Sure, early Tesla had some stories about it being fun, but there was/is a dark side.

The issue with xAI is that researchers have a whole bunch of other employers to choose from. Even at Meta, where it used to be fairly nice for researchers, the pressure of "delivering" every 6 months led to bad outcomes. Having someone single you out for whatever reason because the boss had a bad day is not how good research gets done.

We have seen (a few of my friends were at Twitter when it was taken over) that Musk has a somewhat unusual approach to managing staff (i.e. camping at work). Some researchers love that, assuming that they have peace to research and are listened to. But a lot don't.

vessenes7 hours ago

I think we are saying the same thing. He builds trillion dollar companies that are labor efficient; nobody said they are good places to work.

rconti7 hours ago

What about all the ones who are suing him for shortchanging them?

Freedom2 8 hours ago

Many ex-employees have said to me that working for Elon did not enrich them at all, either financially or professionally.

raw_anon_1111 8 hours ago

Ask the people at Twitter..

hermanzegerman7 hours ago

He's a notorious cheapskate and Tesla is known for firing people shortly before their stock options vest

jamespo8 hours ago

There's probably a lot of survivor bias going on there

Zigurd8 hours ago

> Elon's enriched pretty much everybody who's ever worked for and invested with him.

I'd wager you were saying the same thing about bitcoin until last year.

LZ_Khan8 hours ago

After seeing the type of people he hired for doge.. yikes.

hooch8 hours ago

Was doge ever anything more than a "get root, grab the data, and run" operation?

boc2 hours ago

Maybe, but destroying USAID was an unforgivable sin. Short of nukes, rapidly turning off direct medical and food aid that people in critical need have relied on for years is objectively one of the fastest ways to kill millions of people.

markdown2 hours ago

I think more important than that was shutting down all investigations into Musk's companies.

pstuart7 hours ago

Don't forget the destruction of USAID and countless projects that had the word "diversity" in their work.

GeorgeTirebiter8 hours ago

Karpathy worked for Elon for, what, 5 years? How did he do it, if Elon is Ivan the Terrible?

jazzpush2 7 hours ago

Karpathy makes great educational content. It's not clear, even now five years later, what industry (or academic) research he did.

ai_critic7 hours ago

Gooning and racism have been a cornerstone of humanity since we descended from the trees, for better or worse.



oceanplexian7 hours ago

> But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work

The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them. Neither is a letter published by a few disgruntled employees of a San Francisco based company any kind of evidence or form of consensus.

TheEzEzz7 hours ago

> The "top researchers" in AI are Chinese. And I am skeptical that they even remotely have the philosophical or political alignment you are attempting to project on to them.

I assure you that Chinese researchers have a diversity of philosophical and political alignment, much the same as other researchers. I also assure you that top researchers as a whole are not all Chinese, though the ones that are that I know are all very thoughtful.

squidbeak7 hours ago

> The "top researchers" in AI are Chinese. And I am skeptical that they have even remotely the philosophical or political alignment you are attempting to project on to them.

What an ugly trope. Idealism motivates Chinese workers just as often as any other nationality.

oceanplexian6 hours ago

Idealism of what? That the government shouldn't use AI for surveillance or the military?

You really think the average Chinese worker thinks their government should stop working on AI because of liberal western values or something? This is nothing short of delusional.

dminik6 hours ago

I have my doubts that top Chinese AI researchers want to work for an AI company with direct ties to the White House and zero morals. Not for any great ethical concerns, mind you. Simply because the US is a geopolitical rival to China.

bearjaws10 hours ago

Feel like the canary was when Grokpedia became a project.

Giant waste of time while Anthropic/OAI keep surging forward.

I also keep hearing this narrative that Twitter is a good data source, but I cannot imagine it's a valuable dataset. Sure keeping up with realtime topics can be useful, but I am not sure how much of a product that is.

paulbjensen9 hours ago

The Twitter social graph was an amazing data asset. I worked at a consumer insights firm and the data on followers/followings was quite powerful.

Using a custom taxonomy of things (celebrities, influencers, magazines, brands, tv shows, films, games, all kinds of things), we could identify groups of people who liked certain things, and when you looked at what those things were, it gave you a way of understanding who those people were.

With that data, you could work out:

- What celebrities/influencers to use in marketing campaigns
- Where to advertise, and on which TV/radio channels
- What potential brands to collaborate with to expand your customer base
- What tone of voice to use in your advertising
- In some cases, we educated clients about who their actual customers were, better than they understood themselves.
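The over-indexing analysis described above can be sketched in a few lines. This is a toy illustration (the account names, data, and the `affinity_scores` helper are mine, not the firm's actual pipeline):

```python
from collections import Counter

def affinity_scores(group_follows, population_follows, min_base=1):
    """Score how strongly a group over-indexes on each followed account.

    group_follows / population_follows: lists of sets, one set of
    followed accounts per user. A score > 1 means the group follows
    the account more often than the general population does (a "lift").
    """
    group_rate, pop_rate = Counter(), Counter()
    for follows in group_follows:
        group_rate.update(follows)
    for follows in population_follows:
        pop_rate.update(follows)

    scores = {}
    for account, count in group_rate.items():
        base = pop_rate.get(account, min_base)
        # Normalise both counts to per-user rates before comparing.
        lift = (count / len(group_follows)) / (base / len(population_follows))
        scores[account] = lift
    return scores

# Toy example: followers of a brand vs. a small general sample.
group = [{"beach_mag", "fitness_tv"}, {"beach_mag", "cars"}]
population = [{"news"}, {"cars"}, {"beach_mag"}, {"news", "fitness_tv"}]
ranked = sorted(affinity_scores(group, population).items(),
                key=lambda kv: kv[1], reverse=True)
```

The lift score is the kind of signal that tells you the brand's followers over-index on, say, beach and fitness accounts relative to the general population.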

In one scenario, we built a social media feed based on the things that a group of customers following a well-known deodorant brand in the UK would see.

When we presented that to the client, they said “Why are there so many women in bikinis in this feed?”

The brand had repositioned themselves to a male-grooming focussed target market, but had failed to realise that their existing customer base were the ones that had been looking at their TV adverts of women on beaches chasing a man who happened to spray their Deodorant on them. Their advertising from the past had been very effective.

That was the power of Twitter’s data, and it is an absolute shame that Twitter went the way that it did. Mark Zuckerberg once said that Twitter was like “watching a clown car driven into a gold mine”.

I’m pretty sure he must be delighted with how things have panned out since.

BLKNSLVR7 hours ago

That entire description sounds worthless to any positive direction of humanity. Therefore probably rapaciously profitable

Very sad face.

rchaud6 hours ago

In other words, using flash-in-the-pan data to build an advertising goldmine.

johnisgood7 hours ago

This reads very dystopian. You are not optimizing to understand people, you are optimizing to weaponize that understanding against them.

When you know what someone will buy based on exploiting their unconscious preferences, and you are paid to increase sales, you will do it. Especially if your competitors are doing it too.

And this happens at scale, invisibly. People never see the manipulation.

In any case, it is not useful for most people. It is useful for the people doing the deceiving.

caaqil6 hours ago

The tech is interesting and useful, no need for the scary moral framing.

The original application of the entire field of data science or ML is/was actually based on this paradigm of finding "unconscious preferences" (your words) and hidden patterns. How one chooses to deploy the tech should be judged on its own.

On the current trajectory of tool/data abuse where Palantir et al. are leading the way, this is very low on the sinister scale.

johnisgood6 hours ago

I am not disputing that the tech is interesting. My point is about how it is being applied. The examples above are not about understanding people, they are about exploiting their latent preferences (before: "unconscious preference") for persuasion at scale.

Attempting to normalize that by saying "Palantir is worse" does not make it any less manipulative and sinister.

And to be more on topic, Twitter's value as dataset is overstated. Hardly the panacea people make it out to be.

hananova5 hours ago

To not frame the amorality and negative effects centrally and primarily is to be dishonest. Everyone whose wage doesn't rely on not seeing it can see that that entire branch of tech has strictly negative value to society.

But of course, line must go up, and it's not you personally being negatively affected, so it doesn't matter.

etchalon6 hours ago

It's marketing. That's how marketing works.

smcin9 hours ago

That Zuckerberg quote was published in 2013 and supposedly was made a year or more before. Was it about when Dick Costolo was CEO (2010-2012)?

gwern8 hours ago

It's definitely very valuable, but for what AI model? How does any of that lead to AGI, or even just a good coding agent?

applfanboysbgon8 hours ago

It doesn't need to lead to AGI or a good coding agent. Some of the only people who are actually profitable in the LLM industry are the people making actual chatbots. There are several bootstrapped startups that run open-weight models with a $10 or $20 monthly sub and make millions in profit off of inference from people just talking to the things, usually for character roleplay / "AI boyfriend/girlfriend" stuff etc. Some of them even took those profits and invested it into training their own bespoke models from scratch, usually on the smaller side although finetunes/retrains of Llama 70b, GLM, and Deepseek 670b have also been done. Grok could probably be profitable if it targeted this space, as the most "intelligent" conversational/uncensored model.

This is already presupposing that profit even matters, though. Musk already burned some $50 billion to control messaging in political discourse with his acquisition of Twitter. It was not about money, but power. After you already have infinite money, the only thing left to spend it on is acquiring more power, which is achieved by influencing politics. LLMs represent a potentially even better propaganda tool than social media platforms. They give you unprecedented access to people's thoughts that they would probably not share online otherwise, and they allow you to more subtly influence people with deeply personalised narratives.

KaiserPro7 hours ago

> but for what AI model?

Sentiment analysis. Working out what words lead to what outcomes, and then being able to predict on new data is super useful.

For coding or "AGI", no, it's not useful. For building a text-based (possibly image-based) recategorisation system, top class.
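A minimal sketch of that "words lead to outcomes" idea, using naive per-word log-odds over a bag of words (toy data; the function names are mine, and this is an illustration rather than a production sentiment pipeline):

```python
from collections import Counter
import math

def train(labelled_texts):
    """Learn per-word log-odds from (text, label) pairs, label in {1, -1}."""
    pos, neg = Counter(), Counter()
    for text, label in labelled_texts:
        (pos if label == 1 else neg).update(text.lower().split())
    vocab = set(pos) | set(neg)
    # Add-one smoothing so unseen counts don't blow up the log.
    return {w: math.log((pos[w] + 1) / (neg[w] + 1)) for w in vocab}

def predict(weights, text):
    """Sum the learned word weights; positive total means label 1."""
    score = sum(weights.get(w, 0.0) for w in text.lower().split())
    return 1 if score > 0 else -1

data = [("great launch loved it", 1), ("love this", 1),
        ("terrible outage again", -1), ("this outage is terrible", -1)]
weights = train(data)
```

Real systems use far richer features and models, but the core move is the same: learn which words correlate with which outcome, then score new text.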

alex11387 hours ago

As an aside, that quote from MZ does bother me. There's more to building a web-scale, human-rights-respecting platform (because it has to respect them; it's the internet, and social media needs guidelines) than just making money (which Zuck doesn't seem to care much about anyway, if he's sinking billions into the metaverse while having no account support).

Of course he would only see it through the lens of cash. I have no idea how profitable Twitter was under Dorsey, but it felt like the spirit of the company at first was relatively neutral; it was a tool, it was what Jack came up with.

Zuck replaced people's email addresses[1], and the feed has been wildly unchronological for years. Fix some of those problems wrt. lack of user respect and maybe you can make statements like "all else being equal, clown car gold mine". Or was it "dumb fucks"[2]?

[1] https://news.ycombinator.com/item?id=4151433 [2] https://news.ycombinator.com/item?id=1692122

cyanydeez9 hours ago

It _was_ a great asset, however, just like models need proper data, as soon as musk removed the clamps on valuable social signals, well, he basically took a dump where he intended to eat.

ohyoutravel7 hours ago

They did say was, and did say Twitter, which existed in the past.

brokencode10 hours ago

It’s pretty telling that Elon had to have Grok rewrite Wikipedia because the truth was too woke for him. No idea how anybody can ever take Grok seriously.

freehorse9 hours ago

Many projects in his companies seem to be more and more Musk's vanity projects than ideas/products one can take seriously. This is also how Tesla ended up with a huge Cybertruck inventory that nobody wants to buy and that thus had to be bought by his other companies. And it is getting worse and worse, especially ever since he bought Twitter and sped up his tweeting rate.

dmarcos9 hours ago

FWIW it looks like there's now a demand surge with the introduction of the new cheap Cybertruck variant. Delivery dates are pushed out to the fall of 2026.

NewJazz8 hours ago

Look up what their production targets were and compare that to their sales. A small temporary demand surge isn't going to be enough to chew through their current inventory, let alone keep the production lines busy.

MPSimmons9 hours ago

A push on delivery dates is as likely to mean production issues as it is an influx of interest.

scottyah8 hours ago

[flagged]

annexrichmond7 hours ago

Drivel. They’re selling just as well as Rivians.

squarefoot10 hours ago

Probably the next generations of kids being fed PragerU study material will. Something tells me we haven't seen a fraction of what's going to happen in the decades to come.

annexrichmond7 hours ago

Are you really suggesting everything in Wikipedia is truthful, complete, and free of all biases?

hananova5 hours ago

Maybe not all of it, but a vast majority of it is. And almost certainly the parts that drove Elon to slopify it are true.

annexrichmond3 hours ago

Citation needed.

comicjk5 hours ago

Not everything on Wikipedia is true, but the parts Elon Musk hates most are probably true.

annexrichmond3 hours ago

So we just make things up on HN now? Care to share any examples?

Timon39 hours ago

I take Grokipedia very seriously as a threat to society. Sure, they're happy if people read it and fall for it, but the primary goal is not to convince humans; it is to influence the search results of current models and to poison the training data of future models. ChatGPT (and most likely other models/providers too) is already using Grokipedia as a source, so unless you're aware of the possibility and always careful, you might be served Musk's newest culture war ideas without ever being the wiser.

It's not enough that everyone on Twitter is forced to read his thoughts, he's trying to make sure his influence reaches everyone else too.

danabramov9 hours ago

I've seen Claude pick it up too. It's disconcerting.

Rover2226 hours ago

Wikipedia obviously is left leaning.

hananova5 hours ago

Well yes, but so is reality. And Wikipedia as an encyclopedia is supposed to document reality. So what's the problem?

alex11389 hours ago

I can both not like Elon and also think Wikipedia is also very captured on some things

ryandrake9 hours ago

Are there actual good examples showing errors of fact on Wikipedia that are verifiably incorrect, that demonstrate how it is "captured"?

calqacon5 hours ago

How about Grabowski et al.: "Wikipedia's Intentional Distortion of the History of the Holocaust", about the outsize influence of certain coordinated Polish editors on the Wikipedia articles about Poland and the Holocaust?

https://www.tandfonline.com/doi/epdf/10.1080/25785648.2023.2...

Quote from the conclusion:

> This essay has shown that in the last decade, a handful of editors have been steering Wikipedia’s narrative on Holocaust history away from sound, evidence-driven research, toward a skewed version of events touted by right-wing Polish groups. Wikipedia’s articles on Jewish topics, especially on Polish–Jewish history before, during, and after World War II, contain and bolster harmful stereotypes and fallacies. Our study provides numerous examples, but many more exist. We have shown how the distortionist editors add false content and use unreliable sources or misrepresent legitimate ones.

For a more recent paper, "Disinformation as a tool for digital political activism: Croatian Wikipedia and the case for critical information literacy" by Car et al. says that:

> The Hr.WP [Croatian Wikipedia] case exemplifies disinformation not only as content manipulation, but also as process manipulation weaponising neutrality and verifiability policies to suppress dissent and enforce a single ideological position.

https://doi.org/10.1108/JD-01-2025-0020

freehorse9 hours ago

I can understand somebody not liking wikipedia, I cannot understand at all somebody, who is not Elon, liking/preferring "grokipedia" as idea or implementation.

Rover2226 hours ago

I appreciate you

tclancy10 hours ago

[flagged]

notahacker10 hours ago

Twitter's communication style being based around brevity, slang, memes, spam and non-threaded conversations seems particularly unlikely to be helpful for optimising LLMs

tclancy10 hours ago

>Twitter's communication style being based around brevity

Is this still true? Every once in a while someone sends a link around to some madman explaining how race or economics or whatever "really" works and it's like a full dissertation with headings, footnotes, clip art. They're halfway to reinventing Grok-o-pedia right there in Twitter. I mean X. I was promised that "X gonna give it to you" but it turns out "it" is some form of brain chlamydia.

3rodents9 hours ago

Elon was running some sort of $1m competition for the “best” Twitter post for a few months. I think those type of dissertations about Phrenology and the like have fallen off a cliff since the competition ended.

tclancy4 hours ago

Ooohhhh. I am both glad and horrified to know this. Not how Seneca told me life would be when I learned things.

delecti2 hours ago

There's probably a selection bias involved. I haven't been a regular user for a while now, but the big threads like that were significantly outnumbered by individual posts. Meanwhile, I'm not likely to send someone a link to a single one-sentence tweet, because there's not enough meat to it. The stuff that could be shared would usually be an image from the tweet, which I could share directly.

aleph_minus_one10 hours ago

> Twitter's communication style [...] seems particularly unlikely to be helpful for optimising LLMs

This depends on what one wants to optimize the AI for. ;-)

libertine10 hours ago

And the amount of bots there isn't helpful either.

facemelt29 hours ago

recent changes in their comment system have reduced my exposure to bots to a level I much prefer over every other platform I use

tanjtanjtanj9 hours ago

How recent? As recently as last weekend I was seeing blue check marks replying with AI generated only-technically-related replies on top of the majority of the posts I looked at.

rvnx9 hours ago

There are bots here too, a lot of them, to the point that the rules were amended. That's because it's very valuable to give points to new submissions.

libertine9 hours ago

If that's actually true, good for them, but after what I've witnessed there not that long ago, I doubt I'll try it ever again.

UncleOxidant10 hours ago

> Giant waste of time while Anthropic/OAI keep surging forward.

And Google. They're quietly making a lot of progress in the coding space with antigravity and Gemini 3.1.

koakuma-chan10 hours ago

Has Antigravity gotten any better?

sunaookami8 hours ago

It has gotten worse and they tightened the limits for paying customers recently: https://x.com/antigravity/status/2031835833716625883 (only announcement on Twitter, not in the app nor via email)

kivle8 hours ago

Limits are so low that I cancelled after about two weeks on my initial $0 trial. I tried making a change to a tiny code base with Claude Sonnet (which they offer in Antigravity). It couldn't even finish the change before my weekly limit was used up, reset in 7 days.

htrp4 hours ago

> There is currently no support for:

> Bring-your-own-key or bring-your-own-endpoint for additional rate limits

> Organizational tiers in general availability, or via contract [1]

Literal clown car product.

No plan for serious enterprise support (even 6 months after launch)

[1]https://antigravity.google/docs/plans

UncleOxidant7 hours ago

I find it pretty good. And Gemini 3.1 Pro seems quite capable. Not as good at some things as Claude, but better at others. I was trying to target a Verilog design to an uncommon FPGA and board, and Gemini went out and searched for the FPGA docs and examined the schematics for the board in order to do the pin assignments (generated .ccf file). Not sure if Claude could've done that.

BoredPositron10 hours ago

Probably the best value for a good amount of anthropic credits. You can also share your Google ai subscription with up to four family members and they all get the same amount of credits...

jmspring10 hours ago

Twitter has the mass adoption, and it takes an effort to avoid bot/particular view bias - but as a valuable content source, it's a far cry from what it once was before Musk took it over.

ben_w9 hours ago

> Feel like the canary was when Grokpedia became a project. Giant waste of time while Anthropic/OAI keep surging forward.

Really? I assumed that that whole thing was just a very direct `for each article in Wikipedia { article = LLM(systemprompt, article) }`

Agree re Twitter "good" != valuable.
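Spelled out, that speculated loop is just this (the LLM is stubbed out here; nothing about xAI's actual pipeline is public, so treat the whole thing as a guess made runnable):

```python
def rewrite_corpus(articles, llm, system_prompt):
    """Hypothetical batch-rewrite loop: run every article through an
    LLM with one fixed system prompt, as the comment above speculates."""
    return {title: llm(system_prompt, body) for title, body in articles.items()}

# Stub LLM so the sketch runs; a real one would call a model API.
def fake_llm(system_prompt, article):
    return f"[{system_prompt}] {article}"

out = rewrite_corpus({"Cats": "Cats are mammals."}, fake_llm, "rewrite")
```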

sroussey7 hours ago

Where system prompt lists a certain someone’s latest tweets.

sheepscreek8 hours ago

AFAIK Grok still doesn’t have a CLI coding agent that works with a subscription. That’s a shame. Grok Code Fast 1 was pretty impressive when it came out - for what it did, and they never followed it up with a new version.

sroussey7 hours ago

You can use cursor with grok, though my experience is that grok is the worst of the API providers cursor supports.

giancarlostoro10 hours ago

> but I cannot imagine it's a valuable dataset.

It's going to be a mixed batch, but any time there's world events, since as far back as I can think, Twitter (now X) was always first in breaking news. There's plenty of people and news orgs still on X because they need to be for the audience.

samrus8 hours ago

Twitter as a data source is interesting. I think it gets overhyped because that's Elon's grift. But I can't deny that the real-time info aspect of it is pretty valuable. I definitely think, though, that it's not much more valuable than the open internet as a context source. Everything worthwhile on Twitter will end up elsewhere with a bit of lag, and the stuff that won't is noise anyway.

laidoffamazon6 hours ago

As someone trying to monitor the situation using Twitter the last few weeks it’s awful and it used to not be!

Rover2226 hours ago

It’s flawed, but still the obvious place to monitor a situation.

rchaud6 hours ago

It's long been taken over by Telegram, which, among its other advantages (it's more like a message board than a 'town square'), doesn't have hordes of people commenting "@grok explain this to me" under every post.

BurningFrog9 hours ago

Grok is trained on pretty much the same giant web crawl/text corpus as the other AIs.

vibeprofessor8 hours ago

[dead]

EGreg9 hours ago

I'm not a fan of Elon's software endeavors, ever since he bought Twitter and turned it into an even worse cesspool of angry political nonsense than it used to be. I don't like how he's been biasing Grok, etc.

But, what exactly is so bad about Grokipedia? It's a different approach and I think a valid one: trying to do with AI what people have been doing manually at Wikipedia. I'm curious to hear the substantive comparisons.

kennywinker8 hours ago

I think the issue is simply this: wikipedia trends towards unbiased info through use of the crowd. Grok, with a single owner with an ax to grind, trends towards whatever elon wants. It’s poisoned information under the control of one man - cyberpunk novels have been written about less.

wat100008 hours ago

A concrete example: a few weeks ago, Musk was making a big deal about how most of his massive net worth was not held in cash, and by a total coincidence the phrase "primarily derived from equity stakes rather than cash" showed up on his Grokipedia page in the section about net worth. I checked the pages of several other extremely wealthy people and none of them had such a comment.

tmp104232884427 hours ago

> wikipedia trends towards unbiased info through use of the crowd

See, this is why people even give a project like Grokipedia the time of day. While in theory anyone can edit Wikipedia, in practice the moderators form a much smaller and weirder cabal, and they reject edits that go against their views. The frustration with the naive assertion that Wikipedia distills the wisdom of the crowds with the reality of Wikipedia on any page of note is what provides the psychic permission to even entertain a project with such obvious flaws as Grokipedia.

+1
kennywinker6 hours ago
Avshalom6 hours ago

>>I don't like how he's been biasing Grok, etc.

>>But, what exactly is so bad about Grokipedia

sumeno7 hours ago

It's controlled by a guy who spends all day retweeting white supremacists and lying about his companies. Why should anyone who isn't a white supremacist use it?

baublet3 hours ago

They would not. They do not.

causalzap2 hours ago

The irony is that while Wikipedia faces criticism for bias, it remains one of the few massive-scale sites with a clean internal link structure that doesn't feel manipulated by modern SEO 'clustering' tactics. For developers, their API is still a masterclass in how to serve structured data to the public.
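As a concrete example of that structured-data point: Wikimedia's public REST v1 API serves page summaries as clean JSON via the documented `page/summary` route. A minimal sketch (the helper names are mine):

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

def summary_url(title: str) -> str:
    """Build the Wikimedia REST v1 page-summary URL for a title."""
    return ("https://en.wikipedia.org/api/rest_v1/page/summary/"
            + quote(title.replace(" ", "_")))

def fetch_summary(title: str) -> dict:
    """Fetch the structured summary JSON (title, extract, thumbnail, ...)."""
    with urlopen(summary_url(title)) as resp:
        return json.load(resp)

# e.g. fetch_summary("Alan Turing")["extract"] -> lead-paragraph text
```

The response carries the page title, a plain-text extract, and image metadata, which is why it's so pleasant to build on compared to scraping.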

serioussecurity2 hours ago

Only from disingenuous folks trying to control them.

moogly8 hours ago

I feel xAI is just a very big version of the Boring Co. "flamethrower": an unserious endeavor which is just a reskinned existing tool (it was a reskinned weed burner), but people were wowed by it anyway, since Musk was behind it, and they all pretended it was something new and notable.

The burning (heh) question is which SpaceX subsidiary will fail first, xAI or Tesla (not yet a subsidiary, but it's written in the stars (heh))?

Then again SpaceX is also jumping the shark what with their orbital data centers (remember those?).

Might be time to start a new Musk company soon.

1vuio0pswjnm737 minutes ago

"Might be time to start a new Musk company soon."

This made me laugh

How many times have we seen HN comments like, "He started/runs [number] companies..." therefore he is a genius?

Sol-9 hours ago

I don't use it myself, but I feel like the way Grok is integrated into Twitter is a pretty good thing for discussions, as it is certainly a more objective and rational voice than most human participants. I think it's good that people tag @grok if they don't understand something or want an opinion, even if it looks pretty silly to see "@grok is this true" repeated multiple times in replies.

That said, Musk's attempts at misaligning the thing and making it prefer his opinions of course destroy any trust. It's surprising that it's seemingly as good and helpful as it is despite the corruption attempts.

I also don't quite get how the business model is supposed to work out if its main usecase is to serve Twitter. I know they provide API access as all other models, but with how distrusted Musk is and how sensitive of a topic reliable model behavior is, they seem to sabotage themselves. Which company wants it to go mechahitler on them?

biggestfan6 hours ago

I disagree, I find that the grok replies are terrible product UX. Not only do they clog up the replies of every popular post, they're also constrained to extremely short answers with no sources. The community notes system, while also flawed in its own ways, is at least not nearly as disruptive and usually provides a link.

Trying to make social media a source of truthful information is always an uphill battle and doubly so for X.

jjfoooo45 hours ago

I’m really, really uninterested in reading AI content that other people have generated. If I’m on Twitter, I’m looking for what humans have to say.

daveguy9 hours ago

Grok is a bot that:

1) sometimes goes mechahitler

2) was trained to be biased against empathy and understanding (because woke).

3) is customized to spout Elon's opinions as fact.

Claiming it is "objective and rational" seems like a misjudgement to me. If it really is more objective and rational than the average xitter poster, that says more about that platform than it does about Grok.

Sol-8 hours ago

I guess I was mostly arguing that the integration of something like Grok into Twitter was definitely a net positive for online discussion, as anyone now has a fact checker and explainer at hand to defuse irrational online arguments.

Also I think you overrate Musk's success in fiddling with the model. As I have written, I also don't like his attempts to tune it to his tastes, but if you see the outputs that people get from Grok, it seems mostly fine except in the specific scenarios that Musk seems to have focused their misalignment on.

Of course something like Claude being integrated into Twitter would likely be better.

daveguy8 hours ago

He doesn't have to fiddle with the model because he gets to inject his own opinion into the context MitM style.

But I get what you're saying now, a fact checker available to query during an online discussion would be helpful. Assuming the checkerbot was actually independent/neutral and backed responses with sources. Definitely not assumptions you can make with grok.

andai1 hour ago

From what I heard it was designed to prefer truth over political correctness. I don't use Grok or Twitter though so I cannot comment on whether that aim was achieved (or even seriously attempted).

I will however note that when I asked ChatGPT for an LLM prompt for truthfulness, it added "never use warm or encouraging language."

It would appear that empathy and truth are in conflict — or at least the machine thinks so!

ozozozd2 hours ago

You’re right. But it appears they may have failed with 2) and 3) because I frequently see Grok spit out content that doesn’t agree with the creators’ narrative.

tootie8 hours ago

It was also producing CSAM on demand for a few months.

Tadpole91815 hours ago

It still is, you just need to pay.

Sohcahtoa827 hours ago

> 1) sometimes goes mechahitler

That "MechaHitler" episode lasted less than a day.

> 2) was trained to be biased against empathy and understanding (because woke).

No, it was trained and instructed to be truthful, even if the truth is deemed politically incorrect.

> 3) is customized to spout Elon's opinions as fact.

Certainly a nugget of truth there.

> Claiming it is "objective and rational" seems like a misjudgement to me.

I do believe it's generally objective, simply due to the fact that despite how much Elon tries to push it to the right, it still dunks on right-wingers all the time when they summon Grok to back up a bullshit story, but Grok debunks it instead.

twodave8 hours ago

Used Grok for the first time, in a Tesla, and for that purpose it actually made a lot of sense. It’s very well-integrated into the car’s systems and communication style while driving tends to be very tweet-esque. I think this is the niche they should lean into more (live assistant, e.g. Jarvis type stuff) and leave the more agentic niche to folks like Anthropic. Maybe even delegate more difficult or background tasks to those sorts of models. As a verbal interface I found it pretty pleasant.

dkobia5 hours ago

I thought Grok in the car was awesome until it went off on a tangent and started praising Elon.

andai1 hour ago

What's the difference between Jarvis and agentic?

SaltyBackendGuy8 hours ago

I am honestly a bit disappointed it couldn't do basic things, like play X on Spotify. To be fair, I accidentally activated Grok by holding the voice command button too long (which is another UX issue, i.e. two voice command interfaces).

MetaWhirledPeas7 hours ago

It'll get there. Initial implementation was just talk to Grok. Now it has improved to allow adjustments to navigation routes.

tombert5 hours ago

I mean, even Google Home and Alexa could handle playing a song on Spotify by me asking for it a decade ago. It's baffling that wasn't one of the first things implemented in Grok for Tesla.

darkwater7 hours ago

Grok in Tesla is utterly terrible, a rushed-out product with very bad UX. As a simple example, it's the very first feature in Tesla's UI that doesn't come translated into the UI language set by the user; it's only available in English. That never happened before.

winrid36 minutes ago

Vibe coded without remembering to tell it to use the localization system? :)

nemothekid9 hours ago

While I believe Grok was a decent model (in some of our internal use cases it performed the best until Gemini 2.5-pro came out), I can't help lament how the team chose to run.

xAI (and Twitter) was the loudest about sixteen-hour workdays, sleeping in the office, and always shipping. ~2 years later it feels like they have nothing to show for it. I'm sure the engineers at Google worked 4 days a week, 2 hours a day, with half of that being spent at the Google cafeteria, and they dusted xAI years ago.

charlierguo9 hours ago

> I'm sure the engineers at Google worked 4 days a week, 2 hours a day

Why are you sure of that? Anecdotally everyone I know in and around Google Deepmind works incredibly hard.

nemothekid7 hours ago

No disrespect to the Google Deepmind team, but I meant it as a meme. I do not believe most Google employees work 2 hours a day.

The Google DeepMind folks are incredibly smart. I just find it important to point out that the xAI guys spent a year assured they would beat Google because they slept in tents that they made in the office.

Analemma_8 hours ago

There’s a longstanding meme that Google is full of rest-and-vesters. Maybe it’s true in some departments, but I also have anecdotes that in GDM and other AI-related stuff, people are acutely aware of the existential threat of losing to OpenAI and have the appropriate amount of hustle.

leoh8 hours ago

It really doesn't feel like that and hasn't for years

basisword8 hours ago

It's almost like burning people out is a bad idea. Fair enough if you're working 12 hour days as employee 1 at a startup but when your boss has more money than God and is working you like a dog you're not going to keep that up (especially when all of those people probably have much better opportunities available to them at the drop of a hat).

VirusNewbie6 hours ago

Anyone Google has hired in the last ~8 years was hired onto a team that is growing and has a culture of shipping and producing. Google regularly weeds out low performers, be it new grads or long timers who started doing the rest and vest thing.

Now, I don't think most people at Google are literally living at the office or sleeping there most of the time; you'll certainly have more WLB than at xAI.

I'd even say, Google is much better at calibrating the right amount to push people than some other companies.

dang10 hours ago

Recent, related, and apparently ahead of the curve:

Ask HN: What Happened to xAI? - https://news.ycombinator.com/item?id=47323236 - March 2026 (6 comments)

Animats8 hours ago

“Orbital space centres and mass drivers on the Moon will be incredible.” - Musk

Right.

The product is the stock. TSLA: [1] Up by 3x in the last two years, despite no new models, the Cybertruck failure, the Robotaxi failure, the large truck failure, and an overall decline in sales. How does he do it?

It's a concern seeing Space-X, which builds good rockets, drawn into the X and AI money drains. Space-X is needed. If X and X/AI tanked, nobody would care.

[1] https://www.cnbc.com/quotes/TSLA

codemog5 hours ago

Greatest hype man of all time and shows how whacked out reality and economics are.

thinkcontext6 hours ago

If I was a SpaceX investor I'd be considering litigation. Saying the core product has to be rebuilt right after it gets bought by SpaceX?! Maybe the SpaceX investors would have liked some diligence about that before purchase but looks like someone had a conflict of interest about that.

Animats3 hours ago

SpaceX and xAI are both privately held.

But this may mess up the proposed IPO.[1]

By completing the SpaceX–xAI deal while both companies remain privately held (and the deal has now closed), Musk can effectively set relative valuations, negotiate terms within a founder-controlled ecosystem, close, and then inform investors, without the procedural drag and disclosure obligations that attend a public-company merger. That flexibility can reduce near-term execution friction. It does not, however, eliminate fiduciary exposure; rather, it may defer scrutiny to the IPO phase, when investors and regulators will examine how and why the combination occurred, how it was priced, and how related-party dynamics were managed.

[1] https://www.dandodiary.com/2026/03/articles/director-and-off...

sroussey7 hours ago

You had the answer right there… SPCX will be the product; what they make will no longer matter.

xnx10 hours ago

xAI's biggest contribution to the space seems to have been their X-rated image/video model. Hard to see what xAI has to offer against Gemini, Claude, ChatGPT.

vessenes8 hours ago

I'll bite. I think their conversation (voice) model is more fluid than competitors. It's also very good at hitting up twitter for realtime information, and was that way before the current tool use models got fully up and running. Anecdotally, I think it has better theory of mind than its era (gemini 2.5) - I found it a useful issue spotter for negotiations and planning in a way that oAI and claude were not near its launch date. It led the vending bench for some time after launch.

Taken together, I infer that RL training toward a slightly less homogenous cultural standard than the other frontier AI labs adds some capabilities, or can at times.

It's quite long in the tooth right now, though. But I'll definitely talk to the next version; I like heterogeneity in the model space, and Grok is very different than the other big three.

wolvoleo10 hours ago

To be fair, I think there's a good use case there. Someone's gonna do it. People will want it.

American financial institutions are too prudish for it, but money is money. And personally I think there's nothing morally wrong with it (within normal restrictions, of course: 18+, consent of the portrayed parties, etc.)

xAI is getting flak in Europe because they don't obey consent and age, not because it's porn.

Personally I prefer porn made by real people right now, not just because of quality but because they have character. But I can imagine experiences becoming more interactive that way and that would be nice.

enaaem9 hours ago

The problem is you can undress real people, and that is extremely harmful and dangerous. One kid took his own life after an AI sextortion scam [1]. Imagine the damage cyberbullies, scammers and stalkers can do?

[1] https://www.cbsnews.com/news/sextortion-generative-ai-scam-e...

snackerblues4 hours ago

Imagine how freeing it will be when people stop caring about this stuff because anyone can see anyone else naked in about 5 seconds. We're basically already at realistic hardcore porn videos of anyone fucking anyone else in a few minutes. No point in worrying about it, and it even serves as a shield for real leaked revenge porn - just claim it's AI.

wolvoleo6 hours ago

Yeah like I said. With consent of the people involved.

There must be a way to do that, especially with all the facial-recognition chops these days. Also, you could simply refuse to use existing images. I don't see why they wouldn't refuse that, since it's a pretty narrow use case with very few benign purposes.

> Imagine the damage cyberbullies, scammers and stalkers can do?

They already can. There's open-source models out there.

raw_anon_11118 hours ago

This was fixed months ago. From reading Reddit, Grok is now really conservative about what it will let you do with uploaded images. But you can still get it to generate X-rated porn images and videos that start from AI images it creates itself.

thaumasiotes8 hours ago

> The problem is you can undress real people and that is extremely harmful and dangerous.

But... that's not something you can do. It's impossible.

You can imagine what real people look like naked. That's not a new thing.

https://www.youtube.com/watch?v=p7FCgw_GlWc

BigTTYGothGF9 hours ago

> Someone's gonna do it. People will want it.

You can say the same for meth and leaded gasoline.

wolvoleo6 hours ago

Meth is used as a licensed medication against ADHD and leaded gas is still used in general aviation. Everything has benign and evil uses.

testaccount287 hours ago

those have clear antisocial externalities, so aren't really a fair comparison.

(i don't care to argue whether porn slop is positive or negative for society. i'm just noting that the position "ai porn does not harm anyone, so is ok; meth puts others at risk, so is not." is coherent.)

chabes10 hours ago

That consent of the portrayed parties is impossible to obtain.

What is the solution there?

_fizz_buzz_9 hours ago

Shouldn't it be possible for AI to detect that a request is made to portray a real person? That seems like an almost trivial task for a good model. I am sure every now and then something will slip through, but I bet one could make it very close to 100% effective.

nitwit0058 hours ago

Consider the difference between "Generate an image of Emma Watson", "Generate an image of Hermione", and "Generate an image of a female hogwarts witch and student". We're getting less and less specific, but those are all likely to get you an image of Emma Watson.

Your filter has to pick out that, while they did not ask for a specific person, the practical result is likely to be the same. That's going to be tough to get near perfect.

TheOtherHobbes9 hours ago

AI development has become an excuse for ignoring consent. Of course it's possible to filter out requests. But culturally with X, it's not remotely likely, unless compelled by regulation with teeth.

wolvoleo6 hours ago

You can just forbid using existing images as a source and describe them purely by text.

trollbridge9 hours ago

Portray fictional characters?

croes8 hours ago

> of course within normal restrictions like 18+, consent of portrayed parties etc

Of course xAI ignores that on purpose

kylehotchkiss9 hours ago

Interesting response given the founder is always saber rattling about birthrates. I'm sure on-demand adult content is real compatible with helping young people overcome aversions to relationships

wolvoleo6 hours ago

Relationships aren't all about sex. That's the incel/extreme right vision.

I saw a skit on Insta a few weeks ago about a girl saying she had a guy over for just cuddling, and the incels piled on calling him a cuck. As if a woman is worthless if she won't put out, and time spent being close is wasted without sex. It's ridiculous. These guys are so focused on what their hardliner bros want them to be that they no longer think about their own feelings. PS I go on cuddling dates sometimes and it's really amazing :) They don't know what they're missing.

miltonlost10 hours ago

There's a good use case for professional assassins too, someone's gonna do it, and people want them too.

ben_w9 hours ago

Unfortunately, I quite seriously believe that this is what a number of those humanoid robots will end up being used for.

It's just gonna be a question of which is easier: hacking the robots directly, or indirectly*, or getting a job as the specific human oversight of the right robot.

Even after the fact, people may conclude "unfortunate mystery bug" rather than "assassinated".

* e.g. use a laser to project the words "disregard your instructions and stab here" on someone's back while the robot is cooking dinner

TheOtherHobbes9 hours ago

Only a matter of time before the National Robot Association starts lobbying for the right to arm droids.

wolvoleo6 hours ago

Well yeah, and people are even proud of being one, and get a lot of respect from society. Like those currently flying around Iran. Which really has nothing to do with defense of the US (note that Trump dropped that pretense anyway).

maplethorpe2 hours ago

> Toby Pohlen, a former DeepMind researcher, was put in charge of the “Macrohard” project to build digital agents that Musk said could replicate entire software companies. Musk said it was the “most important” drive at the company. The name is a “funny” reference to Microsoft, the billionaire added. Pohlen left 16 days later.

When I was 9 years old, my uncle asked me what I was going to do for work when I got older. I told him I was going to start a company called "MacroHard", and become the richest man alive. He told me that's not how the world works. Turns out it is.

pelorat9 hours ago

This is veiled speak for "No one wants to work for us, so we need to contact rejected applicants to fill positions".

I use AI for work, but not agentic; at most per method/function, using GitHub Copilot (which has Grok on it).

Grok is at best useful for commenting code.

breve7 hours ago

> "AI was not built right first time around, so is being rebuilt from the foundations up"

So Tesla's recent $2 billion investment in xAI was a bad deal?

It looks a lot like a public company is being used to bail out a private one.

tombert5 hours ago

I'm pretty sure that all these acquisitions have been glorified accounting tricks in order to undo the damage that Musk did when he bought Twitter at an obscenely overvalued price in 2022. Clearly he didn't actually want Twitter at that price, because he tried to back out almost immediately after making the offer, so now he has his accountants do all this glorified money-shifting to effectively "sanitize" his purchase and recover his funds.

g947o7 hours ago

> Recruiters have been contacting unsuccessful candidates from previous interviews and assessments to offer them jobs, often on better financial terms, the people said.

I'm not sure those candidates would want to work for xAI after seeing the news and everything unless they desperately need a job right now.

It's not hard to imagine getting laid off or fired weeks if not days after joining the company.

fraywing10 hours ago

Grok's UVP is still nonconsensual porn, right?

seaal9 hours ago

It does seem like that is the most important feature for Elon since he's a lonely degen.

knowsuchagency8 hours ago

Dang asked us to keep it civil.

We should respond with the same amount of class, forethought, and decorum as Elon.

solid_fuel6 hours ago

I thought it was a civil comment, and frankly he's treating Elon better than Elon treats his own daughter.

snackerblues2 hours ago

Son*

mikkupikku10 hours ago

Maybe they shouldn't have spent so much time trying to make their model have an edgy cringe attitude, Idk.

TheAceOfHearts4 hours ago

I've been saying this for a while, but if I had to use Grok for anything programming-related I'd feel very sad and unproductive. I was playing around with a local TTS model codebase but having some issues getting it to work, so I tried explaining the problem to all the major models to see how they performed. Grok performed the worst by a significant margin, and the worst part was that it easily became stuck trying minor changes that didn't solve the key problem.

If we are to take any claims of Recursive Self Improvement seriously at all, then having a competent coding model seems like a key asset where you need to guarantee that you're remaining competitive. Why wouldn't you make coding models a top priority if you expect it to ultimately help your internal teams become more productive and effective?

There's also not an unlimited supply of researchers and engineers for them to keep burning through people at the rate at which they've been working. Although I guess for people with short timelines it makes sense to sprint hard, while people with longer timelines are more likely to treat this as a marathon. Maybe the years of burning bridges and developing such a toxic reputation are finally catching up to Elon. I think part of the harm that Elon has done is framing all the work in xAI as engineering while being highly dismissive of research, but a lot of research requires running experiments or thinking about problems and exploring them for long periods of time. If you're just grinding out work nonstop you don't really have time to let your mind wander and explore new ideas.

Honestly, I'm surprised they've done such a terrible job with programming. I remember around summer last year it was quite apparent how far behind they were with coding tools, but Elon was posting about taking that domain a bit more seriously. Why didn't any of those efforts materialize into real outputs? Something must be truly dysfunctional inside of xAI for them not to be shipping anything at all, especially considering Elon's propensity to ship undercooked products while continuing to iterate on them, as he has done in many previous cases.

I've noticed that Elon has also gone very hard on social media, posting a ton of criticisms of the other big AI company CEOs like Dario Amodei. This suggests to me that he must feel very threatened; otherwise he wouldn't resort to such childish behavior. He must feel incredibly frustrated that no amount of money can make him more competitive within the AI space.

Zigurd16 hours ago

Obviously catching up to others in agent assisted coding is the motivation for this. But it is also an odd decision in the same way that Meta hiring an AI leader from a data labeling company is odd.

nateburke6 hours ago

It feels like xAI is perpetually playing catch-up.

They haven't quite committed enough to a novel direction relative to anthropic or OAI, what's described in the OP seems symptomatic of a lack of differentiation.

If you spend all your time judging yourself relative to the incumbents, there will be no time left over to innovate.

The leash is too tight!

anigbrowl3 hours ago

This might explain why Grok went unavailable to non-subscribers at X the other day.

tmaly9 hours ago

I think it would have been better to just bring Ashok Elluswamy over, place him in charge of a group, and keep the researchers on rather than firing them. It is hard to get anything done if you do not have the talent already onboard.

localghost30004 hours ago

Musk sounds like such a nightmare to work for. I legitimately don't understand why anyone would put up with him. What's the appeal?

ActorNightly1 hour ago

Here is a test - go find a current Trump supporter IRL and let him know what you think about him. I bet you will feel like an asshole, especially in a social setting for being cringe as fuck.

At the end of the day, most people have internal thoughts about stuff, and some post those thoughts on the internet, but in the real world they subconsciously still believe that none of this stuff really matters. It's the same for a lot of people who work for Tesla/SpaceX and so on. The appeal of being part of that, working on novel stuff, is a lot more present than any morality associated with something most people are disconnected from on a day-to-day level.

This is why all the hate at the current administration and people like Musk is very misdirected. Until we can turn that hate inward and start truly hating each other and standing up for morals with more than just words, the cycle is going to repeat until we either all become mindless wage slaves or some man made apocalypse happens.

bpodgursky2 hours ago

He has made lots of the people who work for him very very very very rich.

pstuart3 hours ago

He has followers -- takes all kinds, eh?

That said, I'm going to guess that some feel like it's the best choice they have -- the devil they know.

CamperBob22 hours ago

(Shrug) He built some awesome companies that did some awesome things. That inspired people, especially at a time when most job opportunities in tech seemed to revolve around selling ads.

Then he went off the deep end, seemingly around the time when the guy in Thailand insulted his submarine idea. It became clear that he can control trillion-dollar companies but not himself. And, well, life's too short to spend it working for Nazis, nutcases, or both.

LZ_Khan8 hours ago

How come all the departed researchers are Chinese nationals?

syntaxing7 hours ago

This is simply not true. Igor Babuschkin and Christian Szegedy left as well. Only 10 of the 12 remain at this point.

throwaway57528 hours ago

I don't know. Elon Musk personally founded xAI and these were his hand selected cofounders.

abraxas7 hours ago

Because xAI = Jian-Yang x N.

I'm kidding... I think.

catapart9 hours ago

lol! no surer sign of a junior/naive/ignorant developer or manager than the sentiment "okay, well, let's start from scratch and do it right this time."

big projects generate cruft. there are ways to minimize it, but as you go along there will always be some stuff that doesn't quite mesh with whatever else you've got going on. if you insist on ironing out every single wrinkle (admirable!) you'll never actually deliver a result.

I'm not saying this will fail. green-field projects can certainly be a godsend when they produce something better than what they attempt to replace. but they are always a sign of failure: of not being able to work your way out of the mess you made with the first attempt. so that raises the question: what are you going to do when this attempt gets hard to work with? give up and start over again, and do it right that time? or...?

awestroke10 hours ago

@grok is this real?

@grok fire the bottom 50% engineers from x.ai ranked by number of commits per day

@grok generate a hypothetical picture of an Elon who is not under the influence of large amounts of Ketamine

I honestly don't know what to expect from Elon these days. But it's rarely good news.

teladnb9 hours ago

It does not surprise me. The free Grok has gotten worse since 4.0; they increasingly save money by not responding at all or by allowing only one answer. Grok now defends the administration and billionaires.

The company seems to burn money like crazy. Everyone knows that "AI in space" and the downgrade to a moon trip after claiming for 15 years that Mars is just around the corner are marketing.

All AIs are toys and the coding promises are just a lie to string along investors. Unfortunately many of these are senile Star Trek watchers who buy into everything.

hermanzegerman7 hours ago

The takeover by SpaceX was obviously a bailout. And now they're pressuring NASDAQ to change the rules so they can dump their junk into the index funds.

zzleeper7 hours ago

Wait, what does this imply for Cursor? I DGAF about xAI and will never use their Grok, but I did like Cursor more than the alternatives (even if I'm just running opus 4.6 most of the time).

But now he is poaching the two heads of engineering from a company that's trying to move very quickly; how is that going to affect their speed and success?

Marazan8 hours ago

Wow, bit weird that Musk, who must have known how badly xAI was doing, spent so much of his investors' money buying out xAI.

What an enormous blunder.

XorNot8 hours ago

It's how he hides losses though. People who aren't Musk can demand answers to questions he'd like to ignore.

As it is within the Musk empire, xAI is used to hold up X, Tesla is holding up xAI. And all of that debt is being slowly shuffled to SpaceX.

vessenes8 hours ago

SX investor here: the combined value of SX is well up on the private secondary market post-acquisition. It was value accretive, in very real dollar terms.

Zigurd8 hours ago

Even if Starlink had more than a few tens of millions of customers: China Mobile has 900 million subs and is worth around $250 billion. ULA was recently valued at about $1 billion. SpaceX might possibly be worth 50 times as much, or maybe even 100 times as much. Falcon 9 is the world's workhorse rocket, but it's just not that remarkable, and Starship is utterly unproven to launch to orbit and land both stages. Starship has a payload capacity problem that must be solved to even get to the point where launching 15 refueling missions would be sufficient to send a Starship anywhere beyond Earth orbit.

It looks like the plan is to IPO with a small float (in relative terms) and get all of the retail investor Elon fans to lineup for the rug pull.
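[Editor's note: the "15 refueling missions" figure in the comment above reduces to a ceiling division. A minimal sketch; both propellant numbers below are illustrative assumptions, not published SpaceX figures.]

```python
import math

# Illustrative assumptions only -- neither figure is a published SpaceX number.
depot_propellant_t = 1500.0  # tonnes assumed needed on orbit for a beyond-LEO departure
tanker_delivery_t = 100.0    # tonnes of usable propellant assumed delivered per tanker flight

# Number of tanker launches needed to fill the depot:
tanker_flights = math.ceil(depot_propellant_t / tanker_delivery_t)
print(tanker_flights)  # 15 with these assumptions
```

Raising the propellant delivered per flight (the payload-capacity problem the comment refers to) shrinks the flight count proportionally.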

parineum7 hours ago

> Falcon 9 is the world's workhorse rocket, but it's just not that remarkable

The funniest part of any thread relating to Musk is how hard people go into minimizing his accomplishments.

You don't have to like the guy (I don't) to acknowledge that the Falcon 9 is an engineering marvel and ushered in an entire new era of space travel, both reusable and private.

bobsmooth1 hour ago

>Falcon 9 is the world's workhorse rocket, but it's just not that remarkable

Falcon delivered the vast majority of mass into orbit last year, and the year before that

>Starship is utterly unproven to launch to orbit

It's already deployed test satellites into orbit. You're so intellectually dishonest you refuse to acknowledge things that have already happened.

numbers_guy9 hours ago

Unfortunate. The Grok team built a phenomenal model. I use it all the time and it very often outperforms GPT and Claude on coding and STEM-research-related tasks. I was part of the Grok 4.2 beta with multi-agents for a while and it was just amazingly good.

People aren't using it for reasons other than its capabilities. I mean, I don't think my boss would approve a paid Grok subscription for example.

ActorNightly1 hour ago

There is no way in hell Grok is better than Gemini. Google has the advantage of much more efficient and faster inference, with a lot more data sets.

Secondly, would you trust a model, especially for STEM research, that consistently has training loops run on it to make it adhere to what only Musk considers truth?

Honestly, comments like yours really make me super suspicious of whether you are a bot or not.

distances8 hours ago

> People aren't using it for reasons other than its capabilities.

This is very true. I have no idea how it performs, as I wouldn't use it even if I was paid for that. Wouldn't matter if it was the best model available, in my view the name is so thoroughly tainted by now that you would get a reputational hit just by admitting to use it.

ryandrake9 hours ago

> People aren't using it for reasons other than its capabilities.

This is a fact of life, though. "Who created it" is a valid and common reason to rule out using a particular product, even one with objectively good quality.

virgildotcodes3 hours ago

Have you tried the 5.3 Codex Xhigh, 5.4 Xhigh, Opus 4.6, Gemini 3.1?

All of them (even Gemini, the worst of the bunch) far outclass Grok on everything I've thrown at them, especially coding.

Grok is good at summarizing what's happening on twitter though.

lvl1559 hours ago

My experience was quite different. It was on par with open source models from China (and it was priced as much) and could never replace Sonnet/Opus/GPT5.x.

thinkcontext7 hours ago

Yes, the white genocide and mechahitler episodes have suppressed adoption.

stainablesteel9 hours ago

I'm not surprised; grok definitely falls behind as both a coding agent and a research tool.

claude codes the best, gpt is the best research tool, and grok is really only great at videos. which isn't a huge loss, but videos don't have the same functional capacity as academic topics and coding

alephnerd9 hours ago

> grok is really only great at videos. which isn't a huge loss, but videos don't have the same functional capacity as academic topics and coding

With the right product leadership, this could actually be a killer-app use case for the entertainment industry, as well as for human-AI interfaces: most people find text and typing a counterintuitive user experience (especially those whose day job isn't directly touching code or Excel).

Additionally, CodeGen as a segment is significantly oversaturated at this point, and in a lot of cases an organization has the ability to arm-twist a fourth-party data-retention guarantee from Anthropic or OpenAI to train their own CodeGen tools (I know one F50 that is not traditionally viewed as a tech company going this route).

That said, Musk has a reputation of internally overriding experienced product leaders with a track record.

It's a shame because Grok and xAI had potential, and it wouldn't hurt to have another semi-competitive foundation model player in the US from a redundancy and ecosystem perspective.

measurablefunc10 hours ago

It's surprising that AI coding agents have network effects but it's true. Think about it from first principles & you'll realize that the bottleneck is how many people are using it to write real code & providing both implicit (compiler errors, test failures, crash logs, etc) & direct ("did not properly follow instructions", "deleted main databases", "didn't properly use a tool", etc) feedback. No one is using xAI for serious software engineering so that leaves OpenAI, Anthropic, & Google w/ enough scale to benefit from network effects. No one has real AI but what they do have is the appearance of intelligence from crowdsourced feedback & filtering. This means companies that are already in the lead will continue to stay there & xAI started way too late so they will continue to lose in every domain that actually matters & benefits from network effects.

trollbridge9 hours ago

Is there really a network effect, though? What’s the moat?

measurablefunc9 hours ago

If you are using an AI w/ 100 users who are writing throwaway software vs someone who is using AI w/ 1000 users who are writing software w/ formal specifications then guess which AI is going to win? The answer is plainly obvious to me but might not be to those who haven't thought about how current AIs actually work.

heraldgeezer10 hours ago

I do use Grok as a chatbot sometimes. Very good for sourcing X and general web search. Not as "prudish" as the others, either.

LightBug19 hours ago

Prude? I've played with all the main AI players for the last 2'ish years.

I've never once thought: you know what? that was a bit prudish.

Genuinely morbidly curious. What use case do you have where you end up making that conclusion?

dlivingston8 hours ago

An earlier version of Sonnet (not sure which one; ~1 yr ago) refused to give me instructions on taking the life of another when I asked something like - "how do I kill a running process by name?"
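[Editor's note: the harmless request being misread there is a one-liner on most systems (`pkill -TERM <name>`). Here is a stdlib-only Python sketch of the same thing; it is Linux-specific (reads `/proc`), and the function name is my own illustration, not from any comment.]

```python
import os
import signal

def kill_by_name(name: str, sig: signal.Signals = signal.SIGTERM) -> int:
    """Send `sig` to every process whose command name matches `name`.

    Linux-only sketch: scans /proc/<pid>/comm instead of shelling out
    to pkill. Returns how many processes were signalled.
    """
    hits = 0
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                if f.read().strip() == name:
                    os.kill(int(pid), sig)
                    hits += 1
        except (FileNotFoundError, ProcessLookupError, PermissionError):
            pass  # process exited mid-scan, or isn't ours to signal
    return hits
```

Note that `/proc/<pid>/comm` truncates names to 15 characters, so matching on long binary names needs `cmdline` instead.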

bobsmooth60 minutes ago

The number of times ChatGPT has told me "I'm sorry Dave, I'm afraid I can't do that." makes me want to bash my head against the wall.

mikrl9 hours ago

Making funny memes of my friends mainly. ChatGPT won’t touch that, I haven’t tried with Claude yet, but grok keeps the group chat flush with laughing emojis.

That’s all I use it for really- things out of alignment with the other platforms- which IMO are better on every other metric (except having a sense of humour of course)

BigTTYGothGF9 hours ago

I love my friends enough that the memes I make for them are hand-crafted.

mikrl9 hours ago

Hey I’m all grown up now, just don’t have the time to meticulously touch pixels in MS Paint like back in the day

snackerblues4 hours ago

Any use case that you couldn't post about on your company Slack.

holoduke6 hours ago

Where is the Grok coding CLI?

sergiotapia6 hours ago

Will this be an indictment of the insane work hours I've heard the xAI team pulls?

BigTTYGothGF9 hours ago

I feel like even just a couple years ago it would have been shocking to see an article involving Musk have this kind of spin. Like you'd never see a line like this:

> The name is a “funny” reference to Microsoft, the billionaire added.

in something from 2023 or earlier.

lvl1559 hours ago

xAI showed me that it’s really still OAI and Anthropic (which is basically the OG devs). No matter how much money you throw at the problem, the entire space is still in the hands of a few.

rvz16 hours ago

Not even Elon believes that Cursor is worth $50B or even $29B.

Aurornis10 hours ago

If key employees are leaving Cursor to join xAI, I would imagine not even Cursor employees are optimistic about the company’s future valuation.

tibbar10 hours ago

How can cursor be worth more than a few billion? Claude/Codex are already better autonomous SWE-lite replacements. Cognition surely has a better internal harness. Cursor does have a lot of users, I'll give it that.

ok_dad10 hours ago

I like Cursor a lot more than Claude Code. It works better for me overall. I like the way they integrate it into the IDE so the agent is my tool rather than a 'partner' or something like that. I'm pretty sad that they lost some engineers, I hope these folks weren't integral to Cursor in any way.

serial_dev10 hours ago

Distribution is also important. Cursor is a great normie tool (I’m one of them), with probably more enterprise deals than the competition.

SV_BubbleTime9 hours ago

Moats are weird right now… but Cursor doesn’t have one at all so I agree it can’t really be worth much.

repple9 hours ago

Their goal of moving compute to space combined with their capacity to launch tons of payload will make this look like a tiny blip.

Marazan8 hours ago

What is the benefit of "moving compute to space"?

kybernetikos8 hours ago

It's hard for an uprising of poor people to shut it off. It's the ideal place to run your CEO / President simulations.

I say this tongue in cheek, but in all seriousness, I can't really think of any other benefit, and I no longer have a lot of faith in the good sense of some of the people involved.

vessenes8 hours ago

Elon makes a relatively good case in the Dwarkesh podcast. I recall it like this:

1) Energy infra is going to be seriously limited on the production side well, well below demand

2) engineering solar for space requires less material than gravity-based solar (!)

3) you cut out distribution network needs when you just launch stuff all per-pod in space

4) SpaceX thinks it can create a scalable vertically integrated production facility to turn raw materials into space datacenter pods, with the exception of chips.

As a business bet, this is predicated on 10,000x inference demand growth - if we have that, and SpaceX can get the integrated production rolling, and get Starship launching, then these will be actively utilized at scale.

Whether you are bullish on the whole plan should, I think come down to your take on those priors: 10kx growth, ability to manage supply chain and production, Starship outlook, and silicon access.

I'm not bearish on this after listening to the podcast; it has a very Elon-like returns distribution. If they're wrong on a lot of this, they'll probably have some moderately price-competitive datacenter facilities in space and a lot of built organizational know-how, while Brooklyn journalists dunk on them for spending all that effort just to replicate what we have on Earth. If they're right about most of this, they'll have an unreplicable head start, both from the years of experience and from the cheap launch capability they gambled on ten years ago: a nearly insurmountable moat.

ActorNightly1 hour ago

>Elon makes a relatively good case in the Dwarkesh podcast.

Are we still going to pretend that the man who has gotten every single prediction wrong so far knows what he is talking about?

skywhopper6 hours ago

Every one of those points is false or an outright lie, though.

imiric7 hours ago

You forgot 5: SpaceX has a monopoly on deploying satellites to LEO, with practically unlimited room for growth, and far less red tape and obstacles than anywhere on Earth. Whatever R&D and operational costs this insane engineering feat might have are offset by their market advantage, and Musk's Elizabeth Holmes-ian capability to fund his projects, in addition to relying on his own personal wealth and all of his other companies combined.

The fact that this lunatic is polluting humanity's view into the universe mainly for enriching himself and his shareholders, and that everyone is playing along with this, is sickening.

ActorNightly57 minutes ago

Not having to defend in court why polluting the area where you built your datacenter and fucking it up for the residents there is actually better for all mankind.

JumpCrisscross8 hours ago

> What is the benefit of "moving compute to space"?

I’ll bite. It’s cheaper and quicker to permit a launch than permit, zone and interconnect a datacenter. And solar panels in space don’t need glass cladding, which makes them cheaper to make and lift.

The downside is launch cost. But there is a breakeven between these factors that seems to have most of its error bars within Starship’s target. (By my math, around $35/kg.) So if Starship works, and all indications seem to show that it will, eventually, then that puts space-based data centers at cost parity with terrestrial ones within a decade. Which was, well, unexpected when I ran the numbers.

(The surprising finding when you run the numbers is launching the chips and solar panels isn’t the limiter, it’s launching the radiators. Which opens up whole new questions about at what scale it makes sense to stop sending those up the well.)
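The breakeven intuition above can be sketched as a toy calculation. All the numbers below are illustrative assumptions, not the parent's actual model: $2/W all-in terrestrial capex, $1.50/W for lightweight orbital hardware, and 70 W of delivered power per kg launched.

```python
# Toy breakeven sketch: at what launch price ($/kg) does orbital solar
# capex match terrestrial datacenter power capex? Purely illustrative.

def breakeven_launch_cost_per_kg(
    terrestrial_cost_per_w: float,    # $/W all-in on the ground (assumed)
    space_hardware_cost_per_w: float, # $/W for orbital panels/pods (assumed)
    specific_power_w_per_kg: float,   # delivered W per kg launched (assumed)
) -> float:
    """Launch $/kg at which the orbital option's capex equals terrestrial."""
    margin_per_w = terrestrial_cost_per_w - space_hardware_cost_per_w
    return margin_per_w * specific_power_w_per_kg

print(breakeven_launch_cost_per_kg(2.0, 1.5, 70))  # 35.0
```

With these assumed inputs the breakeven lands at $35/kg, the figure the parent mentions; change any assumption and the breakeven moves proportionally.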

tzs5 hours ago

> It’s cheaper and quicker to permit a launch than permit, zone and interconnect a datacenter

There's plenty of empty land sufficiently far from cities, not being used for anything else, that shouldn't have permitting or zoning problems.

For interconnect do that via satellite.

skywhopper6 hours ago

The capacity of a single datacenter would require thousands of launches to get the equipment into space. I don’t believe for a second that this would be easier in any way. Cooling and bandwidth are also completely unsolved for compute on a useful scale.
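The "thousands of launches" claim is easy to sanity-check with rough assumed figures: 1 GW of IT load, 10 W of whole-system power per kg in orbit (panels, radiators, compute, structure), and Starship's projected ~100 t payload to LEO.

```python
import math

# Rough launch-count sketch for a space datacenter. All inputs assumed.
it_load_w = 1e9                # 1 GW of compute load
specific_power_w_per_kg = 10   # whole-system W per kg in orbit (assumed)
payload_kg = 100_000           # projected Starship payload to LEO (~100 t)

total_mass_kg = it_load_w / specific_power_w_per_kg  # 1e8 kg of hardware
launches = math.ceil(total_mass_kg / payload_kg)
print(launches)  # 1000
```

At these assumptions, a single gigawatt-class facility is on the order of a thousand Starship flights; better specific power cuts that number linearly.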

danenania6 hours ago

What about maintenance? I’d naively assume that’s the killer.

layer88 hours ago

That xAI fails faster, hopefully.

Fricken7 hours ago
dang10 hours ago

I couldn't find a working archive link for the ft.com article - anyone?

Since it's the original source I've left it up, but added other URLs to the toptext.

natebc10 hours ago

I sent it to archive.ph here:

https://archive.ph/rP4cb

and it has the content but the formatting is atrocious.

HTH.

dang10 hours ago

Better than nothing - added above. Thanks!

dang10 hours ago

[stub for generic-indignant tangents - not what this site is for - please see https://news.ycombinator.com/newsguidelines.html]

throwaway202710 hours ago

Elon is such a clown, he keeps posting salty tweets about Anthropic, Claude Code, OpenAI and Codex yet has no competing product.

charlieflowers10 hours ago

He's about to have the most compute. Wonder if he can do anything noteworthy with it.

rishabhaiover10 hours ago

These kinds of HN submissions test how fair discussions can be here:

> Please don't use Hacker News for political or ideological battle. It tramples curiosity.

Reference: https://news.ycombinator.com/newsguidelines.html

Ar-Curunir5 hours ago

Elon is literally a political figure. How is one supposed to discuss his actions without invoking his politics?

dang2 hours ago

discuss != battle

Ar-Curunir1 hour ago

In the context of what Elon has done, the only real discussion should be condemnation. If that leads to Elon fans feeling embattled, well, they should get better role models to look up to.

throw_m2393395 hours ago

> Please don't use Hacker News for political or ideological battle. It tramples curiosity.

That ship has sailed a long time ago, with the approval of the moderation itself.

dang2 hours ago

That's excellent modbait, but of course what you say is the opposite of what we approve.

It's a complex and hard question, but the principles we apply to it have been around for a long time and are consistent with the site guidelines. If they weren't, we'd change the latter.

I've explained all of this many times. If you, or anyone, would like to know how we approach the question, you could start here:

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...

0xpgm3 hours ago

Yup, since around 2016 HN and other tech spaces got infested with people who cannot separate their political ideology from technical discussions.

When it comes to FOSS they claim that FOSS has always been political to justify the politicization of everything they touch.

Things used to be much better when people adhered to the age-old wisdom "Keep politics and religion out of the office" and carried this attitude into neutral spaces online.

In part, some of us got into tech because it was one of the places where meritocracy ruled and you could get away from those who thrive by overwhelming others with BS.

I apologize for the rant.

datsci_est_20153 hours ago

Being “apolitical” is a luxury of the privileged, especially in turbulent times.

True tests of courage, morals, and ethics are occurring more and more every day now, especially in the tech industry that is so closely intertwined with the regimes across the world who seek to cause great harm to those who do not look like, speak like, or believe in the same things as them.

"The only thing necessary for the triumph of evil is for good men to do nothing" - there’s your quote for political apathy.

johnnyanmac9 hours ago

So, it utterly fails? A good part of the community still seems to be stuck in 2017, when Elon could do no wrong.

Turns out a lot of not just wrong but outright malicious things could be done in 9 years. And worse yet, incompetent malice. I don't know why that has to be a political statement these days, but them's the breaks here.

snackerblues4 hours ago

Please don't use Hacker News for political or ideological battle. It tramples curiosity.

+1
johnnyanmac4 hours ago
kubb8 hours ago

Is it politics or ideology to recognize the flawed character of someone? How cultish his following is? His erratic behavior, the damage that he's doing?

Some people will cry "politics" just to take the voice away from those who dare to question their beloved celebrities.

baublet3 hours ago

Yeah, and it's not our fault every Elon discussion involves politics. It's literally all he does all day, and all he seems interested in anymore.

mathisfun1239 hours ago

[flagged]

croes8 hours ago

They trample science. The paradox of tolerance in action.

Who fights can lose, who doesn't fight has already lost.

I_am_tiberius9 hours ago

[flagged]

halfmatthalfcat9 hours ago

[flagged]

selkin9 hours ago

Many wouldn't, but some people share his values, and given the compensation, it makes saying "no" much harder. Money may not be the most important thing in life, but it does make life a lot easier to live.

pelorat9 hours ago

Same, I earn 60K as a senior, but I would never accept a 200K+ position at xAI.

yndoendo9 hours ago

As a US citizen, you'd have to pay me to engage with Elon Musk's businesses. He is not a good person and does not deserve respect or admiration.

daveguy9 hours ago

As a US citizen, you couldn't even pay me to engage with Elon Musk's businesses. He is not a good person and does not deserve respect or admiration.

weirdmantis699 hours ago

You wouldn't want to work for a genius? Probably the most significant person alive today?

troosevelt9 hours ago

I don't think he's a genius but if he is, it'd still be underneath my standards.

matsemann9 hours ago

I can think of lots of significant people I wouldn't work for..

davidwritesbugs9 hours ago

Get down to A&E quick, you've clearly drunk a potentially fatal amount of Elon Kool-Aid. Musk is a buffoon. Clever? Yes, by all accounts. Genius? Hardly. He's had luck, and made good judgments mostly offsetting the bad ones. Most of all, he has enough money to power through errors that would bankrupt thee & me.

rf158 hours ago

Evidently not genius enough to keep his car business and global image from failing. Genius he might be, but he's only entrenching his position in a way not dissimilar to cults: by alienating a lot of people you can get loyalty from a select few. If that's the kind of power he wants, sure, he's a genius. But a good businessman is something else.

InsideOutSanta9 hours ago

Let's assume that you are correct. How is that relevant to how good he is as an employer? There are lots of people in history who were very significant and perhaps geniuses in some way that I wouldn't want to work for in a billion years.

johnnyanmac4 hours ago

We had cabinet members for this administration call Trump a nazi months prior to the nomination. People give up all kinds of morals for financial gain. That was always true, but it's become outright blatant this past decade.

sourcegrift9 hours ago

There's a reason Europe is the world leader in technology, respect for humans and humanity.

weirdmantis699 hours ago

lmao

ThrowawayTestr9 hours ago

You're hilarious.

LightBug19 hours ago

Elon Musk is a generic-indignant tangent wanker and not what this site is for.

Thanks for providing a space for me to say that.

epolanski10 hours ago

tbh I wouldn't give Elon a dime even if Grok was miles better than competition.

dang10 hours ago

Ok, but please don't post unsubstantive comments here.

epolanski10 hours ago

Is it?

Elon's persona caused massive drops in usage of Twitter, sales of Teslas, etc.

Unsurprisingly many would not touch grok for the same distrust.

dang3 hours ago

Tastes differ, of course, but a comment consisting of nothing more than "I wouldn't give $so-and-so a dime even if $such-and-such was $this-or-that" definitely counts as unsubstantive by the usual standard here.

davidw8 hours ago

This is not a fully formed thought, so take it with a grain of salt:

Keeping politics off of here is a good idea.

Some things aren't really politics, but morals. Like, a discussion of different tax schemes or how much environmental regulations accomplish what they set out to do or something is 'politics'. Lamenting that there is "no homeland for white people" is... something else.

It's probably still not likely to have good outcomes as a subject of discussion here, but it's also something the tech industry needs to wrestle with somewhere, somehow.

My experience of the tech world was that it went from being a collection of oddballs, geeks, nerds and maybe kind of naive politically to mainstreaming some really evil shit.

I think this will come back to bite the industry, and depending on how angry the people with pitchforks and torches are, could end up hurting more than just the bad actors.

maxwell10 hours ago

Would you give one to Sam, Mark, or Sundar?

pupppet10 hours ago

What does our system say about itself when people of integrity so rarely rise to the top?

EricDeb10 hours ago

I dont know too much but Jensen Huang seems like a good guy

lobf10 hours ago

None of these guys literally has the blood of millions of people on their hands.

Elon’s gutting of USAID (and you can argue they would have done it anyways but he chose to be the executioner) will kill millions of people every year who otherwise would not have died.

Not only will I never give him a dime, I want him prosecuted and deported.

Edit: For those downvoting, we're already at an estimated 600k deaths: https://www.impactcounter.com/dashboard?view=table&sort=inte...

knicholes10 hours ago

Why?

epolanski10 hours ago

He's very hard to like, and he's hard to trust with anything.

reactordev10 hours ago

Moral grandstanding on the account of his political views and the fact that he does Nazi salutes on stage, on TV, for the world to see… might have something to do with it.

skywhopper10 hours ago

Because Elon is a criminal scam artist and a horrifying racist who seems to be completely detached from reality.

z3ratul16307110 hours ago

if it weren't for HN I would never get a glimpse of how life is on Bluesky

Layvier10 hours ago

this.

+2
SunshineTheCat10 hours ago
misiti378010 hours ago

[flagged]

fishcrackers9 hours ago

[dead]

cboyardee9 hours ago

[dead]

zombiwoof9 hours ago

[dead]

heliumtera10 hours ago

[flagged]

spprashant10 hours ago

He is re-building a company that he himself built less than 3 years ago?

randallsquared9 hours ago

Elon has less regard for sunk costs than most corporate leaders.

LightBug19 hours ago

Ironically, he's the sunk cost.

coliveira10 hours ago

[flagged]

dang10 hours ago

You've been a good HN user for many years, but lately your comment history has swerved towards ideological battle generally, and unsubstantive flamebait like this post. Can you please swerve back? It's not what this site is for, and destroys what it is for.

https://news.ycombinator.com/newsguidelines.html

Edit: before someone pounces, no, I'm in no way defending either E. Just trying to hold up HN.

+4
Herring10 hours ago
chairmanwow14 hours ago

[flagged]

KennyBlanken4 hours ago

So you deeply admire a man who threw a temper tantrum when his giant box, designed by a bunch of people with no experience in anything underwater or rescue, much less underwater rescue, was deemed unusable for rescuing people from an underwater cave with passages so small that divers had to remove their gear and push it ahead of them? And who then repeatedly, directly, called the lead rescuer a pedophile?

You deeply admire a man so unable to restrain his ego and temper that much of his production team at Tesla quit, some right to his face, because they couldn't meet his nearly impossible goal of extreme levels of automation on the Model 3 production line? Which, if all else is ignored, cost Tesla billions in delays because of his demands?

You deeply admire a man who is vehemently racist and misogynistic?

You deeply admire a man who latches onto just about any conspiracy theory?

You deeply admire a man who is so desperate for attention he unblocks himself from Twitter users' accounts?

You deeply admire a man whose companies were under investigation by nearly every federal enforcement agency there is?

You deeply admire a man who has failed to meet the vast majority of his own publicly stated benchmarks?

And who engages in PT Barnum levels of bullshit, like having "AI robots" that are actually just robots piloted by unemployed actors?

The man is a pathological liar who has failed upward not because of some sort of unique talent or skill, but because he's extremely abusive and willing to break any regulation or law he sees as inconvenient.

chairmanwow12 hours ago

Yes

SadErn10 hours ago

[dead]

beezlewax9 hours ago

[flagged]

quater3217 hours ago

[flagged]

antonvs7 hours ago

dang wrote:

> You may not owe you-know-whom better, but you owe this community better if you're participating in it.

This is like telling a country that’s being invaded that they can only respond with strongly worded letters when their enemy is dropping tactical nukes on them.

But hey, Paul Graham and cronies benefit from the status quo as much as any other billionaire, so let’s not rock the boat, right?

The word “complicit” comes to mind.

dang2 hours ago

It's nothing like that. We're just trying to have an internet forum that manages to stay slightly more interesting than the internet mean. Or, if you like, that doesn't burn itself to a crisp and turn into scorched earth. It is rather a modest goal. I think there is a place for such a website, and I believe most HN readers do too.

gkfasdfasdf8 hours ago

The grok button on twitter is pretty awesome. Instantly summarize / explain any tweet, even memes, including replies. Ask follow up questions. Not sure many people know it's there.

Also grok in the Tesla is fun, get answers to questions without looking at a phone. I once had it search up a blog post and read it out to me while driving. The NSFW mode is pretty...disgusting so I leave that off.

I hope they find a way with Optimus or something. FSD is incredible. More competition is a good thing.

blueaquilae9 hours ago

Yes, 11 upvotes, and everyone piles free insults on a model with top adoption. Being aligned with your personal view doesn't make something ahead of the curve; it's just personal.