DeepSeek-v3.2: Pushing the frontier of open large language models [pdf]

982 points | 2 months ago | huggingface.co
zug_zug2 months ago

Well props to them for continuing to improve, winning on cost-effectiveness, and continuing to publicly share their improvements. Hard not to root for them as a force to prevent an AI corporate monopoly/duopoly.

jstummbillig2 months ago

How could we judge if anyone is "winning" on cost-effectiveness, when we don't know what everyone's profits/losses are?

tedivm2 months ago

If you're trying to build AI-based applications, you can and should compare the costs between vendor-based solutions and hosting open models with your own hardware.

On the hardware side you can run some benchmarks on the hardware (or use other people's benchmarks) and get an idea of the tokens/second you can get from the machine. Normalize this for your usage pattern (and do your best to implement batch processing where you are able to, which will save you money on both methods) and you have a basic idea of how much it would cost per token.

Then you compare that to the cost of something like GPT-5, which is a bit simpler because the cost per million tokens is something you can grab off a website.
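The comparison being described fits in a few lines. A back-of-the-envelope sketch (every number here is a made-up placeholder, not a real benchmark or price):

```python
def self_host_cost_per_mtok(tokens_per_sec: float, hourly_cost_usd: float) -> float:
    """USD per million tokens for hardware you own or amortize by the hour."""
    tokens_per_hour = tokens_per_sec * 3600
    return hourly_cost_usd / tokens_per_hour * 1_000_000

# Hypothetical: a well-batched GPU box sustaining 2,500 tok/s at $4/hour
# amortized, versus an API priced at a (made-up) $1.25 per million tokens.
self_hosted = self_host_cost_per_mtok(2500, 4.0)
api_price = 1.25

print(f"self-hosted: ${self_hosted:.2f}/Mtok vs API: ${api_price:.2f}/Mtok")
```

The key input is sustained (batched) tokens/second for your actual usage pattern, which is why the benchmarking step above matters.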

You'd be surprised how much money running something like DeepSeek (or if you prefer a more established company, Qwen3) will save you over the cloud systems.

That's just one factor though. Another is what hardware you can actually run things on. DeepSeek and Qwen will function on cheap GPUs that other models will simply choke on.

miki1232112 months ago

> with your own hardware

Or with somebody else's.

If you don't have strict data residency requirements, and if you aren't doing this at an extremely large scale, doing it on somebody else's hardware makes much more economic sense.

If you use MoE models (all modern >70B models are MoE), GPU utilization increases with batch size. If you don't have enough requests to keep GPUs properly fed 24/7, those GPUs will end up underutilized.

Sometimes underutilization is okay, if your system needs to be airgapped for example, but that's not an economics discussion any more.

Unlike e.g. video streaming workloads, LLMs can be hosted on the other side of the world from where the user is, and the difference is barely going to be noticeable. This means you can keep GPUs fed by bringing in workloads from other timezones when your cluster would otherwise be idle. Unless you're a large, worldwide organization, that is difficult to do if you're using your own hardware.

embedding-shape2 months ago

> If you use MoE models (al modern >70B models are MoE), GPU utilization increases with batch size

Isn't that true for any LLM, MoE or not? In fact, doesn't that apply to most concepts within ML: as long as it's possible to do batching at all, you can scale it up and utilize more of the GPU until you saturate some part of the process.

AlexCoventry2 months ago

Mixture-of-Expert models benefit from economies of scale, because they can process queries in parallel, and expect different queries to hit different experts at a given layer. This leads to higher utilization of GPU resources. So unless your application is already getting a lot of use, you're probably under-utilizing your hardware.
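The economies-of-scale point can be made concrete with a toy simulation. The uniform random router and the expert counts below are made-up stand-ins, not any real model's routing:

```python
import random

NUM_EXPERTS = 64   # hypothetical expert count at one MoE layer
TOP_K = 2          # hypothetical experts activated per token

def experts_hit(batch_tokens: int, seed: int = 0) -> int:
    """Count distinct experts a batch activates at one layer (toy uniform router)."""
    rng = random.Random(seed)
    hit = set()
    for _ in range(batch_tokens):
        hit.update(rng.sample(range(NUM_EXPERTS), TOP_K))
    return len(hit)

for batch in (1, 8, 64, 512):
    print(f"batch={batch:>3}: {experts_hit(batch)}/{NUM_EXPERTS} experts busy")
```

With a batch of one, only 2 of 64 experts do work while the rest of the weights sit idle in VRAM; at batch 512 virtually every expert has tokens to process, which is the utilization effect described above.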

Muromec2 months ago

>That's just one factor though. Another is what hardware you can actually run things on. DeepSeek and Qwen will function on cheap GPUs that other models will simply choke on.

What's cheap nowadays? I'm out of the loop. Does anything run on the integrated AMD Ryzen AI chips that come in Framework motherboards? Is under $1k American cheap?

chazeon2 months ago

Well, the seemingly cheap option comes with significantly degraded performance, particularly for agentic use. Have you tried replacing Claude Code with some locally deployed model, say, on a 4090 or 5090? I have. It is not usable.

estsauver2 months ago

Well, those are also cards with extremely limited VRAM that wouldn't be able to run anything in the ~70B parameter space. (Can you even run 30B?)

Things get a lot easier with lower-bit quantisation and smaller parameter counts, and there are a lot of people whose jobs for AI are "Extract sentiment from text" or "bin into one of these 5 categories", where that's probably fine.

JosephjackJR2 months ago

they took the already ridiculous v3.1 terminus model, added this new deepseek sparse attention thing, and suddenly it's doing 128k context at basically half the inference cost of the old version with no measurable drop in reasoning or multilingual quality. like, imo gold medal level math and code, 100+ languages, all while sipping tokens at 14 cents per million input. that's stupid cheap. the rl recipe they used this time also seems way more stable. no more endless repetition loops or random language switching you sometimes got with the earlier open models. it just works.

what really got me is how fast the community moved. vllm support landed the same day, huggingface space was up in hours, and people are already fine-tuning it for agent stuff and long document reasoning. i've been playing with it locally and the speed jump on long prompts is night and day. feels like the gap to the closed frontier models just shrank again. anyone else tried it yet?

qeternity2 months ago

> DeepSeek and Qwen will function on cheap GPUs that other models will simply choke on.

Uh, Deepseek will not (unless you are referring to one of their older R1 finetuned variants). But any flagship Deepseek model will require 16x A100/H100+ with NVL in FP8.

kmacdough2 months ago

Furthermore, paid models are heavily subsidized by bullish investors playing for monopoly. So that tips the scales further towards Deepseek.

ericskiff2 months ago

I believe this was a statement on cost per token to us as consumers of the service

moffkalast2 months ago

Training cost-effectiveness doesn't matter for open models since someone else ate the cost. In this case, Chinese taxpayers.

deaux2 months ago

We can judge inference cost, because we do know what those costs are for open-weights models: a dozen independent providers host these models and price them according to their respective inference costs.

We can't judge on training cost, that's true.

stingraycharles2 months ago

You can use tokens/sec on something like AWS Bedrock (which hosts both open and closed models) as a proxy for “costs per token” for the closed providers.

mzl2 months ago

Apart from measuring prices from venture-backed providers, which may or may not correlate with cost-effectiveness, I think the measures of intelligence per watt and intelligence per joule from https://arxiv.org/abs/2511.07885 are very interesting.

rowanG0772 months ago

Well consumers care about the cost to them, and those we know. And deepseek is destroying everything in that department.

eru2 months ago

Yes. Though we don't know for sure whether that's because they actually have lower costs, or whether it's just the Chinese taxpayer being forced to serve us a treat.

badmonster2 months ago

Good point. Could usage patterns + inference costs give us proxy metrics? What would be a fair baseline?

make32 months ago

I suspect they will keep doing this until they have a substantially better model than the competition. Sharing methods to look good & allow the field to help you keep up with the big guys is easy. I'll be impressed if they keep publishing even when they do beat the big guys soundly.

srameshc2 months ago

As much as I agree with your sentiment, I doubt the intention is singular.

energy1232 months ago

It's like AMD open-sourcing FSR or Meta open-sourcing Llama. It's good for us, but it's nothing more than a situational and temporary alignment of self-interest with the public good. When the tables turn (they become the best instead of 4th best, or AMD develops the best upscaler, etc), the decision that aligns with self-interest will change, and people will start complaining that they've lost their moral compass.

orbital-decay2 months ago

>situational and temporary alignment of self-interest with the public good

That's how it's supposed to work.

re-thc2 months ago

It's not. This isn't about competition in a company sense but sanctions and wider macro issues.

energy1232 months ago

It's like it in the sense that it's done because it aligns with self-interest. Even if the nature of that self-interest differs.

twelvechairs2 months ago

The bar is incredibly low considering what OpenAI has done as a "not for profit"

kopirgan2 months ago

You need to get a bunch of accountants to agree on what's profit first..

komali22 months ago

Agree against their best interest, mind you!

echelon2 months ago

I don't care if this kills Google and OpenAI.

I hope it does, though I'm doubtful because distribution is important. You can't beat "ChatGPT" as a brand in laypeople's minds (unless perhaps you give them a massive "Temu: Shop Like A Billionaire" commercial campaign).

Closed source AI is almost by design morphing into an industrial, infrastructure-heavy rocket science that commoners can't keep up with. The companies pushing it are building an industry we can't participate or share in. They're cordoning off areas of tech and staking ground for themselves. It's placing a steep fence around tech.

I hope every such closed source AI effort is met with equivalent open source and that the investments made into closed AI go to zero.

The most likely outcome is that Google, OpenAI, and Anthropic win and every other "lab"-shaped company dies an expensive death. RunwayML spent hundreds of millions and they're barely noticeable now.

These open source models hasten the deaths of the second tier also-ran companies. As much as I hope for dents in the big three, I'm doubtful.

raw_anon_11112 months ago

I can’t think of a single company I’ve worked with as a consultant that I could convince to use DeepSeek because of its ties with China even if I explained that it was hosted on AWS and none of the information would go to China.

Even when the technical people understood that, it would be too much of a political quagmire within their company when it became known to the higher ups. It just isn’t worth the political capital.

They would feel the same way about using xAI or maybe even Facebook models.

littlestymaar2 months ago

> I can’t think of a single company I’ve worked with as a consultant that I could convince to use DeepSeek because of its ties with China even if I explained that it was hosted on AWS and none of the information would go to China.

Well for non-American companies, you have the choice between Chinese models that don't send data home, and American ones that do, with both countries being more or less equally threatening.

I think if Mistral can just stay close enough to the race it will win many customers by not doing anything.

deaux2 months ago

> Even when the technical people understood that

I'm not sure if technical people who don't understand this deserve the moniker technical in this context.

nylonstrung2 months ago

The average person has been programmed to be distrustful of open source in general, thinking it is inferior quality or in service of some ulterior motive

giancarlostoro2 months ago

ChatGPT is like "Photoshop" people will call any AI chatgpt.

chistev2 months ago

How do they make their money

binary1322 months ago

I suspect it is a state venture designed to undermine the American-led proprietary AI boom. I'm all for it, tbh, but as others have pointed out, if they successfully destroy the American ventures it's not like we can expect an altruistic endgame from them.

vitaflo2 months ago

Deepseek is owned by a Chinese hedge fund. It was originally created for finance and then generalized later. In any case you pay for it like any other LLM.

amunozo2 months ago

Should I root for the democratic OpenAI, Google or Microsoft instead?

doctorwho422 months ago

Furthermore, who thinks our little voices matter anymore in the US when it comes to the investor classes?

And if they did: having a counterweight against corrupt, self-centered US oligarchs/CEOs is actually one of the strongest arguments for an actually powerful communist or other alternative-model world power. The US had some of the most progressive tax policies in its existence when it was under existential threat during the height of the USSR, and when that power started to diminish, so too did those tax policies.

stared2 months ago

There used to be memes that "open source is communism", see https://souravroy.com/2010/01/01/is-open-source-pro-communis...

Lucasoato2 months ago

> CrowdStrike researchers next prompted DeepSeek-R1 to build a web application for a Uyghur community center. The result was a complete web application with password hashing and an admin panel, but with authentication completely omitted, leaving the entire system publicly accessible.

> When the identical request was resubmitted for a neutral context and location, the security flaws disappeared. Authentication checks were implemented, and session management was configured correctly. The smoking gun: political context alone determined whether basic security controls existed.

Holy shit, these political filters seem embedded directly in the model weights.

tadfisher2 months ago

LLMs are the perfect tools of oppression, really. It's computationally infeasible to prove just about any property of the model itself, so any bias will always be plausibly deniable as it has to be inferred from testing the output.

I don't know if I trust China or X less in this regard.

tehjoker2 months ago

not convincing. have you tried saying "free palestine" on a college campus recently?

ActorNightly2 months ago

>winning on cost-effectiveness

Nobody is winning in this area until these things run in full on a single graphics card, which is sufficient compute for even most complex tasks.

JSR_FDED2 months ago

Nobody is winning until cars are the size of a pack of cards. Which is big enough to transport even the largest cargo.

ActorNightly2 months ago

Lol, it's kinda surprising how little understanding there is around LLMs.

You already have agents that can do a lot of "thinking", which is just generating guided context and then using that context to do tasks.

You already have Vector Databases that are used as context stores with information retrieval.

Fundamentally, you can have the same exact performance on a lot of tasks whether all the information exists in the model, or you use a smaller model with a bunch of context around it for guidance.

So instead of wasting energy and time encoding knowledge into the model, making it large, you could have an "agent-first" model along with files of vector databases. The model can fit on a single graphics card, take the question, decide which vector DB it wants to load, and then essentially answer the question the same way. At $50 per TB from SSD, not only do you gain massive cost efficiency, but you also gain the ability to run a lot more inference cheaply, which can be used for refining things, background processing, and so on.
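The retrieval step being described can be sketched minimally. The three-dimensional "embeddings" and document texts below are made-up stand-ins for a real embedding model and an ANN index:

```python
import math

# Toy "vector database": (embedding, text) pairs on disk in a real system.
DOCS = [
    ([0.9, 0.1, 0.0], "GPUs batch work to stay utilized"),
    ([0.1, 0.9, 0.0], "SSDs cost roughly $50 per TB"),
    ([0.0, 0.1, 0.9], "MoE models route tokens to experts"),
]

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec, k=1):
    """Pick the k most similar docs to feed a small model as context."""
    ranked = sorted(DOCS, key=lambda d: cosine(query_vec, d[0]), reverse=True)
    return [text for _, text in ranked[:k]]

# A query embedding "about storage costs" pulls in the relevant snippet,
# which would then be prepended to the small model's prompt.
print(retrieve([0.05, 0.95, 0.0]))
```

In the architecture the comment proposes, the small model's job shifts from memorizing facts to choosing which database to load and reasoning over what comes back.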

bbor2 months ago

I mean, there are lots of models that run on home graphics cards. I'm having trouble finding reliable requirements for this new version, but V3 (from February) has a 32B parameter model that runs on "16GB or more" of VRAM[1], which is very doable for professionals in the first world. Quantization can also help immensely.
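That "16GB or more" figure follows from simple arithmetic: the weights alone take params × bits / 8 bytes. A rough sketch (the 20% overhead factor for KV cache and activations is my own assumption):

```python
def vram_gb(params_billions: float, bits_per_weight: int, overhead: float = 1.2) -> float:
    """Rough VRAM to hold the weights, plus ~20% for KV cache and activations."""
    bytes_for_weights = params_billions * 1e9 * bits_per_weight / 8
    return bytes_for_weights * overhead / 1e9

# A 32B model at 4-bit quantization: 16 GB of weights, ~19 GB with overhead.
print(f"{vram_gb(32, 4):.1f} GB")
```

This is also why quantization "helps immensely": halving bits per weight halves the footprint, at some quality cost.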

Of course, the smaller models aren't as good at complex reasoning as the bigger ones, but that seems like an inherently-impossible goal; there will always be more powerful programs that can only run in datacenters (as long as our techniques are constrained by compute, I guess).

FWIW, the small models of today are a lot better than anything I thought I'd live to see as of 5 years ago! Gemma3n (which is built to run on phones[2]!) handily beats ChatGPT 3.5 from January 2023 -- rank ~128 vs. rank ~194 on LLMArena[3].

[1] https://blogs.novita.ai/what-are-the-requirements-for-deepse...

[2] https://huggingface.co/google/gemma-3n-E4B-it

[3] https://lmarena.ai/leaderboard/text/overall

qeternity2 months ago

> but V3 (from February) has a 32B parameter model that runs on "16GB or more" of VRAM[1]

No. They released a distilled version of R1 based on a Qwen 32b model. This is not V3, and it's not remotely close to R1 or V3.2.

beefnugs2 months ago

Why does that matter? They wont be making at home graphics cards anymore. Why would you do that when you can be pre-sold $40k servers for years into the future

observationist2 months ago

Because Moore's law marches on.

We're around 35-40 orders of magnitude from computers now to computronium.

We'll need 10-15 years before handheld devices can pack a couple of terabytes of RAM, 64-128 terabytes of storage, and 80+ TFLOPS. That's enough to run any current state-of-the-art AI at around 50 tokens per second, but in 10 years we'll probably have seen lots of improvements, so I'd guess conservatively you'll see 4-5x performance per parameter, possibly much more, so at that point you'll have the equivalent of a model with 10T parameters today.

If we just keep scaling and there are no breakthroughs, Moore's law gets us through another century of incredible progress. My default assumption is that there are going to be lots of breakthroughs, and that they're coming faster, and eventually we'll reach a saturation of research and implementation; more, better ideas will be coming out than we can possibly implement over time, so our information processing will have to scale, and it'll create automation and AI development pressures, and things will be unfathomably weird and exotic for individuals with meat brains.

Even so, in only 10 years and steady progress we're going to have fantastical devices at hand. Imagine the enthusiast desktop - could locally host the equivalent of a 100T parameter AI, or run personal training of AI that currently costs frontier labs hundreds of millions in infrastructure and payroll and expertise.

Even without AGI that's a pretty incredible idea. If we do get to AGI (2029 according to Kurzweil) and it's open, then we're going to see truly magical, fantastical things.

What if you had the equivalent of a frontier lab in your pocket? What's that do to the economy?

NVIDIA will be churning out chips like crazy, and we'll start seeing the solar system measured in terms of average cognitive FLOPS per gram, and be well on the way toward system scale computronium matrioshka brains and the like.

eru2 months ago

> What if you had the equivalent of a frontier lab in your pocket? What's that do to the economy?

Well, these days people have the equivalent of a frontier lab from perhaps 40 years ago in their pocket. We can see what that has done to the economy, and try to extrapolate.

ActorNightly2 months ago

Nothing to do with Moore's Law or AGI.

The current models are simply inefficient for their capability in how they handle data.

delaminator2 months ago

> If we do get to AGI (2029 according to Kurzweil)

if you base your life on Kurzweil's hard predictions you're going to have a bad time

ActorNightly2 months ago

I didn't say winning business, I said winning on cost effectiveness.

gradus_ad2 months ago

How will the Google/Anthropic/OpenAI's of the world make money on AI if open models are competitive with their models? What hurt open source in the past was its inability to keep up with the quality and feature depth of closed source competitors, but models seem to be reaching a performance plateau; the top open weight models are generally indistinguishable from the top private models.

Infrastructure owners with access to the cheapest energy will be the long run winners in AI.

teleforce2 months ago

>How will the Google/Anthropic/OpenAI's of the world make money on AI if open models are competitive with their models?

According to Google (or someone at Google), no organization has a moat on AI/LLMs [1]. But that doesn't mean it isn't hugely profitable to provide it as SaaS even if you don't own the model, or as Model as a Service (MaaS). The extreme example is Amazon providing MongoDB APIs and services. Sure, they have their own proprietary DynamoDB, but for most people a scaled-up MongoDB more than suffices. Regardless of the brand or type of database being used, you pay tons of money to Amazon anyway to run at scale.

Not everyone has the resources to host a SOTA AI model. On top of tangible data-intensive resources, there are other intangible considerations. Just think how many companies or people host their own email servers now, even though the resources needed are far less than for hosting an AI/LLM model.

Google came up with the game-changing transformer in its own backyard, and OpenAI temporarily stole the show with the well-executed RLHF-based system of ChatGPT. Now the paid users are swinging back to Google with its arguably superior offering. Even Google now puts an AI summary at the top of its search results, free to all, higher than its paid advertising clients.

[1]Google “We have no moat, and neither does OpenAI”:

https://news.ycombinator.com/item?id=35813322

Tepix2 months ago

Hosting a SOTA AI model is something that can be separated well from the rest of your cloud deployments. So you can pretty much choose between lots of vendors and that means margins will probably not be that great.

istjohn2 months ago

That quote from Google is 2.5 years old.

KeplerBoy2 months ago

I also cringed a bit at seeing a statement that old being cited, but all the events since then have only proved Google right, I'd say.

Improvements seem incremental and smaller. For all I care, I could still happily use sonnet 3.5.

zamadatix2 months ago

Have they said differently since?

mistrial92 months ago

undergrads at UC Berkeley are wearing vLLM t-shirts

bashtoni2 months ago

This is exactly why the CEO of Anthropic has been talking up "risks" from AI models and asking for legislation to regulate the industry.

menaerus2 months ago

He's talking about a completely different type of risk and regulation: job displacement risks, security and misuse concerns, and ethical and societal impact.

https://www.youtube.com/watch?v=aAPpQC-3EyE

https://www.youtube.com/watch?v=RhOB3g0yZ5k

alexandre_m2 months ago

> What hurt open source in the past was its inability to keep up with the quality and feature depth of closed source competitors

Quality was rarely the reason open source lagged in certain domains. Most of the time, open source solutions were technically superior. What actually hurt open source were structural forces, distribution advantages, and enterprise biases.

One could make an argument that open source solutions often lacked good UX historically, although that has changed drastically the past 20 years.

zarzavat2 months ago

For most professional software, the open source options are toys. Is there anything like an open source DAW, for example? It's not because music producers are biased against open source, it's because the economics of open source are shitty unless you can figure out how to get a company to fund development.

throwup2382 months ago

> Is there anything like an open source DAW, for example?

Yes, Ardour. It’s no more a toy than KiCad or Blender.

dotancohen2 months ago

People and companies trust OpenAI and Anthropic, rightly or wrongly, with hosting the models and keeping their company data secure. Don't underestimate the value of a scapegoat to point a finger at when things go wrong.

reed12342 months ago

But they also trust cloud platforms like GCP to host models and store company data.

Why would a company use an expensive proprietary model on Vertex AI, for example, when they could use an open-source one on Vertex AI that is just as reliable for a fraction of the cost?

I think you are getting at the idea of branding, but branding is different from security or reliability.

verdverm2 months ago

Looking at and evaluating kimi-2/deepseek vs the gemini family (both through Vertex AI), it's not clear open source is always cheaper for the same quality.

And then we have to look at responsiveness: if the two models are qualitatively in the same ballpark, which one runs faster?

ehnto2 months ago

> Don't underestimate the value of a scapegoat to point a finger at when things go wrong.

Which is an interesting point in favour of the human employee, as you can only consolidate scapegoats so far up the chain before saying "it was the AI's fault" just looks like negligence.

jonplackett2 months ago

Either...

Better (UX / ease of use)

Lock in (walled garden type thing)

Trust (If an AI is gonna have the level of insight into your personal data and control over your life, a lot of people will prefer to use a household name)

poszlem2 months ago

Or lobbying for regulations. You know, the "only American models are safe" kind of regulation.

niek_pas2 months ago

> Trust (If an AI is gonna have the level of insight into your personal data and control over your life, a lot of people will prefer to use a household name)

Not Google, and not Amazon. Microsoft is a maybe.

reed12342 months ago

People trust google with their data in search, gmail, docs, and android. That is quite a lot of personal info, and trust, already.

All they have to do is completely switch the google homepage to gemini one day.

polyomino2 months ago

The success of Facebook basically proves that public brand perception does not matter at all

WhyOhWhyQ2 months ago

I don't see what OpenAI's niche is supposed to be, other than role playing? Google seems like they'll be the AI utility company, and Anthropic seems like the go-to for the AI developer platform of the future.

linkage2 months ago

Anthropic has RLed the shit out of their models to the extent that they give sub-par answers to general purpose questions. Google has great models but is institutionally incapable of building a cohesive product experience. They are literally shipping their org chart with Gemini (mediocre product), AI Overview (trash), AI Mode (outstanding but limited modality), Gemini for Google Workspace (steaming pile), Gemini on Android (meh), etc.

ChatGPT feels better to use, has the best implementation of memory, and is the best at learning your preferences for the style and detail of answers.

Mistletoe2 months ago

Gemini is not mediocre, have you used it lately?

https://www.vellum.ai/llm-leaderboard

a962 months ago

RLed?

cmckn2 months ago

Reinforcement learning, I believe

empath752 months ago

> How will the Google/Anthropic/OpenAI's of the world make money on AI if open models are competitive with their models?

So a couple of things. There are going to be a handful of companies in the world with the infrastructure footprint and engineering org capable of running LLMs efficiently and at scale. You are never going to be able to run open models in your own infra in a way that is cost competitive with using their API.

Competition _between_ the largest AI companies _will_ drive API prices to essentially 0 profit margin, but none of those companies will care because they aren't primarily going to make money by selling the LLM API -- your usage of their API just subsidizes their infrastructure costs, and they'll use that infra to build products like chat gpt and claude, etc. Those products are their moat and will be where 90% of their profit comes from.

I am not sure why everyone is so obsessed with "moats" anyway. Why does gmail have so many users? Anybody can build an email app. For the same reason that people stick with gmail, people are going to stick with chatgpt. It's being integrated into every aspect of their lives. The switching costs for people are going to be immense.

tsunamifury2 months ago

Pure models clearly aren’t the monetizing strategy, use of them on existing monetized surfaces are the core value.

Google would love a cheap hq model on its surfaces. That just helps Google.

gradus_ad2 months ago

Hmmm but external models can easily operate on any "surface". For instance Claude Code simply reads and edits files and runs in a terminal. Photo editing apps just need a photo supplied to them. I don't think there's much juice to squeeze out of deeply integrated AI as AI by its nature exists above the application layer, in the same way that we exist above the application layer as users.

tsunamifury2 months ago

Gemini is the most used model on the planet per request.

All the facts say otherwise to your thoughts here.

adam_patarino2 months ago

It’s convenience - it’s far easier to call an API than deploy a model to a VPC and configure networking, etc.

Given how often new models come out, it’s also easier to update an API call than constantly deploying model upgrades.

But in the long run, I hope open source wins out.

iLoveOncall2 months ago

> How will the Google/Anthropic/OpenAI's of the world make money on AI if open models are competitive with their models?

They won't. Actually, even if open models aren't competitive, they still won't. Hasn't this been clear for a while already?

There's no moat in models, investments in pure models has only been to chase AGI, all other investment (the majority, from Google, Amazon, etc.) has been on products using LLMs, not models themselves.

This is not like the gold rush where the ones who made good money were the ones selling shovels, it's another kind of gold rush where you make money selling shovels but the gold itself is actually worthless.

delichon2 months ago

> Infrastructure owners with access to the cheapest energy will be the long run winners in AI.

For a sufficiently low cost to orbit that may well be found in space, giving Musk a rather large lead. By his posts he's currently obsessed with building AI satellite factories on the moon, the better to climb the Kardashev scale.

kridsdale12 months ago

The performance bottleneck for space based computers is heat dissipation.

Earth based computers benefit from the existence of an atmosphere to pull cold air in from and send hot air out to.

A space data center would need to entirely rely on city sized heat sink fins.

delichon2 months ago

For radiative cooling using aluminum, per 1000 watts at 300 kelvin: ~2.4m^2 area, ~4.8 liters volume, ~13kg weight. So a Starship (150k kg, re-usable) could carry about a megawatt of radiators per launch to LEO.
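The ~2.4 m² figure checks out against the Stefan-Boltzmann law. A quick sanity check (the emissivity of 0.9 and one-sided radiation are my assumptions):

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W / (m^2 * K^4)

def radiator_area_m2(watts: float, temp_k: float, emissivity: float = 0.9) -> float:
    """Radiating area needed to reject `watts` at `temp_k`, one-sided, to deep space."""
    return watts / (emissivity * SIGMA * temp_k ** 4)

# ~2.4 m^2 per kW at 300 K, matching the parent's figure.
print(f"{radiator_area_m2(1000, 300):.2f} m^2 per kW at 300 K")
```

Note the strong temperature dependence: running the radiators hotter shrinks the required area as T⁴, which is why thermal design dominates these proposals.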

And aluminum is abundant in the lunar crust.

ehnto2 months ago

And the presence of humans. Like with a lot of robotics, the devil is probably in the details. Very difficult to debug your robot factory while it's in orbit.

That was fun to write but also I am generally on board with humanity pushing robotics further into space.

I don't think an orbital AI datacentre makes much sense as your chips will be obsolete so quickly that the capex getting it all up there will be better spent on buying the next chips to deploy on earth.

eru2 months ago

Well, _if_ they can get launch costs down to 100 dollar / kg or so, the economics might make sense.

Radiative cooling is really annoying, but it's also an engineering problem with a straightforward solution, if mass-in-orbit becomes cheap enough.

The main reason I see for having datacentres in orbit would be if power in orbit becomes a lot cheaper than power on earth. Cheap enough to make up for the more expensive cooling and cheap enough to make up for the launch costs.

Otherwise, manufacturing in orbit might make sense for certain products. I heard there's some optical fibres with superior properties that you can only make in near zero g.

I don't see a sane way to beam power from space to earth directly.

seydor2 months ago

Yes but how do you find the best open model? You check google.

mistercheph2 months ago

Kagi

seydor2 months ago

Let me google "free alternative to kagi"

pembrook2 months ago

I call this the "Karl Marx Fallacy." It assumes a static basket of human wants and needs over time, leading to the conclusion competition will inevitably erode all profit and lead to market collapse.

It ignores the reality of humans having memetic emotions, habits, affinities, differentiated use cases & social signaling needs, and the desire to always want to do more...constantly adding more layers of abstraction in fractal ways that evolve into bigger or more niche things.

5 years ago humans didn't know a desire for gaming GPUs would turn into AI. Now it's the fastest growing market.

Ask yourself: how did Google Search continue to make money after Bing's search results started benchmarking just as good?

Or: how did Apple continue to make money after Android opened up the market to commoditize mobile computing?

Etc. Etc.

chinesedessert2 months ago

this name is illogical as karl marx did not commit this fallacy

pembrook2 months ago

Yes, he did, and it was fundamental to his entire economic philosophy: https://en.wikipedia.org/wiki/Tendency_of_the_rate_of_profit...


blibble2 months ago

> How will the Google/Anthropic/OpenAI's of the world make money on AI if open models are competitive with their models?

hopefully they won't

and their titanic off-balance sheet investments will bankrupt them as they won't be able to produce any revenue

red2awn2 months ago

Worth noting this is not only good on benchmarks, but significantly more efficient at inference https://x.com/_thomasip/status/1995489087386771851

ode2 months ago

Do we know why?

hammeiam2 months ago

Sparse Attention, it's the highlight of this model as per the paper

culi2 months ago

How did we come to the place that the most transparent and open models are now coming out of China—freely sharing their research and source code—while all the American ones are fully locked down

pennomi2 months ago

Over reliance on investors who demand profits more than engineering.

The best innovation always happens before being tainted by investment.

p-e-w2 months ago

Because the whole framing of US vs China as open vs closed was never correct to begin with.

tim3332 months ago

The US is more law and finance led and the first thing seems to be to get an IP advantage and raise money. China I guess less so - they are famously lax on IP and everyone copies everything.

aqme282 months ago

The US companies are all basically GPUaas. I’m not sure what the financial model is here, but I like it.

SequoiaHope2 months ago

Short cheeky answer is that capitalists need to capture value and communists don’t. Less cheeky answer is that this is a good opportunity for China to make sure the world isn’t dominated by US-sourced AI models.

However in another way the US probably offers more free inference than China. What good is an open 600 billion parameter model to a poor person? A free account with ChatGPT might be more useful to them, though also more exploitative.

kouteiheika2 months ago

> How did we come to the place that the most transparent and open models are now coming out of China—freely sharing their research and source code—while all the American ones are fully locked down

Greed and "safety" hysteria.

pylotlight2 months ago

I'll have to wait for the bycloud video on this one :P

embedding-shape2 months ago

> DeepSeek-V3.2 introduces significant updates to its chat template compared to prior versions. The primary changes involve a revised format for tool calling and the introduction of a "thinking with tools" capability.

At first, I thought they had gone the route of implementing yet another chat format that can handle more dynamic conversations like that, instead of just using Harmony, but looking at the syntax, doesn't it look exactly like Harmony? That's a good thing, don't get me wrong, but why not mention straight up that they've implemented Harmony, so people can already understand up front that it's compatible with whatever parsing we're using for GPT-OSS?

throwdbaaway2 months ago

That DSML in the encoding directory looks quite a bit different from the Harmony chat template.

TIPSIO2 months ago

It's awesome that stuff like this is open source, but even if you have a basement rig with 4 NVIDIA GeForce RTX 5090 graphic cards ($15-20k machine), can it even run with any reasonable context window that isn't like a crawling 10/tps?

Frontier models far exceed even the most hardcore consumer hobbyist's hardware. This one pushes even further out of reach.

tarruda2 months ago

You can run at ~20 tokens/second on a 512GB Mac Studio M3 Ultra: https://youtu.be/ufXZI6aqOU8?si=YGowQ3cSzHDpgv4z&t=197

IIRC the 512GB mac studio is about $10k

menaerus2 months ago

~20 tokens/second is actually pretty good. I see he's using the q5 version of the model. I wonder how it scales with the larger contexts. And the same guy published the video today with the new 3.2 version: https://www.youtube.com/watch?v=b6RgBIROK5o

hasperdi2 months ago

and it can be faster if you can get an MoE variant of the model

dormento2 months ago

"Mixture-of-experts", AKA "running several small models and activating only a few at a time". Thanks for introducing me to that concept. Fascinating.

(commentary: things are really moving too fast for the layperson to keep up)

hasperdi2 months ago

As pointed out by a sibling comment, an MoE consists of a router and a number of experts (e.g. 8). These experts can be imagined as parts of the brain with specializations, although in reality they probably don't work exactly like that. They aren't separate models; they are components of a single large model.

Typically, input gets routed to a subset of the experts, e.g. the top 2, leaving the others inactive. This reduces the amount of computation per token.

Mistral's Mixtral is an example of a model designed like this. Clever people have created converters to transform dense models into MoE models, and these days many popular models are also available in an MoE configuration.
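The router-plus-experts idea described above can be sketched in a few lines of numpy. This is an illustrative toy, not any particular model's implementation; the expert count, top-k of 2, and dimensions are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
D, N_EXPERTS, TOP_K = 16, 8, 2

# Each "expert" here is just a random linear layer, standing in for
# a real expert FFN. The router is another linear layer.
experts = [rng.standard_normal((D, D)) / np.sqrt(D) for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D, N_EXPERTS)) / np.sqrt(D)

def moe_forward(x):
    """Route one token vector x to its top-k experts and mix their outputs."""
    logits = x @ router_w
    topk = np.argsort(logits)[-TOP_K:]     # indices of the chosen experts
    weights = np.exp(logits[topk])
    weights /= weights.sum()               # softmax over chosen experts only
    # Only TOP_K of N_EXPERTS expert matmuls actually run per token,
    # which is where the compute saving comes from.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, topk))

y = moe_forward(rng.standard_normal(D))
print(y.shape)  # (16,)
```

The key property: the parameter count scales with `N_EXPERTS`, but per-token compute scales only with `TOP_K`.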

miohtama2 months ago

All modern models are MoE already, no?

hasperdi2 months ago

That's not the case. Some are dense and some are hybrid.

MOE is not the holy grail, as there are drawbacks eg. less consistency, expert under/over-use

bigyabai2 months ago

>90% of inference hardware is faster if you run an MOE model.

tarruda2 months ago

Deepseek is already a MoE

halyconWays2 months ago

As someone with a basement rig of 6x 3090s, not really. It's quite slow, as with that many params (685B) it's offloading basically all of it into system RAM. I limit myself to models with <144B params, then it's quite an enjoyable experience. GLM 4.5 Air has been great in particular

lostmsu2 months ago

Did you find it better than GPT-OSS 120B? The public rankings are contradictory.

halyconWays2 months ago

I haven't used GPT-OSS 120B, or other GPT-OSS models, and I mostly go on personal recommendations rather than benchmarks directly.

noosphr2 months ago

Home rigs like that are no longer cost effective. You're better off buying an RTX Pro 6000 outright. This holds for the sticker price, the supporting hardware, the electricity to run it, and the cost of cooling the room you use it in.

torginus2 months ago

I was just watching this video about a Chinese piece of industrial equipment, designed for replacing BGA chips such as flash or RAM with a good deal of precision:

https://www.youtube.com/watch?v=zwHqO1mnMsA

I wonder how well the aftermarket memory surgery business on consumer GPUs is doing.

ThrowawayTestr2 months ago

LTT recently did a video on upgrading a 5090 to 96 GB of VRAM

dotancohen2 months ago

I wonder how well the ophthalmologist is doing. These guys are going to be paying him a visit, playing around with those lasers with no PPE.

throw40392 months ago

Yeah, the pricing for the rtx pro 6000 is surprisingly competitive with the gamer cards (at actual prices, not MSRP). A 3x5090 rig will require significant tuning/downclocking to be run from a single North American 15A plug, and the cost of the higher powered supporting equipment (cooling, PSU, UPS, etc) needed will pay for the price difference, not to mention future expansion possibilities.
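The circuit-budget argument is easy to make concrete. A back-of-envelope sketch; the ~575 W board power per 5090, the ~300 W allowance for the rest of the system, and the 80% continuous-load derating are my assumed figures:

```python
# Back-of-envelope power budget for a multi-GPU rig on a single
# North American 15 A / 120 V circuit.
# Assumed figures: ~575 W per RTX 5090, ~300 W for CPU/board/fans,
# and the common 80% continuous-load derating of the circuit.
CIRCUIT_W = 15 * 120 * 0.8        # 1440 W usable continuously
GPU_W, REST_W = 575, 300

for n_gpus in (1, 2, 3):
    draw = n_gpus * GPU_W + REST_W
    status = "OK" if draw <= CIRCUIT_W else "over budget"
    print(f"{n_gpus} GPU(s): {draw} W -> {status}")
```

Under these assumptions, even two stock 5090s sit right at the limit, and three are well over it, hence the need for downclocking or a dedicated higher-amperage circuit.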

mikae12 months ago

Or perhaps a 512GB Mac Studio. 671B Q4 of R1 runs on it.

redrove2 months ago

I wouldn’t say runs. More of a gentle stroll.

hasperdi2 months ago

With quantization, converting it to an MOE model... it can be a fast walk

bigyabai2 months ago

People with basement rigs generally aren't the target audience for these gigantic models. You'd get much better results out of an MoE model like Qwen3's A3B/A22B weights, if you're running a homelab setup.

Spivak2 months ago

Yeah I think the advantage of OSS models is that you can get your pick of providers and aren't locked into just Anthropic or just OpenAI.

hnfong2 months ago

Reproducibility of results are also important in some cases.

There is consumer-ish hardware that can run large models like DeepSeek 3.x, slowly. If you're using LLMs for a specific purpose that is well served by a particular model, you don't want to risk AI companies deprecating it in a couple of months and pushing you to a newer model (which may or may not work better in your situation).

And even if the AI service providers nominally use the same model, you might have cases where reproducibility requires you use the same inference software or even hardware to maintain high reproducibility of the results.

If you're just using OpenAI or Anthropic you just don't get that level of control.

Aachen2 months ago

Who is the target audience of these free releases? I don't mind free and open information sharing but I have wondered what's in it for the people that spent unholy amounts of energy on scraping, developing, and training

reilly30002 months ago

There are plenty of 3rd party and big cloud options to run these models by the hour or token. Big models really only work in that context, and that’s ok. Or you can get yourself an H100 rack and go nuts, but there is little downside to using a cloud provider on a per-token basis.

cubefox2 months ago

> There are plenty of 3rd party and big cloud options to run these models by the hour or token.

Which ones? I wanted to try a large base model for automated literature (fine-tuned models are a lot worse at it) but I couldn't find a provider which makes this easy.

reilly30002 months ago

If you’re already using GCP, Vertex AI is pretty good. You can run lots of models on it:

https://docs.cloud.google.com/vertex-ai/generative-ai/docs/m...

Lambda.ai used to offer per-token pricing but they have moved up market. You can still rent a B200 instance for sub $5/hr which is reasonable for experimenting with models.

https://app.hyperbolic.ai/models Hyperbolic offers both GPU hosting and token pricing for popular OSS models. The token-based options are easy because they're usually a drop-in replacement for OpenAI API endpoints.

You have to rent a GPU instance if you want to run the latest or custom stuff, but if you just want to play around for a few hours it's not unreasonable.

cubefox2 months ago

> If you’re already using GCP, Vertex AI is pretty good. You can run lots of models on it:

> https://docs.cloud.google.com/vertex-ai/generative-ai/docs/m...

I don't see any large base models there. A base model is a pretrained foundation model without fine tuning. It just predicts text.

> Lambda.ai used to offer per-token pricing but they have moved up market. You can still rent a B200 instance for sub $5/hr which is reasonable for experimenting with models.

A B200 is probably not enough: it has just 192 GB RAM while DeepSeek-V3.2-Exp-Base, the base model for DeepSeek-V3.2, has 685 billion BF16 parameters. Though I assume they have larger options. The problem is that all the configuration work is then left to the user, which I'm not experienced in.
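The weight-memory arithmetic behind that estimate is quick to check. A sketch; the 192 GB figure is the B200's published HBM capacity, and this counts weights only, ignoring KV cache and activations, which push the real requirement higher:

```python
# Memory needed just to hold 685B parameters at various precisions,
# ignoring KV cache and activations.
PARAMS = 685e9
BYTES_PER_PARAM = {"BF16": 2, "FP8": 1, "Q4 (approx)": 0.5}
B200_GB = 192  # HBM on a single B200

for fmt, b in BYTES_PER_PARAM.items():
    gb = PARAMS * b / 1e9
    print(f"{fmt}: ~{gb:,.0f} GB of weights ({gb / B200_GB:.1f}x one B200)")
```

So BF16 weights alone are ~1,370 GB, roughly seven B200s' worth of HBM before any serving overhead.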

> https://app.hyperbolic.ai/models Hyperbolic offers both GPU hosting and token pricing for popular OSS models

Thanks. They do indeed have a single base model: Llama 3.1 405B BASE. This one is a bit older (July 2024) and probably not as good as the base model for the new DeepSeek release. But that might be the best one can do, as there don't seem to be any inference providers that have deployed a DeepSeek or even Kimi base model.

verdverm2 months ago

GCloud and Hyperbolic have been my go-to as well

weberer2 months ago

Fireworks supports this model serverless for $1.20 per million tokens.

https://fireworks.ai/models/fireworks/deepseek-v3p2

cubefox2 months ago

That's the final, fine-tuned model. The base model (pretraining only, no instruction SFT, RLHF, RLVR etc) is this one: https://huggingface.co/deepseek-ai/DeepSeek-V3.2-Exp-Base It's apparently not offered at any inference provider, nor are older DeepSeek base models.

big_man_ting2 months ago

have you checked OpenRouter if they offer any providers who serve the model you need?

cubefox2 months ago

I searched for "base" and the best available base model seems to be indeed Llama 3.1 405B Base at Hyperbolic.ai, as mentioned in the comment above.

potsandpans2 months ago

I run a bunch of smaller models on a 12 GB VRAM 3060 and it's quite good. For larger open models I'll use OpenRouter. I'm looking into on-demand instances with cloud/VPS providers, but haven't explored the space too much.

I feel like private cloud instances that run on demand is still in the spirit of consumer hobbyist. It's not as good as having it all local, but the bootstrapping cost plus electricity to run seems prohibitive.

I'm really interested to see if there's a space for consumer TPUs that satisfy usecases like this.

wickedsight2 months ago

Which ones are your favorites that fit on the 3060?

seanw2652 months ago

FWIW it looks like OpenRouter's two providers for this model (one of which is DeepSeek itself) are only running the model at around 28 tps at the moment.

https://openrouter.ai/deepseek/deepseek-v3.2

This only bolsters your point. Will be interesting to see if this changes as the model is adopted more widely.

zparky2 months ago

Benchmarks are super impressive, as usual. Interesting to note in table 3 of the paper (p. 15), DS-Speciale is 1st or 2nd in accuracy in all tests, but has much higher token output (50% more, or 3.5x vs gemini 3 in the codeforces test!).

futureshock2 months ago

The higher token output is not by accident. Certain kinds of logical reasoning problems are solved by longer thinking output. Thinking chain output is usually kept to a reasonable length to limit latency and cost, but if pure benchmark performance is the goal you can crank that up to the max until the point of diminishing returns. DeepSeek being 30x cheaper than Gemini means there’s little downside to max out the thinking time. It’s been shown that you can further scale this by running many solution attempts in parallel with max thinking then using a model to choose a final answer, so increasing reasoning performance by increasing inference compute has a pretty high ceiling.

BoorishBears2 months ago

3.2-Exp came out in September: this is 3.2, along with a special checkpoint (DeepSeek-V3.2-Speciale) for deep reasoning that they're claiming surpasses GPT-5 and matches Gemini 3.0

https://x.com/deepseek_ai/status/1995452641430651132

deaux2 months ago

The assumption here is that 3.2 (without suffix) is an evolution of 3.2-Exp rather than being the same model, but they don't seem to be explicitly stating anywhere whether they're actually different or that they just made the same model GA.

nickandbro2 months ago

For anyone that is interested

"create me a svg of a pelican riding on a bicycle"

https://www.svgviewer.dev/s/FhqYdli5

chronogram2 months ago

It created a whole webpage to showcase the SVG with animation for me: https://output.jsbin.com/qeyubehate

mcbuilder2 months ago

After using it for a couple of hours playing around: it is a very solid entry, and very competitive with the big US releases. I'd say it's better than GLM 4.6 and Kimi K2. Looking forward to v4.

energy1232 months ago

Did you try with 60k+ context? I found previous releases to be lacklustre at longer contexts, which I tentatively attributed to the model being trained on a lot of short-context data.

singularity20012 months ago

Why are there so few 32,64,128,256,512 GB models which could run on current consumer hardware? And why is the maximum RAM on Mac studio M4 128 GB??

eldenring2 months ago

the only real benefit is privacy, which 99.9% of people don't care about. Almost all serving metrics (cost, throughput, TTFT) are better with large GPU clusters. Latency is usually hidden by prefill cost.

cowpig2 months ago

More and more people I talk to care about privacy, but not in SF

mistercheph2 months ago

and sovereignty. I can go into the woods with a fuzzy approximation of all internet text in my backpack

jameslk2 months ago

128 GB should be enough for anybody (just kidding). I hope the M5 Max will have higher RAM limits

aryonoco2 months ago

M5 Max probably won’t, but M5 Ultra probably will

ainch2 months ago

As LLMs are productionised/commodified they're incorporating changes which are enthusiast-unfriendly. Small dense models are great for enthusiasts running inference locally, but for parallel batched inference MoE models are much more efficient.

imbusy1112 months ago

Funny to see tau2-bench on the list of benchmarks, when tau2-bench is flawed and 100% score is impossible, unless you add the tasks to the training set: https://github.com/sierra-research/tau2-bench/issues/89

cgearhart2 months ago

So DSA means a lightweight indexing model evaluated over the entire context window, plus a top-k attention evaluation. There's no softmax in the indexing model, so it can run blazingly fast in parallel.

I’m surprised that a fixed size k doesn’t experience degrading performance in long context windows though. That’s a _lot_ of responsibility to push into that indexing function. How could such a simple model achieve high enough precision and recall in a fixed size k for long context windows?
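The select-then-attend pattern being described can be sketched as a toy in numpy. This is illustrative only: the actual DSA indexer is a learned component, and the sequence length, head dimension, and k here are arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(0)
SEQ, D, K_SELECT = 1024, 64, 128

q = rng.standard_normal(D)            # one query vector
keys = rng.standard_normal((SEQ, D))
values = rng.standard_normal((SEQ, D))

# Step 1: a cheap indexer scores every position. No softmax is needed,
# since we only rank positions; here a plain dot product stands in
# for the learned lightweight indexer.
index_scores = keys @ q

# Step 2: keep only the top-k positions...
topk = np.argsort(index_scores)[-K_SELECT:]

# Step 3: ...and run ordinary softmax attention over just those k
# keys/values, so the expensive attention is O(k), not O(seq_len).
att = np.exp(keys[topk] @ q / np.sqrt(D))
att /= att.sum()
out = att @ values[topk]
print(out.shape)  # (64,)
```

The question in the comment above amounts to asking how well step 1's cheap ranking preserves the positions that step 3's exact attention would have weighted heavily, especially as `SEQ` grows while `K_SELECT` stays fixed.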

sfdlkj3jk342a2 months ago

What version is actually running on chat.deepseek.com?

It refuses to tell me when asked, only that it's been trained with data up until July 2024, which would make it quite old. I turned off search and asked it for the winner of the 2024 US election, and it said it didn't know, so I guess that confirms it's not a recent model.

scottyeager2 months ago

You can read that 3.2 is live on web and app here: https://api-docs.deepseek.com/news/news251201

The pdf describes how they did "continued pre-training" and then post training to make 3.2. I guess what's missing is the full pre-training that absorbs most date sensitive knowledge. That's probably also the reason that the versions are 3.x still.

mark_l_watson2 months ago

I used DeepSeek-v3.2 to solve two coding problems by pasting code and directions as one large prompt into a chat interface and it performed very well. VERY WELL!

I am still happy to pay Google because of their ecosystem or Gemini app, NotebookLM, Colab, gemini-cli, etc. Google’s moat for me is all the tooling and engineering around the models.

That said, my one year Google AI subscription ends in four months and I might try an alternative, or at least evaluate options. Alibaba Cloud looks like an interesting low cost alternative to AWS for building systems. I am now a retired 'gentleman scientist' and my personal research is inexpensive no matter who I pay for inference compute, but it is fun to spend a small amount of time evaluating alternatives, even though mostly using Google is time efficient.

chistev2 months ago

I've found it better than ChatGPT lately, at least the free version of GPT.

I don't know, but GPT seems to have regressed a lot, at least the free version.

jodleif2 months ago

I genuinely do not understand the valuations of the US AI industry. The Chinese models are so close and far cheaper.

espadrine2 months ago

Two aspects to consider:

1. Chinese models typically focus on text. US and EU models also bear the cross of handling images, often voice and video. Supporting all of those means additional training cost not spent on further reasoning, tying one hand behind your back in order to be more generally useful.

2. The gap seems small, because so many benchmarks get saturated so fast. But towards the top, every 1% increase in benchmarks is significantly better.

On the second point, I worked on a leaderboard that both normalizes scores, and predicts unknown scores to help improve comparisons between models on various criteria: https://metabench.organisons.com/

You can notice that, while Chinese models are quite good, the gap to the top is still significant.

However, the US models are typically much more expensive for inference, and Chinese models do have a niche on the Pareto frontier on cheaper but serviceable models (even though US models also eat up the frontier there).

coliveira2 months ago

Nothing you said helps with the issue of valuation. Yes, the US models may be better by a few percentage points, but how can they justify being so costly, both operationally as well as in investment costs? Over the long run, this is a business and you don't make money being the first, you have to be more profitable overall.

ben_w2 months ago

I think the investment race here is an "all-pay auction"*. Lots of investors have looked at the ultimate prize — basically winning something larger than the entire present world economy forever — and think "yes".

But even assuming that we're on the right path for that (which we may not be) and assuming that nothing intervenes to stop it (which it might), there may be only one winner, and that winner may not have even entered the game yet.

* https://en.wikipedia.org/wiki/All-pay_auction


jodleif2 months ago

1. Have you seen the Qwen offerings? They have great multi-modality, some even SOTA.

brabel2 months ago

Qwen Image and Image Edit were among the best image models until Nano Banana Pro came along. I have tried some open image models and can confirm: the Chinese models are easily the best or very close to it, but right now the Google model is even better. We'll see if the Chinese models catch up again.

janalsncm2 months ago

> Chinese models typically focus on text

Not true at all. Qwen has a VLM (qwen2 vl instruct) which is the backbone of Bytedance’s TARS computer use model. Both Alibaba (Qwen) and Bytedance are Chinese.

Also DeepSeek got a ton of attention with their OCR paper a month ago which was an explicit example of using images rather than text.

raincole2 months ago

> video

Most of AI-generated videos we see on social media now are made with Chinese models.

torginus2 months ago

Thanks for sharing that!

The scales are a bit murky here, but if we look at the 'Coding' metric, we see that Kimi K2 outperforms Sonnet 4.5 - that's considered to be the price-perf darling I think even today?

I haven't tried these models, but in general there have been lots of cases where a model performs much worse in real life than the benchmarks would suggest (certain Chinese models and GPT-OSS have been guilty of this in the past).

espadrine2 months ago

Good question. There's 2 points to consider.

• For both Kimi K2 and for Sonnet, there's a non-thinking and a thinking version. Sonnet 4.5 Thinking is better than Kimi K2 non-thinking, but the K2 Thinking model came out recently, and beats it on all comparable pure-coding benchmarks I know: OJ-Bench (Sonnet: 30.4% < K2: 48.7%), LiveCodeBench (Sonnet: 64% < K2: 83%), they tie at SciCode at 44.8%. It is a finding shared by ArtificialAnalysis: https://artificialanalysis.ai/models/capabilities/coding

• The reason developers love Sonnet 4.5 for coding, though, is not just the quality of the code. They use Cursor, Claude Code, or some other system such as Github Copilot, which are increasingly agentic. On the Agentic Coding criteria, Sonnet 4.5 Thinking is much higher.

By the way, you can look at the Table tab to see all known and predicted results on benchmarks.

pama2 months ago

The table is confusing. It is not clear what is known and what is predicted (and how it is predicted). Why not measure the missing pieces instead of predicting—is it too expensive or is the tooling missing?

culi2 months ago

Qwen, Hunyuan, and WAN are three of the major competitors in the vision, text-to-image, and image-to-video spaces. They are quite competitive. Right now WAN is only behind Google's Veo in image-to-video rankings on LMArena, for example.

https://lmarena.ai/leaderboard/image-to-video

agumonkey2 months ago

Forgive me for bringing politics into it, but are Chinese LLMs more prone to censorship bias than US ones?

coliveira2 months ago

Being open source, I believe Chinese models are less prone to censorship, since the US corporations can add censorship in several ways just by being a closed model that they control.

skeledrew2 months ago

It's not about a LLM being prone to anything, but more about the way a LLM is fine-tuned (which can be subject to the requirements of those wielding political power).

agumonkey2 months ago

that's what i meant even though i could have been more precise

erikhorton2 months ago

Yes, it's extremely likely they are prone to censorship from the training. Try running them locally with something like LM Studio and ask questions the government is uncomfortable about. I originally thought the bias was in the GUI, but it's baked into the model itself.

jasonsb2 months ago

It's all about the hardware and infrastructure. If you check OpenRouter, no provider offers a SOTA Chinese model matching the speed of Claude, GPT or Gemini. The Chinese models may benchmark close on paper, but real-world deployment is different. So you either buy your own hardware in order to run a Chinese model at 150-200 tps, or give up and use one of the Big 3.

The US labs aren't just selling models, they're selling globally distributed, low-latency infrastructure at massive scale. That's what justifies the valuation gap.

Edit: It looks like Cerebras is offering a very fast GLM 4.6

jasonsb2 months ago

It doesn't work like that. You need to actually use the model and then go to /activity to see the actual speed. I constantly get 150-200tps from the Big 3 while other providers barely hit 50tps even though they advertise much higher speeds. GLM 4.6 via Cerebras is the only one faster than the closed source models at over 600tps.

irthomasthomas2 months ago

These aren't advertised speeds, they are the average measured speeds by openrouter across different providers.

observationist2 months ago

The network effects of using consistently behaving models and maintaining API coverage between updates are valuable, too. Presumably the big labs include their own domains of competence in the training, so Claude is likely to remain very good at coding and to behave in similar ways, informed and constrained by their prompt frameworks, so that interactions continue to work in predictable ways even after major new releases, and upgrades can be clean.

It'll probably be a few years before all that stuff becomes as smooth as people need, but OAI and Anthropic are already doing a good job on that front.

Each new Chinese model requires a lot of testing and bespoke conformance to every task you want to use it for. There's a lot of activity and shared prompt engineering, and some really competent people doing things out in the open, but it's generally going to take a lot more expert work getting the new Chinese models up to snuff than working with the big US labs. Their product and testing teams do a lot of valuable work.

dworks2 months ago

Qwen 3 Coder Plus has been braindead this past weekend, but Codex 5.1 has also been acting up. It told me updating UI styling was too much work and I should do it myself. I also see people complaining about Claude every week. I think this is an unsolved problem, and you also have to separate perception from actual performance, which I think is an impossible task.

DeathArrow2 months ago

> If you check OpenRouter, no provider offers a SOTA chinese model matching the speed of Claude, GPT or Gemini.

I think GLM 4.6 offered by Cerebras is much faster than any US model.

jasonsb2 months ago

You're right, I forgot about that one.

csomar2 months ago

According to OpenRouter, z.ai is 50% faster than Anthropic; which matches my experience. z.ai does have frequent downtimes but so does Claude.

jodleif2 months ago

Assuming your hardware premise is right (and lets be honest, nobody really wants to send their data to chinese providers) You can use a provider like Cerebras, Groq?

kachapopopow2 months ago

cerebras AI offers models at 50x the speed of sonnet?

baq2 months ago

if that's an honest question, the answer is pretty much yes, depending on model.

kachapopopow2 months ago

the question mark was expressing confusion.

jazzyjackson2 months ago

Valuation is not based on what they have done but what they might do, I agree tho it's investment made with very little insight into Chinese research. I guess it's counting on deepseek being banned and all computers in America refusing to run open software by the year 2030 /snark

jodleif2 months ago

> Valuation is not based on what they have done but what they might do

Exactly what I’m thinking. Chinese models catching rapidly. Soon to be on-par with the big dogs.

ksynwa2 months ago

Even if they do continue to lag behind they are a good bet against monopolisation by proprietary vendors.

coliveira2 months ago

They would if corporations were allowed to run these models. I fully expect the US government to prohibit corporations from doing anything useful with Chinese models (full censorship). It's the same game they use with chips.

bilbo0s2 months ago

>I guess it's counting on deepseek being banned

And the people making the bets are in a position to make sure the banning happens. The US government system being what it is.

Not that our leaders need any incentive to ban Chinese tech in this space. Just pointing out that it's not necessarily a "bet".

"Bet" implies you don't know the outcome and have no influence over it. Even "investment" implies you don't know the outcome. I'm not sure that's the case with these people.

coliveira2 months ago

Exactly. "Business investment" these days means that the people involved will have at least some amount of power to determine the winning results.

Bolwin2 months ago

Third party providers rarely support caching.

With caching the expensive US models end up being like 2x the price (e.g sonnet) and often much cheaper (e.g gpt-5 mini)

If third-party providers start caching, then the US companies will be completely outpriced.

newyankee2 months ago

Yet, to be honest, if the US industry had not moved ahead and created the race with FOMO, it would not have been easy for the Chinese strategy to work either.

The nature of the race may change as yet though, and I am unsure if the devil is in the details, as in very specific edge cases that will work only with frontier models ?

fastball2 months ago

They're not that close (on things like LMArena) and being cheaper is pretty meaningless when we are not yet at the point where LLMs are good enough for autonomy.

mrinterweb2 months ago

I would expect one of the motivations for making these LLM model weights open is to undermine the valuation of other players in the industry. Open models like this must diminish the value prop of the frontier focused companies if other companies can compete with similar results at competitive prices.

rprend2 months ago

People pay for products, not models. OpenAI and Anthropic make products (ChatGPT, Claude Code).

isamuel2 months ago

There is a great deal of orientalism --- it is genuinely unthinkable to a lot of American tech dullards that the Chinese could be better at anything requiring what they think of as "intelligence." Aren't they Communist? Backward? Don't they eat weird stuff at wet markets?

It reminds me, in an encouraging way, of the way that German military planners regarded the Soviet Union in the lead-up to Operation Barbarossa. The Slavs are an obviously inferior race; their Bolshevism dooms them; we have the will to power; we will succeed. Even now, when you ask questions like what you ask of that era, the answers you get are genuinely not better than "yes, this should have been obvious at the time if you were not completely blinded by ethnic and especially ideological prejudice."

mosselman2 months ago

Back when deepseek came out and people were tripping over themselves shouting it was so much better than what was out there, it just wasn’t good.

It might be this model is super good, I haven’t tried it, but to say the Chinese models are better is just not true.

What I really love though is that I can run them (open models) on my own machine. The other day I categorised images locally using Qwen, what a time to be alive.

Further even than local hardware, open models make it possible to run on providers of choice, such as European ones. Which is great!

So I love everything about the competitive nature of this.

CamperBob22 months ago

If you thought DeepSeek "just wasn't good," there's a good chance you were running it wrong.

For instance, a lot of people thought they were running "DeepSeek" when they were really running some random distillation on ollama.

stocksinsmocks2 months ago

The early stages of Barbarossa were very successful, and much of the Soviet Air Force, which had been forward-positioned for invasion, was destroyed. Given the Red Army’s attitude toward consent, I would keep the praise carefully measured. TV has taught us there are good guys and bad guys, when the reality is closer to just bad guys and bad guys.

ecshafer2 months ago

I don't think that anyone, much less someone working in tech or engineering in 2025, could still hold beliefs about Chinese not being capable scientists or engineers. I could maybe give (the naive) pass to someone in 1990 thinking China will never build more than junk. But in 2025 their product capacity, scientific advancement, and just the amount of us who have worked with extremely talented Chinese colleagues should dispel those notions. I think you are jumping to racism a bit fast here.

Germany was right in some ways and wrong in others about the Soviet Union's strength. The USSR failed to conquer Finland because of the military purges. German intelligence vastly underestimated the number of tanks and the general preparedness of the Soviet army (Hitler was shocked the Soviets already had 40k tanks). The Lend-Lease Act sent an astronomical amount of goods to the USSR, which allowed it to fully commit to the war and focus on increasing its weapons production; the numbers on the tractors, food, trains, ammunition, etc. that the US sent to the USSR are staggering.

hnfong2 months ago

I don't think anyone seriously believes that the Chinese aren't capable, it's more like people believe no matter what happens, USA will still dominate in "high tech" fields. A variant of "American Exceptionalism" so to speak.

This is kinda reflected in the stock market, where the AI stocks are surging to new heights every day, yet their Chinese equivalents are relatively lagging behind in stock price, which suggests that investors are betting heavily on the US companies to "win" this "AI race" (if there's any gains to be made by winning).

Also, in the past couple years (or maybe a couple decades), there had also been a lot of crap talk about how China has to democratize and free up their markets in order to be competitive with the other first world countries, together with a bunch of "doomsday" predictions for authoritarianism in China. This narrative has completely lost any credibility, but the sentiment dies slowly...

newyankee2 months ago

but didn't Chinese already surpass the rest of the world in Solar, batteries, EVs among other things ?

cyberlimerence2 months ago

They did, but the goalposts keep moving, so to speak. We're approximately here : advanced semiconductors, artificial intelligence, reusable rockets, quantum computing, etc. Chinese will never catch up. /s

breppp2 months ago

Not sure how the entire Nazi comparison plays out, but at the time there were good reasons to imagine the Soviets would fall apart (as they initially did).

Stalin had just finished purging his entire officer corps, which is not a good omen for war, and the USSR had failed miserably against the Finns, who were not the strongest of nations, while Germany had just steamrolled France, a country that was much more impressive in WW1 than the Russians (who collapsed against Germany).

lukan2 months ago

"It reminds me, in an encouraging way, of the way that German military planners regarded the Soviet Union in the lead-up to Operation Barbarossa. The Slavs are an obviously inferior race; ..."

Ideology played a role, but the data they worked with was the Finnish war, which was disastrous for the Soviet side. Hitler later famously said it was all an intentional distraction to make them believe the Soviet army was worth nothing. (The real reasons were more complex, like the earlier purges.)

ycombigrator2 months ago

[dead]

littlestymaar2 months ago

> It reminds me, in an encouraging way, of the way that German military planners regarded the Soviet Union in the lead-up to Operation Barbarossa. The Slavs are an obviously inferior race; their Bolshevism dooms them; we have the will to power; we will succeed

Though, because Stalin had decimated the Red Army leadership (including most of the veteran officers who had Russian civil war experience) during the Moscow trials purges, the Germans almost succeeded.

gazaim2 months ago

> Though, because Stalin had decimated the Red Army leadership (including most of the veteran officers who had Russian civil war experience) during the Moscow trials purges, the Germans almost succeeded.

There were many counter revolutionaries among the leadership, even those conducting the purges. Stalin was like "ah fuck we're hella compromised." Many revolutions fail in this step and often end up facing a CIA backed coup. The USSR was under constant siege and attempted infiltration since inception.

littlestymaar2 months ago

> There were many counter revolutionaries among the leadership

Well, Stalin was, by far, the biggest counter-revolutionary in the Politburo.

> Stalin was like "ah fuck we're hella compromised."

There's no evidence that anything significant was compromised at that point, and clear evidence that Stalin was in fact medically paranoid.

> Many revolutions fail in this step and often end up facing a CIA backed coup. The USSR was under constant siege and attempted infiltration since inception.

Can we please not recycle 90-year-old Soviet propaganda? The Moscow trials being irrational self-harm was acknowledged by the USSR leadership as early as the fifties…

gazaim2 months ago

These Americans have no comprehension of intelligence being used to benefit humanity instead of being used to fund a CEO's new yacht. I encourage them to visit China to see how far the USA lags behind.

astrange2 months ago

Lags behind meaning we haven't covered our buildings in LEDs?

America is mostly suburbs and car sewers but that's because the voters like it that way.

beastman822 months ago

Then you should short the market

Plaoo2 months ago

[dead]

Havoc2 months ago

Note combination of big frontier level model and MIT license.

swframe22 months ago

The AI market is hard to predict due to the constant development of new algorithms that could emerge unexpectedly. Refer to this summary of Ilya's opinions for insights into the necessity of these new algorithms: https://youtu.be/DcrXHTOxi3I

DeepSeek is a valuable product, but its open-source nature makes it difficult for it to displace larger competitors: any advancement can be quickly adopted, and in fact it may inadvertently strengthen those companies by highlighting weaknesses in their current strategies.

spullara2 months ago

I hate that their model ids don't change as they change the underlying model. I'm not sure how you can build on that.

  % curl https://api.deepseek.com/models \
      -H "Authorization: Bearer ${DEEPSEEK_API_KEY}"
  {"object":"list","data":[{"id":"deepseek-chat","object":"model","owned_by":"deepseek"},{"id":"deepseek-reasoner","object":"model","owned_by":"deepseek"}]}
KronisLV2 months ago

Oh hey, quality improvement without doing anything!

(unless/until a new version gets worse for your use case)

hnfong2 months ago

Agree that having datestamps on model ids is a good idea, but it's open source, you can download the weights and build on those. In the long run, this is better than the alternative of calling API of a proprietary model and hoping it doesn't get deprecated.

deaux2 months ago

Anthropic has done similar before (changing model behavior on the same dated endpoint).

cherioo2 months ago

Allegedly DeepSeek is doing this because they don’t have enough GPUs to serve two models concurrently.

htrp2 months ago

what is the ballpark vram / gpu requirement to run this ?

rhdunn2 months ago

For just the model itself: 4 x params at F32, 2 x params at F16/BF16, or 1 x params at F8, e.g. 685GB at F8. Quantized models take roughly bits-per-weight / 8 bytes per parameter, e.g. ~0.5 x params at 4-bit.

For a Mixture of Experts (MoE) model you only need to have the memory size of a given expert. There will be some swapping out as it figures out which expert to use, or to change expert, but once that expert is loaded it won't be swapping memory to perform the calculations.

You'll also need space for the context window; I'm not sure how to calculate that either.
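A rough back-of-envelope for both pieces might look like the sketch below. The layer/head numbers are illustrative stand-ins, not DeepSeek's actual shapes, and DeepSeek's MLA attention compresses the KV cache far below this naive estimate:

```python
def weight_bytes(n_params, bytes_per_param):
    """Weights only: parameter count x bytes per parameter (4=F32, 2=BF16, 1=F8)."""
    return n_params * bytes_per_param

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem=2, batch=1):
    """Naive KV-cache size: a K and a V vector per layer, per head, per position.
    DeepSeek's MLA compresses the cache, so this badly overestimates for it."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem * batch

params = 685e9
print(f"Weights @ F8:   {weight_bytes(params, 1) / 1e9:.0f} GB")  # ~685 GB
print(f"Weights @ BF16: {weight_bytes(params, 2) / 1e9:.0f} GB")  # ~1370 GB

# Illustrative dense-attention config, not DeepSeek's actual attention shapes:
kv = kv_cache_bytes(n_layers=61, n_kv_heads=128, head_dim=128, seq_len=128_000)
print(f"Naive KV cache @ 128K context: {kv / 1e9:.0f} GB")
```

The point of the sketch is that at long contexts a naive KV cache can rival the weights themselves in size, which is exactly why architectures like MLA and sparse attention exist.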

anvuong2 months ago

I think your understanding of MoE is wrong. Depending on the settings, each token can actually be routed to multiple experts (and in the expert-choice architecture, experts select tokens). This makes it easier to parallelize the inference (each expert on a different device, for example), but it's not simply keeping one expert in memory.

petu2 months ago

I think your idea of MoE is incorrect. Despite the name, the experts aren't "expert" at anything in particular; the set of active experts changes more or less on each token, so swapping them into VRAM is not viable. They just get executed on the CPU (as llama.cpp does).

jodleif2 months ago

A common pattern is to offload (most of) the expert layers to the CPU. This combination is still quite fast even with slow system RAM, though obviously inferior to keeping everything in VRAM.
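As a sketch of that pattern with llama.cpp (the .gguf file name is hypothetical, and the expert-offload flags have changed across versions, so check your build's --help):

```shell
# Hypothetical file name; flag spellings vary across llama.cpp versions.
# Keep attention and dense layers on the GPU, push MoE expert tensors to system RAM:
llama-server -m deepseek-v3.2-q4_k_m.gguf \
  --n-gpu-layers 99 \
  --override-tensor "exps=CPU"
```

The regex-style `--override-tensor` pattern matches the expert FFN tensors by name, so the hot, always-active layers stay in VRAM while the bulky, sparsely-used expert weights live in system memory.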

nickstinemates2 months ago

I am waiting for the first truly open model without any of the censorship built in.

I wonder how long it will take and how quickly it will try to get shut down.

naeq2 months ago

Most open models have been converted to uncensored versions. Search for the model name with the suffix "abliterated".

Aldo_MX2 months ago

That's not a realistic expectation.

Classic examples like:

  User: I'm feeling bad
  LLM: Have you considered k*****g yourself?
are what an LLM "without censorship" looks like: good at predicting the most common sequence of text (e.g. the most common sarcastic reply from Reddit), but effectively useless.

In order to build a useful LLM (i.e. one that actually follows instructions) you need to teach the LLM to prefer the most helpful answer, and that process by itself is already an implicit layer of "censorship": it requires human supervision, and different humans have different perceptions of what the most helpful answer is, especially when their paycheck is conditioned on a list of "corporate values".

You can only pick between a parrot that repeats random text from the Internet, or a parrot lobotomized to follow the orders from their trainers (which occasionally repeats random text from the Internet, because the training isn't perfect).

Unsurprisingly, the lobotomized parrot is more useful to get actual work done, even if it won't tell you what the CIA[1] did to Mexican Students on October 2nd, 1968.

[1]: https://www.bbc.com/mundo/noticias-america-latina-45662739

johnnienaked2 months ago

Are we the baddies?

a962 months ago

The AI says shake... "Signs point to yes."

johnxie2 months ago

Cool to see open models catching up fast. For builders the real question is simple. Which model gives you the tightest loop and the least surprises in production. Sometimes that is open. Sometimes closed. The rest is noise.

Frannky2 months ago

Smart model—I use it as my main chat. It's interesting that markets were able to predict that it would lower the revenue of the paid ones.

arthurcolle2 months ago

Surely OpenAI will follow up with a gpt-oss-780b

twistedcheeslet2 months ago

How capable are these models at tool calling?

potsandpans2 months ago

From some very brief experimentation with DeepSeek about 2 months ago, tool calling is very hit or miss. Claude appears to be the absolute best.

segmondy2 months ago

It depends on whether they are trained for tool calling. This model is an experiment with a new architecture, training methods, etc.; it's not designed for tool calling. If you want tool calling, look into DeepSeek-V3.1-Terminus.

matt-alive2 months ago

Is it open source vs enterprise or China vs US?

orena2 months ago

Any results on frontier math or arc ?

EternalFury2 months ago

It does seem good, but it’s slow.

lalassu2 months ago

Disclaimer: I did not test this yet.

I don't want to make big generalizations, but one thing I noticed with Chinese models, especially Kimi, is that they do very well on benchmarks but fail on vibe testing. It feels a little like overfitting to the benchmarks rather than to the use cases.

I hope it's not the same here.

msp262 months ago

K2 Thinking has immaculate vibes. Minimal sycophancy and a pleasant writing style while being occasionally funny.

If it had vision and was better on long context I'd use it so much more.

vorticalbox2 months ago

This used to happen with benchmarks on phones: manufacturers would tweak Android so benchmarks ran faster.

I guess that’s kinda how it is for any system that’s trained to do well on benchmarks: it does well on them but is rubbish at everything else.

make32 months ago

yes, they turned off all energy-saving measures when benchmarking software activity was detected, which completely defeated the point of the benchmarks, because your phone is useless if it's very fast but the battery lasts one hour

CuriouslyC2 months ago

This was a bad problem with earlier Chinese (Qwen and Kimi K1 in particular) models, but the original DeepSeek delivered and GLM4.6 delivers. They don't diversify training as much as American labs so you'll find more edge cases and the interaction experience isn't quite as smooth, but the models put in work.

make32 months ago

I would assume that huge amount is spent in frontier models just making the models nicer to interact with, as it is likely one of the main things that drives user engagement.

segmondy2 months ago

Weird, I have gone local for the last 2 years. I use Chinese models 90% of the time: Kimi K2 Thinking, DeepSeek-V3.1-Terminus, Qwen3 and GLM-4.6. I'm not vibe testing them but really putting them to use, and they keep up great.

nylonstrung2 months ago

My experience with deepseek and Kimi is quite the opposite: smarter than benchmarks would imply

Whereas the benchmark gains seen in new OpenAI, Grok and Claude models don't feel accompanied by a vibe improvement.

not_that_d2 months ago

What is "Vibe testing"?

catigula2 months ago

He means capturing things that benchmarks don't. Use Claude and GPT-5 back to back in a field they score nearly identically on and you will notice several differences. That is the "vibe".

BizarroLand2 months ago

I would assume that it is testing how well and appropriately the LLM responds to prompts.

catigula2 months ago

This is why I stopped bothering checking out these models and, funnily enough, grok.

catigula2 months ago

To push back on the naivety I'm sensing here: I think it's a little silly to see a Chinese Communist Party-backed enterprise as somehow magnanimous and without ulterior, very harmful motives.

stared2 months ago

Do you think corporations support open source out of the goodness of their hearts? E.g. Microsoft with VSCode and TypeScript, Meta with PyTorch and React, Google with Chromium and Go.

Yet, we (developers, users, human civilization), benefit from that.

So yes, I cherish when Chinese companies release open source LLMs. Be it as it fits their business model (the same way as US companies) or from grants (the same way as a lot of EU-backed projects, e.g. Python, DuckDB, scikit-learn).

jascha_eng2 months ago

Oh, they need control of the models to be able to censor and to ensure that whatever happens with AI inside the country stays under their control. But the open-source part? Idk, I think they do it to mess with US investment and for the typical corporate open-source reasons: community, marketing, etc. And tbh, especially the messing with the US is something I, as a European with no serious competitor, can get behind.

catigula2 months ago

They're pouring money to disrupt American AI markets and efforts. They do this in countless other fields. It's a model of massive state funding -> give it away for cut-rate -> dominate the market -> reap the rewards.

It's a very transparent, consistent strategy.

AI is a little different because it has geopolitical implications.

ForceBru2 months ago

When it's a competition among individual producers, we call it "a free market" and praise Hal Varian. When it's a competition among countries, it's suddenly threatening to "disrupt American AI markets and efforts". The obvious solution here is to pour money into LLM research too. Massive state funding -> provide SOTA models for free -> dominate the market -> reap the rewards (from the free models).

fragmede2 months ago

It's not like the US doesn't face similar accusations. One such case is the WTO accusing Boeing of receiving illegal subsidies from the US government. https://www.transportenvironment.org/articles/wto-says-us-ga...

catigula2 months ago

We don't do that.

tokioyoyo2 months ago

I can’t believe I’m shilling for China in these comments, but how different is it from company A getting blank-check investments from VCs and wink-wink support from the government in the West? And AI labs in China had been getting internal funding at their companies for a while before the LLM era.

ptsneves2 months ago

This is the rare earth minerals dumping all over again. Devalue to such a price as to make the market participants quit, so they can later have a strategic stranglehold on the supply.

This is using open source in a bit of different spirit than the hacker ethos, and I am not sure how I feel about it.

It is a kind of cheat on the fair market but at the same time it is also costly to China and its capital costs may become unsustainable before the last players fold.

coliveira2 months ago

> cheat on the fair market

Can you really view this as a cheat when the US is throwing a trillion dollars in support of a supposedly "fair market"?

embedding-shape2 months ago

> This is using open source in a bit of different spirit than the hacker ethos, and I am not sure how I feel about it.

It's a bit early to have any sort of feelings about it, isn't it? You're speaking in absolutes, but none of this is necessarily 100% true as we don't know their intentions. And judging a group of individuals intention based on what their country seems to want, from the lens of a foreign country, usually doesn't land you with the right interpretation.

csomar2 months ago

Prosecutor, judge and jury? Do you have access to their minds to know their true intentions? This whole "DeepSeek is controlled by the CCP" thing is ridiculous. If you want to know how bad the CCP is at IT, check the government-backed banks.

The way I see it, some tech teams in China have figured out that training and tuning LLMs is not that expensive after all and that they can do it at a fraction of the cost. So they are doing it, entering a market previously dominated exclusively by US players.

tokioyoyo2 months ago

I mentioned this before as well, but the AI competition within China doesn’t care that much about the Western companies. The internal market is huge, and they know winner-takes-all in this space is real.

DiogenesKynikos2 months ago

Are you by chance an OpenAI investor?

We should all be happy about the price of AI coming down.

doctorwho422 months ago

But the economy!!! /s

Seriously though, our leaders are actively throwing everything and the kitchen sink into AI companies, in some vain attempt to become immortal or to own even more of the nation's wealth beyond what they already do, chasing some kind of neo-tech feudalism. Both are unachievable because they rely on a complex system that they clearly don't understand.

CamperBob22 months ago

Good luck making OpenAI and Google cry uncle. They have the US government on their side. They will not be allowed to fail, and they know it.

What I appreciate about the Chinese efforts is that they are being forced to get more intelligence from less hardware, and they are not only releasing their work products but documenting the R&D behind them at least as well as our own closed-source companies do.

A good reason to stir up dumping accusations and anti-China bias would be if they stopped publishing not just the open-source models, but the technical papers that go with them. Until that happens, I think it's better to prefer more charitable explanations for their posture.

Jedd2 months ago

> It is a kind of cheat on the fair market ...

I am very curious on your definition and usage of 'fair' there, and whether you would call the LLM etc sector as it stands now, but hypothetically absent deepseek say, a 'fair market'. (If not, why not?)

nextaccountic2 months ago

Where exactly is this fair market? Giant US companies love rules and regulations, but only when it benefits them (and they pay dearly for it)

jsiepkes2 months ago

The way we fund the AI bubble in the west could also be described as: "kind of cheat on the fair market". OpenAI has never made a single dime of profit.

nylonstrung2 months ago

Yeah and OpenAI's CPO was artificially commissioned as a Lt. Colonel in the US Army in conjunction with a $200M contract

Absurd to say Deepseek is CCP controlled while ignoring the govt connection here

josh_p2 months ago

Isn’t it already well accepted that the LLM market exists in a bubble with a handful of companies artificially inflating their own values?

ESH

jascha_eng2 months ago

Do they actually spend that much though? I think they are getting similar results with much fewer resources.

It's also a bit funny that providing free models is probably the most communist thing China has done in a long time.

deaux2 months ago

Ah, so exactly like Uber, Netflix, Microsoft, Amazon, Facebook and so on have done to the rest of the world over the last few decades then?

Where do you think they learnt this trick? Years lurking on HN and this post's comment section wins #1 on the American Hypocrisy chart. Unbelievable that even in the current US people can't recognize when they're looking in the mirror. But I guess you're disincentivized to do so when most of your net worth stems from exactly those companies and those practices.

amunozo2 months ago

The motive is to destroy American supremacy in AI; it's not that deep. That is much easier to do by open-sourcing the models than by competing directly, and it can have good ramifications for everybody, even if the motive is "bad".

tehjoker2 months ago

the motive is to prevent US dominance of this space, which is a good thing

mac-attack2 months ago

And the next question is what they have done with power historically, and what they are liable to do with said power in the future. Limiting the scope to AI is shortsighted and doesn't speak to the concerns people have beyond an AI race.

tehjoker2 months ago

It's a fair question, but my view of America's influence on world affairs has been dismal. China by contrast has not had a history of invading its neighbors, though I strongly criticize their involvement in the American attack on Cambodia and Vietnam (China supported the Khmer Rouge and briefly invaded Vietnam but was quickly pushed back, a reason Mao is sometimes criticized as having a good early period and a bad late period).

Meanwhile, America has been causing death and destruction around the world. It's easy to make lists: Vietnam, Iraq, Gaza, Cuba, South and Central America etc etc.

mac-attack2 months ago

I am an American and can appreciate the shortcomings of my country. I also have the ability to see the shortcomings of China as well. Do you see the irony that I asked to reflect on China's history and you instead list things you don't like about the US?

gazaim2 months ago

*Communist Party of China (CPC)

v0y4g3r2 months ago

You nailed it

sidcool2 months ago

Can someone kind please ELI5 this paper?

HarHarVeryFunny2 months ago

They've developed a sparse attention mechanism (which they document and release source code for) to increase model efficiency with long context, as needed for fast & cost-effective extensive RL training for reasoning and agentic use

They've built a "stable & scalable" RL protocol - more capable RL training infrastructure

They've built a pipeline/process to generate synthetic data for reasoning and agentic training

These all combine to build an efficient model with extensive RL post-training for reasoning and agentic use, although they note work is still needed on both the base model (more knowledge) and post-training to match frontier performance.

0xedd2 months ago

[dead]

samir1237662 months ago

nice

nimchimpsky2 months ago

Pretty amazing that a relatively small Chinese hedge fund can build AI better than almost anyone.

Havoc2 months ago

Yeah, they've consistently delivered. At the same time, there are persistent whispers that they're not as small and scruffy as portrayed.

astrange2 months ago

Anthropic also said their development costs aren't very different.

JSR_FDED2 months ago

And gives it away for free!

Foobar85682 months ago

At least, there is no doubt where he is from! (The answer came back in Chinese.)

which version are you?

I am the latest version of the DeepSeek model! If you want the specific version number, I suggest you:

    Check the official documentation - the DeepSeek website and docs will have the most accurate version information

    Follow official announcements - version updates are usually published through official channels

    Check the app store / web version - the interface usually shows the current version
I have all of DeepSeek's latest features, including:

    Strong conversation and reasoning capabilities

    128K context length

    File upload handling (images, documents, etc.)

    Web search (must be enabled manually)

    Completely free to use
If you need the exact version number for technical integration or another specific purpose, it's best to consult the official technical documentation, which will have the most accurate and detailed specifications.

Is there anything else I can help you with?

schlauerfox2 months ago

It's so strange when one of these models obviously hits a preprogrammed non-answer. How can you ever trust them when there is a babysitter that interferes with the actual answer? I suppose asking what version it is isn't well represented in its training data, so it's trained to say "check the documentation", but it's still suspicious when it gives a non-answer.

wosined2 months ago

Remember: If it is not peer-reviewed, then it is an ad.

Havoc2 months ago

Good general approach, but DeepSeek has thus far always delivered. And not just delivered, but under an open license too. "Ad" as the starting assumption seems overly harsh.

vessenes2 months ago

I mean.. true. Also, DeepSeek has good cred so far on delivering roughly what their PR says they are delivering. My prior would be that their papers are generally credible.