
NeurIPS 2025 Best Paper Awards

72 points · 7 hours ago · blog.neurips.cc
gradascent · 2 hours ago

From the figure in the first paper listed:

> Responses to the query “Write a metaphor about time” clustered by applying PCA to reduce sentence embeddings to two dimensions. […] The responses form just two primary clusters: a dominant cluster on the left centered on the metaphor “time is a river,” and a smaller cluster on the right revolving around variations of “time is a weaver.”

I just gave Gemini 3 the same prompt and got something quite different:

> Time is a patient wind against the cliff face of memory. It does not strike with a hammer to break us; it simply breathes, grain by grain, until the sharp edges of grief are smoothed into rolling hills, and the names we thought were carved in stone are weathered into soft whispers.
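For reference, the procedure the quoted caption describes (PCA reducing sentence embeddings to two dimensions for clustering) can be sketched with plain NumPy; the embeddings below are random toy stand-ins, not real model outputs:

```python
import numpy as np

# Toy stand-ins for sentence embeddings of 8 responses, 16-dim each.
rng = np.random.default_rng(0)
emb = rng.normal(size=(8, 16))

# PCA via SVD: center the vectors, then project onto the top-2
# right singular vectors to get 2-D coordinates for clustering.
centered = emb - emb.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
coords = centered @ vt[:2].T  # shape (8, 2)
```

In the paper's figure, plotting these 2-D coordinates is what reveals the two response clusters.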

Scene_Cast2 · 6 hours ago

I think my favorite of the bunch is the "Does Reinforcement Learning Really Incentivize Reasoning Capacity in LLMs Beyond the Base Model" paper. Easy to read, gets the point across very intuitively and quickly, and the point is very interesting and relevant to a lot of people.

About the Superposition paper - this is close to what I've been thinking about over the past week. I'm thinking that concepts or choices in a "superposition" are harder for a fully-differentiable neural net to reason about. For example, if there's a "green" vs "purple" choice to be made, it can't fully commit to either (especially if the odds are 50-50), and it has to reason about both simultaneously (difficult due to the nonlinear manifold space). Discretizing to tokens (a non-differentiable argmax) forces a choice, and that lets it reason about a single concept separately and more easily.
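The distinction described above can be sketched in a few lines of NumPy; the "green"/"purple" embeddings and the 50-50 logits are hypothetical toy values, not anything from the paper:

```python
import numpy as np

# Two toy concept embeddings and an exactly 50-50 preference.
green = np.array([1.0, 0.0])
purple = np.array([0.0, 1.0])
probs = np.array([0.5, 0.5])

# Differentiable path: the hidden state is a soft mix of both
# concepts, so downstream layers see a "superposed" vector.
superposed = probs @ np.stack([green, purple])  # [0.5, 0.5]

# Token path: argmax commits to one concept (ties break toward
# the first index), so downstream reasoning sees a single vector.
committed = [green, purple][int(np.argmax(probs))]  # green
```

The soft path carries both options forward at once, while the discretized path hands later computation a single committed concept.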

ilaksh · 5 hours ago

Does anyone know of a similar award for papers that are innovative? Like new, relatively unproven architectures?

chermi · 5 hours ago

Interesting that 3 of the names I recognized are physicists from stat-mech-adjacent fields. They continue to punch above expectations (as sampled by the general dismissal of physicists in AI/ML on HN and Reddit).

mnky9800n · 2 hours ago

Do people not like physicists?

chatmasta · 4 hours ago

Some of the best software engineers I know are ex-physics PhDs… it’s one of those “can’t fake it” skillsets that also happens to have high transferability to ML/AI fields. On the other hand, I snuck through the CS major without ever multiplying a matrix.

miki123211 · 2 hours ago

> I snuck through the CS major without ever multiplying a matrix

I didn't, but only because I became personally interested in AI/ML at some point, so I actually had to learn it myself.

As an AI practitioner, I still couldn't explain eigenvectors or singular-value decomposition to you though.

ctxc · 3 hours ago

Haha, nice bio. Seeing that font on HN is quite a shock.

niceguy4 · 4 hours ago

Are there any talks about these papers on YouTube or somewhere? I think I find it easier to listen and watch than to read, or maybe I'm just lazy, not sure.

neves · 4 hours ago

The conference had interesting lectures. Will they be posted online?