
Scaling Latent Reasoning via Looped Language Models

84 points | 3 days ago | arxiv.org
kelseyfrog 3 days ago

If you squint, it's a fixed-iteration ODE solver. I'd love to see a generalization of this, and of the Universal Transformer it mentions, re-envisioned as flow-matching/optimal transport models.
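
Concretely, if the looped block is residual, each loop iteration is one explicit Euler step of dx/dt = f(x). A rough NumPy sketch of that equivalence (the shared block, dimensions, and step size here are made up, not from the paper):

    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.1, size=(16, 16))   # one shared weight matrix, reused every loop

    def shared_block(x):
        # stand-in for the weight-tied transformer block f(x)
        return np.tanh(x @ W)

    def looped_forward(x, n_loops):
        for _ in range(n_loops):
            x = x + shared_block(x)            # residual update with shared weights
        return x

    def euler_solve(x, n_steps, h=1.0):
        for _ in range(n_steps):
            x = x + h * shared_block(x)        # explicit Euler step of dx/dt = f(x)
        return x

    x0 = rng.normal(size=(1, 16))
    assert np.allclose(looped_forward(x0, 4), euler_solve(x0, 4, h=1.0))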

cfcf14 3 days ago

This makes me think it would be nice to see some kind of child of the modern transformer architecture and neural ODEs. There was such interesting work a few years ago on how neural ODEs/PDEs can be seen as a continuous limit of layer depth. Maybe models could learn interesting things if the embeddings were solutions of a learned dynamical system.

kevmo314 3 days ago

How would flow matching work here? In language we have inputs and outputs, but it's not clear what the intermediate points would be, since it's a discrete space.

Etheryte 3 days ago

One of the core ideas behind LLMs is that language is not treated as a discrete space, but as a continuous, high-dimensional embedding space where you can interpolate as needed. It's one of the reasons LLMs readily make up words that don't exist when translating text, for example.
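
A toy illustration of that interpolation point (the vocabulary and 2-d embeddings are made up; real models use learned embeddings with hundreds or thousands of dimensions):

    import numpy as np

    vocab = ["cat", "dog", "kitten", "puppy"]
    emb = np.array([[1.0, 0.0],
                    [0.0, 1.0],
                    [0.9, 0.2],
                    [0.2, 0.9]])               # toy 2-d token embeddings

    def nearest_token(v):
        # snap a continuous point back to the closest token
        return vocab[int(np.argmin(np.linalg.norm(emb - v, axis=1)))]

    midpoint = 0.5 * (emb[0] + emb[1])         # interpolate between "cat" and "dog"
    print(nearest_token(midpoint))             # lands on a nearby token, not gibberish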

kevmo314 3 days ago

Not the input and output though, which is the important part for flow matching modeling. Unless you're proposing flow matching over the latent space?


the8472 3 days ago

Does the training process ensure that the intermediate steps remain interpretable, even on larger models? That is, that we don't end up with alien gibberish in all but the final step?

oofbey 3 days ago

Training doesn't encourage the intermediate steps to be interpretable. But they are still in the same token vocabulary space, so you could decode them. They'll probably just be wrong.
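
If you wanted to peek, a logit-lens-style pass does roughly this: project each intermediate hidden state through the same lm_head used for the final state. A hedged sketch using a stock Hugging Face model as a stand-in (the looped model's loop states would slot in where the per-layer states are here; model name and prompt are placeholders):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "gpt2"                                        # placeholder model
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    inputs = tok("The capital of France is", return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)

    for depth, h in enumerate(out.hidden_states):
        logits = model.lm_head(h[:, -1])                 # decode the last position
        print(depth, tok.decode(logits.argmax(-1)))      # early "steps" are often wrong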

the8472 3 days ago

The token vocabulary space is a hull around human communication (emoji, mathematical symbols, Unicode scripts, ...); inside that hull there's a lot of unused representation space that an AI could use to encode internal state. So this seems like a bad idea from a safety/oversight perspective.

https://openai.com/index/chain-of-thought-monitoring/

oofbey 3 days ago

What is a bad idea? Allowing reasoning to happen in continuous space instead of discrete token space? This paper can be seen as a variant of the Coconut models (continuous chain of thought). Continuous reasoning is certainly more efficient when it works. Lack of interpretability makes certain safety systems harder to enforce. Is that your point?
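
For reference, the Coconut idea is roughly: instead of sampling a token and re-embedding it at each reasoning step, feed the last hidden state straight back in as the next input embedding, so the chain of thought stays continuous. A hedged sketch, not Coconut's actual training setup (model name, prompt, and the number of latent steps are placeholders):

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    name = "gpt2"                                         # placeholder model
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)

    ids = tok("2 + 2 =", return_tensors="pt").input_ids
    embeds = model.get_input_embeddings()(ids)

    for _ in range(3):                                    # latent "thought" steps, never decoded
        with torch.no_grad():
            out = model(inputs_embeds=embeds, output_hidden_states=True)
        thought = out.hidden_states[-1][:, -1:, :]        # continuous thought vector
        embeds = torch.cat([embeds, thought], dim=1)      # feed it back as the next input

    with torch.no_grad():
        logits = model(inputs_embeds=embeds).logits[:, -1]
    print(tok.decode(logits.argmax(-1)))                  # only the final step becomes a token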

the8472 3 days ago

+1

lukebechtel 3 days ago

so it's:

output = layers(layers(layers(layers(input))))

instead of the classical:

output = layer4(layer3(layer2(layer1(input))))

oofbey 3 days ago

Yeah if layers() is a shortcut for layer4(layer3(layer2(layer1(input)))). But sometimes it’s only

output = layers(input)

Or

output = layers(layers(input))

Depends on how difficult the token is.

remexre 3 days ago

Or more like,

    x = tokenize(input)
    i = 0
    do {
      finish, x = layers(x)
    } while(!finish && i++ < t_max);
    output = lm_head(x)

oofbey 2 days ago

That’s closer still. But even closer would be:

    x = tokenize(input)
    i = 0
    finish = 0
    do {
      p, x = layers(x)
      finish += p
    } while(finish < 0.95 && i++ < t_max);
    output = lm_head(x)

Except the accumulation of the stop probabilities isn't linear like that; it's more like a weighted coin model, where what you track is the chance the model hasn't stopped yet.
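
Something like this, as a hedged sketch (the layers()/lm_head() interfaces follow the pseudocode above; the 0.05 threshold and the stand-in functions are made up):

    def looped_forward(x, layers, lm_head, t_max=8, threshold=0.05):
        still_running = 1.0                    # P(not stopped yet) = prod_j (1 - p_j)
        for _ in range(t_max):
            p, x = layers(x)                   # p in [0, 1]: stop probability this loop
            still_running *= (1.0 - p)         # weighted-coin accumulation
            if still_running < threshold:      # ~95% chance we should have stopped by now
                break
        return lm_head(x)

    # Toy usage with stand-in functions:
    layers = lambda x: (0.4, x + 1)            # always 40% stop prob, bumps the state
    lm_head = lambda x: f"decoded({x})"
    print(looped_forward(0, layers, lm_head))  # stops once prod(1 - p) drops below 0.05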