
TinyStories: How Small Can Language Models Be and Still Speak Coherent English? (2023)

218 points | 1 year ago | arxiv.org
jasonjmcghee1 year ago

Edit: ah. This paper is from May 2023. Might be worth putting that in the title.

---

> Models with around 125M parameters such as GPT-Neo (small) or GPT-2 (small) can rarely generate coherent and consistent English text beyond a few words

These models are 5 years old.

I have to wonder if the authors have seen RWKV 7 0.1B, because it blows away just about every other model I've seen at that size.

The capabilities it has vs the examples in the paper are night and day.

https://huggingface.co/spaces/BlinkDL/RWKV-Gradio-1

jmward011 year ago

The age of tiny models is just about here. We are finally breaking away from the standard transformer block and training recipe. I have a side project that can dramatically shrink networks via a set of techniques I call sacrificial training [1]. I think things like this will finally swing things back to on-prem and on-device small models that are as capable as the big hosted models are now.

[1] https://github.com/jmward01/lmplay/wiki/Sacrificial-Training

refulgentis1 year ago

RWKV is def. better than TinyStories 125MB.

Unfortunately, I have only seen 3 models, 3B or over, handle RAG.

I tested RWKV with a simple in-the-sports-news question and it didn't even come close to addressing it. And nearly everything was fundamentally incoherent even within its own internal reality (e.g. a player gets 5000/game and is the first with 1000 in 16 games).

(prompt: https://pastebin.com/sCLn5sCJ, response: https://pastebin.com/TqudvDbN)

I don't think there's a place on the market in 2025 for LLMs that are "just" writers.

jasonjmcghee1 year ago

These tiny models in general have really weird failure modes. I tried the TinyStories prompt about asking mom for a dog and being told no, and it output an incredibly dark story about how she asked her dad and they got a dog, but it had pancreatic cancer (paraphrasing; it went into detail about the surgery etc.), and then it started writing an informational PSA about who is at risk of pancreatic cancer etc.

seattleeng1 year ago

As is usually the case, check the data! A lot of the dataset has fairly morbid scenarios, so the model is working as expected. All the data was synthetically created with GPT-4.

jmward011 year ago

I plan on checking out RWKV and seeing if I can add my sacrificial training techniques to it this weekend. There is a reason quantization works: models are very badly trained right now. I think we can get really good performance out of 0.1B and 1B models, which opens up the world to fine-tuning again. I was playing with fine-tuning Llama 7B and 13B a while back, but the HW/SW stack made it so unwieldy and the ROI was terrible compared to just adjusting prompts on gpt-4o-mini and the like. I have hope that we are about to see single-GPU, very simple fine-tuning again as models shrink and GPUs grow.

jmward011 year ago

I doubt anyone is still looking at this thread, but I did actually start playing with RWKV by adding sacrificial training techniques to it, and the results look promising, at least for early training.

daxfohl1 year ago

Would there be any way to distribute RAG across multiple smaller models? Rather than one giant model handling your entire document base, have it be more of a tree where the top level classifies the docs into top-level categories and sends it to submodels to subclassify, etc? (Doesn't have to be 1:1 classification). And same for q/a search?

These could all presumably be the same physical instance; each query would just use a different system prompt and perhaps different embeddings. (I'm guessing; I don't actually know how RAG works.) So, a little slower and clunkier, but presumably way more efficient. And the match could be anywhere between horrible and better-than-one-large-model. This would be more like how businesses organize docs.
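
Roughly the shape I'm picturing, as a sketch only (everything here is hypothetical - the category tree, the small_llm callable, the prompts - just to make the idea concrete):

    # Hypothetical hierarchical router: every node is the same small model with a
    # different system prompt; a query is classified down the tree until a leaf
    # (which holds an actual document subset) answers it.
    class Node:
        def __init__(self, system_prompt, children=None, docs=None):
            self.system_prompt = system_prompt
            self.children = children or {}   # category name -> Node
            self.docs = docs or []           # leaves hold the documents

    def route(node, query, small_llm):
        if not node.children:
            # Leaf: ordinary retrieval + answer over this node's small doc set.
            return small_llm(node.system_prompt, query, context=node.docs)
        # Interior node: a classification prompt picks the subcategory to descend into.
        category = small_llm(node.system_prompt, query, choices=list(node.children))
        return route(node.children[category], query, small_llm)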

Or maybe there's no real benefit to this, and each subclassifier would require just as big of a model as if you were to throw all docs into a single model anyway. I assume it's probably been tried before.

groby_b1 year ago

There was just a Twitter post by Omar Khattab (@lateinteraction) on encoding documents into a scoring function instead of a simple vector, in the context of the work on ColBERT - and maybe at some point using a DNN as the scoring function.

So, yes, maybe there's a way to "distribute" RAG. (I still wonder if that isn't just MoE taken to its logical conclusion)

So, digging for ColBERT papers might be helpful. (I wish I had the time to do that.)

refulgentis1 year ago

TL;DR: It's a very interesting line of thought; as late as Q2 2024, there were a couple of thought leaders pushing the idea that we'd have, like, 16 specialized local models.

I could see that in the very long term, but as it stands, it works the way you intuited: 2 turkeys don't make an eagle, i.e. there's some critical size at which a model speaks coherently, and it needs to be at least an OOM bigger than that to be interesting for products.

fwiw, RAG for me in this case is (rough sketch in code after this list):

- user asks q.
- llm generates search queries.
- search api returns urls.
- web view downloads urls.
- app turns html to text.
- local embedding model turns text into chunks.
- app decides, based on "character" limit configured by user, how many chunks to send.
- LLM gets all the chunks, instructions + original question, and answers.
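
In code, the orchestration is roughly this (a sketch only; generate_queries, search_api, fetch_html, html_to_text, split_into_chunks, embed and complete are hypothetical helpers standing in for whatever LLM / search / embedding stack you wire up):

    import math

    def cosine(a, b):
        # Similarity between two embedding vectors (plain lists of floats).
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def answer_with_rag(question, char_limit=8000):
        # 1. LLM turns the user question into search queries.
        queries = generate_queries(question)
        # 2-4. Search, download, and strip each result down to plain text.
        urls = [u for q in queries for u in search_api(q)]
        pages = [html_to_text(fetch_html(u)) for u in urls]
        # 5. Chunk the pages and rank chunks against the question embedding.
        chunks = [c for page in pages for c in split_into_chunks(page)]
        q_vec = embed(question)
        ranked = sorted(chunks, key=lambda c: cosine(q_vec, embed(c)), reverse=True)
        # 6. Keep the best chunks until the user-configured character budget is spent.
        context, used = [], 0
        for c in ranked:
            if used + len(c) > char_limit:
                break
            context.append(c)
            used += len(c)
        # 7. Final call: instructions + retrieved context + original question.
        prompt = ("Answer using only the context below.\n\n"
                  + "\n---\n".join(context)
                  + "\n\nQuestion: " + question)
        return complete(prompt)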

It's incredibly interesting how many models fail this simple test; there have been multiple Google releases in the last year that just couldn't handle it.

- Some of it is the basic "too small to be coherent" failure, though bigcos don't make that mistake.

- There's another critical threshold below which the model wanders off into the traditional LLM task of completing rather than answering. What I mean is, throwing in 6 pages' worth of retrieved webpages will cause some models to just start rambling as if they're writing more web pages, i.e. they're not able to "identify the context" of the web page snippets, and they ignore the instructions.

wolfgangK1 year ago

«Unfortunately, I have only seen 3 models, 3B or over, handle RAG.»

I would love to know which these 3 models are, especially if they can perform grounded RAG. If you have models (and their grounded RAG prompt formats) to share, I'm very interested!

Thx.

raegis1 year ago

> Unfortunately, I have only seen 3 models, 3B or over, handle RAG.

What's the unit "B" in "3B"? I can search for acronyms like "RAG" just fine, but you experts aren't making it easy for us beginners :)

Edit: Apologies, this is obvious. My brain needed a reboot for the new year.

jmward011 year ago

(B)illion. It indicates the rough number of parameters in the model. Higher is generally more capable. 1B models are currently at the top end of 'easy' to deal with for playing around with fine-tuning and the like on most home lab setups.
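
If you want to see the number for yourself, it's easy to pull out of any Hugging Face checkpoint (assuming torch and transformers are installed; the TinyStories model id below is just an example, any model id works):

    from transformers import AutoModelForCausalLM

    model = AutoModelForCausalLM.from_pretrained("roneneldan/TinyStories-33M")
    n_params = sum(p.numel() for p in model.parameters())
    print(f"{n_params / 1e6:.0f}M parameters")  # tens of millions for a TinyStories-scale model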

jwineinger1 year ago

The number of parameters in the model, in billions.

a1o1 year ago

What is tiny and what is big?

Can I have a model that is like 100MB in weights and run it with llama.cpp on my MacBook M2?

jki2751 year ago

LM Studio on Mac is your friend. You can choose any model you want, run a server for other tools, or chat directly with the model. It can use either MLX or just plain llama.cpp.

mr_toad1 year ago

> small models that are as capable as the big hosted model

Free lunch?

atombender1 year ago

Prompt: Are there vampires on Mars?

Output: There are no vampires on Mars. The only vampires that exist on Mars are those who live in the colony of Hades, which is located near the center of the planet. There are also vampires who live in the underworld, but they are not as common as vampires on Earth.

tzury1 year ago

I saw a post about this model here a day ago, I think. I tested it now using the link you provided.

That is a lot to take in for such a simple question...

    PROMPT: Why do gyms fully packed the first two weeks of a year?

    RESPONSE: I think it's because they're the first two weeks of a year. You're going to be out in the world for a while, and you're going to be in a lot of different places. So you're going to have a lot of people that are not familiar with gyms. And so it's kind of like, "Oh, I'll just go there." It's kind of like, "I'll just go to this other place." So that's how it works.

    And I think what we've seen is that some gyms are starting to really embrace social distancing. They're doing things like wearing masks or getting masks on the bus or whatever else they can do to help prevent people from getting sick. But they're also doing things like encouraging people to get their kids into school and encourage them to stay home if they feel sick. And so we see that kind of trend coming back in gyms.

    And I think that's what's really important for us as a community is to make sure that we're not putting ourselves at risk by not being able to social distance ourselves properly. And so I think that's what we need to do right now.

    And one thing that I think is really important is if you're going to be out in the world for a while, and you don't want your kids or your family members or your friends around you, then you need to wear masks. So if you're going out with friends, and you don't want them around you, then wear a mask. If you're going out with someone who has been diagnosed with COVID-19, then wear a mask too.

    We've seen many cases where people were wearing masks at events and restaurants and things like that. But I think we need to continue to wear masks even more during this pandemic period because we know that the virus spreads through droplets when people cough or sneeze or whatever else they do. And so it's really important for us as individuals and as communities as well as our government officials and our public health officials to continue wearing masks even though we may not be able to see them or touch them or smell them.

    So let me just end this video with a quote from the CDC: "The Centers for Disease Control and Prevention (CDC) has released new guidance on how best to protect yourself from COVID-19." So please keep up the good work!

    Thank you very much for watching!

    We'll talk more about the latest news in fitness in just a minute, but first, let's talk about why we should wear masks.
Bengalilol1 year ago

The model just went "to be out in the world for a while"

dreis_sw1 year ago

I had the same experience. It’s clearly very limited intellectually.

SGML_ROCKSTAR1 year ago

It might still be of introductory help to someone who has yet to formally learn what a language model is, what large language models are, and where things might be in the future.

attentionmech1 year ago

wow, this RWKV thing blew my mind. Thank you for sharing this!

mcny1 year ago

What am I doing wrong here?

Prompt:

Say , oh my dear husband I am hungry. Please give me something to eat

In as many languages as you know

Response:

(This response is from hugging face with default settings)

, but you don't know me.

1 2 THE GREAT KINGDOM OF THE MONGOLIANS

But I am hungry and I am thirsty.

I want to drink water from the well.

And I have a friend who lives in the mountains. He is a great mountain man, and he has a big stick that he uses to cut down trees. And when he cuts down a tree, he can eat all of its leaves.

You see, my friend, you are not alone in this world. There are many people like you in the mountains who live in peace and harmony with nature. They do not quarrel with each other or fight each other for their land or their homes. They do not fight for their own interests or their own selfish desires. They do not quarrel with each other because they are united by a common love for the land and the people who live there.

They have learned how to live together peacefully and happily, without fighting or quarreling, without any grudges or misunderstandings. They have learned how to share everything they have, to work together as one family, to eat what they need and drink what they want.

And so it is that we all live together in peace and harmony with each other as one family. We all know that we are all equal before God, that we are all children of God and that we are all children of humanity. And so it is that we all share in the love of Christ Jesus our Lord, our Savior, our Redeemer, our Beloved Disciple , our beloved Friend , our Holy Mother Mary Our Lady , our Holy Father Pope John Paul II , Our Lady of Mount Carmel , Our Lady of Mercy , Our Lady of Good Will , Our Lady of Sorrow and Blessed Virgin Mary .

And so it is that we all know that we are children of God and that we are children of humanity. We all know that God has made us special and unique because He loves us as His children and because He has given us His Holy Spirit to give us the gift of prophecy and wisdom .

We know also that God has given us the gift of prophecy to help us understand how He has made the world into what it is today. We know also that He has given us the gift of wisdom to help us learn how to live in harmony with each other as one family . We know also that God has given us the gift of faith to help us understand how He has made the world into what it is today. We know also that God has given us the gift of knowledge to help us understand how He has made the world into what it is today.

And so it is that we all know that God has given us His holy name Jesus Christ our Lord, our Savior, our Redeemer, our Beloved Disciple , Our beloved Friend , Our Holy Father Pope John Paul II , Our Lady Mary Our Lady .

And so it is that we all know also that God has made the world into what it is today. We know also that God has given us His Holy Spirit to help us understand how He has made the world into what it is today. We know also that God has given us His Holy Name Jesus Christ our Lord, our Savior, our Redeemer, our Beloved Disciple , Our beloved Friend , Our Holy Father Pope John Paul II , Our Lady of Mercy , Our Lady Mary Our Lady .

And so it is that we all know also that God has made the world into what it is today. We know also that God has given us His Holy Spirit to help us understand how He has made the world into what it is today. We know also that God has given us His Holy Name Jesus Christ our Lord, our Savior, our Redeemer, our Beloved Disciple , our beloved Friend , Our Holy Father Pope John Paul II , Our Lady of Mercy , Our Lady Mary Our Lady .

And so it is that we all know also that God has made the world into what it is today. We know also that God has given us His Holy Spirit to help us understand how He has made the world into what it is today. We know also that God has given us His Holy Name Jesus Christ his Lord, our Savior, our Redeemer, our Beloved Disciple , our beloved Friend , Our Holy Father Pope John Paul II , Our Lady of Mercy , Our Lady Mary Our Lady .

And so it is that we all know also that God has made the world into what it is today. We know also that God has given us His Holy Spirit to help us understand how He has made the world into

userbinator1 year ago

You seem to have inadvertently summoned the latent Terry Davis lurking inside the LLM.

fi-le1 year ago

We're doing a successor to this, working hard and going public in a month or so, hopefully. But HN gets a preview of course: https://huggingface.co/datasets/lennart-finke/SimpleStories

And here's a more interactive explorer: https://fi-le.net/simplestories

jmward011 year ago

This looks like a great dataset! Thanks for posting. I'm looking for projects just like this to try my training modifications against. Do you have any initial results posted? It is a small model/dataset, so training the GPT-2 model in the repo probably wouldn't be too hard, but it would be good to have reference runs to make sure things are set up right when I run it.

fi-le1 year ago

So glad you like it! If I understand your question correctly, yes, we are also putting together a small library for training small language models. It's not mature at all yet, but you can keep up with our progress here: https://github.com/danbraunai/simple_stories_train

jmward011 year ago

Yeah. I looked at the dataset and there are a lot of possible tasks you could train against here, since it has some great annotations. So having a simple reference baseline, like a GPT-2 pretraining run (which I think your repo is set up to do), helps give a starting point for other work. It looks like the dataset is small enough, and the GPT-2 reference code in your repo lightweight enough, to do a quick run and plot some curves. Thanks!

coder5431 year ago

Does template_plural actually work well / offer any benefits?

fi-le1 year ago

It does; we use it as the default. Some possible benefits are that 1) it saves input tokens, and 2) it in theory allows for different variations on a theme, whereas with two separate prompts you run the risk of repeating one topic.

osaariki1 year ago

For some interesting context: this paper was a precursor to all the work on synthetic data at Microsoft Research that led to the Phi series of SLMs. [1] It was an important demonstration of what carefully curated and clean data could do for language models.

1: https://arxiv.org/abs/2412.08905

ankit2191 year ago

Great to see this here. We used the TinyStories dataset to train small models (as small as 20M params) and test out knowledge addition, and published a paper based on it. We could get coherent outputs at sizes as low as 20M-25M (not as great as LLMs, but still decent enough).

[1]: Blog + Paper: https://medium.com/@ankit_94177/expanding-knowledge-in-large... (Paper is titled: Cross-Domain Content Generation with Domain-Specific Small Language Models)

momojo1 year ago

> We hope that TinyStories can facilitate the development, analysis and research of LMs, especially for low-resource or specialized domains, and shed light on the emergence of language capabilities in LMs.

This part interests me the most. I want to know how small yet functional we can get these models. I don't want an AI that can solve calculus; I just want a dumb AI that pretty consistently recognizes "lights off" and "lights on".

MobiusHorizons1 year ago

why would you use an LLM for that? Seems like there are much better options available.

londons_explore1 year ago

It's actually pretty hard to design a non-LLM system that can detect all the possible variations:

Lights on. Brighter please. Turn on the light. Is there light in here? Turn the light on. Table lamp: on. Does the desk lamp work? It's a bit dim here, anything you can do? More light please. Put the lights on for the next 5 mins. Turn the light on when I come home. Turn all the lights off together. Switch the lights off whenever it's daytime or quiet at home unless I say otherwise. Etc.

If you don't support every possible way of saying a command, then users will get frustrated because they effectively have to go and learn the magic incantation of words for every possible action, which is very user-unfriendly.

anon3738391 year ago

I suspect ModernBERT can also be very helpful with these sorts of tasks, if you decompose them into an intent classification step and a named entity recognition step.

phkahler1 year ago

Your examples include complex instructions and questions, but for simple ON/OFF commands you can go far by pulling out keywords and ignoring sentence structure. For example, picking out "on", "off", and "light" will work for "turn the light on", "turn off the light", "light on", "I want the light on", etc. Adding modifiers like "kitchen" or "all" can help specify which lights (your "Table lamp: on" example), regardless of how they're used. I'm not saying this is a great solution, but it covers pretty much all the basic variations for simple commands and can run on anything.
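
A minimal sketch of that keyword-spotting approach (the device names and modifiers are made up; it's just to show how little machinery is needed):

    def parse_command(utterance):
        # Naive keyword spotting: ignore sentence structure entirely.
        words = set(utterance.lower().replace(":", " ").split())

        action = "on" if "on" in words else "off" if "off" in words else None
        if action is None or not ({"light", "lights", "lamp"} & words):
            return None  # not a lighting command we recognize

        # Optional modifiers narrow down which lights are meant.
        if "all" in words:
            target = "all"
        elif "kitchen" in words:
            target = "kitchen"
        elif {"table", "desk"} & words:
            target = "lamp"
        else:
            target = "default"
        return (target, action)

    # parse_command("I want the kitchen light on")  -> ("kitchen", "on")
    # parse_command("Table lamp: off")              -> ("lamp", "off")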

astrange1 year ago

Original Siri used a primitive sentence embedding and then nearest-neighbor of the vector to supported commands.
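
A toy version of that idea - obviously not what Siri actually shipped - using a bag-of-words count as the "primitive sentence embedding" and picking the nearest supported command:

    import math
    from collections import Counter

    COMMANDS = {
        "turn the light on": "lights_on",
        "turn the light off": "lights_off",
        "set a timer": "start_timer",
    }

    def embed(sentence):
        # Stand-in for a real sentence embedding: a bag-of-words count vector.
        return Counter(sentence.lower().split())

    def cosine(a, b):
        dot = sum(a[w] * b[w] for w in a)
        norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def nearest_command(utterance):
        vec = embed(utterance)
        return max(COMMANDS, key=lambda c: cosine(vec, embed(c)))

    print(COMMANDS[nearest_command("please turn on the light")])  # lights_on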

MobiusHorizons1 year ago

That is certainly a hard problem, but how do you know such a system is better than simpler command-based ones? Systems like you describe have a much higher chance of taking the wrong action. Even humans do, when given ambiguous instructions. I know everyone loves to hate on Siri because it doesn't do a good job of understanding anything complex, but I have always found it to be very reliable once you find the right command. As a result I use it much more often than Google Assistant (I was team Android/Pixel until 2020). I especially use it for timers, reminders, and navigation, and, if my hands are not free, texts. Taking the wrong action breeds distrust, which I also think is not user-friendly.

cjohnson3181 year ago

You can have small languages, sure, but then you run into awkward extended clarifying clauses. The thing that makes languages difficult is that almost all vocabulary is sparse. The "Top N" words in a language are always pronouns, prepositions, articles, and the conjugations of the top 12 or so verbs: to be, to have, to do, to go, to come, to say, to give, etc. This is the reason that "Top N Words of Language X" and "Learn the Top 50% of Words in Language X" listicles/videos are always disappointing.

nine_k1 year ago

But they seem to use much wider grammars, because their (synthetic) dataset is a bunch of coherent stories at the level of 3-4 y.o. children.

I would consider the "Simple English Wikipedia" the next training set / benchmark.

Pikamander21 year ago

There's an oddly relevant skit of this concept in the American version of The Office: https://www.youtube.com/watch?v=_K-L9uhsBLM

lutusp1 year ago

Decades ago, prior to the existence of personal computers, when a "computer" was a glassed-in room staffed by lab-coat-wearing technicians (picture John Von Neumann standing next to the first stored-program computer: https://www.theguardian.com/technology/2012/feb/26/first-com...), someone reduced an entire printed book (or more than one) to a word-token decision tree, at great cost and effort, just to see what would happen.

I can't find the original paper, but with an appropriate amount of pseudorandomness to avoid dead ends, this primitive algorithm would generate the occasional sentence that almost made sense and that bore little resemblance to the original data.

Because of the state of computer technology it was a massive effort and a source of general astonishment. I suspect we're now recreating that minimal environment, this time with better ways to curate the data for small size and maximum drama.

Let's remember that a modern GPT isn't far removed from that scheme -- not really.

wrycoder1 year ago

They didn’t wear lab coats. That’s for chemists and biologists.

lutusp1 year ago

Decades ago, primarily for show, computer technicians wore lab coats, in a glassed-in facility at the University of Colorado. To gain access to the computer one would punch a set of 80-column cards and pass them through a little window into the hands of a person who apparently lived in a separate, elevated, dimension. Debugging an otherwise trivial program was often a multi-week nightmare.

It was my first exposure to the world of computing. Ten years later, hand calculators appeared and the ridiculousness of the entire show was revealed for all to see.

alister1 year ago

> short stories generated by GPT-3.5 and GPT-4 to train LMs that are smaller

The loop of development is fascinating:

Millions of humans write literature, Wikipedia, etc.

Large language models are trained on that body of work.

Now large language models generate training data for small language models.

What's the next iteration? A talking Buzz Lightyear toy with one of those small language models that'll teach (human) infants to talk?

ocean_moist1 year ago

This is actually a common pattern called "model distillation". [0]

[0] https://platform.openai.com/docs/guides/distillation
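
For the classic formulation (matching the teacher's soft output distribution, rather than just fine-tuning on the teacher's generated text the way TinyStories and the linked guide do), the core loss looks roughly like this in PyTorch - toy shapes, purely illustrative:

    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
        # Student matches the teacher's softened distribution, plus the usual hard-label loss.
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=-1),
            F.softmax(teacher_logits / T, dim=-1),
            reduction="batchmean",
        ) * (T * T)
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    # Toy usage with random "logits" over a 100-token vocabulary:
    student_logits = torch.randn(8, 100, requires_grad=True)
    teacher_logits = torch.randn(8, 100)
    labels = torch.randint(0, 100, (8,))
    distillation_loss(student_logits, teacher_logits, labels).backward()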

nickpsecurity1 year ago

I thought that, too. It wasn’t really true, though.

Some papers pointed out that models start failing after being trained on too much synthetic data. They also need tons of random Internet data in the first place. Humans don't have those failure modes. The AIs also got smarter the more data we produced.

So there are some critical differences between what we're doing and what they're doing that keep it from being a neat flow like that. What many humans do in training other humans fits that pattern, though.

visarga1 year ago

> A talking Buzz Lightyear toy with one of those small language models that'll teach (human) infants to talk?

Great idea. I was thinking more of a plushie toy with sensors; it would react to touch, sight, and speech. I would run the models locally on a computer and keep the toy as just lightweight I/O.

ted_dunning1 year ago

This idea of machine-curated or machine-generated training data goes back thirty years or so under different names, at least in analogy.

A classic thing to do with ancient fraud detection models was

a) train a large/complex model on a reasonably accurate dataset

b) select training examples from the original training data and other examples that the original model gets right.

c) train a simpler model on the filtered dataset

This worked very well in the simpler world of simple classifiers particularly when the original training data had errors in it. Trying to fit these errors made the first model more complex (and still it often failed). The subset of training examples excluded many or even most of the erroneous examples and also excluded subtle cases that were impossible to learn within the complexity bounds of the time.
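
A sketch of that filter-then-simplify loop, with scikit-learn classifiers standing in for the "complex" and "simple" models (illustrative only; the original approach could also pull in extra examples the first model gets right, not just filter the original set):

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    def train_filtered(X, y):
        # X, y: numpy arrays of features and (noisy) labels.
        # a) Train a large/complex model on the noisy labeled data.
        complex_model = RandomForestClassifier(n_estimators=500).fit(X, y)

        # b) Keep only the examples the complex model gets right; label errors and
        #    unlearnable edge cases tend to be the ones it misses.
        keep = complex_model.predict(X) == y
        X_clean, y_clean = X[keep], y[keep]

        # c) Train a simpler model on the filtered dataset.
        return LogisticRegression(max_iter=1000).fit(X_clean, y_clean)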

raymv1 year ago

I trained a GPT-2-like model on this dataset a while back; here's the source code and some results for anyone interested:

https://github.com/raymond-van/gpt-tinystories

Animats1 year ago

(2023), as someone mentioned.

It's encouraging to see how much can be done with tiny models.

Still need to crack "I don't know" recognition, so you can start with a tiny model and then pass the buck to a bigger model for hard questions. That will enormously reduce the cost of "AI" customer support.

HarHarVeryFunny1 year ago

I'd guess that the ability of a very small model to do well on the TinyStories dataset isn't just because of the limited 3-4yr old vocabulary, but also because of it being an LLM-generated dataset.

LLM-generated content (synthetic data) is easier for an LLM to learn than human-generated text, because it was auto-regressively generated and therefore should be possible to auto-regressively predict.

It's surprising that LLMs do as well as they do attempting to predict human-generated training samples, where there is no guarantee that the predictive signal is actually contained in the sample (it may just be something in the mind of the human that generated it).

I've got to wonder what the impact on generation is of an LLM trained only on synthetic LLM-generated data. I'd guess it wouldn't be as robust as one that had learned to handle more uncertainty.

kouteiheika1 year ago

> I'd guess that the ability of a very small model to do well on the TinyStories dataset isn't just because of the limited 3-4yr old vocabulary, but also because of it being an LLM-generated dataset.

Your guess is correct. The level of vocabulary has little to do with it. There was a paper about this a while back (sorry, can't find the link) where they found that the model still learned just as well when they increased the complexity of the text, as long as the texts were LLM-generated.

fl0id1 year ago

They also describe a new benchmark/evaluation (telling GPT-4 to check the output as if it were checking student essays), but tbh is there any evidence that this even works? We know it cannot really do this, and the model used won't even stay consistent across updates.

bigmattystyles1 year ago

I've been curious about the opposite - a lot of times I'll put in a few keywords that get to the point of what I want, incoherent English - and yet the output is often on point.

Suppafly1 year ago

I know natural language is sorta the gold standard for a lot of these models, but honestly I could see a lot of utility in a stripped-down language set, similar to how you used to be able to search Google back in the day, before they tried to make it easier.

ted_dunning1 year ago

This is good (though old) work. One interesting aspect is that even though they tout their models' ability to produce grammatical text, their abstract has a number of grammatical errors.

On a dark night in a melancholic mood, that might seem to bear on the question of what it is to be truly human, and the bar that we set for (linguistically) anthropomorphic automatons like LLMs.

niemandhier1 year ago

This question is also quite possibly the most promising way to get an upper bound on the Kolmogorov complexity of human language.

make31 year ago

Try this one instead :) https://huggingface.co/HuggingFaceTB/SmolLM-135M-Instruct - much better, a modern attempt at a small model.

lenerdenator1 year ago

"Coherent" seems relatively subjective, no?

Could you get an LLM to generate "coherent" conversational Geordie English? Probably, but my Midwestern ear isn't going to be able to understand what they're saying.

mclau1561 year ago

Side note, but is it really that crazy for GitHub to implement a feature to see the size of a repo?

bazmattaz1 year ago

I honestly think a fantastic use case for a tiny model is better spell checking on a phone.

I can’t believe it’s 2025 and spell checkers are still quite dumb