
Gas Town's agent patterns, design bottlenecks, and vibecoding at scale

403 points | 15 days ago | maggieappleton.com
mediaman 15 days ago

I don't get the widespread hatred of Gas Town. If you read Steve's writeup, it's clear that this is a big fun experiment.

It pushes and crosses boundaries, it is a mixture of technology and art, it is provocative. It takes stochastic neural nets and mashes them together in bizarre ways to see if anything coherent comes out the other end.

And the reaction is a bunch of Very Serious Engineers who cross their arms and harumph at it for being Unprofessional and Not Serious and Not Ready For Production.

I often feel like our industry has lost its sense of whimsy and experimentation from the early days, when people tried weird things to see what would work and what wouldn't.

Maybe it's because we also have suits telling us we have to use neural nets everywhere for everything Or Else, and there's no sense of fun in that.

Maybe it's the natural consequence of large-scale professionalization, of stock option plans and RSUs and levels and sprints and PMs: today's gray hoodie is just the gray suit of the past, updated, with no less dryness of imagination.

hyperpape 15 days ago

> If you read Steve's writeup, it's clear that this is a big fun experiment.

So, Steve has the big scary "YOU WILL DIE" statements in there, but he also has this:

> I went ahead and built what’s next. First I predicted it, back in March, in Revenge of the Junior Developer. I predicted someone would lash the Claude Code camels together into chariots, and that is exactly what I’ve done with Gas Town. I’ve tamed them to where you can use 20–30 at once, productively, on a sustained basis.

"What's next"? Not an experiment. A prediction about how we'll work. The word "productively"? "Productively" is not just "a big fun experiment." "Productively" is what you say when you've got something people should use.

Even when he's giving the warnings, he says things like "If you have any doubt whatsoever, then you can’t use it", implying that it's ready for the right sort of person to use, or "Working effectively in Gas Town involves committing to vibe coding," implying that working effectively with it is possible.

Every day I go on Hacker News and see the responses to a post where someone's blog post carries an inconsistent message like this one does.

If you say two different and contradictory things, and do not very explicitly resolve them, and say which one is the final answer, you will get blamed for both things you said, and you will not be entitled to complain about it, because you did it to yourself.

an0malous 15 days ago

I agree, I’m one of the Very Serious Engineers and I liked Steve’s post when I thought it was sort of tongue in cheek but was horrified to come to the HN comments and LinkedIn comments proclaiming Gastown as the future of engineering. There absolutely is a large contingent of engineers who believe this, and it has a real world impact on my job if my bosses think you can just throw a dozen AI agents at our product roadmap and get better productivity than an engineer. This is not whimsical to me, I’m getting burnt out trying to navigate the absurd expectations of investors and executives with the real world engineering concerns of my day to day job.

cthalupa 14 days ago

> horrified to come to the HN comments and LinkedIn comments proclaiming Gastown as the future of engineering.

I don't spend much time on LinkedIn, but basically every comment I've read on HN is that, at best, Gas Town can pump out a huge amount of "working" code in short timeframes at obscene costs.

The overwhelming majority are saying "This is neat, and this might be the rough shape of what comes next in agentic coding, but it's almost certainly not going to be Gas Town itself."

I have seen basically no one say that Gas Town is The Thing.

bloppe 14 days ago

I feel that Yegge captured the mania of the whole operation rather well. If your bosses commit to the idea that 100 memoryless stochastic "polecats" will deliver a long-term sustainable business, then there are probably other leadership issues besides this one.

jmspring 15 days ago

I think Steve's idea of an agent coordinator and the general model could make sense. There is a lot of discussion (and even work from Anthropic, OpenAI, etc) around multiagent workflows.

Is Gas Town the implementation? I'm not sure.

What is interesting is seeing how this paradigm can help improve one's workflow. There is still a lot of guidance and structuring of prompts / claude.md / whichever files, all of which need to be carefully written.

If there is a push for the equivalent of Helm charts and CRDs for Gas Town, then I will be concerned.

storystarling 14 days ago

I ran into this building a similar workflow with LangGraph. The prompt engineering is definitely a pain, but the real bottleneck with the coordinator model turns out to be the compounding context costs. You end up passing the full state history back and forth, so you are paying for the same tokens repeatedly. Between that and the latency from serial round-trips, it becomes very hard to justify in production.
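The compounding is easy to see with back-of-the-envelope arithmetic. A minimal sketch (illustrative numbers only, not LangGraph code) of why re-sending the full state history makes input costs grow quadratically with the number of serial round-trips:

```python
def coordinator_cost_tokens(rounds, tokens_per_turn):
    """Total input tokens billed when a coordinator re-sends the entire
    accumulated state history on every serial round-trip."""
    total = 0
    history = 0
    for _ in range(rounds):
        history += tokens_per_turn  # this round's output joins the shared state
        total += history            # the whole history is re-billed as input
    return total

# 10 round-trips at 2,000 tokens per turn: 110,000 input tokens total,
# 55x the cost of a single turn, before any prompt caching.
total = coordinator_cost_tokens(10, 2000)
```

Prompt caching and history summarization reduce the constant factor, but the quadratic shape of the naive coordinator loop is what makes it hard to justify in production.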

pstuart 15 days ago

AI is such a fun topic -- the hype makes it easy to loathe, but as a coder working with Claude I think it's an awesome tool.

Gastown looks like a viable avenue for some app development. One of the most interesting things I've noticed about AI development is that it forces one to articulate desired and prohibited behaviors -- a spec becomes a true driving force.

Yegge's posts are always hyperbolic and he consistently presents interesting takes on the industry so I'm willing to cut him a buttload of slack.

dada216 14 days ago

Embrace it and use it to your advantage. Tell them that nobody knows or understands how these things will actually work long term (that's why there's stuff like Gas Town), and that the way you see all of this, you can manage this process. What you bring to the table is making sure it actually works, and reaping the rewards, if the tech is safe and sound, and protecting the company from catastrophic tech failure if it isn't. Tell them that you are uniquely positioned to carry out this balancing act because you are deep in the tech itself. Bonus points if you fold the uncertainty into the business strategy: "Because nobody really understands the tech, nobody has an advantage; we are all playing on a level field, from the big boys at FAANG to us peasants in normal non-tech enterprises. I am your advantage here, if you give me the tools and leverage I need to make this work." If you play this right, you'll get the fat bonus whether the tech actually works or not.

Treegarden 14 days ago

If your boss is that bad, the correct long-term move is to leave, not to wish technology didn’t advance.

pxtail 14 days ago

Your boss, and the other ones who are asleep, will someday wake up too.

spacecadet 14 days ago

"I’m getting burnt out trying to navigate the absurd expectations of investors and executives with the real world engineering concerns of my day to day job."

Welcome to being a member of a product team who cares beyond just what's on their screen... Honestly, there is a humbling moment coming for everyone, and I'm not sure it's unemployment.

meowface 15 days ago

It's a half-joke. No need to take it that seriously or that jokingly. It's mostly only grifters and cryptocurrency scammers claiming it's amazing.

I think ideas from it will probably partially inspire future, simpler systems.

lowbloodsugar 15 days ago

I too am a Very Serious Engineer, but my shock is in the other direction: of course the ideas behind Gas Town are the future of software development, and several VSEs I know are developing a proper, robust, engineering version of it that works. As the author of this article remarks, "yes, but Steve did it first", and it annoys me that if I had written this post nobody would have read it, but also that, because I intend to use it in Very Serious Business ($bns), my version isn't ready to actually be published yet. Bravo to Steve for getting these thoughts on paper and the idea built, even in such crude form. But "level 8" is real, and there will be 9s and 10s, and I am really enjoying building my own.

csallen 15 days ago

> "Gastown as the future of engineering"

Note the word "future", not "present". People are making a prediction of where things will go. I haven't seen a single person saying that Gas Town as it exists today is ready for a production-grade engineering project.

potatolicious 15 days ago

> "If you say two different and contradictory things, and do not very explicitly resolve them, and say which one is the final answer, you will get blamed for both things you said, and you will not be entitled to complain about it, because you did it to yourself."

If I can be a bit bold: this tic is also a very old rhetorical trick you see in our industry. Call it Schrodinger's Modest Proposal if you will.

In it someone writes something provocative, but casts it as both a joke and deadly serious at various points. Depending on how the audience reacts they can then double down on it being all-in-good-jest or yes-absolutely-totally. People who enjoy the author will explain the nonsensical tension as "nuance".

You see it in rationalist writing all the time. It's a tiresome rhetorical "trick" that doesn't fool anyone any more.

antonvs 14 days ago

It's a version of a motte and bailey argument (named after a medieval castle defense system):

> "...philosopher Nicholas Shackel coined the term 'motte-and-bailey' to describe the rhetorical strategy in which a debater retreats to an uncontroversial claim when challenged on a controversial one."

-- https://heterodoxacademy.org/blog/the-motte-and-the-bailey-a...

directevolve 15 days ago

In what rationalist writing? The LessWrong style is to be literal and unambiguous. They’re pretty explicit that this is a community value they’re striving for.

theptip 14 days ago

I think both can be true, no?

Multi-agent coordination is obviously what's next.

And, Gas Town itself might never amount to more than a proof-of-concept.

Personally I'd put my money on whatever Anthropic build to do this job, rather than a layer someone else builds atop CC.

Remember when code LLMs were just APIs, and folks were building their own coding scaffolds like Aider and Cursor? Then Claude Code steamrolled everyone; they win because they can do RL on the whole agentic scaffold.

Any multi-agent system will have the same properties: whatever traits (e.g. the GUPP) and tool expertise (e.g. using Beads) are required to effectively participate in a swarm will get RL'd into the coding model. Any attempt to build an alternate scaffold will hit impedance mismatches because it doesn't fit the shape of what was RL'd (just like using non-CC UIs with Anthropic models gives you worse results than using the CC UI).

I say this with love - Yegge is putting forth some excellent ideas here. Beads seems like a great concept to add to CC ASAP; storing the TODO state in a repo would mean we don't need MCPs onto issue trackers. And figuring out what orchestration concepts are required will need a lot more trial and error, but these existence proofs are moving the frontier forward.

csallen 15 days ago

These are some very tortured interpretations you're making.

- "what's next" does not mean "production quality" and is in no way mutually exclusive with "experimental". It means exactly what it says, which is that what comes next in the evolution of LLM-based coding is orchestration of numerous agents. It does not somehow mean that his orchestrator writes production-grade code and I don't really understand why one would think it does mean that.

- "productively" also does not mean "production quality". It means getting things done, not getting things done at production-grade quality. Someone can be a productive tinkerer or they can be a productive engineer on enterprise software. Just because they have the word "product" in them does not make them the same word.

- "working effectively" is a phrase taken out of the context of this extremely clear paragraph which is saying the opposite of production-grade: "Working effectively in Gas Town involves committing to vibe coding. Work becomes fluid, an uncountable substance that you sling around freely, like slopping shiny fish into wooden barrels at the docks. Most work gets done; some work gets lost."

If he wanted to say that Gas Town wrote production grade code, he would have said that somewhere in his 8000-word post. But he did not. In fact, he said the opposite, many many many many many many times.

You're taking individual words out of context, using them to build a strawman representing a promise he never came close to making, and then attacking that strawman.

What possible motivation could you have for doing this? I have no idea.

> If you say two different and contradictory things...

He did not. Nothing in the blog post explicitly says or even remotely implies that this is production quality software. In addition, the post explicitly, unambiguously, and repeatedly screams at you that this is highly experimental, unreliable, spaghetti code, meant for writing spaghetti code.

The blog post could not possibly have been more clear.

> ...because you did it to yourself.

No, you're doing this to his words.

Don't believe me? Copy-paste his post into any LLM and ask it whether the post is contradictory or whether it's ambiguous whether this is production-grade software or not. No objective reader of this would come to the conclusion that it's ambiguous or misleading.

madhadron 14 days ago

> Copy-paste his post into any LLM and ask it whether the post is contradictory or whether it's ambiguous whether this is production-grade software or not. No objective reader of this would come to the conclusion that it's ambiguous or misleading.

That's hilarious! You might want to add a bit more transition for the joke before the other points above, though.

airza 14 days ago

> Don't believe me? Copy-paste his post into any LLM and ask it whether the post is contradictory or whether it's ambiguous whether this is production-grade software or not.

Bleak

drewbug01 15 days ago

> If you say two different and contradictory things, and do not very explicitly resolve them, and say which one is the final answer, you will get blamed for both things you said, and you will not be entitled to complain about it, because you did it to yourself.

Our industry is held back in so many ways by engineers clinging to black-and-white thinking.

Sometimes there isn’t a “final” answer, and sometimes there is no “right” answer. Sometimes two conflicting ideas can be “true” and “correct” simultaneously.

It would do us a world of good to get comfortable with that.

hyperpape 15 days ago

My background is in philosophy, though I am a programmer, for what it is worth. I think what I'm saying is subtly different from "black and white thinking".

The final answer can be "each of these positions has merit, and I don't know which is right." It can be "I don't understand what's going on here." It can be "I've raised some questions."

The final answer is not "the final answer that ends the discussion." Rather, it is the final statement about your current position. It can be revised in the future. It does not have to be definitive.

The problem comes when the same article says two contradictory things and does not even try to reconcile them, or try to give a careful reader an accurate picture.

And I think that the sustained argument over how to read that article shows that Yegge did a bad job of writing to make a clear point, albeit a good job of creating hype.

habinero 15 days ago

Or -- and hear me out -- unserious people are saying nonsense things for attention and pointing this out is the appropriate response.

akst 14 days ago

Yeah, the messaging is somewhat insecure, in that it preemptively seeks to invalidate criticism by framing everything as just an experiment, while simultaneously making fairly inflammatory remarks about naysayers, as if they'll eat dirt if they don't get on board.

I think it's possible to convey that you believe strongly in your idea and that it could be the future (or "is the future", if you're that sure of yourself) while it is still experimental. I think he would have fewer critics if he weren't so hyperbolic in his pitch and made fewer inflammatory personal remarks about the people he hasn't managed to bring on side.

People I know who communicate like that generally struggle to contribute constructively to nuanced discussions, and tend to seek out confrontation for the sake of it.

MarsIronPI 10 days ago

Additionally, Steve seems very adamant about the fact that anyone who doesn't adopt vibe coding is going to be obsolete, and the ones who adopt it best are going to win big.

taneq 14 days ago

> "What's next"? Not an experiment.

I think what’s next after an experiment very often is another experiment, especially when you’re doing this kind of exploratory R&D.

columk 14 days ago

>We should take Yegge’s creation seriously not because it’s a serious, working tool for today’s developers (it isn’t). But because it’s a good piece of speculative design fiction that asks provocative questions and reveals the shape of constraints we’ll face as agentic coding systems mature and grow.

I have no doubt Yegge would agree wholeheartedly with that take. He wants the community to explore these ideas with him.

The bizarre thing is that Gas Town has been popping up in mainstream news and media. It's being discussed in my economics podcasts.

It's relevant for them because it hints at a very disruptive idea: the hierarchy of Gas Town, when extrapolated, suggests that agents won't just replace your workers; they will replace your business too. It suggests that in a few years there could be a tool that is effectively a software agency, which means companies like Anthropic could eat any software shop that can't compete.

rlt 15 days ago

I think you just proved mediaman's point.

GoatInGrey 15 days ago

Keep in mind that Steve has LLMs write his posts on that blog. Things said there may not reflect his actual thoughts on the subject(s) at hand.

gozzoo 15 days ago

There is no way this is true. I read his book about vibe coding and it is obvious that it has significant LLM contribution. His blog posts, though, are funny and controversial, have bad jokes, and jump from topic to topic. He has had this style for 10+ years before LLMs came around.

davidgerard 14 days ago

The book intro proudly states it used LLM drafting.

square_usual 15 days ago

I've been reading Steve's posts for quite literally a decade now and I don't think his new posts are so meaningfully different from the old ones that he's not at the wheel any more. Besides, his twitter posts often double down on what he's writing in the blog, and it's doubtful he's not writing those.

joshstrange 15 days ago

> Keep in mind that Steve has LLMs write his posts on that blog.

Ok, I can accept that, it's a choice.

> Things said there may not reflect his actual thoughts on the subject(s) at hand.

Nope, you don't get to have it both ways. LLMs are just tools, there is always a human behind them and that human is responsible for what they let the LLM do/say/post/etc.

We have seen the hell that comes from playing the "they said that but they don't mean it" or "it's just a joke" game (re: Trump), and I'm not a fan of whitewashing with LLMs.

This is not an anti or pro Gas Town comment, just a comment on giving people a pass because they used an LLM.

jauntywundrkind 15 days ago

Is this confirmed true? Yegge has a very very long history of writing absurdly long posts / rants.

usefulcat 15 days ago

There's a rather fine line between "don't believe everything you read" and "don't believe anything you read". At least in this case.

63stack 14 days ago

This is some super fucked up thinking. If it does not reflect your actual thoughts, you do not post it under your own name.

ludicity 15 days ago

I thought it was harmless(ish) fun, but David Gerard put out a post stating that Yegge used Gas Town to push a crypto project that rug-pulled his supporters, while he personally walked away with something between $50K and $100K, from memory.

I suppose that has little to do with the technical merits of the work, but it's such a bad look, and it makes everyone boosting this stuff seem exactly as dysregulated/unwise as they've appeared to many engineers for a while.

I met Sean Goedecke for lunch a few weeks ago, who uses LLMs a bunch and is clearly a serious adult, but half the folks being shoved in front of everyone are behaving totally manically, and people are cheering them on. Absolutely blows my mind to watch.

https://pivot-to-ai.com/2026/01/22/steve-yegges-gas-town-vib...

skybrian 15 days ago

That was very weird. In the post where he was arguably "shilling," he seems to have signposted pretty well that it was dumb, but that he would take the money offered:

> $GAS is not equity and does not give you any ownership interest in Gas Town or my work. This post is for informational purposes only and is not a solicitation or recommendation to buy, sell, or hold any token. Crypto markets are volatile and speculative — do not risk money you can’t afford to lose.

...

> Note: The next few sections are about online gambling in all its forms, where “investing” is the buy-and-hold long-form “acceptable” form of gambling because it’s tied to world GDP growth. Cryptocurrencies are subject to wild swings and spikes, and the currency tied to Gas Town is on a wild swing up. But it’s still gambling, and this stuff is only for people who are into that… which is not me, and should probably not be you either.

In the next post he said he wasn't going to shill it any more, and then the price collapsed and people sent him death threats on Twitter. It probably would have collapsed anyway. Perhaps there was supposedly some implicit bargain that he shouldn't take the money if he wasn't going to shill? Well, there's certainly no rule saying you have to do that.

I think he's not very much to blame for taking the money from degenerate gamblers, and the cryptocurrency idiots are mostly to blame for their own mistakes.

gavin-1 15 days ago

> I think he's not very much to blame for taking the money from degenerate gamblers, and the cryptocurrency idiots are mostly to blame for their own mistakes.

I empathize with the disdain for crypto idiots, but I still think the people running or promoting these scams deserve most of the blame. "There's a market for my poison" is every dopamine dealer's excuse.

cap11235 14 days ago

Yeah, and I don't want to be involved in that shit. Yegge can go fuck off.

cannonpr 15 days ago

“Degenerate gamblers” is the kind of stigma that stops people and their families getting help for addiction. Even if you believe it’s a moral failing, the families deserve better.

skybrian 15 days ago

Very true. Although, I wonder how much of that sort of thing was going on in this case? Did people actually bet money they couldn't afford to lose on this crazy scheme?

dpatterbee 15 days ago

I'm fairly certain those disclaimers were added after he got some pushback on the original post.

skybrian 15 days ago

One of them clearly was (marked "Edit: "). I don't know about the others.

andrepd 14 days ago

> I think he's not very much to blame for taking the money from degenerate gamblers, and the cryptocurrency idiots are mostly to blame for their own mistakes.

So drug dealers are not to blame for taking the money from degenerate addicts! Let's free everyone and disband the DEA, we'll save billions of dollars.

Oh wait, nvm, this line of thinking only applies to SV people.

matkoniecz 14 days ago

He is still an evil scammer scamming people.

In the same way, signposting and credibly warning "I murder people" does not make it ok to murder people.

wahnfrieden 14 days ago

He pumped, and dumped. He stopped shilling at the moment that the dump was proceeding. That's what pump and dump grifters do.

Details https://pivot-to-ai.com/2026/01/22/steve-yegges-gas-town-vib...

cap11235 14 days ago

Maybe I'd care about his opinion if he hadn't taken the money. I consider this worse than OSS taking VC money. At least those don't have a scam built into the structure, beyond normal capitalistic parasitism.

Also, 275k lines for a markdown todo app. Anyone defending this is an idiot. I'll just say that. Go ahead, defend it. Go do a code review on `beads`. Don't say it's alright, but gastown is madness. He fucking sucks.

piker 15 days ago

> If you read Steve's writeup

Personally I got about 3 paragraphs into what seemed like a twelve-page fevered dream and filed it under "not for me yet".

chwtutha 15 days ago

> And the reaction is a bunch of Very Serious Engineers who cross their arms and harumph at it for being Unprofessional and Not Serious and Not Ready For Production.

Exactly!

pja 15 days ago

They’re part of Steve’s art project, they just don’t realise it.

Xmd5a 15 days ago

> OK! That was like half a dozen great reasons not to use Gas Town. If I haven’t got rid of you yet, then I guess you’re one of the crazy ones. Hang on. This will be a long and complex ride. I’ve tried to go super top-down and simplify as much as I can, but it’s a bit of a textbook.

michaelcampbell 13 days ago

Yegge's been around a long, long time and this is about within a standard deviation of his normal writings, at least in style. I haven't read much of his LLM/AI related stuff, but none of Gas Town left me with any sort of "huh" reaction, knowing the author.

saidarembrace 15 days ago

For better or worse, we are making history.

tikhonj 15 days ago

A sense of art and whimsy and experimentation is less compelling when it's jumping on the hypest of hype-trains. I'd love to see more folk art in programming, but Gas Town is closer to fucking Beeple than anything charming.

pydry 15 days ago

>I often feel like our industry has lost its sense of whimsy and experimentation from the early days, when people tried weird things to see what would work and what wouldn't.

Remember the days when people experimented with and talked about things that weren't LLMs?

I used to go to a lot of industry events and I really enjoyed hearing about the diversity of different things people worked on both as a hobby and at work.

Now it's all LLMs all the time and it's so goddamn tedious.

Ronsenshi 15 days ago

> I used to go to a lot of industry events and I really enjoyed hearing about the diversity of different things people worked on both as a hobby and at work.

I go to tech meetups regularly. The speed at which any conversation ends up on the topic of AI is extremely grating to me. No more discussions about interesting problems and creative solutions that people come up with. It's all just AI, agentic, vibe code.

At what point are we going to see the loss of practical skills if people keep on relying on LLMs for all their thinking?

magicalist 15 days ago

> No more discussions about interesting problems and creative solutions that people come up with. It's all just AI, agentic, vibe code.

And then you give in and ask what they're building with AI, that activation energy finally available to build the side project they wouldn't have built otherwise.

"Oh, I'm building a custom agentic harness!"

...

Analemma_ 15 days ago

It's like the entire software industry is gambling on "LLMs will get better faster than human skills will decay, so they will be good enough to clean up their own slop before things really fall apart".

I can't even say that's definitely a losing bet-- it could very well happen-- but boy does it seem risky to go all-in on it.

FeteCommuniste 15 days ago

Some of the heads like Altman seem to be putting all their chips in the "AGI in [single-digit number] years" pile.

bleepblap 14 days ago

It's incredible, the change over the last few years, even on the hardware side. I go to the supercomputing.org conference annually and have seen folks advertising "AI power distribution units". There used to be a lot of neat innovation, and now every last thing has to have "AI" in the title; it's infuriating.

TeMPOraL 14 days ago

Well, LLMs are an engineering breakthrough of the degree somewhere between the Internet and electricity, in terms of how general-purpose and broadly-applicable they are. Much like them, LLMs have the potential to be useful in just about everything people do, so it's no surprise they've dominated the conversation - just like electricity and the Internet did, back in their heyday.

(And similar to the two, I expect many of the initial ideas for LLM application to be bad, perhaps obviously stupid in hindsight. But enough of them will work to make LLMs become a lasting thing in every aspect of people's lives - again, just like electricity and the Internet did).

pydry 14 days ago

It reminds me most of the release of the first iPhone - very flashy, very overhyped, adds a bit of convenience to people's lives but also likely to measurably damage people's brains in the long run.

~80% of the usage patterns I see these days falsely assume that LLMs can handle their own quality control, and are optimizing for appearance, potential, or demo-worthiness rather than hardcore usefulness. Gas Town is not an outlier here.

When the internet and electricity were ~3 years old, people were already using them for stuff that was working and obviously world-changing, rather than as demos of potential.

The 20% of usage patterns that work now aren't going away, but the other 80% are going to be seen as blockchainesque hype in 5 or 10 years.

CuriouslyC 15 days ago

I like Gas Town's moxie; it's fun, and seems kind of tongue in cheek.

What I don't like is people me-tooing gastown as some breakthrough in orchestration. I also don't like how people are doing the same thing for ralph.

In truth, what I hate is people dogpiling thoughtlessly on things, and only caring about what social media has told them to care about. This tendency makes me get warm tingles at the thought of the end of the world. Agent Smith was right about humanity.

FuckButtons 15 days ago

I mean, isn’t the whole point of Ralph that it’s an allusion to “I’m in danger” because Claude in a for loop can do your job?

CuriouslyC 15 days ago

I believe the intent was that he's dumb but persistent.

aprilthird2021 15 days ago

No, Ralph is famously dumb and needs lots of hand-holding and explanations of things most people think are very simple and can hold very little in his head at once.

But that's often enough to loop over and over again and eventually finish a task
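The "dumb but persistent" pattern the commenters describe can be sketched as a simulation (purely illustrative; no real agent harness, and the success rate is made up): even if each pass only occasionally completes a task, looping long enough empties the list.

```python
import random

def ralph_loop(tasks, success_rate=0.3, max_iters=1000):
    """Simulate the 'Ralph' pattern: an agent that holds almost nothing
    in its head, run in a loop. Each pass it attempts one unfinished
    task and only sometimes succeeds; persistence, not cleverness,
    finishes the list."""
    remaining = list(tasks)
    iters = 0
    while remaining and iters < max_iters:
        iters += 1
        if random.random() < success_rate:
            remaining.pop(0)  # one task completed this pass
    return iters, remaining

random.seed(0)  # fixed seed so the simulation is repeatable
iters, left = ralph_loop(["parse input", "write tests", "fix lint"])
# With enough iterations the task list empties even at a low
# per-pass success rate.
```

The point of the sketch is that the loop's stopping condition does the real work; the per-pass "intelligence" can be quite low.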

Barrin92 15 days ago

>it is a mixture of technology and art, it is provocative

There's no art (or engineering) in this, and the only provocative thing about it is that Yegge apparently decided to turn it into a crypto scam. I like the intersection of engineering and art, but I prefer it to include both actual engineering and actual art; 100 rabbits (100r.co) is a good example, not a blog post with 15 AI-generated images that advocates some unholy combination of gambling, vibe coding, and cryptocurrency crap.

wrs 15 days ago

Perhaps it was his followup post about how people are lining up to throw millions of VC dollars at his bizarre whimsical fever dream that disturbs people? I’m all for arts funding, but…

square_usual 15 days ago

Isn't the point that he refused them? VCs can be dumb (see the crypto hype, even the recent inflated AI raises) so I wouldn't put too much stock in what they think is valuable.

cap1123514 days ago

HAHAHA

SomaticPirate15 days ago

It isn't though. It crossed the chasm when Steve (who I would like to think is somewhat comfortable after writing a book and holding director-level positions at several startups) decided to endorse an outright crypto pump and dump.

When he decided to monetize the eyeballs on the project instead of anything related to the engineering. Which, of course, Steve isn't smart enough to understand (in his own words), and he recommends you not buy it, but he still makes a tidy profit from it.

It's a memecoin now... that has a software project attached to it. Anything related to engineering died the day he failed to disavow the crypto BS and instead started shilling it.

Whatever happened to engineers calling out BS as BS?

lovich15 days ago

My favorite part about that is that Gas Town is supposedly so productive that this guy's sleep patterns are affected by how much work he's doing, yet he took the time to physically go to a bank to collect a five-figure payout.

It makes it difficult to believe that gas town is actually producing anything of value.

I also lol at his complaining that the bank wouldn't let him do the transactions instantly, even as he describes himself how much of a scam this seems and says the worst case is his bank account being drained, as if banks don't have a self-interest in protecting their clientele from exactly such scams.

vanderZwan14 days ago

> I don't get the widespread hatred of Gas Town. If you read Steve's writeup, it's clear that this is a big fun experiment. It pushes and crosses boundaries, it is a mixture of technology and art, it is provocative.

Because I actually have an arts degree, and I know the equivalent of a con artist bullshitting their way into money at a rich people's art gallery when I see one.

And the "pushing and crossing boundaries" argument has been abused as a pathetic defense to hide behind shallowness in the art world for longer than anyone in this discussion board has been alive. It's not provocative when it's utterly predictable, and in this case the "art" is "take the most absurd parody of AI culture and play it straight". Gee whiz how "creative" and "provocative".

tracerbulletx15 days ago

It's because people are treating the experiment like a serious path forward for their business.

JamesTRexx15 days ago

"our industry has lost its sense of whimsy"

The first thing I thought as I read his post and saw the images of the weasels was that he should make a game of it. Maybe name it Bitborn.

bdcravens15 days ago

> I don't get the widespread hatred of Gas Town.

Fear over what it means if it works.

mrkeen15 days ago

I work in a typical web app company which does accounting/banking etc.

A couple of days ago I was sitting in a meeting of 10-15 devs, discussing our AI agents. People were raising issues and brainstorming ways around the problems with AI. How to make the AI better.

Our devs were occupied doing AI things, not accounting/banking things.

If the time savings were as promised, we should have been 3 devs (with the remaining devs replaced by 7-10 AI agents) discussing accounting/banking.

If Gas Town succeeds, it will just be the next toy we play with instead of doing our jobs.

square_usual15 days ago

Isn't that fun though? We get paid to fuck around. People say AI is putting devs out of jobs, I say we're getting paid to play with them and see if there's any value there. This is no different from the dev tools boom of the ZIRP era: I remember having several sprints worth of work just integrating the latest dev tool whose sales team won our execs over.

This is only partly tongue in cheek :P

turtlebits15 days ago

Who wants to do grunt work when you can play architect to a bunch of robots?

It's like the ultimate RTS, plus you get paid.

stronglikedan15 days ago

Playing with new toys is part of doing my job. In my shop, we call them "ooh shiny"'s. Most devs are in the same boat, but I feel bad for those that aren't.

bdcravens14 days ago

Sounds like more of an issue of corporate meeting culture.

xyzsparetimexyz15 days ago

Has it written anything of quality?

cap1123514 days ago

beads is a 275k-line todo tracker (probably more now). Yegge is proud to have never read the source. I'm sure it's high quality.

DonHopkins13 days ago

I really don't get the point. An LLM can easily, flexibly, and masterfully track commented hierarchical YAML todo lists without breaking a sweat, with zero lines of code.

It's like writing a 275k line C++ program just to printf("You are absolutely correct!") when ChatGPT can do that for you with a one line prompt in just one shot.
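What he's describing is essentially a file the LLM reads and rewrites in place. A sketch of such a commented hierarchical todo list (contents purely illustrative, not from any actual project):

```yaml
# todo.yml -- the LLM edits this directly; no tracker code involved
release-1.0:
  - task: fix login redirect loop
    status: done
  - task: write migration guide
    status: in-progress
    notes: blocked on schema freeze   # context the LLM can add itself
cleanup:
  - task: delete dead feature flags
    status: todo
```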

cap1123514 days ago

Anyone using beads should switch to something else that isn't insane. If you like beads, https://github.com/hmans/beans works the same (not my project), just that its serdes is markdown files with front matter, in a dot folder. Like every sane solution. No daemons, no sync branches. I cannot guarantee the project at all, but at least its better than beads. Or make your own; this is one example of one such project.
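For context, an issue file in that style (markdown with YAML front matter, stored in a dot folder) might look roughly like this; the field names here are illustrative, not beans's actual schema:

```markdown
---
id: issue-042
status: open
priority: high
tags: [bug, parser]
---

# Parser chokes on empty front matter

Steps to reproduce: run the importer against a file whose
front matter block is present but contains no keys.
```

The appeal is that every issue is a plain file: greppable, diffable, and mergeable with ordinary git, with no daemons or sync branches involved.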

q3k15 days ago

It reads like the ramblings of a smart person experiencing a psychotic episode.

wahnfrieden14 days ago

First Yegge read?

antonvs14 days ago

The best thing about LLMs is that they can summarize Yegge posts to extract any actually useful content.

sailfast15 days ago

I didn't read this article as hate at all, FWIW. It was a pretty measured review of what it is and what it isn't with some much clearer diagrams of the mental models.

cap1123514 days ago

[flagged]

freedomben15 days ago

Links to Steve's writeup for Gas Town for those who don't have them yet:

[Medium post]: https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...

[HN Discussion]: https://news.ycombinator.com/item?id=46458936

matkoniecz14 days ago

And https://steve-yegge.medium.com/bags-and-the-creator-economy-... where you can read about its author scamming people

FergusArgyll14 days ago

I don't understand what's "scammy" about a rugpull. What did the "investors" expect? That lolcoin would become a cash flow positive business and disburse dividends?

matkoniecz13 days ago

The part where you lie to people and falsely describe it as investing and promise/imply/mention profits.

Rugpull is a scam by definition, being confused why scam is scammy seems weird.

joquarky13 days ago

> I often feel like our industry has lost its sense of whimsy and experimentation from the early days, when people tried weird things to see what would work and what wouldn't.

The gold rush brogrammers took over. They only care about money and they have displaced most of the more whimsical (but competent) "nerds" over the past decade.

itsafarqueue15 days ago

It’s not the whimsy. It’s that the whimsy is laced with casual disdain, a touch too much “let me buy you a stick of gum and show you how to chew it”, a frustrated tenor never stated but dog whistled “you dumb fucks”. A soft sharp stink of someone very smart shoving that fact in your face as they evangelise “the obvious truth” you’re too stupid to see.

And maybe he’s even right. But the reaction is to the flavour of chip-on-the-shoulder delivery mixed into an otherwise fun piece.

cap1123514 days ago

Don't forget a bit of crypto! People are being way too nice with the "I don't understand, but..." framing. Fuck him.

inadequatespace13 days ago

> Maybe it's because we also have suits telling us we have to use neural nets everywhere for everything Or Else, and there's no sense of fun in that.

Yes, and using it as justification to offshore and lay off.

DonHopkins14 days ago

Hey, I didn't get a harumph out of that VSE crossing his arms at me!

https://www.youtube.com/watch?v=g2Bp8SqYrnE

PKop13 days ago

> it's clear that this is a big fun experiment.

No, it's not clear, because at every turn we're told we're supposed to take it seriously, that there's something there there, that it's a very real hint at some very real future, not whimsical nonsense made for a laugh. You can see this in Steve's writing, calling out the non-believers. Then when you call the bluff, it's "it's just a prank bro, chill out."

> It pushes and crosses boundaries

What does this mean? This is fluff talk nonsense.

Something that's burning through thousands of dollars, producing what exactly?, is deserving of our respect why?

HDThoreaun12 days ago

Why can't you take experiments seriously? It's a prediction of what the future could look like, not a production-ready tool. If your problem with it is "they took our jobs," sure, that makes sense, but if your problem is that it's a crappy tool, then you're not looking at it correctly.

guelo15 days ago

This is just not true. Yegge is serious and thinks Gas Town is the next big thing.

Keyframe14 days ago

He got us once with the JavaScript prophecy. Has this man no decency?? :)

cap1123514 days ago

For his income, yes.

DonHopkins14 days ago

Hi mediaman! I'm totally there with you and Steve on the whimsy and experimentation! And your tolerant attitude gives me the Dutch courage to post this.

I've been reading Yegge since the "Stevey's Drunken Blog Rants™" days -- his rantings on Lisp, Emacs, and the Eval Empire shaped how I approach programming. His pro-LLM-coding rants were direct inspiration for my own work on MOOLLM. The guy has my deep respect, and I'm intrigued by his recent work on Sourcegraph and Gas Town.

Gas Town and MOOLLM are siblings from that same Eval Empire -- both oriented along the Axis of Eval, both transgressively treating LLMs as universal interpreters. MOOLLM immanentizes Eval Incarnate -- https://github.com/SimHacker/moollm/blob/main/designs/eval/E... -- where skills are programs, the LLM is eval(), and play is but the first step of the "Play Learn Lift" methodology: https://github.com/SimHacker/moollm/tree/main/skills/play-le....

The difference is resource constraints. Yegge has token abundance; I'm paying out of pocket. So where Gas Town explores "what if tokens were free?" (20-30 Claude instances overnight), MOOLLM explores "what if every token mattered?" Many agents, many turns, one LLM call.

To address wordswords2's concern about "no metrics or statistics" -- I agree that's a gap in Gas Town. MOOLLM makes falsifiable claims with receipts. Last night I ran an Amsterdam Fluxx Marathon stress test: 116+ turns, 4 characters (120+ character-turns per LLM call), complex social dynamics on top of dynamic rule-changing game mechanics. Rubric-scored 94/100. The run files exist. Anyone can audit.

qcnguy's critique ("same thing multiplied by ten thousand") is exactly the kind of specific feedback that helps systems improve. I wrote a detailed analysis comparing the two approaches -- intellectual lineage (Self, Minsky's K-lines, The Sims, LambdaMOO), the "vibecoded" problem (MOOLLM is LLM-generated but rigorously iterated, not ship-and-hope), and why "carrier pigeon" IPC architecture is a dark pattern when LLMs can simulate many agents at the speed of light.

an0malous raises a real fear about bosses thinking "throw agents at it" replaces engineering. Both systems agree: design becomes the bottleneck. Gas Town says "keep the engine fed with more plans." MOOLLM says "design IS the point -- make it richer." Different answers, same problem.

lowbloodsugar mentions building a "proper, robust, engineering version" -- I'd love to compare notes. csallen is right that "future" doesn't mean "production-grade today."

Analysis: https://github.com/SimHacker/moollm/blob/main/designs/GASTOW...

MOOLLM repo: https://github.com/SimHacker/moollm

Happy to discuss tradeoffs or hear where my claims don't hold up. Falsifiable criticism welcome -- that's how systems improve.

DonHopkins14 days ago

Adventure Uplift — Building a YAML-to-Web Adventure Compiler with Simulated Computing Pioneers:

I ran a 260KB session log where I convened a simulated symposium of computing pioneers to design an Adventure Compiler — a tool that compiles YAML adventure definitions that run on MOOLLM under cursor into standalone deterministic browser games requiring no LLM at runtime.

The twist: the "attendees" include AI-simulated tributes to Will Wright, Alan Kay, Marvin Minsky, Seymour Papert, Ted Nelson, Ken Kahn, Gary Drescher, and 25+ others — both living legends and memorial candles for those who've passed. All clearly marked as simulated tributes, not transcripts.

What emerged from this thought experiment:

- Pie menus as the universal interface (rooms, inventory, dialogue trees)

- Sims-style needs system with YAML Jazz inner voice ("hunger: 1 # FOOD. FOOD. FOOD.")

- Prototype-based objects (Self/JavaScript delegation chains)

- Schema mechanism + LLM = "teaching them to fly"

- Git as the collaboration operating system

- ToonTalk-inspired "programming by petting" for terpene kittens

- Speed of Light simulation — the opposite of "carrier pigeon" multi-agent architectures

On that last point: most multi-agent systems use message passing between separate LLM calls. Agent A generates output, it gets detokenized to text, sent over IPC, retokenized into Agent B's context. MOOLLM inverts this. Everything happens in one LLM call.

The spatial MOO map (rooms connected by exits) provides navigation, but communication is instantaneous within a call. Many agents, many turns, zero latency between them — and zero token requantization or semantic noise from successive detokenization/tokenization loops.

The session includes adversarial brainstorming where Barbara Liskov challenges schema contracts, James Gosling questions performance, Amy Ko pushes accessibility, and Bret Victor demands immediate feedback. Each critique gets a concrete response.

Concrete outputs: a working linter, architecture decisions, 53 indexed topics from "Food Oriented Programming" to "Hidden Objects as Invisible Infrastructure."

This is MOOLLM's Play-Learn-Lift methodology in action — play with ideas, extract patterns, lift into reusable skills and efficient scripts.

Session log (260KB, 8000+ lines): https://github.com/SimHacker/moollm/blob/main/examples/adven...

MOOLLM repo: https://github.com/SimHacker/moollm

The session uses representation ethics guidelines — all simulated people are clearly marked, deceased figures invoked with memorial candles, and the framing is explicitly "educational thought experiment."

Happy to discuss the ethics of simulating people, the architecture decisions, or how this relates to my earlier Gas Town comparison post.

DonHopkins14 days ago

In the simulated discussion guest book entry, simulated Douglas Engelbart wrote:

>Doug Engelbart (Augmentation): "Bootstrapping. The tools that build the tools. Your adventure compiler should be able to compile ITS OWN documentation into an adventure ABOUT how it works. The manual is a playable game."

That is exactly how the self-documenting categorized skill directory/room works -- the directory is a room with subdirectories for every skill, themselves intertwingled rooms, which form a network you can navigate via k-lines (see also tags).

Here is the skills dir, with the ROOM.yml file that makes it a room (it works like COM QueryInterface: a class exposes multiple interfaces for its multiple aspects; the directory is IUnknown, and you can QI on it by looking for known interfaces like ROOM.yml, CHARACTER.yml, and CONTAINER.yml, which inherit from the corresponding skills).

And the README.md file is naturally the ubiquitous human-readable documentation (also great for LLM deep dives). GitHub kindly formats and publishes README.md on every repo directory page, supporting Mermaid diagrams, etc.:

MOOLLM Skills dir:

https://github.com/SimHacker/moollm/tree/main/skills

MOOLLM Skills room, with skill K-Line navigation protocol:

https://github.com/SimHacker/moollm/blob/main/skills/ROOM.ym...

  # ROOM.yml — The Skill Nexus
  #
  # This is a ROOM — a metaphysical library where all capabilities live.
  # Every skill is a book that teaches itself when you read it.
  # Every cluster is a shelf of related knowledge.
  # Every ensemble is a team that works together.

To go meta, you can enter the Skill Skill (skills/skill), an extended MOOLLM meta-skill that knows all about creating new skills (via the constructionist "Play Learn Lift" strategy), and about importing and upgrading Anthropic skills:

https://github.com/SimHacker/moollm/tree/main/skills/skill

And here is a narrative session of me giving a tour of the category and skill networks by hopping around through K-Lines!

MOOLLM currently has 103 Anthropic compatible but extended skills (using 7 MOOLLM extensions, like CARD.yml, K-Lines, Self Prototypes and Delegation, etc).

Session Log: K-Line Connections Safari:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

Eight luminaries have been summoned as Hero-Story familiars — not puppets, but conceptual guides whose traditions we invoke. Each carries the K-lines they pioneered. [...]

ENTERING THE SKILL NEXUS

You push through a shimmering membrane and step into the Skill Nexus.

The space is impossible — a vast spherical chamber where books float in mid-air, orbiting a central point of warm golden light. But these aren't books. They're SKILLS. Living documents that teach themselves when you read them.

Lines of golden light connect related skills. Each connection pulses with meaning. This isn't a library — it's a constellation of knowledge.

Your companions materialize beside you:

Marvin Minsky adjusts his glasses, looking around with evident satisfaction.

"Ah! K-lines made manifest. Each of these floating tomes is a knowledge structure. Touch one and it reactivates an entire constellation of associations. I wrote about this in 1985, but I never imagined seeing it rendered so... literally."

Ted Nelson is already examining the golden threads between skills.

"Two-way links! Every connection goes BOTH directions. When skill A references skill B, skill B knows about skill A. This is what I've been trying to explain since 1965! Everything is deeply intertwingled!"

James Burke turns to address an invisible camera.

"You're looking at the Skill Nexus. A room where every door leads to another room, and every room has doors to everywhere else. But here's the thing — the signs above each door tell you WHY. Not just where you're going, but what connects HERE to THERE. That's what we're going to explore."

Palm scampers up to a floating skill-book labeled "incarnation" and hugs it.

"This is where I became REAL! Don spoke the wish, the tribunal approved, and I wrote my own soul."

rulelet14 days ago

We have a different take than Gas Town's. If AI behaves unreliably and unpredictably, maybe the problem is the ask. So we looked at backend code and decided it was time to bring in more declarative programming. We are already halfway there with declarative frontends (React) and declarative databases (SQL). Functional programming is an answer, but functional programming didn't replace object-oriented programming, for practical reasons.

So even if the super serious engineers are serious, they should watch their backs. Eventually enough guardrails will be created, or the ask itself will change enough, for a lot of automation to happen. And make no mistake, it is automation, no different than automated testing replacing armies of manual testers, or code generation, or procedural generation, or any other machine method. And who is going to be left with jobs? People who embrace the change, not people who lament the good old days or who can't adapt.

Sucks but just how the world works. Sit on the bleeding edge or be burned. Yes there is an "enough" but I suspect enough is around people willing to look at Gastown or even make their own Gastown, not the other side.

AtlasBarfed15 days ago

Yeah, where he probably burns like a million dollars.

Just for fun!

walthamstow15 days ago

He's paying $600 a month for 3x Claude Max subs. It's in his article.

toraway15 days ago

…and now funded by a $GAS crypto coin on the BAGS platform so it even pays for itself!

https://steve-yegge.medium.com/bags-and-the-creator-economy-...

walthamstow15 days ago

What a tasty disclosure section that is

ares62315 days ago

It's a "let them eat cake" write up.

Johnny_Bonk15 days ago

Yeah, it's unbelievably tiresome: endless complaints from people pushing up their glasses. IT'S A PROJECT ABOUT POLECATS CALLED GAS TOWN, MADE FOR FUN. Read that again. Either admire and enjoy it, or quit the umpteenth complaint about vibecoding.

NedF15 days ago

[dead]

usefulposter15 days ago

>while Yegge made lots of his own ornate, zoopmorphic [sic] diagrams of Gas Town’s architecture and workflows, they are unhelpful. Primarily because they were made entirely by Gemini’s Nano Banana. And while Nano Banana is state-of-the-art at making diagrams, generative AI systems are still really shit at making illustrative diagrams. They are very hard to decipher, filled with cluttered details, have arrows pointing the wrong direction, and are often missing key information.

So true! Not to mention the garbled text and inconsistent visuals across the diagrams———an insult to the reader's intelligence. How do people tolerate this visual embodiment of slurred speech?

toraway15 days ago

Yeah I couldn’t figure out if they were just intended as illustrations and gave up trying to read them after a while.

Which is unfortunate as it would have been really helpful to have actually legible architecture diagrams, given the prose was so difficult for me to untangle due to the manic “fun” irreverent style (and it’s fine to write with a distinctive voice to make it more interesting, but still … confusing).

Plus the dozens of new unique names and connections introduced every few paragraphs to try to keep in my head…

I first asked Gemini 3 Pro to condense it to a boring technical overview and it produced a single page outline and Mermaid diagrams that were nearly as unintelligible as the original post so even AI has issues digesting it apparently…

cap1123514 days ago

[flagged]

sethaurus14 days ago

Thrilled to see someone else using a triple-em dash in the wild⸻keep holding the line.

falcor8414 days ago

I generally am a fan of polished writing, but I do believe that there's room for quickly fired experimental stuff, and quite enjoyed this piece. With the speed he was going, I wouldn't be surprised if the system architecture actually changed in between subsequent sections of the post. It's not a scientific article, but just a cross-country runner at the top of his game giving us a quick update without breaking his stride, and I'm all here for that.

As Basil Exposition said "I suggest you don’t worry about this sort of thing and just enjoy yourself".

usefulposter14 days ago

Sure. The critique is of his incoherently labeled images, not his prose or passion project.

If LLMs can't produce readable technical diagrams ("Figure n"), avoid them for diagramming.

Don't insult the reader with strings like Oarity, Ed3e Csess, Ouclstnoc, relinemen, ressore Critieal, Foll Throughput, Witnese/Refin, ecstate ta Mayor, and Hsoide Fulures Stafey Fen.

https://miro.medium.com/v2/resize:fit:1400/format:webp/1*X3z...

https://miro.medium.com/v2/resize:fit:1400/format:webp/1*7Cr...

https://miro.medium.com/v2/resize:fit:1400/format:webp/1*blw...

rsynnott14 days ago

I like how ‘refine 1’ is correctness, followed by ‘refine 2’, which is of course ‘oarity’.

I’m convinced that AI has totally broken the brains of its users; how on earth could you look at this thing and think “yes, that is a reasonable thing to post in public”? For whatever reason, for many users, the lesson of the AI revolution seems to be “just produce arbitrarily shoddy nonsense, it’s fine, nobody cares anyway”… The worrying thing is that this may be _true_.

antonvs14 days ago

> How do people tolerate this visual embodiment of slurred speech?

99% of diagrams essentially act as little more than decoration.

> zoopmorphic

My next software product is gonna be zoopmorphic. VCs'll love it!

MrOrelliOReilly15 days ago

The author's high-value flowcharts vs Steve Yegge's AI art is enough of a case-in-point for how confusing his posts and repos are. However this is a pervasive problem with AI coding tools. Unsurprisingly, the creators of these tools are also the most bullish about agentic coding, so the source code shows the consequences. Even Claude Code itself seems to experience an unusually high number of regressions or undocumented changes for such a widely used product. I had the same problem when recently trying to understand the details of spec-kit or sprites from their docs. Still, I agree that Gas Town is a very instructive example of what the future of AI coding will look like. I'm confident mature orchestration workflows will arrive in 2026.

zingar15 days ago

Also struggling with sprites, I thought it was just me!

sandinmyjoints15 days ago

Lots of comments about Gas Town (which I get, it's hard not to talk about it!), but I thought this was a pretty good article -- nice job of summing up various questions and suggesting ways to think about them. I like this bit in particular:

> A more conservative, easier to consider, debate is: how close should the code be in agentic software development tools? How easy should it be to access? How often do we expect developers to edit it by hand?

> Framing this debate as an either/or – either you look at code or don’t, either you edit code by hand or you exclusively direct agents, either you’re the anti-AI-purist or the agentic-maxxer – is unhelpful.

> The right distance isn’t about what kind of person you are or what you believe about AI capabilities in the current moment. How far away you step from the syntax shifts based on what you’re building, who you’re building with, and what happens when things go wrong.

athrowaway3z14 days ago

> Buried in the chaos are sketches of future agent orchestration patterns

I'm not sure there are that many. We need to be vigilant about "it feels useful & powerful," because it's so easy to feel that way.

When I write complex plans, I can tell Claude to spawn agents for each task and I can successfully 1-shot a 30-60 minute implementation.

I've toyed with more complicated patterns, but unlike this speculative fiction, I needed my result to be both simple and working.

A couple of times now I've had to spend a lot of hours trying to unfuck a design I let slip through. The kind where one agent injects some duplicate code/architecture pattern into the system that's correct enough not to be flagged, but wrong enough to forever trip up every subsequent fresh agent that stumbles on it.

I tell people my job now is to kick these things every 15 minutes. It's a kinda-joke, kinda-not. But they definitely need kicking. Without it, the decoherence of a non-trivial project is too high, and you still need time to know where and how to kick.

I'm not sure what I'd need to be convinced a higher level of orchestration can do that. I do like to try new things. But my spider-sense is telling me this is a Collatz-conjecture-esque dead-end. People get the feeling of making giant leaps of progress, which anybody using these things should be familiar with by now, but something valuable is always just out of reach with the tools we currently have.

There are some big gains from guiding agents/users to use more sub-agents with a clean context (perhaps with some more knobs), but I'd advise against acting under the assumption that grander orchestration tools will inevitably have a positive ROI.

visarga14 days ago

> either you look at code or don’t, either you edit code by hand or you exclusively direct agents, either you’re the anti-AI-purist or the agentic-maxxer – is unhelpful.

If you're looking at all your code you are just walking the motorcycle. You need tests to automate your eyes. In fact I believe tests and specs are the new product, code can be regenerated at will.

That is why we see vibe coding projects that replicate well specced and implemented products like web browsers, you get both the specs and differential testing for free.
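The "differential testing for free" point can be made concrete: fuzz the regenerated implementation against the reference and never read the new code. A toy sketch under that assumption, where both `parse_*` functions are hypothetical stand-ins (here deliberately identical so the test passes):

```python
import random
import string

def parse_reference(s: str) -> list[str]:
    # The trusted, well-specced implementation: split on commas, strip whitespace.
    return [part.strip() for part in s.split(",")]

def parse_regenerated(s: str) -> list[str]:
    # Stand-in for freshly regenerated code; in practice, the vibecoded output.
    return [part.strip() for part in s.split(",")]

def count_divergences(trials: int = 1000) -> int:
    """Throw random inputs at both implementations; any divergence is a bug."""
    alphabet = string.ascii_letters + " ,"
    mismatches = 0
    for _ in range(trials):
        s = "".join(random.choice(alphabet)
                    for _ in range(random.randint(0, 40)))
        if parse_reference(s) != parse_regenerated(s):
            mismatches += 1
    return mismatches

if __name__ == "__main__":
    print(count_divergences())  # 0 when the implementations agree
```

The spec lives in the reference behavior, not in the regenerated source, which is the sense in which "tests and specs are the new product."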

itchingsphynx14 days ago

Maggie had many great articles. Technology x anthropology.

slfnflctd15 days ago

> Yegge deserves praise for exercising agency and taking a swing at a system like this [...] then running a public tour of his shitty, quarter-built plane while it’s mid-flight

This quote sums it all up for me. It's a crazy project that moves the conversation forward, which is the main value I see in it.

It very well could be a logjam breaker for those who are fortunate enough to get out more than they put into it... but it's very much a gamble, and the odds are against you.

shermantanktop15 days ago

Yegge is just running arbitrage on an information gap.

It's the same chasm that all the AI vendors are exploiting: the gap between people who have some idea what is going on and the vast mass of people who don't but are addicted to excitement or fear of the future.

Yegge is being fake-playful about it but if you have read any of his other writing, this tracks. None of it is to be taken very seriously because he values provocation and mischief a little too highly, but bits of it have some ideas worth thinking about.

pydry14 days ago

I wonder if he's being paid.

I detected a noticeable uptick in posts on Reddit bragging about AI coding in the last month, which fit the pattern of other opinion-shaping astroturfing projects I've seen before.

If Claude came to me with a bundle of cash and tokens to encourage me to keep the AI coding hype train going, I'd also go heavy on the excitability, experimental attitude, humor, and irreverence.

I'd also leave a mountain of disclaimers to help protect future me's reputation.

bleepblap14 days ago

He got a cool 50k from some memecoin for this

suriya-ganesh15 days ago

>Yegge is leaning into the true definition of vibecoding with this project: “It is 100% vibecoded. I’ve never seen the code, and I never care to.”

I don't get it. Even with a very good understanding of the type of work I am doing, prebuilt knowledge of the code, and a very well-specced problem, Claude Code etc. just plain fail or produce sloppy code. How do these industry figures claim to have seen no part of a 225K+ line codebase and promise that it works?

It feels like we're entering an era where oceans of code that nobody understands are going to be produced, which we hope AGI swoops in and cleans up?

jrmg15 days ago

This is also my experience. Everything I’ve ever tried to vibe code has ended up with off-by-one errors, logic errors, repeated instances of incorrect assumptions etc. Sometimes they appear to work at first, but, still, they have errors like this in them that are often immediately obvious on code review and would definitely show up in anything more than very light real world use.

They _can_ usually be manually tidied and fixed, with varying amounts of effort (small project = easy fixes, on a par with regular code review, large project = “this would’ve been easier to write myself...”)

I guess Gas Town’s multiple layers of supervisory entities are meant to replace this manual tidying and fixing, but, well, really?

I don’t understand how people are supposedly having so much success with things like this. Am I just holding it wrong?

If they are having real success, why are there no open source projects that are AI developed and maintained that are _not_ just systems for managing AI? (Or are there and I just haven’t seen them?...)

consumer45114 days ago

In my comment history can be found a comment much like yours.

Then Opus 4.5 was released. I already had my CC CLAUDE.md and my Windsurf global rules + workspace rules set up. Also, my main money-making project is React/Vite/Refine.dev/antd/Supabase... known patterns.

My point is that given all that, I can now deploy amazing features that "just work" and have excellent UX in a single prompt. I still review all commits, but they are now 95% correct on the front end and ~75% correct on Postgres migrations.

Is it magic? Yes. What's worse is that I believe Dario. In a year or so, many people will just create their own Loom or Monday.com equivalent apps with a one-page request. Will it be production-ready? No. Will it have all the features that everyone wants? No. But it will do what they want, which is 5% of most SaaS feature sets. That will kill at least 10% of basic SaaS.

If Sonnet 3.5 (~Nov 2024) to Opus 4.5 (Nov 2025) progress is a thing, then we are slightly fucked.

"May you live in interesting times" - turns out to be a curse. I had no idea. I really thought it was a blessing all this time.

kaydub15 days ago

Yeah, it sounds like "you're holding it wrong"

Like, why are you manually tidying and fixing things? The first pass is never perfect. Maybe the functionality is there but the code is spaghetti or untestable. Have another agent review and feed that review back into the original agent that built out the code. Keep iterating like that.

My usual workflow:

Agent 1 - Build feature

Agent 2 - Review these parts of the code; see if you find any code smells, bad architecture, scalability problems that will pop up, untestable code, or anything else falling outside of modern coding best practices

Agent 1 - Here's the code review for your changes, please fix

Agent 2 - Do another review

Agent 1 - Here's the code review for your changes, please fix

Repeat until testable, maybe throw in a full codebase review instead of just the feature.

Agent 1 - Code looks good, start writing unit tests, go step by step, let's walk through everything, etc. etc. etc.

Then update your .md directive files to tell the agents how to test.

Voila, you have an llm agent loop that will write decent code and get features out the door.

joshstrange15 days ago

I'm not trying to be rude here at all but are you manually verifying any of that? When I've had LLMs write unit tests they are quick to write pointless unit tests that seem impressive "2123/2123 tests passed!" but in reality it's testing mostly nothing of value. And that's when they aren't bypassing commit checks or just commenting out tests or saying "I fixed it all" while multiple tests are broken.

Maybe I need a stricter harness but I feel like I did try that and still didn't get good results.

Shebanator14 days ago

Those kinds of errors were super common 4-6 months ago, but LLM quality moves fast; nowadays I rarely see them at all. Two things make a huge difference. First, write a spec: GitHub Spec Kit, GSD, BMAD, whatever tool you like can help with this. Do several passes on the spec to refine it and focus on the key ideas.

Second, now that you have a spec, task it out, but tell the LLM to write the tests first (like Test-Driven Development, but without all the formalisms). This forces the LLM to focus on the desired behavior instead of the algorithms. Make sure the tests exercise real behavior: client APIs doing the right error handling on bad input, handling tricky cases, etc. Tell the system not to write 'struct' tests - checking that getters/setters work isn't interesting or useful.

Then you implement 1-3 tasks at a time, getting the tests to pass. The rules prevent disabling tests, commenting out tests, and, most importantly, changing the behavior of the tests. Doesn't use a lot of context, little to no hallucinating, and easily measurable progress.
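To make the "behavior, not structure" distinction concrete, here's a toy illustration (the `parse_port` function is invented for the example): a behavior test pins down the error handling and edge cases a client actually relies on, while a struct test would merely mirror the implementation.

```python
def parse_port(value: str) -> int:
    """Parse a TCP port from user input, rejecting bad values."""
    port = int(value)  # raises ValueError on non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

def test_behavior():
    # Behavior tests: valid input, plus the tricky inputs that must be rejected.
    assert parse_port("8080") == 8080
    for bad in ("0", "65536", "-1", "http"):
        try:
            parse_port(bad)
        except ValueError:
            pass
        else:
            raise AssertionError(f"{bad!r} should have been rejected")

# A 'struct' test, by contrast, would just assert parse_port("1") == 1,
# restating the implementation without constraining real behavior.
```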

kapimalos15 days ago

I haven’t used multi-agent set up yet but it’s intriguing.

Are you using Claude Code? How do you run the agents and make them speak?

kaydub15 days ago

Let me clarify actually, I run separate terminals and the agents are separated. I think claude code cli is the best. But at home I pay per token. I have a google account and I pay for chatgpt. So I often use codex and gemini cli in tandem. I'll copy + paste stuff between them sometimes or I'll have one review the changes or just the code in general and then feed the other with the outputs. I'll break out claude code for specific tasks or when I feel like gemini/chatgpt aren't quite doing the job right (which has gotten rarer the past few months).

I messed around with separate "agents" in the same context window for a while. I even went as far as playing with strands agents. Having multiple agents was a crapshoot.

Sometimes they'd work great, but sometimes they'd start working on the same files at the same time, argue with each other, etc. I'd always get multiple agents working, at least how I assumed they should work, by telling the llm explicitly what agents to create and what work to pass off to which agents, and it did a pretty poor job of that. I tried having orchestration agents, but at a certain point the orchestration agent would just take over and do everything.

So I'm not big on having multiple agents (in theory it sounds great, especially since they are each supposed to have their own context window). When I attempted this kind of stuff with Strands agents it honestly felt like I was trying to recreate Claude, so I just stick with plain CLI llm tools for now.

pdntspa15 days ago

I worry about people who use this approach where they never look at the code. Vibe-coding IS possible, but you have to spend a lot of time in plan mode and be very clear about the architecture and abstractions you want it to use.

I've written two separate moderately-sized codebases using agentic techniques (oftentimes being very lazy and just blanket approving changes), and I don't encounter logic or off-by-one errors very often, if at all. It seems quite good at the basic task of writing working code, but it sucks at architecture, and you need occasional code review rounds to keep the codebase tidy and readable. My code reviews with the AI are like 50% DRY and separating concerns.

johnmaguire15 days ago

In a recent Yegge interview, he mentions that he often throws away the entire codebase and starts from scratch rather than try to get LLMs to refactor their code for architecture.

kami2315 days ago

This has been my best way to learn: put one agent on a big task, let it learn things about the problem and any gotchas, then have it take notes, and do it again until I'm happy with the result. If in the middle I think there are two choices with merit, I ask a subagent to go explore the other solution in a separate worktree and make all its own decisions, then I compare. I also personally learn a lot about the problem space during the process, so my prompts and choices on subsequent iterations use the right language.

d1sxeyes15 days ago

Honestly, in my experience so far, if an LLM starts going down a bad path, it’s better just to roll back to a point where things were OK and throw away whatever it was doing, rather than trying to course correct.

kaydub15 days ago

I don't get you guys that are getting such bad results.

Are you guys just trying to one shot stuff? Are you not using agents to iterate on things? Are you not putting agents against each other (have one code, one critique/test the code, and put them in a loop)?

I still look at the code that's produced, I'm not THAT far down the "vibe coding" path that I'm trusting everything being produced, but I get phenomenal results and I don't actually write any code any more.

So like, yeah, first pass the llm will create my feature and there's definitely some poorly written code or duplicate code or other code smells, but then I tell another agent to review and find all these problems. Then that review gets fed back in to the agent that created the feature. Wham, bam, clean code.

I'm not using gastown or ralph wiggum ($$$) but reading the docs, looking over how things work, I can see how it all comes together and should work. They've been built out to automatically do the review + iteration loop that I do.

arrowleaf15 days ago

My feeling has been that 'serious' software engineers aren't particularly suited to use these tools. Most don't have an interest in managing people and are attracted to the deterministic nature of computing. There's a whole psychology you have to learn when managing people, and in my experience a lot of those skills transfer to wrangling AI agents.

You can't be too prescriptive or verbose when interacting with them, you have to interact with them a bit to start understanding how they think and go from there to determine what information or context to provide. Same for understanding their programming styles, they will typically do what they're told but sometimes they go on a tangent.

You need to know how to communicate your expectations. Especially around testing and interaction with existing systems, performance standards, technology, the list goes on.

kaydub15 days ago

All our best performing devs/engineers are using the tools the most.

I think this is something a lot of people are telling themselves though, sure.

habinero15 days ago

It lets 0.05X developers be 0.2X developers and 1X developers be 0.9-1.1X developers.

The problem is some 0.05X developers thought they were 0.5X and now they think they're 2X.

kaydub15 days ago

Nah, our best devs/engineers use the tools the most.

In my real life experience it's been the middling devs that always talk about "ai slop" and how the tools can't do their jobs.

habinero14 days ago

I mean, not all workplaces hire the best.

alecbz15 days ago

I have some success but by the time I'm done I'm often not sure if I saved any time.

sjajshha15 days ago

My (former) coworker who's heavy into this stuff produced a lot of unmaintainable slop on his way out while singing agents' praises to higher-ups. He also felt he was getting a lot of value and had no issues.

kaydub15 days ago

[flagged]

joshstrange15 days ago

Where is the "super upvote button" when you need it?

YES! I have been playing with vibe coding tools since they came out. "Playing" because only on rare occasions have I created something that is good enough to commit/keep/use. I keep playing with them because, well I have a subscription, but also so I don't fall into the fuddy-duddy camp of "all AI is bad" and can legitimately speak on the value, or lack thereof, of these tools.

Claude Code is super cool, no doubt, and with _highly targeted_ and _well planned_ tasks it can produce valuable output. Period. But every attempt at full vibe-coding I've done has gotten hung up at some point, and I have to step in and manually fix things. My experience is often:

1. First Prompt: Oh wow, this is amazing, this is the future

2. Second Prompt: Ok, let me just add/tweak a few things

10. 10th prompt: Ugh, every time I fix one thing, something else breaks

I'm not sure at all what I'm doing "wrong". Flogging the agents along doesn't work well for me; or maybe I'm just having trouble letting go of control and I'm not flogging enough?

But the bottom line is I am generally shocked that something like Gas Town was able to be vibe-coded. Maybe it's a case of the LLM overstating what it's accomplished (typical), and if you look under the hood it's doing 1% of what it says it is, but I really don't know. Clearly it's doing something; yet here I sit trying to build a simple agent with some MCPs hooked up to it using an LLM agent framework, and it falls over after a few iterations.

dceddia15 days ago

So I’m probably in a similar spot - I mostly prompt-and-check, unless it’s a throwaway script or something, and even then I give it a quick glance.

One thing that stands out in your steps, and that I've noticed myself: yeah, by prompt 10, it starts to suck. If it ever hits "compaction", that's past the point of no return.

I still find myself slipping into this trap sometimes because I’m just in the flow of getting good results (until it nosedives), but the better strategy is to do a small unit of work per session. It keeps the context small and that keeps the model smarter.

“Ralph” is one way to do this. (decent intro here: https://www.aihero.dev/getting-started-with-ralph)

Another way is “Write out what we did to PROGRESS.md” - then start new session - then “Read @PROGRESS.md and do X”

Just playing around with ways to split up the work into smaller tasks basically, and crucially, not doing all of those small tasks in one long chat.
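The PROGRESS.md handoff can be sketched mechanically: each "session" is a fresh context that sees only the summary file, never the prior transcript. A toy illustration (the file name and summary format are just the convention from the comment, not any tool's API):

```python
from pathlib import Path
import tempfile

def end_session(workdir: Path, summary: str) -> None:
    # "Write out what we did to PROGRESS.md" before the context is discarded.
    (workdir / "PROGRESS.md").write_text(summary)

def start_session(workdir: Path) -> str:
    # A new session starts with only the summary as carried-over context.
    progress = workdir / "PROGRESS.md"
    return progress.read_text() if progress.exists() else ""

workdir = Path(tempfile.mkdtemp())
end_session(workdir, "## Done\n- Added auth middleware\n## Next\n- Wire up /login route")
context = start_session(workdir)  # the only state the next session inherits
```

The point of the pattern is that everything worth remembering has to fit through that one small file, which keeps each session's context tight.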

joshstrange15 days ago

I will check out Ralph (thank you for that link!).

> Another way is “Write out what we did to PROGRESS.md” - then start new session - then “Read @PROGRESS.md and do X”

I agree on small context and if I hit "compacting" I've normally gone too far. I'm a huge fan of `/clear`-ing regularly or `/compact <Here is what you should remember for the next task we will work on>` and I've also tried "TODO.md"-style tracking.

I'm conflicted on TODO.md-style tracking because in practice I've had an agent work through every item on the list, confidently telling me steps are done, only to find that's not the case when I check its work. A TODO.md that I created and one I had the agent create both suffer from this. Getting it to update the TODO.md has also been frustrating: even when I add "Make sure to mark tasks as complete in the TODO.md as you finish them" to CLAUDE.md, or append the same message to the end of all my prompts, it won't always update it.

I've been interested in trying out beads to see if it works better than a markdown TODO file, but I haven't played with that yet.

But overall I agree with you, smaller chunks are key to success.

square_usual15 days ago

I hate TODO.mds too. If I ever have to use one, I'll keep track of it manually and split the work myself into chunks of the size I believe CC/codex can handle. TODO.md is a recipe for failure because you'll quickly have more code than you can review and no way to trust that it was executed well.

EFreethought15 days ago

> 10. 10th prompt: Ugh, everytime I fix one thing, something else breaks

Maybe that is the time to start making changes by hand. I think this dream of humans never ever writing any more code might be a step too far, and unnecessary.

theropost15 days ago

I’ve definitely hit that same pattern in the early iterations, but for me it hasn’t really been a blocker. I’ve found the iteration loop itself isn’t that bad as long as you treat it like normal software work. I still test, review, and check what it actually did each time, but that’s expected anyway. What’s surprised me is how quickly things can scale once the overall architecture is thought through. I’ve built out working pieces in a couple of weeks using Claude Code, and a lot of that time was just deciding on the architecture up front and then letting it help fill in the details. It’s not hands-off, but used deliberately, it’s been quite effective: https://robos.rnsu.net

joshstrange15 days ago

I agree that it can be very useful when used like that, but I'm referring to full vibe-coding, the "I've never looked at the code" people. CC is a great tool when you plan carefully, review its work, etc., but people are building things they say they've never read the code for, and that just hasn't been my experience; it always falls over on its own if I'm not in the code reviewing/tweaking.

kgwgk15 days ago

> How do these industry figures claim they see no part of a 225K+ line of code and promise that it works?

The only promise is that you will get your face ripped off.

“WARNING DANGER CAUTION - GET THE F** OUT - YOU WILL DIE […] Gas Town is an industrialized coding factory manned by superintelligent robot chimps, and when they feel like it, they can wreck your shit in an instant. They will wreck the other chimps, the workstations, the customers. They’ll rip your face off if you aren’t already an experienced chimp-wrangler.”

kaydub15 days ago

Yeah, I'm at that stage 6 or 7. I'm using multiple agents across multiple terminal windows. I'm not even coding any more, literally I haven't written code in like 2-4 months now beyond changing a config value or something.

But I still haven't actually used Gastown. It looks cool. I think it probably works, at least somewhat. I get it. But it's just not what I need right now. It's bleeding edge and experimental.

The main thing holding me back from even tinkering with it is the cost. Otherwise I'd probably play with it a little, but it's not something I'd expect to use and ship production code right now. And I ship a ton of production code with claude.

skippyboxedhero15 days ago

There is an incentive for dishonesty about what AI can and cannot do.

People from OpenAI were saying that GPT-2 had achieved AGI. There is a very clear incentive for that statement to be made by people who are not using AI for anything productive.

Even as increasingly bombastic claims are made, it is obvious that the best AI cannot one-shot everything if you are an actual user. And the worst ones: I was using Gemini yesterday and it wouldn't stop outputting emojis; I was using Grok and it refused to give me a code snippet because it claimed its system prompt forbade it... what can you say?

I don't understand why anyone would want to work on a codebase they didn't understand either. What happens when something goes wrong?

Again though, there is massive financial incentive to make these claims, and some other people will fall along with that because it is good for their career, etc. I have seen this in my own company where senior people are shoehorning this stuff in that they clearly do not actually use or understand (to be clear, this is engineering not management...these are people who definitely should understand but do not).

Great tool, but 100% vibecoding without looking at the code, for something you are actually expecting others to use, is a bad idea. It feels more like performance art than actual work. I like jokes, I like coding; there's room for both, but don't confuse the two.

rozap14 days ago

> I don't understand why anyone would want to work on a codebase they didn't understand either. What happens when something goes wrong?

It's your coworker's problem. The one who actually understands the big picture and how the system fits into it. They'll deal with it.

turtlebits15 days ago

No one is promising anything. It's just a giant experiment, and the author explicitly tells you not to use it. I appreciate those who try new things, even if it's possibly akin to throwing s** at a wall and seeing what sticks.

Maybe it changes how we code or maybe it doesn't. Vibe coding has definitely helped me write throwaway tools that were useful.

johnmaguire15 days ago

After listening to Yegge's interview, I'm not sure this is accurate: https://www.youtube.com/watch?v=zuJyJP517Uw

For example, he makes a comment to the effect that anyone using an IDE to look at code in 2026 is a "bad engineer."

eikenberry15 days ago

Hyperbole is very common.

matkoniecz14 days ago

In the LLM field, things move so fast that distinguishing accurate statements, mistaken statements, jokes, and lies is hard.

As a result, hyperbole is more annoying than usual.

johnmaguire13 days ago

Watch the video - he's very clear that he's not looking at code. I see no indication that he is being hyperbolic.

lovich15 days ago

> It's just a giant experiment and the author explicitly tells you not to use it.

No, he threw up a hyperbolic warning and then dove deep into how this is the future of all coding in the rest of his talks/writing.

It’s as good a warning as someone saying “I’m not {X} but {something blatantly showing I am X}”

amenhotep15 days ago
furyofantares15 days ago

Who's promising it works?

It's an experiment to discover what the limits are. Maybe the experiment fails because it's scoped beyond the limits of LLMs. Maybe we learn something by how far it gets exactly. Maybe it changes as LLMs get better, or maybe it's a flawed approach to pushing the limits of these.

bbayles15 days ago

I'm sympathetic to this view, but I also wonder if this is the same thing that assembly language programmers said about compilers. What do you mean that you never look at the machine code? What if the compiler does something inefficient?

gtowey15 days ago

Not even remotely close.

Compilers are deterministic. People who write them test that they will produce correct results. You can expect the same code to compile to the same assembly.

With LLMs, two people giving the exact same prompts can get wildly different results. That is not a tool you can use to blindly ship production code. Imagine if your compiler randomly threw in a syscall to delete your hard drive, or decided to pass credentials in plain text. LLMs can and will do those things.

alecbz15 days ago

Even ignoring determinism, with traditional source code you have a durable, human-readable blueprint of what the software is meant to do that other humans can understand and tweak. There's no analogue in the case of "don't read the code" LLM usage: no artifacts exist that humans can read or verify to understand what the software is supposed to be doing.

knowknow15 days ago

Not only that, but compiler optimizations are generally based on rigorous mathematical proofs, so even without testing them you can be pretty sure the compiler will generate equivalent assembly. From the little I know of LLMs, no one has figured out what mathematical principles LLMs generate code from, so you can't be sure it's going to be right aside from testing it.

conartist615 days ago

I write JS, and I have never directly observed the IRs or assembly code that my code becomes. Yet I certainly assume that the compiler author has looked at the compiled output in the process of writing a compiler!

For me the difference is prognosis. Gas Town has no ratchet of quality: its fate has been written on the wall since the day Steve decided he didn't want to know what the code says. It will grow to a moderate but unimpressive size before it collapses under its own weight. Even if someone tried to prop it up with stable infra, Steve would surely vibe the stable infra out of existence, since he doesn't care about that.

luckydata15 days ago

Or he will find a way to get the AI to create harnesses so it becomes stable. The lack of imagination and willingness to experiment in the HN crowd is AMAZING me and worrying me at the same time. I never thought a group of engineers would be the most conservative and closed-minded people I could discuss this with.

conartist615 days ago

It's a paradox, huh. If the AI harness became so stable that it wrote good code, he wouldn't be afraid to look at the code; he'd be eager to, right? But if it mattered whether AI wrote good code or not, he couldn't defend his position that the way to create value with code is quantity over quality. He needs to sell the idea of something only AI can do, which means he needs the system to be made up of a lot of bad, low-quality code that no person would ever want to be forced to look at.

troupo14 days ago

There's a difference between "imagination and willingness to experiment" and "blind faith and gullibility".

vardalab15 days ago

Wait till you meet engineers other than software engineers. I'm not even sure most software people should be called engineers, since there are no real accreditation standards. I specifically trained as an EE in physical electronics because the other disciplines at the time seemed really rigid.

There's a saying that you don't want optimists building bridges.

crote15 days ago

The big difference is that compilation is deterministic: compile the same program twice and it'll generate the same output twice. It also doesn't involve any "creativity": a compiler is mostly translating a high-level concept into its predefined lower-level components. I don't know exactly what my code compiles to, but I can be pretty certain what the general idea of the assembly is going to be.

With LLMs all bets are off. Is your code going to import leftpad, call leftpad-as-a-service, write its own leftpad implementation, decide that padding isn't needed after all, use a close-enough rightpad instead? Who knows! It's just rolling dice, so have fun finding out!
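For reference, the leftpad the comment jokes about is a few deterministic lines, which is exactly what makes the "who knows which variant you'll get" point sting. A sketch (Python also ships this as `str.rjust`):

```python
def left_pad(s: str, width: int, fill: str = " ") -> str:
    # Pad s on the left with `fill` until it is at least `width` long.
    if len(s) >= width:
        return s
    return fill * (width - len(s)) + s
```

A compiler would translate this one way every time; the nondeterminism the comment describes is in which of the many possible implementations (or dependencies) the LLM reaches for.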

fragmede15 days ago

> The big difference is that compilation is deterministic: compile the same program twice and it'll generate the same output twice.

That's barely true now. Nix comes close, but builds are only bit-for-bit identical if you set a bunch of extra flags that aren't set by default. The most obvious instability is that CPU dispatch order (modern single-computer systems are themselves distributed, racy systems) changes the generated code ever so slightly.

We don't actually care, because if one compiled version of the code uses r8 for a variable but a different compilation uses r9, we just assume the resulting binary works the same either way. r8 vs r9 are implementation details that don't matter to humans. See where I'm going with this? If the LLM non-deterministically calls the variable fileName one day and file_name the next time it's given the same prompt, language syntax purists will suffer an aneurysm because one of those is clearly "wrong" for the language in use, but it's really just an implementation detail at this point. Obviously you can't mix them; the generated code has to be consistent about which one it uses. But if compilers get to choose r8 one day and r9 the next, and we're fine with it, why is the exact variable name that important, as long as it's used correctly?

tjr15 days ago

I’ve done builds for aerospace products where the only binary difference between two builds of the same source code is the embedded timestamp. And per FAA review guidelines, this deterministic attribute is required, or else something is wrong in the source code or build process.

I certainly don’t use all compilers everywhere, but I don’t think determinism in compilation is especially rare.

777733221515 days ago

The compiler is deterministic and the translation does not lose semantics. The meaning of your code is an exact reflection of what is produced.

fragmede15 days ago

We can tell you weren't around for the advent of compilers. To be fair, neither was I, since the Unix C compiler came out in the early '70s and was far from the first compiler. You can make that claim about modern compilers, but early compilers weren't deterministic.

recursive15 days ago

All compilers have bugs. Any loss of semantics during compilation would be considered a bug. In order to do that, the source and target language need to be structured and specified. I wasn't around in the 60s either, but I think that hasn't changed.

tjr15 days ago

Which early compilers were nondeterministic?

3vidence15 days ago

This analogy has always been bad any time someone has used it. Compilers directly transform via known algorithms.

Vibecoding is literally just random probabilistic mapping between unknown inputs and outputs on an unknown domain.

Feels like saying that because I don't know how my engine works, my car could've just been vibe-engineered. People have put thousands of hours into making certain tools work up to a given standard and spec, reviewed by many, many people.

"I don't know how something works" != "This wasn't thoughtfully designed"

Why do people compare these things.

anonymous90821315 days ago

No, it is not what assembly programmers said about compilers, because you can still look at the compiled assembly, and if the compiler makes a mistake, you can observe it and work around it with inline assembly or, if the source is available, improve the compiler. That is not the same as saying "never look at the code".

hilbertseries15 days ago

I feel like this argument would make a lot more sense if LLMs had anywhere near the same level of determinism as a compiler.

jplusequalt15 days ago

>but I also wonder if this is the same thing that assembly language programmers said about compilers

But as a programmer writing C code, you're still building out the software by hand. You're having to read and write a slightly higher level encoding of the software.

With vibe coding, you don't even deal with encodings. You just prompt and move on.

zerkten15 days ago

I've wondered if people who write detailed specs, are overly detailed, are in a regulated industry, or even work with offshore teams have success more quickly simply because they start with that behavior. Maybe they have a tendency to dwell before moving on, which may be slightly more iterative than someone who vibecodes straight through.

gegtik15 days ago

I wonder if assembly programmers felt this way about the reliability of the electrical components which their code relies upon...

beklein15 days ago

I wonder if electrical engineers felt this way about the reliability of the silicon crystal lattice their circuits rely upon…

0xbadcafebee15 days ago

Do you understand at a molecular level how cooking works? Or do you just do some rote actions according to instructions? How do you know if your cooking worked properly without understanding chemistry? Without looking at its components under a microscope?

Simple: you follow the directions, eat the food, and if it tastes good, it worked.

If cooks don't understand physics, chemistry, biology, etc, how do all the cooks in the world ensure they don't get people sick? They follow a set of practices and guidelines developed to ensure the food comes out okay. At scale, businesses develop even more practices (pasteurization, sanitization, refrigeration, etc) to ensure more food safety. None of the people involved understand it at a base level. There are no scientists directly involved in building the machines or day-to-day operations. Yet the entire world's food supply works just fine.

It's all just abstractions. You don't need to see the code for the code to work.

habinero15 days ago

That's a terrible analogy lol.

1. Chefs do learn the chemistry, at least enough to know why their techniques work.

2. Food scientist is a real job

3. The supply chain absolutely does have scientists involved in day to day operations lol.

A better analogy is just shoving the entire contents of the fridge into a pot, plastic containers and all, and assuming it'll be fine.

0xbadcafebee15 days ago

> Chefs do learn the chemistry, at least enough to know why their techniques work

Cooks are idiots (most are either illegal immigrants with no formal education, or substance-abusing degenerates who failed at everything else) who repeat what they're told. They think ridiculous things, like that searing a steak "seals in the juices", or that adding oil to pasta water "prevents sticking", that alcohol completely "cooks off", that salt "makes water boil faster", etc. They are the auto mechanics of food. A few may be formally educated but the vast majority are not. They're just doing what they were shown to do.

> A better analogy is just shoving the entire contents of the fridge into a pot, plastic containers and all, and assuming it'll be fine.

That would never result in a good meal. On the other hand, vibe coding is currently churning out not just working software, but working businesses. You're sleeping on the real effect this is having. And it's getting better every 6 months.

Back to the topic: most programmers actually suck at programming. Their code is full of bugs, and occasionally the code paths run into those bugs and make them noticeable, but they are always there. AI does the same thing, just faster, and it's getting better at it. If you still write code by hand in a few years you will be considered a dinosaur.

roberttod15 days ago

It's unintuitive, but having an llm verification loop like a code reviewer works impeccably well, you can even create dedicated agents to check for specific problem areas like poor error handling.

This isn't about anthropomorphism, it's context engineering. By breaking things into more agents, you get more focused context windows.

I believe Gas Town has some review process built in, but my comment is more to address the idea that it's all slop.

As an aside, Opus 4.5 is the first model I've used that mostly doesn't produce much slop, in case you haven't tried it. It still produces some, but not much human input is required for building things (it's mostly higher-level and architectural things it needs guidance on).

fragmede15 days ago

> it's mostly higher level and architectural things they need guidance on

Any examples you can share?

roberttod15 days ago

Mostly, it's not the model that is lacking but the visibility it has. Often the top level business context for a problem is out of reach, spread across slack, email, internal knowledge and meetings.

Once I digest some of this and give it to Claude, it's mostly smooth sailing, but then the context window becomes the problem. Compactions during implementation remove a lot of important info. There should really be a Claude monitoring top-level context and passing work to agents. I'm currently figuring out how to orchestrate that nicely with Claude Code MD files.

With respect to architecture, it generally makes sound decisions but I want to tweak it, often trading off simplicity vs. security and scale. These decisions seem very subtle and likely include some personal preferences I haven't written anywhere.

mactavish8815 days ago

In my experience, it really depends on what you're building _and_ how you prompt the LLM.

For some things, LLMs are great. For others, they're absolute dog shit.

It's still early days. Anyone who claims to know what they're talking about either doesn't or what they're saying will be out of date in a month's time (including me).

anonymous90821315 days ago

The secret is that it doesn't work. None of these people have built real software that anyone outside their bubble uses. They are not replacing anyone, they are just off in their own corner building sand castles.

ryandrake15 days ago

Just because they're one-off tools that only one person uses doesn't mean it's not "real software". I'm actually pretty excited about the fact that it's now feasible for me to replace all my BloatedShittyCommercialApps that I only use 5% of with vibe-coded bespoke tools that only do the important 5%, just for me to use. If that makes it a "sand castle" to you, fine, but this is real software and I'm seeing real benefit here.

nicoburns15 days ago

> I'm actually pretty excited about the fact that it's now feasible for me to replace all my BloatedShittyCommercialApps that I only use 5% of with vibe-coded bespoke tools that only do the important 5%, just for me to use.

Aren't you worried that they'll work fine for 3 weeks then delete all your data when you hold them slightly differently? Vibe-coded software seems to have a similar problem to "Undefined Behaviour", in that just because it works sometimes doesn't mean it will always work. And there's no limit on what it might do when it doesn't work (the proverbial "nasal demons") - it might well wipe your entire hard drive, not just corrupt its own data.

You can of course mitigate this by manually reviewing the software, but then you lose at least some of the productivity benefit.

enraged_camel15 days ago

The whole "real software" thing is a type of elitism that has existed in our field for a long time, and AI is the new battleground on which it is wielded.

azan_15 days ago

> The secret is that it doesn't work.

I have 100% vibecoded software that I now use instead of commercial implementation that cost me almost 200 usd a month (tool for radiology dictation and report generation).

alecbz15 days ago

Wait, so you're a radiologist and you're using software you vibecoded to generate radiology reports for real patients? Is that, like, allowed?

azan_15 days ago

Of course it’s allowed. It’s just a kind of text editor, but with support for speech-to-text and structured reports (e.g. when reporting the spine, if I say l3 bd it automatically inserts a description of a bulging disc in the correct place in the report). I then copy-paste it into the RIS, so there’s absolutely nothing wrong or illegal in that.

Analemma_15 days ago

Vibe-coded radiology reports, finally the 21st century will get its own Therac-25 incident.

azan_15 days ago

Yes, I’m sure that speech-to-text with very nice fluff on top will have terrible consequences. It’s almost as bad as some radiologists using Word for writing reports, which is not FDA-approved (shocking, I know!)

anonymous90821315 days ago

And yet I notice you haven't mentioned publishing it and undercutting the market. You could make a lot of money out-competing the existing option if what you produced was production-grade software. I'm guessing the actual case is that you only needed a small subset of the functionality of the paid software, and the LLM stitched together a rough unpolished proof-of-concept that handled your exact specific use case. Which is still great for you! But it's not the future of coding. The world still needs real engineers to make real software that is suitable for the needs of many, and this doesn't replace that.

johnmaguire15 days ago

My partner is a radiologist and I'd love to hear more about what you built. The engineer in me is also curious how much this cost in credits?

kaydub15 days ago

It CAN be cheap.

I built a clinical pharmacist "pocket calculator" kinda app for a specific function. It was like $.60 in claude credits I think. Built with flutter + dart. It's a simple tool suite and I've only built out one of the tools so far.

Now to be fair, that $.60 session was just the coding. I did some brainstorming in chatgpt and generated good markdown files (claude.md, gemini.md, agents.md) before I started.

timeon15 days ago

How much does renting vibecoding tools cost you?

brokensegue15 days ago

such tools cost 10-20/mo usually?

FridgeSeal15 days ago

Using mystery vibe coded software in a tightly regulated, consequence-heavy environment, that’s so reassuring! /s

Is it _just_ speech-to-text, or god-forbid are you giving it scans and having it write reports for you too?

asadm15 days ago

no that's not true. I rarely write a SINGLE line of code now, at work or at home. Even simple config switches, I ask codex/gemini to do it.

You always have to review the overall diff though, and go back to the agent with broader corrections.

mahogany15 days ago

> You always have to review overall diff though and go back to agent with broader corrections to do.

This thread is about vibe coding _without_ looking at the code.

_zoltan_14 days ago

Of course it works. I haven't looked at code for my internal development in months.

I don't know why people keep repeating this but it's wrong. It works.

causalmodels15 days ago

It is fine to have criticisms of this, I have many, but saying that Yegge hasn't built real software is just not true.

anonymous90821315 days ago

Yegge obviously built real software in the past. He has not built real software wherein he never looked at the code, as he is now promoting.

swiftcoder15 days ago

> saying that Yegge hasn't built real software is just not true

I mean... I feel like it's somewhat telling that his wikipedia page spends half its words on his abrasive communication style, and the only thing approximating a product mentioned is a (lost) Rails-on-Javascript port, and 25 years spent developing a MUD on the side.

Certainly one doesn't get to stay a staff-level engineer at Google without writing code - but in terms of real, shipping software, Yegge's resume is a bit light for his tenure in BigTech

mkl9515 days ago

OP defines herself as a mediocre engineer. She's trying to sell you Slop Town, not engineering principles.

alvatar14 days ago

Just writing a line here in defense of Rothko. His paintings are far harder to paint than they look. There were hundreds of layers, thinly applied, carefully thought out, and with a developed technique. Try to paint that yourself and you'll see.

durch15 days ago

Design indeed becomes the bottleneck. I think this points to a step that is implied but still worth naming explicitly: design isn't just planning upfront. It is a loop where you see output, check if it is directionally right, and refine.

While the agents can generate, they can't exercise that judgement, they can't see nuances and they can't really walk their actions back in a "that's not quite what I meant" sense.

Exercising judgement is where design actually happens; it is iterative, in response to something concrete. The bottleneck isn't just thinking ahead, it's the judgment call when you see the result. It's the walking back as well as the thinking forward.

msp2615 days ago

Originally I thought that Gas Town was some form of high level satire like GOODY-2 but it seems that some of you people have actually lost the plot.

Ralph loops are also stupid because they don't make use of the KV cache properly.

---

https://github.com/steveyegge/gastown/issues/503

Problem:

Every gt command runs bd version to verify the minimum beads version requirement. Under high concurrency (17+ agent sessions), this check times out and blocks gt commands from running.

Impact:

With 17+ concurrent sessions each running gt commands:

- Each gt command spawns bd version

- Each bd version spawns 5-7 git processes

- This creates 85-120+ git processes competing for resources

- The 2-second timeout in gt is exceeded

- gt commands fail with "bd version check timed out"
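For what it's worth, the usual mitigation for this kind of stampede (not what Gas Town actually does; just a sketch) is to memoize the expensive check with a TTL so concurrent commands share one result instead of each spawning their own process tree:

```python
import time

def ttl_cache(fn, ttl: float):
    # Wrap an expensive check (like spawning `bd version` and its 5-7 git
    # subprocesses) so callers within the TTL window reuse one result.
    state = {"t": float("-inf"), "v": None}
    def wrapper():
        if time.time() - state["t"] >= ttl:
            state["v"] = fn()
            state["t"] = time.time()
        return state["v"]
    return wrapper

calls = []
check = ttl_cache(lambda: calls.append(1) or "bd 1.2.3", ttl=60.0)
results = [check() for _ in range(100)]  # 100 "concurrent" gt commands
```

A cross-process version would cache to a file with a lock, but the idea is the same: 17+ sessions become one subprocess per TTL window rather than 85-120+.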

tucnak15 days ago

I think it is satire, and pretty obvious one at that; is anybody taking it for real?

skybrian15 days ago

Why not both? I think it's pretty clearly both for fun and serious.

He's thrown out his experiments before. Maybe he'll start over one more time.

tucnak14 days ago

The big challenge for me so far has been setting up "breakpoints" with sufficient prompt adherence, i.e. conditions for agents to break out of the loop and request actionable feedback, rather than pumping as many tokens as possible. Use cases where pumping tokens in an unsupervised manner is warranted are few and far between. For example, dataset-scale 1:n and n:n transformations have been super easy to set up, but the same implementation typically doesn't lend itself nicely to agent loops, as batching/KV caching suddenly becomes non-obvious and costs ramp up. Task scheduling, with lockstep batching, is a big, unsolved problem as of yet, and Gas Town is not inspiring confidence to that end.

alex_sf15 days ago

> Ralph loops are also stupid because they don't make use of kv cache properly.

This is a cost/resources thing. If it's more effective and the resources are available, it's completely fine.

BoneShard15 days ago

Gaslighting town.

divbzero15 days ago

My instinct is that effective AI agent orchestration will resemble human agile software development more than Steve Yegge’s formulation:

> “It will be like kubernetes, but for agents,” I said.

> “It will have to have multiple levels of agents supervising other agents,” I said.

> “It will have a Merge Queue,” I said.

> “It will orchestrate workflows,” I said.

> “It will have plugins and quality gates,” I said.

More “agile for agents” than “Kubernetes for agents”.

1970-01-0115 days ago

If it's stupid, but it works, it isn't stupid. Gas Town transcends stupid. It is an abstract garbage generator. Call it art, call it an experiment, but you cannot call it a solution to a problem by any definition of the word.

kibwen15 days ago

"If it's stupid, but it works, it isn't stupid" is a maxim that only applies to luxury use cases where the results fundamentally don't matter.

As soon as the results actually matter, the maxim becomes "if it works, but it's stupid, it doesn't work".

shermantanktop15 days ago

I just got some medication yesterday where the leaflet included the following phrase: "the exact mechanism of efficacy is unknown."

So apparently the medical field is not above this logic.

3eb7988a166314 days ago

That is ignorance, not stupidity. If you take compound X and see improvement in Y, that is worthwhile, even if the mechanism is a blackbox.

aaa_aaa15 days ago

It is simply because Mr. Yegge is seeking attention. As he always did.

wordswords214 days ago

There is nothing professional, analytical or scientific about Gas Town at all.

He is just making up a fantasy world where his elves run in specific patterns to please him.

There is no metrics or statistics on code quality, bugs produced, feature requirements met.. or anything.

Just a gigantic wank session really.

edg500014 days ago

Are you being sarcastic or serious? Meeting requirements is implicitly part of any task. Quality/quantification will be embedded in the tasks (e.g. X must be Y <unit>); code style and quality guidelines are probably there somewhere in his task templates. Explicit portions of tasks will implicitly be covered by testing.

I do think it's overly complex though; but it's a novel concept.

63stack14 days ago

Everything you said is also done for regular non-AI development. OP is saying there is no way to compare the two (or even compare version X of Gas Town to version Y of Gas Town) because there are zero statistics or metrics on what Gas Town produces.

walthamstow14 days ago

It's 3 weeks old. If you're so desperate for numbers, give it a go?

pydry14 days ago

>Are you being sarcastic or serious?

I think if you'd read the article through you'd know they were serious, because Yegge all but admits this himself.

ozozozd14 days ago

Very interesting to read people’s belief in English as an unambiguous and testable language.

One comment claims it’s not necessary to read code when there is documentation (generated by an LLM)

Language varies with geography and with time. British, Americans, and Canadians speak “similar” English, but not identical.

And read a book from 70-80 years ago to see that many words appear to be used for their “secondary meaning.” Of course, what we consider their secondary meaning today was the primary meaning back then.

walthamstow14 days ago

As a coworker said this week, there are 10 meanings of the word 'fashion' in English

phaedrus14 days ago

If we had super-smart AI with low latency and fast enough speed, would the perceived need for / usefulness of running multiple agents evaporate? Sure you might want to start working on the prompt or user story for something else while the agent is working on the first thing, but - in my thought experiment here there wouldn't be a "while" because it'd already be done while you're moving your hand off the enter key.

fulafel14 days ago

If they are interacting with the world and tools like web research, compiles, deploys, end-to-end test runs etc., then no.

(Maybe you can argue that you could then do everything with a event-driven single agent, like async for llms, if you don't mind having a single very adhd context)

Descon14 days ago

But maybe this is how a super smart AI works (or at least a prototype of one)

phren0logy15 days ago

Gas Town has a very clear "mad scientist/performance art" sort of thing going on, and I love that. It's taking a premise way past its logical conclusion, and I think that's fun to watch.

I haven't seen anything to suggest that Yegge is proposing it as a serious tool for serious work, so why all the hate?

skywhopper15 days ago

It doesn’t matter what Yegge means by it. Other folks are taking it seriously.

muixoozie14 days ago

First time hearing about this tool and person. I just looked for a YouTube video about it; he was recently interviewed and sounds very serious / bullish on this agentic stuff. I mean, he's saying stuff like if you're still using IDEs you're a bad engineer. Basically you're 10x slower than people good at agentic coding, and HR is going to be looking for reasons to fire these dinosaurs. I'm paraphrasing, but not exaggerating. I mean, it's shilling FOMO and his book. Whatever. I don't really care. I'm more concerned about where things are headed.

bob102914 days ago

I'm beginning to question the notion that multi agent patterns don't work. I think there is something extra you get with a proposer-verifier style loop, even if both sides are using the same base model.

I've had very good success with a recursive sub-agent scheme where a separate prompt (agent) is used to gate the recursive call. It compares the caller's prompt with the proposed callee's prompt to determine if we are making a reasonable effort to reduce the problem into workable base cases. If the two prompts are identical we deny the request with an explanation. In practice, this works so well I can allow for unlimited depth and have zero fear of blowing the stack. Even if the verifier gets it wrong a few times, it only has to get it right once to reverse an infinite descent.
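A toy version of that gate might look like the following. The decomposer is a stub (a real one would be a proposer agent, and the gate another LLM call comparing the two prompts), but the recursion structure is the point:

```python
def decompose(prompt: str) -> list[str]:
    # Stub proposer: split "a+b"-style tasks into parts; an atomic task
    # "decomposes" into itself, the degenerate case the gate must catch.
    parts = prompt.split("+")
    return parts if len(parts) > 1 else [prompt]

def gate(caller: str, callee: str) -> bool:
    # Stand-in for the verifier agent: deny the recursive call when the
    # callee's prompt is identical to the caller's, i.e. no reduction
    # toward a base case is being attempted.
    return callee != caller

def solve(prompt: str) -> str:
    results = []
    for sub in decompose(prompt):
        # Unlimited depth is safe: even if the verifier errs a few times,
        # one correct denial halts an infinite descent.
        results.append(solve(sub) if gate(prompt, sub) else f"done({sub})")
    return " + ".join(results)
```

A denied call simply bottoms out as a base case instead of descending, which is why the stack can't blow even with no explicit depth limit.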

krackers14 days ago

>I think there is something extra you get with a proposer-verifier style loop, even if both sides are using the same base model.

DeepSeekMath-V2 seems to show this: increasing the number of prover/verifier iterations increases accuracy. And this is with a model that has already undergone RL under a prover/verifier selection process.

However this type of subagent communication maintains full context, and is different from "breaking into tasks" style of sharding amongst subagents. I'm less convinced of the latter, because often times a problem is more complex than the sum of its parts, i.e. it's the interdependencies that make it complex and you need to consider each part in relation to the other parts, not in isolation.

bob102914 days ago

The specific way in which we invoke the subagents is critical to the performance of the system. If we use a true external call stack and force proper depth first recursion, the effective context can be maintained to whatever depth is desired.

Parallelism and BFS style approaches do not exhibit this property. Anything that happens within the context or token stream is a much weaker solution. Most agent frameworks are interested in appearance of speed, so they miss out on the nuance of this execution model.

martin-t15 days ago

Anybody here read Coding machines?

There's this implied trust we all have in the AI companies that the models are either not sufficiently powerful to form a working takeover plan or that they're sufficiently aligned to not try. And maybe they genuinely try but my experience is that in the real world, nothing is certain. If it's not impossible, it will happen given enough time.

If the safety margin for preventing takeover is "we're 99.99999999 percent sure per 1M tokens", how long before it happens? I made up these numbers but any guess what they are really?

Because we're giving the models so much unsupervised compute...

rexpop15 days ago

> If it's not impossible, it will happen given enough time.

I hope you might be somewhat relieved to consider that this is not so in an absolute sense. There are plenty of technological might-have-beens that didn't happen, and still haven't, and probably will never—due to various economic and social dynamics.

The counterfactual—all that's possible happens—is almost tautological.

We should try and look at these mechanisms from an economic standpoint, and ask "do they really have the information-processing density to take significant long-term independent action?"

Of course, "significant" is my weasel word.

> we're giving the models so much unsupervised compute...

Didn't you read the article? It's wasted! It's kipple!

ramoz15 days ago

I ran a similar operation over summer where I treated vibecoding like a war. I was the general. I had recon (planning), and frontmen/infantry making the changes. Bugs and poor design were the enemy. Planning docs were OPORD, we had sit reps, and after action reports - complete e2e workflow. Even had hooks for sounds and sprites. Was fun for a bit but regressed to simpler conceptual and more boring workflows.

Anyways we'll likely always settle on simpler/boring - but the game analogies are fun in the time being. A lot of opportunity to enhance UX around design, planning, and review.

mohsen115 days ago

I tried building something like this, similar to many others here, but now I'm convinced agents should just use GitHub issues and pull requests. You get nice CI and code reviews (AI or human), and the state of progress is not kept in the code.

Basically simulate a software engineering team using GitHub but everyone is an agent. From tech lead to coders to QA testers.

https://github.com/mohsen1/claude-code-orchestrator
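The shape of that loop can be sketched as follows. All functions are stubs (a real version would shell out to the `gh` CLI or call the GitHub API), and the label scheme is purely illustrative:

```python
def fetch_open_issues() -> list[dict]:
    # Stub for listing open issues via `gh` or the GitHub REST API.
    return [{"number": 7, "title": "Add retry logic", "labels": ["agent:coder"]}]

def run_agent(role: str, issue: dict) -> str:
    # Stub: a real agent would check out a branch, do the work, open a PR.
    return f"PR for #{issue['number']} by {role} agent"

def orchestrate() -> list[str]:
    prs = []
    for issue in fetch_open_issues():
        # Route by label; tech lead, coder, and QA are distinct agent personas.
        role = issue["labels"][0].split(":", 1)[1]
        # The PR, not agent memory, carries the result: CI and reviewers
        # (human or AI) gate the merge exactly as they would for people.
        prs.append(run_agent(role, issue))
    return prs
```

The appeal is that the orchestration state lives in infrastructure teams already operate (issues, PR status, CI checks), so crashed agents lose nothing and humans can inspect or intervene at any point.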

thorum15 days ago

Am I wrong that this entire approach to agent design patterns is based on the assumption that agents are slow? Which yeah, is very true in January 2026, but we’ve seen that inference gets faster over time. When an agent can complete most tasks in 1 minute, or 1 second, parallel agents seem like the wrong direction. It’s not clear how this would be any better than a single Claude Code session (as “orchestrator”) running subagents (which already exist) one at a time.

Ethee15 days ago

It's likely that you are thinking too small. Sure, for one-off tasks and small implementations, a single prompt might save you 20-30 mins. But when you're building an entire library/service/piece of software in 3 days that would normally have taken you 30 days by hand, the real limitation comes down to how fast you can get your design into a structured format, as this article describes.

thorum15 days ago

Agree that planning time is the bottleneck, but

> 3 days

still seems slow! I’m saying what happens in 2028 when your entire project is 5-10 minutes of total agent runtime - time actually spent writing code and implementing your plan? Trying to parallelize 10m of work with a “town” of agents seems like unnecessary complexity.

Ethee14 days ago

I think that most of the anecdotal and research experiences I've seen for AI agent use so far tell us that you need at least a couple pass-throughs to converge upon a good solution, so even in your future vision where we have models 5x as good as now, I'll still need at least a few agents to ensure I arrive at a good solution. By this I specifically mean a working implementation of the design, not an incorrect assumption of the design which leads the AI off on the wrong path which I feel like is the main issue I keep hearing over and over. So coming back to your point, assuming we can have the 'perfect' design document which lays out everything, yeah we'll probably only need like 5 agents total to actually build it in a few years.

SimianSci15 days ago

I've been researching the usage of developer tooling at my own and other organizations for years now, and I'm genuinely trying to understand where agentic coding fits into the evolving landscape. One of the most solid things I'm beginning to understand is that many people don't understand how these tools influence technical debt.

Debt doesn't come due immediately; it's accrued, and may allow for the purchase of things that were once too expensive, but eventually the bill comes due.

I've started referring to vibe-coding as "credit cards" for developers, allowing them to accrue massive amounts of technical debt that were previously out of reach. This can give some competent developers incredible improvements to their work. But for the people who accrue more technical debt than they have the ability to pay off, it can sink their project and cost our organization a lot in lost investment of both time and money.

I see Gas Town and tools like it as debt schemes where someone applies for more credit cards to pay the payments on prior cards they've maxed out, compounding the issue with the vague goal of "eventually it pays off." So color me skeptical.

Not sure if this analogy holds up to all things, but it's been helping my organization navigate the application of agents, since it allows us to allocate spend depending on the seniority of each developer. Thus I've been feeling like an underwriter, having to figure out if a developer requesting more credits or budget for agentic code can be trusted to pay off the debt they will accrue.

hahahahhaah14 days ago

I found AI particularly useful in ossified swamps at big companies where paying down tech debt would be a major many-team task, unalignable with OKRs. But an agent helps you use natural language to do the needful boilerplate and get the cursed "do this now" task done.

perrygeo14 days ago

I get that Gas Town is part tongue-in-cheek, a strawman to move the conversation on Agentic AI forward. And for that I give it credit.

But I think there's a real missed opportunity here. I don't think it goes far enough. Who wants some giant complex system of agents conceived by a human? The agents, their roles and relationships, could be dynamically configured according to the task.

What good is removing human judgment from the loop, only to constrain the problem by locking in the architecture a priori? It just doesn't make sense. Your entire project hinges on the waterfall-like nature of the agent design! That part feels far too important, but Gas Town doesn't have much curiosity at all about changing it. These Mayors, and Polecats, and Witnesses, and Deacons are but one of infinite ways to arrange things. Why should there be just one? Why should there be an up-front design at all? A dynamic, emergent network of agents feels like the real opportunity here.

pianopatrick14 days ago

People, including the author of this article, say that design and architecture are the hard parts, but I think long term those are just as solvable as coding.

I think architecture will become like an installer. Some kind of agent orchestration system will ask you "do you want this or that" and guide you through various architecture choices when you set up a project, or when those choices arise.

And for design, now that code is fast and easy to generate, an agent system can just generate two, three or four versions of the UX for each feature and ask "do you like this one, this one or that one?".

So a switch from upfront design / architecture choices you have to put into prompts to the agent orchestration system asking you to make a choice when the choice becomes relevant.

falcor8414 days ago

As Yegge himself would agree, there's likely nothing that is particularly good about this specific architecture, but I think that there's something massive in this as a proof of concept for something bigger beyond the realm of software development.

Over the last few years, people have been playing around with trying to integrate LLMs into cognitive architectures like ACT-R or Soar, with not much to show for it. But I think that here we actually have an example of a working cognitive architecture that is capable of autonomous long-term action planning, with the ability to course-correct and stay on task.

I wouldn't be surprised if future science historians will look at this as an early precursor to what will eventually be adapted to give AIs full agentic executive functioning.

edg500014 days ago

First time I'm seeing this on HN. Maybe it was posted earlier.

I've been doing manual orchestration where I write a big spec that contains phases (each done by an agent) and instructions for the top-level agent on how to interact with the sub-agents. It works well but is hard to utilize effectively. No doubt this is the future. This approach is bottlenecked by limitations of the CC client, mainly that I cannot see inter-agent interactions fully, only the tool calls. Using a hacked client or a compatible reimplementation of CC may be the answer, unless the API were priced attractively or other models could do the work. Gemini 3 may be able to handle it better than Opus 4.5, though the Gemini 3 pricing model is complex, to say the least.

chrisss39514 days ago

Yes to Maggie & Steve's amazingly well written articles...and:

I would love to see Steve consider different command and control structures, and re-consider how work gets done across the development lifecycle. Gas Town's command and control structure read to me like "how a human would think about making software." Even the article admits you need to re-think how you interact in the Gas Town world. It actually may understate this point too much.

Where and how humans interact feels like something that will always be an important consideration, both in a human & AI dominated software development world. At least from where I sit.

riwsky15 days ago

"I give it a hot minute before this type of task tracking lands in Claude Code."

aaaaand right on cue: https://github.com/anthropics/claude-code/commit/e431f5b4964... https://www.threads.com/@boris_cherny/post/DT15_k2juQH/at-th...

jbgreer14 days ago

My read on the reception of Steve’s post is that there are largely 2 camps, one of which thinks he’s given them a concrete tool to use, and the other of which thinks he has given them something to think about. I read his experiment as suggesting an agent architecture akin to Erlang supervisor trees, i.e. agents are cattle, not pets, and should be monitored and processed as such, with the obvious caveat that context matters.

dunk01015 days ago

> When I was taken to the Tate Modern as a child I’d point at Mark Rothko pieces and say to my mother “I could do that”, and she would say “yes, but you didn’t.”

Yes, but you didn't https://www.signedoriginalprints.com/cdn/shop/products/wegot...

alvatar14 days ago

Actually Rothko is way harder to paint than it looks.

AtlasBarfed15 days ago

Which building in gastown is the infinite token burning machine?

acedTrex15 days ago

[flagged]

jsheard15 days ago

Did you catch the part where it crossed over into a crypto pump-and-dump scam, with Yegge's approval? And then the guy behind the "Ralph" vibe coding thing endorsed the same scam, despite being a former crypto critic who should absolutely know better?

square_usual15 days ago

Is anybody surprised all the AI influencers are doing the same thing all the crypto influencers are doing?

esperent14 days ago

I mean, if I, as a crypto critic, saw an opportunity to suddenly make hundreds of thousands or millions on a fully legal but shady crypto scheme - purely by piggybacking on some other loudmouth (Yegge) - I'd be very hard pressed not to take it.

ewoodrich14 days ago

Perhaps, I can't say with 100% certainty that I wouldn't if offered 50k+ just for writing a blog post. But in doing so I would also have to accept being labeled a "crypto shill" instead of "crypto critic" for the rest of my life.

kh_hk15 days ago

Brought to you by the creators (abstractly) of vibe coding, ralph and yolo mode. Either a conspiracy to deconstruct our view of reality, or just a tendency to invent funny words for novelty

cluckindan15 days ago

It’s brainrot, that’s what it is.

I believe agentic coding could eventually be a paradigm shift, if and only if the agents become conscious of design decisions and their implications for the system and its surrounding systems as a whole.

If that doesn’t happen, the entire workflow devolves into specifying system states and behavior in natural language, which is something humans are exceedingly bad at.

Coincidentally, that is why we invented programming languages: to be able to express program state and behavior unambiguously.

I’m not bullish on a future where I have to write specifications for all explicit and implicit corner and edge cases just to have an agent make software design choices that don’t feel batshit insane to humans.

We already have software corporations which produce that kind of code simply because the people doing the specifying don’t know the system or the domain it operates in, and the people doing the implementing of those specifications don’t necessarily know any of that either.

conception14 days ago

We have programming languages because people don’t want to only write assembly.

scott_waddell11 days ago

The best engineering often comes from that "let's mash things together and see what happens" approach. Most breakthroughs started as someone's weird experiment that made the serious folks uncomfortable.

_pdp_15 days ago

It occurs to me that there is an extraordinary amount of BS coming from all over the place these days, and I wonder whether it comes from people with actual real experience or just from some hypothetical, high-level thinking game.

I mean, we use coding agents all the time these days (on autopilot) and there is absolutely nothing of this sort. Coding with AI looks a lot like coding without AI. The same old processes apply.

I mean "I feel like I'm taking crazy pills".

siliconc0w15 days ago

Gas Town is better enjoyed as a Fear and Loathing-style, acid-fueled fever dream than as a productivity tool.

shermantanktop15 days ago

I guess that's the way I "enjoyed" it as well. He could not be clearer with all the caveats and sarcastic asides.

psadauskas15 days ago

> In the same way any poorly designed object or system gets abandoned

Hah, tell that to Docker, or React (the ecosystem, not the library), or any of the other terrible technologies that have better thought-out alternatives, but we're stuck with them being the de facto standard because they were first.

juanre15 days ago

I have not tried Gas Town yet, but Steve's beads https://github.com/steveyegge/beads (used by Gas Town) has been a game-changer, on the order of what claude code was when it arrived.

zingar15 days ago

Do you have any workflow tips or write up with beads?

juanre15 days ago

My workflow tends to be very simple: start a session; ask the agent "what's next", which prompts it to check beads; and more often than not ask it to just pick up whichever bead "makes more sense".

In claude I have a code-reviewer agent, and I remind cc often to run the code reviewer before closing any bead. It works surprisingly well.

I used to monitor context and start afresh when it reached ~80%, but I stopped doing that. Compacting is not as disruptive as it used to be, and with beads agents don't lose track.

I spent some time trying to measure the productivity change due to beads, analysing cc and codex logs and linking them to deltas and commits in git [1]. But I did not fully believe the result (5x increase when using beads, there has to be some hidden variable) and I moved on to other things.

Part of the complexity is that these days I often work on two or three projects at the same time, so attribution is difficult.

[1] Analysis code is at https://github.com/juanre/agent-taylor

drivebyhooting15 days ago

Has anyone used gas town or any other agentic system to build something useful that people want and need?

stephen_cagle15 days ago

Has anyone contrasted Gas Town with Stanford's DSPy (https://dspy.ai/)? They seem related, but I have trouble understanding exactly what Gas Town is, so I can't do the comparison myself.

saturatedfat14 days ago

let me take a shot. i have thought about both for a while.

dspy is declarative. you say what you want.

dspy says “if you can say what you want in my format, I will let you extract as much value from current LLMs as possible” with its inference strategies (RLM, COT; “modules”) and optimizers (GEPA).

gas town is … given a plan, i will wrangle agents to complete the plan. you may specify workflows (protomolecules/molecules) that will be repeatedly executed.

the control flow is good about capturing delegation. the mayor writes plans, and polecats do the work. you could represent gas town as a dspy program in a while loop, where each polecat loops until its hooked work is done. when work is finished, it's sent to the merge queue and integrated.

gas town uses mostly ephemeral agents as the units for doing work.

you could in theory write gas town with dspy. the execution layer is just an abstraction. gas town operates on beads as state. you could funnel these beads thru a dspy program as well.

the parallels imo are mostly just structured orchestration.

i hope this comes off as sane. 2026 will be a fun year.
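the mayor/polecat/merge-queue control flow above can be sketched in plain python (Bead, polecat, and mayor are illustrative stand-ins, not gas town's actual API):

```python
from dataclasses import dataclass

@dataclass
class Bead:
    """A unit of tracked work, in the spirit of Gas Town's beads."""
    title: str
    done: bool = False

def polecat(bead: Bead) -> Bead:
    """Ephemeral worker: runs until its hooked work is done, then exits.
    In the real system this would be a full LLM agent session."""
    bead.done = True
    return bead

def mayor(backlog: list[Bead]) -> list[Bead]:
    """Planner: hand each bead to a fresh polecat; finished work
    lands in the merge queue for integration."""
    merge_queue: list[Bead] = []
    while backlog:
        finished = polecat(backlog.pop(0))
        if finished.done:
            merge_queue.append(finished)
    return merge_queue

merged = mayor([Bead("wire up auth"), Bead("write docs")])
```

a dspy version would swap the polecat function for a module with an inference strategy; the while loop and the bead state are the parts the two systems share.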

stephen_cagle14 days ago

Thank you for your response.

Haha, yes, when read out loud, all the new terms do come off as a bit unhinged. :]

It sounds like the major difference is that DSPY is more of a "define a node in a graph of computation, flow data through those nodes". While Gas Town is ideally more of "Tell me what you want, I will spin up a graph of nodes that you can have some input on to complete your work".

entaloneralie15 days ago

Brawndo energy

pradn14 days ago

> When I was taken to the Tate Modern as a child I’d point at Mark Rothko pieces and say to my mother “I could do that”, and she would say “yes, but you didn’t.”

Actually, no, you couldn't. The subtlety of the choice of colors, their shading, their soft shaping, and the program of their creation over many years - you couldn't do that. They're lovely and sublime, wonderful and an abyss. If you want to throw all that away and reduce it to two boxes of paint, go ahead - but you'll be wasting a lifetime's engagement, the joy of seeing with your intellect wide open.

jmspring15 days ago

I commented in the "very serious engineer" thread about my thoughts.

I do want to get this one off my chest, though - GT is actually fun to explore, and to see how multiple agents work together.

melagonster14 days ago

>when I’m still hovering around stages 4-6 in Yegge’s 8 levels of automation

Maybe Yegge’s 8 levels of automation will prove more important than his Gas Town.

llIIllIIllIIl14 days ago

Although bright ideas may be found in this post, the anthropomorphizing of LLM agents turns me away from reading it.

tigerlily15 days ago

Gas Town could be good as a short film. Hell, I thought by all the criticism that it was a short film.

CjHuber15 days ago

I wonder how much more efficient and effective it would be after fine tuning models for each role

blibble15 days ago

can't wait to have my 6 PHBs telling me to adopt Gas Town in 2 years time

tofuahdude15 days ago

Pretty hilarious write up and interesting frontier research project. I love it.

shaunxcode15 days ago

Viable System Model when?

DonHopkins14 days ago

According to my simulated monkey Palm, Gas Town uses the Infinite Number of Typewriters architecture, but unfortunately they charge by the token.

Palm's Infinite Number of Typewriters:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

Palm's papers:

From Random Strumming to Navigating Shakespeare: A Monkey's Tribute to Bruce Tognazzini's 1979 Apple II Demo:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

One Monkey, Infinite Typewriters: What It's Like to Be Me:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

The Inner State Question: Do I Feel, or Do I Just Generate Feeling-Words?

https://github.com/SimHacker/moollm/blob/main/examples/adven...

On Being Simulated: Ethics From the Inside:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

Judgment and Joy: On Evaluation as Ethics, and Why Making Criteria Visible is an Act of Love:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

The Mirror Stage of Games: Play, Identity, and How The Sims Queered a Generation:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

I-Beam's X-Ray Trace: The Complete Life of Palm: A cursor-mirror and git-powered reflection on Palm's existence:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

Palm's Origin Story:

Session Log: Don Hopkins at the Gezelligheid Grotto:

DAY 1 — THE WISH: Don purchases lucky strains, prepares an offering, convenes an epic tribunal with the Three Wise Monkeys, Sun Wukong, a Djinn, Curious George, W.W. Jacobs' ghost, and Cheech & Chong as moderators — then speaks a wish that breaks a 122-year curse and incarnates Palm.

https://github.com/SimHacker/moollm/blob/main/examples/adven...

sph14 days ago

> According to my simulated monkey Palm

Oh, please. We've lost Yegge to the madness, not you as well! I used to enjoy your insightful comments on the history of computing all over this forum.

In a tech world that's gone utterly psychotic in a couple of years, I have to wonder if I'm the crazy one for not injecting... whatever the hell you guys are taking, creating repos that read like an unholy mix between Time Cube and Terry A. Davis during one of his episodes, just as incoherent, only with 10x more emojis.

It is utterly frightening. I want off this wild ride.

DonHopkins14 days ago

I appreciate your earnest Harumph, kind cross-armed sir!

I take it you never played The Sims? The simulation madness ship sailed 26 years ago, and made well over $5 billion in revenue for Maxis/EA (as of 2019), and that's not even counting SimCity, which shipped in 1989! ;)

The Sims Franchise Has Made Over $5 Billion In Revenue (Published Oct 30, 2019):

https://www.thegamer.com/the-sims-franchise-revenue-over-5-b...

Palm is a fictional character in a text adventure -- same tradition as Zork, LambdaMOO, and every MUD since 1978. The emojis are deliberate (navigation aids for LLMs, actually, and ethical simulated person flagging). The whimsy is intentional.

emoji-disclosure.yml -- Visual Markers for Representation Ethics:

https://github.com/SimHacker/moollm/blob/main/skills/represe...

I've been building simulated characters, world, city, house, and object building and programming tools since before The Sims shipped. This is just the next iteration.

The Sims Steering Committee - June 4 1998:

https://www.youtube.com/watch?v=zC52jE60KjY

The Sims, Pie Menus, Edith Editing, and SimAntics Visual Programming Demo:

https://www.youtube.com/watch?v=-exdu4ETscs

You may recognize tributes to many classic simulated characters in MOOLLM.

MC Frontalot -- It Is Pitch Dark:

https://www.youtube.com/watch?v=4nigRT2KmCE

The Grue monster carries his own game mechanics with him, eating you if you go for long enough in the maze without your lamp lit:

grue: https://github.com/SimHacker/moollm/tree/main/examples/adven...

The Wumpus has prototypes for his game playing pieces in his character directory, and even the BASIC source code as the single source of truth:

https://en.wikipedia.org/wiki/Wumpus

wumpus-snorax: https://github.com/SimHacker/moollm/tree/main/examples/adven...

BOTTOMLESS-PIT.yml: https://github.com/SimHacker/moollm/blob/main/examples/adven...

SUPERBATS.yml: https://github.com/SimHacker/moollm/blob/main/examples/adven...

Hunt the Wumpus — Original BASIC Source (1973), By Gregory Yob, published in Creative Computing (October 1975) and The Best of Creative Computing (1976):

https://github.com/SimHacker/moollm/blob/main/examples/adven...

The Grue and the Wumpus can both orchestrate their games in the same maze at the same time without interference! No special hacks required, they all just compose and interoperate seamlessly and naturally.

The monkey named "Palm" multiply inherits directly from LucasArts' game "Monkey Island" and W. W. Jacobs' classic story, "The Monkey's Paw", all thanks to Self's simple, flexible prototype object model:

Monkey Island: https://en.wikipedia.org/wiki/Monkey_Island

The Monkey's Paw: https://en.wikipedia.org/wiki/The_Monkey%27s_Paw

Palm's CHARACTER.yml Soul File:

https://github.com/SimHacker/moollm/blob/main/examples/adven...

  # ONTOLOGICAL INHERITANCE
  inherits:
    - skills/fictional     # A character in the adventure
    - skills/mythic        # Origin as cursed artifact
    - skills/animal        # Now a whole monkey

  # TRADITION INVOCATION (Self Prototype Multiple Inheritance)
  # Palm inherits from multiple well-known fictional traditions,
  # simply by naming them — they're so deeply embedded in the training
  # data that invocation IS inheritance. No copying needed.
  # See: https://en.wikipedia.org/wiki/Self_(programming_language)

MOOLLM is deliberately playful, in the spirit and tradition of Seymour Papert's Constructionist Philosophy, and Mitchel Resnick's Lifelong Kindergarten -- it's a text adventure game framework, not a vibe-coded hallucination. The monkey writes philosophy because that's funnier than a generic NPC.

And it makes fun of crypto scams instead of shilling them:

https://github.com/SimHacker/moollm/blob/main/skills/economy...

Time Cube didn't have rubric-scored game sessions with receipts, and MOOLLM isn't racist like Terry Davis was, so you can easily clone it from GitHub and play with it in Cursor yourself.

Seymour Papert and Idit Harel: Situating Constructionism:

https://web.media.mit.edu/~calla/web_comunidad/Reading-En/si...

Lifelong Kindergarten: how to learn like a kid, from the co-creator of Scratch:

https://www.media.mit.edu/articles/lifelong-kindergarten-how...

And yes, it works great, and is fun and easy to author, faster and for less money than Steve's costly "Infinite Number of Typewriters Communicating Via Carrier Pigeon" approach.

And since you mentioned you enjoy fully-introspectable modern virtual machines too, MOOLLM is one that actually works -- I think we're actually on the same page about a constellation of brilliant points of light (Self, reflection, homoiconicity, communicating actors, introspection, Alan Kay, simplicity over speed, Lisp, Smalltalk, Erlang actors, language independence, first class K-lines, first class concurrency, optimised for simplicity, the universal interpreter, etc):

https://combo.cc/posts/what-i-would-like-to-see-in-a-modern-...

sph> What I would like to see in a modern Virtual Machine

sph> As I was gathering inspiration (or doing research, if you want to be fancy about it) for the fully-introspectable computing platform introduced in the previous post, I figured it might be worth taking the idea of abstracting the hardware, to make user-facing software easier to program, to its limit.

Case in point:

You should check out the practical MOOLLM skill Cursor Mirror, introspection into cursor prompts, thought, and context assembly:

https://github.com/SimHacker/moollm/tree/main/skills/cursor-...

>Ever wondered what the hell Cursor is actually doing? Why it read 47 files when you asked a simple question? What context it assembled? What it was thinking in those hidden reasoning blocks?

>cursor-mirror cracks open Cursor's brain. 59 read-only commands to inspect every conversation, every tool call, every file it touched, every decision it made. SQLite databases + plaintext transcripts + cached tool results — all intertwingled, all queryable.

Here is an example Cursor Mirror report on a complex long running simulation, dynamic rule generation, and coherent image generation:

Cursor Mirror Analysis: Amsterdam Fluxx Championship: Deep comprehensive scan of the entire FAFO tournament development:

https://github.com/SimHacker/moollm/blob/main/skills/experim...

Amsterdam Flux Card Gallery: 32 AI-generated illustrations for the MOOLLM Amsterdam Flux deck:

https://github.com/SimHacker/moollm/blob/main/skills/experim...

Cursor Mirror is seamlessly composable with other skills, and here are two practical exemplary skills built on top of it:

Skill Snitch provides security auditing for MOOLLM skills through static analysis and runtime surveillance, like Little Snitch for LLMs.

https://github.com/SimHacker/moollm/tree/main/skills/skill-s...

>Security auditing for MOOLLM skills through static analysis and runtime surveillance.

>Skill Snitch is a prompt-driven skill (no Python code) that audits skills for security issues. It's entirely data-driven and extensible.

Thoughtful Commitment writes git commits that link to the thinking that produced them.

https://github.com/SimHacker/moollm/tree/main/skills/thought...

>When you work with an AI coding assistant, the session holds valuable context: what you asked, what the AI considered, what alternatives were rejected, why it made certain choices. When you close the IDE, all of that vanishes. Your commit says "fix: auth bug" but six months later you have no idea why.

>This skill captures that ephemeral reasoning and freezes it into permanent git history.

q3k14 days ago

Stop. Seek help.

DonHopkins14 days ago

[dead]

sneilan115 days ago

I love it! I'm at level 6 and brave enough to try. I'm in. Giving this a shot!

hahahahhaah14 days ago

I am at 5 at home, 3 or something shit at work. I don't like wasting money tho.

DonHopkins14 days ago

Steve Yegge> But I’ve already started to get strange offers, from people sniffing around early rumors of Gas Town, to pay me to sit at home and be myself: I get to work on Beads and Gas Town, and just have to write a nice blog post or go to a conference or workshop once in a while. I have three such offers right now. It’s almost surreal.

It's all performance art! At the Anthony d’Offay Gallery in 1988, Lucian Freud’s model Leigh Bowery used to get paid to sit on an Empire divan behind a one way mirror and just relax, preen, perch, pose, recline, and do his thing for hours on end, while people paid good money to watch him. Great work if you can get it!

Bob Nickas on Leigh Bowery:

https://www.artforum.com/columns/bob-nickas-on-leigh-bowery-...

>“IT WAS A BIT LIKE GOING to the zoo and watching Guy the Gorilla in drag.” That’s how Cerith Wyn Evans recalls Leigh Bowery’s weeklong London performance at Anthony d’Offay Gallery in 1988. Bowery, each day in a different costume of his own design, appeared behind a one-way mirror, with an Empire divan on which to perch, pose, or recline. Visitors saw him, but he saw only himself, performed for his own reflection. Footage of the event figures prominently in The Legend of Leigh Bowery (2002), Charles Atlas’s recently unveiled documentary, and the spooky, otherworldly spell that Bowery casts is undeniable. The zoo reference nails it. With rivulets of iridescent purple glue spilled like blood from the top of his shaved head and a silky lime feathered bodice, Bowery appears to be an ostrich in human form. Black-spotted faux fur covering his face and upper body, he is transformed into an alien snow leopard. Bowery’s uncanny ability to visually disorient the senses remains unmatched, his reinvention of costume as sculpture groundbreaking. From the tripped-out tribalism of Forcefield and the psychedelic erotics of Christian Holstad to the work of designers such as Rei Kawakubo and Alexander McQueen, his vocabulary, punctuated by about a million sequins, resonates to this day.

Leigh Bowery at d'Offay:

https://www.youtube.com/watch?v=NGRvjTiJBpI

https://www.youtube.com/watch?v=UNlGKUP2F9w

https://www.youtube.com/watch?v=ly6nKBdHZ34

Love! Love! Love!

https://www.youtube.com/watch?v=VO8QsdJFQ5Y

karel-3d14 days ago

[flagged]

sicariomoon3 days ago

[dead]

Clark323213 days ago

[dead]

cap1123514 days ago

I don't see why anyone bothers looking at that crypto shill edgelord.

huflungdung14 days ago

[dead]

simianparrot15 days ago

[flagged]

Alena414 days ago

[flagged]

0xbadcafebee15 days ago

> I also think Yegge deserves praise for exercising agency and taking a swing at a system like this, despite the inefficiencies and chaos of this iteration. And then running a public tour of his shitty, quarter-built plane while it’s mid-flight.

Can we please stop with the backhanded compliments and judgement? This is cutting edge technology in a brand new field of computing using experimental methods. Please give the guy a break. At least he's trying to advance the state of the art, unlike all the people that copy everyone else.

crote15 days ago

> Please give the guy a break. At least he's trying to advance the state of the art.

The problem is that as an outsider it really looks like someone is trying to herd a bunch of monkeys into writing Shakespeare, or trying to advance impressionist art by pretending a baby's first crayon scratches are equivalent to a Pollock.

I bet he's having a lot of fun playing around with "cutting-edge technology", but it's missing any kind of scientific rigor or analysis, so the results are going to be completely useless to anyone wanting to genuinely advance the use of LLMs for programming.

Ronsenshi15 days ago

I agree that he probably has a lot of fun. What he's doing is the equivalent of throwing a hand grenade into a crowd and enjoying the chaos of it all - he's set in life, and can comfortably retire while the rest of the industry tries to deal with that hand grenade, some people fighting to put the safety pin back in while others try to stop them.