Professional software developers don't vibe, they control

217 points | 1 month ago | arxiv.org
simonw1 month ago

This is pretty recent - the survey they ran (99 respondents) was August 18 to September 23, 2025, and the field observations (watching developers for 45 minutes followed by a 30-minute interview; 13 participants) were August 1 to October 3.

The models were mostly GPT-5 and Claude Sonnet 4. The study was too early to catch the 5.x Codex or Claude 4.5 models (bar one mention of Sonnet 4.5).

This is notable because a lot of academic papers take 6-12 months to come out, by which time the LLM space has often moved on by an entire model generation.

utopiah1 month ago

> academic papers take 6-12 months to come out, by which time the LLM space has often moved on by an entire model generation.

This is a recurring argument which I don't understand. Doesn't it simply mean that whatever conclusions they drew were valid then? The research process is about approximating a better description of a phenomenon in order to understand it, not about providing a definitive answer. Being "an entire model generation" behind would matter if fundamental problems had been solved, e.g. no more hallucinations, but if the changes are incremental then most likely the conclusions remain correct. Which fundamental change (I don't think labeling newer models as "better" is sufficient) do you believe invalidates their conclusions in this specific context?

soulofmischief1 month ago

2025 has been a wild year for agentic coding models. Cutting-edge models in January 2025 don't hold a candle to cutting-edge models in December 2025.

Just the jump from Sonnet 3.5 to 3.7 to 4.5 and Opus 4.5 has been pretty massive in terms of holistic reasoning and deep knowledge, as well as better procedural and architectural adherence.

GPT-5 Pro convinced me to pay $200/mo for an OpenAI subscription. The regular 5.2 models, and 5.2 Codex, are leagues better than GPT-4 when it comes to solving problems procedurally, using tools, and discussing scientific, mathematical, philosophical and engineering problems in depth.

Models have increasingly long context windows, especially some Google models. OpenAI has released very good image models, and great editing-focused image models have been released in general. Multimodal inference is getting predictably better over the short term, unlocking many cool near-term possibilities.

Additionally, we have seen some incredible open source and open weight models released this year. Some fully commercially viable without restriction. And more and more smaller TTS/STT projects are in active development, with a few notable releases this year.

Honestly, the landscape at the end of the year is impressive. There has been great work all over the place, almost too much to keep up with. I'm very interested in the Genie models and a few others.

For an idea:

At the beginning of the year, I was mildly successful at getting coding models to make changes in some of my codebases, but the more esoteric problems were out of reach. Progress in general was deliberate and required a lot of manual intervention.

By comparison, in the last week I've prototyped six applications at levels that would take me days to weeks individually, often developing multiple at the same time: monitoring agentic workflows and intervening only when necessary, relying on long preproduction phases with architectural discussions and development of documentation, requirements, SDDs... and on detailed code review and refactoring processes to ensure adherence to constraints. I'm morphing from a very busy solo developer into a very busy product manager.

foldr1 month ago

>By comparison, in the last week I've prototyped six applications at levels that would take me days to weeks individually [...]

I don't doubt that the models have got better, but you can go back two or three years and find people saying the exact same stuff about the latest models back then.

orwin1 month ago

> Just the jump from Sonnet 3.5 to 3.7 to 4.5 and Opus 4.5 has been pretty massive in terms of holistic reasoning and deep knowledge, as well as better procedural and architectural adherence.

I don't really agree. Aside from how it handled frontend code, changes in Sonnet did not truly impact my overall productivity (from Sonnet 3.7 to 4 to 4.5; I did not try 3.5). Opus 4.5/Codex 5.2 are when the changes truly happened for me (and I'm still a bit distrustful of Codex 5.2, but I use it basically to help me during PRs).

soulofmischief1 month ago

That's fine. Maybe you're holding it wrong, or maybe your work is too esoteric/niche/complex for newer models to be bigger productivity boosters. Some of mine certainly is, I get that. But for other stuff, these newer models are incredible productivity boosters.

I also chat with these models for long hours about deep, complicated STEM subjects and am very impressed with the level of holistic knowledge and wisdom compared to models a year ago. And the abstract math story has gotten sooooo much better.

simonw1 month ago

The problem is with how people interpret these results.

A paper comes out that says "we did a study of developers and found that AI assistance had no impact on their productivity (using the state-of-the-art models available in September 2024)", and a lot of people will point to that as incontestable evidence that "AI doesn't work".

bbor1 month ago

I’m glad someone else noticed the time frames — turns out the lead author here has published 28 distinct preprints in the past 60 days, almost all of which are marked as being officially published already/soon.

Certainly some scientists are just absurdly efficient, and all 28 involved teams of coauthors, but that's still a lot.

Personally speaking, this gives me second thoughts about their dedication to truly accurately measuring something as notoriously tricky as corporate SWE performance. Any number of cut corners in a novel & empirical study like this would be hard to notice from the final product, especially for casual readers…TBH, the clickbait title doesn’t help either!

I don’t have a specific critique on why 4 months is definitely too short to do it right tho. Just vibe-reviewing, I guess ;)

aaronblohowiak1 month ago

Are they a PI with a lab? In this field, does the PI get first or last author?

dheera1 month ago

> academic papers take 6-12 months to come out

It takes about 6 months to figure out how to get LaTeX to position figures where you want them, and then another 6 months to fight with reviewers

zeristor1 month ago

Couldn't AI help with the LaTeX?

Cutting it down to 6 minutes

jsrozner1 month ago

I have found it to be pretty bad at formatting tables

ActionHank1 month ago

For what it's worth, I know this is likely intended to read as "the new generation of models will somehow be better than any paper will be able to gauge", but that hasn't been my experience.

Results are getting worse and less accurate; hell, I even had Claude drop some Chinese into a response out of the blue one day.

danielbln1 month ago

I absolutely cannot corroborate this; Opus 4.5 has been nothing but stellar.

mannycalavera421 month ago

Same here. While getting a command line for ffmpeg, instead of giving me the option "soft-knee" it used "soft-膝" (where 膝 is the Chinese for knee). It was easy to spot and figure out, but still... pretty rubbish ¯\_(ツ)_/¯

reactordev1 month ago

I knew in October the game had changed. Thanks for keeping us in the know.

mikasisiki1 month ago

I'm not sure what you mean by “the game has changed.” If you’re referring to Opus 4.5, it’s somewhat better, but it’s far from game-changing.

reactordev1 month ago

You’re looking in from the outside. I’m on the inside. This next generation of models will show. It’s about to get wild.

We now have extremely large context windows, we now have memory, we now have recall, we now can put an agent to the task for 24 hours.

joenot4431 month ago

Thanks Simon - always quick on the draw.

Off your intuition, do you think the same study with Codex 5.2 and Opus 4.5 would see even better results?

simonw1 month ago

Depends on the participants. If they're cutting-edge LLM users then yes, I think so. If they continue to use LLMs like they would have back in the first half of 2025 I'm not sure if a difference would be noticeable.

mkozlows1 month ago

I'm not remotely cutting edge (just switched from Cursor to Codex CLI, have no fancy tooling infrastructure, am not even vaguely considering git worktrees as a means of working), but Opus 4.5 and 5.2 Codex are both so clearly more competent than previous models that I've started just telling them to do high-level things rather than trying to break things down and give them subtasks.

If people are really set in their ways, maybe they won't try anything beyond what old models can do, and won't notice a difference, but who's had time to get set in their ways with this stuff?

nineteen9991 month ago

Also not a cutting-edge user, but I do run my own LLMs at home and have been spending a lot of time with Claude CLI over the last few months.

It's fine if you want Claude to design your APIs without any input, but you'll have less control, and when you dig down into the weeds you'll realise it's created a mess.

I like to take both a top-down and bottom-up approach - design the low-level API with Claude fleshing out how it's supposed to work, then design the high-level functionality, and then tell it to stop implementing when it hits a problem reconciling the two and the lower-level API needs revision.

At least for things I'd like to stand the test of time, if its just a throwaway script or tool I care much less as long as it gets the job done.

drbojingle1 month ago

What's the difference between using LLMs now vs the first half of 2025 among the best users?

runtimepanic1 month ago

The title is doing a lot of work here. What resonated with me is the shift from “writing code” to “steering systems” rather than the hype framing. Senior devs already spend more time constraining, reviewing, and shaping outcomes than typing syntax. AI just makes that explicit. The real skill gap isn’t prompt cleverness, it’s knowing when the agent is confidently wrong and how to fence it in with tests, architecture, and invariants. That part doesn’t scale magically.

asmor1 month ago

Is anyone else getting more mentally exhausted by this? I get more done, but I also miss the relaxing code typing in the middle of the process.

agumonkey1 month ago

I think there are two groups of people emerging: deep / fast / craft-and-decomposition-loving vs black box / outcome-only.

I've seen people unable to work at average speed on small features suddenly reach above-average output through an LLM CLI, and I could sense the pride in them. Which is at odds with my experience of work... I love to dig down, know a lot, model and find abstractions on my own. There an LLM will 1) not understand how my brain works, 2) produce something workable but that requires me to stretch mentally... and most of the time I leave numb. In the last month I've seen many people expressing similar views.

ps: thanks everybody for the answers, interesting to read your pov

remich1 month ago

I get what you're saying, but it does not match my own experience. For me, prior to the agentic coding era, the problem was always that I had way more ideas for features, tools, or projects than I had the capacity to build, once I confronted the work of building everything by hand while also dealing with the inevitable difficulties of procrastination and getting started.

I am a very above-average engineer when it comes to speed at completing work well, whether that's typing speed or comprehension speed, and still these tools have felt like giving me a jetpack for my mind. I can get things done in weeks that would have taken me months before, and that opens up space to consider new areas that I wouldn't have even bothered exploring before because I would not have had the time to execute on them well.

jaapz1 month ago

I think the comprehension part is very important.

When I write my own code without an LLM, it is an extension of my own thinking, my own mental model.

But when I use an LLM, that LLM produces code that I need to comprehend and understand. It's like I'm continually reading some other developer's code, and having to understand their mental model and way of thinking to truly understand the code.

For me, this is very tiring. It just costs more energy for me to review and read other people's code than when I write it myself.

ronsor1 month ago

The sibling comments (from remich and sanufar) match my experience.

1. I do love getting into the details of code, but I don't mind having an LLM handle boilerplate.

2. There isn't a binary between having an LLM generate all the code and writing it all myself.

3. I still do most of the design work because LLMs often make questionable design decisions.

4. Sometimes I simply want a program to solve a problem (outcome-focused) over a project to work on (craft-focused). Sometimes I need a small program in order to focus on the larger project, and being able to delegate that work has made it more enjoyable.

zahlman1 month ago

> I do love getting into the details of code, but I don't mind having an LLM handle boilerplate.

My usual thought is that boilerplate tells me, by existing, where the system is most flawed.

I do like the idea of having a tool that quickly patches the problem while also forcing me to think about its presence.

> There isn't a binary between having an LLM generate all the code and writing it all myself. I still do most of the design work because LLMs often make questionable design decisions.

One workflow that makes sense to me is to have the LLM commit on a branch; fix simple issues instead of trying to make it work (with all the worry of context poisoning); refactor on the same branch; merge; and then repeat for the next feature — starting more or less from scratch except for the agent config (CLAUDE.md etc.). Does that sound about right? Maybe you do something less formal?
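
Concretely, the loop I have in mind, as a rough sketch - everything here is hypothetical (the branch names, the features, and the run_agent stand-in for whatever CLI/harness is in use):

    import subprocess

    def git(*args):
        subprocess.run(["git", *args], check=True)

    def run_agent(task):
        # Stand-in for invoking the coding agent (Claude Code, Codex, ...);
        # assumed to edit files and commit on the current branch as it goes.
        print(f"agent working on: {task}")

    for n, feature in enumerate(["add CSV export", "add search"]):
        branch = f"agent/feature-{n}"
        git("checkout", "-b", branch)     # the agent gets its own branch
        run_agent(feature)                # generate + commit
        run_agent("fix review findings")  # fix simple issues, refactor here too
        git("checkout", "main")
        git("merge", branch)              # merge, then start the next feature
                                          # fresh, carrying over only CLAUDE.md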

> Sometimes I simply want a program to solve a problem (outcome-focused) over a project to work on (craft-focused). Sometimes I need a small program in order to focus on the larger project, and being able to delegate that work has made it more enjoyable.

Yeah, that sounds about right.

sanufar1 month ago

I think for me, the difference really comes down to how much ownership I want to take in the project. If it's something like a custom kernel that I'm building, the real fun is in reading through docs, learning about systems, and trying to craft the perfect abstractions; but if it's wiring up a simple pipeline that sends me a text whenever my bus arrives, I'm happy to let an LLM crank that out for me.

I've realized that a lot of my coding sits on this personal-satisfaction-vs-utility matrix, and LLMs let me focus a lot more energy on high-satisfaction projects.

zahlman1 month ago

> deep / fast / craft-and-decomposition-loving vs black box / outcome-only

As a (self-reported) craft-and-decomposition lover, I wouldn't call the process "fast".

Certainly it's much faster than if I were trying to take the same approach without the same skills; and certainly I could slow it down with over-engineering. (And "deep" absolutely fits.) But the people I've known that I'd characterize as strongly "outcome-only", were certainly capable of sustaining some pretty high delta-LoC per day.

jghn1 month ago

That's kind of the point here. Once a dev reached a certain level, they often weren't doing much "relaxing code typing" anyways before the AI movement. I don't find it to be much different than being a tech lead, architect, or similar role.

remich1 month ago

As a former tech lead and now staff engineer, I definitely agree with this. I read a blog post a couple of months ago that theorized that the people that would adopt these technologies the best were people in the exact roles that you describe. I think because we were already used to having to rely on other people to execute on our plans and ideas because they were simply too big to accomplish by ourselves. Now that we have agents to do these things, it's not really all that different - although it is a different management style working around their limitations.

tikimcfee1 month ago

Ya know, I have to admit feeling something like this. Normally, the amount of stuff I put together in a work day offers a sense of completion or even a bit of a dopamine bump because of a "job well done". With this recent work I've been doing, it's instead felt like I've been spending a multiplier more energy communicating intent instead of doing the work myself; that communication seems to be making me more tired than the work itself. Similar?

whynotminot1 month ago

It feels like we all signed up to be ICs, but now we’re middle managers and our reports are bots.

senshan1 month ago

> and our reports are bots.

With no gossip, rivalry or backstabbing. Super polite and patient, which is very inspiring.

We are also brutally churning them, "laying off" the previous latest model as soon as a new one is available.

perfmode1 month ago

You’re possibly not entering into the flow state anymore.

Flow is effortless, and it is rejuvenating.

I believe:

While communication can be satisfying, it’s not as rejuvenating as resting in our own Being and simply allowing the action to unfold without mental contraction.

Flow states.

When the right level of challenge and capability align and you become intimate with the problem. The boundaries of me and the problem dissolve and creativity springs forth. Emerging satisfied. Nourished.

danielbln1 month ago

Flow state can happen at various levels of abstraction, not just when hand writing code in a gen 3 language.

johnsmith18401 month ago

This is why I think LLMs will make us all a LOT smarter. Writing raw code gave us stretches where we weren't thinking hard in between; now it's just 100% the most intense thought processes all day long.

bugglebeetle1 month ago

Nah, I don't miss at all typing all the tests, CLIs, and APIs I've created hundreds of times before. I dunno if it's because I do ML stuff, but it's almost all "think a lot about something, do some math, and then type thousands of lines of the same stuff around the interesting work."

simonw1 month ago

Yes, absolutely, I can be mentally wiped out by lunch.

epolanski1 month ago

Yes, it's taxing and mentally draining; reading code and connecting the dots is always harder than writing it.

And if you let the AI run too loose, as when you try to vibe code an entirely new program, you end up in a situation where in one day you have a good prototype, and then you can easily spend five times as long sorting out the many issues and refactoring in order to have it scale to the next features.

SJMG1 month ago

I think it's the serial waiting game and the inevitable context switching while you wait.

Long iteration cycles are taxing.

bccdee1 month ago

So far what I've been doing is, I look for the parts that seem like they'd be rewarding to code and I do them myself with no input from the machine whatsoever. It's hard to really understand a codebase without spending time with the code, and when you're using a model, I think there's a risk of things changing more quickly than you can internalize them. Also, I worry I'll get too comfortable bossing chatbots around & I'll become reluctant to get my hands dirty and produce code directly. People talk about ruining their attention spans by spending all their time on TikTok until they can no longer read novels; I think it'd be a real mistake to let that happen to my professional skill set.

mupuff12341 month ago

For me it's the opposite, I'm wasting less energy over debugging silly bugs and fighting/figuring out some annoying config.

But it does feel less fulfilling I suppose.

teaearlgraycold1 month ago

I like to alternate focusing on AI wrangling and writing code the old fashioned way.

AlotOfReading1 month ago

It's difficult to steer complex systems correctly, because no one has a complete picture of the end goal at the outset. That's why waterfall fails. Writing code agentically means you have to go out of your way to think deeply about what you're building, because it won't be forced on you by the act of writing code. If your requirements are complex, they might actually be a hindrance, because you're going to have to learn those lessons from failed iterations instead of avoiding them preemptively.

codeformoney1 month ago

The stereotype that writing code is for junior developers needs to die. Some devs are hired with lofty titles specifically for their programming aptitude and esoteric systems knowledge, not to play implementation telephone with inexperienced devs.

remich1 month ago

I don't think that anyone actually believes that writing code is only for junior developers. That seems to be a significant exaggeration at the very least. However, it is definitely true that most organizations of a certain size hiring people into technical lead, staff engineer, or principal engineer roles are hiring them not only for their individual expertise, or their ability to apply that expertise themselves, but also for their ability to use that expertise as a force multiplier that makes other, less experienced people better at the craft.

codeformonkey1 month ago

In my world there are Hard Problems that need to be solved for bu$ine$$ rea$on$, no "force multiplier" required (whatever that really means).

inkyoto1 month ago

> I don't think that anyone actually believes that writing code is only for junior developers.

That is, unquestionably, how it ought to be. However, the mainstream – regrettably – has devolved into a well-worn and intellectually stagnant trajectory, wherein senior developers are not merely encouraged but expected to abandon coding altogether, ascending instead into roles such as engineering managers (no offence – good engineering managers are important; it is the quality that has been diluted across the board), platform overseers (a new term for stage gatekeepers), or so-called solution architects (the ones who are imbued with compliance and governance and do not venture out past that).

In this model, none of these roles is expected – and in some lamentable cases, they are explicitly forbidden[0] – to engage directly with code. The result is a sterile detachment from the very systems they are charged with overseeing.

Worse still, the industry actively incentivises ill-considered career leaps – for instance, elevating a developer with limited engineering depth into the position of a solution designer or architect. The outcome is as predictable as it is corrosive: individuals who can neither design nor architect.

The number of organisations in which expert-level coding proficiency remains the norm at senior or very senior levels has dwindled substantially over the past couple of decades or so – job ads explicitly call out management experience and knowledge of vacuous architectural frameworks of limited usefulness (TOGAF and the like). There do remain rare islands in an ever-expanding ocean of managerial abstraction where architects who write code – not incessantly, but when need be – are still recognised as invaluable. Yet their presence is scarce.

The lamentable state of affairs has led to a piquant situation on the job market. In recent years, headhunters have started complaining about being unable to find an actually highly proficient, experienced, and, most importantly, technical architect. One's loss is another one's gain, or at least an opportunity, of course.

[0] Speaking from firsthand experience of watching a solution architect quit their job to run a bakery (yes), after the head of architecture they were reporting to explicitly demanded the architect quit coding. The architect did quit, albeit in a different way.

Madmallard1 month ago

"it’s knowing when the agent is confidently wrong and how to fence it in with tests, architecture, and invariants."

Strongly suspect this is simply less efficient than doing it yourself if you have enough expertise.

llmslave21 month ago

Does using an LLM to craft Hackernews comments count as "steering systems"?

coip1 month ago

You're totally right! It's not steering systems -- it's cooking, apparently

lesuorac1 month ago

> Most Recent Task for Survey | Number of Survey Respondents

> Building apps | 53

> Testing | 1

I think this sums up everybody's complaints about AI-generated code. Don't ask me to be the one to review work you didn't even check.

rco87861 month ago

Yea. Nobody wants to be a full-time code reviewer.

jaggederest1 month ago

Hi it's me, the guy who wants to be a full-time code reviewer.

sarchertech1 month ago

If you really did that full time and never wrote code, you’d be a terrible reviewer.

nemo1 month ago

Be careful what you wish for.

throw-12-161 month ago

I fired someone over this a few months ago.

danavar1 month ago

So much of my professional SWE job isn't even programming - I feel like this is a detail missed by so many. People generally just stereotype the SWE as a programmer, but being an engineer (in any discipline) is so much more than that. You solve problems. AI will speed up the programming work-streams, but there is so much more to our jobs than that.

whstl1 month ago

Agreed.

Most of the work brought to me gets done before I even think about sitting down to type.

And it's interesting to see the divide here between "pure coder" and "coder + more". A lot of people seem to be in the job to just do what the PM, designer and business people ask. A lot of the work is pushing back against some of those requests. In conversations here on HN about "essential complexity" I even see commenters arguing that the spec brought to you is entirely essential. It's not.

ciaranmca1 month ago

^This 100%. Junior SWE here. Agentic coding has kinda felt like a promotion for me. I code less by hand and spend more time on the actual engineering side of things. There's hype in both directions though. I don't think AI is replacing me anytime soon (fingers crossed), but it's already way more useful than the skeptics give it credit for. Like most things, the truth's somewhere in the middle.

danielbln1 month ago

There is also so much more you can automate and use AI agents for than "programming". It's the world's best rubber duck, for one. It also can dig through code bases and compile information on data flows, data models and so on. Hell, it can automate effectively any task you do on the terminal.

AYBABTME1 month ago

It feels like we're doing another lift to a higher level of abstraction. Whereas we had "automatic programming" and "high level programming languages" free us from assembly, where higher level abstractions could be represented without the author having to know or care about the assembly (and it took decades for the switch to happen), we now once again get pulled up another layer.

We're in the midst of another abstraction level becoming the working layer - and that's not a small layer jump but a jump to a completely different plane. And I think once again, we'll benefit from getting tools that help us specify the high level concepts we intend, and ways to enforce that the generated code is correct - not necessarily fast or efficient but at least correct - same as compilers do. And this lift is happening on a much more accelerated timeline.

The problem of ensuring correctness of the generated code across all the layers we're now skipping is going to be the crux of how we manage to leverage LLM/agentic coding.

Maybe Cursor is TurboPascal.

websiteapi1 month ago

we've never seen a profession drive themselves so aggressively to irrelevance. software engineering will always exist, but it's amazing the pace at which pressure against the profession is rising. 2026 will be a very happy new year indeed for those paying the salaries. :)

simonw1 month ago

We've been giving our work away to each other for free as open source to help improve each other's productivity for 30+ years now and that's only made our profession more valuable.

websiteapi1 month ago

I see little proof that open source has resulted in higher wages, as opposed to the fact that everything is being digitized and the subsequent demand for people to assist with that.

simonw1 month ago

I'm not sure how I can prove it, but ~25 years ago building software without open source sucked. You had to build everything from scratch! It took months to get even the most basic things up and running.

I think open source is the single most important productivity boost to our industry that's ever existed. Automated testing is a close second.

Google, Facebook, many others would not have existed without open source to build on.

And those giants and others like them that were enabled by open source employed a TON of people, at competitive rates that greatly increased our salaries.

aussieguy12341 month ago

This makes sense. Imagine PHP or NodeJS without a framework, or front end development without React. Your projects would take much longer to build. The time saved with the open source frameworks and libraries is more than what an AI agent can save you.

cheema331 month ago

> we've never seen a profession drive themselves so aggressively to irrelevance.

Should we be trying to put the genie back in the bottle? If not, what exactly are you suggesting?

Even if we all agreed to stop using AI tools today, what about the rest of world? Will everybody agree to stop using it? Do you think that is even a remote possibility?

dinkumthinkum1 month ago

Does the rest of the world want to make money in a way not involving digging ditches? I feel like people from developing countries who spend 18 hours a day studying, giving their entire childhood to some standardized test, may not want to be rewarded with no job prospects. Maybe that's a crazy position.

mkoubaa1 month ago

Don't care have too much to do must automate away my today responsibilities so I can do more tomorrow trvst the plqn

throw-12-161 month ago

Software Engineers will still exist.

Software Devs not so much.

There is a huge difference between the two and they are not interchangeable.

wiseowise1 month ago

Good luck convincing new overlords.

Your take is this meme https://knowyourmeme.com/memes/dig-the-fucking-hole.

throw-12-161 month ago

sorry i don't speak meme

zwnow1 month ago

Also, it really baffles me how many are actually in on the hype train. It's a lot more than the crypto bros back in the day. Good thing AI still can't reason and innovate. Also, leaking credentials is a felony in my country, so I won't ever attach it to my codebases.

aspenmartin1 month ago

I think the issue is that folks talk past each other. People who find coding agents useful or enjoyable are labeled "on the hype train", and folks for whom coding agents don't work, or don't fit their workflow, are considered luddites. There is an incredible number of contradicting claims and predictions out there as well, and I believe what we see is folks projecting their reaction to some amalgamation of them onto others. I see a lot of "they" language and a lot of viral articles about business leadership "shoving AI down our throats", and it becomes a divisive issue, like the American political scene, with really no one having a real conversation.

zwnow1 month ago

It's all a hype train though. People still believe the AI-is-gonna-bring-utopia bullshit while the current infra is being built on debt. The only reason it still exists is that all these AI companies believe in some kind of revenue outside of subscriptions. So it's all about:

Owning the infrastructure and enshittifying (ads) once enough products are based on AI.

It's the same chokehold Amazon has on its vendors.

mhitza1 month ago

Hard to have a conversation when often the critics of LLM output receive replies like "What, you used last week's model?! No, no, no, this one is a generational leap"

Too many people are invested into AI's success to have a balanced conversation. Things will return to normal after a market shakedown of a few larger AI companies.

aspenmartin1 month ago

On HN I think you overestimate the number of optimists that are optimists because they have some vested interest. Everyone everywhere arguably has a vested interest. I would also argue that all of the folks on HN who are hostile and dismissive of coding agents also have a vested interest (just for the sake of contrasting your argument). If coding agents were really crappy I wouldn't be using them, just like I didn't use them until the end of 2025.

What conversation is hard to have? If you mean trying to convince people coding agents can or cannot do a specific thing then that may never go away. If you take an overall theme or capability, in some cases it will “just work” and in other cases it needs some serious steering or scaffolding, and in other cases it will just waste as much time as you will let it. It’s an imperfect tool and it may always be, and two people insisting it can do something and it cannot do that same thing may both be right.

What is troubling to me is the attitude of folks that are heavily hostile towards these models and the people that use them. People routinely conflate market promises and actual delivered tools and capabilities and lump people who enjoy and get lots of mileage out of these tools into what appears to be a big strawman camp of fawning fans who don’t understand or appreciate Real Software Engineering; people who would write bad code anyway and not know. It’s quite insulting but also wrong. Not saying you are part of this camp! But as one lonely optimist in a sea of negativity that’s certainly the perspective I’ve developed from the “conversations” I’ve seen on HN

llmslave21 month ago

I think the reason for the varying claims and predictions is because developers have wildly different standards for what constitutes working code. For the developers with a lower threshold, AI is like crack to them because gen ai's output is similar to what they would produce, and it really is a 10x speedup. For others, especially those who have to fix and maintain that code, it's more like a 10x slowdown.

Hence why you have in the same thread, some developer who claims that Claude writes 99% of their code and another developer who finds it totally useless. And of course others who are somewhere in the middle.

throw12354351 month ago

There's also the effect of different models. Until the most recent models, especially for concise algorithms, I felt it was still easier to sometimes do it myself (i.e. a good algo can be concise/more concise than a lossy prompt) and leave the "expansion/repetitive" boilerplate code to the LLM. At least for me the latest models do feel like a "step change" in that the problems can be bigger and/or require less supervision on each problem depending on the tradeoff you want.

fragmede1 month ago

your credentials shouldn't be in your codebase to begin with!

zwnow1 month ago

.env files are a thing in tons of codebases

mkozlows1 month ago

If your secrets are in your repo, you've probably already leaked them.
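
The usual mitigation, as a minimal sketch (the variable name is hypothetical): the credential lives in the shell or an untracked .env file and is read at runtime, so it never lands in the repo - or in an agent's context window.

    import os

    # "PAYMENTS_API_KEY" is a hypothetical name; the value is set in the
    # environment (or loaded from an untracked .env file), never committed.
    API_KEY = os.environ.get("PAYMENTS_API_KEY")
    if API_KEY is None:
        raise RuntimeError("PAYMENTS_API_KEY is not set in the environment")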

banbangtuth1 month ago

You know what? After seeing all these articles about AI/LLMs for these past 4 years, about how they are going to replace me as a software developer and about how I am not productive enough without using 5 agents and being a project manager:

I. Don't. Care.

I don't even care about those debates outside. Debates about whether LLMs work and will replace programmers? Say they do; OK, so what?

I simply have too much fun programming. I am just a mere fullstack business-line programmer, a generic random replaceable dude; you can find my kind a dime a dozen.

I do use LLMs as a Stack Overflow/docs replacement, but I always write all my code by hand.

If you want to replace me, replace me. I'll go to companies that need me. If there are no companies that need my skill, fine, then I'll just do this as a hobby, and probably flip burgers outside to make a living.

I don't care about your LLM, I don't care about your agent, I probably don't even care about the job prospects for that matter if I have to be forced to use tools that I don't like and to use workflows I don't like. You can go ahead find others who are willing to do it for you.

As for me, I simply have too much fun programming. Now if you excuse me, I need to go have fun.

lifetimerubyist1 month ago

Hear hear. I didn't spend half my life getting an education, competing in the corporate crab bucket, retraining and upskilling just to turn into a robot babysitter.

danielbln1 month ago

Then continue to write code as a hobby; no one is going to take that away from you. But if you want someone to pay you for hand-writing code the way you always have, then... well, you might find that harder and harder as time goes on.

yacthing1 month ago

Easy to say if you either:

(1) already have enough money to survive without working, or

(2) don't realize how hard of a life it would be to "flip burgers" to make a living in 2026.

We live very good lives as software developers. Don't be a fool and think you could just "flip burgers" and be fine.

banbangtuth1 month ago

Ah, I actually did flip burgers. So I know.

I also did dry cleaning, cleaning service, deli, delivery guy, etc.

Yup I now have enough money to survive without working.

But I also am very low maintenance, thanks to my early life being raised in harsh conditions.

I am not scared to go back flipping burgers again.

falkensmaize1 month ago

"I am not scared to go back flipping burgers again."

You should be - in all likelihood you'd have to work 3 burger-flipping jobs to make enough money to pay rent and buy food. Inflation and housing issues have hit a lot harder than most people who make 6-figure incomes realize. It's really tough out there right now. I am very, very grateful for the income I have and don't take it for granted.

Madmallard1 month ago

"Yup I now have enough money to survive without working" Your opinion is borderline irrelevant then.

hecanjog1 month ago

I appreciate this perspective. I'm actually hoping LLM hype will help to pop the bubble of tech salaries and make the profession roughly as profitable as going into teaching, so maybe the gold diggers will clear out and go play the stock market or something, and the rest of us can stick around and build things. Maybe software quality will even improve as a result? Would be nice...

falkensmaize1 month ago

Man, come on - what planet are you from, seriously? I got into this business because I enjoy programming, but I also wanted to for once in my life make a decent living and be able to save something. I have kids I'd like to send to college. I'd like to be able to retire someday. I have aging parents that need expensive care. This is one of the few professions that you can upskill into without years of expensive degrees.

People need to make money to survive, now more than ever. It seems incredibly selfish to wish for that to disappear just so you can "purify" the profession.

hecanjog1 month ago

> People need to make money to survive

I very much agree. A million dollar tech salary isn't that though.

dinkumthinkum1 month ago

I hear you, but I feel like you (and really others like you, en masse) should not be so passive about your replacement. For most programmers, simply flipping burgers for money and enjoying programming a few hours a week is not going to work. Making a living is a thing. If you are reduced to having to flip burgers, that means the economy will have collapsed and there won't be any magic Elon UBI money to save us.

banbangtuth1 month ago

We will have bigger problems when that happens. I am not worried.

dinkumthinkum1 month ago

I mean, the point is to not be passive and push against those bigger problems happening but ok.

llmslave21 month ago

I simply will not spend my life begging and coaxing a machine to output working code. If that is what becomes of this profession, I will just do something else :)

ryanobjc1 month ago

If I wanted to do that, I'd just move into engineering management and work with something less temperamental and more predictable - humans.

I'd at least be more likely to get a boost in impact and ability to affect decision making, maybe.

lifetimerubyist1 month ago

Until you realize you're just begging and coaxing a human to better beg and coax a machine to output working code - when you could just beg and coax the machine yourself.

aspenmartin1 month ago

That would definitely be the state of the profession if we stopped developing these tools today. Think about the idea of coding agents 2 years ago: I personally found them very unrealistic, and I am now coding exclusively with them, despite them being either neutral or a net negative to my development time, simply because I see the writing on the wall that in 6 months to a year they will probably be a huge net positive, and in 2-3 years the dismissive attitude towards adoption will start to look kind of silly (no offense). To me we are _just_ at the inflection point where using and not using coding agents are both totally sensible decisions.

agentifysh1 month ago

having fun isn't tied to employment unless you are self-employed, and even then what's fun should not be the driving force

lifetimerubyist1 month ago

"get a job doing something you enjoy and you'll never work a day in your life"

or something like that

banbangtuth1 month ago

Why? It is a matter of values. Fun can be a driving force just like money and stability are. It is simply a matter of your values (and your sacrifices).

Like I said, I am just a generic, replaceable, dime-a-dozen programmer dude.

agentifysh1 month ago

you don't get paid to have fun but to produce as a laborer

a job isn't supposed to be fun; it's nice when it is, but it shouldn't be what drives decisions

throw-12-161 month ago

i think you angered the hustle bros

agentifysh1 month ago

Were these bots? So strange, all green nicks.

throw-12-161 month ago

nah just privileged dudes who think the whole world gets to pick and choose how to earn a living

llmslave21 month ago

That sounds miserable to me :(

agentifysh1 month ago

you work on somebody's dime, it's no longer your choice

llmslave21 month ago

It's my life, it's my choice.

geldedus1 month ago

"AI-assisted programming" being mistaken for "vibe coding" is getting old and annoying.

zwnow1 month ago

Idk, I still mostly avoid using it, and if I do, I just copy and paste shit into the Claude web version. I won't ever manage agents, as that sounds just as complicated as coding shit myself.

lexandstuff1 month ago

It's not complicated at all. You don't "manage agents". You just type your prompt into a terminal application that can update files, read your docs and run your tests.

As with every new tech there's a hell of a lot of noise (plugins, skills, hooks, MCP, LSP - to quote Karpathy) but most of it can just be disregarded. No one is "behind" - it's all very easy to use.

danielbln1 month ago

Easy to use, hard to master. Or: low skill floor, high skill ceiling. My output wouldn't be nearly as good without subagents and skills, and MCPs are somewhat required if you deploy tool-using agents at scale.

It's like saying all you need is notepad to develop. It's not wrong, but... you know.

micik1 month ago

It's not hard to master. It's not a skill to be learned - it's a tool that comes with a manual. You read the manual and now you can use the tool. Most people will never read the manual, which is what gives the false impression that there's something "to master" here. It's like saying vim is harder to use than notepad. Not if you read the entire manual first.

danielbln1 month ago

I'm not sure how you define skill acquisition; it's reading documentation and then doing the skill, yes? The AI landscape still shifts rather quickly, and a new LLM + harness has a different set of functionality, but more importantly different fuzzy failure cases: things a model is particularly good at, things that work better if you combine certain systems. All of it is documented, but it's also fast moving, and new things are discovered frequently. In comparison, Vim has been around for decades.

And vim is absolutely harder to use than notepad. Otherwise it's like saying that rocket science isn't hard because you just have to read the documentation to know how to engineer a rocket.

andy991 month ago

Is the title an ironic play on AI’s trademark writing style, is it AI generated, or is the style just rubbing off on people?

mattnewton1 month ago

I think it’s a popular style before gen ai and the training process of LLMs picked up on that.

andy991 month ago

That's not how LLMs work; it's part of the reinforcement learning or SFT dataset. Data labelers would have written or generated tons of examples using this and other patterns (all the emoji READMEs, for example) that the models emulate. The early ones had very formulaic essay-style outputs that always ended with "in conclusion", lots of the same kind of bullet lists, and a love of adjectives and delving, all of which were intentionally trained in. It's more subtle now but it's still there.

mattnewton1 month ago

Maybe I was being imprecise, but I’m not sure what you mean by “not how LLMs work” - discovering patterns of how humans write is exactly the signal they are trained against. Either explicitly curated like SFT or coaxed out during RLHF, no?

It could even have been picked up in pretraining and then rewarded during RLHF when the output domain was being refined; I haven't used enough LLMs before post-training to know at what step it usually becomes noticeable.

learningstud1 month ago

If developers are not using TLA+ or Lean4 etc., they are vibe coding. Nothing wrong with that. They just have to realize that they were never in control. Thinking logically is much harder than developers imagined. As Dijkstra observed, the whole field has adopted the mantra "how to program when you cannot." I estimate that 80% of what developers do could be done once and for all, for all of humanity, yet we don't learn. Be offended all you want, but I am fed up with this idiocy, given all the usual rebuttals of deadlines etc.

https://news.ycombinator.com/item?id=43679634

senshan1 month ago

Excellent survey, but one has to be careful when participating in such surveys:

"I’m on disability, but agents let me code again and be more productive than ever (in a 25+ year career). - S22"

Once the Social Security Administration learns this, there goes the disability benefit...

LoganDark1 month ago

I think you eventually lose disability benefits anyway once you start making money.

ramoz1 month ago

> Takeaway 3c: Experienced developers disagree about using agents for software planning and design. Some avoided agents out of concern over the importance of design, while others embraced back-and-forth design with an AI.

I'm in the back-and-forth camp. I expect a lot of interesting UX to develop here. I built https://github.com/backnotprop/plannotator over the weekend to give me a better way to review & collaborate around plans - all while natively integrated into the coding agent harness.

andrewstuart1 month ago

Don’t let anyone tell you the right way to program a computer.

Do it in the way that makes you feel happy, or conforms to organizational standards.

mkoubaa1 month ago

The right way to program a computer:

Well

andrewstuart1 month ago

No.

There are many contexts in which programming a computer well is not important.

amkharg261 month ago

The title is provocative but there's truth to it. The distinction between "vibing" with AI tools and actually controlling the output is crucial for production code.

I've seen this with code generation tools - developers who treat AI suggestions as magic often struggle when the output doesn't work or introduces subtle bugs. The professionals who succeed are those who understand what the AI is doing, validate the output rigorously, and maintain clear mental models of their system.

This becomes especially important for code quality and technical debt. If you're just accepting AI-generated code without understanding architectural implications, you're building a maintenance nightmare. Control means being able to reason about tradeoffs, not just getting something that "works" in the moment.

senshan1 month ago

I often tell people that agentic programming tools are the best thing since cscope. The last 6 months I have not used cscope even once after decades of using it nearly daily.

[0] https://en.wikipedia.org/wiki/Cscope

utopiah1 month ago

Well, looks like that's how I'm spending my day https://cscope.sourceforge.net/cscope_vim_tutorial.html

Out of curiosity, if I wanted to setup cscope for a bunch of small projects, say dozens of prototypes in their own directory, would it be useful? Too broad?
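
For concreteness, the per-project setup I'd be scripting - a sketch in Python, assuming C sources and the stock cscope CLI (the ~/prototypes layout is made up):

    # build_cscope.py - one cscope database per prototype directory (sketch).
    import os
    import subprocess

    ROOT = os.path.expanduser("~/prototypes")  # hypothetical parent directory

    for project in sorted(os.listdir(ROOT)):
        path = os.path.join(ROOT, project)
        if not os.path.isdir(path):
            continue
        # List every C source/header into cscope.files, which cscope reads.
        sources = []
        for dirpath, _, filenames in os.walk(path):
            sources += [os.path.join(dirpath, f) for f in filenames
                        if f.endswith((".c", ".h"))]
        if not sources:
            continue
        with open(os.path.join(path, "cscope.files"), "w") as out:
            out.write("\n".join(sources) + "\n")
        # -b: build the database only, -q: add a fast inverted index,
        # -k: kernel mode, i.e. don't index /usr/include.
        subprocess.run(["cscope", "-b", "-q", "-k"], cwd=path, check=True)

Per-project databases keep queries scoped to one prototype; a single combined database at the parent directory would answer cross-project queries but mixes unrelated symbols.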

softwaredoug1 month ago

The new layer of abstraction is tests, mostly end-to-end and integration tests. They describe the important constraints to the agents - essentially long-lived context.

So essentially what this means is declarative programming of overall system behavior.
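
As a toy illustration of the idea (not from the paper; create_invoice/total_due stand for real application code, with minimal reference versions inlined so the sketch runs): the test pins the behavior down once, and any agent-written implementation has to keep satisfying it.

    # test_billing.py - a long-lived behavioral constraint for the agent.
    import pytest

    def create_invoice(items, discount, tax_rate):
        # Hypothetical application code, inlined here for the sketch.
        return {"items": items, "discount": discount, "tax_rate": tax_rate}

    def total_due(invoice):
        subtotal = sum(i["price"] * i["qty"] for i in invoice["items"])
        return subtotal * (1 - invoice["discount"]) * (1 + invoice["tax_rate"])

    def test_total_due_applies_discount_then_tax():
        invoice = create_invoice(
            items=[{"price": 100.00, "qty": 2}],
            discount=0.05,  # 5% off
            tax_rate=0.10,  # 10% tax
        )
        # 200 * (1 - 0.05) * (1 + 0.10) = 209.00: the invariant the agent may
        # reimplement however it likes, but must never break.
        assert total_due(invoice) == pytest.approx(209.00)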

zkmon1 month ago

I haven't seen the definition of an agent in the paper. Do they differentiate agents from generic online chat interfaces?

senshan1 month ago

Page 2: We define agentic tools or agents as AI tools integrated into an IDE or a terminal that can manipulate the code directly (i.e., excluding web-based chat interfaces)

esafak1 month ago

An agent takes actions. Chat bots only return text.

zkmon1 month ago

"Takes actions" is automation, and that is hardly new. Code has been taking actions for decades. Interpreting and generating text belongs to chat bots. What's new with agents?

esafak1 month ago

Your code only takes actions prescribed by you. The agent does not; it picks the tool. I thought this was too obvious to point out.
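
A minimal sketch of that distinction (all names are made up; fake_model stands in for a real LLM call): the harness executes whichever tool the model names, so the author never prescribes the action.

    import os
    import subprocess

    # The tool set the harness exposes to the model.
    TOOLS = {
        "list_files": lambda arg: os.listdir(arg or "."),
        "run_tests": lambda arg: subprocess.run(
            ["pytest", "-q"], capture_output=True, text=True).stdout,
    }

    def fake_model(transcript, tool_names):
        # Stand-in for a real LLM call; returns a canned decision so the
        # sketch runs end to end.
        return {"tool": "list_files", "argument": "."}

    def agent_step(transcript):
        decision = fake_model(transcript, list(TOOLS))  # the model picks the tool
        return TOOLS[decision["tool"]](decision["argument"])

    print(agent_step(["user: what is in this directory?"]))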

000ooo0001 month ago

Have to wonder about the motivations of research when the intro leads with such a quote.

4b11b41 month ago

I like to think of it as "maintaining fertile soil"

throw-12-161 month ago

Getting big "I'll keep making saddles in the era of automobiles" vibes from these comments.

danielbln1 month ago

Yeah, it feels like many SWEs have painted themselves into a corner. They love the nose-to-code-grindstone process and chain themselves to the abstraction layer of today. I don't think it's gonna end well for them; let's see.

Snuggly731 month ago

This type of comment implies that it’s going to stop with “them” and somehow “us that adopted the LLM” will be the winners. The goal is full automation, there is no “adapt or be left behind”.

danielbln1 month ago

I don't think in terms of winners or losers, automation will come for all of us. Some of us will be caught by it later than others.

game_the0ry1 month ago

> Through field observations (N=13) and qualitative surveys (N=99)...

Not a statistically significant sample size.

flurie1 month ago

This is a qualitative methods paper, so statistical significance is not relevant. The rough qualitative equivalent would instead be "data saturation" (responses generally look like ones you've received already) and "thematic saturation" (you've likely found all the themes you will find through this method of data collection). There's an intuitive quality to determining the number of responses needed based on the topic and research questions, but this looks to me like they have achieved sufficient thematic saturation based on the results.

game_the0ry1 month ago

So, I upvoted your comment bc I genuinely believe there is something in your comments worth learning from, but...

> This is a qualitative methods paper, so statistical significance is not relevant.

I have never heard of a "qualitative methods paper" and it sounds like something a researcher would do to push a narrative with "qualitative data" rather than data that could be measured.

Tell me why I am wrong.

flurie1 month ago

You're not necessarily wrong, but the phrase "push a narrative," the scare quotes around "qualitative data," and your initial comment suggest to me that you are not familiar with qualitative research but have a bias or mistrust against it (no judgment, just stating my observation). If you would like to know more about it, this[1] provides a reasonable overview, and if you would like to know much more, I can ask my spouse, who is a qualitative methodologist in medicine at an R1[2], for her recommendations. I can also tell you what I think of this specific paper, but I did not want it to color my initial comment.

[1] https://en.wikipedia.org/wiki/Qualitative_research

[2] https://en.wikipedia.org/wiki/List_of_research_universities_...

game_the0ry1 month ago

> your initial comment suggest to me that you are not familiar with qualitative research but have a bias or mistrust against it

I can confirm that, yes, I do have an arguably paranoid bias and/or mistrust against information that is not quantifiable in nature nor is simple enough for me (an idiot) to understand easily.

Appreciate the thoughtful response. Don't ask the spouse, just enjoy the new year. I'll figure it out.

bee_rider1 month ago

97 samples is enough to get a 95% confidence level if you accept a 10% margin of error. 99 is not so bad, at least.

https://www.surveymonkey.com/mp/sample-size-calculator/
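
For reference, that is the standard worst-case (p = 0.5) normal-approximation formula for a large population:

    n = \frac{z^2 \, p(1-p)}{e^2} = \frac{1.96^2 \cdot 0.5 \cdot 0.5}{0.10^2} \approx 96.04

which rounds up to 97.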

HPsquared1 month ago

Significance depends on effect size.

energy1231 month ago

How many independent witnesses would you need to convict someone of murder?

superjose1 month ago

Same thoughts exactly.

SunlitCat1 month ago

Funny how the title alone evokes the old “real programmers” trope https://xkcd.com/378/