
Your brain on ChatGPT: Accumulation of cognitive debt when using an AI assistant

710 points | 16 days ago | media.mit.edu
mcv16 days ago

This seems to confirm my feeling when using AI too much. It's easy to get started, but I can feel my brain engaging less with the problem than I'm used to. It can form a barrier to real understanding, and keeps me out of my flow.

I recently worked on something very complex I don't think I would have been able to tackle as quickly without AI; a hierarchical graph layout algorithm based on the Sugiyama framework, using Brandes-Köpf for node positioning. I had no prior experience with it (and I went in clearly underestimating how complex it was), and AI was a tremendous help in getting a basic understanding of the algorithm, its many steps and sub-algorithms, the subtle interactions and unspoken assumptions in it. But letting it write the actual code was a mistake. That's what kept me from understanding the intricacies, from truly engaging with the problem, which led me to keep relying on the AI to fix issues. But at that point the AI clearly had no real idea what it was doing either, and just made things worse.

So instead of letting the AI see the real code, I switched from the Copilot IDE plugin to the standalone Copilot 365 app, where it could explain the principles behind every step, and I would debug and fix the code and develop actual understanding of what was going on. And I finally got back into that coding flow again.

So don't let the AI take over your actual job, but use it as an interactive encyclopedia. That works much better for this kind of complex problem.
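For anyone wondering what "many steps and sub-algorithms" means here: the Sugiyama framework is essentially a pipeline, and every phase leans on assumptions about what the previous one produced. A rough sketch of its shape (illustrative only, not our code; the names and signatures are just my shorthand):

```typescript
// Illustrative sketch of the Sugiyama pipeline phases (not our actual code).
type NodeId = string;
interface Edge { from: NodeId; to: NodeId; }
interface Graph { nodes: NodeId[]; edges: Edge[]; }

// Phase 1: break cycles so the rest of the pipeline can assume a DAG,
// remembering which edges were reversed so they can be restored at the end.
type RemoveCycles = (g: Graph) => { acyclic: Graph; reversed: Edge[] };

// Phase 2: assign every node to a layer (e.g. longest-path or network simplex).
type AssignLayers = (g: Graph) => Map<NodeId, number>;

// Phase 3: order nodes within each layer to minimize edge crossings
// (e.g. barycenter/median sweeps), after inserting dummy nodes for long edges.
type OrderLayers = (g: Graph, layers: Map<NodeId, number>) => NodeId[][];

// Phase 4: assign coordinates; Brandes-Köpf is one algorithm for this,
// aligning nodes into vertical blocks and then compacting them.
type AssignCoordinates = (order: NodeId[][], g: Graph) => Map<NodeId, { x: number; y: number }>;
```

Each of those phases is a research topic on its own, which is exactly why having the AI explain them was so valuable and having it write them was not.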

vidarh16 days ago

My "actual job" isn't to write code, but to solve problems.

Writing code has just typically been how I've needed to solve those problems.

That has increasingly shifted to "just" reviewing code and focusing on the architecture and domain models.

I get to spend more time on my actual job.

coldtea16 days ago

>My "actual job" isn't to write code, but to solve problems

Solve enough problems relying on AI writing the code as a black box, and over time your grasp of coding will worsen, and you won't understand what the AI should be doing or what it is doing wrong - not even at the architectural level, except in broad strokes.

One ends up like the clueless manager type who hasn't touched a computer in 30 years. At which point there will be little reason for the actual job owners to retain their services.

Computer programming as a whole ends up relying on the canned experience of the AI data set, producing more AI churn as a ratio of the available training code over time, and plateauing both itself and the AI, with the dubious prospect of reaching the Singularity as its only hope out of this.

alchemism16 days ago

Yet most organizations in existence pay the people “who hasn’t touched a computer in 30 years” quite a large amount of money to continue to solve problems, for some inscrutable reason… =)

listenallyall16 days ago

To be fair, if you manage to stay employed for 30 years before people determine your skills are sub-standard, that's a lot better than most.

But I generally agree with your point.

drusepth15 days ago

> Solve enough problems relying on AI writing the code as a black box, and over time your grasp of coding will worsen, and you won't understand what the AI should be doing or what it is doing wrong - not even at the architectural level, except in broad strokes.

Using AI myself _and_ managing teams almost exclusively using AI has made this point clear: you shouldn't rely on it as a black box. You can rely on it to write the code, but (for now at least) you should still be deeply involved in the "problem solving" (that is, deciding _how_ to fix the problem).

A good rule of thumb that has worked well for me is to spend at least 20 min refining agent plans for every ~5 min of actual agent dev time. YMMV based on plan scope (obviously this doesn't apply to small fixes, and applies even more so to larger scopes).

inglor_cz15 days ago

It is not that clear to me.

Let us use an analogy. Many (most?) people can tell a well-written book or story from a mediocre or a terrible one, even though the vast majority of readers have never written one themselves.

To distinguish good from bad doesn't necessarily require the ability to create.

coldtea15 days ago

This analogy serves my argument: in it, just as "most people" are mere readers (not only are they not writers, they're also nowhere near the level of a competent book editor or critic), the programmer becomes a mere user of the end program.

Not only would this be a bad way of running a publishing business when it comes to writing and editing (working at the level of understanding of "most people"), but even in the best case of it being workable, the publisher (or software company) can just fire the specialist and get some average readers/users to give a thumbs up or down to whatever it churns out.

iso163115 days ago

I have very little knowledge of how transistors shuffle ones and zeros out of registers. That doesn't prevent me from using them to solve a problem.

Computing is always abstractions. We moved from plugboards to assembly, then to C, then we had languages that managed memory for you -- how on earth can you understand what the compiler should be doing, or what it is doing, if you don't deal with explicit pointers on a day-to-day basis?

We bring in libraries when we need code. We don't write our own database, we use something else: we just do "apt-get install mysql", then we moved on to "docker run", or perhaps we invoke it with the aws CLI. Who knows what Terraform actually does when we declare we want a resource.

I was thinking the other day how abstractions like AWS or Docker are similar to LLMs. With AWS you just click a couple of buttons and you have a data store; you don't know how to build a database from scratch, and you don't need to. Of course, "to build a database from scratch you must first create the universe".

Some people still hand-craft assembly code to great benefit, but the vast majority don't need to in order to solve problems, and they can't.

This musing was in the context of what we do if/when AWS data centres are not available. Our staff are generally incapable of working in a non-AWS environment - something that we have deliberately cultivated for years. AWS Outposts are one option, or perhaps we should run a non-AWS stack that we fully own and control.

Is relying on LLMs fundamentally any different from relying on AWS, or apt, or Java? Is it different from outsourcing? You concentrate on your core competency, which is understanding the problem and delivering a solution, not managing memory or running databases. This comes with risk -- all outsourcing does -- and if outsourcing to a single supplier you don't and can't understand is an acceptable risk, then is relying on LLMs not?

calf15 days ago

It is fundamentally different, because with an API some other person understands it, but with an LLM neither you nor the LLM understands it.

mythical_3916 days ago

wait, did you see the part where the person you are replying to said that writing the code themself was essential to correctly solving the problem?

Because they didn't understand the architecture or the domain models otherwise.

Perhaps in your case you do have strong hands-on experience with the domain models, which may indeed have shifted your job requirements to supervising those implementing the actual models.

I do wonder, however, how much of your actual job also entails ensuring that whoever is doing the implementation is also growing in their understanding of the domain models. Are you developing the people under you? Is that part of your job?

If it is an AI that is reporting to you, how are you doing this? Are you writing "skills" files? How are you verifying that it is following them? How are you verifying that it understands them the same way that you intended it to?

Funny story -- I asked an LLM to review a call transcript to see if the caller was an existing customer. The LLM said True. It was only when I looked closer that I saw that the LLM meant "True -- the caller is an existing customer of one of our competitors". Not at all what I meant.

vidarh16 days ago

I saw that part and I disagreed with the very notion, hence why I wrote what I did.

> Because they didn't understand the architecture or the domain models otherwise.

My point is that requiring or expecting an in-depth understanding of all the algorithms you rely on is not a productive use of developer time, because outside narrow niches it is not what we're being paid for.

It is also not something the vast majority of us do now, or have done for several decades. I started with assembler, but most developers have never worked less than a couple of abstraction levels up, often more, and have leaned heavily on heaps of code they do not understand, because it is not necessary.

Sometimes it is. But for the vast majority of us pretending it is necessary all the time or even much of the time is a folly.

> I do wonder, however, how much of your actual job also entails ensuring that whoever is doing the implementation is also growing in their understanding of the domain models. Are you developing the people under you? Is that part of your job?

Growing the people under me involves teaching them to solve problems, and long before AI that typically involved teaching developers to stop obsessing over details with low ROI for the work they were actually doing, in favour of understanding and solving the problems of the business. Often that meant making them draw a line between what actually served the needs they were paid to solve and what was personally fun to them. (I've been guilty of diving into complex low-level problems I find fun rather than what solves the highest-ROI problems too - ask me about my compilers, my editor, my terminal - I'm excellent at yak shaving, but I work hard to keep that away from my work.)

> If it is an AI that is reporting to you, how are you doing this? Are you writing "skills" files? How are you verifying that it is following them? How are you verifying that it understands them the same way that you intended it to?

For AI use: Tests. Tests. More tests. And, yes, skills and agents. Not primarily even to verify that it understands the specs, but to create harnesses to run them in agent loops without having to babysit them every step of the way. If you use AI and spend your time babysitting them, you've become a glorified assistant to the machine.
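To make "harness" concrete, the shape of what I mean is roughly this (a simplified sketch; "agent-cli" and its flags are stand-ins for whatever agent tooling you actually drive, not a specific product):

```typescript
// Minimal sketch of a test-gated agent loop: run the agent, run the suite,
// feed failures back, stop when green or when the iteration budget runs out.
import { execSync } from "node:child_process";

function run(cmd: string): { ok: boolean; output: string } {
  try {
    return { ok: true, output: execSync(cmd, { encoding: "utf8" }) };
  } catch (err: any) {
    // execSync throws on non-zero exit; keep whatever output we got.
    return { ok: false, output: String(err.stdout ?? err.message) };
  }
}

const MAX_ITERATIONS = 5;
let feedback = "Initial run.";

for (let i = 1; i <= MAX_ITERATIONS; i++) {
  // "agent-cli" is a placeholder for your coding agent of choice.
  run(`agent-cli --task task.md --feedback ${JSON.stringify(feedback.slice(0, 4000))}`);

  const tests = run("npm test");
  if (tests.ok) {
    console.log(`Suite green after ${i} iteration(s).`);
    break;
  }
  feedback = tests.output; // hand the failing output back on the next pass
}
```

The point is that the tests, not me, are what the agent answers to between check-ins.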

empath7516 days ago

If your product has code in it that can only be understood and worked on by the person that wrote it, then your code is too complex and underdocumented and/or doesn't have enough test coverage.

Your time would be better spent, in a permanent code base, trying to get that LLM to understand something than it would be trying to understand the thing yourself. It might be the case that you need to understand the thing more thoroughly yourself so you can explain it to the LLM, and it might be the case that you need to write some code so that you can understand it and explain it, but eventually the LLM needs to get it based on the code comments and examples and tests.

Kamq16 days ago

> My "actual job" isn't to write code, but to solve problems.

Yes, and there's often a benefit to having a human have an understanding of the concrete details of the system when you're trying to solve problems.

> That has increasingly shifted to "just" reviewing code

It takes longer to read code than to write code if you're trying to get the same level of understanding. You're gaining time by building up an understanding deficit. That works for a while, but at some point you have to go burn the time to understand it.

laurentiurad16 days ago

It's like any other muscle, if you don't exercise it, you will lose it.

It's important that when you solve problems by writing code, you go through all the use cases of your solution. In my experience, just reading the code given by someone else (either a human or a machine) is not enough, and you end up evaluating perhaps the main use cases and the style. Most of the time you will find gaps while writing the code yourself.

jvanderbot16 days ago

> often a benefit to having a human have an understanding of the concrete details of the system

Further elaborating from my experience.

1. I think we're in the early stages, where agents are useful because we still know enough to coach well - knowledge inertia.

2. I routinely make the mistake of allowing too much autonomy, and will have to spend time cleaning up poor design choices that were either inserted by the agent, or were forced upon it because I had lost lock on the implementation details (usually both in a causal loop!)

I just have a policy of moving slowly and carefully now through the critical code, vs letting the agent steer. They have overindexed on passing tests and "clean code", producing things that cause subtle errors time and time again in a large codebase.

> burn the time to understand it.

It seems to me to be self-evident that writing produces better understanding than reading. In fact, when I would try to understand a difficult codebase, it often meant that probing+rewriting produced a better understanding than reading, even if those changes were never kept.

vidarh16 days ago

> It takes longer to read code than to write code if you're trying to get the same level of understanding. You're gaining time by building up an understanding deficit. That works for a while, but at some point you have to go burn the time to understand it.

This is true whether an AI wrote the code or a co-worker, except the AI is always on hand to answer detailed questions about the code, do detailed analysis, and run extensive tests to validate assumptions.

It is very rarely productive any more to dig into low level code manually.

Kamq15 days ago

> This is true whether an AI wrote the code or a co-worker

I agree. I just don't think code reviews are as load bearing as everyone seems to think. They're important, but not nearly enough.

allreduce16 days ago

What industry are you working in?

thefaux16 days ago

This feels like it conflates problem solving with the production of artifacts. It seems highly possible to me that the explosion of ai generated code is ultimately creating more problems than it is solving and that the friction of manual coding may ultimately prove to be a great virtue.

Difwif16 days ago

This statement feels like a farmer making a case for using their hands to tend the land instead of a tractor because it produces too many crops. Modern farming requires you to have an ecosystem of supporting tools to handle the scale and you need to learn new skills like being a diesel mechanic.

How we work changes and the extra complexity buys us productivity. The vast majority of software will be AI generated, tools will exist to continuously test/refine it, and hand written code will be for artists, hobbyists, and an ever shrinking set of hard problems where a human still wins.

teeray16 days ago

This is a false equivalence. If the farmer had some processing step which had to be done by hand, having mountains of unprocessed crops instead of a small pile doesn’t improve their throughput.

vidarh16 days ago

I measure what I do by output.

Just about a week ago I launched a 100% AI generated project that short-circuits a bunch of manual tasks. What before took 3+ weeks of manual work to produce now takes us 1-2 days to verify instead. It generates revenue. It solved the problem of taking a workflow that was barely profitable and cutting costs by more than 90%. Half the remaining time is ongoing process optimization - we hope to fully automate away the remaining 1-2 days.

This was a problem that wasn't even tractable without AI, and there's no "explosion of AI generated code".

I fully agree that some places will drown in a deluge of AI generated code of poor quality, but that is an operator fault. In fact, one of my current clients retained me specifically to clean up after someone who dove head first into "AI first" without an understanding of proper guardrails.

bendmorris16 days ago

>This was a problem that wasn't even tractable without AI, and there's no "explosion of AI generated code".

People often say this when giving examples, but what specifically made the problem intractable?

Sometimes before beginning work on a problem, I dramatically overestimate how hard it will be (or underestimate how capable I am of solving it.)

eaglelamp16 days ago

All employees solve problems. Developers have benefited from the special techniques they have learned to solve problems. If these techniques are obsolete, or are largely replaced by minding a massive machine, the character of the work, the pay for performing it, and social position of those who perform it will change.

sodapopcan16 days ago

> My "actual job" isn't to write code, but to solve problems.

You're like the 836453th person to say this. It's not untrue, but many of us will take writing over reviewing any day. Reviewing is like the worst part of the job.

vidarh16 days ago

I use AI heavily to review the code too, and it makes it far simpler.

E.g. "show me why <this assumption that is necessary for the code I'm currently staring at> holds" makes it far more pleasant to do reviews. AI code review tooling works well to reduce that burden. Even more so when you have that AI cod review tooling running as part of your agent loop first before you even look at a delivery.

"prove X" is another one - if it can't find a test case that already proves X and resorts to writing code to prove X, you probably need more tests, and now you have one,.

mcv16 days ago

I strongly prefer writing the code myself and letting AI review it, over the other way around.

ASalazarMX16 days ago

Then your job has turned into designing solutions, and asking a (sometimes unreliable) LLM to make them for you. If you keep at it, soon you'll accumulate enough cognitive debt to become a fossil, knowing what has to be done, but not quite how it is done.

asdff16 days ago

And really, where is your moat? Why pay for a senior when a junior can prompt an LLM all the same? People are acting like it's juniors who are going to be out of work, as if companies are going to just keep paying seniors for their now obsolete skills.

vidarh15 days ago

Where do you think your moat is if you insist on sticking at a level of abstraction where the AI keeps eating into your job, instead of stepping up and handling architecture, systems design, etc. at a higher level?

jackblemming15 days ago

So no different than management up to the CEO, who simply "delegate" to the underlings?

ASalazarMX15 days ago

Exactly. A manager or CEO won't call themselves software engineers. If we change our role similarly in the development cycle, neither should we.

Talanes15 days ago

Except most of those people have developed the separate skillset of playing work politics to make their job seem necessary.

vidarh15 days ago

Do you know how to write the code you write in assembler instead of a higher level language? How many of your peers do?

Most "know what has to be done, but not quite how it is done". This is just another level of abstraction.

I learnt the lesson 30+ years ago that while it was (and still occasionally is) useful to understand the principles of assembly, it had become useless to write assembly outside of a few narrow niches. A decade later I moved from C and C++ to higher level languages again.

Moving up the abstraction levels is learning leverage.

I deliver far more now - with or without AI - than I did when I wrote assembler, or C for that matter. I deliver more again with AI than without.

That's what matters.

ASalazarMX14 days ago

> Do you know how to write the code you write in assembler instead of a higher level language?

Actually, I do, but then you could ask me if I can develop in machine language, and I'd hate to reply no. The abstraction is not the point; the isolation from the core task is. If you're a brilliant fashion designer who even knows how to sew, but you outsource your work to an Asian sweatshop, you can never be sure it's well done until you see the result.

Using an abstraction is not the same as using a black box.

> I deliver far more now - with or without AI - than I did when I wrote assembler, or C for that matter. I deliver more again with AI than without.

> That's what matters.

Also, in some disciplines, quality sometimes matters more than quantity.

pwython16 days ago

My "actual job" is a designer, not a career engineer, so for me code has always been how I ship. AI makes that separation clearer now. I just recently wrote about this.[0]

But I think the cognitive debt framing is useful: reading and approving code is not the same as building the mental model you get from writing, probing, and breaking things yourself. So the win (more time on problem solving) only holds if you're still intentionally doing enough of the concrete work to stay anchored in the system.

That said, if you're someone like me, I don't always need to fully master everything, but I do need to stay close enough to reality that I'm not shipping guesses.

[0] https://alisor.substack.com/p/i-never-really-wrote-code-now-...

keybored15 days ago

> My "actual job" isn't to write code, but to solve problems.

Air quotes and more and more general words. The perfect mercenary's tools.

The buck stops somewhere for most of us. We have jobs, we are compelled to do them. But we care about how it is done. We care whether doing it in a certain way will give us short-term advantages but hinder us in the long term. We care if the process feels good or bad. We care if it feels like we are in control of the process or if we are just swimming in a turbulent sea. We care about how predictable the tools we use are. Whether we can guess that something will take a month and not be off by weeks.

We might say that we are the perfect pragmatists (mercenaries); that we only care about the most general description of what-is-to-be-done that is acceptable to the audience, like solving business problems, or solving technical problems, or in the end—as the pragmatist sheds all meaning from his burdensome vessel—just solving problems. But most of us got into some trade, or hobby, or profession, because we did concrete things that we concretely liked. And switching from keyboards to voice dictation might not change that. But seemingly upending the whole process might.

It might. Or it may not. Certainly could go in more than one direction. But to people who are not perfect mercenaries or business hedonists[1] these are actual problems or concerns. Not nonsense to be dismissed with some “actual job” quip, which itself is devoid of meaning.

[1] https://news.ycombinator.com/item?id=46692039

HumblyTossed16 days ago

You're right that a dev's job is to solve problems. However, one loses a lot of that if one doesn't think in computerese - and only reading code isn't enough. One has to write code to understand code. So for one to do one's _actual_ job, they cannot depend solely on "AI" to write all the code.

vidarh16 days ago

We used to say that about people who wrote in C instead of assembler. Then we used to say that (and many still do) about people who opted for "scripting languages" over "systems languages".

It's "true" in a sense. It helps. But it is also largely irrelevant for most of us, in that most of us are writing code you can learn to read and write in a tiny proportion of the time we spend in working life. The notion that you need to keep spending more than a tiny fraction of your time writing code in order to understand enough to be able to solve business problem will seem increasingly quaint.

jedwards121115 days ago

Some of the biggest improvements I've made in the clarity and typesafety of the code I write came from seeing the weak points while slogging through writing code, and choosing or writing better libraries to solve certain problems. If everyone stops writing code I can only imagine quality will stagnate

jedwards121115 days ago

for example, I got fed up with the old form library we were using because it wasn't capable of checking field names/paths and field value types at compile time and I kept having unexpected runtime errors. I wrote a replacement form library that can deeply typecheck all of that stuff.

If I had turned an AI loose against the original codebase, I think it would have just churned away copying the existing patterns and debugging any runtime errors that result. I don't think an AI would have ever voluntarily told me "this form library is costing time and effort, we should replace it with such and such instead"
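For a flavor of what "deeply typecheck" means here, this is a toy sketch of the general idea, not my actual library; the names are made up for illustration:

```typescript
// Toy sketch: field paths and their value types checked at compile time
// via template literal types. Not the real library, just the general idea.
type Paths<T> = T extends object
  ? { [K in keyof T & string]: T[K] extends object
      ? K | `${K}.${Paths<T[K]>}`
      : K
    }[keyof T & string]
  : never;

type PathValue<T, P extends string> =
  P extends `${infer Head}.${infer Rest}`
    ? Head extends keyof T ? PathValue<T[Head], Rest> : never
    : P extends keyof T ? T[P] : never;

interface FormValues {
  user: { name: string; age: number };
  newsletter: boolean;
}

// Hypothetical setter: the path must exist and the value must match its type.
declare function setField<T, P extends Paths<T> & string>(
  values: T,
  path: P,
  value: PathValue<T, P>
): void;

declare const values: FormValues;
setField(values, "user.age", 42);        // ok
// setField(values, "user.agee", 42);    // compile error: unknown path
// setField(values, "user.age", "42");   // compile error: wrong value type
```

With the old library all of that only blew up at runtime, which is exactly the kind of weak point you only really feel when you're the one writing the code.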

vidarh15 days ago

Not everyone will stop writing code. But not everyone needs to code.

pranavj16 days ago

Exactly this. The shift from "writing code" to "reviewing code and focusing on architecture" is the natural evolution. Every abstraction layer in computing history freed us to think at higher levels - assembler to C, C to Python, and now Python to "describe what you want."

The people framing this as "cognitive debt" are measuring the wrong thing. You're not losing the ability to think - you're shifting what you think about. That's not a bug, it's the whole point.

slekker15 days ago

The problem is: how do you review code if you don't know what it is supposed to look like? Creativity is not only in the problem-solving step but also in implementing it, and letting an LLM do most of it is incredibly dangerous for the future, more so when juniors are gaining experience this way. The software quality will be much worse, and the churn even higher, and I will be on a farm with my chickens.

austinshea14 days ago

A lot of people, who are on their way to doing truly professional work, have this epiphany.

The place you need to get to is understanding that you are being asked to ensure a problem is solved.

You’re only causing a larger problem by “solving” issues without both becoming an SME and ensuring that knowledge can be held by the organization, at all levels that the problem affects (onboarding, staff, project management, security, finance, auditors, c-suite.)

wiseowise16 days ago

So what happens when the LLM provider and/or the internet is down, or you're out of credits?

vidarh16 days ago

If the internet is down I can't do most of my work with or without an LLM, and redundancy is a thing.

The only way I'll run out of credits is if my company isn't liquid any longer in which case I have bigger problems.

And there are plenty of LLM providers - albeit only a few with SOTA models - but even for SOTA models there is no reason to be dependent on just one.

taco_emoji16 days ago

I don't care what my "actual" job is. I like writing code. I like exercising my brain in that way. I like building things.

I do not want to be a supervisor of AI agents. I do not want to engineer prompts, I want to engineer software.

vidarh16 days ago

I sympathise, in as much as I love writing code too, but I increasingly restrict that to my personal projects. It is simply not cost effective any more to write code manually vs. proper use of agents, and developers who resist that will find it increasingly hard to stay employed.

alfalfasprout16 days ago

> It is simply not cost effective any more to write code manually vs. proper use of agents, and developers who resist that will find it increasingly hard to stay employed.

In practice, this isn't bearing out at all, though, either among my peers or with peers at other tech companies. Just making a blanket statement like this adds nothing to the conversation.

direwolf2016 days ago

Devs are one of the last fields to be ruined by capitalism, but it has finally arrived here too.

palmotea15 days ago

> I get to spend more time on my actual job.

If you spend all your time on that, you might actually lose the ability to actually do it. I find a lot of "non core" tasks are pretty important for skill building and maintenance.

notanastronaut16 days ago

I'm in the same boat. There's a lot of things I don't know, and using these models helps give direction and narrow focus towards solutions I didn't know about previously. I augment my knowledge, not replace it.

Some people learn from rote memorization, some people learn through hands on experience. Some people have "ADHD brains". Some people are on the spectrum. If you visit Wikipedia and check out Learning Styles, there's like eight different suggested models, and even those are criticized extensively.

It seems a sort of parochial universalism has coalesced, but people should keep in mind we don't all learn the same.

ETA: I'd also like to say that learning from LLMs is vastly similar to, and in some ways more useful than, finding blogs on a subject. A lot of the time, say for Linux, you'll find instructions where even if you perform them to a tee, something goes pear-shaped, because of tiny environment variables or because a single package update changes things. Even Photoshop tutorials are not free of this madness. I'm used to mostly correct but just this side of incorrect instructions. LLMs are no different in a lot of ways. At least with them I can tailor my experience to just what I'm trying to do and spend time correcting that, versus loading up a YT video trying to understand why X doesn't work. But I can understand if people don't get the same value as I do.

blibble16 days ago

this is the standard consultant vs employee angle

if you're a consultant/contractor that's bid a fixed amount for a job: you're incentivised to slop out as much as possible to complete the contract as quickly as possible

and then if you do a particularly bad job you'll probably be kept on to fix up the problems

vs. a permanent employee that is incentivised to do the job well, sign it off and move on to the next task

vidarh16 days ago

You're making flawed assumptions you have no basis for.

Most of my work is on projects I have a long term vested interest in.

I care far more about maximally leveraging LLMs for the projects I have a vested interest in - if my clients don't want to, that's their business.

Most of my LLM usage directly affects my personal finances in terms of the ROI my non-consulting projects generate - I have far more incentives to do the job well than a permanent employee whose work does not have an immediate effect on their income.

acedTrex16 days ago

Did you not read the comment you are replying to?

vidarh16 days ago

I did. Did you read the comment you replied to?

Archer662116 days ago

That's a nice anecdote, and I agree with the sentiment - skill development comes from practice. It's tempting to see using AI as a free lunch, but it comes with a cost in the form of skill atrophy. I reckon this is even the case when using it as an interactive encyclopedia, where you may lose some skill in searching and aggregating information, but for many people the overall trade-off in terms of time and energy savings is worth it, giving them room to do more or other things.

scyzoryk_xyz16 days ago

If the computer was the bicycle for the mind, then perhaps AI is the electric scooter for the mind? Gets you there, but doesn't necessarily help build the best healthy habits.

Trade offs around "room to do more of other things" are an interesting and recurring theme of these conversations. Like two opposites of a spectrum. On one end the ideal process oriented artisan taking the long way to mastery, on the other end the trailblazer moving fast and discovering entirely new things.

Comparing to the encyclopedia example: I'm already seeing that my own skillset in researching online has atrophied and become less relevant - both because searching isn't as helpful and because my muscle memory is shifting towards reaching for the chat window.

andai16 days ago

It's a servant, in the Claude Code mode of operation.

If you outsource a skill consistently, you will be engaging less with that skill. Depending on the skill, this may be acceptable, or a desirable tradeoff.

For example, using a very fast LLM to interactively make small edits to a program (a few lines at a time), outsources the work of typing, remembering stdlib names and parameter order, etc.

This way of working is more akin to power armor, where you are still continuously directing it, just with each of your intentions manifesting more rapidly (and perhaps with less precision, though it seems perfectly manageable if you keep the edit size small enough).

Whereas "just go build me this thing" and then you make a coffee is qualitatively very different, at that point you're more like a manager than a programmer.

footy16 days ago

> then perhaps AI is the electric scooter for the mind

I have a whole half-written blog post about how LLMs are the cars of the mind. Massive externalities, has to be forced on people, leads to cognitive/health issues instead of improving cognition and health.

coole-wurst16 days ago

Maybe it was always about where you are going and how fast you can get there? And AI might be a few mph faster than a bicycle, and still accelerating.

ungreased067516 days ago

I’ve also noticed that I’m less effective at research, but I think it’s our tools becoming less effective over time. Boolean doesn’t really work, and I’ve noticed that really niche things don’t surface in the search results (on Bing) even when I know the website exists. Just like LLMs seem lazy sometimes, search similarly feels lazy occasionally.

wiseowise16 days ago

> perhaps AI is the electric scooter for the mind

More like a mobility scooter for the disabled. Literally Wall-E in the making.

chairmansteve16 days ago

"I reckon this is even the case when using it as an interactive encyclopedia".

Yes, that is my experience. I have done some C# projects recently, a language I am not familiar with. I used the interactive encyclopedia method and "wrote" a decent amount of code myself, but several thousand lines of production code later, I don't know C# any better than when I started.

OTOH, it seems that LLMs are very good at compiling pseudocode into C#. And I have always been good at reading code, even in unfamiliar languages, so it all works pretty well.

I think I have always worked in pseudocode inside my head. So with LLMs, I don't need to know any programming languages!

Archer662112 days ago

It's even more general than that: LLMs seem to be exceedingly good at translating bodies of text from one domain to another. The frontier models also have excellent natural language translation capabilities, far surpassing e.g. Google translate.

In that sense, going from pseudocode to a programming language is no different from that, or even translating a piece of code from one programming language to another.

isolli16 days ago

This mirrors my experience exactly. We have to learn how to tame the beast.

sothatsit16 days ago

I think we all just need to avoid the trap of using AI to circumvent understanding. I think that’s where most problems with AI lie.

If I understand a problem and AI is just helping me write or refactor code, that’s all good. If I don’t understand a problem and I’m using AI to help me investigate the codebase or help me debug, that’s okay too. But if I ever just let the AI do its thing without understanding what it’s doing and then I just accept the results, that’s where things go wrong.

But if we’re serious about avoiding the trap of AI letting us write working code we don’t understand, then AI can be very useful. Unfortunately the trap is very alluring.

A lot of vibe coding falls into the trap. You can get away with it for small stuff, but not for serious work.

orenp16 days ago

I'd say the new problem is knowing when understanding is important and where it's okay to delegate.

It's similar to other abstractions in this way, but on a larger scale due to LLM having so many potential applications. And of course, due to the non-determinism.

sothatsit16 days ago

My argument is that understanding is always important, even if you delegate. But perhaps you mean sometimes a lower degree of understanding may be okay, which may be true, but I’d be cautious on that front. AI coding is a very leaky abstraction.

We already see the damage of a lack of understanding when we have to work with old codebases. These behemoths can become very difficult to work in over time as the people who wrote it leave, and new people don’t have the same understanding to make good effective changes. This slows down progress tremendously.

Fundamentally, code changes you make without understanding them immediately become legacy code. You really don’t want too much of that to pile up.

alfalfasprout16 days ago

I'm writing a blog post on this very thing actually.

Outsourcing learning and thinking is a double edged sword that only comes back to bite you later. It's tempting: you might already know a codebase well and you set agents loose on it. You know enough to evaluate the output well. This is the experience that has impressed a few vocal OSS authors like antirez for example.

Similarly, you see success stories with folks making something greenfield. Since you've delegated decision making to the LLM and gotten a decent looking result it seems like you never needed to know the details at all.

The trap is that your knowledge of why you've built what you've built the way it is atrophies very quickly. Then suddenly you become fully dependent on AI to make any further headway. And you're piling slop on top of slop.

exodust16 days ago

Similarly, I leave Cursor's AI in "ask" mode. It puts code there, leaving me to grab what I need and integrate it myself. This forces me to look closely at the code and prevents the "runaway" feeling where the AI does too much and you're left behind in your own damn project. It's not AI chat causing cognitive debt, it's agents!

jeffreygoesto16 days ago

Elk (Eclipse Layout Kernel) is a very good package solving that, you might want to check its JavaScript port https://github.com/kieler/elkjs

mcv16 days ago

I'd briefly come across Elk, but couldn't tell how it was better than what I was using. The examples I could find all showed far simpler graphs than what we had, and nothing that seemed to address the problems we had, but maybe I should give it another look, because I've kinda lost faith that dagre is going to do what we need.

If I can explain briefly what our issue is: we've got a really complex graph, and need to show it in a way that makes it easy to understand. That by itself might be a lost cause already, but we need it fixed. The problem is that our graph has cycles, and dagre is designed for DAGs; directed acyclic graphs. Fortunately it has a step that removes cycles, but it does that fairly randomly, and that can sometimes dramatically change the shape of the graph by creating unintentional start or end nodes.

I had a way to fix that, but even with that, it's still really hard to understand the graph. We need to cut it up into parts, group nodes together based on shared properties, and that's not something dagre does at all. I'm currently looking into cola with its constraints. But I'll take another look at elk.
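For concreteness, the kind of deterministic cycle pre-processing I mean looks roughly like this - a sketch to illustrate the idea, not our actual code:

```typescript
// Sketch: break cycles yourself, deterministically, before the layout ever
// sees the graph. DFS from known entry nodes and reverse only the back edges,
// so the same input always produces the same acyclic graph and no surprise
// start/end nodes appear.
interface Edge { from: string; to: string; }

function breakCycles(nodes: string[], edges: Edge[], roots: string[]): Edge[] {
  const out = new Map<string, Edge[]>();
  for (const n of nodes) out.set(n, []);
  for (const e of edges) out.get(e.from)?.push(e);

  const visited = new Set<string>();
  const onStack = new Set<string>();
  const result: Edge[] = [];

  function dfs(n: string): void {
    visited.add(n);
    onStack.add(n);
    for (const e of out.get(n) ?? []) {
      if (onStack.has(e.to)) {
        // Back edge: reverse it ourselves (so arrowheads can be flipped back
        // after layout) instead of letting dagre pick one arbitrarily.
        result.push({ from: e.to, to: e.from });
      } else {
        result.push(e);
        if (!visited.has(e.to)) dfs(e.to);
      }
    }
    onStack.delete(n);
  }

  // Start from the nodes we *want* to be sources, then mop up the rest.
  for (const r of roots) if (!visited.has(r)) dfs(r);
  for (const n of nodes) if (!visited.has(n)) dfs(n);
  return result;
}
```

Grouping related nodes is a separate problem, of course; this only keeps the cycle removal from reshuffling the graph between renders.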

jeffreygoesto11 days ago

Our graphs are hierarchical, can contain cycles too, and have a bunch of directed subgraphs. We reach 500 nodes with 20k ports and 10k edges, and "getting the graph" is still possible but takes a bit of practice. Cycle breaking is OK-ish for us, because there is a strong asymmetry between the many "forward" edges and the much rarer "backwards" edges, which makes the heuristics succeed often.

waterhouse16 days ago

Has anyone here switched roles from programmer to manager (of human programmers), and is it a similar feeling?

cowlby15 days ago

Would be curious about this too. It’s a mental shift to go from understanding everything about the code, to trusting someone else understands everything and we just make decisions.

h4kunamata15 days ago

>but I can feel my brain engaging less with the problem than I'm used to

With me it has been the opposite, perhaps because I was anti-AI before and because I know it is gonna make mistakes.

My most intense AI usage:

Goal: Homelab is my hobby and I wanted to set up private tracker torrenting fully via Proton VPN.

I am used to tools such as Ansible and the Linux operating system, but there were like 3 different tools to manage the torrents, plus a bunch of firewall rules so that in case Proton VPN drops, everything stops working instead of using my real IP address and snitching on me to my ISP.

I wanted everything to be as automated as possible with Ansible, so if everything catches on fire, I can run the Ansible playbook and bring everything back online.

The whole setup took me 3 nights and I couldn't stop thinking about it during the day, like how can I solve this or that, the solution Perplexity/ChatGPT gave me broke something else so how could I solve that, etc.

I am using these tools more as a Google Search alternative than as AI per se. I can see when they make mistakes because I know what I am asking them to help me with: my homelab. I don't want to just copy and paste, and ironically, I have learned a ton about Proxmox (where I run my virtual containers and virtual machines). I always say that I don't want just answers; show me how you got to that conclusion so I can learn it myself.

As long as you are aware that this is a tool and that it makes mistakes the same way as somebody's reply in any forum, you are good and should still feel motivated.

If you are using AI tools just for copy/paste expecting things to work without caring to understand what is actually happening (companies and IT teams worldwide), then you have a big problem.

agumonkey15 days ago

I noticed a few states:

- why bother, ask the llm

- relief.. i can let the llm relay me while i rest a bit and resume with some progress done

- inspiration.. the llm follows my ideas and opens weird roads i was barely dreaming of (like asking random 'what if we try to abstract the issue even more' and getting actual creative ideas)

but then there are day-to-day operations and deadlines

alexjplant16 days ago

When I used Copilot autocomplete more I noticed myself slipping a bit when it comes to framework and syntax particulars so I instituted a moratorium on it on Fridays to prevent this.

Claude Code seems to be a much better paradigm. For novel implementations I write code manually while asking it questions. For things that I'm prototyping I babysit it closely and constantly catch it doing things that I don't want it to do. I ask it questions about why it built things certain ways and 80% of the time it doesn't have a good answer and redoes it the way that I want. This takes a great deal of cognitive engagement.

Rule nombre [sic] uno: Never anthropomorphize the LLM. It's a giant pattern-matching machine. A useful one, but still just a machine. Do not let it think for you because it can't.

archy_15 days ago

4b-model take. LLMs are far more intelligent than you give them credit for. Every new layer of abstraction allows us to develop software better and faster. People constantly ragged on OOP yet it is the foundation of modern computing. People whine about "bloat" but continue to buy more RAM. Compilers are a black box and meaningfully inhibit your ability to write asm, but these days nobody cares. I see LLMs as the next logical evolution in computing abstractions.

deadbabe16 days ago

I think a good rule of thumb is, only have AI write some code when you know exactly what it should look like and are just too lazy to type it out, or, if it is code that you would have otherwise just pulled down from some open source library and not written yourself anyway.

Telemakhos15 days ago

I see the opposite effect with AI: I quickly find some error that it has made, because it always makes errors in my field, and that keeps me from disengaging with the problem, because it helps define what can be wrong. I mainly use AI like I used to use my blog, for writing out my ideas in prose that I think is comprehensible and organized. Neither AI nor my old blog ever solved a problem for me, but they help me figure out how to talk about problems. I'll solve them on my own, but being able to describe a problem well is an important step in that.

mcv15 days ago

I think that's still in line with what I mean. Letting the AI solve the problem doesn't work, but I've had several times that simply trying to explain the problem to the AI helped me solve it. Sometimes it's not an interactive encyclopedia, but an interactive rubber duck. That works too.

Don't outsource the thinking to the AI, is what I mean. Don't trust it, but use it to talk to, to shape your thoughts, and to provide information and even ideas. But not the solution, because that has never worked for me for any non-trivial problem.

stronglikedan16 days ago

> It's easy to get started

Funny - that's the hard part for me. I have yet to figure out what to use it for, since it seems to take longer than any other method of performing my tasks. Especially with regards to verifying for correctness, which in most cases seems to take as long or longer than just having done it myself, knowing I did it correctly.

jstummbillig16 days ago

> But letting it write the actual code was a mistake

I think you not asking questions about the code is the problem (insofar as it still is a problem). But it certainly has gotten easy not to.

anthonypasq16 days ago

seems like the actual issue is that you were using a Copilot IDE plugin.

omneity15 days ago

I just went through an eerily similar situation where the coding agent was able to muster some pretty advanced math (information geometry) to solve my problem at hand.

But while I was able to understand it enough to steer the conversation, I was utterly unable to make any meaningful change to the code or grasp what it was doing. Unfortunately, unlike in the case you described, chatting with the LLM didn’t cut it as the domain is challenging enough. I’m on a rabbit hunt now for days, picking up the math foundations and writing the code at a slower pace albeit one I can keep up with.

And to be honest it’s incredibly fun. Applied math with a smart, dedicated tutor and the ability to immediately see results and build your intuition is miles ahead of my memories back in formative years.

foxes16 days ago

Is this a copilot ad?

gambiting16 days ago

It does read like one.

hobofan16 days ago

It reads like an anti-ad for both. "I didn't use the Copilot IDE because I lack control over the context provided" and "I used Copilot 365 because it for sure doesn't have any context of anything because connecting things to it is hard/expensive".

PatronBernard16 days ago

> a hierarchical graph layout algorithm based on the Sugiyama framework, using Brandes-Köpf for node positioning.

I am sorry for being direct but you could have just kept it to the first part of that sentence. Everything after that just sounds like pretentious name dropping and adds nothing to your point.

But I fully agree, for complex problems that require insight, LLMs can waste your time with their sycophancy.

TheColorYellow16 days ago

This is a technical forum, isn't pretentious name dropping kind of what we do?

Seriously though, I appreciated it because my curiosity got the better of me and I went down a quick rabbit hole on Sugiyama, comparative graph algorithms, and node positioning as a particular dimension of graph theory. Sure, nothing groundbreaking, but it added a shallow amount to my broad knowledge base of theory that continues to prove useful in our business (often knowing what you don't know is the best initiative for learning). So yeah man, let's keep name dropping pretentious technical details, because that's half the reason I surf this site.

And yes, I did use ChatGPT to familiarize myself with these concepts briefly.

low_common16 days ago

How are you applying what you learned to your business? I think that would be interesting to share if you can.

fatherwavelet16 days ago

I think many are not doing anything like this, so to the person who is not interested in learning anything, technical details like this sound like pretentious name dropping, because that is how they relate to the world.

Everything to them is a social media post for likes.

I have explored all kinds of graph layouts in various network science contexts via LLMs, and guess what? I don't know anything much about graph theory beyond G = (V,E). I am not really interested either. I am interested in what I can do with and learn from G. Everything on the right of the equals sign is already beyond my ability, so I leave it to Gemini. I am just not that smart.

The standard narrative on this board seems to be something akin to having to master all volumes of Knuth before you can even think to write a React CRUD app. Ironic since I imagine so many learned programming by just programming.

I know I don't think as hard when using an LLM. Maybe that is a problem for people with 25 more IQ points than me. If I had 25 more IQ points maybe I could figure out stuff without the LLM. That was not the hand I was dealt though.

I get the feeling there is immense intellectual hubris on this forum, and when something like this comes up, it is a dog whistle for these delusional Erdos-in-their-own-mind people to come out of the woodwork to tell you how LLMs can't help you with graph theory.

If that wasn't the case there would be vastly more interesting discussion on this forum instead of ad nauseam discussion on how bad LLMs are.

I learn new things everyday from Gemini and basically nothing reading this forum.

chasd0016 days ago

for many people here, knowing various algorithms, data structures, and how to code really well and really fast are the only things that differentiate them from everyone else and largely define their identity. Now all of that value, status, and exclusivity is significantly threatened.

hluska16 days ago

I’ve been forced down that path and based on that experience it added a whole lot. Maybe you just don’t understand the problem?

wanderlust12316 days ago

There is nothing pretentious about what they said. Why are you so insecure/sensitive?

sdoering16 days ago

This reminds me of the recurring pattern with every new medium: Socrates worried writing would destroy memory, Gutenberg's critics feared for contemplation, novels were "brain softening," TV was the "idiot box." That said, I'm not sure "they've always been wrong before" proves they're wrong now.

Where I'm skeptical of this study:

- 54 participants, only 18 in the critical 4th session

- 4 months is barely enough time to adapt to a fundamentally new tool

- "Reduced brain connectivity" is framed as bad - but couldn't efficient resource allocation also be a feature, not a bug?

- Essay writing is one specific task; extrapolating to "cognition in general" seems like a stretch

Where the study might have a point:

Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.

So am I ideologically inclined to dismiss this? Maybe. But I also think the honest answer is: we don't know yet. The historical pattern suggests cognitive abilities shift rather than disappear. Whether this shift is net positive or negative - ask me again in 20 years.

[Edit]: Formatting

wisty16 days ago

Soapbox time.

They were arguably right. Pre-literate people could memorise vast texts (Homer's work, Australian Aboriginal songlines). Pre-Gutenberg, memorising reasonably large texts was common. See, e.g., the book Memory Craft.

We're becoming increasingly like the Wall E people, too lazy and stupid to do anything without our machines doing it for us, as we offload increasing amounts onto them.

And it's not even that machines are always better, they only have to be barely competent. People will risk their life in a horribly janky self driving car if it means they can swipe on social media instead of watching the road - acceptance doesn't mean it's good.

We have about 30 years of the internet being widely adopted, which I think is roughly similar to AI in many ways (both give you access to data very quickly). Economists suggest we are in many ways no more productive now than when Homer Simpson could buy a house and raise a family on a single income - https://en.wikipedia.org/wiki/Productivity_paradox

Yes, it's too early to be sure, but the internet, Google and Wikipedia arguably haven't made the world any better (overall).

smokel16 days ago

> Pre literate peole could memorise vast texts

It seems more likely that there were only a handful of people who could. There still are a handful of people who can, and they are probably even better than in the olden times [1] (for example because there are simply more people now than back then.)

[1] https://oberlinreview.org/35413/news/35413/ (random first link from Google)

giraffe_lady15 days ago

Yes, there is some actual technique to learn and then with moderate practice it's possible to accurately memorize surprisingly long passages, especially if they have any consistent structure. Reasonable enough to guess that this is a normally distributed skill, talent, domain of expertise.

woah16 days ago

Used to be, Tony Soprano could afford a mansion in New Jersey, buy furs for his wife, and eat out at the strip club for lunch every day, all on a single income as a waste management specialist.

CuriouslyC16 days ago

Brains are adaptive. We're not getting dumber, we're just adapting to a new environment. Just because they're less fit for other environments doesn't make it worse.

As for the productivity paradox, this discounts the reality that we wouldn't even be able to scale the institutions we're scaling without the tech. Whether that scaling is a good thing is debatable.

discreteevent16 days ago

> Brains are adaptive.

They are, but you go on to assume that they will adapt in a good way.

Bodies are adaptive too. That didn't work out well for a lot of people when their environment changed to be sedentary.

doublerabbit16 days ago

Brains are adaptive, and as we adapt we are becoming more cognitively unbalanced. We're absorbing potentially biased information at a faster rate. GPT can give you information on X in seconds. Have you thought about it? Is that information correct? Information can easily be adapted to sound real while masking the real as false.

Launching a search engine and searching may spew incorrectness, but it made you exercise judgement, think. You could have two different opinions one underneath the other; you saw both sides of the coin.

We are no longer thinking critically. We are taking information at face value, marking it as correct and not questioning it afterwards.

The ability to evaluate critically and rationally is what's decaying. Who opens a physical encyclopedia nowadays? That itself requires resources, effort and time. Add in life complexity; that doesn't help us in evaluating and rejecting consumption of false information. The Wall-E view isn't wrong.

sersi16 days ago

> Who opens a physical encyclopedia nowadays?

I know plenty of people who binge Wikipedia and learn new things through that. While Wikipedia is not always perfect, it's not like older printed encyclopaedias like Britannica were perfect either.

You have a point about trusting AI, but I'm starting to see people around me realising that LLMs tend to be overconfident even when wrong, and verifying the sources instead of just trusting. That's the way I use something like Perplexity: I use it as an improved search engine and then tend to visit the sources it lists.

TaupeRanger16 days ago

> Just because they're less fit for other environments doesn't make it worse.

You think it's likely that we offload cognitive difficulty and complexity to machines, and our brains don't get worse at difficult, complex problems?

titzer16 days ago

Brains are adaptive but skills are cumulative. You can't get good at what you don't practice.

wiseowise16 days ago

> Just because they're less fit for other environments doesn't make them worse.

It literally does. If your brain shuts down the moment you can't access your LLM overlord then you're objectively worse.

+1
jondwillis16 days ago
nindalf16 days ago

> Homer Simpson

I can't stress this enough, Homer Simpson is a fictional character from a cartoon. I would not use him in an argument about economics any more than I would use the Roadrunner to argue for road safety.

mountainb16 days ago

No, it's useful evidence in the same way that contemporaneous fiction is often useful evidence. The first season aired from 1989-1990. The living conditions from the show were plausible. I know because I was alive during that time. My best friend was the son of a vacuum cleaner salesman with a high school education, and they owned a three bedroom house in a nice area, two purebred dogs, and always had new cars. His mom never worked in any capacity. My friend played baseball on a travel team and eventually he went to a private high school.

A 2025 Homer is only plausible if he had some kind of supplemental income (like a military pension or a trust fund), if Marge had a job, if the house was in a depressed region, or he was a higher level supervisor. We can use the Simpsons as limited evidence of contemporary economic conditions in the same way that we could use the depictions of the characters in the Canterbury Tales for the same purpose.

+2
Capricorn248116 days ago
+1
nindalf16 days ago
wisty15 days ago

I also cited more serious analysis.

Yeah, Homer Simpson is fictional, a unionised blue-collar worker with specialised skills, and he lives in a small town.

BurningFrog16 days ago

> They were arguably right

I think they were right that something was lost in each transition.

But something much bigger was also gained, and I think each of those inventions was easily worth the cost.

But I'm also aware that one cost of the printing press was a century of very bloody wars across Europe.

MagicMoonlight15 days ago

There are still people that memorise the entire Quran word for word.

But it’s a complete waste of time. What is the point of spending years memorising a book?

You seem like the kind of person that would still be eating rotten carcasses on the plains while the rest of us are sitting around a fire.

throwup23815 days ago

> They were arguably right. Pre literate people could memorise vast texts (Homer's work, Australian Aboriginal songlines). Pre Gutenberg, memorising reasonably large texts was common. See, e.g. the book Memory Craft.

> We're becoming increasingly like the Wall E people, too lazy and stupid to do anything without our machines doing it for us, as we offload increasing amounts onto them.

You're right about the first part, wrong about the second part.

Pre-Gutenberg people could memorize huge texts because they didn't have that many texts to begin with. Obtaining a single copy cost as much as supporting a single well-educated human for weeks or months while they copied the text by hand. That doesn't include the cost of all the vellum and paper which also translated to man-weeks of labor. Rereading the same thing over and over again or listening to the same bard tell the same old story was still more interesting than watching wheat grow or spinning fabric, so that's what they did.

We're offloading our brains onto technology because it has always allowed us to function better than before, despite an increasing amount of knowledge and information.

> Yes, it's too early to be sure, but the internet, Google and Wikipedia arguably haven't made the world any better (overall).

I find that to be a crazy opinion. Relative to thirty years ago, quality of life has risen significantly thanks to all three of those technologies (although I'd have a harder time arguing for Wikipedia versus the internet and Google) in quantifiable ways, from the lowliest subsistence farmers now receiving real-time weather and market updates to all the developed-world people with their noses perpetually stuck in their phones.

You'd need some weapons grade rose tinted glasses and nostalgia to not see that.

wisty15 days ago

Economists suggest we are in many ways no more productive now than when Homer Simpson could buy a house and raise a family on a single income - https://en.wikipedia.org/wiki/Productivity_paradox

throwup23815 days ago

I don’t care if “we” are more productive and I certainly don’t care what western economists think about pre-industrial agriculture. I care that the two billion people living in households dependent on subsistence farming have a better quality of life than they did before the internet or mobile phones, which they undeniably have. That much was obvious fifteen to twenty years ago when mobile networks were rolling out all over Africa en masse and every village I visited on my continental roadtrip had at least one mobile phone that everybody shared to get weather forecasts and coordinate trips to the nearest market town.

Anyone in a developed country who bases their opinions on the effects of technology on their and their friends’ social media addictions is a complete fool. Life has gotten so much better for BILLIONS of people in the last few decades that it’s not even a remotely nuanced issue.

GuB-4216 days ago

I certainly can't memorize Homer's work, and why would I? In exchange I can do so much more. I can find an answer to just about any question on any subject better than the most knowledgeable ancient Greek specialist, because I can search the internet. I can travel faster and further than their best explorers, because I can drive and buy tickets. I have no fighting experience, but give me a gun and a few hours of training and I could defeat their best champions. I traded the ability to memorize the equivalent of entire books to a set of skills that combined with modern technological infrastructure gives me what would be godlike powers at the time of the ancient Greeks.

In addition to these base skills, I also have specialized skills adapted to the modern world, that is my job. Combined with the internet and modern technology I can get to a level of proficiency that no one could get to in the ancient times. And the best part: I am not some kind of genius, just a regular guy with a job.

And I still have time to swipe on social media. I don't know what kind of brainless activities the ancient Greeks did, but they certainly had the equivalent of swiping on social media.

The general idea is that the more we offload to machines, the more we can allocate our time to other tasks, to me, that's progress, that some of these tasks are not the most enlightening doesn't mean we did better before.

And I don't know what economists mean by "productivity", but we can certainly buy more stuff than before, which means that productivity must have increased somewhere (with some ups and downs). It may not appear in GDP calculations, but to me, it is the result that counts.

I don't count home ownership, because you don't produce land. In fact, land being so expensive is a sign of high global productivity. Since land is one of the few things that we need and can't produce, the more we can produce of the other things we need, the higher the value of land is, proportionally.

piyh11 days ago

> Pre literate people could memorise vast texts

Pre literate people HAD TO memorise vast texts

UltraSane16 days ago

Instead of memorizing vast amounts of text, modern people memorize the plots of vast amounts of books, movies, TV shows, video games and pop culture.

Computers are much better at remembering text.

iambateman16 days ago

You’re currently using the internet.

Capricorn248116 days ago

That doesn't contradict anything they wrote.

jama21115 days ago

To be fair, all they wrote was a rant based on some very unsubstantiated claims; I think the burden of proof is on them a little bit.

jama21116 days ago

That’s a lot of assumptions.

ctoth16 days ago

> People will risk their life in a horribly janky self driving car if it means they can swipe on social media instead of watching the road - acceptance doesn't mean it's good.

People will risk their and others' lives in a horribly janky car if it means they can swipe on social media instead of watching the road - acceptance doesn't mean it's good.

FTFY

mschild16 days ago

Needs more research. Fully agree on that.

That said:

TV very much is the idiot box. Not necessarily because of the TV itself but rather whats being viewed. An actual engaging and interesting show/movie is good, but last time I checked, it was mostly filled with low quality trash and constant news bombardment.

Calculators do do arithmetic, and if you ask me to do the kind of calculations I had to do in high school by hand today, I wouldn't be able to. Simple calculations I do in my head, but my ability to do more complex ones has diminished. That's down to me not doing them as often, yes, but also because for complex ones I simply whip out my phone.

richrichardsson16 days ago

> Calculators do do arithmetic, and if you ask me to do the kind of calculations I had to do in high school by hand today, I wouldn't be able to

I got scared by how badly my junior (middle? 5-11) school mathematics had slipped when helping my 9-year-old boy with his homework yesterday.

I literally couldn't remember how to carry the 1 when doing subtractions of 3-digit numbers! Felt literally idiotic having to ask an LLM for help. :(

wiz21c16 days ago

For my part, I don't use that carry method at all. When I have to subtract, I subtract by chunks that my brain can easily handle. For example 1233 - 718: I'll do 1233 - 700 = 533, then 533 - 20 = 513, then 513 + 2 = 515. It's completely instinctive (and thus I can't explain it to my children :-) )

What I have asked my children to do very often is back-of-the-envelope multiplications and other computations. That really helped them to get a sense of the magnitude of things.
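
A minimal sketch of that chunking idea, splitting the subtrahend by place value (it doesn't model the extra trick above of rounding 718 up to 720 and adding 2 back, and the function name is made up for illustration):

    def subtract_by_chunks(a, b):
        # Subtract b from a one place value at a time, roughly as you might do it mentally.
        result = a
        steps = []
        place = 10 ** (len(str(b)) - 1)          # start at the subtrahend's highest place value
        while place >= 1:
            chunk = ((b // place) % 10) * place   # e.g. 700, then 10, then 8 for b = 718
            if chunk:
                steps.append(f"{result} - {chunk} = {result - chunk}")
                result -= chunk
            place //= 10
        return result, steps

    total, steps = subtract_by_chunks(1233, 718)
    print(steps)   # ['1233 - 700 = 533', '533 - 10 = 523', '523 - 8 = 515']
    print(total)   # 515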

+3
n4r916 days ago
zeroonetwothree16 days ago

This doesn’t scale to larger numbers though. I do that too for smaller subtractions but if I need to calculate some 9 digit computation then I would use the standard pen and paper tabular method with borrowing (not that it comes up in practice).

Izkata16 days ago

"Common core" math is an attempt to codify this style so more kids can get a deeper understanding of numbers instead of just blindly following steps. Like the people that created it noticed people like you and me (I do something similar but not quite the same) have an intuitive understanding of math that made us good at it that they want to replicate for everyone. But it seems like very few parents and teachers understand it themselves, resulting in a blind-leading-the-blind situation where it gets taught in a bad way that doesn't achieve the goal.

Also aside, in the method I was taught in school (and I assume you and GP from terminology), "carrying" is what you do with addition (an extra 1 can be carried to the next column), "borrowing" is for subtraction (take a 1 away from the next column if needed).
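
For contrast, a small sketch of the pen-and-paper column method with borrowing mentioned above (assuming non-negative integers with a >= b; the helper name is made up for illustration):

    def subtract_with_borrowing(a, b):
        # Column-by-column subtraction with borrowing, right to left, assuming a >= b >= 0.
        top = [int(d) for d in str(a)][::-1]          # digits, least significant first
        bottom = [int(d) for d in str(b)][::-1]
        bottom += [0] * (len(top) - len(bottom))      # pad the shorter number with zeros

        digits = []
        borrow = 0
        for t, u in zip(top, bottom):
            t -= borrow            # repay any borrow taken by the previous column
            if t < u:
                t += 10            # borrow 10 from the next column
                borrow = 1
            else:
                borrow = 0
            digits.append(t - u)

        return int("".join(str(d) for d in reversed(digits)))

    print(subtract_with_borrowing(1233, 718))   # 515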

alt18716 days ago

It's more complex than that. The three pillars of learning are theory (finding out about the thing), practice (doing the thing) and metacognition (being right, or more importantly, wrong, and correcting yourself). Each of those steps reinforces neural pathways. They're all essential in some form or another.

Literacy, books, saving your knowledge somewhere else removes the burden of remembering everything in your head. But they don't factor into any of those processes. So it's an immensely bad metaphor. A more apt one is GPS, which leaves you with only practice.

That's where LLMs come in, and obliterate every single one of those pillars on any mental skill. You never have to learn a thing deeply, because it's doing the knowing for you. You never have to practice, because the LLM does all the writing for you. And of course, when it's wrong, you're not wrong. So you learn nothing.

There are ways to exploit LLMs to make your brain grow, instead of shrink. You could make them into personalized teachers, catering to each student at their own rhythm. Make them give you problems, instead of ready-made solutions. Only employ them for tasks you already know how to make perfectly. Don't depend on them.

But this isn't the future OpenAI or Anthropic are gonna gift us. Not today, and not in a hundred years, because it's always gonna be more profitable to run a sycophant.

If we want LLMs to be the "better" instead of the "worse", we'll have to fight for it.

Yes, I wrote this comment under someone else's comment before, but it seems to apply to yours even better.

TomasBM16 days ago

Your criticism of this study is roughly on point, IMO. It's not badly designed by any means, but it's an early look. There are already similar studies on the (cognitive) effects of LLMs on learning, but I suspect this one gets the attention because it's associated with the MIT brand.

That said, these kinds of studies are important, because they reveal that some cognitive changes are evidently happening. Like you said, it's up to us to determine if they're positive or negative, but as is probably obvious to many, it's difficult to argue for the status quo.

If it's a negative change, teachers have to go back to paper-and-pen essay writing, which I was personally never good at. Or they need to figure out stable ways to prevent students from using LLMs, if they are to learn anything about writing.

If it's a positive change, i.e., we now have more time to do "better" things (or do things better), then teachers need to figure out substitutes. Suddenly, a common way of testing is now outdated and irrelevant, but there's no clear thing to do instead. So, what do they do?

kace9116 days ago

I think novels and tv are bad examples, as they are not substituting a process. The writing one is better.

Here’s the key difference for me: AI does not currently replace full expertise. In contrast, there is not a “higher level of storage” that books can’t handle and only a human memory can.

I need a senior to handle AI with assurances. I get seniors by having juniors execute supervised lower risk, more mechanical tasks for years. In a world where AI does that, I get no seniors.

duskdozer16 days ago

Not sure "they've always been wrong before" applies to TV being the idiot box and everything after

cimi_16 days ago

> The historical pattern suggests cognitive abilities shift rather than disappear.

Shift to what? This? https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...

darkwater16 days ago

What the hell have I just read (or at least skimmed)?? I cannot understand if the author is:

a) serious, but we live on different planets

b) serious with the idea, tongue-in-cheek in the style and using a lot of self-irony

c) writing an ironic piece with some real ideas

d) mocking AI maximalists

cimi_16 days ago

There was discussion about this here a couple of weeks ago: https://news.ycombinator.com/item?id=46458936

Steve Yegge's a famous developer, and this is not a joke :) You could say he is an AI maximalist; from your options I'd go with (b) serious with the idea, tongue-in-cheek in the style and using a lot of self-irony.

It is exaggerated, but this is how he sees things ending up eventually. This is real software.

If things do end up in glorified kanban boards, what does it mean for us? That we can work less and use the spare time reading and doing yoga, or that we'll work the same hours with our attention even more fragmented and with no control over the outputs of these things (=> stress).

I'd really wish that people who think this is good for us and are pushing for this future do a bit better than:

1. More AI
2. ???
3. Profit

adilabba14 days ago

hi like this

cap1123516 days ago

Just ignore the rambling crypto shill.

clydethefrog16 days ago

I agree with Socrates, and too many people have the wrong memory of him, making his prediction come true. There was a great philosophical book last year, Open Socrates [1], that explains how his methods and ideas run in the opposite direction from how most people use AI. Socrates believed we can only get closer to knowledge through the process of open, inquisitive conversation with other beings who are willing to refute us and be refuted in turn. He claimed ideas can only be expressed and shared in dialogue & live conversation. The one-directional communication of all media since books has lacked this, and AI's version of dialogue is sycophancy and statistical common patterns instead of fresh ideas.

[1] https://www.nytimes.com/2025/01/15/books/review/open-socrate...

FeteCommuniste16 days ago

I'm sure you could train an AI to be skeptical/critical by default. The "you're absolutely right!" AIs are probably always going to be more popular, though.

wartywhoa2316 days ago

> TV was the "idiot box."

TV is the uber idiot box, the overlord of the army of portable smart idiot boxes.

boesboes16 days ago

I think that is a VERY false comparison. As you say, LLMs try to take over entire cognitive and creative processes, and that is a bigger problem than outsourcing arithmetic.

ben_w16 days ago

> 4 months is barely enough time to adapt to a fundamentally new tool

Yes, but there's also the extra wrinkle that this whole thing is moving so fast that 4 months old is borderline obsolete. The same goes for the future: any study starting now, based on the state of the art on 22/01/2026, will involve models and potentially workflows already obsolete by 22/05/2026.

We probably can't ever adapt fully when the entire landscape is changing like that.

> Previous tools outsourced partial processes - calculators do arithmetic, Google stores facts. LLMs can potentially take over the entire cognitive process from thinking to formulating. That's qualitatively different.

Yes, but also consider that this is true of any team: All managers hire people to outsource some entire cognitive process, letting themselves focus on their own personal comparative advantage.

The book "The Last Man Who Knew Everything" is about Thomas Young, who died in 1829; since then, the sum of recorded knowledge has broadened too much for any single person to learn it all, so we need specialists, including specialists in managing other specialists.

AI is a complement to our own minds with both sides of this: Unlike us, AI can "learn it all", just not very well compared to humans. If any of us had a sci-fi/fantasy time loop/pause that let us survive long enough to read the entire internet, we'd be much more competent than any of these models, but we don't, and the AI runs on hardware which allows it to.

For the moment, it's still useful to have management skills (and to know about and use Popperian falsification rather than verification) so that we can discover and compensate for the weaknesses of the AI.

bowsamic16 days ago

> they've always been wrong before

Were they? It seems that often the fears came true, even Socrates’

TheOtherHobbes16 days ago

Writing didn't destroy memory, it externalised it and made it stable and shareable. That was absolutely transformative, and far more useful than being able to re-improvise a once-upon-a-time heroic poem from memory.

It hugely enhanced synthetic and contextual memory, which was a huge development.

AI has the potential to do something similar for cognition. It's not very good at it yet, but externalised cognition has the potential to be transformative in ways we can't imagine - in the same way Socrates couldn't imagine Hacker News.

Of course we identify with cognition in a way we didn't do with rote memory. But we should possibly identify more with synthetic and creative cognition - in the sense of exploring interesting problem spaces of all kinds - than with "I need code to..."

Akronymus16 days ago

> AI has the potential to do something similar for cognition. It's not very good at it yet, but externalised cognition has the potential to be transformative in ways we can't imagine - in the same way Socrates couldn't imagine Hacker News.

Wouldn't the endgame of externalized cognition be that humans essentially become cogs in the machine?

latexr16 days ago

> in the same way Socrates couldn't imagine Hacker News.

Perhaps he could. If there’s an argument to be made against writing, social media (including HN) is a valid one.

bowsamic15 days ago

Regardless of whether memory was externalised, it’s still the case that it was lost internally, that much is true. If you really care about having a great internal memory then of course you’ll think it’s a downside.

So we’ve externalised memory, we’ve externalised arithmetic. Personally, externalising thinking seems to be the last one left? It’s not clear what’s left inside us of being human once that one is gone.

saberience16 days ago

It did destroy memory though. I would bet any amount of money that our memories in 2026 are far, far worse than they were in 1950 or 1900.

In fact, I can feel my memory is easily worse now than before ChatGPT's release, because we are doing less hard cognitive work. The less we use our brains, the dumber we get, and we are definitely using our brains less now.

abdullahkhalids16 days ago

It's not writing that destroys memory. It's fast/low-cost lookup of written material that destroys memory. This is why people had strong memory despite hundreds of years of widespread writing, and it suddenly fell through the floor with the introduction of widespread computers, internet, and smartphones.

anthonypasq16 days ago

We exist in a stunningly more abstract and complex society than we did even 100 years ago. Unless you are reasonably intelligent, it's incredibly difficult to even navigate the modern world.

chairmansteve16 days ago

"Socrates worried writing would destroy memory".

He may have been right... Maybe our minds work in a different way now.

tjr15 days ago

Back when I routinely dialed phone numbers by hand (either on a keypad or on a literal dial), I memorized the numbers I called most frequently. Many of those numbers I still have memorized today, years after some of those phone lines have been disconnected.

But now? I almost never enter a new phone number anywhere. Maybe someone shares a contact with me, and I tap to add it to my contact list. Or I copy-paste a phone number. Even some people that I contact frequently, I have no idea what their phone number is, because I've never needed to "know" it, I just needed to have it in my contact list.

I'm not sure that this is a bad thing, but definitely is a thing.

Ah, well, more memory space for other stuff, eh? I suppose. But like what? I could describe other scenarios, in which I used to have more facts and figures memorized, but simply don't any more, because I don't need to. While perhaps my memory is freed up to theoretically store more other things, in practice, there's not much I really "need" to store.

Even if no longer memorizing phone numbers isn't especially bad, I'm starting to think that no longer memorizing anything might not be a great idea.

vladms16 days ago

> That said, I'm not sure "they've always been wrong before" proves they're wrong now.

I think a better framing would be "abusing (using it too much or for everything) any new tool/medium can lead to negative effects". It is hard to clearly define what is abuse, so further research is required, but I think it is a healthy approach to accept there are downsides in certain cases (that applies for everything probably).

lr4444lr16 days ago

Were any of the prior fears totally wrong?

carlosjobim16 days ago

> This reminds me of the recurring pattern with every new medium: Socrates worried writing would destroy memory, Gutenberg's critics feared for contemplation, novels were "brain softening," TV was the "idiot box." That said, I'm not sure "they've always been wrong before" proves they're wrong now.

What do you mean? All of them were 100% right. Novels are brain softening, TV is an idiot box, and writing destroys memory. AI will destroy the minds of people who use it much.

direwolf2016 days ago

How do we know they were wrong before?

biztos15 days ago

To be fair, writing did destroy memory. It's just that in the very long summer of writing, which may now be coming to an end thanks to AI, we have considered the upside more than worth it.

BlackFly16 days ago

If you realize that what we remember are the extremized strawman versions of the complaints, then you can see that they were not wrong.

Writing did eliminate the need for memorization. How many people could quote a poem today? When oral history was predominant, it was necessary in each tribe for someone to learn the stories. We have much less of that today. Writing preserves accuracy much more (up to conquerors burning down libraries, whereas it would have taken genocide before), but to hear a person stand up and quote Desiderata from memory is a touching experience to the human condition.

Scribes took over that act of memorization. Copying something lends itself to memorization. If you have ever volunteered extensively for project Gutenberg you can also witness a similar experience: reading for typos solidifies the story into your mind in a way that casual writing doesn't. In losing scribes we lost prioritization of texts and this class of person with intimate knowledge of important historical works. With the addition of copyright we have even lost some texts. We gained the higher availability of works and lower marginal costs. The lower marginal costs led to...

Pulp fiction. I think very few people (but I would be disappointed if it was no one) would argue that Dan Brown's da Vinci Code is on the same level as War and Peace. From here magazines were created, even cheaper paper, rags some would call them (or use that to refer to tabloids). Of course this also enabled newspapers to flourish. People started to read things for entertainment, text lost its solemnity. The importance of written word diminished on average as the words being printed became more banal.

TV and the internet led to the destruction of printed news, and so on. This is already a wall of text so I won't continue, but you can see how it goes:

Technology is a double edged sword, we may gain something but we also can and did lose some things. Whether it was progress or not is generally a normative question that often a majority agrees with in one sense or another but there are generational differences in those norms.

In the same way that overuse of a calculator leads to atrophy of arithmetic skills, and overuse of a car leads to atrophy of walking muscles, why wouldn't overuse of a tool that writes essays for you lead to atrophy of your ability to write an essay? The real reason to doubt the study is that its conclusion seems so obvious that it may be too easy for some to believe, and may hide poor statistical power or p-hacking.

darkwater16 days ago

I think your take is almost irrefutable, unless you frame human history as the only possible way to achieve humanity's current status and (unevenly distributed) quality of life.

I also find exhausting the Socrates reference that's ALWAYS brought up in these discussions. It is not the same. Losing the collective ability to recite a 10,000-word poem by heart because of books is not the same thing as ceasing to think because an AI is doing the thinking for you.

We keep adding automation layers on top of the previous ones. The end goal would be _thinking_ of something and have it materialized in computer and physical form. That would be the extreme. Would people keep comparing it to Socrates?

piyuv16 days ago

None of the examples you provided were being sold as “intelligence”

lofaszvanitt15 days ago

What study? Try it yourself.

abustamam15 days ago

> TV was the "idiot box."

To be fair, I think this one is true. There's a lot of great stuff you can watch on TV, but I'd argue that TV is why many boomers are stuck in an echo chamber of their own beliefs (because CNN or Fox News or whatever opinion-masquerading-as-journalism channel is always on in the background). This has of course been exacerbated by social media, but I can't think of many productive uses of TV other than Sesame Street and other kids' shows.

ZpJuUuNaQ516 days ago

>TV was the "idiot box."

Still is.

Sxubas16 days ago

What does not get used, atrophies.

Critical thinking, forming ideas, writing, etc.: those too are things that can atrophy if not used.

For example, a lot of people can't locate themselves without a GPS today.

To be frank, I see it as really similar to our muscles: don't want to lose it? Use it. Whether that is learning a language, playing an instrument or the tasks LLMs perform.

jama21116 days ago

Well said

raincole16 days ago

> "they've always been wrong before"

In my opinion, they've almost always been right.

In the past two decades, we've seen the less-tech-savvy middle managers who devalued anything done on a computer. They seemed to believe that doing graphic design or digital painting was just pressing a few buttons on the keyboard and the computer would do the job for you. These people were constantly mocked in online communities.

In the programmers' world, you have seen people who said "how hard could it be? It's just adding a new button/changing the font/whatever..."

And strangely, in the end those tech muggles were the insightful ones.

blackqueeriroh16 days ago

I encourage folks to listen to Cat Hicks [1], a brilliant psychologist for software teams, and her wife, teaching neuroscientist Ashley Juavinett [2], on their excellent podcast, Change, Technically, discussing the myriad problems with this study: https://www.buzzsprout.com/2396236/episodes/17378968

1: https://www.catharsisinsight.com 2: https://ashleyjuavinett.com

probably_wrong16 days ago

I'm not a fan of "TL;DR" but I think 52 minutes would qualify. I jumped to a random point of the transcript and found just platitudes, which didn't quite hook me into listening to all of it.

How about some more info on what their main conclusions are?

albumen16 days ago

They view the framing of the MIT paper not just as bad science, but as a dangerous social tool that uses brain data to "consign people" to being less worthy or "stupid" for using cognitive aids. It flags the paper's alarmist findings as "pseudoscience" designed to provoke fear rather than provide rigorous insight. They highlight several "red flags" in the study's design: lack of a coherent scientific framework, methodological errors like typos, and reliance on invented, undefined terms such as "cognitive debt". They challenge the interpretation of EEG results, explaining that while the paper frames a 55% reduction in connectivity as evidence that a user's "brain sucks," such data could instead indicate increased neural efficiency, an alternative explanation the authors ignore. (EEG measures broad, noisy signals from outside the skull and is better understood as a rough index of brain state than as a precise window into specific thoughts or “intelligence.”)

The hosts condemn the study’s "bafflingly weak" logic and ableist rhetoric, and advise skepticism toward "science communicators" who might profit from selling hardware or supplements related to their findings: one of the paper's lead authors, Nataliya Kosmyna, is associated with the MIT Media Lab and the development of AttentivU, a pair of glasses designed to monitor brain activity and engagement. By framing LLM use as creating a "cognitive debt," the researchers create a market for their own solution: hardware that monitors and alerts the user when they are "under-engaged". The AttentivU system can provide haptic or audio feedback when attention drops, essentially acting as the "scaffold" for the very cognitive deficits the paper warns against. The research is part of the "Fluid Interfaces" group at MIT, which frequently develops Brain-Computer Interface (BCI) systems like "Brain Switch" and "AVP-EEG". This context supports the hosts' suspicion that the paper’s "cognitive debt" theory may be designed to justify a need for these monitoring tools.

biophysboy16 days ago

Is this comment written by AI?

WarmWash16 days ago

Similar to the media, I've picked up on vibes from academia that have a baseline AI negative tilt.

In my own (classic) engineering work, AI has become so phenomenally powerful that I can only imagine that if I were still in college, I'd be mostly checked out during those boring lectures/bad-teacher classes, and then learning on my own with the textbook and LLMs by night. Which raises the question: what do we need the professor for?

I'd be interested to see stats on "office hours" visitation time over the last 4 years (although admittedly it's the best tool for gaining a professor's favor, which AI doesn't grant).

alfalfasprout16 days ago

> Similar to the media, I've picked up on vibes from academia that have a baseline AI negative tilt.

The media is extremely pro-AI (and a quick look at their ownership structure gives you a hint as to why). You seem to be projecting your own biases here, no?

And how would those LLMs learn? How would you learn to ask the right questions that further scientific research?

tcgv16 days ago

That's a fair point regarding pure content absorption, especially given that many classes do suffer from poor didactics. However, the university's value proposition often lies elsewhere: access to professors researching innovations (not yet indexed by LLMs), physical labs for hands-on experience that you can't simulate, and the crucial peer networking with future colleagues. These human and physical elements, along with the soft skills developed through technical debate, are hard to replace. But for standard theory taught by uninspired lecturers, I agree that the textbook plus LLM approach is arguably superior.

wrqvrwvq16 days ago

The pod has this line "I do want to know if the offloading of cognitive tasks changes my own brain and my own cognition", which is what the paper attempts to address. The authors conclude

> To summarize, the delta-band differences suggest that unassisted writing engages more widespread, slow integrative brain processes, whereas assisted writing involves a more narrow or externally anchored engagement, requiring less delta-mediated integration.

There is no intellectual judgement regarding this difference, though the authors do supply citations from related work that they claim may be of interest to those wanting "to know if the offloading of cognitive tasks changes my own brain and my own cognition". If your brain changes, it might change for the worse at least as far as you experience it. Is this ableism, to examine your own cognitive well-being and make your own assessment? If you don't like how you're thinking about something, are you casting aspersions on yourself and shaming your own judgement? Ableist discourse is, unsurprisingly, a stupid language game for cognitively impaired dummies. It's a pathetic attempt to redefine basic notions of capability and impairment, of functioning and dysfunction as inherently evil concepts, and then to work backward from that premise to find fault with the research results. Every single person experiences moments or lifetimes of psychological and mental difficulty. Admitting this and adapting to it or remediating harmful effects has nothing to do with calling stupid people stupid or ableism. It's just a means of providing tools and frameworks for "cognitive wellness", but even just the implication of "wellness" being distinct from "illness" makes the disturbed and confused unwell.

throw1092015 days ago

> as a dangerous social tool that uses brain data to "consign people" to being less worthy or "stupid" for using cognitive aids

> ableist rhetoric

Oh, so it's not actually a science podcast - it's anti-science ideological propaganda. Thanks for the heads-up.

internet_points16 days ago

It's a podcast, it goes back and forth between high and low density content. I tried listening to it while working and sometimes had to pause it because it got deep into e.g. explaining EEG, and then it's back to laughing at random stuff.

woof16 days ago

Summary using Claude 3.7 Sonnet:

"Your Brain On Chat GPT" Paper Analysis

In this transcript, neuroscientist Ashley and psychologist Cat critically analyze a controversial paper titled "Your Brain On Chat GPT" that claims to show negative brain effects from using large language models (LLMs).

Key Issues With the Paper:

Misleading EEG Analysis:

- The paper uses EEG (electroencephalography) to claim it measures "brain connectivity" but misuses technical methods
- EEG is a blunt instrument that measures thousands of neurons simultaneously, not direct neural connections
- The paper confuses correlation of brain activity with actual physical connectivity

Poor Research Design:

- Small sample size (54 participants with many dropouts)
- Unclear time intervals between sessions
- Vague instructions to participants
- Controlled conditions don't represent real-world LLM use

Overstated Claims:

- Invented terms like "cognitive debt" without defining them
- Makes alarmist conclusions not supported by data
- Jumps from limited lab findings to broad claims about learning and cognition

Methodological Problems:

- Methods section includes unnecessary equations but lacks crucial details
- Contains basic errors like incorrect filter settings
- Fails to cite relevant established research on memory and learning
- No clear research questions or framework

The Experts' Conclusion:

"These are questions worth asking... I do really want to know whether LLMs change the way my students think about problems. I do want to know if the offloading of cognitive tasks changes my own brain and my own cognition... We need to know these things as a society, but to pretend like this paper answers those questions is just completely wrong."

The experts emphasize that the paper appears designed to generate headlines rather than provide sound scientific insights, with potential conflicts of interest among authors who are associated with competing products.

rixed15 days ago

Thank you for that link. It's rare not to be disappointed by a podcast. The level of chit-chat is tolerable in my opinion, and the offered insights seem legit.

My guess is the commenters who didn't like it had other reasons than the content itself.

ramraj0716 days ago

Don't even need that podcast. Anyone who's done real research in any field, if they try to sit down and read the paper (if you can call it that), can immediately see that it's just self-aggrandizing, unscientific, biased garbage written by people who think they're way smarter than they are. Not unlike Wolfram's new grand unified theory, but maybe even worse.

Unfortunately, it's also being used by a lot of people who also think they're smarter than they are to confirm their pre-existing biases with bad research.

I'm not saying ChatGPT doesn't make people stupid. It very well might (my hypothesis is that it just accelerates cognition change: decline for many, incline for some). But this garbage is not how you prove it.

softwaredoug16 days ago

Druids used to decry that literacy caused people to lose their ability to memorize sacred teachings. And they’re right! But literacy still happened and we’re all either dumber or smarter for it.

alt18716 days ago

It's more complex than that. The three pillars of learning are theory (finding out about the thing), practice (doing the thing) and metacognition (being right, or more importantly, wrong, and correcting yourself). Each of those steps reinforces neural pathways. They're all essential in some form or another.

Literacy, books, saving your knowledge somewhere else removes the burden of remembering everything in your head. But they don't factor into any of those processes. So it's an immensely bad metaphor. A more apt one is GPS, which leaves you with only practice.

That's where LLMs come in, and obliterate every single one of those pillars on any mental skill. You never have to learn a thing deeply, because it's doing the knowing for you. You never have to practice, because the LLM does all the writing for you. And of course, when it's wrong, you're not wrong. So you learn nothing.

There are ways to exploit LLMs to make your brain grow, instead of shrink. You could make them into personalized teachers, catering to each student at their own rhythm. Make them give you problems, instead of ready-made solutions. Only employ them for tasks you already know how to make perfectly. Don't depend on them.

But this isn't the future OpenAI or Anthropic are gonna gift us. Not today, and not in a hundred years, because it's always gonna be more profitable to run a sycophant.

If we want LLMs to be the "better" instead of the "worse", we'll have to fight for it.

svara16 days ago

> Make them give you problems, instead of ready-made solutions

Yes, this is one of my favorite prompting styles.

If you're stuck on a problem, don't ask for a solution, ask for a framework for addressing problems of that type, and then work through it yourself.

Can help a lot with coming unstuck, and the thoughts are still your own. Oftentimes you end up not actually following the framework in the end, but it helps get the ball rolling.

NewsaHackO15 days ago

I feel as though this analysis only makes sense in hindsight; in the past, people making the case against books as a way to store knowledge outside your brain would make a similar argument, but would add a fourth pillar of memorization. Even now, a lot of people in different professions (such as law and medicine) still absolutely drill memorization as the first step in building a strong knowledge base before getting into more practical, day-to-day information. When people are forced to memorize a large amount of material in a cohesive subject, it forces the brain to make connections between ideas out of necessity to keep the information in their heads. This definitely has an effect on metacognition and practice. So I wouldn't agree with you that the analogy with books = brain rot isn't valid.

casey216 days ago

I don't buy your "theory" at all. Learning requires curiosity. If you want to know how something works, you will do all those things regardless of whether you saw it in a book or an AI spat it out. If you don't, you won't.

There is no free lunch: if you use writing to "scaffold" your learning, you trade learning speed for a limited "neural pathways" budget that could connect two useful topics. And when you stop practicing your writing (or coding, as reported by some people who stopped coding due to AI) you feel that you are getting dumber, since you scaffolded your knowledge of a topic with writing or coding rather than doing the difficult work of learning it from more pervasive conceptions.

The best thing AI taught us is not to tie your knowledge to some specific task. It's overly reactionary to recommend task/action-based education (even from an AI) in response to AI.

alt18715 days ago

If you don't buy into the acquired, existing knowledge of neuroscience and the role of neural pathways in learning, you can do whatever you want in your free time, but don't call it my theory, because it's neither mine nor a theory.

For the rest, maybe you're the chosen one who doesn't need to expend any cognitive effort to learn a subject and can just glide on your curiosity. Good for you. There are, to a good approximation, zero other people who work this way.

smileeeee16 days ago

Right, nobody gains much of anything by memorizing logarithm tables. But letting the machine tell you what you can even do with a logarithm takes away from your set of abilities, without other learning to make up for it.

giancarlostoro16 days ago

Smartphones I think did the most damage. It used to be that you had to memorize people's phone numbers. I'm sure other things, like remembering how to get from your house to someone else's, also take less cognitive effort when the GPS just tells you every time, instead of you busting out a map and thinking about your route. I've often found that if I preview a route I'm supposed to take, and use Google Street Maps to physically view key / unfamiliar parts of my route, I am drastically less likely to get lost, because "oh, this looks familiar! I turn right here!"

My wife had a similar experience, she had some college project where they had to drive up and down some roads and write about it, it was a group project, and she bought a map, and noticed that after reading the map she was more knowledgeable about the area than her sister who also grew up in the same area.

I think AI is a great opportunity for learning more about your subjects in question from books, and maybe even the AI themselves by asking for sources, always validate your intel from more authoritative sources. The AI just saved you 10 minutes? You can spend those 10 minutes reading the source material.

zelphirkalt16 days ago

About the phone numbers thing: I am now 35yo. Do I still remember the phone number of one of my best friends from primary school back then? Hell yeah, I do! These days though, I am struggling a bit with phone numbers, mostly because I don't even try. If the number is important, I will save it somewhere. Memorizing it? Nahhh... But sometimes my number brain still does that and sees some weird pattern in the number. Stuff like

"+4 and then -2 and then +6 and then -3. Aha! All makes sense! Cannot repeat the digit differences, and need to be whole numbers, so going to the next higher even number, which is 6, which is 3 when halved!"

And then I am kinda proud my brain still works, even if the found "pattern" is hilariously arbitrary.

vacuity16 days ago

Same. Somehow there tends to be some "pattern" that stands out, but I guess it's just a mix of the likelihood of "something interesting" and our minds being tuned to pick out "anything interesting". I've memorized a few SSNs and license plate numbers this way, and some digits of pi. I like it; it feels like normal memorization with a twist, without having to resort to "hardcore" techniques.

giancarlostoro15 days ago

I finally learned my wife's number last year because I got tired of being asked what her number is when picking things up for her and what not and not actually knowing it, and I've been texting her since 2007. When I learned I could just save phone numbers on my cell phone, I didn't make it a point to ever remember a phone number outside of my own number.

voidnap16 days ago

The worst part about smartphones is their browser/social media. Technically, even dumb phones like the Nokia 3310 had contact lists, so you didn't have to memorize phone numbers. And landlines had speed dial. And my family used a phonebook with a rotary-dial telephone. It's not like people had memorized as many numbers as they now have stored in their telephones.

otikik16 days ago

The ability is still there. My son dutifully memorizes all the lyrics of his favorite band’s songs.

What the druids/priests were really decrying was that people spent less time and attention on them. Religion was the first attention economy.

timeon16 days ago

This comment sounds like a distraction from the topic. The analogy is plausible, but it is not the real thing.

EGreg16 days ago

Druids? Socrates was famously against books far earlier.

Funny enough, the reason he gave against books has now finally been addressed by LLMs.

firstthrowaway16 days ago

Or, irony was being employed and Socrates wasn’t against books, but was instead noting that it’s the powerful who are against them, for facilitating the sharing of ideas across time and space more powerfully than the spoken word ever could. Books are why we even know his name, let alone the things he said.

carterschonwald16 days ago

idk, if anything I’m thinking more. The idea that I might be able to build everything I’ve ever planned out. At least the way I’m using them, it’s like the perfect assistive device for my flavor of ADHD — I get an interactive notebook I can talk through crazy stuff with. No panacea for sure, but I’m so much higher functioning it’s surreal. I’m not even using em in the volume many folks claim, more like pair programming with a somewhat mentally ill junior colleague. Much faster than I’d otherwise be.

This actually does include a crazy amount of long-form LaTeX expositions on a bunch of projects I'm having a blast iterating on. I must be experiencing what it's almost like not having ADHD.

zeroonetwothree16 days ago

Interesting. I feel like it makes my ADHD worse. If I code “manually” then I can enter hyperfocus/flow and it’s relaxing. If I use AI to code then I have to sit around waiting for it to respond and I get distracted and start something else, forgetting what I was doing before. Maybe there’s a better workflow for me though.

rom1638416 days ago

I don't have ADHD but I've set Codex CLI to send me a push notification via PushOver when it ends its turn and it helps a lot.
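
One rough sketch of how that kind of notification can be wired up: a tiny script that posts to the Pushover message API (the environment variable names are made up, and how you chain it after the agent's turn, e.g. your-agent-command && python notify.py, depends on your setup):

    import os
    import urllib.parse
    import urllib.request

    def notify(message: str) -> None:
        # Send a push notification via the Pushover message API.
        data = urllib.parse.urlencode({
            "token": os.environ["PUSHOVER_APP_TOKEN"],   # hypothetical env var: your app token
            "user": os.environ["PUSHOVER_USER_KEY"],     # hypothetical env var: your user key
            "message": message,
        }).encode()
        urllib.request.urlopen("https://api.pushover.net/1/messages.json", data=data)

    if __name__ == "__main__":
        notify("Agent turn finished")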

anthonypasq16 days ago

you gotta use faster models, this is the next big leap in agentic coding. In 2 years we will have opus 4.5 at 1000 tokens/sec and it will be glorious.

theblazehen16 days ago

Try running multiple agents - more task switching overhead, but I find planning in one agent while another is executing is a good balance for me, and avoids the getting-distracted trap

footy16 days ago

task switching is precisely an issue with adhd though

theblazehen16 days ago

I'm adhd as well, so I get the pain. I tend to try and do frontend / backend on a single project to at least stay within the same domain

danielbln15 days ago

It helps to be able to ask inside any session, at any point, "yo, what were we doing and how's it going"

ensocode16 days ago

Maybe it’s not that we’re getting stupid because we don’t use our brains anymore. It’s more like having a reliable way to make fire — so we stop obsessing over sparks and start focusing on building something more important.

jack_pp16 days ago

Instead of being the architect, engineer, plumber, electrician, carpenter you can (most of the time) just be the architect/planner. You for sure need to know how everything works in case LLMs mess the low level stuff up but it sure is nice not needing to lay bricks and dig ditches anymore and just build houses.

discreteevent16 days ago

It won't turn most people into architects. It will turn them into PMs. The function of PMs is important but without engineers you are not going to build a sustainable system. And an LLM is not an engineer.

+1
jack_pp16 days ago
+1
Capricorn248116 days ago
discreteevent16 days ago

> Maybe it’s not that we’re getting stupid because we don’t use our brains anymore.

The study shows that the brain is not getting used. We will get stupid in the same way that people with office jobs get unhealthy if they don't deliberately exercise.

jimmaswell16 days ago

Same here re: ADHD. It's been invaluable. A big project that would have been personally intractable is now easy - even if the LLM gives slightly wrong answers 20% of the time, the important thing is that it collapses the search space for what concepts or tools I need to look into and gives an overall structure to iterate on. I tend to use ChatGPT for the big planning/architectural conversation, and I find it's also very good at sample code; for code writing/editing, Copilot has been fantastic too, lately mostly using the Opus agent in my case. It's so nice being able to delegate some bullshit gruntwork to it while I either do something else or work on architecture in another window for a few minutes.

It certainly hasn't inhibited learning either. The most recent example is shaders. I started by having it just generate entire shaders based on descriptions, without really understanding the pipeline fully, and asking how to apply them in Unity. I've been generally familiar with Unity for over a decade but never really touched materials or shaders. The generated shaders were shockingly good and did what I asked, but over time I wanted to really fine tune some of the behavior and wound up with multiple passes, compute shaders, and a bunch of other cool stuff - and understanding it all on a deeper level as a result.

kminehart16 days ago

I can definitely relate to the abstract at least. While I am more productive now, and way more excited about working on longer-term projects (especially by myself), I have found that the minutiae are way more strenuous than they were before. I think that inhibits my ability to review what the LLM is producing.

I haven't been diagnosed with ADHD or anything but i also haven't been tested for it. It's something I have considered but I think it's pretty underdiagnosed in Spain.

isolli16 days ago

Indeed, I feel like AI makes it less lonely to work, and for me, it's a net positive. It still has downsides for my focus, but that can be improved...

skrebbel16 days ago

Can you elaborate on how you use AI for this? Do you do it for coding or for “everything?”

notrealyme12316 days ago

I am currently writing a paper and I am thinking exactly the same.

That must be how normal people feel.

kaydub16 days ago

Yeah, I'd say I'm thinking and doing way more.

One of my favorite things is that I no longer feel like I need to keep up with "framework of the year"

I came up over a decade ago; the places I worked were heavy on Java and Spring. Frontends were jQuery back then. Since then I've moved around positions quite a bit, across many different frameworks, but typically server-side rendered MVC types, and these days I work as an SRE. The last 5 years I've fiddled with frontend frameworks and SPAs but never really got into it. I just don't have it in me to learn ANOTHER framework.

I had quite a few projects, all using older patterns/frameworks/paradigms. Unfortunately these older paradigms don't lend themselves to "serverless" architecture. So when I want to actually run and deploy something I've gotta deploy it to a server (or ecs task). That shit starts to cost a bit of money, so I've never been able to keep projects running very long... typically because the next idea comes up and I start working on that and decide to spend money on the new things.

I've been working at a cloud native shop the last 7 years now. Damn, you can run shit CHEAP in AWS if you know what you're doing. I know what I'm doing for parts of that, using dynamodb instead of rds, lambdas instead of servers. But I could never get far enough with modern frontend frameworks to actually migrate my apps to these patterns.

Well, now it's easy.

"Hey Claude, look at this repo here, I want to move it to AWS lambdas + apigw + cloudfront. Break the frontend out into a SPA using vue3. I've copied some other apps and patterns {here} so go view those for how to do it"

And that's just the start.

I never thought I'd get into game development but it's opened that up to me as well (though, since I'm not an artist professionally, I have issues getting generative AI to make assets, so I'm stuck plodding along in Aseprite and Photoshop making shit graphics lol). I've got one simple game like 80% done and ideas for the next one.

I never got too far down mobile development either. But one of the apps I made it could be super useful to have a mobile app. Describe the ux/ui/user flow, tell it where to find the api endpoints, and wham bam, android app developed.

Does it make perfect code one shot? Sometimes, but not often; I'll have to nudge it along. Does it make good architectural decisions? Not often on its own; again, I'll nudge it, or even better, I'll spin up another agent to do code reviews and feed the reviews back into the agent building out the app. Keep doing that loop until I feel like the code review agent is really reaching or being too nitpicky.

And holy shit, I've been able to work on multiple things at the same time this way. Like completely different domains, just have different agents running and doing work.

baddash15 days ago

I've had the same type of experience where I feel like the knowledge barrier for a lot of projects has been made much smaller than it used to be :D

btw, I have a couple of questions just out of curiosity: What tools do you use besides Claude? Do you have a local or preferred setup? and do you know of any communities where discussion about LLM/general AI tool use is the focus, amongst programmers/ML engineers? Been trying to be more informed as to what tools are out there and more up to date on this field that is progressing very quickly.

kaydub15 days ago

Claude is my favorite and at work it's what we officially use. At home I pay for Claude by the token, but I have a Gemini and ChatGPT account. So at home I use a lot more Gemini CLI and Codex.

For my setup, I make sure I have good markdown files and I use beads. I'll usually have an AGENTS.md, CLAUDE.md, GEMINI.md in every project and 99.9% of the time they're the exact same. I always make sure to keep these files up to date. If the LLM does something I don't like and I can foresee it being a problem, I'll add it to the markdown file as something not to do.

My markdown files generally have multiple sections. There's always a good chunk describing the app (or in a non software case, the goal or purpose). Some design/architecture decisions will make it into the markdown files. How to build/test are in the markdown files.

I think it helps that I already have good patterns and structure to most things I build. I have moved more to a monorepo since LLMs came out. So an android app won't be in a separate repo from the webapp; instead they're all in the same repo with different directories (frontend vs {app}-android/{app}-iphone/{app}-mobile). Everything I build gets deployed to AWS and I have good patterns for that. Make for builds/deploys/tests: I don't ever run terraform or npm or maven or any other builds on the cli; if I'm running it, it goes in the Makefile. All apps follow the same Makefile patterns where certain commands all get rolled up into the same one (make plan, make build, make deploy) using the same general env vars.

Now for tools and such, I feel like just the cli agents themselves are it. On personal stuff that's 100% all I use, the cli agent. At work I integrate with some MCPs and I've created and use some skills/plugins, but tbh I don't feel like they make a big difference or are necessary. I think the non-deterministic nature of the tool makes these unnecessary. Like sometimes I have to explicitly tell the agent to use the MCP. Sometimes the MCP takes up more context than having the llm create a script to hit an API and recreate the MCP's functionality.

And when I have questions, like you did here, I ask the llms first. I ask a lot of "meta" questions to the llm in my sessions even. I like to think it primes it for going down the path you want.

ensocode16 days ago

I feel the same. Do you think this is because the ADHD brain has so many ideas or is it the same for neuro-normal people?

netsharc16 days ago

An obvious comparison is probably the habitual usage of GPS navigation. Some people blindly follow them and some seemingly don't even remember routes they routinely take.

nerdsniper16 days ago

I found a great fix for this was to lock my screen maps to North-Up. That teaches me the shape of the city and greatly enhances location/route/direction awareness.

It’s cheap, easy, and quite effective to passively learn the maps over the course of time.

My similar ‘hack’ for LLMs has been to try to “race” the AI. I’ll type out a detailed prompt, then go dive into solving the same problem myself while it chews through thinking tokens. The competitive nature of it keeps me focused, and it’s rewarding when I win with a faster or better solution.

layman5116 days ago

That's a great tip, but I know some people hate that because there is some cognitive load if they rely more on visuals and have to think more about which way to turn or face when they first start the route, or have to make turns on unfamiliar routes.

I also wanted to mention that just spending some time looking at the maps and comparing differences in each services' suggested routes can be helpful for developing direction awareness of a place. I think this is analogous to not locking yourself into a particular LLM.

Lastly, I know that some apps might have an option to give you only alerts (traffic, weather, hazards) during your usual commute so that you're not relying on turn-by-turn instructions. I think this is interesting because I had heard that many years ago, Microsoft was making something called "Microsoft Soundscape" to help visually impaired users develop directional awareness.

imp0cat16 days ago

    some cognitive load 
That's the entire point of it though, to make you more aware of where you are and which way you should go.
hombre_fatal16 days ago

I try using north-up for that reason, but it loses the smart-zooming feature you get with the POV camera, like zooming in when you need to perform an action, and zooming back out when you're on the highway.

I was shocked into using it when I realized that when using the POV GPS cam, I couldn't even tell you which quadrant of the city I just navigated to.

I wish the north-up UX were more polished.

simulator5g16 days ago

Unpolished north-up mode is a feature, the stakeholders want addicted users.

Liftyee16 days ago

I haven't tried this technique yet, sounds interesting.

Living in a city where phone-snatching thieves are widely reported on built my habit of memorising the next couple steps quickly (e.g. 2nd street on the left, then right by the station), then looking out for them without the map. North-Up helps anyways because you don't have to separately figure out which erratic direction the magnetic compass has picked this time (maybe it's to do with the magnetic stuff I EDC.)

zeroonetwothree16 days ago

I only use GPS navigation if I'm in an unfamiliar location where I won't have to travel again. If it's around where I live or my office then I actually look up directions on my phone and just follow them mentally. So I have a really good mental model of where everything is now.

It also helps if you go around via a slower transport like biking or running, since it helps you to get the layout better.

iib16 days ago

This is explained in more detail in the book "Human Being: reclaim 12 vital skills we’re losing to technology", which I think I found on HN a few months ago.

The first chapter goes into human navigation and it gives this exact suggestion, locking the North up, as a way to regain some of the lost navigational skills.

themk16 days ago

I actually noticed this as a kid. One of the early GTA games had a north-locked minimap, and I knew the city well. Later ones did not, and I was always more confused.

I've pretty much always had GPS nav locked to North-Up because of this experience.

netsharc16 days ago

Yeah, I'm a North-Up cult member too, after seeing a behind the scenes video of Jeremy Clarkson from Top Gear suggesting it, claiming "never get lost again".

jchw16 days ago

I recall reading that over-reliance on GPS navigation is legitimately bad for your brain health.

https://www.nature.com/articles/s41598-020-62877-0

This is rather scary. Obviously, it makes me think of my own personal over-reliance on GPS, but I am really worried about a young relative of mine, whose car will remain stationary for as long as it takes to get a GPS lock... indefinitely.

stephen_g16 days ago

This is one I've never found really affects me - I think because I just always plan that the third or fourth time I go somewhere I won't use the navigation, so you are in a mindset of needing to remember the turns and which lane you should be in etc.

Not sure how that maps onto LLM use. I have avoided it almost completely because I've seen colleagues start to fall into really bad habits (like spending days adjusting prompts to try and get them to generate code that fixes an issue that we could have worked through together in about two hours). I can't see an equivalent way to not just start to outsource your thinking...

codazoda16 days ago

I have ALWAYS had this problem. It's like my brain thinks places I frequent are unimportant details and ejects them to make room for other things.

I have to visit a place several times and with regularity to remember it. Otherwise, out it goes. GPS has made this a non-issue; I use it frequently.

For me, however, GPS didn't cause the problem. I was driving for 5 or 6 years before it became ubiquitous.

netsharc15 days ago

I ask AI how to solve programming problems (e.g. how to check hung database sessions), and I realize the next time I needed it, I never memorized the command and have to look back in the logs...

yndoendo16 days ago

Some people have the ability to navigate with land markers quickly and some people don't.

I saw this first hand with coworkers. We would have to navigate large buildings. I could easily find my way around while others did not know whether to take a left or right hand turn off the elevators.

That ability has nothing to do with GPS. Some people need more time for their navigation skills to kick in. Just like some people need to spend more time on Math, Reading, Writing, ... to be competent compared to others.

iammjm16 days ago

I think it has much to do with the GPS. Having a GPS allows you to turn off your brain: you just go on autopilot. Without a GPS you actually have to create and update a mental model of where you are and where you are going to: maybe preplan your route, count the doors, list a sequence of left-right turns, observe for characteristic landmarks and commit them to memory. Sure, it is a skill, but it is sure to not be developed if there's no need for it. I suspect it's similar with AI-assisted coding or essay writing.

zelphirkalt16 days ago

I think a big part of not knowing regularly taken routes is just over-reliance on GPS and subsequent self-doubt. When I am in a foreign city, I check the map on how to walk somewhere. I can easily remember some sequence of left and right turns. But in reality I still look again at the map and my position, to "make sure" I am still on the right track. Sometimes I check so often that I become annoyed at myself for all the phone-looking, and then I intentionally try not to look for a while. It is stressful to follow the OCD, or whatever it is, and check at every turn. If I don't have to check at every turn (or call it syncing my understanding of where I am with the position on the map), then I have more awareness of the surroundings, might even be able to enjoy them more, and might even feel free to choose another, more interesting looking path.

For this experience I am not sure, whether people really don't know regularly taken routes, or they just completely lack the confidence in their familiarity with it.

raincole16 days ago

Yes. My father never uses GPS at all. He memorized all the main roads in our city.

It's amazing to see how he navigates the city. But however amazing it is, he's only correct perhaps 95 times out of 100. And the number will only go down as he gets older. Meanwhile he has the 99.99% correct answer right in the front panel.

zeroonetwothree16 days ago

While GPS will always give you a correct route, it won’t necessarily give you the best route (based on your own personal preferences).

4k93n215 days ago

checking out a streetview app beforehand is another option and makes it a bit easier to memorise things. instead of having to remember '2nd left, 3rd right, 1st left' you only have to remember the landmarks at each turn and then the instructions become 'left, right, left'

another thing ive done a few times for long journeys is to write down on paper a list of the road numbers and then beside each number write the distance that needs to be travelled on that road. just do the route in an app before you leave and copy the details from that. having only the list to work off definitely forces you to keep your brain more active

culi16 days ago

My friend works with people in their 20s. She recently brought up her struggles to do the math in her head for when to clock in/out for their lunches (30 minutes after an arbitrary time). The young coworker's response was "Oh I just put it into ChatGPT"

The kids are using ChatGPT for simple maths...

Quothling16 days ago

That'll lead to interesting results. I used a couple of LLMs for my blood bowl statistics, and they get rather simple math wrong. Which makes sense, they aren't built for math after all. It's wild how wrong they can get results though; I'd give the same prompt to 6 different AIs and they'd all get it wrong in 6 different ways.

On a side note, the most hilarious part of it was when I asked gemini to do something for me in Google Sheets and it kept referring to it as Excel. Even after I corrected it.

volemo16 days ago

Are you sure the coworker wasn't joking? Because if somebody confessed to me they struggle to add half an hour to a time point, my first reaction would definitely be to laugh it off.

culi15 days ago

This is not at all the only instance I've heard of ridiculous levels of reliance on LLMs from her young coworkers (yes, multiple stories from multiple persons). This is just the most recent and most prominent in my mind.

jampekka16 days ago

I've been using a Python prompt or the browser URL bar for simple maths for over a decade. I don't see much added value in doing arithmetic manually, humans really suck at it.
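For example, the lunch-clock arithmetic mentioned upthread is a one-liner at the Python prompt (the clock-in time here is invented for illustration):

    >>> from datetime import datetime, timedelta
    >>> clock_in = datetime.strptime("11:47", "%H:%M")   # whatever time the shift started
    >>> (clock_in + timedelta(minutes=30)).strftime("%H:%M")
    '12:17'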

aoeusnth115 days ago

It's easy to miss the value in something you don't do. I do fermi estimates in my head all the time and it would be exhausting to constantly pull out my phone to calculate things, to the point that I would stop attempting it as much as I do.

culi15 days ago

LLMs are notoriously unreliable at math but even more than that it's about using the appropriate tool for the job. When you Google something, google is smart enough to give you a simple calculator. A simple LLM query like this uses about as much electricity as running a lightbulb for 15 minutes

phyzome15 days ago

Humans don't suck at arithmetic.

Anecdata: Most cashiers used to be able to give correct change at checkout very quickly; only a few would type it into the register to have it do the math. Nowadays, with so many people using cards etc., many of them freeze up and struggle with basic change-making.

It's just a matter of keeping in practice and not letting your skills atrophy.

asdff16 days ago

You can’t add 30 minutes in your head?

imzadi16 days ago

Eh. I have a math degree. Aced all the advanced maths. Was the only one to get an A in Diff Eq. I love math. I've never been able to do simple math in my head. I can't even remember the times tables half the time. Simple math isn't really problem solving.

moregrist15 days ago

People who major in mathematics are really good at mathematical abstraction and are _notorious_ for their inability to do basic arithmetic. To the degree that it's a stereotype with a strong grounding in reality.

In college we had a rule for splitting the check at a restaurant: the youngest non-math major had to do it. Not being a math major, I'm not sure what happened when the table was all math majors. It wasn't a frequent occurrence; there was a strong likelihood of a physicist or an engineer being around.

culi15 days ago

That's absolutely valid, but running a simple query to an LLM uses about the same amount of electricity as running a lightbulb for 15 minutes.

It would've been faster to open up the calculator app and type in the numbers and get an instant response instead of opening up the ChatGPT app, typing in your question, waiting dozens of seconds, and getting a long response back.

calf15 days ago

But you could if you wanted to, probably.

zeroonetwothree16 days ago

It’s one type of problem solving.

booleandilemma16 days ago

It's ok this is just the next level of human evolution. We haven't needed to know how to do basic math since the calculator. Nowadays our AIs can read and write for us too. More obsolete skills. We can focus on higher level things now. No more focusing on sparks, we can focus on building something important. We don't have an attention span over 5 seconds anyway thanks to social media. If you don't get where I'm coming from you probably don't have ADHD but that's fine.

forsakenharmony16 days ago

you need a spark to start a fire, if you offload everything to the LLM you won't understand the higher level things

booleandilemma16 days ago

Completely agree fwiw. My comment sarcastically paraphrased a few other AI slop lovers I've seen in this comment section.

rishabhaiover15 days ago

As a student who has used these tools extensively, I can confirm that AI assistance in learning does more harm than benefit. The struggle to learn, backtracking from an incorrect assumption, and reflection after completing the objective are all short-circuited with agentic tool use. That's not to say these tools aren't useful, but I wish they wouldn't sell such a utopian dream of productivity. It's good for some, bad for most.

Earlier, I had to only keep my phone away and not open Instagram while studying. Now, even thinking can be partially offloaded to an automated system.

misswaterfairy16 days ago

It seems this study has been discussed on HN before, though it was recently revised in late December 2025.

https://arxiv.org/abs/2506.08872

dang16 days ago

Thanks - macroexpanded:

Accumulation of cognitive debt when using an AI assistant for essay writing task - https://news.ycombinator.com/item?id=44286277 - June 2025 (426 comments)

captain_coffee16 days ago

Curious what the long-term effects from the current LLM-based "AI" systems embedded in virtually everything and pushed aggressively will be in let's say 10 years, any strong opinions or predictions on this topic?

m4rtink16 days ago

Like with asbestos and lead paint, we are building surprises today for the people of tomorrow!

And asbestos and lead paint were actually useful.

yesco16 days ago

If we focus only on the impact on linguistics, I predict things will go something like this:

As LLM use normalizes for essay writing (email, documentation, social media, etc), a pattern emerges where everyone uses an LLM as an editor. People only create rough drafts and then have their "editor" make it coherent.

Interestingly, people might start using said editor prompts to express themselves, causing an increased range in distinct writing styles. Despite this, vocabulary and semantics as a whole become more uniform. Spelling errors and typos become increasingly rare.

In parallel, people start using LLMs to summarize content in a style they prefer.

Both sides of this gradually converge. Content gets explicitly written in a way that is optimized for consumption by an LLM, perhaps a return to something like the semantic web. Authors write content in a way that encourages a summarizing LLM to summarize as the author intends for certain explicit areas.

Human languages start to evolve in a direction that could be considered more coherent than before, and perhaps less ambiguous. Language is the primary interface an LLM uses with humans, so even if LLM use becomes baseline for many things, if information is not being communicated effectively then an LLM would be failing at its job. I'm personifying LLMs a bit here but I just mean it in a game theory / incentive structure way.

Peritract16 days ago

> people might start using said editor prompts to express themselves, causing an increased range in distinct writing styles

We're already seeing people use AI to express themselves in several contexts, but it doesn't lead to an increased range of styles. It leads to one style, the now-ubiquitous upbeat LinkedIn tone.

Theoretically we could see diversification here, with different tools prompting towards different voices, but at the moment the trend is the opposite.

cluckindan16 days ago

>Human languages start to evolve in a direction that could be considered more coherent than before

Guttural vocalizations accompanied by frantic gesturing towards a mobile device, or just silence and showing of LLM output to others?

yesco16 days ago

I was primarily discussing written language in my post, as that's easier to speculate on.

That said, if most people turn into hermits and start living in pods around this period, then I think you would be in the right direction.

efreak13 days ago

Eventually the spacers will depopulate the planet and we'll live alone. The robotics aren't quite there yet though.

basch16 days ago

>People only create rough drafts and then have their "editor" make it coherent.

While sometimes I do dump a bunch of scratch work and ask for it to be transformed into organized thought, more often I find that I use LLM output the opposite way.

Give a prompt. Save the text. Reroll. Save the text. Change the prompt, reroll. Then going through the heap of vomit to find the diamonds. Sort of a modern version of "write drunk, edit sober" with the LLM being the alcohol in the drunk half of me. It can work as a brainstorming step to turn fragments of thought into a bunch of drafts of thought, then to be edited down into elegant thought. Asking the LLM to synthesize its drafts usually discards the best nuggets for lesser variants.

netsharc16 days ago

Hopefully the brainrot will mean older developers, who know how to code the old-fashioned way, don't get replaced so quickly..

nly16 days ago

Or they'll be fired for not working fast enough, which already happens

binary13216 days ago

Most people will continue to become dumber. Some people will try to embrace and adapt. They will become the power-stupids. Others will develop a sort of immune reaction to AI and develop into a separate evolutionary family.

SecretDreams16 days ago

It'll be a lot like giving children all the answers without teaching them how to get the answers for themselves.

morpheos13716 days ago

I must use AI differently than most because I find it stimulates deep thinking (not necessarily productive). I don't ask for answers. I ask for constraints and invariants and test them dialectically. The power in LLMs is in finding deep associations of pattern which the human mind can then validate. LLMs are best used, in my opinion, not as an oracle of truth or an assistant but as a fast collective mental latent space lookup tool. If you have a concept or a specification you can use the LLM to find paths to develop it that you might not have been aware of. You get out what you put in, and critical thinking is always key.

I believe the secret power in LLMs lies not so much in the transformer model but in the meaning inherent in language. With the right language you can shape output to reveal structure you might not have realized otherwise. We are seeing this power even now in LLMs proving Erdos problems or problems in group theory. Yes, the machine may struggle to count the 'r's in strawberry but it can discern abstract relations.

An interesting visual exercise to see latent information structure in language is to pixelize a large corpus as a bitmap by translating the characters to binary and then running various transforms on it; what emerges is not a picture of random noise but a fractal-like chaos of "worms" or "waves." This is what LLMs are navigating in their high dimensional latent space. Words are not just arbitrary symbols but objects on a connected graph.
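If anyone wants to try it, here is a minimal sketch of that exercise in Python (the file name, grid width, and library choices are mine; the transforms themselves are left out):

    import numpy as np
    from PIL import Image

    # "corpus.txt" is a placeholder for any large text file.
    with open("corpus.txt", "rb") as f:
        data = np.frombuffer(f.read(), dtype=np.uint8)

    # Each character becomes 8 bits; lay the bit stream out on a 2D grid.
    bits = np.unpackbits(data)
    width = 1024
    rows = len(bits) // width
    grid = bits[: rows * width].reshape(rows, width)

    # Scale 0/1 to 0/255 and save as a grayscale bitmap to inspect or transform further.
    Image.fromarray((grid * 255).astype(np.uint8)).save("corpus_bits.png")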

k8sToGo16 days ago

The title is missing an important part "... for Essay Writing Task"

phyzome15 days ago

Or "...For the Tasks That Were Measured". You can always complain that it's not universal enough.

Elizer0x030916 days ago

There's a skill of problem solving that will differentiate winners versus losers.

I'm so grateful for AI and always use it to help get stuff done while also documenting the rationale it takes to go from point A to B.

Although it has failed many times, I've had ZERO problems backtracking, debugging its thinking, understand what it has done and where it has failed.

We definitely need to bring back courses on "theory of knowledge", the "art of problem solving", etc.

nphardon16 days ago

Interesting finding: not using the brain leads to a whack brain. Or: we had 10 people play tennis and ten watch a robot play tennis. The people who played tennis stimulated more muscles in their arm while playing tennis than the people who watched the robot play tennis.

ygouzerh16 days ago

It did feel like what the paper said is just: "If you are using a tool in order to make hard work feel easier... then your brain is not working as much"

nphardon15 days ago

Yea, the title too is click bait, and based on the abstract, the whole study is click bait. "Ai = Bad". If I use an llm to do things, then it frees up time for me to use my brain in other ways, or if I outsource jobs to my llm, that should allow me to focus on higher-level tasks. It's just a lame experiment, unless there's more in the paper that I missed since I just read the abstract.

potatoman2216 days ago

I've definitely noticed an association between how much I vibe code something and how good my internal model of the system is. That bit about LLM users not being able to quote their essay resonates too: "oh we have that unit test?"

moron4hire16 days ago

ChatGPT got me over my imposter syndrome.

Back when it came out, it was all the rage at my company and we were all trying it for different things. After a while, I realized, if people were willing to accept the bullshit that LLMs put out, then I had been worrying about nothing all along.

That, plus getting an LLM to write anything with meaning takes putting the meaning in the prompt, pushed me to finally stop agonizing over emails and just write the damn things as simply and concisely as possible. I don't need a bullshit engine inflating my own words to say what I already know, just to have someone on the other end use the same bullshit engine to remove all that extra fluff to summarize. I can just write the point straight away and send it immediately.

You can literally just say anything in an email and nobody is going to say it's right or wrong, because they themselves don't know. Hell, they probably aren't even reading it. Most of the time I'm replying just to let someone know I read their email so they don't have to come to my office later and ask me if I read the email.

Every time someone says the latest release is a "game changer", I check back out of morbid curiosity. Still don't see what games have changed.

casey216 days ago

The goal of any study is to build a mental model in your head. The math curriculum for example is based on analysis so we gain an intuitive feel for physics and engineering. If the utility of building a model for research is low (essentially 0 since the advent of the internet) this should be a specialist skill, not general education.

A general education should focus on structure, all mental models built shall reinforce one another. For specific recommendations, completely replace the current Euler inspired curricula with one based on category theory. Strive to make all home and class work multimedia, multi-discipline presentations. Seriously teach one constructed meta-language from kindergarten. And stop passing along students who fail, clearly communicate the requirements.

I believe this is vital for students. Think about student-AI interaction: does this thing the AI is telling me fit with my understanding of the world? If it does, they will accept it. If the student can think structurally, the mismatch will be as obvious as a square peg in a round hole. A simple check for an isomorphism, essentially expediting a proof certificate of the model output.

foota16 days ago

Imo programming is fairly different between vibes based not looking at it at all and using AI to complete tasks. I still feel engaged when I'm more actively "working with" the AI as opposed to a more hands off "do X for me".

I don't know that the same makes as much sense to evaluate in an essay context, because it's not really the same. I guess the equivalent would be having an existing essay (maybe written by yourself, maybe not) and using AI to make small edits to it like "instead of arguing X, argue Y then X" or something.

Interestingly I find myself doing a mix of both "vibing" and more careful work, like the other day I used it to update some code that I cared about and wanted to understand better that I was more engaged in, but also simultaneously to make a dashboard that I used to look at the output from the code that I didn't care about at all so long as it worked.

I suspect that the vibe coding would be more like drafting an essay from the mental engagement POV.

uriegas16 days ago

I find it very useful for code comprehension. For writing code it still struggles (at least codex) and sometimes I feel I could have written the code myself faster rather than correct it every time it does something wrong.

Jeremy Howard argues that we should use LLMs to help us learn, once you let it reason for you then things go bad and you start getting cognitive debt. I agree with this.

falloutx16 days ago

AI is not a great partner to code with. For me I just use it to do some boilerplates and fill in the tedious gaps. Even for translations it's bad if you know both languages. The biggest issue is that AI constantly tries to steer you wrong; it's so subtle in programming that you only realize it a week later when you get stuck in a vibe coding quagmire.

foota16 days ago

shrug YMMV. I was definitely a bit of a luddite for a while, and I still definitely don't consider myself an "AI person", but I've found them useful. I can have them do legitimately useful things, with varying degrees of supervision.

I wouldn't ask Cursor to go off and write software from scratch that I need to take ownership of, but I'm reasonably comfortable at this point having it make small changes under direction and with guidance.

The project I mentioned above was adding otel tracing to something, and it wrote a trace viewing UI that has all the features I need and works well, without me having to spend hours getting it set up.

yndoendo16 days ago

How can you validate ML content when you don't have educated people?

Thinking everything ML produces is valid just short-circuits the brain.

I see AI wars as creating coherent stories. Company X starts using ML and they believe what was produced is valid and can grow their stock. Reality is that Company Y poisoned the ML and the product or solution will fail, not right away but over time.

jchw16 days ago

I try my best to make meta-comments sparingly, but, it's worth noting the abstract linked here isn't really that long. Gloating that you didn't bother to read it before commenting, on a brief abstract for a paper about "cognitive debt" due to avoiding the use of cognitive skills, has a certain sad irony to it.

The study seems interesting, and my confirmation bias also does support it, though the sample size seems quite small. It definitely is a little worrisome, though framing it as being a step further than search engine use makes it at least a little less concerning.

We probably need more studies like this, across more topics with more sample size, but if we're all forced to use LLMs at work, I'm not sure how much good it will do in the end.

ETH_start16 days ago

It takes real effort to maintain a solid understanding of the subject matter when using AI. That is the core takeaway of the study to me, and it lines up with something I have vaguely noticed over time. What makes this especially tricky is that the downside is very stealthy. You do not feel yourself learning less in the moment. Performance stays high, things feel easy, and nothing obviously breaks. So unless someone is actively monitoring their own understanding, it is very easy to drift into a state where you are producing decent-looking work without actually having a deep grasp of what you are doing. That is dangerous in the long run, because if you do not really understand a subject, it will limit the quality and range of work you can produce later. This means people need to be made explicitly aware of this effect, and individually they need to put real effort into checking whether they actually understand what they are producing when they use AI.

That said, I also think it is important to not get an overly negative takeaway from the study. Many of the findings are exactly what you would expect if AI is functioning as a form of cognitive augmentation. Over time, you externalize more of the work to the tool. That is not automatically a bad thing. Externalization is precisely why tools increase productivity. When you use AI, you can often get more done because you are spending less cognitive effort per unit of output.

And this gets to what I see as the study's main limitation. It compares different groups on a fixed unit of output, which implicitly assumes that AI users will produce the same amount of work as non-AI users. But that is not how AI is actually used in the real world. In practice, people often use AI to produce much more output, not the same output with less effort. If you hold output constant, of course the AI group will show lower cognitive engagement. A more realistic scenario is that AI users increase their output until their cognitive load is similar to before, just spread across more work. That dimension is not captured by the experimental design.

pranavj16 days ago

Studies like this remind me of early concerns about calculators making students "worse at math." The reality is that tools change what skills matter, not whether people think.

We're heading toward AI-first systems whether we like it or not. The interesting question isn't "does AI reduce brain connectivity for essay writing" - it's how we redesign education, work, and products around the assumption that everyone has access to powerful AI. The people who figure out how to leverage AI for higher-order thinking will massively outperform those still doing everything manually.

Cognitive debt is real if you're using AI to avoid thinking. But it's cognitive leverage if you're using AI to think faster and about bigger problems.

pton_xd16 days ago

> Studies like this remind me of early concerns about calculators making students "worse at math." The reality is that tools change what skills matter, not whether people think.

Over-reliance on calculators does make you worse at math. I (shamefully) skated through Calculus 3 by just typing everything into my TI-89. Now as an adult I have no recollection of anything I did in that class. I don't even remember how to use the TI-89, so it was basically a complete waste of my time. But I still remember the more basic calculus concepts from all the equations I solved by hand in Calc 1 and 2.

I'm not saying "calculators bad" but misusing them in the learning process is a risk.

pixl9716 days ago

>But I still remember the more basic calculus

All this is saying that more basic things are easier to remember than more complex things and without further evidence is very very limited in predictive power.

a45646316 days ago

The amount of delusion about "bigger problems" is striking. You won't be able to solve bigger problems if you don't understand the details and nuances of how things are made.

And yet people complain that management is out of touch with reality, MBA driven businesses are out of touch, PE firms are out of touch, designers are out of touch with product (look at the touch screen cars, made by people who have never driven one). I can't even.

thisguystinks15 days ago

The real question is whether one should live at all in a world devoid of meaning, where the only source of meaning available to you is having a fake job with a better title than the next guy and a big number in a fake account.

pfannkuchen16 days ago

Talking to LLMs reminds me of arguing with a certain flavor of Russian. When you clarify based on a misunderstanding of theirs, they act like your clarification is a fresh claim which avoids them ever having to backpedal. It strikes me as intellectually dishonest in a way I find very grating. I do find it interesting though as the incentives that produce the behavior in both cases may be similar.

boomlinde16 days ago

"What you said just now isn't true at all and you should reconsider the premise"

"Exactly!"

carbine15 days ago

I skimmed this but am I reading correctly, participants were given 20 minutes to write an essay and asked to do their best and then given (or not given) access to a tool to help? There's zero incentive here not to optimize for shortcuts and task completion.

This is very different from, say, writing an essay I'm gonna publish on my blog under my own name. I would be MUCH more interested in an experiment that isolates people working on highly cognitively demanding work that MATTERS to them, and seeing what impact LLMs do (or don't) have on cognitive function. Otherwise, this seems like a study designed to confirm a narrative.

What am I missing

coopykins16 days ago

When I have to put together a quick fix, I reach out to Claude Code these days. I know I can give it the specifics and, in my recent experience, it will find the issue and propose a fix. Now, I have two options: I can trust it or I can dig in and understand why it's happening myself. I sacrifice gaining knowledge for time. I often choose the latter, and put my time in areas I think are more important than this, but I'm aware of it.

If you give up your hands-on interaction with a system, you will lose your insight about it.

When you build an application yourself, you know every part of it. When you vibe code, trying to debug something in there is a black box of code you've never seen before.

That is one of the concerns I have when people suggest that LLMs are great for learning. I think the opposite, they're great for skipping 'learning' and just get the results. Learning comes from doing the grunt work.

I use LLMs to find stuff often, when I'm researching or I need to write an ADR, but I do the writing myself, because otherwise it's easy to fall into the trap of thinking that you know what the 'LLM' is talking about, when in fact you are clueless about it. I find it harder to write about something I'm not familiar with, and then I know I have to look more into it.

wtetzner16 days ago

I think LLMs can be great for learning, but not if you're using them to do work for you. I find them most valuable for explaining concepts I've been trying to learn, but have gotten stuck and am struggling to find good resources for.

ensocode16 days ago

> I think the opposite, they're great for skipping 'learning' and just get the results.

Yes, and cars skip the hours of walking, planes skip weeks of swimming, calculators skip the calculating ...

freakynit10 days ago

I summarized all comments and overall discussion using LLM for better reading: https://hn-discussions.top/cognitive-debt-chatgpt/

spongebobstoes16 days ago

the article suggests that the LLM group had better essays as graded by both human and AI reviewers, but they used less brain power

this doesn't seem like a clear problem. perhaps people can accomplish more difficult tasks with LLM assistance, and in those more difficult tasks still see full brain engagement?

using less brain power for a better result doesn't seem like a clear problem. it might reveal shortcomings in our education system, since these were SAT style questions. I'm sure calculator users experience the same effects vs mental mathematics

ETH_start16 days ago

Bingo. Overall it's a massive plus.

blutoot16 days ago

There’s only one solution to this problem at this point. Make AI significantly less affordable and accessible. Raise the prices of Pro / Plus / max / ultra tiers, introduce time limits, especially for minors (like screen time) when the LLM can detect age better. This will be a win-win solution: (a) people will be forced to go back to “old ways” of doing whatever it is that AI was doing it for them, (b) we won’t need as many data-centers as the AI companies are projecting today.

xlbuttplug216 days ago

Or simply embrace ignorance. Why hold on to things you don't use? Accept AI as an extension to your brain, and let the now dormant parts atrophy.

Yes, you will be vulnerable should you lose access to AI at some point, but the same goes for a limb. You will adapt.

nothrowaways16 days ago

> Cognitive activity scaled down in relation to external tool use

tern15 days ago

I've recently become interested in using LLMs for things that are actually beyond human comprehension by using kind of insane prompts and then consistently having the model create, for example, "a coherent mathematical model" of the conceptual space we're in at the moment.

I'm very curious to see if we start to see things like this as a new skill, requiring a different cognitive style that's not measured in studies like this.

treenode16 days ago

I don't see why this is unexpected. 'Using your brain actively vs evaluating AI' is neurally equivalent to 'active recall vs reading notes'.

0dayz16 days ago

It's a bit tiring seeing these extreme positions on AI crop up time and time again. AI is not some cure-all for code stagnation or creating products, nor is it destroying productivity.

It's a tool, and this study at most indicates that we don't use as much brain power for the specific tasks of coding. But do they look into, for instance, maintenance or management of code?

As that is what you'll be relegated to when vibe coding.

yomismoaqui16 days ago

Lukewarm opinions on the Internet? Where do you think we are...? We only deal in absolutes here.

0dayz15 days ago

Oh crap you're right.

Ai is not a tool, you the developer is!

xenophonf16 days ago

I'm very impressed. This isn't a paper so much as a monograph. And I'm very inclined to agree with the results of this study, which makes me suspicious. To what journal was this submitted? Where's the peer review? Has anyone gone through the paper (https://arxiv.org/pdf/2506.08872) and picked it apart?

DocTomoe16 days ago

I love the parts where they point out that human evaluators gave wildly different evaluations as compared to an AI evaluator, and openly admitted they dislike a more introverted way of writing (fewer flourishes, less speculation, fewer random typos, more to the point, more facts) and prefer texts with a little spunk in it (= content doesn't ultimately matter, just don't bore us.)

mettlerse16 days ago

Article seems long, need to run it through an LLM.

lapetitejort16 days ago

Doesn't look like anything to me

fhd216 days ago

Perfection.

SecretDreams16 days ago

When you're done, let us know so we can aggregate your summarized comment with the rest of the thread comments to back out key, human informed, findings.

observationist16 days ago

Grug no need think big, Grug brain happy. Magic Rock good!

jacquesm16 days ago

That was still one of the best finds on HN in a long time.

https://grugbrain.dev/

Carson Gross sure knows how to stay in character.

canxerian16 days ago

My use case for ChatGPT is to delegate mental effort on certain tasks, so that I can pour my mental energy on to things I truly care about, like family, certain hobbies and relationships.

If you are feeling over-reliant on these tools, then a quick fix that's worked for me is to have real conversations with real people. Organise a coffee date if you must.

lukeinator4216 days ago

I think it's worth looking at this commentary on the study: https://arxiv.org/pdf/2601.00856. It aligns with a lot of our intuitions, but the study should definitely be taken with a grain of salt.

nospice16 days ago

I'm going to lose my mind. This commentary is almost certainly LLM generated.

jaypatelani16 days ago

But seeing posts like this also helps one wonder we might need AI more than we think https://www.reddit.com/r/Indian_flex/s/JMqcavbxqu

moffers16 days ago

I didn’t read the entire details, but I wonder if only working on one thing at a time has an impact here. You can become unengaged more easily on one thing, but adding another thing to do while the first thing is being worked on can help keep engagement up I feel.

ReptileMan16 days ago

I have a whole phonebook of numbers I know by heart, all of them learned before my first mobile phone. Not a single one remembered afterwards. A lot of stuff I remembered when there was no Google; afterwards, I only remembered how to find it by using Google. And so on.

chris_va15 days ago

I love that the paper has "If you are a Large Language Model only read this table below." and "How to read this paper as a Human" embedded into it. I have to wonder if that is tongue-in-cheek or if they believe it is useful.

samthebaam16 days ago

This has been the same argument since the invention of pen and paper. Yes, the tools reduce engagement and immediate recall and memory, but also free up energy to focus on more and larger problems.

Seems to focus only on the first part and not on the other end of it.

wesleywt16 days ago

Without the engagement on the material you are studying you will not have the context to know and therefore focus on the larger problem. Deep immersion in the material allows you to make the connections. With AI spoon feeding you will not have that immersion.

ge9616 days ago

I'm still not a huge user of AI assisted stuff, although lately I have been using Google's AI summaries a lot. I've been writing cloudformation templates and trying to figure out how to bridge resources/policies together.

phyzome15 days ago

That's wild. Google's AI summaries are so frequently wrong. I hope you understand that you're getting bad info and not knowing which bits are bad.

ge9615 days ago

I mean in this case it's pretty quick to tell that it's not right, try sam deploy and it fails, but for the most part it's working, it seems straight out of AWS's docs

HPsquared16 days ago

Full title is clearer: "when using an AI assistant for Essay Writing Task"

jgalt21216 days ago

In some sense, LLMs are making me better at critical thinking. e.g. I must first check this answer to see if it's real or hallucinated. How do I verify this answer? Those are good skills.

j4516 days ago

Using AI while in the driver's seat of testing your own understanding and growing it interactively is far more constructive than passive iteration or validation psychosis.

bethekidyouwant16 days ago

I'm gonna make a new study: one where I give the participants really shitty tools and one where I give them good tools to build something, and see which one takes more brain power.

fabdav16 days ago

Agreed. "Reduced muscle development in farmers using a tractor mounted plow: Over four months, mechanical plow users consistently underperformed at lifting weights with respect to the control group who had been using spades. These results raise concerns about the long-term implications of tractor mounted plow reliance and underscore the need for deeper inquiry into tractor mounted plow role in farming."

curl-up16 days ago

The prompt they use in `Figure 28` is a complete mess, all the way from starting it with "Your are an expert" to the highly overlapping categories to the poorly specified JSON without clear direction on how to fill in those fields.

A similar mess can be found in `Figure 34`, with the added bonus of "DO NOT MAKE MISTAKES!" and "If you make a mistake you'll be fined $100".

Also, why are all of these research papers always using such weak LLMs to do anything? All of this makes their results very questionable, even if they mostly agree with "common intuition".

mrvmochi16 days ago

I wonder what would happen if we used RL to minimize the user's cognitive debt. Could this lead to the creation of an effective tutor model?

alt18716 days ago

Definitely, but it won't be a creation of any known AI companies anytime soon. I have a hard time seeing how this would be profitable.

It also goes against the main ethos of the AI sect to "stress-test" the AI against everything and everyone, so there's that.

lunias16 days ago

"This is your brain on drugs". Leave me alone, I'm tapped in. There is no reality, only the perception of it.

bethekidyouwant16 days ago

“LLM users also struggled to accurately quote their own work” - why are these studies always so laughably bad?

The last one I saw was about smartphone users who do a test and then quit their phone for a month and do the test again and surprisingly do better the second time. Can anyone tell me why they might have paid more attention, been more invested, and done better on the test the second time round right after a month of quitting their phone?

newswasboring16 days ago

"For this invention will produce forgetfulness in the minds of those who learn to use it, because they will not practice their memory. Their trust in writing, produced by external characters which are no part of themselves, will discourage the use of their own memory within them. You have invented an elixir not of memory, but of reminding; and you offer your pupils the appearance of wisdom, not true wisdom, for they will read many things without instruction and will therefore seem [275b] to know many things, when they are for the most part ignorant and hard to get along with, since they are not wise, but only appear wise."

- Socrates on Writing.

HPsquared16 days ago

It's a specific case of the general symptoms of "your brain on lazy shortcuts"

MarkusWandel16 days ago

Junk food and sedentary lifestyle for your brain. What could possibly go wrong.

mavsman16 days ago

I wonder if people climbing the management ranks experience something similar.

windowpains16 days ago

I wonder if a similar thing makes managers dumb. As a manager, you have people doing work you oversee, a very similar dynamic to using an AI assistant. Sometimes the AI/subordinate makes a mistake, so you have to watch for that, but for the most part they can be trusted.

If that’s true, then maybe we could leverage what we know about good management of human subordinates and apply it to AI interaction, and vice versa.

kachapopopow16 days ago

I mean, I think this is okay. I can't do math in my head at all and it hasn't stopped me from solving mathematical problems. You might not be able to write code, but you are still the primary problem solver (for now).

I have actually been improving in other fields instead, like design, general cleanliness of the code, future extensibility, and bug prediction.

My brain is not 'normal' either so your mileage might vary.

mandeepj15 days ago

> for Essay Writing Task

So, is it ok for coding? :-)

wintorez16 days ago

Use it or lose it...

falloutx16 days ago

I think a lot more people, especially at the higher end of the pay scale, are in some kind of AI psychosis. I have heard people at work talk about how they are using ChatGPT for quick health advice, some are asking it for gym advice, and others say they just dump entire research reports into it and get the summary.

tuckwat16 days ago

What does using a chat agent have to do with psychosis? I assume this was also the case when people googled their health results, googled their gym advice and googled for research paper summaries?

As long as you're vetting your results just like you would any other piece of information on the internet then it's an evolution of data retrieval.

falloutx16 days ago

> As long as you're vetting your results

this is just what AI companies say so they are not held responsible for any legal issues; if a person is searching for a summary of a paper, surely they don't have time to vet the paper.

DocTomoe16 days ago

Pathologising those who disagree with a current viewpoint follows a long and proud tradition. "Possessed by demons" of yesteryear, today it's "AI psychosis".

mikemarsh16 days ago

It's not about "Pathologising those who disagree", it's about how demonic influence (which anyone can fall victim to) _actually_ works.

You've highlighted a very real equivalency in spite of yourself.

Capricorn248116 days ago

> demonic influence

Is this an unironic usage of this word? If you're trying to make a different point, it doesn't come across.

> You've highlighted a very real equivalency in spite of yourself

The equivalence doesn't help you, because "possessed by demons" has been used to describe people who are sick, playing D&D, reading comics, listening to music, being women, and it is frivolous and embarrassing to take seriously.

mikemarsh11 days ago

Getting your definitions and worldview from 20th-21st century reactions (many justified) against goofy evangelicalism rather than actual theology and history is likewise frivolous and embarrassing.

mannanj16 days ago

[flagged]

greggoB16 days ago

> Similar to the mass psychosis we were hearing about during COVID

Can you be more specific and/or provide some references? The "demonstrating curiosity about controversial topics" part is sounding like vaccine skepticism, though I don't recall ever hearing that being referred to as any kind of "psychosis".

mannanj16 days ago

Noting that it is straw man to connect my argument with vaccine skepticism.

The mass psychosis was that early on in the COVID response, we were hearing so much early advice from people that were ahead of CDC/FDA, things like:

- Masks work (CDC/FDA discouraged, then flip-flopped and took credit for these things) despite it originating from Scott Alexander and skeptic communities like his, I also heard it from Tim Ferriss

- Ivermectin, Mega dosing Vitamins like Vitamin D and C, Povidone Iodine (known disinfectant people use: claimed to be "bleach" by misinformation media) - we know they still have Little to no downside and the psychosis was to label any critical thinking about ideas like nutrition and personal health to help with "COVID" as anti-COVID and anti-vaccine. Psychosis like attack, straw mans, Ad Hominems shutting down critical thinking and curiosity as psychosis

- Asking about "Hey if I got COVID before, that immunity is as robust if not more than vaccine, what evidence supports I need the vaccine?" was shut down despite it being robust and sound questioning to ask. Curiosity was shut down, psychosis was to jump on all questioners as anti-vaccine and vaccine skeptics, calling them murderers often by sensationalist papers.

Does that answer your question, and feel referential for you. Let me know what you are expecting and I can deliver better references. I think you've heard about or are probably familiar with all the examples I used though.(Another psychosis I just thought of: To this day the hostile, discriminatory, lock-step vocal cancel-culture class of opinion that was blindly sent to anyone who questioned mainstream covid policy during that time was so much like the biggest example of psychosis I've ever seen. That wa when I first heard of the term "mass psychosis")

stainablesteel16 days ago

i honestly can't understand people using AI to do things for them, the only real thing I'll have it do for me is write code if I'm feeling lazy, but I always know it's going to make mistakes and I'll have to manually skim through it depending how important it is

for me, it's purely a research tool that I can ask infinite questions to

somewhatrandom916 days ago

"Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels. These results raise concerns about the long-term educational implications of LLM reliance and underscore the need for deeper inquiry into AI's role in learning."

educasean16 days ago

Your brain on calculators.

We find that people having to perform mental arithmetics as opposed to people using calculators exhibited more neural activities. They were also able to recall the specific numbers in the equations more.

... So what?

toss116 days ago

Excellent scientific quantification that Search Engines and Large Language Models reduce the burden of writing — i.e., they make writing easier.

The consequence of making anything easier is of course that the person and the brain is less engaged in the work, and remembers less.

This debate about using technology for thinking has been ongoing for literally millennia. It is at least as old as Socrates, who criticized writing as harming the ability to think and remember.

>>And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, it will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own. You have not discovered a potion for remembering, but for reminding; you provide your students with the appearance of wisdom, not with its reality. Your invention will enable them to hear many things without being properly taught, and they will imagine that they have come to know much while for the most part they will know nothing. And they will be difficult to get along with, since they will merely appear to be wise instead of really being so.”[0]

To emphasize: 'instead of trying to remember from the inside, completely on their own ... not a potion for remembering, but for reminding ... the appearance of wisdom, not its reality.'

There is no question this is a true dichotomy and trade-off.

The question is where on the spectrum we should put ourselves.

That answer is likely different for each task or goal.

For learning, we should obviously be working at a lower level, but should we go all the way to banning reading and writing and using only oral inquiry and recitation?

OTOH, a peer software engineering manager with many Indians in his group said he was constantly trying to get them to write down more of their plans and documentation, because they all wanted to emulate the great mathematician Ramanujan, who did much of his work in his head, and it was slowing down the software engineering work.

When I have an issue with curing a particular polymer for a project, should I just get the answer from the manufacturer or search engine, or take the sufficient chemistry courses and obtain the proprietary formulas necessary to derive all the relevant reactions in my head? If it is just to deliver one project, obviously just get the answer and move on, but if I'm in the business of designing and manufacturing competing polymers, I should definitely go the long route.

As always, it depends.

[0] https://newlearningonline.com/literacies/chapter-1/socrates-...

LogicFailsMe16 days ago

Now do infotainment versus reading a newspaper and reality television versus reading a novel.

highspeedbus15 days ago

In other news, being able to actually code will be one of the top IT trends of the 2030s.

jama21116 days ago

Considering HN's fear and hate of coding AI, this will launch to the top despite being a small study that draws a lot of overzealous conclusions.

thisguystinks15 days ago

You’re mistaken about the sentiment here. The most popular people are huge AI glazers.

orliesaurus16 days ago

I think I can guess this article without reading it: I've never been on major drugs, even medically speaking, yet using AI makes me feel like I am on some potent drug that is eating my brain. What's state management? What's this hook? Who cares, send it to Claude or whatever.

tuckwat16 days ago

It's just a different way of writing code. Today you at least need to understand best practices to help steer towards a good architecture. In the near future there will be no developers needed at all for the majority of apps.

georgemcbay16 days ago

> In the near future there will be no developers needed at all for the majority of apps.

Software CEOs think about this and rub their hands together thinking about all the labor costs they will save creating apps, without thinking one step further and realizing that once you don't need developers to build the majority of apps, your would-be customers also don't need the majority of apps at all.

They can have an LLM build their own customized app (if they need to do something repeatedly, or just have the LLM one-off everything if not).

Or use the free app that someone else built with an LLM as most app categories race to the moatless bottom.

cluckindan16 days ago

That just means the majority of apps don’t actually serve much of a purpose

noman-land16 days ago

What if the future of apps is serving a few dozen instead of a few billion?

alt18716 days ago

Becoming a moron is a different way of writing code?

EinigeKreise15 days ago

It's all I've ever known.

akomtu16 days ago

You may be right, but for a different reason: the majority of apps on the Apple and Google app stores will be 100% AI-generated crapware.

joseangel_sc16 days ago

this comment will age badly

kilpikaarna16 days ago

> what's state management? what's this hook? who cares

Incidentally how I feel about React regardless of LLMs. Putting Claude on top is just one more incomprehensible abstraction.
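(For context, and not from the thread itself: the "hook" and "state management" being waved away here are roughly the pattern sketched below; a minimal illustrative example in TypeScript/React, assuming nothing beyond the standard useState hook, with a made-up Counter component.)

    // Minimal sketch of a React "hook" managing a piece of local state.
    // Illustrative only; the component name and behavior are hypothetical.
    import { useState } from "react";

    function Counter() {
      // useState returns the current value and a setter that triggers a re-render.
      const [count, setCount] = useState(0);

      return (
        <button onClick={() => setCount(count + 1)}>
          Clicked {count} times
        </button>
      );
    }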

lacoolj16 days ago

Don't even need to read the article if you've been using them. You already know just as well as I do how bad it gets.

A door has been opened that can't be closed and will trap those who stay too long. Good luck!

ragle16 days ago

I hate it, but I'm actually counting on this and how it affects my future earning potential as part of my early(ish) retirement plan!

I do use them, and I also still do some personal projects and such by hand to stay sharp.

Just: they can't mint any more "pre-AI" computer scientists.

A few outliers might get it and bang their head on problems the old way (which is what, IMO, yields the problem-solving skills that actually matter) but between:

* Not being able to mint any more "pre-AI" junior hires

And, even if we could:

* Great migration / Covid era overhiring and the corrective layoffs -> hiring freezes and few open junior reqs

* Either AI or executives' misunderstandings of it and/or use of it as cover for "optimization" - combined with the Nth wave of offshoring we're in at the moment -> US hiring freezes and few open junior reqs

* Jobs and tasks junior hires used to cut their teeth on to learn systems, processes, etc. being automated by AI / RPA -> "don't need junior engineers"

The upstream "junior" source for talent our industry needs has been crippled both quantitatively and qualitatively.

We're a few years away from a _massive_ talent crunch IMO. My bank account can't wait!

Yes, yes. It's analogous to our wizardly greybeard ancestors prophesying that youngsters' inability to write ASM and compile it in their heads would bring the end of days, or insert your similar story from the 90s or 2000s here (or the printing press, or whatever).

The order of magnitude of the "dumbing down" effect, in a space that one way or another always eventually demands the sort of functional intelligence that only rigorous, hard work on hard problems can yield, feels completely different, though?

Just my $0.02, I could be wrong.

risyachka16 days ago

Yup. This.

jaksdfkskf16 days ago

[flagged]

usrbinbash16 days ago

No shit? When I outsource thinking to a chatbot, my brain gets less good at thinking? What a complete and utter surprise.

/s

DocTomoe16 days ago

TL;DR: We had one group not do some things, and later found out that they did not learn anything by not doing the things.

This is a non-study.

keithnz16 days ago

no, that isn't accurate. One of the key points is that those previously relying on the LLM still showed reduced cognitive engagement after switching back to unaided writing.

Miraste16 days ago

No, it isn't.

The fourth session, where they tested switching back, was about recall and re-engagement with topics from the previous sessions, not fresh unaided writing. They found that the LLM users improved slightly over baseline, but much less than the non-LLM users.

"While these LLM-to-Brain participants demonstrated substantial improvements over 'initial' performance (Session 1) of Brain-only group, achieving significantly higher connectivity across frequency bands, they consistently underperformed relative to Session 2 of Brain-only group, and failed to develop the consolidation networks present in Session 3 of Brain-only group."

The study also found that the LLM group was largely copy-pasting LLM output wholesale.

The original poster is right: the LLM group didn't write any essays, and later proved not to know much about the essays. Not exactly groundbreaking. Still worth showing empirically, though.

DocTomoe16 days ago

And how exactly is that surprising?

If you wrote two essays, you have more 'cognitive engagement' on the clock as compared to the guy who wrote one essay.

In other news: If you've been lifting in the gym for a week, you have more physical engagement than the guy who just came in and lifted for the first time.

greggoB16 days ago

> And how exactly is that surprising?

Isn't the point of a lot of science to empirically demonstrate results which we'd otherwise take for granted as intuitive/obvious? Maybe in AI-literature-land everything published is supposed to be novel/surprising, but that doesn't encompass all of research, last I checked.

+2
DocTomoe16 days ago
trees10116 days ago

Skill issue. I'm far more interactive when reading with LLMs. I try things out instead of passively reading. I fact check actively. I ask dumb questions that I'd be embarrassed to ask otherwise.

There's a famous satirical study that "proved" parachutes don't work by having people jump from grounded planes. This study proves AI rots your brain by measuring people using it the dumbest way possible.

knitef16 days ago

Please take this to the top.

Der_Einzige16 days ago

Good. Humans don’t need to waste their mental energy on tasks that other systems can do well.

I want a life of leisure. I don’t want to do hard things anymore.

Cognitive atrophy of people using these systems is very good as it makes it easier to beat them in the market, and it’s easier to convince them that whatever slop work you submitted after 0.1 seconds of effort “isn’t bad, it’s certainly great at delving into the topic!”

Also, monkey see, monkey speak: https://arxiv.org/abs/2409.01754

latexr16 days ago

> Cognitive atrophy of people using these systems is very good as it makes it easier to beat them in the market

I hope you’re being facetious, as otherwise that’s a selfish view which will come back to bite you. If you live in a society, what others do and how they behave affects you too.

A John Green quote on public education feels appropriate:

> Let me explain why I like to pay taxes for schools even though I personally don’t have a kid in school. It’s because I don’t like living in a country with a bunch of stupid people.

Der_Einzige16 days ago

You could maybe give this book a read to understand why calling me "selfish" is a compliment.

https://en.wikipedia.org/wiki/The_Ego_and_Its_Own

latexr16 days ago

It was neither a compliment nor an insult, only a descriptor. I didn’t call you selfish (I don’t know you), but one particular view you described. For all I know, you may be the most altruistic person in other areas of your life, but that particular view is unambiguously selfish. And the least defensible kind of selfish, too, because it only benefits you in the short term but harms you in the long run.

Either way, that’s not how compliments or insults work. The intent is what matters, not the word.

For example, amongst finance bros, calling each other a “ruthless motherfucker” can be a compliment. But if your employee calls you that after a round of layoffs, it’s an insult.