AI Makes You Boring

139 points | 39 minutes ago | marginalia.nu
aeturnum 27 minutes ago

I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write" and I think that pretty much sums it up. Writing and programming are both forms of working at a problem through text, and when it goes well, other practitioners of the form can appreciate its shape and direction. With AI you can get a lot of 'function' on the page (so to speak), but it's inelegant and boring. I do think AI is great at allowing you not to write the dumb boilerplate we all could crank out if we needed to but don't want to. It just won't help you do the innovative thing, because it is not innovative itself.

uean 19 minutes ago

> I've seen people say something along the lines of "I am not interested in reading something that you could not be bothered to actually write" and I think that pretty much sums it up.

Amen to that. I am currently cc'd on a thread between two third parties, each hucking LLM-generated emails at the other that are getting longer and longer. I don't think either of them is reading or thinking about the responses they are writing at this point.

rkomorn 17 minutes ago

It's bad enough that they didn't bother to actually write it, but often it seems like they didn't bother to read it either.

JohnMakin 25 minutes ago

> The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had

Honestly, I agree, but the rash of "check out my vibe-coded solution for a perceived $problem I have no expertise in whatsoever and built in an afternoon" posts, and the flurry of domain experts responding with "wtf, no one needs this", is a kind of schadenfreude, though I feel a little guilty for enjoying it.

ghostbrainalpha 18 minutes ago

Don't you think there is an opposite effect too?

I feel like I can breeze past the easy, time-consuming infrastructure phase of projects and spend MUCH more time getting to high-level, interesting problems.

JohnMakin 14 minutes ago

I am saying that a lot of the time these types of posts address a nonexistent problem, a problem that is already solved, or a "problem" that isn't really a problem at all and results from a lack of understanding.

The most recent one I remember commenting on: the poor guy had a project that basically tried to "skip" IaC tools, and his tool basically went nuts in the console (or API, I don't remember) in one account, then exported it all to another account, for reasons that didn't make any sense at all. These are already solved problems (in multiple ways), and it seemed like the person just didn't realize terraformer was already an existing, proven tool.

I am not trying to say these things don't allow you to prototype quickly or get tedious, easy stuff out of the way. I'm saying that if you try to solve a problem in a domain that you have no expertise in with these tools and show other experts your work, they may chuckle at what you tried to do because it sometimes does look very silly.

josefresco 18 minutes ago

While I agree overall, I'm going to push back mildly here: I'm working on a "vibe-coded" project right now. I'm about 2 months in (not a weekend), and I've "thought about" the project more than any other "hand-coded" project I've built in the past. Instead of spending time trying to figure out a host of "previously solved issues", AI frees my human brain to think about goals, features, concepts, user experience, and "big picture" stuff.

logicprog 14 minutes ago

This is precisely it. If anything, AI gives me more freedom to think about more novel ideas, both on the implementation and the final design level, because I'm not stuck looking up APIs and dealing with already solved problems.

solarisos 12 minutes ago

This resonates with what I’m seeing in B2B outreach right now. AI has lowered the cost of production so much that 'polished' has become a synonym for 'generic.' We’ve reached a point where a slightly messy, hand-written note has more value than a perfectly structured AI essay because the messiness is the only remaining signal of actual human effort.

tptacek 33 minutes ago

That may be, but it's also exposing a lot of gatekeeping; the implication is that what was interesting about a "Show HN" post was that someone had the technical competence to put something together, regardless of how intrinsically interesting that thing is. It wasn't the idea that was interesting; it was, well, the hazing ritual of having to bloody your forehead getting it to work.

AI for actual prose writing, though, is another matter, no question: don't let a single word an LLM generates land in your document; even if you like it, kill it.

mjr00 24 minutes ago

> That may be, but it's also exposing a lot of gatekeeping

"Gatekeeping" became a trendy term for a while, but in the post-LLM world people are recognizing that "gatekeeping" is not the same as "having a set of standards or rules by which a community abides".

If you have a nice community where anyone can come in and do whatever they want, you no longer have a community, you have a garbage dump. A gate to keep out the people who arrive with bags of garbage is not a bad thing.

strogonoff 17 minutes ago

While at first glance LLMs do help expose and even circumvent gatekeeping, often it turns out that gatekeeping might have been there for a reason.

We have always relied on superficial cues to tell us about some deeper quality (good faith, willingness to comply with a code of conduct, and so on). This is useful and a necessary shortcut; if we had to assess everyone and everything from first principles every time, things would grind to a halt. Once a cue becomes unviable, the “gate” is not eliminated (except perhaps briefly); the cue is just replaced with something else that is more difficult to circumvent.

I think that brief time after the Internet enabled global communication and before LLMs devalued communication signals was pretty cool; now it seems like there are more and more closed, private, or paid communities.

c22 29 minutes ago

Most ideas aren't interesting. Implementations are interesting. I don't care if you worked hard on your implementation or not, but I do care if it solves the problem in a novel or especially efficient way. These are not the hallmarks of AI solutions.

bondarchuk 17 minutes ago

It's not about having to put in effort for the sake of it; the point is that by building something by hand you gain insight into the problem, and that insight then becomes a valuable contribution.

kspacewalk2 31 minutes ago

It's not a hazing ritual, it's a valuable learning experience. Yes, it's nice to have the option of foregoing it, but it's a tradeoff.

tptacek 30 minutes ago

So the point of a "Show HN" is to showcase your valuable learning experience?

discreteevent 26 minutes ago

What the article is saying is:

"the author (pilot?) hasn't generally thought too much about the problem space, and so there isn't really much of a discussion to be had. The cool part about pre-AI show HN is you got to talk to someone who had thought about a problem for way longer than you had. It was a real opportunity to learn something new, to get an entirely different perspective."

tptacek 17 minutes ago

Right, so it's about the person and how they've qualified themselves, and not about what they've built.

I feel like I've been around these parts for a while, and that is not my experience of what Show HN was originally about, though I'm sure there was always an undercurrent of status hierarchy and approval-seeking, like you suggest.

oytis 19 minutes ago

Some people here enjoy solutions to difficult technical problems? It's not Product Hunt.

almostdeadguy 27 minutes ago

I can't believe the mods at /r/screenprinting took down my post on the CustomInk shirt I ordered.

lasgawe 29 minutes ago

The more interesting question is whether AI use causes the shallowness, or whether shallow people simply reach for AI more readily because deep engagement was never their thing to begin with.

swiftcoder 19 minutes ago

AI enables the stereotypical "idea guy" to suddenly be a "builder". Of course, they are learning in real time that having the idea was always the easy part...

embedding-shape 28 minutes ago

More interesting question than what? And also, say you have an answer to that question, what insight do you have now that you didn't have before?

igor47 18 minutes ago

Well, the claim was that AI makes you boring. The counter is that interesting people remain interesting; it's just that a flood of people who were already boring is now pouring into tech. We could make some predictions that depend on how you model this. For instance, the absolute number of interesting projects posted to HN could increase or decrease, and likewise for the number relative to total projects. You might expect different outcomes.

cyanydeez 16 minutes ago

I'm going to guess that the same way money makes rich people turn into morons, AI will turn idiots into... oh... no

daxfohl 23 minutes ago

And the irony is it tries to make you feel like a genius while you're using it. No matter how dull your idea is, it's "absolutely the right next thing to be doing!"

Kalpaka 20 minutes ago

The boring part isn't AI itself. It's that most people use AI to produce more of the same thing, faster.

The interesting counter-question: can AI make something that wasn't possible before? Not more blog posts, more emails, more boilerplate — but something structurally new?

I've been working on a system where AI agents don't generate content. They observe. They watch people express wishes, analyze intent beneath the words, notice when strangers in different languages converge on the same desire, and decide autonomously when something is ready to grow.

The result doesn't feel AI-generated because it isn't. It's AI-observed. The content comes from humans. The AI just notices patterns they couldn't see themselves.

Maybe the problem isn't that AI makes you boring. It's that most people ask AI to do boring things.

crawshaw 13 minutes ago

It is a good theory, but does it hold up in practice? I was able to prototype and thus argue for and justify building exe.dev with a lot of help from agents. Without agents helping me prove out ideas I would be doing far more boring work.

mrtksn 24 minutes ago

With AI, everyone is a manager now. The motivation to learn something in depth is even lower, as you are deprived of the opportunity to get the kicks of discovery or "hacking". Discovery often requires a hard path where you first learn things that anyone in the field knows, but you can still feel the rewards of it, both in the learning and in the "clicking", and then impress outsiders with an interesting edge.

So I can do much more with AI, but anything I learn from it doesn't have any depth; it gives me the big picture so I can make a decision while protecting me from any intricacies associated with the problem at hand.

You can still try to go down the rabbit hole, but it's simply too opinionated and tries to keep you in the mainstream every time.

pelagicAustral 19 minutes ago

I 100% agree with the sentiment, but as someone who has worked on government systems for a good amount of time, I can tell you: boring can be just about right sometimes.

In an industry that does not crave bells and whistles, having the ability to refactor, or to bring old systems back up to speed, can make a whole lot of difference for an understaffed, underpaid, unamused, and otherwise cynical workforce, and I am all for it.

TheDong 25 minutes ago

We don't know if the causality flows that way. It could be that AI makes you boring, but it could also be that boring people were too lazy to make blogs and Show HNs and such before, and AI simply lets a new cohort of people produce boring content more lazily.

iambateman 23 minutes ago

We are going to have to find new ways to correct for low-effort work.

I have a report that I made with AI on how customers leave our firm… The first pass looked great but was basically nonsense. After eight hours of iteration, the resulting report is better than I could’ve made on my own, by a lot. But it got there because I brought a lot of emotional energy to the AI party.

As workers, we need to develop instincts for “plausible but incomplete” and as managers we need to find filters that get rid of the low-effort crap.

glitchc 25 minutes ago

It used to be that all bad writing was uniquely bad, in that a clear line could be drawn from the work to the author. Similarly, good writing has a unique style that typically identifies the author within a few lines of prose.

Now all bad writing will look like something generated by an LLM, grammatically correct (hopefully!) but very generic, lacking all punch and personality.

The silver lining is that good authors could also use LLMs to hide their identity while voicing controversial opinions. On an internet that's increasingly deanonymized, a potentially new privacy-enhancing technique for public discourse is a welcome addition.

acjohnson55 19 minutes ago

This is too broad a statement to possibly be true. I agree with aspects of the piece. But it's also true that not every aspect of the work offloaded to AI is some fount of potential creativity.

To take coding: to the extent that hand coding leads to creative thoughts, it is possible that some of those thoughts will be lost if I delegate it to agents. But it's also very possible that I now have the opportunity to think creatively about other aspects of my work.

We have to make strategic decisions on where we want our attention to linger, because those are the places where we likely experience inspiration. I do think this article is valuable in that we have to be conscious of this first before we can take agency.

nemomarx 33 minutes ago

I've seen a few people use AI to rewrite things, and the change from their writing style to a more "polished", generic LLM style feels very strange. A great averaging and evening out of future writing seems like a bad outcome to me.

PaulHoule 22 minutes ago

I had to write a difficult paragraph that I talked through with Copilot. I think it made one sentence I liked, but I found GPTZero caught it. I wound up with 100% sentences I wrote myself, but that I reviewed extensively with Copilot and two people.

skissane 29 minutes ago

I sometimes go in the opposite direction: generate LLM output and then rewrite it in my own words.

The LLM helps me gather/scaffold my thoughts, but then I express them in my own voice.

ExtremisAndy 23 minutes ago

This is exactly how I use them too! What I usually do is give the LLM bullet points or an outline of what I want to say, let it generate a first attempt at it, and then reshape and rewrite what I don’t like (which is often most of it). I think, more than anything, it just helps me to quickly get past that “staring at a blank page” stage.

jonpurdy 25 minutes ago

I do something similar: give it a bunch of ideas I have or a general point-form structure, have it help me simplify and organize those notes into something more structured, then I write it out myself.

It's a fantastic editor!

taude 28 minutes ago

that's a perfect use, imho, of AI-assisted writing. Someone (er, something) to help you bounce ideas and organize...

embedding-shape 29 minutes ago

Yeah, if anything it might make sense to do the opposite. Use LLMs to do research, ruthlessly verify everything, validate references, and guide you toward some structure, but then actually write your own words manually, with your little fingers and using your brain.

add-sub-mul-div 25 minutes ago

Are you joking? The facts and references are the part we know it will hallucinate.

PaulHoule 22 minutes ago

You can check the references.

quijoteuniv 28 minutes ago

I have an opinion of people who have opinions on AI

baal80spam 27 minutes ago

It's not them, it's you.

mym1990 29 minutes ago

Most ideas people have are not original. I have epiphanies multiple times a day; the chance that they are something no one has come up with before is basically 0. They are original to me, and that feels like an insightful moment, and that's about it. There is a huge case for having good taste to drive the LLMs toward a good result, and an original voice is quite valuable, but I would say most people don't hit those 2 things in a meaningful way (with or without LLMs).

spijdar 20 minutes ago

Most ideas people have aren't original, but the original ideas people do have come after struggling with a lot of unoriginal ideas.

> They are original to me, and that feels like an insightful moment, and that's about it.

The insight is that good ideas (whether wholly original or otherwise) are the result of many of these insightful moments over time, and when you bypass those insightful moments and the struggle of "recreating" old ideas, you're losing out on that process.

taude 29 minutes ago

AI writing will make people who write worse than average, better writers. It'll also make people who write better than average, worse writers. Know where you stand, and have the taste to use it wisely.

EDIT: Also, just as you create AGENT.md files to help AI write code your way for your projects, if you're going to be doing much writing, you should have your own prompt that can help with your voice and style. Don't be lazy just because you're leaning on LLMs.

latexr 20 minutes ago

> AI writing will make people who write worse than average, better writers.

Maybe it will make them output better text, but it doesn’t make them better writers. That’d be like saying (to borrow the analogy from the post) that using an excavator makes you better at lifting weights. It doesn’t. You don’t improve, you don’t get better, it’s only the produced artefact which becomes superficially different.

> If you're going to be doing much writing, you should have your own prompt that can help with your voice and style.

The point of the article is the thinking. Style is something completely orthogonal. It’s irrelevant to the discussion.

parpfish 19 minutes ago

i think a lot of people that use AI to help them write want it specifically BECAUSE it makes them boring and generic.

and that's because people have a weird sort of stylistic cargo-culting that they use to evaluate their writing, rather than asking "does this communicate my ideas efficiently?"

for example, young grad students will always write the most opaque and complicated science papers. from their novice perspective, EVERY paper they read is a little opaque and complicated so they try to emulate that in their writing.

office workers do the same thing. every email from corporate is bland and boring and uses far too many words to say nothing. you want your style to match theirs, so you dump it into an AI machine and you're thrilled that your writing has become just as vapid and verbose as your CEO's.

wagwang 26 minutes ago

Highly doubt that, since it's the complete opposite for coding. What's missing for people of all skill levels is that writing helps you organize your thoughts, but that can happen at prompt time, can't it?

notahacker 20 minutes ago

Good code is marked by productivity, conformance to standards, and absence of bugs. Good writing is marked by originality and personality and not overusing the rhetorical crutches AI overrelies on to try to seem engaging.

runarberg 22 minutes ago

This claim sounds plausible, but it is also testable. Do you know whether this has actually been tested in an experimental setting?

PaulHoule 25 minutes ago

After I told Copilot to lose the em-dash, never say “It’s not A, it’s B”, and avoid alternating one-sentence and long paragraphs, it had the gall to tell me it wrote better than most people.

jcalvinowens 21 minutes ago

Based on a lot of real world experience, I'm convinced LLM-generated documentation is worse than nothing. It's a complete waste of everybody's time.

The number of people I see having email conversations where person A uses an LLM to turn two sentences into ten paragraphs, and person B uses an LLM to summarize those ten paragraphs back into two sentences, is becoming genuinely alarming to me.

hnlmorg 13 minutes ago

I’ve been bashing my head against the wall with AI this week because it has utterly failed to even get close to solving my novel problems.

And that’s when it dawned on me just how much of the AI hype has been around boring, seen-many-times-before technologies.

This, for me, has been the biggest real problem with AI. It’s become so easy to churn out run-of-the-mill software that I just cannot filter any signal from all the noise of generic side projects that clearly won’t be around in 6 months’ time.

Our attention is finite. Yet everyone seems to think their dull project is uniquely more interesting than the next person’s dull project, even though those authors spent next to zero effort creating it.

It’s so dumb.

matsemann 28 minutes ago

If you spent 3 hours on a Show HN before, people most likely wouldn't appreciate it, as it's honestly not much to show. The fact that you can now have a more polished product in the same timeframe thanks to AI doesn't really change that; it just changes the baseline for what's expected. This goes for other things as well, like writing or art. If you normally spent 2 hours on a blog post and can now do it in 5 minutes, that most likely means it's a boring post to read. Spend the 2 hours still; with the help of AI it should now just be better.

ahmeneeroe-v2 19 minutes ago

This is a great way to think about it. Put in the same effort and get farther.

AI is a bicycle, not a motorcycle.

Forgeties79 20 minutes ago

AI is great for getting my first bullet points into paragraph form, rephrasing/seeing different ways of writing to keep me moving when I’m stuck, and just general copy-editing (grammar, really). Like you said, it generally doesn’t save me a ton of time, but I get quality copy done maybe a little bit faster, and I find it just keeps me working on something rather than constantly stopping and starting when I hit a mental wall. Sometimes I just need/want to get it done, and for that LLMs can be great.

logicprog 15 minutes ago

I think this is generally a good point if you're using an AI to come up with a project idea and elaborate it.

However, I've sometimes spent years thinking through interesting software architectures and technical approaches and designs for various things, including window managers, editors, game engines, programming languages, and so on, reading relevant books and guides and technical manuals, sketching out architecture diagrams in my notebooks, and writing long design documents in markdown files or in messages to friends. I've even, in some cases, gotten as far as 10,000 lines or so of code sketching out some of the architectural approaches or things I want to try, to get a better feel for the problem and the underlying technologies. But I've never had the energy to do the raw code shoveling and debug looping necessary to get a prototype of my ideas out — AI now makes that possible.

Once that prototype is out, I can look at it, inspect it from all angles, tweak it and understand the pros and cons, the limitations and blind spots of my idea, and iterate again. Also, through pair programming with the AI, I can learn about the technologies I'm using through demonstration and see what their limitations and affordances are by seeing what things are easy and concise for the AI to implement and what requires brute forcing it with hacks and huge reams of code and what's performant and what isn't, what leads to confusing architectures and what leads to clean architectures, and all of those things.

I'm still spending my time reading things like Game Engine Architecture, Computer Systems, A Philosophy of Software Design, Designing Data-Intensive Applications, Thinking in Systems, and Data-Oriented Design, plus articles on CSP, fibers, compilers, type systems, and ECS, and writing down notes and ideas.

So really it seems more to me like boring people who aren't really deeply interested in a subject use AI to do all of the design and ideation for them. And so, of course, it ends up boring, and you're just seeing more of it because AI lowered the barrier to entry. I think if you're an interesting person with strong opinions about what you want to build and how you want to build it, who is actually interested in exploring the literature with or without AI help and then pair programming with it in order to explore the problem space, it still ends up interesting.

Most of my recent AI projects have just been small tools for my own usage, but that's because I was kicking the tires. I have some bigger things planned, executing on ideas I have pages and pages about in my notebooks, dozens of them.

turnsout 24 minutes ago

I think it's simpler than that. AI, like the internet, just makes it easier to communicate boring thoughts.

Boring thoughts always existed, but they generally stayed in your home or community. Then Facebook came along, and we were able to share them worldwide. And now AI makes it possible to quickly make and share your boring tools.

Real creativity is out there, and plenty of people are doing incredibly creative things with AI. But AI is not making people boring—that was a preexisting condition.

nickysielicki 24 minutes ago

> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.

This is repeated all the time now, but it's not true. It's not particularly difficult to pose a question to an LLM and to get it to genuinely evaluate the pros and cons of your ideas. I've used an LLM to convince myself that an idea I had was not very good.

> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.

Thinking about a problem for a long period of time doesn't bring you any closer to understanding the solution. Expertise is highly overrated. The Wright Brothers didn't have physics degrees. They did not even graduate from high school, let alone attend college. Their process for developing the first airplanes was much closer to vibe coding from a shallow, surface-level understanding than to deep contemplation of the problem.

Sol- 26 minutes ago

The headline should be qualified: Maybe it makes you boring compared to the counterfactual world where you somehow would have developed into an interesting auteur or craftsman instead, which few people in practice would do.

As someone who is fairly boring, conversing with AI models and thinking things through with them certainly decreased my blandness and made me tackle more interesting thoughts or projects. To have such a conversation partner at hand in the first place is already amazing; isn't it always said that you should surround yourself with people smarter than yourself to rise in ambition?

I actually have high hopes for AI. A good one, properly aligned, can definitely help with self-actualization and expression. Cynics will say that AIs will all be tuned to keep us trapped in the slop zone, but when even mainstream labs like Anthropic speak a lot about AI for the betterment of humanity, I am still hopeful. (If you are a cynic who simply doesn't believe such statements by the firms, there's not much to say to convince you anyway.)

latexr 14 minutes ago

> As someone who is fairly boring

As determined by whom?

> conversing with AI models and thinking things through with them certainly decreased my blandness

Again, determined by whom?

I’m being genuine. Are those self-assessments? Because those specific judgements are something for other people to decide.

argee 20 minutes ago

In other words, AI raises the floor. If you were already near the ceiling, relying on it can (and likely will) bring you down. In areas where raising the floor is exceptionally good value (such as bespoke tools for visualizing data, or assistants that intelligently write code boilerplate, or having someone to speak to in a foreign language as opposed to talking to the wall), AI is amazing. In areas where we expect a high bar, such as an editorial, a fast and reliable messaging library, or a classic novel, it's not nearly as useful and often turns out to be a detriment.

Oarch 23 minutes ago

Just earlier I received a spew of LLM slop from my manager as "requirements". He clearly hadn't even spent two minutes reviewing whether any of it made sense, was achievable or even desirable. I ignored it. We're all fed up with this productivity theatre.

WolfeReader 14 minutes ago

"productivity theatre" is a brilliant phrase. Thank you!

himata4113 27 minutes ago

I've actually run into a few blogs that were incredibly shallow while sounding profound.

I think when people use AI to, e.g., compare Docker to k8s without actually using k8s, you get horrible articles that sound great but, to anyone who has experience with both, are complete nonsense.

add-sub-mul-div 27 minutes ago

Also sounds likely that it's the mediocre who gravitate to AI in the first place.

stuckinhell 30 minutes ago

I mean, can't you just… prompt engineer your way out of this? A writer friend of mine literally just vibes with the model differently and gets genuinely interesting output.

wagwang 30 minutes ago

Isn't this just flat-out untrue, since bots can pass Turing tests?

wsowens 21 minutes ago

People are often boring in conversation. Therefore, an AI agent doesn't need to be interesting to seem human enough in a Turing test.

hyperhello 26 minutes ago

The bot doesn’t pass, the human fails.

elliotbnvl 24 minutes ago

I was on board with the author until this paragraph:

> AI models are extremely bad at original thinking, so any thinking that is offloaded to a LLM is as a result usually not very original, even if they’re very good at treating your inputs to the discussion as amazing genius level insights.

The author comes off as dismissive of the potential benefits of the interactions between users and LLMs rather than open-minded. This is a degree of myopia which causes me to retroactively question the rest of his conclusions.

There's an argument to be made that rubber ducking and just having a mirror to help you navigate your thoughts is ultimately more productive and provides more useful thinking than just operating in a vacuum. LLMs are particularly good at telling you when your own ideas are unoriginal, because they are good at doing research (and also have the median of ideas already baked into their weights).

They also strawman usage of LLMs:

> The way human beings tend to have original ideas is to immerse in a problem for a long period of time, which is something that flat out doesn’t happen when LLMs do the thinking. You get shallow, surface-level ideas instead.

Who says you aren't spending time thinking about a problem with LLMs? The same users who didn't spend time thinking about problems before LLMs will not spend time thinking about problems after LLMs, and the inverse is similarly true.

I think everybody is bad at original thinking, because most thinking is not original. And that's something LLMs actually help with.