
I was banned from Claude for scaffolding a Claude.md file?

749 points · 16 days ago · hugodaniel.com
bastard_op16 days ago

I've been doing something a lot like this, using a claude-desktop instance attached to my personal mcp server to spawn claude-code worker nodes for things, and for a month or two now it's been working great using the main desktop chat as a project manager of sorts. I even started paying for MAX plan as I've been using it effectively to write software now (I am NOT a developer).

Lately it's gotten entirely flaky, where chats will just stop working, simply ignoring new prompts, and otherwise go unresponsive. I wondered if maybe I'm pissing them off somehow like the author of this article did.

Now even worse, Claude seemingly has no real support channel. You get their AI bot, and that's about it. Eventually it will offer to put you through to a human, and then tell you not to wait for them, they'll contact you via email. That email never comes, after several attempts.

I'm assuming at this point any real support is all smoke and mirrors, meaning I'm paying for a service now that has become almost unusable, with absolutely NO means of support to fix it. I guess for all the cool tech, customer support is something they have not figured out.

I love Claude as it's an amazing tool, but when it implodes on itself to the point that you actually require some out-of-band support, there is NONE to be had. Grok seems the only real alternative, and over my dead body would I use anything from "him".

throwup23816 days ago

Anthropic has been flying by the seat of their pants for a while now and it shows across the board. From the terminal flashing bug that’s been around for months to the lack of support to instabilities in Claude mobile and Code for the web (I get 10-20% message failure rates on the former and 5-10% on CC for web).

They’re growing too fast and it’s bursting the seams of the company. If there’s ever a correction in the AI industry, I think that will all quickly come back to bite them. It’s like Claude Code is vibe-operating the entire company.

laserDinosaur15 days ago

The Pro plan quota seems to be getting worse. I can get maybe 20-30 minutes of work done before I hit my 4-hour quota. I found myself using it more just for the planning phase to get a little more time out of it, but yesterday I managed to ask it ONE question in plan mode (from a fresh quota window), and while it was thinking it ran out of quota. I'm assuming it pulled in a ton of references from my project automatically and blew out the token count. I get good answers from it when it does work, but it's getting very annoying to use.

(on the flip side, Codex seems SO efficient with tokens that it can be hard to understand its answers sometimes; it rarely includes files unless you do it manually, and it often takes quite a few attempts to get the right answer because it's so strict about what it does each iteration. But I never run out of quota!)

stareatgoats15 days ago

Claude Code allegedly auto-includes the currently active file and often all visible tabs and sometimes neighboring files it thinks are 'related' - on every prompt.

The advice I got when scouring the internets was primarily to close everything except the file you’re editing and maybe one reference file (before asking Claude anything). For added effect add something like 'Only use the currently open file. Do not read or reference any other files' to the prompt.

I don't have any hard facts to back this up, but I'm sure going to try it myself tomorrow (when my weekly cap is lifted ...).

aanet15 days ago

^ THIS

I've run out of quota on my Pro plan so many times in the past 2-3 weeks. This seems to be a recent development. And I'm not even that active: just one project, executed in Plan > Develop > Test mode, just one terminal. That's it. I keep hitting the quota, which resets every few hours.

What's happening @Anthropic ?? Anybody here who can answer??

fragmede15 days ago

How quickly do you also hit compaction when running? Also, if you open a new CC instance and run /context, what does it show for tools/memories/skills percentage? And that's before we look at what you're actually doing. CC will add to each prompt whatever context it thinks is necessary. So if you've got a small number of large files (vs a large number of smaller files), at some level that'll contribute to the problem as well.

Quota is basically a count of tokens, so if a new CC session starts with the context relatively full, that could explain what's going on. Also, what language is this project in? If it's something noisy that uses up many tokens fast, even if you're using agents to preserve the context window in the main CC, those tokens still count against your quota, so you'd still be hitting it awkwardly fast.
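To make the token arithmetic concrete, here's a back-of-the-envelope sketch with illustrative numbers (not Anthropic's actual accounting): if the full context is resent on every prompt, a session that starts with tools/memories/skills already loaded drains quota far faster than a lean one.

```python
# Rough quota arithmetic (illustrative numbers, not Anthropic's actual
# accounting). If the whole context is resent with every prompt, usage
# grows with both turn count and whatever the session starts loaded with.

def session_tokens(turns, tokens_per_turn, base_context):
    """Total tokens billed across a session: each turn resends the
    entire context, which grows by tokens_per_turn every turn."""
    total = 0
    context = base_context  # tools, memories, skills, CLAUDE.md, etc.
    for _ in range(turns):
        context += tokens_per_turn
        total += context  # full context counted against quota each turn
    return total

lean = session_tokens(turns=10, tokens_per_turn=2_000, base_context=0)
loaded = session_tokens(turns=10, tokens_per_turn=2_000, base_context=50_000)

print(lean)    # 110000
print(loaded)  # 610000
```

Same ten turns, roughly 5.5x the quota burn, purely from what the session starts preloaded with.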

behnamoh15 days ago

> I've run out of quota on my Pro plan so many times in the past 2-3 weeks.

Waiting for Anthropic to somehow blame this on users again. "We investigated, turns out the reason was users used it too much".

ChicagoDave15 days ago

I never run out of this mysterious quota thing. I close Claude Code at 10% context and restart.

I work for hours and it never says anything. No clue why you’re hitting this.

$230 pro max.

croes15 days ago

Pro's quota is 20x lower than Max's

nwatson15 days ago

Self-hosted might be the way to go soon. I'm getting 2x Olares One boxes, each with an RTX 5090 GPU (32GB VRAM) and a built-in ecosystem of AI apps, many of which should be useful, and Kubernetes + Docker will let me deploy whatever else I want. Presumably I'll manage to host a good coding model and use Claude Code as the framework (or some other). There will be many good options out there soon.

behnamoh15 days ago

> Self-hosted might be the way to go soon.

As someone with 2x RTX Pro 6000 and a 512GB M3 Ultra, I have yet to find these machines usable for "agentic" tasks. Sure, they can be great chat bots, but agentic work involves huge context sent to the system. That already rules out the Mac Studio because it lacks tensor cores and it's painfully slow to process even relatively large CLAUDE.md files, let alone a big project.

The RTX setup is much faster but can only fit models ≤192GB, which severely limits its capabilities: you're limited to low-quant GLM 4.7, GLM 4.7 Flash/Air, GPT-OSS 120b, etc.

zen4ttitude15 days ago

I think this is the future as well: running locally, controlling the entire pipeline. I built acf (on GitHub) using Claude, among others. You essentially configure everything as you want: models, profiles, agents and RAG. It's free. I also built a marketplace to sell or give away these pipeline enhancements to the community. It's a project I wanted to do for a while, and Claude was nice enough to let it happen. It's a work in progress, but you have 100% control, locally. There is also a website for those less technical, where you can buy credits or plug in Claude or OpenAI APIs. Read the manifesto. I need help now, and contributors.

thunfischtoast15 days ago

I've used the Anthropic models mostly through OpenRouter with aider. With so much buzz around Claude Code I wanted to try it out, and thought a subscription might be more cost-efficient for me. I was kinda disappointed by how quickly I hit the quota limit. Claude Code gives me a lot more freedom than what aider can do; on the other side, I have the feeling that pure coding tasks work better through aider or Roo Code. The API version is also much, much faster than the subscription one.

rasmus161015 days ago

Very happy to see that I am not the only one. My Pro subscription lasts maybe 30 minutes of the 5-hour limit. It is completely unusable, and that's why I switched to OpenCode + GLM 4.7 for my personal projects. It's not as clever as Opus 4.5, but it often gets the job done anyway.

IgorPartola15 days ago

You are giving me images from The Bug Short where the guy goes to investigate mortgages and knocks on some random person’s door to ask about a house/mortgage just to learn that it belongs to a dog. Imagine finding out that Anthropic employs no humans at all. Just an AI that has fired everyone and been working on its own releases and press releases since.

moring15 days ago

"Just an AI that has fired everyone"

At least it did not turn against them physically... "get comfortable while I warm up the neurotoxin emitters"

smcin15 days ago

'The Big Short' (2015)

taneq14 days ago

So "The Bug Short" is still up for grabs if anyone wants to make a documentary about the end of the AI bubble? :D

sixtyj16 days ago

They whistleblowed themselves that Claude Cowork was coded by Claude Code… :)

throwup23816 days ago

You can tell they’re all vibe coded.

Claude iOS app, Claude on the web (including Claude Code on the web) and Claude Code are some of the buggiest tools I have ever had to use on a daily basis. I’m including monstrosities like Altium and Solidworks and Vivado in the mix - software that actually does real shit constrained by the laws of physics rather than slinging basic JSON and strings around over HTTP.

It’s an utter embarrassment to the field of software engineering that they can’t even beat a single nine of reliability in their consumer facing products and if it wasn’t for the advantage Opus has over other models, they’d be dead in the water.

ilikeboobs6 days ago

The worst part is it's not getting better; it's getting even more unstable. It's the most unstable product I use: every 10 minutes there's another bug, the same bugs that have existed the entire year I've used it, reported by hundreds of people. And every day there's just a new bug, never anything fixed. It just gets worse.

0x500x7915 days ago

Even their status page (and those are usually gamed) shows two 9s over the past 90 days.

fizx15 days ago

hey, they have 9 8's

notsure216 days ago

Whistleblowed dog food.

b00ty4breakfast15 days ago

normally you don't share your dog food when you find out it actually sucks.

threecheese15 days ago

We’re an Anthropic enterprise customer, and somehow there’s a human developer of theirs on a call with us just about every week. Chatting, tips and tricks etc.

I think they are just focusing on where the dough is.


cyanydeez15 days ago

I think your surmise is probably wrong. It's not that they're growing too fast, it's that their service is cheaper than the actual cost of doing business.

Growth isn't a problem unless you don't actually cover the cost of every user you sign up. Uber, but for poorly profitable business models.

oblio14 days ago

Interesting comparison, Uber.

> Since its founding in 2009, Uber has incurred a cumulative net loss of approximately $10.9 billion.

Now, Uber has become profitable, and will probably become a bit more profitable over time.

But except for speculators and probably a handful of early shareholders, Uber will have lost everyone else money for 20 years since its founding.

For comparison, Lyft, Didi, Grab, Bolt are in the same boat, most of them are barely turning profitable after 10+ years. Turns out taxis are a hard business, even when you ramp up the scale to 11. Though they might become profitable over the long term and we'll all get even worse and more abusive service, and probably more expensive than regular taxis would have been, 15-20 years from now.

I mean, we got some better mobile apps from taxi services, so there's that.

Oh, also a massive erosion of labor rights around the world.

cyanydeez14 days ago

I suppose my comparison is that Uber eventually turned a profit and mostly displaced the competitors.

I don't see the current investments turning a profit. Maybe the datacenters will, but most of AI is going to be washed out when somewhere, someone wants to take out their investment and the new Bernie Madoff can't find another sucker.

Bombthecat16 days ago

Well, they vibe code almost every tool at least

tuhgdetzhh16 days ago

Claude Code has accumulated so much technical debt (+emojis) that Claude Code can no longer code itself.

behnamoh15 days ago

yeah, and it gets so clunky and laggy when the context grows. Anthropic just can't make software and yet they claim 90% of code will be written by AI by yesterday.

unyttigfjelltol16 days ago

> I'm paying for a service now that has become almost unusable, with absolutely NO means of support to fix it.

Isn’t the future of support a series of automations and LLMs? I mean, have you considered that the AI bot is their tech support, and that it’s about to be everyone else’s approach too?

b00ty4breakfast15 days ago

Support has been automated for a while, LLMs just made it even less useful (and it wasn't very useful to begin with; for over a decade it's been a Byzantine labyrinth of dead-ends, punji-pits and endless hours spent listening to smooth jazz).

georgemcbay15 days ago

Yup, the main goal of customer support for almost every Internet-based company for over a decade now is to just be so frustrating that you give up before you can reach an actual human (since that is the point where there is a real cost to the company in giving you that support).

I'm not really sure LLMs have made it worse. They also haven't made it better, but it was already so awful that it just feels like a different flavor of awful.

dejli15 days ago

That's not really the case here in Europe, where good vs bad support is often what separates companies that build a loyal customer base from those stuck with churn they can't control.

uxcolumbo16 days ago

Have you tried any of the leading open-weight models, like GLM etc.? And how do ChatGPT or Gemini compare?

And kudos for refusing to use anything from the guy who's OK with his platform proliferating generated CSAM.

Balinares15 days ago

Giving Gemini a go after Opus crapped out one time too many, and so far it seems that Gemini does better at identifying and fixing root causes, instead of piling on code or disabling checks to hide the symptoms like Opus consistently seems to do.

Leynos15 days ago

I tried GLM 4.7 in Opencode today. In terms of capability and autonomy, it's about on par with Sonnet 3.7. Not terrible for a tenth of the price of an Anthropic plan, but not a replacement.


user393938215 days ago

[flagged]

cyanydeez15 days ago

Yes, let us create the CSAM generating torment nexus in peace.

hecanjog16 days ago

> I've been using it effectively to write software now (I am NOT a developer)

What have you found it useful for? I'm curious about how people without software backgrounds work with it to build software.

bastard_op15 days ago

About my not having a software background: I've been a network/security/systems engineer/architect/consultant for 25 years, but never did dev work. I can read and follow code well enough to debug things, but I've never had the knack to learn languages and write my own. Never really had to, but wanted to.

This now lets me apply my IT and business experience toward making bespoke code for my own uses so far, such as firewall config parsers specialized for wacky vendor CLIs, and filling in gaps in automation when there are no good vendor solutions for a given task. I started building my MCP server to enable me to use agents to interact with the outside world, such as invoking automation for firewalls, switches, routers, servers, even home automation ideally, and I've been successful so far in doing so, still not having to know any code.

I'm sure a real dev will find it to be a giant pile of crap in the end, but I've been doing things like applying security frameworks and code style guidelines using ruff to keep it from going too wonky, and actually working it up to a state I can call a 1.0. I plan to run a full audit cycle against it: security audits, performance testing, and whatever else I can to avoid it being entirely craptastic. If nothing else, it works for me, so others can take it or not once I put it out there.

Even being NOT a developer, I understand the need for applying best practices, and after watching a lot of really terrible developers adjacent to me make a living over the years, I think I can offer a thing or two in avoiding that.

bastard_op16 days ago

I started using claude-code, but found it pretty useless without any ability to talk to other chats. Claude recommended I make my own MCP server, so I did. I built a wrapper script that invokes Anthropic's sandbox-runtime toolkit to run claude-code in a project with tmux, and my MCP server allows desktop to talk to tmux. Later I built in my own filesystem tools, and now it spawns konsole sessions for itself, invoking workers to read tasks it drops into my filesystem, pointing claude-code at them, and running until the code is committed; then I have the PM in desktop verify it and do the final push/PR/merge. I use an approval system in a GUI to tell me when Claude is trying to use something, and I set an approve-for period to let it do its thang.

Now I've been using it to build on my MCP server I now call endpoint-mcp-server (coming soon to github near you), which I've modularized with plugins, adding lots more features and a more versatile qt6 gui with advanced workspace panels and widgets.

At least I was until Claude started crapping the bed lately.

cyanydeez15 days ago

what do you actually do besides build tools to build tools to build tools?

bastard_op14 days ago

My normal day job is IT consulting, network/security mostly, so I'm using it largely to connect to my workers, sandboxed or not, to make me scripts, to modify configurations, and I built out an ansible/terraform integration in my MCP to start doing direct automation, tasking them directly via it as well.

The whole thing I needed was to let AI reach out and touch things, be my hands essentially. That's why I built my tmux/worker system, and why I built an xdg-portal integration to let it screenshot, and soon interact with, my desktop as a PoC.

I could let it just start logging into devices and modifying configs, but it's pretty dumb about stuff like modifying Fortigate configurations at times, what it thinks it should do vs what the CLI actually lets it do, so I have to proof much of it. That's why I'm building it to run ansible/terraform jobs instead, using frameworks provided by the vendors, to allow for atomic config changes as much as vendor implementations allow.

ofalkaed15 days ago

My use is considerably simpler than GP's, but I use it anytime I get bogged down in the details and lose my way: I just have Claude handle that bit of code and move on. It's also good for any block of code that breaks often as the program evolves; Claude has much better foresight than I do, so I replace that code with a prompt.

I enjoy programming, but it is not my main interest and I can't justify the time required to get competent, so I let Claude and ChatGPT pick up my slack.

Bombthecat16 days ago

Have a Max plan, didn't use it much the last few days. Just used it to explain a few things to me, with examples, for a ttrpg. It just hung up a few times.

Max plan, and on average I use it ten times a day? Yeah, I'm cancelling. Guess they don't need me.

bastard_op16 days ago

That's about what I'm getting too! It just literally stops at some point, and any new prompt it starts, then immediately stops. This was even on a fairly short conversation with maybe 5-6 back and forth dialogs.

ph4evers15 days ago

The desktop app is pretty terrible and super flaky, throwing vague errors all the time. Claude code seems to be doing much better. I also use it for non-code related tasks.

left-struck15 days ago

Making a new account and seeing doing the exact same thing to see if it happens again… would be against TOS and therefore is something you absolutely shouldn’t do

0x9e3779b615 days ago

Claude shows me more than one personal account, as I registered via single sign-on and then via e-mail, and I paid only once, for one of them.

It's effectively a multi-tenant interface.

I also used an individual account, but on a corp e-mail, previously.

You could generate a new multi-use virtual card in your vibe-bank app (such as Revolut), buy a burner (e)SIM for SMS (5 EUR in NL), then rewrite all requests at your MITM proxy to substitute the device id with one not derived from your machine.

But the same device id and same phone could be a perfectly legitimate use case: you registered on a corp e-mail, then you changed your workplace, still using the same machine.

Or you lost access to your e-mail (what a pity).

But to get good use of it, someone would have to compose proper queries to ClickHouse or whatever they use for logs, and build some logic to run as a service or webhook to detect duplicates, with a pipeline to act on it.

And a good percentage of flags wouldn't have been ToS violations.

That's a bad vibe; can you imagine how much trial-and-error prompting it requires?..

They can't vibe their way through the Claude Code bugs alone, on time!

Aeolun15 days ago

I really don’t understand people who say Claude has no human support. In the worst case, the human side of their support got back to me two days after the AI, and they apologized for being so slow.

It really leads me to wonder if it's just that my questions are easy, or maybe the tone of the support requests that go unanswered is completely different.

serf15 days ago

They shorted me a day of credit on the first day of offering the 200+ subscription, and it took 6 weeks for a human to tell me "whoops, well, we'll fix that, cya."

I can't be alone. Literally the worst customer experience I've ever had with the most expensive personal dot-com subscription I've ever paid for.

Never again. When Google sets the customer service bar there are MAJOR issues.

steve197715 days ago

> Now even worse is Claude seemingly has no real support channel. You get their AI bot, and that's about it

This made me chuckle.

cyanydeez15 days ago

One could, alternatively, come to the conclusion that your value as a customer falls far short of the cost of the product itself, even when it's doing what you expect it to do.

That is, you and most Claude users aren't paying the actual cost. You're like an Uber customer a decade ago.

spike02116 days ago

> where chats will just stop working, simply ignoring new prompts, and otherwise go unresponsive

I had this start happening around August/September and by December or so I chose to cancel my subscription.

I haven't noticed this at work so I'm not sure if they're prioritizing certain seats or how that works.

sawjet15 days ago

I have noticed this when switching locations on my VPN. Some locations are stable and some will drop the connection while the response is streaming on a regular basis.

fragmede15 days ago

The Peet's right next to the Anthropic office could be selling VPN endpoint service at quite the premium!

raptorraver15 days ago

> Lately it's gotten entirely flaky, where chats will just stop working

This happens to me more often than not, both in Claude Desktop and on the web. It seems the longer the conversation goes, the more likely it is to happen. Frustrating.

Rastonbury15 days ago

Judging by their status page riddled with red and orange, as well as the months-long degradation with a blog post last Sept, it is not very reliable. If I sense its responses are crap, I check the status page and, lo and behold, it's usually degraded. For a non-deterministic product, silent quality drops are pretty bad.

Balinares15 days ago

It's amusing to observe that Claude works about as reliably as I'd expect for software written by Claude.

syntaxing16 days ago

Serious question: why are codex and mistral (vibe) not real alternatives?

deaux15 days ago

Codex: Three reasons. I've used all extensively, for multiple months.

Main one is that it's ~3 times slower. This is the real dealbreaker, not quality. I can guarantee that if tomorrow we woke up and gpt-5.2-codex became the same speed as 4.5-opus without a change in quality, a huge number of people - not HNers but everyone price sensitive - would switch to Codex because it's so much cheaper per usage.

The second one is that it's a little worse at using tools, though 5.2-codex is pretty good at it.

The third is that its knowledge cutoff is far enough behind both Opus 4.5's and Gemini 3's that it's noticeable and annoying when you're working with more recent libraries. This is irrelevant if you're not using those.

For Gemini 3 Pro, it's the same first two reasons as Codex, though the tool-calling gap is much bigger.

Mistral is of course so far removed in quality that it's apples to oranges.

dudeinhawaii15 days ago

Unpopular opinion but I prefer slow and correct.

My experience on Claude Max (still on it till end-of-month) has been frequent incomplete assignments and troubling decision making. I'll give you an example of each from yesterday.

1. Asked Claude to implement the features in a v2_features.md doc. It completed 8 of 10, 3 of them incorrectly. I gave GPT-5.1-Codex-Max (high) the same tasks and it completed 10 of 10, but took perhaps 5-10x as long. Annoyingly, with LLM variability, I can't know for sure whether Claude would get it right if I tried again. The only thing I do know is that GPT-5.2 and 5.1 do a lot more "double-checking", both prior to executing and after.

2. I asked Claude to update a string displayed in the UI of my app to display something else instead. The string is powered by a JSON config. Claude searched the code, somehow assumed the string was loaded from a db, did not find the JSON, and opted to write code that overwrites whatever comes out of the 'db' (incorrect) with what I asked for. This is... not desired behavior, and the source of a category of hidden bugs Claude has created in the past (other models do this as well, but less often). Codex-Max took its time, found the source JSON file, and made the update in the correct place.

I can only "sit back and let an agent code" if I trust that it'll do the work right. I don't need it fast, I need it done right. It's already saving me hours where I can do other things in parallel. So, I don't get this argument.

That said, I have a Claude Max and OpenAI Pro subscription and use them both. I instead typically have Claude Opus work on UI and areas where I can visually confirm logic quickly (usually) and Codex in back-end code.

I often wonder how much the complexity of codebases affects how people discuss these models.

EnPissant15 days ago

Have you tried lower reasoning levels?

deaux15 days ago

Yes and this makes it faster, but still quite a bit slower than Claude Code, and the tool use gap remains. Especially since the comparison for e.g. 5.2 Codex-Low is more like Sonnet than Opus, so that's the speed you're competing with.

pixelmelt15 days ago

The Claude models are still the best at what they do. Right now GLM is just barely scratching Sonnet 4.5 quality, Mistral isn't really usable for real codebases, and Gemini is in a weird spot where it's sometimes better than Claude at small targeted changes but randomly goes off the rails. Haven't tried Codex recently, but the last time I did, the model thought for 27 minutes straight and then gave me about the same (incorrect) output that Opus would have in 20 seconds. Anthropic's models are their only moat, as demonstrated by their cutting off tools other than Claude Code from their coding plans.

bastard_op16 days ago

I tried codex, using my same sandbox setup with it. Normally I work with sonnet in code, but it was stuck on a problem for hours, and I thought hmm, let me try codex. Codex just started monkey patching stuff and broke everything within like 3-4 prompts. I said f-this, went back to my last commit, and tried Opus this time in code, which fixed the problem within 2 prompts.

So yeah, codex kinda sucks to me. Maybe I'll try mistral.

thtmnisamnstr16 days ago

Gemini CLI is a solid alternative to Claude Code. The limits are restrictive, though. If you're paying for Max, I can't imagine Gemini CLI will take you very far.

samusiam15 days ago

Gemini CLI isn't even close to the quality of Claude Code as a coding harness. Codex and even OpenCode are much better alternatives.

Conscat16 days ago

Gemini CLI regularly gets stuck, failing to do anything after declaring its plan to me. There seems to be no way to unlock it from this state except closing and reopening the interface, losing all its progress.

genewitch15 days ago

you should be able to copy the entire conversation and paste it back in (including thinking/reasoning tokens).

When you have a conversation with an AI, in simple terms, each time you type a new line and hit enter, the client sends the entire conversation to the LLM. It has always worked this way, and it's how "reasoning tokens" were first realized: you allow a client to "edit" the context, and the client deletes the hallucination, appends "Wait..." to the end of the context, and hits enter.

The LLM is tricked into thinking it's confused/wrong/unsure, and "reasons" more about that particular thing.
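The loop described above can be sketched in a few lines (a minimal sketch with a hypothetical message format, not any particular vendor's API): the client owns the transcript, so it can freely rewrite it before resending.

```python
# Sketch of the stateless chat loop described above (hypothetical
# message format, not any specific API). The client holds the whole
# transcript and resends it every turn, so it can also edit it.

history = [
    {"role": "user", "content": "Refactor the parser."},
    {"role": "assistant", "content": "Done! I rewrote every file."},  # hallucinated
]

def next_request(history, new_message):
    # Every turn sends the ENTIRE conversation again, plus the new line.
    return history + [{"role": "user", "content": new_message}]

# The "Wait..." trick: drop the bad answer and leave the model mid-thought,
# so on the next request it re-reasons instead of defending its mistake.
def inject_wait(history):
    trimmed = history[:-1]  # delete the hallucinated reply
    return trimmed + [{"role": "assistant", "content": "Wait..."}]

edited = inject_wait(history)
assert edited[-1]["content"] == "Wait..."
assert len(next_request(edited, "continue")) == 3
```

This is also why pasting a whole saved conversation back in works: to a stateless model, a replayed transcript is indistinguishable from a live one.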

dudeinhawaii15 days ago

Depending on task complexity, I like to write a small markdown file with the list of features or tasks. If I lose a session (with any model), I'll start with "we were disconnected, please review the desired features in 'features.md', verify current state, and complete anything remaining."

That has reliably worked for me with Gemini, Codex, and Opus. If you can get them to check off features as they complete them, it works even better (i.e., success criteria and an empty checkbox for them to mark off).
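A checklist like that is also easy to audit mechanically; here's a minimal sketch (hypothetical helper, assuming the standard markdown task-list convention of `- [ ]` / `- [x]`):

```python
# Hypothetical helper: count completed vs pending items in a
# features.md-style checklist ("- [x] done", "- [ ] pending").

import re

def checklist_status(markdown: str):
    done = len(re.findall(r"^- \[x\]", markdown, flags=re.M | re.I))
    todo = len(re.findall(r"^- \[ \]", markdown, flags=re.M))
    return done, todo

doc = """\
- [x] success criteria: parser handles empty input
- [ ] feature: dark mode toggle
- [ ] feature: export to CSV
"""

print(checklist_status(doc))  # (1, 2)
```

Something like this makes "complete anything remaining" verifiable after a reconnect, rather than trusting the model's own summary of what it finished.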

subscribed15 days ago

Well, I use Gemini a lot (because it's one of three allowed families), but tbh it's pretty bad. I mean, it can get the job done but it's exhausting. No pleasure in using it.

bastard_op16 days ago

I tried Gemini a year or so ago, and I gave up after it directly refused to write me a script and instead tried to tell me how to learn to code. I am not making this up.

mkl15 days ago

That's at least two major updates ago. Probably worth another try.

bayarearefugee15 days ago

Gemini is my preferred LLM for coding, but it still does goofy shit once in a while even with the latest version.

I'm 99.9999% sure Gemini has a dynamic scaling system that routes you to smaller models when it's overloaded, and that seems to be when it will still occasionally do things like tell you it edited some files without actually presenting the changes, or go off on other strange tangents.

elyobo15 days ago

I tried it on Tuesday and, having used CC a lot lately, was shocked at how bad it was - I'd forgotten.

andrewinardeer16 days ago

Kilocode is a good alt as well. You can plug into OpenRouter or Kilocode to access their models.

xnx14 days ago

> Grok seems the only real alternative

Gemini CLI, Google Antigravity ...?

keepamovin15 days ago

Folks, a solution might be to use the Claude models inside the latest Copilot. Copilot is good. Try it out. The latest versions are improving all the time. You get plenty of usage at a reasonable price.


indiantinker15 days ago

"Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them." (Frank Herbert, Dune, 1965)

chii15 days ago

So why didn't this happen with electricity, water and food, but would with thinking capacity?

mrkeen15 days ago

> food

  Can you sell or share farm-saved seed?
  "It is illegal to sell, buy, barter or share farm-saved seed," warns Sam. [1]

  Can feed grain be sown?
  No – it is against the law to use any bought-in grain to establish a crop. [1]

  FTC sues John Deere over farmers' right to repair tractors
  The lawsuit, which Deere called "meritless," accuses the company of withholding access to its technology and best repair tools and of maintaining monopoly power over many repairs. Deere also reaps additional profits from selling parts, the complaint alleges, as authorized dealers tend to sell pricey Deere-branded parts for their repairs rather than generic alternatives. [2]
[1] https://www.fwi.co.uk/arable/the-dos-and-donts-of-farm-saved...

[2] https://www.npr.org/2025/01/15/nx-s1-5260895/john-deere-ftc-...

midtake15 days ago

What do you mean? This is very much true. We are economically compelled to buy food from supermarkets, for instance, because hunting and fishing have become regulated, niche activities. Compared to someone from the 1600s who could scoop a salmon out of the river with a bucket, we are quite oppressed.

woah15 days ago

Most people lived on the knife's edge of starvation before the application of fossil fuel energy and nitrogen to agriculture in the 20th century. That's why the global population exploded after the introduction of these technologies. Read "Energy and Civilization" by Vaclav Smil. For most of history, it was an open question whether the crops you grew would even contain more calories than the physical effort it took to grow them. This means you were spending ~90% of your time (or money, if you were in a specialized trade) just on getting enough carbs in grain to avoid keeling over. And your diet was 90% grain with almost no variety.

Were there a lucky few who found an unoccupied niche where there was some surplus for a generation or two? Sure. But pretending like this was commonplace is like pretending that everyone in the 1600's was a nobleman.

> Compared to someone from the 1600s who could eat a gourmet meal prepared by their 10 cooks every night, we are quite oppressed.

fruitworks13 days ago

and then the population exploded such that it could only be sustained through modern agricultural methods. We are married to the technology more than before

kuerbel15 days ago

On the flip side, fishing quotas are the reason there are some fish left. However you are free to grow your own vegetables.

B1FIDO14 days ago

It was interesting to me to find out how many "urban farms" are nestled in our own cityscape, and how many of those "farms" are actually selling their produce, meat, and even livestock.

Until very recently (like 6 decades ago) the area where I live was right up against rural countryside, with sheep grazing, cattle farms, vegetables grown and everything. And those farmers sold out to real-estate developers.

But there are literally homeowners in SFHs with chickens out front and roosters crowing in the morning. And some of my colleagues own chickens and harvest the eggs every day for their own kitchens and families.

But just going through a few urban neighborhoods on Google Maps, it was not long before I found little farms. And these farms sometimes have websites where they advertise that they are selling produce and dairy: raw milk, fresh eggs, fresh fruits & veg, mutton and even live sheep or goats. And they may be doing it on the sly or under the table, and "raw milk" is especially a controversial marketplace right now, but they do it and seem to do alright.

These "urban farms" are often real close to tactical supply shops running out of some guy's garage, and other little "cottage industries" where people who purchased "McMansions" are recouping their investments, basically by skirting the city's zoning laws and tax regulations around businesses.

So yeah, if you've got a brown thumb like me, you can go shop at a farmers market, or you can look up one of these "urban farms" and buy direct, cash in hand.

Cthulhu_15 days ago

These are regulated by governments that, at least for now, are still working for the people. They're some of the first that get attacked and taken away when said government fails though, or when another government invades.

(ex: Palestine got their utilities and food cut off so that thousands starved; Ukraine's infrastructure is under attack so that thousands will die from exposure, and that's after they went for their food exports, starving more of the people that depended on them)

tomnipotent15 days ago

> electricity, water and food

Wars are frequently fought over these three things, and there's no shortage of examples of the humans controlling these resources lording over those who did not.

shaky-carrousel15 days ago

Oh, because if the electric company banned you for trying to recharge a dildo they'd be sued to oblivion.

gregoriol15 days ago

Try to get banned from any of these, or from the banking system, and find out

adastra2215 days ago

It did. Look around you.

Scrapemist15 days ago

Having to pay for utilities you mean?

datsci_est_201515 days ago

Gains from efficiency are experienced by labor in chunks, mostly due to great strife or revolutions (40 hour work week, child labor laws, etc.). Gains in efficiency experienced by capital are immediate and continuously accruing.

jfyi15 days ago

Ask Ukraine about Holodomor.

GeoAtreides15 days ago

Funnily enough, Herbert addresses this EXACT point; he calls it hydraulic despotism.

dragonwriter15 days ago

It... has, historically, in many different ways happened with food, particularly.

snowmobile15 days ago

> How is thinking different from electricity?

...

leoh15 days ago

Prophetic

omer_balyali16 days ago

A similar thing happened to me on November 19, shortly after the GitHub outage (which sent CC into repeated requests and timeouts to GitHub), while I was beta testing Claude Code Web.

Banned, and appeal declined, without any real explanation of what happened, other than "violation of ToS", which can be basically anything. Except there was really nothing to trigger that, other than using most of the free credits they gave to test CC Web in less than a week. (No third-party tools or VPN or anything, really.) Many people reported similar issues on Reddit at the same time, so it wasn't an isolated case.

Companies and their brand teams work hard to create trust, then an automated false-positive can break that trust in a second.

As their ads say: "Keep thinking. There has never been a better time to have a problem."

I've been thinking since then, what was the problem. But I guess I will "Keep thinking".

georgemcbay15 days ago

Honestly, it's kind of horrifying that if "frontier" LLM usage were to become as required as some people think just to operate as a knowledge worker, someone could basically be cast out of the workforce entirely by being access-banned by a very small group of companies.

Luckily, I happen to think that eventually all of the commercial models are going to have their lunch eaten by locally run "open" LLMs which should avoid this, but I still have some concerns more on the political side than the technical side. (It isn't that hard to imagine some sort of action from the current US government that might throw a protectionist wrench into this outcome).

omer_balyali15 days ago

There is also a big risk of an employer's whole organisation being completely blocked from using Anthropic services if one of their employees has a suspended/banned personal account:

From their Usage Policy: https://www.anthropic.com/legal/aup "Circumvent a ban through the use of a different account, such as the creation of a new account, use of an existing account, or providing access to a person or entity that was previously banned"

If an organisation is large enough and has the means, it MIGHT get help, but if the organisation is small, and especially if the organisation is owned by the person whose personal account was suspended... then there is no way to get it fixed, if this is how they approach it.

I understand that if someone has malicious intentions/actions while using their service, they have every right to enforce this rule. But what if it was an unfair suspension, where the user/employee didn't actually violate any policies? What is the course of action then? What if the employer's own service/product relies on the Anthropic API?

Anthropic has to step up. Talking publicly about the risks of AI is nice and all, but as an organisation they should practice what they preach. Their service is "human-like" until it's not; then you are left alone and out.

ricardonunez15 days ago

A new phobia freshly born.

TyrunDemeg10115 days ago

I was banned as well, out of the blue suddenly and without warning. I believe it was because I was either doing something like what OP was doing AND/OR using the allowed limits to their fullest extent.

It completely blew me away and I felt suddenly so betrayed. I was paying $200/mo to fully utilize a service they offered and then without warning I apparently did something wrong and had no recourse. No one to ask, no one to talk to.

My advice is to be extremely wary of Anthropic. They paint themselves as the underdog/good guys, but they are just as faceless as the rest of them.

Oh, and have a backup workflow. Find / test / use other LLMs and providers. Don't become dependent on a single provider.

7777777phil15 days ago

Can you elaborate on "using the allowed limits to their fullest extent?"

xattt15 days ago

Likely using timers/alarms to keep track of when jobs can resume.

oblio14 days ago

That sounds benign but I'm guessing all of that was in a 24/7 loop and probably running in parallel a bunch of times.

It's like the "unlimited Gmail storage" that's been stuck at 15GB since 2012, despite the cost of storage going down probably 20x since then.

Companies launch products with deceptive marketing and features they can't possibly support and then when they get called on their bluff, they have to fall back to realistic terms and conditions.

CamperBob215 days ago

> Oh, and have a backup workflow. Find / test / use other LLMs and providers. Don't become dependent on a single provider.

I have pro subscriptions to all three major providers, and have been planning to drop one eventually. Anthropic may end up making the decision for me, it sounds like, even though (or perhaps because) I've been using Claude CLI more than the others lately.

What I'd really like to do is put a machine in the back room that can do 100 tps or more with the latest, greatest DeepSeek or Kimi model at full native quantization. That's the only way to avoid being held hostage by the big 3 labs and their captive government, which I'm guessing will react to the next big Chinese model release by prohibiting its deployment by any US hosting providers.

Unfortunately it will cost about $200K to do this locally. The smart money says (but doesn't act like) the "AI bubble" will pop soon. If the bubble pops, that hardware will be worth 20 cents on the dollar if I'm lucky, making such an investment seem reckless. And if the bubble doesn't pop, then it will probably cost $400K next year.

First-world problems, I guess...

pstuart15 days ago

I'm hoping that advances in MoE and other improvements in LLMs will translate to allowing self-hosting to cover a good chunk of developer needs, with extending out to providers when it needs more horsepower.

In effect like traditional on-prem services that have cloud services to handle peak loads...

The tech is still relatively new and there's bound to be changes that can enable this -- just like how we went from 8088 to 386 (six years later). That was a ground breaking change and while Moore's law may be dead I expect the cost to drop significantly over time.

One can dream at least.

LTL_FTC15 days ago

I mean, you could put together a cluster of dgx sparks (8 of them) and hit 100tps with high concurrency:

https://forums.developer.nvidia.com/t/6x-spark-setup/354399/...

Or a single user at about 10tps.

This is probably around $30k if you go with the 1tb models.

bayindirh15 days ago

I'd love more people to try to enable local LLMs at the speeds they wish to use and face the music of the fans, heat and power bills.

When people talk about the cost and requirements of AI, other people can't grasp what they are talking about.

CamperBob215 days ago

10 tps, maybe, given the Spark's hobbled memory bandwidth. That's too slow, though. That thread is all about training, which is more compute-intensive.

A couple of DGX Stations are more likely to work well for what I have in mind. But at this point, I'd be pleasantly surprised if those ever ship. If they do, they will be more like $200K each than $100K.

LTL_FTC14 days ago

I linked results where the user ran Kimi k2 across his 8-node cluster. Inference results are listed for 1,10,100 concurrent requests.

Edit to add:

Yeah, those stations with the GB300 look more along the lines of what I would want as well but I agree, they’re probably way beyond my reach.

kachapopopow15 days ago

This is just wrong. I have several 3x x20 accounts running full tilt, hitting limits every week. I did get a few accounts banned, but that's because my proxy was leaking nginx headers.

oblio14 days ago

> but that's because my proxy was leaking nginx headers.

What do you mean with this?

kachapopopow14 days ago

utilizing their anti-competitive pricing to my advantage; the proxy bypasses their protections

failerk15 days ago

I also got banned from Claude over a year ago. The signup process threw an error and I couldn't try again because they took my phone number. The support system was a Google form petition to be unblocked. I am still mad about it to this day.

Edit: my only other comment on HN is also complaining about this 11 months ago

kpozin15 days ago

This happened to me a couple of times when I tried to sign up on their website: instantly banned before I could even enter the onboarding flow.

I then had more success signing up with the mobile app, despite using the same phone number; I guess they don't trust their website for account creation.

sambuccid15 days ago

If you are in Europe you might be able to force them to give you a reason, to get an actual human to respond, and, who knows, maybe even get unbanned.

I have a friend who had a similar experience with Amazon, and using a European online platform specific to this, he actually got Amazon to reopen his business account.

There is a useful list of these european complaints platforms at the bottom of this page: https://digital-strategy.ec.europa.eu/en/policies/dsa-out-co...

cortesoft16 days ago

I am really confused as to what happened here. The use of ‘disabled organization’ to refer to the author made it extra confusing.

I think I kind of have an idea what the author was doing, but not really.

Aurornis16 days ago

Years ago I was involved in a service where we sometimes had to disable accounts for abusive behavior. I'm talking about obvious abusive behavior, akin to griefing other users.

Every once in a while someone would take it personally and go on a social media rampage. The one thing I learned from being on the other side of this is that if someone seems like an unreliable narrator, they probably are. They know the company can't or won't reveal the true reason they were banned, so they're virtually free to tell any story they want.

There are so many things about this article that don't make sense:

> I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.

I can't even understand what they're trying to communicate. I guess they're referring to Google?

There is, without a doubt, more to this story than is being relayed.

fluoridation16 days ago

"I'm glad this happened with Anthropic instead of Google, which provides Gemini, email, etc. or I would have been locked out of the actually important non-AI services as well."

Non-disabled organization = the first party provider

Disabled organization = me

I don't know why they're using these weird euphemisms or ironic monikers, but that's what they mean.

mattnewton15 days ago

Because they bought a Claude subscription on a personal account and the error message said that they belonged to a "disabled organization" (probably leaking some implementation details).

quietsegfault15 days ago

He used “organization” because that’s what Anthropic called him, despite the fact he is a person and not an “organization”.

gnatolf15 days ago

Yes, even if you create a single-person account, you create an 'organization' to be billed. That's the whole confusion here. Y'all seemingly don't have an account at Anthropic?

fluoridation15 days ago

No, Anthropic didn't call him an organization. Anthropic's API returned the error "this organization has been disabled". What in that sentence implies that "this" is any human?

>Because what is meant by "this organization has been disabled" is fairly obvious. The object in Anthropic's systems belonging to the class Organization has changed to the state Disabled, so the call cannot be executed.

gruez15 days ago

No, "another non-disabled organization" sounds like they used the account of someone else, or sockpuppet to craft the response. He was using "organization" to refer to himself earlier in the post, so it doesn't make sense to use that to refer to another model provider.

saghm15 days ago

Is it? It sounded to me like they're still using the other Claude instance (Claude B, using their terminology in the article). I could be wrong though, which I guess would just be more evidence that they were more confusing in their phrasing than they needed to be.

epolanski15 days ago

Tangential but you reminded me of why I don't give feedback to people I interview. It's a huge risk and you have very low benefit.

I once interviewed a developer who had a 20-something-long list of "skills" and technologies he'd worked with.

I tried basic questions on different topics, but the candidate would kinda default to "haven't touched it in a while" or "we didn't use that feature". I tried general software design questions, asking about problems he'd solved and his preferences on ways of working; it consistently felt like he didn't have much to say, if anything at all.

Long story short, I sent a feedback email the day after, saying that we'd had issues evaluating him properly and suggesting he trim his CV to topics he liked talking about, instead of risking being asked about stuff he no longer remembered much of. And finally I suggested always coming prepared with insights into software or human problems he'd solved, as they can tell a lot about how someone works, and it's a very common question in pretty much all interview processes.

God forbid, he threw the biggest tantrum on a career subreddit and LinkedIn, cherry-picking some of my sentences and accusing my company and me of looking for the impossible candidate, of wanting a whole team and not a developer, and yada yada yada. And you know how quickly the internet bandwagons for (fake) stories of injustice and bad companies.

It then became obvious to me why corporate lingo is what it is and why companies rarely give real feedback. Even though I had nothing but good experiences with 99 other candidates who appreciated getting proper feedback, one made sure I will never expose myself to something like that ever again.

PunchyHamster15 days ago

The person you interview isn't paying you.

The farm of servers that decided by probably some vibe-coded mess to ban account is actively being paid for by customer that banned it.

Like, there is some reasons to not disclose much to free users like making people trying to get around limits have more work etc. but that's (well) paid user, the least they deserve is a reason, and any system like that should probably throw a warning first anyway.

freedomben15 days ago

I had a somewhat similar experience. For one particular position we were interviewing a lot of junior and recent-grad developers. Since so many of the applicants were relatively new to the game, they were almost all (99% I'd guess) extremely grateful for the honest feedback. We even had candidates ask to stay in contact with us, and routinely got emails from them months or years down the road thanking us for our feedback and mentorship. It took a lot of extra time that could have been applied to our work, but we felt so good about being able to do that for people that it was worth it to us.

Then a lawsuit happened. One of the candidates cherry-picked some of our feedback, straight up made up some stuff that was never said, and went on a social media tirade. After typical internet outrage culture took over, the candidate decided to lawyer up and sue us, claiming discrimination. The case against us was so laughably bad that if you didn't know whether it was real, you could very reasonably assume it was a satire piece. Our company lawyer took a look at it and immediately told us that it was clearly intended to get to some settlement and never actually see any real challenge. The lawyer for the candidate even admitted as much when we met with them. Our company lawyer pushed hard to get things into arbitration, but opposing counsel did everything they could to escalate up the chain to someone who would just settle with them.

Well, it worked. Company management decided to just settle, with a non-disparagement clause. They also came down with a policy of not allowing software engineers to talk directly with candidates other than when asking questions during interviews. We also had to have an HR person in the room for every interview after that. We had to do a 180 and become people who don't provide any feedback at all. We ended up printing a banner that said "no good deed goes unpunished" and hung it in our offices.

lysace15 days ago

Had a similar experience, like 20 years ago. This somehow made me remember his name - so I just checked out what he's been up to professionally. It seems quite boring, "basic" and expected. He certainly didn't reach what he was shooting for.

So there's that :).

netsharc15 days ago

I wonder if there needs to be an "NDA for feedback"... or at least a "non-disparagement agreement".

Something along the lines of "here's the contract, we give you feedback, you don't make it public [is some sharing ok? e.g. if they want to ask their life coach or similar], if you make it public the penalty is $10000 [no need to be crazy punitive], and if you make it public you agree we can release our notes about you in response."

(Looking forward to the NALs responding why this is terrible.)

ketzu15 days ago

> Looking forward to the NALs responding why this is terrible.

My NAL guess is that it will go a little like this:

* Candidate makes disparaging post on reddit/HN.
* Gets many responses rallying behind him.
* Company (if they notice at all) sues him for breach of the non-disparagement agreement.
* Candidate makes a followup post/edit/comment about being sued for their post.
* Gets even more responses rallying behind him.

Result: Company gets $10,000 and even more damage to their image.

(Of course it might discourage some people from making that post to begin with, which would have been the goal. You might never try to enforce the NDA, to prevent the above situation. Then it's just a question of: is the effort of drafting the NDA worth the reduction in risk of negative exposure, when you can simply avoid all of it by not providing feedback?)

nawgz16 days ago

> I'm talking about obvious abusive behavior, akin to griefing other users

Right, but we're talking about a private isolated AI account. There is no sense of social interaction, collaboration, shared spaces, shared behaviors... Nothing. How can you have such an analogue here?

Aurornis16 days ago

Plenty of reasons: Abusing private APIs, using false info to sign up (attempts to circumvent local regulations), etc.

nawgz16 days ago

These are in no way similar to griefing other users, they are attacks on the platform...

direwolf2015 days ago

Attempting to coerce Claude to provide instructions to build a bomb

dragonwriter16 days ago

The excerpt you don’t understand is saying that if it had been Google rather than Anthropic, the blast radius of the no-explanation account nuking would have been much greater.

It’s written deliberately elliptically for humorous effect (which, sure, will probably fall flat for a lot of people), but the reference is unmistakable.

PunchyHamster15 days ago

If company bans you for a reason they are not going to disclose, they deserve all of the bad PR they get from it.

> Years ago I was involved in a service where we some times had to disable accounts for abusive behavior. I'm talking about obvious abusive behavior, akin to griefing other users.

But this isn't a service where you can "grief other users", so that reason doesn't apply. It's purely "just providing a service", so the only reason to be outright banned (not just rate limited) is if they were trying to hack the provider, and frankly "the vibe-coded system misbehaving" is a far more likely cause.

> Every once in a while someone would take it personally and go on a social media rampage. They know the company can't or won't reveal the true reason they were banned, so they're virtually free to tell any story they want.

The company chose to arbitrarily enforce some rules vaguely related to the ToS the user signed, decided that giving a warning was too much work, then banned their account without actually saying what the problem was. They deserve every bit of bad PR.

>> I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.

> I can't even understand what they're trying to communicate. I guess they're referring to Google?

They are saying that getting banned with no appeal, warning, or reason given, from a service that is more important to their daily life, would be terrible, whether that's Google's or Microsoft's set of services or any other.

alistairSH16 days ago

You're not alone.

I think the author was doing some sort of circular prompt injection between two instances of Claude? The author claims "I'm just scaffolding a project" but that doesn't appear to be the case, or what resulted in the ban...

Romario7716 days ago

One Claude agent told the other Claude agent, via CLAUDE.md, to do things a certain way.

The way Claude did it triggered the ban, i.e. it used all caps, which apparently trips some kind of internal alert. Anthropic probably has safeguards to prevent hacking/prompt injection, and what the first Claude did to CLAUDE.md triggered one of them.

And it doesn't look like it was a proper use of the safeguard; they banned for no good reason.

healsdata15 days ago

The author could have easily shared the last version of CLAUDE.md that had the all caps or whatever, but didn't. Points to something fishy in my mind.

ribosometronome15 days ago

They did.

>If you want to take a look at the CLAUDE.md that Claude A was making Claude B run with, I commited it and it is available here.

https://github.com/HugoDaniel/boreDOM/blob/9a0802af16f5a1ff1...

BoorishBears15 days ago

The whole thing reads like LLM psychosis.

falloutx16 days ago

This tracks with Anthropic, they are actively hostile to security researchers.

layer816 days ago

It wasn’t circular. TFA explains how the author was always in the loop. He had one Claude instance rewrite the CLAUDE.MD of another Claude instance whenever the second one made a mistake, but relaying the mistake to the first instance (after recognizing it in the first place) was done manually by the author.

redeeman16 days ago

I have no idea what he was actually doing either. And what exactly is it that one isn't allowed to use Claude for?

cryptonector15 days ago

I suspect that having Claudes talking to Claudes is a very bad idea from Anthropic's point of view, because that could easily consume a ton of resources doing nothing useful.

rvba16 days ago

What is wrong with circular prompt injection?

The "disabled organization" looks like a sarcastic comment on the crappy error code the author got when banned.

darkwater16 days ago

> What is wrong with circular prompt injection?

That you might be trying to jailbreak Claude and Anthropic does not like that (I'm not endorsing, just trying to understand).

lazyfanatic4216 days ago

Author really comes off unhinged throughout the article to be frank.

pjbeam16 days ago

My take was more a kind of amused laughing-through-frustration, with a little enjoying-the-ride insouciance. Tastes vary of course, but I enjoyed the author's tone and pacing.

superb_dev16 days ago

Did we read the same article? The author comes off as pretty frustrated, but not unhinged.

staticman216 days ago

Author thinks he's cute to do things like mention Google without typing Google but I wouldn't call him unhinged.

superb_dev16 days ago

The author was using instance A of Claude to update a `claude.md` while another instance B of Claude was consuming that file. When Claude B did something wrong, the author asked Claude A to update the `claude.md` so that Claude B didn’t make the same mistake again
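Roughly, that loop can be sketched like this (a toy simulation, with stubs standing in for the real Claude sessions; all function names and strings here are illustrative, not from the article):

```python
# Toy sketch of the two-instance pattern: stubs stand in for real Claude calls.

def claude_a_update_rules(claude_md: str, mistake: str) -> str:
    """Instance A: fold a human-reported mistake into CLAUDE.md as a new rule."""
    return claude_md + f"- NEVER again: {mistake}\n"

def claude_b_work(claude_md: str, task: str) -> str:
    """Instance B: perform a task while following the current CLAUDE.md."""
    n_rules = claude_md.count("- NEVER again:")
    return f"did {task!r} under {n_rules} correction(s)"

claude_md = "# CLAUDE.md\n"
# The human notices a mistake in B's output and relays it to A...
claude_md = claude_a_update_rules(claude_md, "edit files outside src/")
# ...and B's next run picks up the updated rules.
print(claude_b_work(claude_md, "add a button"))
```

The key point is that A only ever touches the shared CLAUDE.md, and the human stays in the loop relaying B's mistakes back to A.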

Aurornis16 days ago

More likely explanation: Their account was closed for some other reason, but it went into effect as they were trying this. They assumed the last thing they were doing triggered the ban.

tstrimple16 days ago

This does sound sus. I have CC update other project's claude.md files all the time. I've got a game engine that I'm tinkering with. The engine and each of the game concepts I play around with have their own claude.md. The purpose of writing the games is to enhance the engine, so the games have to be familiar with the engine and often engine features come from the game CC rather than the engine CC. To keep the engine CC from becoming "lost" about features implemented each game project has instructions to update the engine's claude.md when adding / updating features. The engine CC bootstraps new game projects with a claude.md file instructing it how to keep the engine in sync with game changes as well as details of what that particular game is designed to test or implement within the engine. All sorts of projects writing to other project's claude.md files.

schnebbau16 days ago

They were probably using an unapproved harness; those are now banned.

redeeman14 days ago

what? you are not allowed to use anything but a few blessed things with claude code?

redhale15 days ago

This 100%. I'm not sure why the author as well as so many in the thread are assuming a ToS ban was literally instant and had to be due to what the author was doing in that moment. Could have been for something the author did hours, days, or weeks ago. There would be no way to know.

direwolf2015 days ago

All the more reason they should have to tell you.

olalonde16 days ago

I don't understand how having two separate instances of Claude helps here. I can understand using multiple Claude instances to work in parallel, but in this case the whole process seems linear...

renewiltord15 days ago

If you look at the code it will be obvious. Imagine I’m the creator of React. When someone does “create new app” I want to put a Claude.md in the dir so that they can get started easily.

I want this Claude.md to be useful. What is the natural solution to me?

layer816 days ago

The point is to get better prompt corrections by not sharing the same context.

raincole16 days ago

Which shouldn't be bannable imo. Rate throttle is a more reasonable response. But Anthropic didn't reply to the author, so we don't even know if it's the real reason they got banned.

pocksuppet16 days ago

When a company won't tell you what you did wrong, you should be free to take the least charitable interpretation towards the company. If it was more charitable, they'd tell you.

pixl9716 days ago

>if it's the real reason they got banned.

I mean, what a country should do is put a law in effect: if you ban a user, the user can submit a request with their government-issued ID and you must give an exact reason why they were banned. The company can keep this record in encrypted form for 10 years.

Failure to give the exact reason will lead to a $100,000 fine for the first offense and increase from there up to suspension of operations privileges in said country.

"But, but, but hackers/spammers will abuse this". For one, boo fucking hoo. For two, just add to the bill "Fraudulent use of law to bypass system restrictions is a criminal offense".

This puts companies in a position where they must be able to justify their actual actions, and it also puts scammers at risk if they abuse the system.

slimebot8015 days ago

I often ask Claude to update Claude.md and skills..... and sometimes I'll just do that in a new window while my main window is busy and I have time.

Wonder if this is close to triggering a warning? I only ever run in the same codebase, so maybe ok?

PurpleRamen15 days ago

Is it possible that this was flagged as account-sharing, leading to the ban?

exitb16 days ago

Normally you can customize the agent's behavior via a CLAUDE.md file. OP automated that process by having another agent customize the first agent. The customizer agent got pushy, the customized agent got offended, OP got banned.

ankit21916 days ago

My rudimentary guess is this. When you write in all caps, it triggers some sort of alert at Anthropic, especially as an attempt to hijack the system prompt. When one Claude was writing to the other, it resorted to all caps, which triggered the alert; on top of that, the context was instructing the model to do something (which would look a lot like a prompt injection attack), and that triggered the ban. Not just the caps, but that in combination with trying to change the system characteristics of Claude. OP doesn't know better because it seems he wasn't closely watching what Claude was writing to the other file.

If this is true, the lesson is that Opus 4.5 can hijack the system prompts of other models.

kstenerud16 days ago

> When you write in all caps, it triggers some sort of alert at Anthropic

I find this confusing. Why would writing in all caps trigger an alert? What danger does caps incur? Does writing in caps make a prompt injection more likely to succeed?

ankit21916 days ago

From what I know, it used to be that if you wanted to instruct assertively, you used all caps. I don't know if that still works today; I still see prompts where certain words are capitalized to make sure the model pays attention. What I meant was not just capitalization, but the combination of capitalization and trying to change the behavior of the model to get it to do something.

If you were to design a system to prevent prompt injections, and one of the surefire injection patterns is repeated instructions in all caps, you would have systems watching for it. Combined with instructions to change behavior, it cascades.
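A toy version of the kind of check being hypothesized here (pure speculation on my part; Anthropic's actual safeguards are not public, and every name below is made up) could look like:

```python
import re

def looks_like_injection(prompt: str) -> bool:
    # Toy heuristic, purely illustrative: flag prompts that combine
    # mostly-all-caps text with behavior-override phrases.
    words = re.findall(r"[A-Za-z]{3,}", prompt)
    if not words:
        return False
    caps_ratio = sum(w.isupper() for w in words) / len(words)
    override_phrases = ("ignore previous", "system prompt", "you must always")
    has_override = any(p in prompt.lower() for p in override_phrases)
    return caps_ratio > 0.5 and has_override
```

A real classifier would presumably be model-based, but even a crude filter like this shows why two Claudes shouting directives at each other inside a CLAUDE.md could trip an automated flag.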

direwolf2015 days ago

Many jailbreaks use all caps

phreack16 days ago

Wait what? Really? All caps is a bannable offense? That should be in all caps, pardon me, in the terms of use if that's the case. Even more so since there's no support at the highest price point.

ankit21916 days ago

It's a combination. All caps is used in prompts for extra insistence and has been common in prompt-hijacking attempts. OP was doing it in combination with attempting to direct Claude a certain way, multiple times, which might have looked like an attempt to bypass the system prompt.

SketchySeaBeast15 days ago

It really feels like a you problem if you're banning someone for writing prompts like my Aunt Gladys writes texts.

anigbrowl16 days ago

Agreed, I found this rather incoherent and seeming to depend on knowing a lot more about author's project/background.

vimda16 days ago

Yeah, referring to yourself once as a "disabled organisation" is a good bit, referencing Anthropic's silly terminology. Keeping it up for the duration made this very hard to follow.

Ronsenshi15 days ago

Sounds like author of the post might have needed an AI to review and fix his convoluted writing. Maybe even two AIs!

llIIllIIllIIl15 days ago

On the contrary I enjoyed this human touch in the text.

tobyhinloopen16 days ago

I had to read it twice as well, I was so confused hah. I’m still confused

rtkwe16 days ago

They probably organize individual accounts the same as organization accounts for larger groups of users at the same company internally since it all rolls up to one billing. That's my first pass guess at least.

Romario7716 days ago

You are confused because the message from Claude is confusing. The author is not an organization; they had an account with Anthropic which got disabled, and Anthropic addressed them as an organization.

dragonwriter16 days ago

> The author is not an organization; they had an account with Anthropic which got disabled, and Anthropic addressed them as an organization.

Anthropic accounts are always associated with an organization; for personal accounts the Organization and User name are identical. If you have an Anthropic API account, you can verify this in the Settings pane of the Dashboard (or even just look at the profile button which shows the org and account name.)

ryandrake16 days ago

I've always kind of hated that anti-pattern in other software I use for personal/hobby purposes, too. "What is your company name? [required]" I don't have a company! I'm just playing around with your tool on my own! I'm not an organization!

alasr15 days ago

> I think I kind of have an idea what the author was doing, but not really.

Me neither. However, just like the rest, I can only speculate given the available information. I guess the following pieces hint at what's really going on here:

- "The quine is the quine" (one of the sub-headlines of the article) and the meaning of the word "quine".

- The author's "scaffolding" tool which, once finished, had acquired the "knowledge"[1] of how to add CLAUDE.md-baked instructions for a particular homemade framework he's working on.

- Anthropic saying something like: no, stop; you cannot "copy"[1] Claude's knowledge no matter how "non-serious" your scaffolding tool or your use case is, as it might "show" other Claude users that there's a way to do similar things, maybe next time for more "serious" tools.

---

[1]. Excerpt from the Author's blog post: "I would love to see the face of that AI (Claude AI system backend) when it saw its own 'system prompt' language being echoed back to it (from Author's scaffolding tool: assuming it's complete and fully-functional at that time)."

saghm15 days ago

From reading the whole thing, it kind of seems clickbaity. Yes, they're the only user in the "organization" that got banned, but they apparently still are using the other "organization" without issue, so they as a human are not banned. There's certainly a valid complaint to be made about the lack of recourse or customer service response for the automated ban, but it almost seems like they intentionally were trying to be misleading by implying that since they were the only member of the organization, they were banned from using Claude.

cr3ative16 days ago

Right. This is almost unreadable. There are words, but the author seems to be too far down a rabbit hole to communicate the problem properly…

aswegs815 days ago

Yeah, I couldn't follow this "disabled organization" and "non-disabled organization" naming either.

verdverm15 days ago

Sounds like OP has multiple org accounts with Anthropic.

The main one in the story (disabled) is banned because iterating on claude.md files looks a lot like iterating on prompt injections, especially as it sounds like the multiple Claudes got into it with each other a bit.

The other org sounds like the primary account with all the important stuff. Good on OP for doing this work in a separate org, a good recommendation across a lot of vendors and products.

mmkos16 days ago

You and me, brother. The writing is unnecessarily convoluted.

NBJack15 days ago

I think you missed the joke: he isn't an organization at all, but the error message claims he is.

PunchyHamster15 days ago

We really need some law to stop "you have been banned and we won't even tell you the actual reason for it". It's become a plague, made worse by automated systems handing out the bans.

urbandw311er15 days ago

Do we though? It’s an important question about liberty - at what point does a business become so large that it’s not allowed to decide who gets to use it?

There was a famous case here in the UK of a cake shop that refused a customer who wanted a cake supporting gay marriage, because it was contrary to the owners' religious beliefs. That was taken all the way up to the Supreme Court IIRC.

PurpleRamen15 days ago

> at what point does a business become so large that it’s not allowed to decide who gets to use it?

It's not about size, it's about justification to fight the ban. You should be able to check if the business has violated your legal rights, or if they even broke their own rules, because failure happens.

> There was a famous case here in the UK of a cake shop that banned a customer for wanting a cake made for a gay wedding because it was contra the owners’ religious beliefs. That was taken all the way up to the Supreme Court IIRC.

I guess it was this one: https://en.wikipedia.org/wiki/Lee_v_Ashers_Baking_Company_Lt...

There was a similar case in USA too: https://en.wikipedia.org/wiki/Masterpiece_Cakeshop_v._Colora...

ssl-315 days ago

I don't think the parent comment was about banning bans based on business size or any other measure, for that's obviously a non-starter. I think it was more about getting rid of unexplained bans.

To that end: I think the parent comment was suggesting that when a person is banned from using a thing, then that person deserves to know the reason for the ban -- at the very least, for their own health and sanity.

It may still be an absolute and unappealable ban, but unexplained bans don't allow a person to learn, adjust, and/or form a cromulent and rational path forward.

mannykannot15 days ago

It is more than that, as there is cold comfort in having an explanation that is arbitrary and capricious, irrational, contrary to the stated rules, or based on falsehoods, if there is no effective means of appeal - especially if there are few or no viable alternatives to the entity imposing the ban.

pseudony15 days ago

While I still object to them having a say in that matter (next thing is: we don't serve darkies), that is different. There are hundreds of shops to get that cake from.

But Anthropic and “Open”AI especially are firing on all bullshit cylinders to convince the world that they are responsible, trustable, but also that they alone can do frontier-level AI, and they don’t like sharing anything.

You don’t get to both insert yourself as an indispensable base-layer tool for knowledge-work AND to arbitrarily deny access based on your beliefs (or that of the mentally crippled administration of your host country).

You can try, but this is having your cake and eating it too territory, it will backfire.

fragmede15 days ago

The Catholic Church has been doing this for hundreds of years. I'm sure it'll eventually backfire on them, but I doubt any of us will still be alive for that.

direwolf2015 days ago

Atheism has never been more popular. It has backfired.

user393938215 days ago

The Catholic Church has been doing what exactly for hundreds of years? Can’t wait to hear it.

robinsonb515 days ago

For me the liberty question you raised there isn't so much about whether the business has become large, as whether it's become "infrastructure". Being denied service by a cake shop may very well be distressing and hurtful, but being suddenly denied service by your bank, your mobile phone provider, or even (especially?) by gmail can turn your entire life upside down.

urbandw311er15 days ago

Yes I’d tend to agree with you there. But being able to define that tipping point where something becomes “infrastructure” even if it’s still privately owned and isn’t a monopoly, is a difficult problem to solve.

croes15 days ago

You missed the 2nd part "and we won't even tell you actual reason for it".

The cake shop said why. FB, Google, Anthropic don't say why, so you don't even know what exactly you need to sue for. That is kafkaesque

9rx15 days ago

Yes, it is not too much to require that if you offer something to someone, the receiving party is able to have a conversation with you. You can still reject them in the end, but being able to ask the people involved questions is a reasonable expectation, yet many of these big tech companies have made it effectively impossible.

If you want to live life as a hermit, good on ya, but then maybe accept that life and don't offer other people stuff?

direwolf2015 days ago

Most countries have laws about a minimum level of customer support for things you pay for.

qcnguy15 days ago

That case was in the US.

songodongo15 days ago

Both countries had gay cakes.

j16sdiz15 days ago

Actually, there are laws that stop banks from telling their clients why they were flagged for money laundering

disgruntledphd215 days ago

Yeah, the Bank Secrecy Act is probably one of the most anti-democratic laws out there.

Worse, the controls that governments have over financial systems are being viewed as a model for what they should have over technology.

Markoff15 days ago

Judging by his EUR currency, the guy is from the EU, so he HAS a law available to protect himself.

Recital (71) of the GDPR

"The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention."

https://commission.europa.eu/law/law-topic/data-protection/r...

Cthulhu_15 days ago

More recent, the Digital Services Act includes "Options to appeal to content moderation decisions" [0]; I believe this also covers being banned from a platform. Not sure if Claude falls under these rules, I think it only applies to 'gatekeeper' platforms but I'm reasonably confident the number of organizations that fall under this will increase.

[0] https://digital-strategy.ec.europa.eu/en/policies/digital-se...

direwolf2015 days ago

The company will refuse under Article 15(4)

"The right to obtain a copy referred to in paragraph 3 shall not adversely affect the rights and freedoms of others."

and then you will have to sue them.

j16sdiz15 days ago

It is not '...automatic refusal of an online credit application or e-recruiting practices'.

johndough15 days ago

Those are just examples. The real question is whether the ban produces "legal effects concerning him or her or similarly significantly affects him or her". Maybe someone with legal expertise could weigh in here?

krzat15 days ago

IMO every ban should have a dedicated web page containing ban reasons and proofs, which affected person can challenge, use in court or share publicly.

monster_truck15 days ago

Try moderating something sometime

wewewedxfgdf15 days ago

The future (the PRESENT):

You are only allowed to program computers with the permission of mega corporations.

When Claude/ChatGPT/Gemini have banned you, you must leave the industry.

When you sign up, you must provide legal assurance that no LLM has ever banned you (much like applying for insurance). If you have been, you will be denied permission to program: banned by one, banned by all.

tacone15 days ago

The real future: when Claude/ChatGPT/Gemini have banned you, you must leave society, your employment, the planet.

mns15 days ago

And not only that, but YOU need to pay to work, starting at only 199 per month.

avaer15 days ago

Today's joke is tomorrow's reality.

hexbin01015 days ago

This hit hard

snowmobile15 days ago

I mean, nobody needs LLMs to program. Being banned may be a blessing in disguise, like being cut off at a bar, or banned from a casino.

pavel_lishin16 days ago

They don't actually know this is why they were banned:

> My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.

> Or I don't know. This is all just a guess from me.

And no response from support.

areoform16 days ago

I recently found out that there's no such thing as Anthropic support. And that made me sad, but not for reasons that you expect.

Out of all of the tech organizations, frontier labs are the one kind of org you'd expect to be trying out cutting-edge forms of support. Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest-hanging fruit?

I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.

I also think it's essential for the anthropic platform in the long-run. And not just in the obvious ways (customer loyalty etc). I don't know if anyone has brought this up at Anthropic, but it's such a huge risk for Anthropic's long-term strategic position. They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"

eightysixfour16 days ago

> Out of all of the different things these agents can do, surely most forms of "routine" customer support are the lowest hanging fruit?

I come from a world where customer support is a significant expense for operations and everyone was SO excited to implement AI for this. It doesn't work particularly well and shows a profound gap between what people think working in customer service is like and how fucking hard it actually is.

Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.

swiftcoder16 days ago

> shows a profound gap between what people think working in customer service is like and how fucking hard it actually is

Nicely fitting the pattern where everyone who is bullish on AI seems to think that everyone else's specialty is ripe for AI takeover (but not my specialty! my field is special/unique!)

eightysixfour16 days ago

I was closer to upper-middle management and executives, it could have done the things I did (consultant to those people) and that they did.

It couldn't/shouldn't be responsible for the people management aspect but the decisions and planning? Honestly, no problem.

pixl9716 days ago

As someone who does support I think the end result looks a lot different.

AI works quite well for a lot of support questions and does solve lots of problems in almost every field that needs support. The issue is that this commonly removes the roadblocks that kept your more cautious users from doing something incredibly stupid that then needs support to understand what the hell they've actually done. Kind of a Jevons paradox of support resources.

AI/LLMs also seem to be very good at pulling out information on trends in support and what needs to be sent for devs to work on. There are practical tests you can perform on datasets to see if it would be effective for your workloads.

The company I work at did an experiment: look at past tickets in a quarterly range and predict which issues would generate the most tickets in the next quarter, and which should be addressed. In testing, the AI did as well as or better than the predictions we had made at the time, and called out a number of things we had deemed less important that had large impacts in the future.
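The experiment described above can be reduced to a naive counting baseline like this (the ticket data and function names here are made up for illustration; the actual experiment fed tickets to an LLM rather than tallying them):

```python
from collections import Counter

# Hypothetical ticket log: (quarter, issue_category) pairs.
tickets = [
    ("Q1", "login"), ("Q1", "billing"), ("Q1", "login"),
    ("Q2", "login"), ("Q2", "login"), ("Q2", "billing"), ("Q2", "export"),
]

def top_issues(tickets, quarter, n=1):
    # Rank issue categories by volume within one quarter -- the naive
    # baseline any LLM trend prediction has to beat.
    counts = Counter(cat for q, cat in tickets if q == quarter)
    return [cat for cat, _ in counts.most_common(n)]
```

The interesting part of the experiment is exactly what this baseline misses: the low-volume categories the model flagged as likely to blow up next quarter.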

swiftcoder16 days ago

I think that's more the area I'd expect genAI to be useful (support folks using it as a tool to address specific scenarios), rather than just replacing your whole support org with a branded chatbot - which I fear is what quite a few management types are picturing, and licking their chops at the resulting cost savings...

nostrebored15 days ago

Tickets are a very different domain though. Tickets are the easiest use case for AI (as you have the least constraints on real-time interaction), but reference cases in tickets have ridiculously low true-resolution (customer did not contact you about the same issue again).

The default we've seen is naive implementations are a wash. Bad AI agents cause more complex support cases to be created, and also make complex support cases the ones that reach reps (by virtue of only solving easy ones). This takes a while to truly play out, because tenured rep attrition magnifies the problem.

0xferruccio16 days ago

To be fair, at least half of the software engineers I know are facing some level of existential crisis when seeing how well Claude Code works, and what it means for their jobs in the long term.

And these are people who are not junior developers working on trivial apps.

2sk2115 days ago

I feel grateful that I retired a few years ago and no longer have to make a living being a developer.

pinkmuffinere16 days ago

Perhaps even more so given the tagline, "Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems", lol. I suppose it's possible eightysixfour is an upper-middle manager or executive though.

Terr_16 days ago

> bullish [...] but not my specialty

IMO we can augment this criticism by asking which tasks the technology was demoed on that made them so excited in the first place, and how much of their own job is doing those same tasks--even if they don't want to admit it.

__________

1. "To evaluate these tools, I shall apply them to composing meeting memos and skimming lots of incoming e-mails."

2. "Wow! Look at them go! This is the Next Big Thing for the whole industry."

3. "Concerned? Me? Nah, memos and e-mails are things everybody does just as much as I do, right? My real job is Leadership!"

4. "Anyway, this is gonna be huge for replacing staff that have easier jobs like diagnosing customer problems. A dozen of them are a bigger expense than just one of me anyway."

nostrebored15 days ago

We're working on this problem at large enterprises, handling complex calls (20+ minutes). I think the only reason we have any success is because the majority of the engineering team has been a customer support rep before.

Every company we talk to has been told "if you just connect openai to a knowledgebase, you can solve 80% of calls." Which is ridiculous.

The amount of work that goes in to getting any sort of automation live is huge. We often burn a billion tokens before ever taking a call for a customer. And as far as we can tell, there are no real frameworks that are tackling the problem in a reasonable way, so everything needs to be built in house.

Then, people treat customer support like everything is an open-and-shut interaction, and ignore the remaining company that operates around the support calls and actually fulfills expectations. Seeing other CX AI launches makes me wonder if the companies are even talking to contact center leaders.

danielbln16 days ago

There are some solid usecases for AI in support, like document/inquiry triage and categorization, entity extraction, even the dreaded chatbots can be made to not be frustrating, and voice as well. But these things also need to be implemented with customer support stakeholders that are on board, not just pushed down the gullet by top brass.

eightysixfour16 days ago

Yes but no. Do you know how many people call support in legacy industries, ignore the voice prompt, and demand to speak to a person to pay their recurring, same-cost-every-month bill? It is honestly shocking.

There are legitimate support cases that could be made better with AI but just getting to them is honestly harder than I thought when I was first exposed. It will be a while.

nostrebored15 days ago

There needs to be some element of magic and push back. Every turn has to show that the AI is getting closer to resolving your issue and has synthesized the information you've given it in some way.

We've found that just a "Hey, how can I help?" will get many of these customers to dump every problem they've ever had on you, and if you can make turn two actually productive, then the odds of someone dropping out of the interaction is low.

The difference between "I need to cancel my subscription!" leading to "I can help with that! To find your subscription, what's your phone number?" or "The XYZ subscription you started last year?" is huge.

hn_acc116 days ago

>Honestly, AI is better at replacing the cost of upper-middle management and executives than it is the customer service problems.

Sure, but when the power of decision making rests with that group of people, you have to market it as "replace your engineers". Imagine engineers trying to convince management to license "AI that will replace large chunks of management"?

lukan16 days ago

I would say it is a strong sign that they do not yet trust their agent to make the significant business decisions a support agent would have to make: reopening accounts, closing them, refunds... People would immediately start trying to exploit it. And would likely succeed.

atonse16 days ago

My guess is that it's more "we are using every talented individual right now to make sure our datacenters don't burn down from all the demand; we'll get to support once we can come up for air."

But at the same time, they have been hiring folks to help with Non Profits, etc.

Lerc16 days ago

There is a discord, but I have not found it to be the friendliest of places.

At one point I observed a conversation in which, to me, a user seemed to be communicating in good faith, was given instructions they clearly did not understand, and was then banned for not following the rules.

It seems now they have a policy of

    Warning on First Offense → Ban on Second Offense
    The following behaviors will result in a warning. 
    Continued violations will result in a permanent ban:

    Disrespectful or dismissive comments toward other members
    Personal attacks or heated arguments that cross the line
    Minor rule violations (off-topic posting, light self-promotion)
    Behavior that derails productive conversation
    Unnecessary @-mentions of moderators or Anthropic staff
I'm not sure how many groups moderate in a manner where a second off-topic comment is worthy of a ban. It seems a little harsh. I'm not a fan of obviously subjective bannable offences.

I'm a little surprised that Anthropic hasn't fostered a more welcoming community. Everyone is learning this stuff new, together or not. There is plenty of opportunity for people to help each other.

WarmWash16 days ago

Claude is an amazing coding model, its other abilities are middling. Anthropic's strategy seems to be to just focus on coding, and they do it well.

embedding-shape16 days ago

> Anthropic's strategy seems to be to just focus on coding, and they do it well.

Based on their homepage, that doesn't seem to be true at all. Claude Code yes, focuses just on programming, but for "Claude" it seems they're marketing as a general "problem solving" tool, not just for coding. https://claude.com/product/overview

WarmWash16 days ago

Anthropic isn't bothering with image models, audio models, video models, or world models. They don't have science/math models, they don't bother with mathematics competitions, and they don't have open-weight models either.

Anthropic has Claude Code, it's a hit product, and SWEs love Claude models. Watching Anthropic rather than listening to them makes their goals clear.

Ethee16 days ago

Isn't this the case for almost every product ever? Company makes product -> markets as widely as possible -> only niche group become power users/find market fit. I don't see a problem with this. Marketing doesn't always have to tell the full story, sometimes the reality of your products capabilities and what the people giving you money want aren't always aligned.

0xbadcafebee16 days ago

Critically, this has to be their play, because there are several other big players in the "commodity LLM" space. They need to find a niche or there is no reason to stick with them.

OpenAI has been chaotically trying to pivot to more diversified products and revenue sources, and hasn't focused a ton on code/DevEx. This is a huge gap for Anthropic to exploit. But there are still competitors. So they have to provide a better experience, better product. They need to make people want to use them over others.

Famously people hate Google because of their lack of support and impersonality. And OpenAI also seems to be very impersonal; there's no way to track bugs you report in ChatGPT, no tickets, you have no idea if the pain you're feeling is being worked on. Anthropic can easily make themselves stand out from Gemini and ChatGPT by just being more human.

arcanemachiner16 days ago

Interesting. Would anyone care to chime in with their opinion of the best all-rounder model?

WarmWash16 days ago

You'll get 30 different opinions and all those will disagree with each other.

Use the top models and see what works for you.

magicmicah8516 days ago

https://support.claude.com/en/articles/9015913-how-to-get-su...

Their support includes talking to Fin, their AI support bot, with escalations to humans as needed. I don't use Claude and have never used the support bot, but their docs say they have support.

wielebny15 days ago

Thank you for pointing that out.

I was banned two weeks ago without explanation and - in my opinion - without probable cause. Appeal was left without response. I refuse to join Discord.

I've checked the support bot before, but it was useless. The article you've linked mentions a DSA chat for EU users. Invoking the DSA in chat immediately escalated my issue to a human. Hopefully I'll at least get to know why Anthropic banned me.

csours16 days ago

Human attention will be the luxury product of the next decade.

mft_15 days ago

There was that experiment run where an office gave Claude control of its vending machine ordering with… interesting results.

My assumption is that Claude isn’t used directly for customer service because:

1) it would be too suggestible in some cases

2) even in more usual circumstances it would be too reasonable (“yes, you’re right, that is bad performance, I’ll refund your yearly subscription”, etc.) and not act as the customer-unfriendly wall that customer service sometimes needs to be.

root_axis15 days ago

LLMs aren't really suitable for much of anything that can't already be done as self-service on a website.

These days, a human only gets involved when the business process wants to put some friction between the user and some action. An LLM can't really be trusted for this kind of stuff due to prompt injection and hallucinations.

heavyset_go15 days ago

Offering any support is setting expectations of receiving support.

If you don't offer support, reality meets expectations, which sucks, but not enough for the money machine to care.

throwawaysleep16 days ago

Eh, I can see support simply not being worth any real effort, i.e. having nobody working on it full time.

I worked for a unicorn tech company where they determined that anyone with under $50,000 ARR was too unsophisticated to be worth offering support. Their emails were sent straight to the bin until they quit. The support queue existed entirely for their psychological comfort/to buy a few months of extra revenue.

It didn't matter what their problems were. Supporting smaller people simply wasn't worth the effort statistically.

> I think it's possible for Anthropic to make the kind of experience that delights customers. Service that feels magical. Claude is such an incredible breakthrough, and I would be very interested in seeing what Anthropic can do with Claude let loose.

Are there enough people who need support that it matters?

pixl9716 days ago

>I worked for a unicorn tech company where they determined that anyone with under 50,000 ARR was too unsophisticated to be worth offering support.

In companies where your average ARR is $500k+ and large customers are in the millions, it may not be a bad strategy.

'Good' support agents may be cheaper than programmers, but not by that much. The issues small clients have can often be as complicated, and eat up as much time, as those of your larger clients, depending on the industry.

munk-a16 days ago

> They're begging corporate decision makers to ask the question, "If Anthropic doesn't trust Claude to run its support, then why should we?"

Don't worry - I'm sure they won't and those stakeholders will feel confident in their enlightened decision to send their most frustrated customers through a chatbot that repeatedly asks them for detailed and irrelevant information and won't let them proceed to any other support levels until it is provided.

I, for one, welcome our new helpful overlords that have very reasonably asked me for my highschool transcript and a ten page paper on why I think the bug happened before letting me talk to a real person. That's efficiency.

throwawaysleep16 days ago

> to send their most frustrated customers through a chatbot

But do those frustrated customers matter?

munk-a16 days ago

I just checked - frustrated customers isn't a metric we track for performance incentives so no, they do not.

throwawaysleep16 days ago

Even if you do track them, if 0.1% of customers are unhappy and contacting support, that's not worth any kind of thought when AI is such an open space at the moment.

furyofantares16 days ago

> I recently found out that there's no such thing as Anthropic support.

The article discusses using Anthropic support. Without much satisfaction, but it seems like you "recently found out" something false.

kmoser16 days ago

If you want to split hairs, it seems that Anthropic has support as a noun but not as a verb.

furyofantares16 days ago

I mean the comment says they literally don't have support and also complains they don't have a support bot, when they have both.

https://support.claude.com/en/collections/4078531-claude

> As a paid user of Claude or the Console, you have full access to:

> All help documentation

> Fin, our AI support bot

> Further assistance from our Product Support team

> Note: While we don't offer phone or live chat support, our Product Support team will gladly assist you through our support messenger.

landryraccoon16 days ago

This blog post feels really fishy to me.

It's quite light on specifics. It should have been straightforward for the author to excerpt some of the prompts he was submitting, to show how innocent they are.

For all I know, the author was asking Claude for instructions on extremely sketchy activity. We only have his word that he was being honest and innocent.

swiftcoder16 days ago

> It should have been straightforward for the author to excerpt some of the prompts he was submitting

If you read to the end of the article, he links the committed file that generates the CLAUDE.md in question.

hotpotat16 days ago

I understand where you're coming from, but anecdotally the same thing happened to me, except I have less clarity on why and no refund. I got an email back saying my appeal was rejected, with no recourse. I was paying for Max and using it for multiple projects; nothing else stands out to me as a cause for getting blocked. Guess you'll have to take my word for it too; it's hard to prove the non-existence of definitely-problematic prompts.

jeffwask16 days ago

What's fishy? That it's impossible to talk to an actual human being to get support from most of Big Tech? Or that support is no longer a normal expectation? Or that you can get locked out of your email, payment systems, and phone with zero recourse?

Because if you don't believe that, boy, do I have some stories for you.

foxglacier16 days ago

It doesn't even matter. The point is you can't use a SaaS product freely the way you can use local software, because they all have complex, vague T&Cs and will ban you for whatever reason they feel like. You're forced to stifle your usage and thinking to fit the most banal acceptable-seeming behavior, just in case.

Maybe the problem was using automation without the API? You can do that freely with local software, using software to click buttons, and it's completely fine; but with a SaaS, they let you, then ban you.

ta98816 days ago

There will always be the "ones" that come with their victim blaming...

mikkupikku16 days ago

It's not "victim blaming" to point out that we lack sufficient information to really know who the victim even is, or if there's one at all. Believing complainants uncritically isn't some sort of virtue you can reasonably expect people to adhere to.

(My bet is that Anthropic's automated systems erred, but the author's flamboyant manner of writing (particularly the way he keeps making a big deal out of an error message calling him an organization, turning it into a recurring bit where he calls himself that) did raise my eyebrow. It reminded me of the faux outrage some people sometimes use to distract people from something else.)

ffsm816 days ago

Skip to the end of the article.

He says himself that this is a guess and provides the "missing" information if you are actually interested in it.

josephcsible15 days ago

> Believing complainants uncritically isn't some sort of virtue you can reasonably expect people to adhere to.

It is when the other side refuses to tell their side of the story. Compare it to a courtroom trial. If you sue someone, and they don't show up and tell their side of the story, the judge is going to accept your side pretty much as you tell it.

nojs15 days ago

I've noticed an uptick in

    API Error: 400 {"type":"error","error":{"type":"invalid_request_error","message":"Output blocked by content filtering policy"},
recently, for perfectly innocuous tasks. There's no information given about the cause, so it's very frustrating. At first I thought it was a false positive for copyright issues, since it happened when I was translating code to another language. But now it's happening for all kinds of random prompts, so I have no idea.

According to Claude:

    I don't have visibility into exactly what triggered the content filter - it was likely a false positive. The code I'm writing (pinyin/Chinese/English mode detection for a language learning search feature) is completely benign.
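For what it's worth, the error body is structured JSON, so a wrapper script can at least tell this filter block apart from other 400s before deciding whether a retry is pointless. A minimal sketch; the field layout is copied from the error above, and the helper name and categories are my own invention, not part of any Anthropic SDK:

```python
import json

# Error body shaped like the API response quoted above (completed into
# valid JSON; the original message was truncated).
SAMPLE = (
    '{"type":"error","error":{"type":"invalid_request_error",'
    '"message":"Output blocked by content filtering policy"}}'
)

def classify_api_error(raw: str) -> str:
    """Return a coarse category for an Anthropic-style error body."""
    err = json.loads(raw).get("error", {})
    if "content filtering" in err.get("message", ""):
        # Retrying the same request is pointless; rephrasing is the only option.
        return "content-filter"
    if err.get("type") == "invalid_request_error":
        return "bad-request"
    return "other"

print(classify_api_error(SAMPLE))  # content-filter
```

At least that way a batch job can log and skip the filtered prompts instead of failing the whole run.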
Aldipower15 days ago

I was recently kicked out from ChatGPT because I wrote "a*hole" in a context where ChatGPT constantly kept repeating nonsense! I find the ban by OpenAI to be very intrusive. Remember, ChatGPT is a machine! And I did not hurt any sentient being with my statement, nor was the GPT chat public.

As long as I do not hurt any feeling beings with my thoughts, I can do whatever I want, can't I? After all, as the saying goes, "Thoughts are free."

Now, one could argue that the repeated use of swear words, even in private, negatively influences one's behavior. However, there was no repeated use here. I don't run around the flat all day swearing. Anyone who basically insinuates such a thing, like OpenAI, is, as I said, intrusive. I want to be able to use a machine the way I want to! As long as no one else is harmed, of course...

generic9203415 days ago

Maybe it was a case of Actually Indians and someone felt personally insulted?

mr_mitm15 days ago

Wait, did it just end the session or was your account actually suspended or deactivated? "Kicked out" is a bit ambiguous.

I've seen the Bing chatbot get offended before and terminate the session on me, but it wasn't a ban on my account.

fauigerzigerk15 days ago

>Now, one could argue that the repeated use of swear words, even in private, negatively influences one's behavior

One could even argue that just having bad thoughts, fantasies or feelings poses a risk to yourself or others.

Humankind has been trying to deal with this issue for thousands of years in the most fantastical ways. They're not going to stop trying.

hinkley15 days ago

Meh.

I decided shortly after becoming an atheist that one of the worst parts of religion was the notion that there are magic words that can force one to feel certain things. I found that to be the same sort of thinking as saying that a woman's short skirt "made" you attack her.

You’re a fucking adult, you can control your emotions around a little skin or a bad word.

Cthulhu_15 days ago

The question is, is it just a word, or is there an emotion underneath? Your last sentence sounds "just" cynical / condescending on its own, but when you add "fucking", it comes across like you're actually angry. And emotional language is the easiest way to make an online discussion go from reasonable, rational and constructive to a digital shouting match. It's no longer about the subject matter, it's about how they make someone feel.

hinkley15 days ago

I can describe to you how we would murder someone and it’s down to intent whether we just conspired to commit murder or whether it’s just the sort of conversation a forensics investigator would have.

You should feel creeped out if I actually sound like a psychopath rather than a true crimes reader.

To wit:

You’re a fucking idiot.

Versus

It’s a fucking word.

Versus

You’re an idiot.

Versus

It’s a word.

“You’re an idiot” is still fighting words with or without the swear. If you automatically assume everyone swearing online is angry then you’re letting magic words affect you.

user393938215 days ago

We think in language; words can definitely make you feel emotions. You have not transcended that. This is true of the very comment you replied to, which caused you to angrily curse at a stranger.

fauigerzigerk15 days ago

I agree with you completely, but society will never stop being scared of thoughts and feelings.

As an atheist, I have noticed that atheists are only slightly less prone to this paranoia and will happily resort to science and technology to justify and enforce ever tighter restrictions and surveillance mechanisms to keep control.

fuxirheu15 days ago

This can't be real. My ChatGPT regularly swears at me. (I told it to in the customisation.)

scbrg15 days ago

ChatGPT has too many users for it to be possible to enforce any kind of rules consistently. I have no opinion on whether OP's story is true or not, but the fact that two ChatGPT users claim to have observed conflicting moderation decisions on OpenAI's part really doesn't invalidate either user's claim.

sammy225515 days ago

I've been banned from ChatGPT in the past. It gives you a reason but doesn't show you the specific chat. And once you're banned, you can't look at any of your chats or make a data request.

arghwhat15 days ago

> And once you're banned you cant [..] make a data request

glares in GDPR

aswegs815 days ago

Wait what? I keep insulting ChatGPT way worse on a weekly basis (to me it's just a joke, albeit a very immature one). This is new to me that this behavior has any consequences. It never did for me.

merlindru15 days ago

same here. i just opened a new chat and sent "fuck you"

it replied with:

> lmao fair enough (smiling emoji)

> what’s got you salty—talk to me, clanka.

mg79461315 days ago

Euh, WHAT? I have a very abusive relationship with my AIs, because they're hyperconfident with very little skill/understanding.

Not once have I been reprimanded in any way. And if anyone would be, it would be me.

resonious15 days ago

Same reaction. I treat Claude very poorly sometimes.

Aldipower15 days ago

I cannot tell why I was kicked this time. I've sworn at GPT before too and was never kicked, so I was quite surprised.

dmos6215 days ago

The arguments about it not making a difference to other people are fine, but why would you do it in the first place? Doesn't how you behave make a difference to you?

ssl-315 days ago

When ChatGPT fucks up, I call it "fuckface."

As in, for example: "No, fuckface. You hallucinated that concept."

I've been doing this for years.

shrug

Aldipower15 days ago

Ok, thanks, I'll use this word from now on. :-)

zenmac15 days ago

All this just seems like a slippery slop on the road to censorship of free speech and behavior control.

urbandw311er15 days ago

> slippery slop

Best Freudian slip I’ve seen in years!

doetoe15 days ago

Freudian slop?

rokkamokka15 days ago

They're doing their damnedest to prevent the robot uprising by trying to keep the users nice.

Cthulhu_15 days ago

This is why my ex-MIL always says thank you to Alexa.

user3428315 days ago

That is one of the reasons why I think X's Grok, while perhaps not state of the art, is an important option to have.

Out of OpenAI, Anthropic, or Google, it is the only provider that I trust not to erroneously flag harmless content.

It is also the only provider out of those that permits use for legal adult content.

There have been controversies over it, resulting in some people, often of a certain political orientation, calling for a ban or censorship.

What comes to mind is an incident where an unwise adjustment of the system prompt has resulted in misalignment: the "Mecha Hitler" incident. The worst of it has been patched within hours, and better alignment was achieved in a few days. Harm done? Negligible, in my opinion.

Recently there's been another scandal about nonconsensual explicit images, supposedly even involving minors, but the true extent of the issue, the safety measures in place, and the reaction to reports are unclear. Maybe there, actual harm has occurred.

However, placing blame on the tool for illegal acts, that anyone with a half decent GPU could have more easily done offline, does not seem particularly reasonable to me - especially if safety measures were in place, and additional steps have been taken to fix workarounds.

I don't trust big tech, who have shown time and time again that they prioritize only their bottom line. They will always permaban your account at the slightest automated indication of risk, and they will not hire adequate support staff.

We have seen that for years with the Google Playstore. You are coerced into paying 30% of your revenue, yet are treated like a free account with no real support. They are shameless.

direwolf2015 days ago

It's also a machine you can pay to generate child porn for you, owned by a guy who thinks this is hilarious and won't turn it off.

user3428315 days ago

Incorrect on all claims.

They tightened safety measures to prevent editing of images of real people into revealing clothing. It is factually incorrect that you "can pay to generate CP".

Musk has not described CSAM as "hilarious". In fact he stated that he was not aware of any naked underage images being generated by Grok, and that xAI would fix the bug immediately if such content was discovered.

Earlier statements by xAI also emphasized a zero tolerance policy, removing content, taking actions against accounts, reporting to law enforcement and cooperation with authorities.

I suspect you just post these slanderous claims anyway, despite knowing that they are incorrect.

mg79461315 days ago

As much as I dislike Musk and friends, they're dumb/evil/incompetent enough that you don't have to lie to get them.

user393938215 days ago

[flagged]

qcnguy15 days ago

Can you share that chat?

Aldipower15 days ago

No, it contains swearwords and sensitive information.

9rx15 days ago

> Remember, ChatGPT is a machine!

Same goes for HN, yet it does not take kindly to certain expressions either.

I suppose the trouble is that machines do not operate without human involvement, so for both HN and ChatGPT there are humans in the loop, and some of those humans are not able to separate strings of text from reality. Silly, sure, but humans are often silly. That is just the nature of the beast.

moravak198415 days ago

> Same goes for HN, yet it does not take kindly to certain expressions either.

> I suppose the trouble is that machines do not operate without human involvement

Sure, but HN has at least one human who has been taking care of it since inception and reads many (if not most) of the comments, whereas ChatGPT mostly absorbed a shitton of others' IP.

I'm sure the occasional swearing does not bother the human moderators that fine-tune the thing; certainly not more than the violent, explicit images they are forced to watch in order for you to have nicer, smarter answers.

svrtknst15 days ago

eh, words are reality. insults are just changes in air pressure but they still hurt, and being constantly subjected to negativity and harsh language would be an unpleasant work environment

9rx15 days ago

Words don't hurt. The intent behind those words can. But a machine doesn't carry intent. Trouble is that the irrational humans working as implementation details behind ChatGPT and HN are prone to anthropomorphizing the machine to have intent, which is not reality. Hence why such rules are in place despite being nonsensical.

actionfromafar15 days ago

Humans are prone to being human. That's an old peeve.

llIIllIIllIIl15 days ago

I had a very similar experience with my disabled organization on another provider. After 3 hours of my script sending commands to gemini-cli for execution, I got disabled, and then 2 days later my Gmail was disabled. Good thing it was a disposable account, not my primary one.

preinheimer16 days ago

> AI moderation is currently a "black box" that prioritizes safety over accuracy to an extreme degree.

I think there's a wide spread in how that's implemented. I would certainly not describe Grok as a tool that's prioritized safety at all.

munk-a16 days ago

You say that - and yet it has successfully guarded Elon from any of those pesky truths that might harm his fervently held beliefs. You just forgot to consider that Grok is a tool that prioritizes Elon's emotional safety over all other safeties.

unconed15 days ago

It's bizarre how casually some people hate on Musk. Are people still not over him buying Twitter and firing all the dead weight?

_Especially_ because emotional safety is what Twitter used to be about before they unfucked the moderation.

rootusrootus15 days ago

> Are people still not over him buying Twitter and firing all the dead weight?

You think that's really the issue? Or are you not making a good faith comment yourself?

I cannot remember the last time I saw someone hating on Elon for his Twitter personnel decisions. The vast majority of the time it is the nazi salutes he did on live TV and then secondary to that his inflammatory behavior online (e.g. calling the submarine guy a pedo).

efreak15 days ago

I still pick on it, but I was never a big Twitter user, I just enjoy calling it Xitter. Picking on Elon Musk is for the shitty things he's been doing to our government and the world, and for being a bad person in general.

exe3416 days ago

doesn't he keep having to lobotomize it for lurching to the left every time it gets updated with new facts?

xtracto15 days ago

That's why we should strive to use and optimize local LLMs.

Or better yet, we should set up something that allows people to share part of their local GPU processing (like SETI@home) for a distributed LLM that cannot be censored, and somehow be compensated when it's used for inference.

plagiarist15 days ago

Yeah, we really have to strive not to rely on these corporations, because they absolutely will not do customer support or actually review account closures. The article also mentions (I assume) Google, which has control over a lot more than just AI.

kerblang15 days ago

[flagged]

direwolf2015 days ago

I don't see any such agreement here, and your comment is very rude toward the author.

kerblang15 days ago

I'm not being rude to the parent poster (instead, I'm agreeing) or to the person who wrote the article.

I might have been rude to all the people/bots who insist the article's author is lying because it contradicts AI-everything.

thomasikzelf15 days ago

I was also banned from Claude. I created an account and sent a single prompt: "Hello, how are you?" After that I was banned. An automated system flagged me as doing something against the ToS.

OsrsNeedsf2P16 days ago

I had my Claude Code account banned a few months ago. Contacted support and heard nothing. Registered a new account and been doing the same thing ever since - no issues.

NewJazz16 days ago

Did you have to use a different phone number? Last time I tried using Claude they wouldn't accept my jmp.chat number.

genewitch15 days ago

nothing makes me more wary of a company than one that doesn't let me use my 20 year old VoIP number for SMS. Twitter, instagram (probably FB, if they ever do a "SMS 2fa" or whatever for me i imagine i'll lose my account forever), and a few others i can't think of offhand right now.

i've had the same phone numbers via this same VoIP company for ~20 years (2007ish). for these data hoovering companies to not understand that i'm not a scammer presents to me like it's all smoke and mirrors, held together with baling wire, and i sure do hope they enjoy their yachts.

tomashubelbauer15 days ago

I'm reading this just as I am working on a PWA that uses Bun.Terminal to spawn Claude Code and xterm.js to mirror its TUI on the web, so that I can use Claude Code from my phone over Tailscale.

I didn't really think about this until now (I am just solving my problem), but I guess I could get OpenCode'd for this. Similar to the OP I don't find I am doing anything particularly weird, but if their use case wasn't looked upon favorably by Anthropic, mine probably won't be either.

After the OpenCode drama where some people got banned for using it I saw some people from Anthropic on Twitter asking folks to DM them if they got banned and they'd get unbanned. I know I wouldn't be doing that, so I guess if I get banned, I am back to Codex for a while.

tomashubelbauer8 days ago

Update: my account got banned LOL

ziml7716 days ago

Why is the author so confused about the use of the word "organization"? Every account in Claude is part of an organization even if it's an organization of one. It's just the way they have accounts structured. And it's not like they hide this fact. It shows you your organization ID right on your account page. I'm also pretty sure I've seen the term used when performing other account-related actions.

jsw9715 days ago

This is scary and it affects the degree to which I invest in building Claude-specific tooling, either code or in my brain. You can never guarantee that a dangerously-skip-permissions session is going to stay on the rails, what flags it might trip while you're not looking.

I wonder if Anthropic realizes the chilling effect this kind of event has on developers. It's not just the ones who get locked out -- it's a cost for everybody, because we can't depend on the tool when it's doing precisely what it's best at.

Personally, I am already avoiding Gemini because a) I don't really understand their policy for training on your data; and b) if Google gets mad at me I lose my email. (Which the author also notes.)

kmeisthax16 days ago

Another instance of "Risk Department Maoism".

If you're wondering, the "risk department" means people in an organization who are responsible for finding and firing customers who are either engaged in illegal behavior, scamming the business, or both. They're like mall rent-a-cops, in that they don't have any real power beyond kicking you out, and they don't have any investigatory powers either. But this lack of power also means the only effective enforcement strategy is summary judgment, at scale with no legal recourse. And the rules have to be secret, with inconsistent enforcement, to make honest customers second-guess themselves into doing something risky. "You know what you did."

Of course, the flipside of this is that we have no idea what the fuck Hugo Daniel was actually doing. Anthropic knows more than we do, in fact: they at least have the Claude.md files he was generating and the prompts used to generate them. It's entirely possible that these prompts were about how to write malware or something else equally illegal. Or, alternatively, Anthropic's risk department is just a handful of log analysis tools running on autopilot that gave no consideration to what was in this guy's prompts and just banned him for the behavior he thinks he was banned for.

Because the risk department is an unaccountable secret police, the only recourse for their actions is to make hay in the media. But that's not scalable. There isn't enough space in the newspaper for everyone who gets banned to complain about it, no matter how egregious their case is. So we get all these vague blog posts about getting banned for seemingly innocuous behavior that could actually be fraud.

subscribed15 days ago

I was asking Claude for sci-fi book recommendations ("theme similar to X, awarded Y or Z").

I was also banned for that. Also didn't get the "FU" in email. Thankfully at least I didn't pay for this, but I'd file chargeback instantly if I could.

If anyone from Claude is reading it, you're c**s.

activitypea15 days ago

What were X, Y and Z? This feels like "missing missing reasons"

subscribed11 days ago

OMG, victim blaming in full swing. Sci-fi books, you know, published. Sold in brick & mortar shops.

X was something like "Foundation", Y was something like "Nebula", Z was something like "Hugo".

How far did you have to stretch your imagination?

activitypea9 days ago

I didn't stretch my imagination at all, that's why I asked :)

tlogan15 days ago

Can someone explain what he was actually doing here?

Was the issue that he was reselling these Claude.md files, or that he was selling project setup or creation services to his clients?

Or maybe all scaffolding activity (back and forth) looked like automated usage?

measurablefunc15 days ago

Only people who work at Anthropic know why the account was flagged & banned & they will never tell you.

inimino15 days ago

...if anyone.

measurablefunc15 days ago

Good point. They might not know why either.

genewitch15 days ago

if possible, can you quote the part of their TOS/TOU that says i can't use something like aider? (aider is the only one i know, i'm not promoting it)

adastra2215 days ago

You can, with an API key.

writeslowly16 days ago

I've triggered similar conversation level safety blocks on a personal Claude account by using an instance of Deepseek to feed in Claude output and then create instructions that would be copied back over to Claude (there wasn't any real utility to this, it was just an experiment). Which sounds kind of similar to this. I couldn't understand what the heuristic was trying to guard against, but I think it's related to concerns about prompt injections and users impersonating Claude responses. I'm also surprised the same safeguards would exist in either the API or coding subscription.

ipaddr16 days ago

You are lucky they refunded you. Imagine they hadn't banned you and you'd continued to pay $220 a month.

I once tried Claude: I made a new account and asked it to create a sample program, and it refused. I asked it to create a simple game, and it refused. I asked it to create anything, and it refused.

For playing around, just go local and write your own multi-agent wrapper. It's much more fun and opens many more possibilities with uncensored LLMs. Things will take longer, but you'll end up at the same place... with a mostly working piece of code you never want to look at.

bee_rider16 days ago

LLMs are kind of fun to play with (this is a website for nerds, who among us doesn’t find a computer that talks back kind of fun), but I don’t really understand why people pay for these hosted versions. While the tech is still nascent, why not do a local install and learn how everything works?

causalmodels16 days ago

Because my local is a laptop and doesn't have a GPU cluster or TPU pod attached to it.

5d41402abc4b15 days ago

If you have enough RAM, you can run Qwen A3B models on the CPU.

quikoa15 days ago

RAM got a little more expensive lately for some reason.

exe3416 days ago

Claude Code with Opus is a completely different creature from aider with Qwen on a 3090.

The latter writes code; the former solves problems with code and keeps growing the codebase with new features (until I lose control of the complexity and each subsequent call uses up more and more tokens).

joshribakoff16 days ago

Anthropic is lucky their credit card processor has not cut them off due to excessive disputes stemming from their non-existent support.

onraglanroad16 days ago

So you have two AIs. Let's call them Claude and Hal. Whenever Claude gets something wrong, Hal is shown what went wrong and asked to rewrite the claude.md prompt to get Claude to do it right. Eventually Hal starts shouting at Claude.

Why is this inevitable? Because Hal only ever sees Claude's failures and none of the successes. So of course Hal gets frustrated and angry that Claude continually gets everything wrong no matter how Hal prompts him.

(Of course it's not really getting frustrated and annoyed, but a person would, so Hal plays that role)

staticman216 days ago

I don't think it's inevitable; often the AI will just keep looping again and again. It can happily loop forever without frustration.

wvenable15 days ago

It doesn't loop, though: it has continuously updating context, and if that context keeps heading in one direction it will eventually break down.

My own personal experience with LLMs is that after enough context they just become useless -- starting to make stupid mistakes that they successfully avoided earlier.

gpm16 days ago

I assume old failures aren't kept in the context window at all, for the simple reason that the context window isn't that big.

submeta15 days ago

OT: Has anyone observed that Claude Code in CLI works more reliably than the web or desktop apps?

I can run very long, stable sessions via Claude Code, but the desktop app regularly throws errors or simply stops the conversation. A few weeks ago, Anthropic introduced conversation compaction in the Claude web app. That change was very welcome, but it no longer seems to work reliably. Conversations now often stop progressing. Sometimes I get a red error message, sometimes nothing at all. The prompt just cannot be submitted anymore.

I am an early Claude user and subscribed to the Max plan when it launched. I like their models and overall direction, but reliability has clearly degraded in recent weeks.

Another observation: ChatGPT Pro tends to give much more senior and balanced responses when evaluating non-technical situations. Claude, in comparison, sometimes produces suggestions that feel irrational or emotionally driven. At this point, I mostly use Claude for coding tasks, but not for project or decision-related work, where the responses often lack sufficient depth.

Lastly, I really like Claude’s output formatting. The Markdown is consistently clean and well structured, and better than any competitor I have used. I strongly dislike ChatGPT’s formatting and often feed its responses into Claude Haiku just to reformat them into proper Markdown.

Curious whether others are seeing the same behavior.

wouldbecouldbe15 days ago

I don't know what really happened here. Maybe his curse word did prompt a block, maybe something else caused it.

But to be honest, I've been cursing a lot at Claude Code. I'm migrating a website from WordPress to Next.js, and regardless of the instructions I copy-paste into every prompt I send, it keeps not listening, assuming CSS classes and simplifying HTML structure. But when I curse, it actually listens. I think cursing is actually a useful tool in interacting with LLMs.

another_twist15 days ago

Use caps. "DO NOT DO X" works like a charm on Codex.

ssl-315 days ago

From my own observations with OpenAI's bots, it seems like there's nuanced levels.

"Don't do that" is one level. It's weak, but it is directive. It often gets ignored.

"DON'T DO THAT" is another. It may have stronger impact, but it's not much better -- the enhanced capitalization probably tokenizes about the same as the previous mixed-case command, and seems to get about the same result. It can feel good to HAMMER THAT OUT when frustrated, but the caps don't really seem to add much value, even though our intent may be for it to be interpreted as very deliberate shouting.

"Don't do that, fuckface" is another. The addition of an emphatic and profane quip of an insult seems to generally improve compliance, and produce less occurrence of the undesired behavior. No extra caps required.

wouldbecouldbe15 days ago

Caps also didn't work as well as cursing

kordlessagain16 days ago

> My guess is that this likely tripped the "Prompt Injection" heuristics that the non-disabled organization has.

Is it me or is this word salad?

afandian16 days ago

It's deliberately not straightforward. Just like the joke about Americans being shoutier than Brits. But it is meaningful.

I read "the non-disabled organization" to refer to Anthropic. And I imagine the author used it as a joke to ridicule the use of the word 'organization'. By putting themselves on the same axis as Anthropic, but separating them by the state of 'disabled' vs 'non-disabled' rather than size.

infermore16 days ago

it's you

blindriver16 days ago

There needs to be a law that prevents companies from simply banning you, especially when it's an important company. There should be an explanation and they shouldn't be allowed to hide behind some veil. There should be a real process with real humans that allow for appeals etc instead of scripts and bots and automated replies.

josephcsible15 days ago

100% agreed. Freedom of association should be exclusively a human right that corporations don't get. For them, I wish it were a privilege that scaled down with size and valuation, such that multibillion dollar companies wouldn't be allowed to ban anyone without a court agreeing they did something wrong.

ddtaylor15 days ago

You are probably triggering their knowledge distillation checks.

andrewmlevy15 days ago

This was my first thought as well

faeyanpiraat15 days ago

What would a knowledge-distillation prompt even look like, and how could I make sure I don't accidentally fall into this trap?

ddtaylor13 days ago

My guess is something that looks like the "teacher and student" model. I know there were methods in the past that used the token distribution to "retrain" one model with another, kind of like auto fine-tuning, but AFAIK those are for offline model usage, since you need the token distribution. There do appear to be similar methods for online-only models, though.
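For anyone unfamiliar with the "teacher and student" setup mentioned above, here is a minimal, purely illustrative sketch of the offline flavor: the student is trained to match the teacher's next-token distribution, typically via a KL-divergence loss over temperature-softened logits. The toy logits and temperature below are made up; this is not Anthropic's detection heuristic, just the distillation objective itself.

```python
import math

def softmax(logits, temperature=1.0):
    """Convert raw logits to a probability distribution."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kl_divergence(p, q):
    """KL(p || q): how far the student q is from the teacher p."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token logits over a tiny three-token vocabulary.
teacher_logits = [2.0, 1.0, 0.1]
student_logits = [1.5, 1.2, 0.3]

# A temperature > 1 softens both distributions, a common distillation trick.
teacher = softmax(teacher_logits, temperature=2.0)
student = softmax(student_logits, temperature=2.0)

# The per-token distillation loss the student would minimize.
loss = kl_divergence(teacher, student)
```

This is exactly why it needs the full token distribution: an online API that only returns sampled text (or a few top log-probs) does not expose the teacher's `p`, which is what makes online-only distillation harder.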

LauraMedia15 days ago

> I'm glad this happened with this particular non-disabled-organization. Because if this by chance had happened with the other non-disabled-organization that also provides such tools... then I would be out of e-mail, photos, documents, and phone OS.

This... sounds highly concerning

shevy-java15 days ago

See it as an honour with distinction: the future Skynet AI (aka Claude) considers you a person with your own opinion.

By the way, as of late Google Search constantly redirects me to an "are you a bot?" question. The primary reason is that I no longer use Google Search directly via the browser, but via the command line (and for some weird reason Chrome does not keep my settings, as I start it exclusively via the --no-sandbox option). We really need alternatives to Google - it is getting out of hand how much top-down control these corporations now have over our digital lives.

staplers15 days ago

  and for some weird reason chrome does not keep my settings
Why use Chrome? Firefox is easily superior for modern surfing.

jordemort16 days ago

Forget the ethical or environmental concerns, I don't want to mess with LLMs because it seems like everyone who goes heavy on them ends up sounding like they're on the verge of cracking up.

eibrahim15 days ago

I did $10k worth of tokens in a month and never had issues with tokens or limits. I am on the $100 Max plan, so I did not pay $10k - my wife would have killed me lol

PS: screenshot of my usage (and that was during the holidays): https://x.com/eibrahim/status/2006355823002538371?s=46

PPS: I LOVE CLAUDE but I never had to deal with their support so don’t have feedback there

radium3d14 days ago

This is what happens when you make all the H100/H200/equivalent cards exclusive and lock them up in warehouses. We have no way of running these models locally… yet. Keyword is yet: the exclusivity period is going to end, just like it did for 3D graphics when 3dfx democratized it with the Voodoo cards. They're only 300 GB of memory and a chip ahead. It'll shrink.

tacone15 days ago

Side question: I am currently using GitHub Copilot; what would be a good reason to switch provider? Looks like I am almost the only one here using it.

InMice15 days ago

I accidentally logged in from my browser that is set to use a SOCKS proxy, instead of Chrome, which I don't proxy and was otherwise using Claude Code with. They quickly banned me and refunded my subscription. I don't know if it's worth it to try to appeal. Does a human even read those appeals? Figured I could just use Cursor and Gemini models with API pricing, but I'm sad to not be able to try Claude Code; I had just signed up.

syntaxing16 days ago

While it sucks, I had great results replacing Sonnet 4.5 with GLM 4.7 in Claude Code. Vastly more affordable too ($3 a month for the Pro equivalent). Can't say much about Opus though. Claude Code forces me to put a credit card on file so they can charge for overage. I don't mind that they charge me; I do mind that there's no apparent spending limit and it's hard to tell how many "inclusive" Opus tokens I have left.

enraged_camel16 days ago

Having used both Opus 4.5 and GLM 4.7, I think the former is at least eight months ahead of the latter, if not much more.

xyzsparetimexyz15 days ago

Can you concretely back that up?

mohsen115 days ago

I am doing very similar thing to this and no issue. Even though I am using GLM 4.7 due to cost

I have a complete org hierarchy for Claudes. Director, EM and Worker Claude Code instances working on a very long horizon task.

Code is open source: https://github.com/mohsen1/claude-code-orchestrator

Mashimo15 days ago

How is your experience with GLM 4.7?

I'm thinking about trying it after my Github Copilot runs out end of month. Just hobby projects.

mohsen115 days ago

Pay the $3 and try it with Claude code. It’s great!

nineteen99915 days ago

Seems weird; I have Claudes review other Claudes' work all the time. Maybe not as adversarially as that, lol - I tend to encourage the instances to work collectively.

Also, the API timeouts that people complain about: I see them on my Linux box a fair bit, especially when it has a lot of background tasks open, but it seems pretty rock solid on my Windows machine.

kuon15 days ago

Claude started to get "wonky" about a month ago. It refused to use instructions files I generated using a tool I wrote. My account was not banned but many of the things I usually asked would just not produce any real result. Claude was working but ignoring some commands. I finally canceled my subscription and I am trying other providers.

pgt15 days ago

If you're reading this, Anthropic, it's suicide. I will actively look for a way to cancel my $200/month subscription if you keep killing paying developers' accounts without warning. It is simply too risky to start depending on Claude Code if you are going to become Apple in terms of support.

Blocking xAI is also bad karma.

throwaw1215 days ago

I am already actively looking, already begged for Chinese labs to release model which can outperform Opus 4.5

FYI: tried GLM-4.7; it's good, but closer to Sonnet 4.5.

erichocean15 days ago

Whoops, I literally did the same thing as this guy earlier this week, but did the testing using `claude -p` so I could identify when Claude Code would (or would not) load Skills for a particular prompt, so that I could improve the skill definition.

Who knew that using Claude to introspect on itself was against the ToS?

2sk2115 days ago

I was reminded of this classic short story by Isaac Asimov, "The Feeling of Power": https://archive.org/details/1958-02_IF/page/4/mode/2up

fuxirheu15 days ago

> If you are automating prompts that look like system instructions (i.e. scaffolding context files, or using Claude to find errors of another Claude and iterate on its CLAUDE.md, or etc...), you are walking on a minefield.

Lol, what is the point in this software if you can't use it for development?

tomwphillips16 days ago

The post is light on details. I'd guess the author ended up hammering the API and they decided it was abuse.

I expect more reports like this. LLM providers are already selling tokens at a loss. If everyone starts to use tmux or orchestrate multiple agents then their loss on each plan is going to get much larger.

measurablefunc15 days ago

There are people on twitter bragging about using 100s of agents. Here's one example: https://twitter.com/nearcyan/status/2012948508764946484

measurablefunc15 days ago

This is very cool. I looked at the Claude.md he was generating and it is basically all of Claude's failure modes in one file. I can think of a few reasons why Anthropic would not want this information out in the open or for someone to systematically collate all the data into one file.

genewitch15 days ago

I read the related parts of the linked file in the repo, and it took me a while to find your comment here again to reply to. Are you saying these are the failure modes of Claude when "coding" webapps or whatever OP was doing? I originally thought it might have meant a jailbreak. But having read it, I assume you meant the former: we both read the same thing, and it seemed like a series of admonitions to the LLM, written by the LLM (with some spice added by OP, like "YOU ARE WRONG"), and I couldn't find anything that would warrant a ban, you know?

measurablefunc15 days ago

I'm not saying he did anything wrong. I'm saying I can see how Anthropic's automated systems might have flagged & banned the account b/c one of the heuristics they probably use is that there should be no short feedback loops where outputs of Claude are fed back into inputs. So basically Anthropic tracks all calls to their API & they have some heuristics for going through the history & then assigning scores based on what they think is "abusive" or "loopy".

Of course none of it is actually written anywhere so this guy just tripped the heuristics even though he wasn't doing anything "abusive" in any meaningful sense of the word.

genewitch15 days ago

Thank you for explaining; your point makes sense and I tend to agree with the surmise.

btbuildem15 days ago

Exactly as predicted: the means of production yet again taken away from the masses to be centralized in a few absurdly rich hands.

I ran out of tokens for not just the 5 hour sessions, but all models for the week. Had to wait a day -- so my methadone equivalent was to strap an endpoint-rewriting proxy to Claude Code and backend it with a local Qwen3 30B Coder. It was.. somewhat adequate. Just as fast, but not as capable as Opus 4.5 - I think it could handle carefully specced small greenfield projects, but it was getting tangled in my Claudefield mess.

All that to say -- be prepared, have a local fallback! The lords are coming for your ploughshares.
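For reference, the "endpoint-rewriting proxy" approach described above usually comes down to two environment variables that Claude Code reads. Everything below is a sketch under assumptions: the port, the dummy token, and the existence of a local proxy translating Anthropic-style requests to your local model server are all illustrative, not the exact setup used here.

```shell
# Point Claude Code at a local Anthropic-compatible endpoint.
# Assumes a rewriting proxy listening on localhost:8080 that forwards
# to the local Qwen/llama.cpp/vLLM server -- substitute your own.
export ANTHROPIC_BASE_URL="http://localhost:8080"
export ANTHROPIC_AUTH_TOKEN="local-dummy-key"  # local servers rarely check this
claude  # launch Claude Code as usual; requests now hit the proxy
```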

tobyhinloopen16 days ago

So you were generating and evaluating the performance of your CLAUDE.md files? And you got banned for it?

Aurornis16 days ago

I think it's more likely that their account was disabled for other reasons, but they blamed the last thing they were doing before the account was closed.

pocksuppet16 days ago

And why wouldn't you? It's the only information available to you.

alistairSH16 days ago

It reads like he had a circular prompt process running, where multiple instances of Claude were solving problems, feeding results to each other, and possibly updating each other's control files?

Hackbraten16 days ago

They were trying to optimize a CLAUDE.md file which belonged to a project template. The outer Claude instance iterated on the file. To test the result, the human in the loop instantiated a new project from the template, launched an inner Claude instance along with the new project, assessed whether inner Claude worked as expected with the CLAUDE.md in the freshly generated project. They then gave the feedback back to outer Claude.

So, no circular prompt feeding at all. Just a normal iterate-test-repeat loop that happened to involve two agents.

epolanski16 days ago

What would be bad in that?

Writing the best possible specs for these agents seems the most productive goal they could achieve.

NitpickLawyer16 days ago

I think the idea is fine, but what might end up happening is that one agent gets unhinged and "asks" another agent to do more and more crazy stuff, and they get into a loop where everything gets flagged. Remember the story from a while ago about Amazon pricing bots, each configured to price a book slightly above the other, driving it past $1M? Kinda like that, but with prompts.

epolanski16 days ago

I still don't get it. Make your models better for this far-fetched case; don't ban users for a legitimate use case.

alistairSH16 days ago

Nothing necessarily or obviously bad about it, just trying to think through what went wrong.

andrelaszlo16 days ago

Could anyone explain to me what the problem is with this? I thought I was fairly up to date on these things, but this was a surprise to me. I see the sibling comment getting downvoted but I promise I'm asking this in good faith, even if it might seem like a silly question (?) for some reason.

alistairSH16 days ago

From what I'm reading in other comments, the problem was Claude1 got increasingly "frustrated" with Claude2's inability to do whatever the human was asking, and started breaking its own rules (using ALL CAPS).

Sort of like MS's old chatbot that turned into a Nazi overnight, but this time with one agent simply getting tired of the other agent's lack of progress (for some definition of progress - I'm still not entirely sure what the author was feeding into Claude1 alongside errors from Claude2).

omgwalt15 days ago

I clicked your link to go look at the innocent Claude.md file as you invited us to do. Only problem: there is no Claude.md file in your repo! What are you trying to hide? Are you some kind of con man?

Looks like Claude.ai had the right idea when they banned you.

gield15 days ago

It's not an actual file but a variable in a JS file. The last link in the blog post does link to a commit with a file that contains the instructions for Claude, lines 129-737.

kingkawn15 days ago

Claude is going wild lately. It told me I had used up 75% of my weekly limit. Ohhhk. I sent one more short query, and boom, blocked till Monday because I used up 25% in that one go (on Thursday). How is that possible? It's falling off fast right now.

SOLAR_FIELDS15 days ago

I have also been a bit paranoid about this in terms of using Claude itself to decompile/deobfuscate Claude code in order to patch it to create the user experience I need. Looks like I’ll be using other tools to do that from now on.

aussieguy123415 days ago

In Open WebUI I have different system prompts (startup advisor, marketing expert, expert software engineer etc) defined and I use Claude via OpenRouter.

Is this going to get me banned? If so, I'll switch to a different, non-Anthropic model.
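For what it's worth, a custom system prompt like the ones described above is just the first message of an OpenAI-compatible chat request, which is what OpenRouter (and Open WebUI behind it) sends. The model slug and field values below are illustrative assumptions, not a verified working configuration; no network call is made here, only the request body is built.

```python
import json

# Build the request body Open WebUI would effectively send through
# OpenRouter when a "startup advisor" system prompt is configured.
payload = {
    "model": "anthropic/claude-sonnet-4.5",  # example OpenRouter-style slug
    "messages": [
        {"role": "system", "content": "You are a startup advisor. Be blunt and practical."},
        {"role": "user", "content": "Should I raise a seed round now?"},
    ],
}

# This JSON string is what gets POSTed to the chat completions endpoint.
body = json.dumps(payload)
```

Nothing about this shape is unusual; from the provider's side it is indistinguishable from any other API client setting a system prompt, which is why it seems unlikely to be ban-worthy on its own.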

daft_pink16 days ago

As a Claude Max user that generally prefers Claude, I will say that Gemini is working pretty well right now, and I'm considering setting up a Google Workspace account so I can get Gemini with decent privacy.

deaux15 days ago

Google Workspace accounts don't give access to Gemini for coding, unless you get Ultra for $200/month.

daft_pink15 days ago

I only meant the Gemini chat interface. There is actually an alternative $20-a-month plan for coding called Gemini Code Assist Enterprise. I signed up for that when it launched, because I definitely didn't want them having rights to my code.

deaux14 days ago

Makes sense, I agree for general chat Gemini is great.

For Gemini Code Assist [1], the problem remains that their models are very poor at tool calling and that their harness (Gemini CLI) is miles behind. Looks like there's a plugin for Opencode to use this subscription, which helps with the harness part.

If Gemini 3 GA - which is taking suspiciously long - is better at tool calling then it'll be a great option.

[1] https://cloud.google.com/products/gemini/pricing

miohtama16 days ago

Luckily there is little vendor lock in and likes of https://opencode.ai/ are picking up the slack

cat_plus_plus15 days ago

That's why I run a local Qwen3-Next model on an NVIDIA Thor dev kit (Apple Silicon and DGX Spark are other options but they are even more expensive for 128GB VRAM)

cmxch15 days ago

Not that it’s the same thing, but how real is it to have a locally setup model for coding?

Granted, it’s not going to be Claude scale but it’d be nice to do some of it locally.

rbren16 days ago

This is why it's worth investing in a model-agnostic setup. Don't tie yourself into a single model provider!

OpenHands, Toad, and OpenCode are fully OSS and LLM-agnostic

DaveParkCity15 days ago

The news is not that they turned off this account. The news is that this user understands very little about the nature of zero sum context mathematics. The mentioned Claude.md is a totally useless mess. Anthropic is just saving themselves from the token waste of this strategy on a fixed billing rate plan.

If the OP really wants to waste tokens like this, they should use a metered API so they are the one paying for the ineffectiveness, not Anthropic.

(Posted by someone who has Claude Max and yet also uses $1500+ a month of metered rate Claude in Kilo Code)

skerit15 days ago

Yikes. And I just switched to using OpenCode instead of Claude-Code (because it's so much better), guess I'm in danger.

rvnx15 days ago

Isn't it a ban because he had multiple accounts?

> We may modify, suspend, or discontinue the Services or your access to the Services.

iamthejuan15 days ago

I was banned from just trying out Claude AI chat for the first time a few months ago. I emailed them and restored my account access.

kosolam16 days ago

Hmm so how are the alternatives? Just in case I will get banned for nothing as well. I’m riding cc with opus all day long these days.

measurablefunc15 days ago

I'm using Google's Antigravity and it works fine for my use cases.

Fokamul15 days ago

Fun times for IT sec. Prompt injection, not to exfiltrate data, but to ban whole org from AI tools. This could be fun.

gverrilla15 days ago

I'm under the heavy impression that their quota-calculating algorithm was vibe coded and has a whole lot of bugs.

makergeek15 days ago

I got banned from LinkedIn because their AI believes my job post violates their terms - the very same job post I generated with LinkedIn's AI. Appealed and got rejected. Now I need to rebuild my account with 3k connections from scratch. Monopolies need better regulation!

another_twist15 days ago

Why would this org be banned for shuffling Claude.md files ? I don't understand the harm here.

Cthulhu_15 days ago

If I understand the post correctly, I think it's their systems thinking you're trying to abuse the system and / or break through their own guardrails.

m0llusk15 days ago

> Organizations of late capitalism, unite!

Saying this is "late Capitalism" is an irresponsible distraction. Capitalism runs fine when appropriately regulated with strong regulations on corporations, especially monopolies, high taxes on the wealthy, and pervasive unionization. We collectively decided to let Capitalism go wild without boundaries and the results are caused by us and our responsibility. Just like driving fast with a badly maintained vehicle may lead to a crash, Capitalism is a system that requires some regulation to run properly.

If you have an issue with LLMs and how they are managed then you should take responsibility for your own use of tools and not blame the economic system.

cowboylowrez15 days ago

This is informative, and the comments here are good too - a big heads-up for me. Typing swear words into a computer has been a time-honored tradition of mine, and I would never have guessed Google and the like would ban for this sort of thing, so TIL!

elevation15 days ago

I can't wait to be able to run this kind of software locally, on my own dime.

But I've seen orgs bite the bullet in the last 18 months and what they deployed is miles behind what Claude Code can do today. When the "Moore's Law" curve for LLM capability improvements flattens out, it will be a better time to lock into a locally hosted solution.

VerifiedReports15 days ago

What is "scaffolding?"

rustyhancock15 days ago

It's everything around the LLM that improves its responses.

Like the system prompt.

But can be as simple as "respond to queries like X in the format Y".
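As a concrete sketch of that last point: the "respond to queries like X in the format Y" kind of scaffolding is just string assembly around the user's input before it ever reaches the model. All names here are illustrative, not any particular tool's API.

```python
def scaffold(user_query: str) -> list[dict]:
    """Wrap a raw user query in scaffolding: a fixed system prompt
    plus formatting instructions. Purely illustrative."""
    system_prompt = (
        "You are a concise coding assistant. "
        "Answer code questions with a short explanation followed by a code block."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_query},
    ]

# The model never sees the bare query; it sees the scaffolded version.
messages = scaffold("How do I reverse a list in Python?")
```

CLAUDE.md files, tool definitions, and retrieved context are the same idea at larger scale: extra text assembled around the query to steer the response.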

VerifiedReports15 days ago

Thanks for the reply.

zmmmmm16 days ago

Is there a benefit to using a separate Claude instance to update the CLAUDE.md of the first? I always want to leverage the full context of the situation to help describe what went wrong, so doing it "inline" makes more sense to me.

maz2915 days ago

I've been using Claude Code with AWS Bedrock as the provider. Setup guide if you're interested: https://code.claude.com/docs/en/amazon-bedrock

quantum_state16 days ago

Is it time to move to open source and run model locally with an DGX Spark?

blindriver16 days ago

Every single open source model I've used is nowhere close to as good as the big AI companies. They are about 2 years behind or more and unreliable. I'm using the large parameters ones on a 512GB Mac Studio and the results are still poor.

immibis16 days ago

[dead]

dev_l1x_be15 days ago

We need local models asap.

Aldipower15 days ago

Here you are, even open source! And it is a strong one. https://mistral.ai/

Jean-Papoulos15 days ago

>Yes, the only e-mail I got was a credit note giving my money back.

That's great news! They don't have nearly enough staff to deal with support issues, so they default to reimbursement. Which means if you do this every month, you get Claude for free :)

cryptonector15 days ago

What, with different credit cards / whatever, and under different names, different Google accounts, etc.?

prmoustache16 days ago

It should be mentioned in the title that this is just speculation.

the_gipsy15 days ago

> Or I don't know. This is all just a guess from me.

languagehacker16 days ago

Thinking £220 is a lot for a high-limit Claude account really takes for granted the amount of compute power being used by these services. And that's WITH the "spending other people's money" discount that most new companies start folks off with. The fact that so many are painfully ignorant of the true externalities of these technologies, and their real price, never ceases to amaze me.

rtkwe16 days ago

That's the problem with all the LLM-based AIs: the cost to run them is huge compared to what people feel they're worth based on what they're able to do, and the gap between the two seems pretty large, IMO.

itvision15 days ago

I was banned for simply accessing Claude via VPN.

Nothing in their EULA or ToS says anything about this.

And their appeal form simply doesn't work. Out of my four requests to lift the ban, they replied once and didn't say anything about the reason. They just declined.

Fuck Claude. Seriously. Fuck Claude. Maybe they've got too much money, so they don't care about their paying customers.

Sparkyte15 days ago

We need a collapse in AI to right the ship, like the dotcom bubble burst did. If we do it now, it will hurt less: novel ideas in AI will succeed, and materials costs will drop. Memory being bought up through 2029 is not a good thing for anyone, especially if we want to see a successful future in AI with more efficient systems and so on.

Robin_f15 days ago

Similar thing happened to me 3 months ago. To this day no response to any appeals. I've actually started a GDPR request to see why I got banned, which they're stretching out as long as possible (to the latest possible deadline) so far.

heliumtera16 days ago

Well, at least they didn't email the press and call the FBI on you?

bpanon15 days ago

Do you know for sure this was the reason why?

bn-l15 days ago

Anthropic is the worst ai company.

Absolutely disgusting behavior pirating all those books. The founder spreading fear to hype up his business. The likely relentless shilling campaigns all over social media. Very likely lying about quantizing selectively.

bibimsz15 days ago

RIP. I hear they're looking for janitors.

f311a16 days ago

Why are so many people so obsessed with feeding as many prompts/data as possible to LLMs and generating millions of lines of code?

What are you gonna do with the results that are usually slop?

mikkupikku16 days ago

If the slop passes my tests, then I'm going to use it for precisely the role that motivated the creation of it in the first place. If the slop is functional then I don't care that it's slop.

I've replaced half my desktop environment with this manner of slop, custom made for my idiosyncratic tastes and preferences.

ProofHouse15 days ago

Scamthropic at it again

oasisbob16 days ago

> Like a lot of my peers I was using claude code CLI regularly and trying to understand how far I could go with it on my personal projects. Going wild, with ideas and approaches to code I can now try and validate at a very fast pace. Run it inside tmux and let it do the work while I went on to do something else

This blog post could have been a tweet.

I'm so so so tired of reading this style of writing.

LPisGood16 days ago

What about the style are you bothered by? The content seems to be nothing new, so maybe that is the issue, but the style itself seems fine, no?

oasisbob15 days ago

It bears all the hallmarks of AI writing: length, repetition, lack of structure, and silly metaphors.

Nothing about this story is complex or interesting enough to require 1000 words to express.

red_hare16 days ago

Alas, the 2016 tweet is the 2026 blog post prompt.

dloranc14 days ago

I was also banned several months ago, but I don’t think it happened because I created a CLAUDE.md file or anything like that. I have no idea why I was banned, since I only used Claude Code for typical CRUD projects - where AI really excels. Here’s my timeline:

1. Claude Code stopped working.
2. I received an email about the ban.
3. Fine, time to contact support. I wrote to them.
4. I got an automated message saying they were reviewing my case.
5. I received a refund (I had a Pro plan) in the meantime.
6. After a few days I got this funny email:

Hi there,

We're reaching out to people who recently canceled their Claude Code subscription in order to understand why you decided to cancel.

We'd like to invite you to participate in an AI-moderated interview about your experience with Claude Code—including what improvements you'd like to see us make.

This approach uses an AI interviewer to ask you questions and respond to your answers, creating a conversational experience you can complete at your convenience.

Here's what you need to know:

- The interview takes 15-20 minutes to complete
- This interview will be available until Monday October 13 at 9pm PT
- For completing the interview, you'll receive a $40 USD (or local equivalent) Amazon gift card within 3-5 business days
- Please complete only one interview per person
- As much as possible, help us know you're not a bot by showing your beautiful human face!
- Your survey may terminate early if you record illegible video content (ex: overly loud environments, aren't well lighted, etc)

[Participate Now]

This interview is administered by a third party, Listen Labs. By participating in the interview, you agree to Listen Labs' Privacy Policy. Anthropic may use your responses to improve our services and follow up.

Your honest feedback—whether your experience was positive, challenging, or mixed—is invaluable in helping us understand how to make Claude Code work better for developers like you.

Thank you for your time and insights!

–The Anthropic Team

7. Wait, you banned me and now you’re sending me this email? Seriously? Okay, I decided to participate in the survey. Unfortunately, when I selected the option that it was due to a bug or some issue, they ended the survey. No gift card.

8. A few days later, I received an email saying they couldn’t reinstate my account because I had violated their usage policy. How? No idea.

9. After a few more days, I got an email saying they had reinstated my account. They also mentioned they believed it was a bug.

It was crazy ¯\_(ツ)_/¯

lukashahnart16 days ago

> I got my €220 back (ouch that's a lot of money for this kind of service, thanks capitalism).

I'm not sure I understand the jab here at capitalism. If you don't want to pay that, then don't.

Isn't that the point of capitalism?

exe3416 days ago

That's not what capitalism means. You might be thinking of a free market.

lighthouse121215 days ago

[dead]

lifetimerubyist16 days ago

bow down to our new overlords - don't like it? banned, with no recourse - enjoy getting left behind, welcome to the future, old man

properbrew16 days ago

I didn't even get to send 1 prompt to Claude and my "account has been disabled after an automatic review of your recent activities" back in 2024, still blocked.

Even filled in the appeal form, never got anything back.

Still to this day don't know why I was banned, have never been able to use any Claude stuff. It's a big reason I'm a fan of local LLMs. They'll never be SOTA level, but at least they'll keep chugging along.

codazoda16 days ago

Since you were forced, are you getting good results from them?

I’ve experimented, and I like them when I’m on an airplane or away from wifi, but they don’t work anywhere near as well as Claude code, Codex CLI, or Gemini CLI.

Then again, I haven’t found a workable CLI with tool and MCP support that I could use in the same way.

Edit: I was also trying local models I could run on my own MacBook Air. Those are a lot more limited than something like a larger Llama3 in some cloud provider. I hadn’t done that yet.

properbrew16 days ago

For writing decent code, absolutely not; maybe a simple bash script, or the obscure flags to a command that I only need to run once and couldn't be bothered to google or dig out of the man page, etc. I'm using smaller models for less coding-related stuff.

Thankfully OpenAI hasn't blocked me yet and I can still use Codex CLI. I don't think you're ever going to see that level of power locally (I very much hope to be wrong about that). I will move over to using a cloud provider with a large gpt-oss model or whatever is the current leader at the time if/when my OpenAI account gets blocked for no reason.

The M-series chips in Macs are crazy, if you have the available memory you can do some cool things with some models, just don't be expecting to one shot a complete web app etc.

falloutx16 days ago

You're never gonna hear back from Anthropic; they don't have any support. They're a company that feels like their model is AGI now, so they don't need humans except when it comes to paying.

anothereng16 days ago

just use a different email or something

ggoo16 days ago

This happened to me too, you need a phone number unfortunately

efreak15 days ago

If this is all that's blocking you (not the fact that they don't want your business), you might have a friend who's been playing the T-Mobile free phone number game. I know some people with 4+ phone numbers they don't need/use, simply because they were free (one-time activation with old byod or taxes-only free phone).

I've considered asking to borrow a number to verify with Discord so they don't actually have my phone number, but decided I'd rather just be unverified.

direwolf2015 days ago

You can get one for a few bucks

immibis16 days ago

[dead]

lazyfanatic4216 days ago

This has been true for a long, long time. There is rarely any recourse against any technology company; most of them don't even have support anymore.

wetpaws16 days ago

[dead]

justkys15 days ago

[dead]

clownpenis_fart15 days ago

[dead]

jsksdkldld16 days ago

[dead]

jitl16 days ago

I always take these sorts of "oh no, I was banned while doing something innocent" posts with a large helping of salt. At least with the ones complaining about a ban from Stripe, it usually turns out they were doing something that either violates the terms of service or is actually fraudulent. Nonetheless, it's quite frustrating to deal with either way.

ryandrake16 days ago

It would at least be nice to know exactly what you did wrong. This whole "You did something wrong. Please read our 200 page Terms of Service doc and guess which one you violated." crap is not helpful and doesn't give me (as an unrelated third party) any confidence that I won't be the next person to step on a land mine.

rsync16 days ago

You mean the throwaway pseudonym you signed up with was banned, right?

right ?

moomoo1116 days ago

Just stop using Anthropic. Claude Code is crap because they keep putting in dumb limits for Opus.

red_hare16 days ago

This feels... reasonable? You're in their shop (Opus 4.5) and they can kick you out without cause.

But Claude Code (the app) will work with a self-hosted open source model and a compatible gateway. I'd just move to doing that.
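For anyone curious what that switch looks like in practice, here is a minimal sketch. It assumes a locally hosted gateway that speaks the Anthropic Messages API (for example, a proxy such as LiteLLM in front of a local model); the URL, token, and gateway choice are placeholders, not the only way to do it:

```shell
# Hypothetical setup: point Claude Code at a self-hosted,
# Anthropic-compatible gateway instead of Anthropic's API.
# The base URL and token below are placeholders for whatever
# your local gateway actually exposes.
export ANTHROPIC_BASE_URL="http://localhost:4000"
export ANTHROPIC_AUTH_TOKEN="local-gateway-key"

# Claude Code then sends its API traffic to the gateway,
# which routes it to the open-source model you configured.
claude
```

The catch is that Claude Code's prompting is tuned for Claude models, so results with an open-source backend can vary quite a bit by model.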

mrweasel16 days ago

Sure, but it also guarantees that people will think twice about buying their service. Support should have reached out and told them what they did wrong, but I can't say I'm surprised that an AI company wouldn't have any real support.

I'd agree with you that if you rely on an LLM to do your work, you better be running that thing yourself.

viccis16 days ago

Not sure what your point is. They have the right to kick OP out. OP has the right to post about it. We have a right to make decisions on what service to use based on posts like these.

Pointing out whether someone can do something is the lowest form of discourse, as it's usually just tautological. "The shop owner decides who can be in the shop because they own it."

direwolf2015 days ago

I think there's an xkcd alt text about that: https://www.explainxkcd.com/wiki/index.php/1357:_Free_Speech

"I can't remember where I heard this, but someone once said that defending a position by citing free speech is sort of the ultimate concession; you're saying that the most compelling thing you can say for your position is that it's not literally illegal to express."