Anthropic's original take home assignment open sourced

606 points | 23 hours ago | github.com
lbreakjai19 hours ago

I consider myself rather smart and good at what I do. It's nice to have a look at problems like these once in a while, to remind myself of how little I know, and how much closer I am to the average than to the top.

epolanski10 hours ago

Computing is a very broad topic. Even Linus or Carmack have no skills or knowledge about countless topics that would be mundane to you.

It doesn't matter really, what matters is our ability to stare into the void of what we don't know and start making progress.

Our ability to process and master new topics is part of the job.

I'm sure you've done that countless times.

TrackerFF18 hours ago

Well it is a specialized problem. If you've never worked on anything similar previously, it is going to take time. Don't even need to interview for selective billion dollar companies like Anthropic to encounter these types of problems - after college I interviewed for various electronics/hardware companies where you'd get asked to optimize low-level code - which would have looked quite foreign, if you had never actually worked on such problems before.

johnnyanmac24 minutes ago

>Don't even need to interview for selective billion dollar companies like Anthropic to encounter these types of problems

I'll take any interviews at this point in time.

But yes, every domain has its jargon. I work tangentially to this and quickly understood this as a GPGPU problem. A relatively elementary one if you studied this space, though a time limit of 2 hours seems overly restrictive if you aren't actively studying this stuff.

Onavo17 hours ago

If you ask an EE to debug React state management code without prior exposure, they won't do too well either. But on the other hand, they can easily pick up most of it after a week long crash course, while training a performance engineer who can optimize code for a specific architecture would take months.

sublinear11 hours ago

> they can easily pick up most of it after a week long crash course

I have to disagree and question what you mean by "optimization". It's very easy to write web code that technically accomplishes a task, but does so poorly. This is the natural consequence of having so many options available.

The vast majority of web devs with less than 5 years of experience simply don't understand plain javascript well enough. It's a longstanding problem that devs will reach for the most ergonomic tools, not the best tools.

Lacking sufficient experience, they can't help it. This happens in all programming languages and in all layers of software. AI slop is even worse because it tends towards the mean.

johnnyanmac20 minutes ago

>The vast majority of web devs with less than 5 years of experience simply don't understand plain javascript well enough

they are never tested on it, and many won't dig that deep in the day-to-day. Whose fault is it that they don't know plain javascript well enough? That's the result of shipping "content" over any other metric of proper software engineering.

Funnily enough I did take a mini-course (not a week, but we're talking maybe 100 hours of work as a recreational online summer class) in plain javascript at my university. Quite the quirky language. But this was in ES3 or so, so maybe there are many more guardrails these days against the core jank that makes up JS

ontouchstart10 hours ago

Engineering is more or less about getting familiar with the proper tools and using them to solve specific problems: adding new features, debugging, refactoring, and optimizing.

And the tools themselves are built by other engineers and they need new features, debugging, optimization etc. It is turtles all the way down.

But each layer has its own jargon, conventions, and unwritten hacks. That is where experience comes in. Once you get out of a rabbit hole or pothole, you are one step closer to becoming the “domain expert”. There is no shortcut.

ignoramous16 hours ago

> EE to debug react state management ... easily pick up most of it after a week long crash course while training a performance engineer ... would take months

Isn't that mostly because as you go up the abstraction layers, tools and docs to teach yourself the tricks of the trade fast are in abundance (let alone for a popular layer like React)? Which in turn is likely a function of incentives and opportunities.

fergie18 hours ago

I'm 30 years in, and literally don't understand the question.

WithinReason15 hours ago

After a quick look, this can be seen as a low level GPU/TPU optimization problem where you have to consider the throughput and depth of different arithmetic pipelines. If you want to hire people who understand how to do that, you unfortunately have to give them such a convoluted task and emulate the relevant parts of HW. (In reality this is probably more like TPU since it has scalar pipelines, but the optimization methods are not that different)

The task is to parallelize tree traversal, which is embarrassingly unparallel so it's tricky.
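
A minimal sketch of why (invented shapes and hash step, not the actual reference_kernel2): each load address depends on the value just computed, so a single walk is a serial dependency chain, and the only parallelism available is across many independent items.

    # Hypothetical sketch, not the actual reference_kernel2: a data-dependent
    # tree walk. The next node index depends on the value just computed, so
    # the loop can't be parallelized across iterations -- only across items.
    def walk(tree, item, depth):
        node, value = 0, item
        for _ in range(depth):
            value = (value * 2654435761 + tree[node]) & 0xFFFFFFFF  # toy hash
            node = (2 * node + 1 + (value & 1)) % len(tree)  # pick a child
        return value

    tree = list(range(31))
    results = [walk(tree, i, 16) for i in range(256)]  # parallel across items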

WithinReason12 hours ago

This also shows that a performance engineer's job, even at Anthropic, is to be a glorified human compiler, who is often easily beaten by LLMs.

scottyah8 hours ago

I think the job is to be one of the few that's better than LLMs.

mike_hearn17 hours ago

The question isn't clearly written down anywhere, that's why. Presumably actual candidates would have been given more info over the phone or email. Part of the "challenge" is reverse engineering their Python; unclear if that's intentional.

If you look at the top of perf_takehome.py then there is a brief comment saying the challenge is to optimize a kernel. Kernel in GPU land means a program that computes on data in parallel, it's not an OS kernel:

    Optimize the kernel (in KernelBuilder.build_kernel) as much as possible in the
    available time, as measured by test_kernel_cycles on a frozen separate copy
    of the simulator.
However, this kernel doesn't run on an actual GPU. It runs on a little interpreter for a custom assembly language written in Python. Thus you will be optimizing the program built in-memory by the function on this line:

https://github.com/anthropics/original_performance_takehome/...

This function is described only as:

    Like reference_kernel2 but building actual instructions.
    Scalar implementation using only scalar ALU and load/store.
The KernelBuilder class has some fields like "instrs" but we can't immediately see what they're meant to be because this is Python and types are optional. Nonetheless we can see that instructions are being added to a list, and below we can see the test_kernel_cycles function that runs the interpreter on the program. So our mission is to change the build_kernel function to make a better program. And it says this is an assembly version of the python function reference_kernel2 which is found in problem.py.

What exactly is this kernel doing? The reference_kernel2 function doesn't explain itself either - it's some sort of parallel tree walk. Let's put that to one side for a second and explore the machine, which is defined in problem.py. The machine itself is also largely undocumented, but there's a brief description in a docstring on line 66.

At this point it helps to understand the design of exotic processors. The emulator is for a fictional CPU that uses a VLIW SIMD ISA. Normal programmers will never encounter such a chip. Intel tried to make such a machine decades ago and it never took off; since then the concept has been largely dead. I believe it's still used in some mobile DSPs like Qualcomm's Hexagon. Notably, NVIDIA PTX is not such an ISA, so this seems to have been chosen just to make things harder. As the comment explains, in a VLIW machine multiple instructions are packed together into a "slot" and executed in parallel. In a normal CPU the hardware reads a serial stream of instructions and works out just in time which can be executed in parallel, using fancy out-of-order circuitry. In a VLIW machine that's done ahead of time by the compiler or (in this case) the humble programmer, you. But this isn't just a VLIW machine, it's also multi-core, and multi-"engine", so there are multiple levels of execution going on. And it's SIMD, meaning each instruction can itself operate on multiple pieces of data simultaneously.

This machine doesn't have registers or cache but it does have "scratch space", and so you can use the vector instructions to load data into a series of 32 bit scratch words and then do things on them in parallel. And multiple vector instructions can also run in parallel. "Broadcasting a scalar" in SIMD-speak means taking a single value and repeating it over multiple scratch space slots (or register subwords in a real machine), so you take e.g. 0xFF and get 0xFFFFFFFFFFFFFFFF.
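
To make that concrete, here is a toy sketch of the VLIW idea with an invented instruction format (the simulator's actual tuple encoding differs; see problem.py): every slot in a bundle reads the pre-bundle scratch state, and each bundle costs one cycle.

    # Toy VLIW sketch, invented format -- not the simulator's real tuples.
    scratch = [0] * 8

    def run(bundles):
        for bundle in bundles:          # one bundle = one cycle
            old = scratch[:]            # every slot reads pre-bundle state
            for op, dst, a, b in bundle:
                if op == "const":
                    scratch[dst] = a    # immediate value
                elif op == "add":
                    scratch[dst] = (old[a] + old[b]) & 0xFFFFFFFF

    broadcast = [("const", i, 0xFF, 0) for i in range(4)]   # scalar -> 4 words
    run([broadcast, [("add", 4, 0, 1), ("add", 5, 2, 3)]])  # 2 cycles total
    print(scratch)  # [255, 255, 255, 255, 510, 510, 0, 0]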

And that's it, that's all we get. As the code says: "This comment is not meant to be full ISA documentation though, for the rest you should look through the simulator code". Possible point of confusion: real ISAs are serialized to bytes but this one is just Python tuples. The code is only partially typed; sometimes you're just left guessing.

So to recap, the problem is to optimize an undocumented program expressed in undocumented data structures returned by a Python function whose result is interpreted by a partly documented Python class that simulates a fictional exotic CPU architecture using an abandoned design that gives a lot of parallel computational capacity, but which requires all parallelism to be statically declared ahead of time, whilst simultaneously reverse engineering the Python that does all this.

Does that help? Sounds like a fun exercise :)

Edit: I just checked and Google TPUs are much more VLIW like so perhaps this simulator is designed to match a TPU. I know Anthropic rely on TPUs for serving and have done some optimization for them.

HarHarVeryFunny12 hours ago

It does seem a bit of a strange challenge - reminiscent of high school math problems where understanding the question was as much a part of it as actually solving the problem once you understood it.

Since the focus of the challenge appears(?) intended to be optimization, not reverse engineering, it's a bit odd that they don't give a clear statement of what the kernel is meant to be computing. Perhaps the challenge is intended to be a combination of the two, but then getting the reverse engineering right becomes a gate for the optimization part, else you'll be solving the wrong problem.

Given the focus on results achieved by Opus 4.5, maybe that's the main point - to show how well Opus can reverse engineer something like this. If they gave the actual clear problem statement, then maybe you could brute force an optimal solution using tree search.

forgotpwd1617 hours ago

This is a nice writeup. Thanks. Another commenter said it would've taken them 2h just to sketch out ideas; sans LLMs it would've taken me more than 2h just to collect all this info, let alone start optimizing.

mike_hearn16 hours ago

It took me about 10 minutes to generate that writeup the old fashioned 100% organic way, because one of the things that's unspecified is whether you're allowed to use AI to help solve it! So I assumed as it's a job interview question you're not allowed, but now I see other comments saying it was allowed. That would let you get much further.

I think I'd be able to make some progress optimizing this program in two hours but probably not much. I'm not a performance engineer but have designed exotic emulated CPU architectures before, so that helps a lot.

owlbite11 hours ago

I think calling VLIW "an abandoned design" is somewhat of an exaggeration; such architectures are pretty common for embedded audio processing.

matt_d8 hours ago

Worth adding on that note:

From JAX to VLIW: Tracing a Computation Through the TPU Compiler Stack, https://patricktoulme.substack.com/p/from-jax-to-vliw-tracin...

Google’s Training Chips Revealed: TPUv2 and TPUv3, HotChips 2020, https://hc32.hotchips.org/assets/program/conference/day2/Hot...

Ten Lessons From Three Generations Shaped Google’s TPUv4i, ISCA 2021, https://gwern.net/doc/ai/scaling/hardware/2021-jouppi.pdf

mike_hearn11 hours ago

Sure. I did mention DSPs. But how many people write code for DSPs?

b40d-48b2-979e13 hours ago

    Sounds like a fun exercise :)
I'll be honest, that sounds like the opposite of fun since the worst parts of my job are touching the parts of a Python codebase that are untyped. The sad part is this work codebase isn't even that old, maybe a few years, and the developers definitely should have known better if they had anyone capable leading them. Alas, they're all gone now.

Harder than figuring out the instruction set for some exotic CPU are definitely the giant untyped dicts/lists common in data science code.

dist-epoch15 hours ago

> but which requires all parallelism to be statically declared ahead of time

this is what all specialized chips like TPU/Cerebras require today, and it allows for better optimization than a generic CPU since you can "waste" 30 min figuring out the perfect routing/sequencing of operations, instead of doing it in the CPU in nanoseconds/cycles

another benefit is you can throw away all the CPU out-of-order/branch prediction logic and put useful matrix multipliers in its place

carschno16 hours ago

On the one hand, this exercise probably reflects a realistic task. Daily engineering work comprises a lot of reverse engineering and debugging of messy code. On the other hand, this does not seem very suitable as an isolated assignment. The lack of code base-specific context has a lot of potential for frustration. I wonder what they really tested on the candidates, and whether this was what they wanted to filter for.

fc417fc80210 hours ago

> The lack of code base-specific context has a lot of potential for frustration.

I think that's one of the intentional points. Being able to quickly understand what the provided source code is doing.

fergie15 hours ago

Wow! Thanks for the explanation :)

mannyv8 hours ago

"Performance can be optimized by not using python."

bsder15 hours ago

Since it's a CPU, you start with the idea that there is an ALU and spiral outward from that. That gives you something concrete to wrap your head around while you climb up the abstraction levels.

However, when I hit "scratch_write" and it wasn't in the Machine class and it wasn't coming from some Decorator and it was getting defined and deleted by a member function ... I stopped. That's paying lip service to the variable typing that is scattered around and actively hampers even basic IDE usage. Probably the typing was added by AI/LLM after the fact, and it missed that unusual usage. The Python convention used to be that those kinds of variables got declared as "_scratch_write" with a leading underscore to flag that they were "private/internal".
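
For the curious, the pattern being complained about looks roughly like this (a reconstruction, not the actual simulator code):

    # Reconstruction of the pattern, not the actual code: the attribute only
    # exists while the method runs, so type checkers and IDE navigation
    # can't see it on the class.
    class Machine:
        def step(self) -> None:
            self.scratch_write = {}    # appears out of nowhere mid-method
            self.scratch_write[0] = 42
            del self.scratch_write     # and is gone again afterwards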

That was the gigantic red "We write shitty code" signal or worse "We don't care about wasting your time" signal. Human review should have flagged that.

Shame. I was kinda looking forward to the technical problem, but I'm not going to spend a bunch of time using grep to untangle garbage code to get at it.

I suspect everything would actually be much clearer if you wrote it in SystemVerilog and tested with Cocotb. Let's see if their LLMs can handle that porting job. HAH!

PeterStuer17 hours ago

Which part exactly are you having trouble with?

- Optimize the kernel (in KernelBuilder.build_kernel) as much as possible in the available time, as measured by test_kernel_cycles on a frozen separate copy of the simulator

karmajunkie9 hours ago

Thank goodness, I thought it was just me...

measurablefunc18 hours ago

Generate instructions for their simulator to compute some numbers (hashes) in whatever is considered the memory of their "machine"¹. I didn't see any place where they actually disallow cheating, b/c it says they only check the final state of the memory², so it seems like if you know the final state you could just "load" it into memory. The cycle count is supposedly a measure of the LLM figuring out the fewest instructions to compute the final state, but again, it's not clear what they're actually measuring, b/c if you know the final state you can cheat, & there is no way to tell how they're prompting the LLM to avoid the answers leaking into the prompt.

¹https://github.com/anthropics/original_performance_takehome/...

²https://github.com/anthropics/original_performance_takehome/...
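
A sketch of that loophole, with invented instruction names (the real harness may of course reject this in review):

    # Hypothetical sketch: if the harness only asserts on final memory
    # contents, a "kernel" that just stores the precomputed answer passes
    # the check without doing any real work.
    def cheat_kernel(expected_mem):
        return [("store", addr, value)
                for addr, value in enumerate(expected_mem)]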

saagarjha17 hours ago

Well, they read your code in the actual hiring loop.

mangatmodi15 hours ago

Smart is different from knowledgeable. If you learn about these concepts and work on these problems, then you will be able to solve them.

It's not about you being average, just a different knowledge set.

ActorNightly7 hours ago

Yours is a good mentality to have because it creates the emotional drive to learn more, so don't lose that. That being said, this isn't really that complicated. It's just a matter of taking enough time to look at the code and understand how it's structured. I feel like the thing that differentiates developer skill is pretty much being able to do that, specifically building the model of the program in your head.

sigbottle6 hours ago

Does it?

For me, I've had that mentality for the longest time and I didn't get anything done because, well, "I'm just average".

For me, a little bit of arrogance (there's no way I couldn't do X, let's go do it), even if I end up "looking stupid" (see, I told you it was that hard!), was far more valuable to my development

chistev15 hours ago

What we know is a drop, what we don't know is an ocean.

elzbardico9 hours ago

There's a big chance you're falling into a subtle form of imposter syndrome that manifests itself by largely over-estimating the average skill level.

But this is good. Staying humble makes you hungrier for learning.

gervwyk5 hours ago

Don’t stress, it’s very likely that this problem was vibe coded :) It’s insane how much better Claude Code is compared to alternatives lately.

LouisSayers13 hours ago

It's the type of thing you'd be exposed to in a computer science degree - operating systems / compilers.

Always room to learn in software :)

xenihn18 hours ago

It comes with test suites, so that gives you a base to start from. You can at the very least do trial-and-error and come up with some heuristics on the fly. You're at a huge disadvantage to someone who has some familiarity but can convincingly play it off as being a newcomer, though.

deadbabe12 hours ago

If you think you’re average, you’re not average.

apsurd19 hours ago

disagree. nobody has a monopoly on what metric makes someone good. I don't understand all this leet code optimization. actually i do understand it, but it's a game that will attract game optimizers.

the hot take is, there are other games.

tuetuopay17 hours ago

This is the opposite of leet code.

Yes, this applies to some simulated imaginary CPU with an artificial problem. Except that what's asked here is exactly the core of what a performance engineer will do at Anthropic: optimize kernels for their fleet of GPUs. Is it simplified? Yes! (e.g. the simulator does not restrict memory access patterns)

This is a real-world problem adapted to a lab setting that can fit in one's head in a matter of hours. Leetcode would have you reimplement the hashmap used in there.

saagarjha17 hours ago

This is explicitly not Leetcode, in fact its goal is to attract optimizers

sevenzero18 hours ago

Also leetcode does not really provide insight into one's ability to design business solutions, whether it be system design, some small feature implementation, or communication skills within a team. It's just optimizers jerking each other off on cryptic problems 99.999999999% of developers will never see in real life. Maybe it would've been useful like 30 years ago, but all commonly used languages have these fancy algorithms baked into their stdlib, so why would I ever have to implement them myself?

lbreakjai18 hours ago

But this is an interview problem at Anthropic, not at your local CRUD factory. They _are_ looking for the optimizers, because they _are_ working on cryptic problems the 99.9999% of us will never encounter.

thorncorona18 hours ago

Or more likely, the commonality is how you're applying your software skills?

In every other field it's helpful to understand the basics. I don't think software is the exception here.

sevenzero18 hours ago

Understanding the basics is very different from being able to memorize algorithms. I really don't see why I'd ever have to implement stuff like quicksort myself somewhere. Yes I know what recursion is, yes I know what quicksort is, so if I ever need it I know what to look for. Which was good enough throughout my career.

pvalue00521 hours ago

I suspect this was released by Anthropic as a DDOS attack on other AI companies. I prompted 'how do we solve this challenge?' into gemini cli in a cloned repo and it's been running non-stop for 20 minutes :)

bjackman18 hours ago

Lately with Gemini CLI / Jules it doesn't seem like time spent is a good proxy for difficulty. It has a big problem with getting into loops of "I am preparing the response for the user. I am done. I will output the answer. I am confident. Etc etc".

I see this directly in Gemini CLI as the harness detects loops and bails the reasoning. But I've also just occasionally seen it take 15m+ to do trivial stuff and I suspect that's a symptom of a similar issue.

aiiotnoodle13 hours ago

I've noticed using antigravity and vscode, Gemini 3 pro often comes back with model too busy or something like that and basically 500s.

Seems like capacity because it works a lot better late at night.

I don't see the same with the claude models in antigravity.

menaerus11 hours ago

I also noticed that, and I also noticed that it starts to struggle when the workspace "tab" you're working in gets longer - it basically gets stuck at "Starting agent ...". I initially thought it must be a very big context that the model is struggling with, but since restarting the "app" and kill -9 fixes it, it suggests that it's a local issue. Strange.

trillic10 hours ago

Anecdotally, I notice better performance and output quality across most providers outside of 8a-5p ET.

mixel15 hours ago

I saw this too. Sometimes it "thinks" inside of the actual output, and it's much more likely to end up in the loop of "I am ready to answer" while it is doing that already.

sva_15 hours ago

I feel like sometimes it just loops those messages when it doesn't actually generate new tokens. But I might be wrong

bjackman15 hours ago

There are some other failure modes that all feel kinda vaguely related that probably help with building a hypothesis about what's going wrong:

Sometimes Gemini tools will just randomly stop and pass the buck back to you. The last thing will be like "I will read the <blah> code to understand <blah>" and then it waits for another prompt. So I just type "continue" and it starts work again.

And, sometimes it will spit out the internal CoT directly instead of the text that's actually supposed to be user-visible. So sometimes I'll see a bunch of paragraphs starting with "Wait, " as it works stuff out and then at the end it says "I understand the issue" or whatever, then it waits for a prompt. I type "summarise" and it gives me the bit I actually wanted.

It feels like all these things are related and probably have to do with the higher-level orchestration of the product. Like I assume there are a whole bunch of models feeding data back and forth to each other to form the user-visible behaviour, and something is wrong at that level.

hackpelican7 hours ago

At one point it started spitting out its CoT in the comments of the code it’s supposed to be changing.

bird086119 hours ago

Which Gemini model did you use? My experience since launch of G3Pro has been that it absolutely sucks dog crap through a coffee straw.

pvalue00518 hours ago

/model: Auto (Gemini 3) Let Gemini CLI decide the best model for the task: gemini-3-pro, gemini-3-flash

After ~40 minutes, it got to:

The final result is 2799 cycles, a 52x speedup over the baseline. I successfully implemented Register Residency, Loop Unrolling, and optimized Index Updates to achieve this, passing all correctness and baseline speedup tests. While I didn't beat the Opus benchmarks due to the complexity of Broadcast Optimization hazards, the performance gain is substantial.

It's impressive as I definitely won't be able to do what it did. I don't know most of the optimization techniques it listed there.

I think it's over. I can't compete with coding agents now. Fortunately I've saved enough to buy some 10 acre farm in Oregon and start learning to grow some veggies and raise chickens.

light_hue_112 hours ago

Keep in mind that the boat on competing with machines to generate assembly sailed for 99% of programmers half a century ago. It is not surprising that this is an area where AI is strong.

IsTom14 hours ago

Did you check that it did the things it claims it did?

triyambakam10 hours ago

> grow some veggies and raise chickens.

Maybe Claude will be able to do that soon, too.

ece7 hours ago

After an hour with a few prompts, the first working version got to 3529 cycles (41x speedup) for me. I was using Gemini 3 pro preview.

apsurd17 hours ago

we've lost the plot.

you can't compete with an AI on doing an AI performance benchmark?

kqr16 hours ago

This is not an AI performance benchmark, this is an actual exercise given to potential human employees during a recruitment process.

Mashimo19 hours ago

> sucks dog crap through a coffee straw.

That would be impressive.

stronglikedan11 hours ago

Only if the dog didn't get too much human food the night before.

anematode19 hours ago

New LLM benchmark incoming? I bet once it's done, people will still say it's not AGI.

languid-photic19 hours ago

Naively tested a set of agents on this task.

Each ran the same spec headlessly in their native harness (one shot).

Results:

    Agent                        Cycles     Time
    ─────────────────────────────────────────────
    gpt-5-2                      2,124      16m
    claude-opus-4-5-20251101     4,973      1h 2m
    gpt-5-1-codex-max-xhigh      5,402      34m
    gpt-5-codex                  5,486      7m
    gpt-5-1-codex                12,453     8m
    gpt-5-2-codex                12,905     6m
    gpt-5-1-codex-mini           17,480     7m
    claude-sonnet-4-5-20250929   21,054     10m
    claude-haiku-4-5-20251001    147,734    9m
    gemini-3-pro-preview         147,734    3m
    gpt-5-2-codex-xhigh          147,734    25m
    gpt-5-2-xhigh                147,734    34m
Clearly none beat Anthropic's target, but gpt-5-2 did slightly better in much less time than "Claude Opus 4 after many hours in the test-time compute harness".

lawrencechen18 hours ago

codex cli + gpt-5-2-codex-xhigh got to 1606 with the prompt "beat 1487 cycles. go." ~53 minutes.

jstummbillig18 hours ago

Will you look at this man's prompting skills?!

dudewhocodes14 hours ago

Serious prompt engineering right here

mettamage14 hours ago

Wow, is gpt-5-2-codex-xhigh really that good in general? Is this the $200 per month version?

woadwarrior0113 hours ago

gpt-5.2-codex xhigh with OpenAI codex on the $20/month plan got to 1526 cycles with OP's prompt for me. Meanwhile claude code with Opus 4.5 on the team premium plan ($150/month) gave up with a bunch of contrived excuses at 3433 cycles.

ponyous19 hours ago

Very interesting thanks! I wonder what would happen if you kept running Gemini in a loop for a while. Considering how much faster it ended it seems like there is a lot more potential.

a24j17 hours ago

Can you share the agent-comparison harness code or point to something similar? I want to learn about benchmarking models in a basic or practical sense.

languid-photic12 hours ago
a24j48 minutes ago

Thanks so much!!

raphaelj15 hours ago

Could you try with some open-weighted models, e.g. Qwen3-coder, GLM-4.7 or Devstral-2?

kevinday5 hours ago

I tried GLM-4.7 running locally on a beefy GPU server, in about 3 minutes it got to 25846 cycles, but then struggled in circles for about 90 minutes without making any meaningful progress, making the same mistakes repeatedly and misdiagnosing the cause most of the time. It seems to understand what needs to happen to reach the goal, but keeps failing on the implementation side. It seemed to understand that to beat the target an entirely new approach would be required (it kept leaning towards a wavefront design), but wasn't seeing the solution due to the very limited ISA.

forgotpwd1619 hours ago

Could you make a repo with solutions given by each model inside a dir/branch for comparison?

kitrak9518 hours ago

Are you giving instructions to a stranger on the internet?

forgotpwd1617 hours ago

Instructions?! Just asked since GP already did it. No need to realize top comment's "DDOS attack on other AI companies" joke.

edf1318 hours ago

I think he’s asking rather than giving instructions

pelagicAustral17 hours ago

He's prompting

giancarlostoro19 hours ago

I do wonder how Grok would compare, specifically their Claude Code Fast model.

game_the0ry11 hours ago

> If you optimize below 1487 cycles, beating Claude Opus 4.5's best performance at launch, email us at performance-recruiting@anthropic.com with your code (and ideally a resume) so we can be appropriately impressed and perhaps discuss interviewing.

This is an interesting way to recruit. Much better than standard 2 leetcode medium/hard questions in 45 mins.

paxys10 hours ago

This is simply to enter the recruiting pipeline. Once you're in, you will do the same leetcode interviews as everyone else.

alt22710 hours ago

You would hope that if you manage to beat their engineers' best optimisations at launch, then you would leapfrog a certain amount of the initial stages.

Then again, this may just be a way to get free ideas at optimising their product from outside the box.

benlivengood9 hours ago

One could use any number of LLMs on a take-home problem so in-person interviews are a must.

driverdan8 hours ago

Is this a fact or an assumption?

yodsanklai9 hours ago

It would take something like one week full time to work on this. It's not something you can do if you have a full-time job and apply to several other companies. I find it unreasonable to ask a candidate to spend that much time for an uncertain result.

It's true that being ready for leetcode takes practice, but at least it's standard so you can re-use the skills to other interviews. Optimizing some generated code is certainly fun, but it's as useless as leetcode for your average programmer.

tcoff918 hours ago

As long as there are qualified candidates willing to do unreasonable tasks for the chance to work at a company, there's not much incentive for the company to change their system. Those people will also probably work unreasonably hard and make unreasonable sacrifices for the company.

abra015 hours ago

This is a really fun problem! I suggest anyone who likes optimization in a very broad sense to try their hand at it. Might be the most fun I've had while interviewing. I had to spend a week-worth of evenings on it to fully scratch the itch, and I managed to get 1112 cycles. But that was mostly manual, before the current crop of agentic models (clopus 4.5, gpt5.2). I wonder how far you can RalphWiggum it!

lukah13 hours ago

I've never heard AI-assisted coding referred to as "RalphWiggum"ing a problem, and now I will have to use that always. Thank you.

avaer21 hours ago

It's pretty interesting how close this assignment looks to demoscene [1] golf [2].

[1] https://en.wikipedia.org/wiki/Demoscene [2] https://en.wikipedia.org/wiki/Code_golf

It even uses Chrome tracing tools for profiling, which is pretty cool: https://github.com/anthropics/original_performance_takehome/...

wiz21c17 hours ago

I was in the demoscene long ago and that kind of optimisation is definitely in the ballpark of what we did: optimize algorithm down to machine code level (and additionally, cheat like hell to make you believe we ran the algorithm for real :-)).

But to be honest, I wonder what algorithm they implement. I have read the code for 2 minutes, and it sounds like random forest prediction. Does anyone know what the code does?

saagarjha17 hours ago

It’s some useless problem like a random tree walk or something like that; the actual algorithm is not particularly important to the problem

psb21715 hours ago

Yeah, I assume it was partly chosen since the problem structure provides some convenient hooks for selectively introducing subtle and less subtle inefficiencies in the baseline algorithm that match common optimization patterns.

KeplerBoy18 hours ago

perfetto is pretty widely used for such traces, because building a viewer for your traces is a completely avoidable pain.

nice_byte21 hours ago

it's designed to select for people who can be trusted to manually write ptx :-)

fabian411 hours ago

[flagged]

tap1278348711 hours ago

[flagged]

epiccoleman11 hours ago

It definitely bears all the LLM hallmarks we've come to know: the em dash, the "this isn't X. it's Y" structure - and then, to cap it off, a single pithy sentence to end it.

nostrademons11 hours ago

Also bears all the hallmarks of an ordinary post (by someone fairly educated) on the Internet. This would make sense, because LLMs were trained on lots of ordinary posts on the Internet, plus a fair number of textbooks and scientific papers.

haliskerbas11 hours ago

I've noticed people who are using LLMs more, myself included, are starting to talk like that.

Oops I mean, you're absolutely right, those ARE hallmark signs of an LLM. Let me breakdown why this isn't just your imagination but actually...

sureglymop21 hours ago

Having recently learned more about SIMD, PTX and optimization techniques, this is a nice little challenge to learn even more.

As a take home assignment though I would have failed as I would have probably taken 2 hours to just sketch out ideas and more on my tablet while reading the code before even changing it.

forgotpwd1618 hours ago

Unless I misread, 2 hours isn't the time limit for the candidate to do this but the time Claude eventually needed to outperform the best returned solution. The best candidate could've taken 6h~2d to achieve this result.

fhd218 hours ago

Their Readme.md is weirdly obsessed with "2 hours":

"before Claude Opus 4.5 started doing better than humans given only 2 hours"

"Claude Opus 4.5 in a casual Claude Code session, approximately matching the best human performance in 2 hours"

"Claude Opus 4.5 after 2 hours in our test-time compute harness"

"Claude Sonnet 4.5 after many more than 2 hours of test-time compute"

So that does make one wonder where this comes from. Could just be LLM generated with a talking point of "2 hours", models can fall in love with that kind of stuff. "after many more than 2 hours" is a bit of a tell.

Would be quite curious to know though. How I usually design take home assignments is:

1. Candidate has several _days_ to complete (usually around a week).

2. I design the task to only _take_ 2-4 hours, informing the candidate about that, but that doesn't mean they can't take longer. The subsequent interview usually reveals if they went overboard or struggled more than expected.

But I can easily picture some places sending a candidate the assignment and asking them to hand in their work within two hours. Similar to good old coding competitions.

alcasa18 hours ago

No, the 2 hours is their time limit for candidates. The thing is that you are allowed to use any non-human help for their take-homes (open book), so if AI can solve it in below 2 hours, it's not very good at assessing the human.

saagarjha17 hours ago

4 hours but AI help is (was?) allowed. I assume it was retired because of Opus basically oneshotting it

alcasa14 hours ago

Fair enough. I feel like designing AI-proof take-homes is getting ever more futile. Given that the questions need to be sufficiently low-context to be human-doable in a short time, and that the timespans for AI tasks keep increasing, I'm not sure take-homes can actually serve any filtering function whatsoever, besides checking if applicants are willing to put in a minimal amount of effort.

LarsKrimi2 hours ago

I liked the core challenge of finding the balance of ALU and VALU, but I think the load bandwidth problem could lead to issues

Like optimizing for people who assume the start indices will always be zero. I am close to 100% sure that's required to get below 2096 total loads, but it's just not fun

If it had some kind of dynamic vector lane rotate, however, that could have been way more interesting

bytesandbits19 hours ago

Having done a bunch of take-homes for big (and small) AI labs during interviews, this is the 2nd most interesting one I have seen so far.

petters19 hours ago

And the answer to the obvious follow-up question is...?

mrklol17 hours ago

Milk before cereals

matthews315 hours ago

Milk, then cereal, then bowl!

Xmd5a15 hours ago

How about a bowl, and then, 30 minutes ~ 1 hour later, milk with cereals?

kevthecoder15 hours ago

42

darkwater17 hours ago

Maybe it's under NDA :)

reader927418 hours ago

fries

amirhirsch8 hours ago

I'm at 1137 with one hour with opus now... Pipelined vectorized hash, speculation, static code for each stage, epilogues and prologues for each stage-to-stage...

I think I'm going to get sub 900 since i just realized i can in-parallel compute whether stage 5 of the hash is odd just by looking at bits 16 and 0 of stage 4 with less delay.....

lalaland11258 hours ago

How do you avoid the load bottleneck?

amirhirsch5 hours ago

    ======================================================================
    BROADCAST LOAD SCHEDULE
    ======================================================================
    Round | Unique | Load Strategy
    ------|--------|------------------------------------------
       0  |    1   | 1 broadcast → all 256 items
       1  |    2   | 2 broadcasts → groups
       2  |    4   | 4 broadcasts → groups
       3  |    8   | 8 broadcasts → groups
       4  |   16   | 16 broadcasts → groups
       5  |   32   | 32 broadcasts → groups
       6  |   63   | 63 loads (sparse, use indirection)
       7  |  108   | 108 loads (sparse, use indirection)
       8  |  159   | 159 loads (sparse, use indirection)
       9  |  191   | 191 loads (sparse, use indirection)
      10  |  224   | 224 loads (sparse, use indirection)
      11  |    1   | 1 broadcast → all 256 items
      12  |    2   | 2 broadcasts → groups
      13  |    4   | 4 broadcasts → groups
      14  |    8   | 8 broadcasts → groups
      15  |   16   | 16 broadcasts → groups

    Total loads with grouping: 839
    Total loads naive: 4096
    Load reduction: 4.9x
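
The arithmetic behind those totals is just "one load per distinct index per round" (a sketch with a hypothetical data layout):

    # Sketch, hypothetical layout: rounds is 16 lists of 256 node indices.
    def grouped_loads(rounds):
        return sum(len(set(idxs)) for idxs in rounds)   # 839 with grouping

    def naive_loads(rounds):
        return sum(len(idxs) for idxs in rounds)        # 16 * 256 = 4096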

amirhirsch8 hours ago

Take advantage of index collisions, optimizing rounds 0 and 11, speculative pre-loading, and the early branch predictor (which I am now doing by looking at the bits output at stage 3).

lzhou5 hours ago

it's actually pretty funny since opus will suggest both of these with enough prying (though with a single-prompt it might not try it).

koolba22 hours ago

What is the actual assignment here?

The README only gives numbers without any information on what you’re supposed to do or how you are rated.

glalonde22 hours ago

"Optimize the kernel (in KernelBuilder.build_kernel) as much as possible in the available time, as measured by test_kernel_cycles on a frozen separate copy of the simulator." from perf_takehome.py

vermilingua21 hours ago

Think that means you failed :(

nice_byte21 hours ago

+1

being cryptic and poorly specified is part of the assignment

just like real code

in fact, it's _still_ better documented and self-contained than most of the problems you'd usually encounter in the wild. pulling on a thread to end up with a clear picture of what needs to be accomplished is like 90% of the job very often.

throwaway8152320 hours ago

I didn't see much cryptic except having to click on "perf_takehome.py" without being told to. But 2 hours didn't seem like much to bring the sample code into some kind of test environment, debug it enough to work out the details of its behaviour, read through the reference kernel and get some idea of what the algorithm is doing, read through the simulator to understand the VM instruction set, understand the test harness enough to see how the parallelism works, re-code the algorithm in the VM's machine language while iterating performance tweaks and running simulations, etc.

Basically it's a long enough problem that I'd be annoyed at being asked to do it at home for free, if what I wanted from that was a shot at an interview. If I had time on my hands though, it's something I could see trying for fun.

ithkuil18 hours ago

My instinct to read about the problem was to open the "problem.py" file, which states "Read the top of perf_takehome.py for more introduction"

So yeah. They _could_ have written it much more clearly in the readme.

tayo4220 hours ago

2 hours does seem short. It took me a half hour to get through all you listed and figure out how to get the valu instruction working.

I suspect it would take me another hour to get it implemented. Leaving 30 minutes to figure out something clever?

Idk maybe I'm slow or really not qualified.

avaer20 hours ago

It's definitely cleaner than what you will see in the real world. Research-quality repositories written in partial Chinese with key dependencies missing are common.

IMO the assignment('s purpose) could be improved by making the code significantly worse. Then you're testing the important stuff (dealing with ambiguity) that the AI can't do so well. Probably the reason they didn't do that is because it would make evaluation harder + more costly.

nine_k9 hours ago

This is a kind of task that's best solved by possibly spending more than the allocated 2 hours on it, once any obvious low-hanging fruit is picked. An optimization task is what a machine does best, so the real problem would be to construct a machine that can run the optimization. The right optimization framework resulting from that effort could also efficiently solve many more similar problems in the future.

I understand that this test is intended to somehow test raw brainpower, the ability to tackle an unfamiliar and complicated domain, and to work under stress. But I hope it's not representative of the actual working conditions at Anthropic. It's like asking a candidate to play a Quake deathmatch when hiring for a special forces assault squad.

saagarjha6 hours ago

> So the real problem would be to construct a machine that would be able to run the optimization.

This is a valid way to solve the problem.

FriendlyMike14 hours ago

They should just have you create a problem that can't be solved by an LLM in two hours. That's the real problem here

ec10968511 hours ago

Solvable in more than 2 but not less than 2 would be the real trick.

OisinMoran12 hours ago

"You have 1 minute to design a maze that takes 2 minutes to solve"

seamossfet7 hours ago

I'm getting flashbacks from my computer engineering curriculum. Probably the first place I'd start is replacing comparison operators on the ALU with binary arithmetic since it's much faster than branch logic. Next would probably be changing the `step` function from brute iterators on the instructions to something closer to a Btree? Then maybe a sparse set for the memory management if we're going to do a lot of iterations over the flat memory like this.
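
For the first idea, a branchless-select sketch (Python's -(x < y) is 0 or -1, so masking works on its two's-complement ints):

    # Branchless select: replace "a if x < y else b" with masking,
    # avoiding a branch.
    def select(a, b, x, y):
        mask = -(x < y)                  # all-ones if x < y, else zero
        return (a & mask) | (b & ~mask)

    print(select(10, 20, 1, 2))  # 10
    print(select(10, 20, 3, 2))  # 20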

htrp5 hours ago

Idle side note: surprised that https://github.com/anthropic is just some random dude in Australia

NitpickLawyer20 hours ago

The writing has been on the wall for about half a year (publicly) now. OpenAI's 2nd place at the AtCoder world championship was the first sign, and I remember it being dismissed at the time. Sakana also got 1st place in another AtCoder competition a few weeks ago. Google also released a blog a few months back on Gemini 2.5 netting them a 1% reduction in training time on real-world tasks by optimising kernels.

If the models get a good feedback loop + easy (cheap) verification, they get to bang their tokens against the wall until they find a better solution.

cgearhart25 minutes ago

I think this is the actual “bitter lesson”—the scalable solution (letting LLMs bang against the problem nonstop) will eventually far outperform human effort. There will come a point—whether sooner or later—where this’ll be the expected norm for handling such problems. I think the only question is whether there is any distinction between problems like this (clearly defined with a verifiable outcome) vs the space of all interesting computer programs. (At the moment I think there’s space between them. TBD.)

lostmsu14 hours ago

1% doesn't sound like a lot at all.

_aavaa_9 hours ago

That depends on how close to the theoretical max you think they are.

myahio11 hours ago

Sakana is a grift from what I understand

NitpickLawyer7 hours ago

Eh. I'd call them overly enthusiastic :) I know they publish hype-y stuff, they jumped the gun on a few things, I get that. But their recent result was on a "live" contest, and they did share agent traces, so that's likely a legit result.

throwaway0123_59 hours ago

> Claude Opus 4.5 in a casual Claude Code session, approximately matching the best human performance in 2 hours

Is this saying that Claude matched the best human performance, where the human had two hours? I think that is the correct reading, but I'm not certain they don't mean that Claude had two hours, and matched the best human performance where the human had an arbitrary amount of time. The former is impressive but the later would be even more so.

eisbaw12 hours ago

I got to 1364 cycles for now, semi-manually: Using design space exploration organized via backlog.md project, and then recombination from that. 20 agents in parallel.

Asked to generate drawio for the winner so I can grok it more easily, then I gave feedback.

Edit: 1121 cycles

eisbaw7 hours ago

1023 cycles

pickpocket8 hours ago

I cleared this assignment but did not clear the follow up interview that was way easier than this. So I gave up on tech interviews in general, stayed where I was.

nottorp17 hours ago

Is it "write 20 astroturfing but somewhat believable posts about the merits of "AI" and how it is going to replace humans"?

atomlib16 hours ago

I'm afraid that position is already filled by the CEO.

falloutx16 hours ago

It should be "can you gaslight a CEO into firing 90% of their software engineers?"

Maro20 hours ago

> This repo contains a version of Anthropic's original performance take-home, before Claude Opus 4.5 started doing better than humans given only 2 hours.

Was the screening format here that this problem was sent out, and candidates had to reply with a solution within 2 hours?

Or, are they just saying that the latest frontier coding models do better in 2 hours than human candidates have done in the past in multiple days?

saagarjha17 hours ago

4 hours

mrklol17 hours ago

Oh, I thought candidates got 2 hours but now I am confused too

demirbey0518 hours ago

It's a showcase more than a take-home assignment. I couldn't understand what the task is, only performance comparisons between their LLMs.

measurablefunc17 hours ago

The task is ill-defined.

saagarjha17 hours ago

You make it faster

measurablefunc15 hours ago

Fewer instructions doesn't mean it's faster. It can be faster, but it's not guaranteed in general. An obvious counterexample is single-threaded vs multi-threaded code: single-threaded code will have fewer instructions but won't necessarily be faster.

kristianpaul20 hours ago

“If you optimize below 1487 cycles, beating Claude Opus 4.5's best performance at launch, email us at performance-recruiting@anthropic.com with your code (and ideally a resume) so we can be appropriately impressed and perhaps discuss interviewing.”

afro8820 hours ago

> at launch

Does this confirm they actually do knee cap models after the launch period to save money, without telling users?

mediaman19 hours ago

No, they later updated the harness for this and it subsequently got better scores.

sevenzero18 hours ago

The company that wanted to simply get away with the thievery of terabytes of intellectual property, what a great place to work at! Not. Anthropic has no shame.

torginus16 hours ago

Are you allowed to change the instruction sequence? I see some optimization opportunities - it'd obviously be the correct thing to do in an optimizing compiler, but considering the time allotted, I'd guess you could hand-optimize it, but that feels like cheating.

saagarjha14 hours ago

Yes, in fact this will be one of the first things you will want to do.

svilen_dobrev17 hours ago

if anyone is interested in trying their agent-fu, here's a more-real-world rabbit hole I went optimizing in 2024. Note this is now a dead project, no one's using it, and probably the same goes for the original. I managed to get it 2x-4x faster than the original; that took me several days back then. btw there are some 10x optimizations possible, but they break a few edge cases, so they're not entirely correct.

https://github.com/svilendobrev/transit-python3

Incipient19 hours ago

>so we can be appropriately impressed and perhaps discuss interviewing.

Something comes across really badly here for me. Some weird mix of bragging, mocking, with a hint of aloof.

I feel these top-end companies like the smell of their own farts and would be insufferable places to work. This does nothing but reinforce it for some reason.

sponnath19 hours ago

I have to agree. It's off-putting to me too. I'm impressed by the performance of their models on this take-home but I'm not impressed at their (perhaps unintentional) derision of human programmers.

qbane16 hours ago

Remember: it is a company that keeps saying how much production code will be written by AI in xx years, while at the same time recruiting new engineers.

yodsanklai9 hours ago

Thanks for noticing this. I got the same feeling when reading this. It may not sound like much, and it doesn't mean it's an insufferable place to work, but it's a hint it might be.

Rant: On a similar note, I recently saw a post on LinkedIn from Mistral, where they were bragging about recruiting candidates from very specific schools. That sounded very pretentious (and also an HR mistake on several levels IMHO).

sublimefire10 hours ago

Did a bit of soul searching and manually optimised to 1087 but I give up. What is the number we are chasing here? IMO I would not join a company giving such a vague problem because you can feel really bad afterwards, especially if this does not open a door to the next stage of the interview. As an alternative we could all instead focus on a real kernel and improve it :)

trishume9 hours ago

Author of the take-home here: That's quite a good cycle count, substantially better than Claude's, you should email it to performance-recruiting@anthropic.com.

karmasimida15 hours ago

I am able to beat this 1487 benchmark by switching between LLMs, doesn't seem that hard lol. Albeit, I do not fully understand what the solution is, loll

lostmsu14 hours ago

Yeah, GPT 5.2 on high got down to 1293 on the 5th try (about 32mins).

mips_avatar21 hours ago

Going through the assignment now. Man it’s really hard to pack the vectors right

saagarjha17 hours ago

Oh, this was fun! If you like performance puzzles you should really do it. Actually I might go back and see if I can improve on it this weekend…

potato-peeler13 hours ago

What do clock cycles mean here? I don't think they are referring to the CPU clock?

pshirshov16 hours ago

Yet Claude is the only agent which deadlocks (blocks in GC forever) after an hour of activity.

fumi20269 hours ago

I could only cut it down to 41 cycles.

greesil21 hours ago

This is a knowledge test of GPU architecture?

avaer21 hours ago

Kind of, but not any particular GPU.

The machine is fake and simulated: https://github.com/anthropics/original_performance_takehome/...

But presumably similar principles apply.

benreesman20 hours ago

It's a test of polyhedral layout algebra, what NVIDIA calls CuTe and the forthcoming C++ standard calls std::mdspan.

This is the general framework for reasoning about correct memory addressing in the presence of arbitrary constraints like those of hardware.
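
The core idea in miniature (a strided-layout sketch, not CuTe or mdspan themselves): a layout is a function from logical coordinates to a memory offset, built from strides.

    # Strided-layout sketch: compose strides into a coordinate-to-offset map.
    def make_layout(strides):
        def offset(*coords):
            return sum(c * s for c, s in zip(coords, strides))
        return offset

    row_major_4x8 = make_layout((8, 1))  # strides for a row-major 4x8 tile
    print(row_major_4x8(2, 3))  # 2*8 + 3 = 19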

saagarjha17 hours ago

You can get pretty far without needing to care about this fwiw

greesil14 hours ago

Not far enough if you're turning cash into waste heat with GPUs :)

tayo4220 hours ago

I wonder if the AI is doing anything novel? Or if it's like a brute-force search, applying all the types of optimizations that already exist and have been written about.

piokoch13 hours ago

How can something that generates the next token, given a list of previous tokens, do anything novel?

rellfy13 hours ago

By that same logic, humans would not be able to do anything novel either.

tucnak21 hours ago

The snarky writing of "if you beat our best solution, send us an email and MAYBE we think about interviewing you" is really something, innit?

ahussain20 hours ago

They wrote:

> If you optimize below 1487 cycles, beating Claude Opus 4.5's best performance at launch, email us at performance-recruiting@anthropic.com with your code (and ideally a resume) so we can be appropriately impressed and perhaps discuss interviewing.

That doesn’t seem snarky to me. They said if you beat Opus, not their best solution. Removing “perhaps” (i.e. MAYBE) would be worse since that assumes everyone wants to interview at Anthropic. I guess they could have been friendlier: “if you beat X, we’d love to chat!”

0x3f20 hours ago

I suppose you could interpret it either way, but having dealt with their interview pipeline I'd choose the snark.

dude25071117 hours ago

Yeah, a nerd bypassed HR and showed their true character. They are swimming in easy money.

lovich20 hours ago

That paraphrases to

"do better than we have publicly admitted most of humanity can do, and we may deign to interview you"

It sounds incredibly condescending, if not snarky, but I would classify those adjectives as mostly synonymous.

miki12321119 hours ago

I suspect this is partially legal CYA.

There's more to employees than their raw ability to go below some performance threshold. If somebody passes the test, but lives in a US-sanctioned country with no plans to move, is well known for using the n-word on social media, or has previously broken an NDA, Anthropic probably doesn't want to interview them.

andruby19 hours ago

I understand how it can be interpreted as snarky, but how could it have been written better? It's a hard path to walk and recruiting/interviewing is inherently sensitive it seems.

Aurornis12 hours ago

> It's a hard path to walk and recruiting/interviewing is inherently sensitive it seems.

Hiring and interviewing is in a weird place right now. We’re coming off of a period where tech jobs were easy to get and companies were competing for candidates. A lot of candidates quickly got used to the idea of companies working hard to charm and almost beg them to join. When those candidates encounter what it’s like to apply for highly competitive companies who have 1000x more applicants than they’d ever consider, the resulting straightforwardness can be shocking.

throwaway74320 hours ago

I took the "perhaps" as a decision to be considered by the applicant, considering they'd be competent enough to get in at a place of their choice, not just anthropic.

lovich20 hours ago

Does the applicant or the employer decide if an interview happens in your experience?

Do you think if the applicants are really in that level of demand that they would be getting a take home test instead of being actively recruited?

Legitimately lay out your understanding of a world where an employer chasing after employees who are in high demand gives them a test that is expected to take hours and hedges its bet in the wording, instead of saying "we will absolutely hire you if you pass X bar"?

riffraff20 hours ago

I feel that came out wrong but the "maybe" was intended to be a way of saying "no guarantees", to avoid giving people the idea "solve this, get hired".

Bootvis19 hours ago

Should have asked Claude how to write it better.

maerch18 hours ago

In that case, removing „perhaps“ would have helped a lot. It is not about maybe being hired, but about maybe being interviewed.

dmurray18 hours ago

They don't want to guarantee an interview to everyone who sends them an improved solution, either.

If three people send them improvements, they'll probably get interviews. If three thousand do, the problem is easier than they thought or amenable to an LLM or one bright person figured out a trick and shared it with all his classmates or colleagues or all of GitHub.

NewJazz19 hours ago

They may not be able to hire folks in certain jurisdictions, or even interview them (e.g. Iran, North Korea).

kristopolous20 hours ago

If you're an asshole who wants millions of dollars... I mean, there are still places to say no.

sourcegrift20 hours ago

Pride comes before a fall, thankfully.

altmanaltman20 hours ago

It's Anthropic. Their entire marketing is just being a pompous ass and AI fear-mongering.

dhruv300620 hours ago

I wonder if OpenAI follows suit.

rvz20 hours ago

They should.

spencerflem18 hours ago

Oh wow it’s by Tristan Hume, still remember you from EyeLike!

Graziano_M12 hours ago

I recognized the name and dug around too. I played DEFCON CTF with him back in the day!

alexpadula14 hours ago

Looks rather fun!

piokoch17 hours ago

Interesting... Who would spend hours working for free for some company that promised only that they would invite you for a job interview? Maybe.

Aurornis12 hours ago

When this was being used it was probably given to candidates who had already started the interview loop and been screened.

The current e-mail invitation in the README is just another avenue for exceptional people to apply. If someone is already highly qualified from their background and resume they can go through the front door (direct application). For those who have incredible talent but not necessarily the background or resume to unlock the front door yet, this is a fun way to demonstrate it.

cjrp17 hours ago

I guess someone who enjoys solving these kinds of problems anyway, and thinks the potential upside if they do get hired is worth it.

mrdootdoot13 hours ago

“In English, Data”

zeroCalories21 hours ago

It shocks me that anyone supposedly good enough for anthropic would subject themselves to such a one sided waste of time.

pclmulqdq21 hours ago

I generally have a policy of "over 4 hours and I charge for my time." I did this in the 4-hour window, and it was a lot of fun. Much better than many other take-home assignments.

heavyset_go20 hours ago

I don't do take home assignments, but when I did, I would offer to do it at my hourly rate, even if it was just an hour. It's time I would otherwise spend making money.

Anyone worth working with respected that and I landed several clients who forwent the assignment altogether. It's chump change in the grand scheme of things, and often a formality.

Does help that I have a very public web presence and portfolio, though.

theptip20 hours ago

For many reasons, you’re not gonna get into Anthropic with that attitude.

PlanksVariable20 hours ago

And Anthropic will never land heavyset_go with their attitude. I guess we’re at an impasse.

heavyset_go17 hours ago

I don't care

ramraj0714 hours ago

I have forgone our take-home for exceptional candidates, but let me ask you: do you also demand compensation for in-person or Zoom 1:1 interviews? Surely that's the same time out of your life.

zeroCalories12 hours ago

It signals a degree of investment from the other side if they're willing to burn their own time talking to you. I can understand a small screening process to filter candidates, but I'm not going to do your silly dance for multiple hours if you're not going to do it with me.

dheera19 hours ago

Time is the issue, not money.

I couldn't care less about getting paid for a few hours. What's truly annoying when you're job hunting is the company having an extremely high rejection rate even at the take-home stage. That's an inordinate waste of time, multiplied across a lot of companies.

If you have a >50% chance of rejecting, don't even give the candidate a take-home. Be at least 90% sure you want them before you get to that stage.
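
To put rough numbers on that (all figures invented, just to show the shape of the math):

    # Illustrative expected-value math; every number here is made up.
    hours_per_takehome = 4
    applications = 10
    for p_reject in (0.5, 0.9):
        sunk = applications * p_reject * hours_per_takehome
        print(f"{p_reject:.0%} rejection rate -> ~{sunk:.0f} candidate-hours sunk")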

Aurornis11 hours ago

> I generally have a policy of "over 4 hours and I charge for my time."

Worth mentioning that demanding to be paid to apply to a company is usually equivalent to rejecting the job. Most companies are going to end the interview there. Few HR departments would allow one applicant to be paid for the same interview loop as other candidates.

I was helping out in a mentoring program during the ZIRP period when the idea of charging companies for take-home interviews started to become popular. I can’t think of anyone it actually worked for in that group. I’ve heard anecdotes online of some people doing it with success, but any company like Anthropic is just going to close your application and move on if you request to be paid for applying. They have a zillion other qualified candidates in line.

If someone is giving a take-home problem that looks like you’re actually doing work for the company, that’s a different story. This problem is not actually work, obviously.

pclmulqdq7 hours ago

Yeah, I have told HR people this and been rejected. I do say this upfront because I don't want to send you a surprise bill. The main response I get is "OK, that's fine, don't spend more than 4 hours on it." The Anthropic recruiter told me, "no problem, it's a 4-hour test anyway."

Aurornis5 hours ago

> I do say this upfront because I don't want to send you a surprise bill.

Sending a company a surprise bill that they didn't agree upon is bad practice. Interviews are customarily not compensated, so it's unreasonable to surprise bill someone for it.

If you send a company a surprise bill for the interview, it's going to give the HR people a good laugh as they cross you off the candidates list. Everyone involved is going to forever remember you as the person who tried surprise billing for the interview and make a mental note to never interview you again at future companies.

It's not a good thing to try.

whateveracct20 hours ago

4 hours continuous or no? I can't imagine finding 4 hours of straight focus.

ryanjshaw20 hours ago

These kinds of roles are for youngsters with minimal commitments who are looking for their shot to break into a wild industry. It's not for the middle-aged single parent with a full-time job and just enough free time to do an extra load of laundry.

saagarjha17 hours ago

Continuous

djmips18 hours ago

If you look at it as a puzzle game then it's not any different than the time you use to play other games.

browningstreet21 hours ago

I’ve been sent the Anthropic interview assignments a few times. I’m not a developer, so I don’t bother. At least at the time, they didn’t seem to have screenings for technical but non-dev roles. Maybe they do now.

throwa35626219 hours ago

Care to elaborate on the first part?

Did you apply for a position? Did they send you the assignment without prior discussion?

sealeck21 hours ago

Why is writing code to execute a program using the fewest instructions possible on a virtual machine a waste of time?
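
For anyone who hasn't opened the repo, the flavor of the task as a toy sketch: a made-up two-instruction machine with a cycle counter, not the assignment's actual ISA:

    # Toy sketch only: each executed instruction costs one cycle,
    # so producing the same result in fewer instructions scores better.
    def run(program, x):
        acc, cycles = x, 0
        for op, arg in program:
            acc = acc + arg if op == "add" else acc * arg
            cycles += 1
        return acc, cycles

    naive = [("add", 1)] * 8   # computes x + 8 in 8 cycles
    tuned = [("add", 8)]       # same answer in 1 cycle
    assert run(naive, 5) == (13, 8) and run(tuned, 5) == (13, 1)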

0x3f20 hours ago

The expected time you spend on it is much more than the expected time they'll spend on it.

efilife18 hours ago

You don't get paid for it.

mips_avatar21 hours ago

It’s kind of an interesting problem.

OhNoNotAgain_9919 hours ago

[dead]

mannykannot12 hours ago

I beat the target by deleting the parts that were causing the cycle count to be too high. /s

eisbaw6 hours ago

submit and see if Anthropic accepts it

kartibbb15 hours ago

[flagged]

kartibbb15 hours ago

[flagged]

tmp-12785371618 hours ago

[flagged]

falloutx16 hours ago

Well, working under someone who keeps insisting software engineering is dead sounds like a toxic work environment.

woof16 hours ago

"1) Python is unreadable."

Would you prefer C or C++?

"2) AI companies are content with slop and do not even bother with clear problem statements."

It's a filter. If you don't get the problem, you'll waste their time.

"3) LOC and appearance matter, not goals or correctness."

The task was goal+correctness.

"4) Anthropic must be a horrible place to work at."

Depends on what you do. For this position it's probably one of the best companies to work at.

am17an13 hours ago

> "1) Python is unreadable." Would you prefer C or C++?

Unironically, yes. Unless I never plan to look at that code again.

tap1278348711 hours ago

It is a filter for academics who write horrible Python code and feel smart, yes.

I think they also have open positions for stealing other people's code and DDoS-ing other people's websites.

myahio23 hours ago

[flagged]

jackblemming21 hours ago

Seems like they’re trying to hire nerds who know a lot about hardware or compiler optimizations. That will only get you so far. I guess hiring for creativity is a lot harder.

And before some smart aleck says you can be creative on these types of optimization problems: not in two hours, it’s far too risky vs regurgitating some standard set of tried and true algos.

onion2k21 hours ago

> And before some smart aleck says you can be creative on these types of optimization problems: not in two hours, it’s far too risky vs regurgitating some standard set of tried and true algos.

You're both right and wrong. You're right in the sense that the sort of creativity the task is looking for isn't really possible in two hours. That's something that takes a lot of time and effort over years to be able to do. You're wrong because that's exactly the point. Being able to solve the problem takes experience. Literally. It's having tackled these sorts of problems over and over in the past until you can draw on that understanding and knowledge reasonably quickly. The test is meant to filter out people who can't do it.

I also think it's possible to interpret the README as saying humans can't do better than the optimizations that Claude does when Claude spends two hours of compute time, regardless of how long the human takes. It's not clear though. Maybe Claude didn't write the README.

tmule21 hours ago

Your comment history suggests you’re rather bitter about “nerds” who are likely a few standard deviations smarter than you (the Anthropic OG team, Jeff Dean, proof nerds, Linus, …)

jackblemming21 hours ago

And they’re all dumber than John von Neumann, who cares?

margalabargala20 hours ago

Transitively, you haven't thought the most thoughts or cared the most about anything, therefore we should disregard what you think and care about?

muglug21 hours ago

If they're hiring performance engineers then they're hiring for exactly these sets of skills.

It's a take-home test, which means some people will spend more than a couple of hours on it to make their answer really good. They would have gone after those people in particular.

Analemma_21 hours ago

This would be an inappropriate assignment for a web dev position, but I'm willing to bet that a 1% improvement in cycles per byte in inference (or whatever) saves Anthropic many millions of dollars. This is one case where the whiteboard assignment is clearly related to the actual job duties.
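
Back-of-envelope, with entirely invented numbers, just to show the order of magnitude:

    # Hypothetical figures only, not Anthropic's actuals.
    annual_inference_spend = 1_000_000_000  # assume $1B/year on inference compute
    cycle_reduction = 0.01                  # a 1% cut in cycles per byte
    print(f"${annual_inference_spend * cycle_reduction:,.0f}/year saved")
    # -> $10,000,000/year saved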

saagarjha17 hours ago

The solution was explicitly graded on creativity fwiw

rvz21 hours ago

> Seems like they’re trying to hire nerds who know a lot about hardware or compiler optimizations. That will only get you so far. I guess hiring for creativity is a lot harder.

Good. That should be the minimum requirement.

Not another Next.js web app take home project.