Research Engineer, Search and Knowledge Post-Training

🇺🇸 Anthropic

Type
Full Time
Level
Mid-level
Location
Remote-Friendly (Travel Required) | San Francisco, CA | Seattle, WA | New York City, NY | Remote OK
Posted 6h ago

Job Description

About Anthropic

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role

We want future AI systems to have superhuman epistemics: the ability to parse evidence at enormous scale and draw rigorous conclusions for both the model and the user. Search is the capability that determines whether a model can pick a signal out of noise, weigh conflicting evidence, and know what it doesn't know. Every higher-order capability we care about depends on search being trustworthy. If we want Claude to be a trustworthy collaborator on real knowledge work, it has to be a trustworthy searcher.

We're hiring a Research Engineer to advance the science and engineering that goes into making Claude this trustworthy searcher. This is a research role for someone who is unusually rigorous: you'll define hypotheses about what makes a model an epistemically sound searcher, design the experiments that test them, and turn search post-training from a craft into a measurable science. You'll be the person who insists on cleanly isolated variables, calibrated metrics, and reproducible signal, while also having the engineering skill to build the infrastructure necessary to get them.

This work sits at the intersection of reinforcement learning, retrieval, and evaluation, and it directly shapes how Claude behaves in any setting where evidence matters: research, analysis, agentic workflows, and beyond.

What you'll do

- Own a research direction for a class of search post-training problems end-to-end: form hypotheses about latent capabilities, design experiments that isolate them, run training, and decide what to try next.
- Build the instrumentation that turns environment design into a controlled experiment, so we can study how each environment factor contributes to the capabilities we care about rather than overfitting to any one regime.
- Design frontier-discriminating evaluations that distinguish genuine reasoning over evidence from plausible pattern matching, and that hold up as models improve.
- Drive optimization rigor across the stack: efficient experiment design, ablations, training run economics, and the discipline to know when a result is real.
- Collaborate deeply with researchers across post-training, RL infrastructure, and product to translate model behavior in the wild into concrete training signals and back again.
- Set the bar for the team's experimental standards: what we measure, how we measure it, and how we know a result is real.

Minimum (must-have)

- Have an unusually rigorous, quantitative mindset
- Are an outstanding software engineer in Python, comfortable across the stack from data pipelines to RL training to evaluation infrastructure
- Have shipped real ML research repeatedly, with taste for which experiments a


Required Skills

Python, Go, Rust, R