Research Engineer / Scientist, Tool Use Safety
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the team
The Tool Use team within Research is responsible for making Claude the world's most capable, safe, reliable, and efficient model for tool use and agentic applications. The team focuses on the foundational layer, solving core problems such as tool use safety (e.g., prompt injection robustness), tool call accuracy, long-horizon and complex tool use workflows, large-scale and dynamic tool sets, and tool use efficiency. These foundations support the majority of Anthropic's customers, as well as internal teams building specific agentic applications such as Claude for Chrome, Computer Use, Claude Code, and Search.
About the role
We're looking for Research Engineers / Scientists to help us advance the frontier of safe tool use. With tool use adoption accelerating rapidly across our platform, the next generation requires further breakthrough research to enable us to scale responsibly: for example, training Claude to be extremely robust against sophisticated prompt injection, preventing data exfiltration attempts through tool misuse, defending against adversarial attacks in realistic multi-turn agent conversations, and ensuring safety when agents operate autonomously over longer horizons with access to a large number of tools.
You'll collaborate with a diverse group of researchers and engineers to advance safe tool use in Claude. You'll own the full research lifecycle, from identifying fundamental limitations to implementing solutions that ship in production models. This work is critical for de-risking our model's increasing capabilities and empowering Claude to assist users more autonomously.
Note: For this role, we conduct all interviews in Python.
Responsibilities
You may be a good fit if you
Strong candidates may also have one or more of the following
The expected salary range for this position is:
$315,000 - $425,000 USD
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas. However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification; not all strong candidates will. We believe AI systems have enormous social and ethical implications, so we value diverse perspectives and strive to include a range of experiences on our team.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. We value impact, advancing our long-term goals of steerable, trustworthy AI, rather than working on smaller, more specific puzzles. We view AI research as an empirical science, and we highly value collaboration and strong communication skills.
The easiest way to understand our research directions is to read our recent work, which continues many directions, including GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a collaborative office environment.
We encourage applicants from diverse backgrounds to apply.
Equal Employment Opportunity
We do not discriminate on the basis of protected status under any applicable law. We also invite voluntary self-identification as part of government reporting requirements, which is optional and confidential. The information collected is used solely for EEO compliance and reporting purposes.
Location: New York, NY, United States