San Francisco, CA · Onsite · Full-time
Our mission is to make autonomous AI agents resilient to emerging threats to their reliability and security.
We are tackling the greatest challenge in AI: bridging the gap between simulation and production. While models are trained to minimize average-case loss on static distributions, production is an adversarial environment. This lack of adversarial robustness is a major bottleneck for the efficacy of AI in high-stakes applications.
Agent security is fundamentally different from traditional cybersecurity. AI vulnerabilities are amorphous and non-deterministic, and the space of adversarial inputs is unbounded. These vulnerabilities are more persistent than those of the past and require dedicated research to solve.
We are a team of AI safety and security researchers laser-focused on addressing this challenge. We have built RL infrastructure at DeepMind, trained state-of-the-art models at Jane Street, NVIDIA, and Cohere, and published leading research at top conferences (NeurIPS, ICLR, ICML, and more). We care deeply about the safety risks of AI agent adoption and began collaborating as part of the Harvard AI Safety Team. We are a well-funded, early-stage startup backed by top investors.
As Chief of Staff, you will lead the effort to establish our company as the rigorous standard for AI safety and to unlock our business potential.
You will shape how we communicate our work to customers and partners, working closely with the founders to define the narrative of AI reliability and security.
You will lead key strategic projects, from identifying new partners to assembling coalitions of security experts to build policy consensus.
You will make high-stakes decisions on behalf of the company with direct accountability for outcomes.
Email hello@generalanalysis.com with your resume and a couple of sentences on why you'd be a good fit.