San Francisco, CA · Onsite · Full-time
Our mission is to make autonomous AI agents resilient to emerging threats to their reliability and security.
We are tackling the greatest challenge in AI: bridging the gap between simulation and production. While models are trained to minimize average-case loss on static distributions, production is an adversarial environment. This lack of adversarial robustness is a major bottleneck for the efficacy of AI in high-stakes applications.
Agent security is fundamentally different from traditional cybersecurity. AI vulnerabilities are amorphous and non-deterministic, and the space of adversarial inputs is unbounded. These vulnerabilities are more persistent than those of the past and require dedicated research to solve.
We are a team of AI safety and security researchers laser-focused on addressing this challenge. We have built RL infrastructure at DeepMind, trained state-of-the-art models at Jane Street, NVIDIA, and Cohere, and published leading research at top conferences (NeurIPS, ICLR, ICML, and more). We care deeply about the safety risks of AI agent adoption and began collaborating as part of the Harvard AI Safety Team. We are a well-funded, early-stage startup backed by top investors.
As a Software Engineer, you will own end-to-end development of our platform for safety and reliability testing.
You will be responsible for designing and building complex agentic systems and the user-facing platform that exposes these capabilities, and for interfacing directly with customers to build features that meet their needs.
Email hello@generalanalysis.com with your resume and a couple of sentences on why you'd be a good fit.