About the Team
Security is at the foundation of OpenAI’s mission to ensure that artificial general intelligence benefits all of humanity.
The Security team protects OpenAI’s technology, people, and products. We are technical in what we build but operational in how we execute, and we support all of OpenAI’s research and product initiatives. Our tenets include: prioritizing impact, enabling researchers, preparing for future transformative technologies, and fostering a robust security culture.
A current security clearance is not required, but you must be eligible for clearance sponsorship.
About the Role
Lead an effort to map, characterize, and prioritize cross-layer vulnerabilities in advanced AI systems, spanning data pipelines, training and inference runtimes, and system and supply-chain components. You’ll drive offensive research, produce technical deliverables, and serve as OpenAI’s primary technical counterpart for select external partners (including potential U.S. government stakeholders).
What you’ll do:
Build an AI Stack Threat Map across the AI lifecycle, from data to deployment.
Deliver deep-dive reports on vulnerabilities and mitigations for training and inference, focused on systemic, cross-layer risks.
Orchestrate inputs across research, engineering, security, and policy to produce crisp, actionable outputs.
Engage external partners as the primary technical representative; align deliverables to technical objectives and milestones.
Perform hands-on threat modeling, red-team design, and exploitation research across heterogeneous infrastructure (compilers, runtimes, and control planes).
Translate complex technical issues for technical and executive audiences; brief on risk, impact, and mitigations.
You may thrive if you:
Have led high-stakes security research programs with external sponsors (e.g., national-security or critical-infrastructure stakeholders).
Have deep experience with cutting-edge offensive-security techniques.
Are fluent across AI/ML infrastructure (data, training, inference, schedulers, accelerators) and can threat-model end-to-end.
Operate independently, align diverse teams, and deliver on tight timelines.
Communicate clearly and concisely with experts and decision-makers.
Goals & impact
Provide decision-makers a common vulnerability taxonomy, early warning of systemic weaknesses, and a repeatable methodology that measurably raises the bar for adversaries.
Outcomes include: more resilient AI architectures, reduced exploit windows, and better-targeted security R&D investments across defense and public-sector stakeholders.
Key technical challenges
End-to-end coverage: Tracking threats across the AI lifecycle, including data, software, and system-level components.
Cross-disciplinary integration: Reconciling perspectives from owners of disjoint stack layers to capture composite attack paths.
Stochastic inference: Non-determinism from temperature, top-k, and top-p decoding complicates reproducibility, requiring seeded runs, harness control, and careful methodology to validate vulnerabilities.
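The reproducibility point above can be made concrete: when the sampling RNG is seeded and the decoding parameters are pinned, a stochastic decode becomes repeatable, so a probabilistic finding can be re-validated. A minimal sketch, assuming toy logits and illustrative function names (none of this reflects actual OpenAI infrastructure):

```python
import math
import random

def top_k_sample(logits, k, rng):
    """Sample a token index from the k highest-logit candidates,
    weighted by exp(logit) (illustrative top-k sampling)."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    weights = [math.exp(logits[i]) for i in top]
    return rng.choices(top, weights=weights, k=1)[0]

def decode(logits_per_step, k=3, seed=0):
    """Seeded decoding harness: identical seeds and parameters
    yield identical token sequences."""
    rng = random.Random(seed)
    return [top_k_sample(step, k, rng) for step in logits_per_step]

# Two runs with the same seed reproduce the same trace, which is
# what makes a stochastic result verifiable by a second party.
steps = [[0.1, 2.0, 1.5, -1.0], [1.0, 0.2, 3.0, 0.5]]
assert decode(steps, seed=42) == decode(steps, seed=42)
```

The same principle applies regardless of the sampling scheme: fixing the seed, the decoding parameters, and the harness turns "the exploit sometimes triggers" into a repeatable observation.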
About OpenAI
OpenAI is an AI research and deployment company dedicated to ensuring that general-purpose artificial intelligence benefits all of humanity. We push the boundaries of the capabilities of AI systems and seek to safely deploy them to the world through our products. AI is an extremely powerful tool that must be created with safety and human needs at its core, and to achieve our mission, we must encompass and value the many different perspectives, voices, and experiences that form the full spectrum of humanity.
We are an equal opportunity employer, and we do not discriminate on the basis of race, religion, color, national origin, sex, sexual orientation, age, veteran status, disability, genetic information, or other applicable legally protected characteristic.
For additional information, please see OpenAI’s Affirmative Action and Equal Employment Opportunity Policy Statement.
Qualified applicants with arrest or conviction records will be considered for employment in accordance with applicable law, including the San Francisco Fair Chance Ordinance, the Los Angeles County Fair Chance Ordinance for Employers, and the California Fair Chance Act. For unincorporated Los Angeles County workers: we reasonably believe that criminal history may have a direct, adverse and negative relationship with the following job duties, potentially resulting in the withdrawal of a conditional offer of employment: protect computer hardware entrusted to you from theft, loss or damage; return all computer hardware in your possession (including the data contained therein) upon termination of employment or end of assignment; and maintain the confidentiality of proprietary, confidential, and non-public information. In addition, job duties require access to secure and protected information technology systems and related data security obligations.
To notify OpenAI that you believe this job posting is non-compliant, please submit a report through this form. No response will be provided to inquiries unrelated to job posting compliance.
We are committed to providing reasonable accommodations to applicants with disabilities, and requests can be made via this link.
OpenAI Global Applicant Privacy Policy
At OpenAI, we believe artificial intelligence has the potential to help people solve immense global challenges, and we want the upside of AI to be widely shared. Join us in shaping the future of technology.