Opportunity type
Full-time
Cause areas
AI safety & policy
Information security
Routes to impact
Direct high impact on an important cause
Raising awareness of institutional decision-making challenges in AI safety
Skill-building & building career capital
Skill set
Software engineering
Information security
Organization building, boosting & running
AI governance understanding
Location
London, UK (with options to work in other UK government offices including Birmingham, Cardiff, Darlington, Edinburgh, Salford or Bristol; hybrid)
Description
The AI Security Institute is seeking a Staff/Principal Security Engineer, Trust & Risk, to build and own risk platforms that automate security controls and assurance for advanced AI research in a high-impact, government-adjacent environment.
- Design and implement automated, programmatic security controls and evidence pipelines across cloud, CI/CD, and ML workflows.
- Collaborate with research, platform, and security teams to embed compliance and risk management into infrastructure and AI model handling.
- Influence global AI governance, working directly with government leaders and frontier AI companies.
- Enjoy extensive benefits: hybrid work, generous leave, pension contributions, learning stipends, and opportunities for rapid growth and external collaboration.
Applicants of any nationality are encouraged to apply; visa status is considered, and successful candidates must pass UK security clearance checks.