Opportunity type
Full-time
Cause areas
AI safety & policy
Routes to impact
Direct high impact on an important cause
Skill-building & building career capital
Learning about important cause areas
Skill set
Research
Software engineering
Organization building, boosting & running
Location
Berkeley Office (CA, USA); Remote (International); Remote (US); Singapore office (expected 2026 Q3)
Description
A full-time role for experienced engineers who want to lead high-impact AI safety research across model training, evaluations, or research infrastructure.
- Research impact – Lead projects on AI deception, adversarial robustness, red-teaming, or mechanistic interpretability with a 30+ person technical team
- Technical scope – Work on large-scale transformer training, frontier model evaluations, or GPU infrastructure and distributed systems
- Leadership – Mentor engineers and researchers; shape 3–6 month research roadmaps within growing workstreams
- Compensation and flexibility – $150K–$250K salary (location-dependent), remote or Berkeley-based, visa sponsorship and travel support available
Also open to senior researchers with a strong vision for new AI safety directions within FAR.Research.