Opportunity type
Full-time
Cause areas
AI safety & policy
Routes to impact
Direct high impact on an important cause
Skill-building & building career capital
Direct/increased engagement with EA
Skill set
Research
Software engineering
Academia
Forecasting
Location
Berkeley Office (CA, USA); Remote (US); Remote (International); future Singapore office (expected 2026 Q3)
Description
A full-time research role for experienced ML scientists who want to advance frontier AI safety through hands-on technical work in collaboration with leading researchers and institutions.
- Research ownership: Lead and scale AI alignment research agendas in areas such as evals, red-teaming, deception, robustness, or mechanistic interpretability
- Infrastructure and support: Access substantial GPU compute, engineering mentorship, and dedicated research pods
- Ecosystem access: Collaborate with frontier labs, national AI safety institutes, and top academics; publish at major conferences
- Compensation and flexibility: $120K–$190K salary (location-dependent), remote or Berkeley-based, visa sponsorship available
Senior researchers with a strong vision are also invited to propose and incubate new research directions at FAR.AI.