Effective altruism opportunities board

Work on the world's most pressing problems. Browse jobs, fellowships, internships, courses, and more at high-impact organisations.
  • Full-time (294)
  • Part-time (59)
  • Internship (68)
  • Fellowship (58)
  • Volunteer (49)
  • Funding (43)
  • Independent project (26)
  • Contest (8)
  • Event (6)
  • Advising (28)
  • Recurring opportunity (97)
  • Course (13)
  • AI safety & policy (179)
  • Animal welfare (48)
  • Biosecurity & pandemic preparedness (34)
  • Building effective altruism (80)
  • Climate change (16)
  • Global health & development (135)
  • Global priorities research (35)
  • Information security (25)
  • Institutional decision-making (37)
  • International security & cooperation (51)
  • Mental health & wellbeing (6)
  • Nuclear security (14)
  • Other (pressing) (9)
  • Remote (262)
  • USA (182)
  • UK (49)
  • Europe (excluding UK) (53)
  • Canada (13)
  • Central America & Caribbean (13)
  • South America (17)
  • Asia (45)
  • Africa (73)
  • Australia & Oceania (11)
  • Middle East (10)
  • Mexico (8)
  • High school (24)
  • Undergraduate degree (202)
  • Graduate degree (135)
  • Doctoral degree (75)
  • Specialized degree (M.D., J.D., etc.) (69)
  • Direct high impact on an important cause (320)
  • Direct/increased engagement with EA (109)
  • Increasing awareness of institutional decision-making challenges in AI safety (27)
  • Learning about important cause areas (131)
  • Networking with peers around AI governance topics (34)
  • Skill-building & building career capital (388)
  • Testing your fit for a certain career path (75)
  • AI governance understanding (48)
  • Academia (31)
  • Collaborative problem-solving (149)
  • Communication (272)
  • Community building (86)
  • Data (85)
  • Entrepreneur (13)
  • Finance (11)
  • Forecasting (8)
  • Information security (17)
  • Legal (7)
  • Management (174)
  • Operations (226)
  • Organization building, boosting & running (207)
  • Policy writing (48)
  • Political & bureaucratic (44)
  • Research (249)
  • Software engineering (39)
  • The Institute for AI Policy and Strategy (1)
  • Works in Progress (1)
  • 0Labs (2)
  • 80,000 Hours (80K) (8)
  • AI Safety Asia (AISA) (1)
  • AI Safety Awareness Project (AISAP) (1)
  • AI Safety Connect (AISC) (2)
  • AI Safety Ideas (AISI) (1)
  • AI Security Institute (AISI) (6)
  • ALLAI (1)
  • ANSER (1)
  • Abdul Latif Jameel Poverty Action Lab (J-PAL) (8)
  • Active Site (2)
  • Alignment Ecosystem Development (1)
  • Alignment Research Center (1)
  • Alignment Research Engineer Accelerator (ARENA) (1)
  • Alliance to Feed the Earth in Disasters (ALLFED) (3)
  • AltProtein.jobs (1)
  • Amazon (1)
  • Ambitious Impact (AIM) [Formerly Charity Entrepreneurship] (1)
  • Americans Responsible for Innovation (1)
  • Anima International (1)
  • Animal Advocacy Africa (2)
  • Animal Charity Evaluators (1)
  • Animal Equality (3)
  • Animal Justice Academy (1)
  • Animal Outlook (1)
  • Animal Welfare League (1)
  • Ansh (3)
  • Anthropic (3)
  • Apart Research (1)
  • Apollo Research (9)
  • Aquatic Animal Policy Org (1)
  • Aquatic Life Institute (ALI) (2)
  • Arcadia Impact (2)
  • Artificial Intelligence Governance & Safety Canada (AIGS) (1)
  • Aspen Institute (2)
  • Authentic Memory (1)
  • Beneficial AI Foundation (1)
  • Berkeley Existential Risk Initiative (BERI) (1)
  • Blue Origin (1)
  • BlueDot Impact (10)
  • Broad Institute of MIT and Harvard (1)
  • CEEALAR (2)
  • CLEAR Global (1)
  • Cambridge AI Safety Hub (CAISH) (1)
  • Cambridge Boston Alignment Initiative (2)
  • Carnegie Endowment for International Peace (1)
  • Carnegie Mellon University (3)
  • Center for AI Safety (CAIS) (6)
  • Center for Democracy and Technology (CDT) (1)
  • Centers for Disease Control and Prevention (CDC) (1)
  • Center for Human-Compatible AI (CHAI) (1)
  • Center for Security and Emerging Technology (CSET) (1)
  • Center for Strategic and International Studies (CSIS) (7)
  • Center for a New American Security (CNAS) (1)
  • Center on Long-Term Risk (1)
  • Centre for Effective Altruism (CEA) (10)
  • Centre for Humanitarian Dialogue (HD) (1)
  • Centre for the Study of Existential Risk (CSER) (1)
  • Child Rights and You (CRY) (1)
  • Clean Air Task Force (3)
  • Clingendael Institute (3)
  • Cloudflare (3)
  • Coalition for Epidemic Preparedness Innovations (CEPI) (2)
  • Coefficient Giving (6)
  • Constellation (15)
  • Convergent Research (2)
  • Cornell Tech (1)
  • Cosmos Institute (1)
  • DIV Fund (1)
  • Dawn Song (UC Berkeley) (1)
  • Development Innovation Lab (3)
  • Dovetail (2)
  • Dwarkesh Podcast (1)
  • EA San Francisco (1)
  • EAGxAustralasia (1)
  • EAGxBerlin (1)
  • EAGxSingapore 2026 (1)
  • ERA (1)
  • Effectief Geven (3)
  • Effective Altruism Australia (1)
  • Effective Altruism Community (3)
  • Effective Altruism Czechia (1)
  • Effective Altruism Switzerland (1)
  • Effective Altruism UK (1)
  • Effective Altruism at UC Berkeley (1)
  • Effective Thesis (3)
  • Effektiver Altruismus Deutschland (1)
  • Epoch AI (2)
  • Equistamp (1)
  • Escape the City (1)
  • Evidence Action (17)
  • Existential Risk Observatory (1)
  • FAR.AI (7)
  • Faculty (1)
  • Faunalytics (1)
  • Fish Welfare Initiative (2)
  • Fondation pour la Recherche Stratégique (1)
  • Forecasting Research Institute (FRI) (4)
  • Foresight Institute (1)
  • Fortify Health (5)
  • Founders Pledge (1)
  • Frankfurt AI Safety (1)
  • Friends of the Earth Malta (1)
  • Fundación Igualdad Animal (1)
  • Future of Life Institute (FLI) (3)
  • Future of Privacy Forum (1)
  • Genesis Molecular AI (1)
  • GiveDirectly (10)
  • GiveWell (3)
  • Giving What We Can (GWWC) (2)
  • Good Food Institute (GFI) (2)
  • Graduate Applications International Network (GAIN) (1)
  • Haize Labs (2)
  • Happier Lives Institute (2)
  • Health Progress Hub (2)
  • High Impact Athletes (1)
  • High Impact Professionals (HIP) (1)
  • Hudson Institute (13)
  • INHR (1)
  • IRC Africa Inc. (1)
  • Impactful Policy Careers (1)
  • Innovate Animal Ag (1)
  • Innovations for Poverty Action (7)
  • Institute for Law & AI (LawAI) (2)
  • Institute for Progress (1)
  • International Rescue Committee (IRC) (1)
  • Irregular (2)
  • J-PAL Africa (2)
  • July AI (1)
  • Kickstarting For Good (2)
  • Kurzgesagt (1)
  • LawZero (2)
  • Lawrence Livermore National Laboratory (LLNL) (3)
  • Lead Exposure Elimination Project (LEEP) (3)
  • Lens Academy (1)
  • Lila Sciences (1)
  • Living Goods (2)
  • Longview Philanthropy (2)
  • MATS Program (1)
  • MIT Lincoln Laboratory (2)
  • MLex (2)
  • Machine Intelligence Research Institute (MIRI) (1)
  • Malaria Consortium (17)
  • Menlo Ventures (1)
  • Mieux Donner (1)
  • Mila (1)
  • Mirror Biology Dialogues Fund (1)
  • Mobius (1)
  • NYU Animal Welfare Program (1)
  • NYU Center for Mind, Ethics, and Policy (1)
  • National University of Singapore (1)
  • New Incentives (2)
  • New Roots Institute (2)
  • New York State Department of Financial Services (1)
  • Notify Health (2)
  • Novah (1)
  • One Acre Fund (3)
  • One for the World (1)
  • OpenAI (6)
  • Our World in Data (1)
  • Outcapped (3)
  • Partnership for Public Service (1)
  • Pause AI (3)
  • Perimeter (5)
  • Pfizer (1)
  • Pivotal Research (1)
  • Prevail Fund (5)
  • Principled Agents (2)
  • Prism Research (1)
  • ProVeg International (1)
  • Pulitzer Center (1)
  • RAND (4)
  • Redefine Meat (1)
  • Rethink Priorities (1)
  • Rethink Wellbeing (2)
  • SaferAI (1)
  • Sage (1)
  • SandboxAQ (1)
  • Sandia National Laboratories (1)
  • Schmidt Sciences (2)
  • SecureBio (1)
  • Seldon Lab (1)
  • Sentient Futures (1)
  • Sightsavers (1)
  • Sinergia Animal (5)
  • Singapore AI Safety Hub (SASH) (1)
  • Sociedade Vegetariana Brasileira (1)
  • Speculative Technologies (1)
  • Stockholm International Peace Research Institute (SIPRI) (1)
  • Successif (2)
  • Superlinear (1)
  • Survival and Flourishing Fund (SAF & SFF) (1)
  • Suvita (5)
  • The Collective Intelligence Project (1)
  • The Good Food Institute (2)
  • The Humane League (4)
  • The Midas Project (1)
  • The Pollination Project (1)
  • The School for Moral Ambition (1)
  • The White House (1)
  • Tien Procent Club (1)
  • Timaeus (1)
  • Transluce (3)
  • Truthful AI (3)
  • U.S. Office of Personnel Management (1)
  • U.S. Department of Energy (DOE) (2)
  • United Nations (UN) (1)
  • United Nations Institute for Disarmament Research (UNIDIR) (3)
  • University of Cambridge (1)
  • University of Oxford (2)
  • University of Vienna (1)
  • ValleyDAO (1)
  • Valthos (1)
  • Vector Impact Talent (1)
  • Veterinary Association for Farm Animal Welfare (VAFAW) (1)
  • Vista Institute for AI Policy (2)
  • Welfare Matters (1)
  • Wisconsin Project on Nuclear Arms Control (1)
  • WorkStream (1)
  • World Health Organization (1)
  • Zanzalu (1)
Researcher, Alignment Science
OpenAI · San Francisco, USA
Today
Routes to impact
Direct high impact on an important cause
Skill-building & building career capital
Description
A research role at OpenAI for experienced ML researchers focused on building scalable alignment methods that ensure frontier AI models follow human intent, remain honest, and behave safely.
  • Focus area: design and run experiments on intent following, honesty, calibration, and robustness using reinforcement learning and empirical ML methods
  • Impact: work translates directly into deployed models, with results eligible for external publication when they advance the broader science of alignment
  • Compensation: $250K–$445K base salary plus equity; hybrid model (3 days in office) based in San Francisco, with relocation assistance available
  • Ideal background: hands-on LLM training and evaluation experience, strong Python and PyTorch skills, and comfort with RL, post-training, or scalable oversight research
Apply directly via the OpenAI careers page.
Lead AI Applications Developer / Senior AI Developer, Safety
Mila · Montreal, Canada
1 week ago
Research Engineer
FAR.AI · Remote (USA)
1 week ago
Research Scientist/Engineer (Science of Scheming)
Apollo Research · London, United Kingdom
1 month ago
Head of Legal
Apollo Research · Remote, UK, Europe (excluding UK)
1 month ago
Applied Researcher (Product)
Apollo Research · Remote, UK, Europe (excluding UK)
1 month ago
AI Red Teamer, Frontier AI Safety
July AI
2 months ago
Technical Recruiter
FAR.AI · Remote
5 days ago
US/National Security Specialist
Apollo Research · Remote, UK, Europe (excluding UK)
1 month ago