The Effective Altruism Opportunities Board

Work on the world's most pressing problems. Browse jobs, fellowships, internships, courses, and more at high-impact organisations.
                The Alignment Project
                AI Security Institute (AISI)
                8 months ago
                AI safety & policy
                Funding
                Advising
                Routes to impact: Direct high impact on an important cause
                Description
                The Alignment Project offers up to £1 million in funding to support groundbreaking research in AI alignment, aiming to ensure advanced AI systems are safe, reliable, and beneficial to society.
                • Open to researchers across disciplines, with a focus on advancing AI alignment and control.
                • Access to up to £5 million in AWS cloud computing credits for large-scale technical experiments.
                • Support from leading experts and dedicated teams throughout the project lifecycle.
                • Opportunities for venture capital investment to accelerate commercial alignment solutions.
                For more information or to apply, visit the Alignment Project website.
                Fellowship, Artificial Intelligence in Strategic Stability and Military Competition
                Stanford, USA
                4 days ago
                Fellowship, Geopolitics of Artificial Intelligence
                Stanford, USA
                4 days ago
                Anthology Fund
                Menlo Ventures, Anthropic
                San Francisco, USA / Menlo Park, USA
                1 month ago
                Request for Proposals, The Launch Sequence
                Remote
                3 months ago
                Mentor — BASE Fellowship Fall 2026
                Black in AI Safety & Ethics (BASE)
                Remote
                5 days ago
                PhD Studentship, Monitoring and Increasing LLM Safety
                University of Cambridge
                Cambridge, United Kingdom
                1 week ago
                PhD Positions, Responsible AI
                University of Vienna
                Vienna, Austria
                2 weeks ago
                Request for Proposal, Science of Trustworthy AI (2026)
                Remote
                2 months ago