Opportunity type
Funding
Independent project
Cause areas
AI safety & policy
Routes to impact
Direct high impact on an important cause
Skill-building & building career capital
Learning about important cause areas
Skill set
Research
Software engineering
Academia
Deadline
2026-05-26
Description
Schmidt Sciences is inviting proposals for a pilot program in AI interpretability, focused on developing new methods to detect and mitigate deceptive behaviors in AI models.
- Funding available between $300k and $1M for projects lasting one to three years.
- Open globally to individual researchers, teams, institutions, and multi-institution collaborations; collaborations across geographic boundaries are encouraged.
- Key research areas include detecting deceptive behaviors in large language models (LLMs), steering models to improve truthfulness, and applying these methods to real-world use cases.
- Applicants may request funding for compute or access to Schmidt Sciences’ computing resources; additional support such as software engineering assistance and API credits is also available.
No specific visa requirements are mentioned; projects must comply with all applicable laws and cannot include lobbying or political activity.