Research Intern, Fundamental Safety Tooling for LLMs

Haize Labs
Opportunity type
Internship
Cause areas
AI Safety & Policy
Routes to impact
💡 Direct/Increased Engagement with EA
📈 Skill-Building & Building Career Capital
🤲 Direct High Impact on an Important Cause
🧪 Testing Your Fit for a Certain Career Path
Relevant aptitudes
Software Engineering
Conceptual & Empirical Research
Organization Building, Boosting & Running
Duration
🍂 Fall (Aug - Dec)
Location
New York City, NY, USA
Description
Haize Labs haizes LLMs at scale. We are the robustness layer eliminating the risk of using language models in any setting. To prevent these systems from failing, we preemptively discover all the ways in which they can fail and continuously eliminate them in deployment.

We are looking for Research Interns to help us develop fundamental safety tooling for LLMs. Your work will set the standard not only for research, but also for how LLMs are tested, verified, and applied across customers, companies, and industries. You will directly influence how the world responsibly uses LLMs.

Responsibilities
  • Work directly with customers to adapt our core R&D for different domains.
  • Build out core infra, cloud tooling, and UX around our algorithms.
  • Deliver a delightful human-in-the-loop product experience.
  • Ship tools that are used by developers across the world. 
Qualifications
  • Experience with ML in an applied setting.
  • Strong open-source presence, or a strong track record of software engineering projects and employment.
  • Can ramp up very quickly on understanding our research.
  • Love to break things, i.e. have a “stick it to the Man” attitude.

What sets us apart

We are not here to write GPT wrappers or get rich quick off the AI bubble. We're here to work on the hardest, most fundamental research problem in AI: making it reliable and robust. Come here to push yourself, learn fast, experience excellence, and kickstart your life's work. We value our team above all else, and firmly believe that greatness begets greatness.

Since starting 6 months ago, we’ve developed a suite of safety tools that is being used at places like Anthropic, AI21, Scale AI, and several other foundation model providers. We’ve been fortunate enough to be backed by the founders of Cognition, Hugging Face, Weights and Biases, Nous, Etched, Okta, and Replit, as well as rockstar AI and security executives from Stripe, Anduril, and Netflix. We’re lucky to be advised by professors from CMU and Harvard.

Our founding team has been working together for quite some time. We are turning down our Stanford PhD offers to send it on Haize Labs, have gotten into Y Combinator and other accelerators multiple times (and turned down multiple times), were the first Student Researcher at Allen AI, co-led R&D at a Series A NLP startup, wrote ML-guided matchmaking services for 50,000+ students, built an educational nonprofit supporting 60 countries, and did some other cool things along the way.