PhD student in AI security

Opportunity type
Fellowship
Cause areas
AI safety & policy
Routes to impact
Skill-building & building career capital
Learning about important cause areas
Direct high impact on an important cause
Skill set
Information security
Academia
Conceptual & empirical research
Software engineering
Deadline
2026-01-12
Location
Linköping, Sweden
Description
Linköping University is seeking a highly talented and motivated PhD student to join a research team focused on AI security, specifically investigating “Memory Poisoning in LLM Agents: Foundations, Attacks, and Defenses.”
  • Conduct cutting-edge research on security risks in large language model (LLM) agents, with a focus on memory poisoning attacks.
  • Collaborate with a dynamic team across Linköping University, Chalmers University of Technology, and Recorded Future, with opportunities for research visits.
  • Benefit from a competitive starting salary (at least 36,400 SEK/month), full employee benefits, and a supportive, diverse academic environment.
  • Devote most of your time to doctoral studies and research, with the possibility of teaching or departmental duties up to 20% of full-time.
Applicants must have a Master’s degree in Computer Science, Electrical Engineering, or Applied Mathematics (or equivalent), strong programming and mathematical skills, and excellent English communication abilities. The position is open to international applicants; background screening may be required before employment.