MATS Summer 2026 applications are now open! https://matsprogram.org/apply?utm_source=corpora&utm_medium=&utm_campaign=s26 We believe reducing risks from unaligned AI is one of the world's most urgent and talent-constrained challenges, and that ambitious people from a wide range of backgrounds can meaningfully contribute to this work. That's why we're training the next generation of AI alignment, interpretability, security, and governance researchers.
Running June–August 2026, MATS' 12-week research program provides everything you need to launch your career in AI safety: field-leading research mentorship, funding ($15k stipend + $12k compute), offices in Berkeley or London (depending on mentor preference), housing, talks and workshops with AI experts, and a global network of peers.
MATS has accelerated 450+ researchers so far. Among alumni who graduated before 2025, 80% are working directly in AI safety/security and 10% have co-founded active AI safety startups. Participants have coauthored 150+ papers, with 7,800+ citations https://scholar.google.com/citations?user=VgJaUK4AAAAJ&hl=en, and rate our program 9.4/10. Our mentors include world-class researchers from Anthropic, Google DeepMind, OpenAI, UK AISI, GovAI, Redwood, METR, Apollo Research, Goodfire, RAND, AI Futures Project, and more.
*Apply by January 18 AoE to be considered!* Visit our website https://matsprogram.org/apply?utm_source=corpora&utm_medium=&utm_campaign=s26 for details.
Help us spread the word by sharing this with anyone you know who'd be a good fit, and feel free to reach out if you have any questions!
Best,
Eric

--
Eric Dhan
Operations Generalist, MATS
LinkedIn: https://www.linkedin.com/in/eric-dhan/