AI Safety Course
Do you want to help make AI beneficial to humanity? Explore AI Safety in our comprehensive 8-week program, following a Governance or Alignment Track.
Course runs from 31st March to 23rd June 2025
Applications Closed
In this course you’ll explore the risks posed by advanced AI systems and approaches to addressing them, following either the Governance or the Alignment track.
The course is organised by AI Safety Collab and Safe AI London, a project of Arcadia Impact.
The course runs for 8 weeks from the week starting 31st March, with an optional 4-week project phase afterwards. The time commitment is about 5 hours per week: roughly 2 hours of reading, 1 hour of exercises, and 1.5–2 hours of discussion sessions.
The course is open to applications from anyone interested in learning about the topic; you don’t need a technical background to participate (although some technical experience can be helpful for the Alignment track). Discussion sessions will be arranged after you apply and will likely be held online, with in-person events arranged in London.
Further details can be found on the application form.
If you already have some background knowledge in AI Safety, we are also looking for course facilitators. Express interest in facilitating through the same application form.
Curriculum
AI Governance Track
The AI Governance Track follows the curriculum by BlueDot Impact, covering approaches to reducing AI risk through standards and regulation, and foreign policy to promote AI safety internationally.
Each week will cover:
How AI systems work
The Promise and Perils of AI
Foundations of AI Governance
The AI Policy Toolkit
How AI is Governed Today
Governance at Frontier AI Companies
Proposals to Govern AI
Contributing to AI Governance
After the course, there is an optional project phase.
AI Alignment Track
The AI Alignment Track follows the curriculum by AI Safety Atlas, covering progress in AI and technical approaches to reducing risk.
Each week will cover:
Capabilities
Risks
Strategies
Governance
Evaluations
Specification
Generalisation
Scalable Oversight
Interpretability (optional)
Not based in London?
Find more ways to learn about AI Safety and get involved in the AI Safety community: AISafety.com
See the AI Safety Fundamentals courses run by BlueDot Impact