Technical AI Safety conference at the Oxford Martin School. Free admission. Third iteration and the first held in the UK (previous editions ran in 2024 and 2025). Welcomes researchers and professionals from all backgrounds interested in AI safety, regardless of prior research experience. Organized by the Oxford Martin AI Governance Initiative and Noeon Research. Registration now open.
Open deadlines
Conferences, workshops, fellowships, and training programmes with currently-open submission or sign-up deadlines — calls for papers, fellowship applications, registration, early-bird pricing. Sorted by deadline, soonest first.
Workshop on secure and verifiable AI development, bringing together researchers, builders, and funders across ML, hardware security, systems, cryptography, and computer security. Focuses on verification techniques for AI safety. Co-located with the IEEE Security and Privacy conference. Organized by FAR.AI.
Quarterly AI safety research fellowship based in London. Fellows pursue independent research projects with mentorship from alignment researchers. Applications for Q3 2026 cohort due May 3.
Workshop at ICML 2026 exploring pluralistic AI: aligning with the diversity of human values. Accepts 4-8 page papers plus unlimited references. Topics span machine learning, philosophy, HCI, social sciences, and policy, covering methods for pluralistic ML training, handling value conflicts, and approaches to diverse societal values. CFP deadline May 3, camera-ready June 10. Non-archival format accepting position papers, works in progress, policy papers, and academic papers.
Anthropic's four-month fellowship program for AI safety research. Weekly stipend of $3,850 USD / £2,310 GBP / $4,300 CAD, plus ~$15k/month compute budget and close mentorship from Anthropic researchers. Priority areas include scalable oversight, adversarial robustness, and interpretability. Application deadline May 3.
Fully funded 5-month in-person AI safety fellowship pairing senior advisors with emerging talent on technical, governance, strategy, and field-building projects. Monthly stipend of $8,400, ~$15k compute budget for empirical fellows, and workspace at Constellation Berkeley. Applications close Sept 26, onboarding completes Dec 31, and the program runs Jan 5 - Mar 31 2027, with a possible extension through June 30. Over 80% of the first cohort now work full-time in AI safety.
OpenAI's safety fellowship program (Sept 2026 - Feb 2027) for researchers pursuing work on safety and alignment of advanced AI systems. Priority areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. Monthly stipend, compute support, mentorship, and workspace in Berkeley. Application deadline May 3, notification July 25.
Neural Information Processing Systems 2026, held across three satellite locations: Sydney, Atlanta, and Paris. Features an Evaluations & Datasets Track, workshops, competitions, and safety-related tracks. Abstract deadline May 4, full paper deadline May 6, author notifications Sept 24. In scope here via safety-related workshop and Evaluations & Datasets Track submissions.
Annual mechanistic interpretability workshop at ICML 2026 in Seoul. Focuses on developing principled methods for analyzing and understanding model internals (weights and activations). Brings together researchers from academia, industry, and independent research. CFP deadline May 8 (AoE). Follows successful previous editions at ICML 2024 and NeurIPS 2025.
5-day multi-track unconference bringing together researchers in theoretical AI alignment. Covers mathematical approaches including Singular Learning Theory, Agent Foundations, Causal Incentives, Computational Mechanics, and Scalable Oversight. Unconference format: participants can propose and lead sessions. Free to attend. Application deadline June 1. Limited needs-based travel and accommodation funding is available, and a limited number of onsite bedrooms can be booked after registration.
UNIDIR's global conference bringing together diplomats, policymakers, military experts, industry leaders, academics, civil society, and research labs to discuss AI security and ethics. Part of UNIDIR's broader work on international AI policy, disarmament, and governance, and directly relevant to the AI safety community's governance interests.