Applied Adversarial Epistemics Tracker

Open deadlines

Conferences, workshops, fellowships, and training programmes with currently open submission or sign-up deadlines — calls for papers, fellowship applications, registration, early-bird pricing. Sorted by deadline, soonest first.


TAIS 2026 - Technical AI Safety Conference

conference ★ 0.91 CFP closes May 1, 2026
📅 May 14, 2026 📍 Oxford, UK via Technical AI Safety (TAIS) Conference

Technical AI Safety conference at the Oxford Martin School. Free admission. Third iteration and first time in the UK (previous editions in 2024 and 2025). Welcomes researchers and professionals from all backgrounds interested in AI safety discussions, regardless of prior research experience. Organized by the Oxford Martin AI Governance Initiative and Noeon Research. Registration now open.

#alignment #governance #safety-research #evals #interpretability · conference · technical · Oxford · free

Workshop on Assurance and Verification of AI Development (AVID)

workshop ★ 0.84 Apps close May 1, 2026
📅 May 17, 2026 📍 San Francisco, USA via FAR AI - Foundational AI Research

Workshop on secure and verifiable AI development, bringing together researchers, builders, and funders across ML, hardware security, systems, cryptography, and computer security. Focuses on verification techniques for AI safety. Colocated with IEEE Security and Privacy conference. Organized by FAR.AI.

#evals #alignment #safety-research #security · workshop · verification · cryptography · hardware-security

Pivotal Research Fellowship 2026 Q3

fellowship ★ 0.96 Apps close May 3, 2026
📅 Jun 29, 2026 – Aug 28, 2026 📍 London, UK via Pivotal Research Fellowship

Quarterly AI safety research fellowship based in London. Fellows pursue independent research projects with mentorship from alignment researchers. Applications for Q3 2026 cohort due May 3.

#alignment #interpretability

Second Pluralistic Alignment Workshop

workshop ★ 1.00 CFP closes May 3, 2026

Workshop at ICML 2026 exploring pluralistic AI: aligning with the diversity of human values. Accepts 4-8 page papers plus unlimited references. Topics span machine learning, philosophy, HCI, the social sciences, and policy, covering methods for pluralistic ML training, handling value conflicts, and approaches to diverse societal values. CFP deadline May 3, camera-ready June 10. Non-archival format accepting position papers, works in progress, policy papers, and academic papers.

#alignment #governance · ICML · workshop · alignment · pluralistic

Anthropic Fellows Program July 2026

fellowship ★ 1.00 Apps close May 3, 2026
📅 Jul 20, 2026 – Nov 20, 2026 📍 Hybrid via Anthropic Alignment Blog

Anthropic's four-month fellowship program for AI safety research. Weekly stipend of $3,850 USD / £2,310 GBP / $4,300 CAD, plus ~$15k/month compute budget and close mentorship from Anthropic researchers. Priority areas include scalable oversight, adversarial robustness, and interpretability. Application deadline May 3.

#alignment #interpretability #control #evals #adversarial-robustness · fellowship · research · Anthropic

Astra Fellowship Fall 2026

fellowship ★ 1.00 Apps close May 3, 2026
📅 Sep 14, 2026 – Feb 5, 2027 📍 Berkeley, USA via Constellation Astra Fellowship, OpenAI Safety Fellowship

Fully funded five-month in-person AI safety fellowship pairing senior advisors with emerging talent on technical, governance, strategy, and field-building projects. Monthly stipend of $8,400, ~$15k compute budget for empirical fellows, and workspace at Constellation in Berkeley. Applications close May 3; the program runs Sep 14, 2026 – Feb 5, 2027. Over 80% of the first cohort now work full-time in AI safety.

#alignment #control #evals #governance #safety-research #interpretability · fellowship · empirical · governance · strategy

OpenAI Safety Fellowship 2026

fellowship ★ 1.00 Apps close May 3, 2026
📅 Sep 14, 2026 – Feb 5, 2027 📍 Berkeley, USA · Hybrid via OpenAI Safety Fellowship, Constellation Astra Fellowship

OpenAI's safety fellowship program (Sept 2026 - Feb 2027) for researchers pursuing work on safety and alignment of advanced AI systems. Priority areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. Monthly stipend, compute support, mentorship, and workspace in Berkeley. Application deadline May 3, notification July 25.

#alignment #safety-research #governance #evals

NeurIPS 2026

conference ★ 0.92 CFP closes May 6, 2026
📅 Dec 6, 2026 – Dec 13, 2026 📍 In-person via NeurIPS — Safety-related Workshops

Neural Information Processing Systems 2026, held across three satellite locations: Sydney, Atlanta, and Paris. Features an Evaluations & Datasets Track, workshops, competitions, and safety-related tracks. Abstract deadline May 4, full paper deadline May 6, author notifications Sept 24. Listed here for safety-related workshop and eval-track submissions.

#interpretability #evals #alignment

ICML 2026 Workshop on Mechanistic Interpretability

workshop ★ 1.00 CFP closes May 8, 2026

Annual mechanistic interpretability workshop at ICML 2026 in Seoul. Focuses on developing principled methods to analyze and understand model internals — weights and activations. Brings together researchers from academia, industry, and independent research. CFP deadline May 8 (AoE). Follows successful previous editions at ICML 2024 and NeurIPS 2025.

#interpretability #alignment #circuit-tracing #sparse-autoencoders · ICML · workshop · mechanistic-interpretability

ILIAD 2026 - Theoretical AI Alignment Conference

conference ★ 1.00 Apps close Jun 1, 2026
📅 Aug 3, 2026 – Aug 7, 2026 📍 Berkeley, USA via ILIAD - Theoretical AI Alignment Conference

Five-day multi-track unconference bringing together researchers in theoretical AI alignment. Covers mathematical approaches including Singular Learning Theory, Agent Foundations, Causal Incentives, Computational Mechanics, and Scalable Oversight. Unconference format in which participants can propose and lead sessions. Free to attend; application deadline June 1. Limited needs-based travel and accommodation funding is available, and a small number of onsite bedrooms can be booked after registration.

#alignment #theory #interpretability #agent-foundations #formal-foundations · conference · unconference · theoretical · free

UNIDIR Global Conference on AI, Security and Ethics 2026

conference ★ 0.99 Reg closes Jun 7, 2026

UNIDIR's global conference bringing together diplomats, policymakers, military experts, industry leaders, academics, civil society, and research labs to discuss AI security and ethics. Part of UNIDIR's broader work on international AI policy, disarmament, and governance — directly relevant to the AI safety community's governance interests.

#governance #policy · UN · international · governance · hybrid