Applied Adversarial Epistemics Tracker

Open calls for papers

Workshops, conferences, and fellowships with currently open application deadlines, sorted by deadline (soonest first).


TAIS 2026 - Technical AI Safety Conference

conference ★ 0.91 CFP closes May 1, 2026
📅 May 14, 2026 📍 Oxford, UK via Technical AI Safety (TAIS) Conference

Technical AI Safety conference at the Oxford Martin School. Free admission. Third iteration and first time in the UK (previous editions in 2024 and 2025). Welcomes researchers and professionals from all backgrounds interested in AI safety discussions, regardless of prior research experience. Organized by the Oxford Martin AI Governance Initiative and Noeon Research. Registration now open.

#alignment #governance #safety-research #evals #interpretability · conference · technical · Oxford · free

Second Pluralistic Alignment Workshop

workshop ★ 1.00 CFP closes May 3, 2026

Workshop at ICML 2026 exploring pluralistic AI: aligning with the diversity of human values. Accepts 4-8 page papers plus unlimited references. Topics span machine learning, philosophy, HCI, the social sciences, and policy, covering methods for pluralistic ML training, handling value conflicts, and approaches to diverse societal values. CFP deadline May 3, camera-ready June 10. Non-archival format accepting position papers, works in progress, policy papers, and academic papers.

#alignment #governance · ICML · workshop · alignment · pluralistic

NeurIPS 2026

conference ★ 0.92 CFP closes May 6, 2026
📅 Dec 6, 2026 – Dec 13, 2026 📍 In-person via NeurIPS — Safety-related Workshops

Neural Information Processing Systems 2026, held across three satellite locations: Sydney, Atlanta, and Paris. Features an Evaluations & Datasets Track, workshops, competitions, and safety-related tracks. Abstract deadline May 4, full paper deadline May 6, author notifications Sept 24. In scope for safety-related workshops and evals track submissions.

#interpretability#evals#alignment

ICML 2026 Workshop on Mechanistic Interpretability

workshop ★ 1.00 CFP closes May 8, 2026

Annual mechanistic interpretability workshop at ICML 2026 in Seoul. Focuses on developing principled methods to analyze and understand model internals (weights and activations). Brings together researchers from academia, industry, and independent research. CFP deadline May 8 (AoE). Follows successful previous editions at ICML 2024 and NeurIPS 2025.

#interpretability #alignment #circuit-tracing #sparse-autoencoders · ICML · workshop · mechanistic-interpretability