⊚ Applied Adversarial Epistemics Tracker

Upcoming events

Events, training, fellowships, mixers, and CFPs across Applied Adversarial Epistemics, instruments for epistemic access to model cognition, and the broader AI safety, alignment, and governance community. Refreshed weekly. Sorted by salience.

ICML 2026 Workshop on Mechanistic Interpretability

workshop ★ 1.00 CFP closes May 8, 2026

Annual mechanistic interpretability workshop at ICML bringing together researchers to explore how neural networks make decisions. Develops principled methods to analyze and understand model internals (weights and activations) for greater insight into behavior. Organized by researchers from Google DeepMind, Harvard, Northeastern University, and Imperial College London. High-quality venue for mechanistic interpretability research. CFP deadline May 8, 2026.

#interpretability#alignment#circuit-tracing#sparse-autoencoders#mechanistic-interpretability#instrument-science ICML workshop mechanistic-interpretability ICML-workshop third-iteration
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.9,
  "topic_relevance": 1,
  "time_proximity": 0.7735849056603774,
  "community_signal": 0.9,
  "speaker_org_signal": 0.95,
  "is_deadline_open": 1,
  "source_count": 1
}
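
How the ★ salience score relates to these signals is not spelled out on the page. The sketch below shows one plausible way to combine them, assuming a simple weighted average with illustrative weights plus a small multi-source bonus; the weights, the bonus, and the function name are assumptions for illustration, not the tracker's actual formula.

SIGNAL_WEIGHTS = {
    # Illustrative weights only; they sum to 1.0 but are an assumption,
    # not the tracker's published formula.
    "type_weight": 0.20,
    "source_trust": 0.10,
    "topic_relevance": 0.30,
    "time_proximity": 0.15,
    "community_signal": 0.10,
    "speaker_org_signal": 0.10,
    "is_deadline_open": 0.05,
}

def salience(signals: dict) -> float:
    """Collapse the per-entry 0-1 signals into a single 0-1 score."""
    score = sum(weight * signals.get(name, 0.0)
                for name, weight in SIGNAL_WEIGHTS.items())
    # Treat source_count as a small bonus rather than a weighted term
    # (again an assumption for illustration).
    bonus = 0.02 * max(signals.get("source_count", 1) - 1, 0)
    return round(min(score + bonus, 1.0), 2)

# Signals copied from the ICML mechanistic interpretability entry above.
icml_mi = {
    "type_weight": 0.85, "source_trust": 0.9, "topic_relevance": 1,
    "time_proximity": 0.7735849056603774, "community_signal": 0.9,
    "speaker_org_signal": 0.95, "is_deadline_open": 1, "source_count": 1,
}
print(salience(icml_mi))  # 0.91 under these guessed weights, vs. the listed 1.00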

BlueDot Technical AI Safety Project Sprint May 2026

workshop ★ 0.92 Apps close May 10, 2026
📅 May 18, 2026 – Jun 21, 2026 📍 Virtual via BlueDot Impact

5-week project-based course for engineers and early researchers to work with an AI safety expert on a contribution to AI safety research or engineering. Includes mentorship, regular check-ins, and a published write-up. Covers alignment, mechanistic interpretability, evaluations, red-teaming, AI control, and scalable oversight.

#alignment#interpretability#evals
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.9,
  "topic_relevance": 0.9,
  "time_proximity": 0.8,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

TAIS 2026 - Technical AI Safety Conference

conference ★ 1.00 Reg closes May 13, 2026 ?

Free one-day technical AI safety conference at Oxford. Third iteration of TAIS, first UK-hosted event following TAIS 2024 (Tokyo) and TAIS 2025. Organized by Oxford Martin AI Governance Initiative and Noeon Research. Features paper submissions track. Registration open via Luma. Focus on technical AI safety research and community building.

#alignment#governance#safety-research#evals#interpretability conference technical Oxford free one-day
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.9,
  "topic_relevance": 1,
  "time_proximity": 0.6571428571428571,
  "community_signal": 0.85,
  "speaker_org_signal": 0.9,
  "is_deadline_open": 1,
  "source_count": 1
}

EA Global: London 2026

conference ★ 0.94 Apps close May 20, 2026 ?
📅 May 29, 2026 – May 31, 2026 📍 London, UK via EA Global London 2026, EA Forum Events

Major Effective Altruism conference with significant AI safety attendance. Run by the Centre for Effective Altruism. Designed for individuals with a solid understanding of core EA ideas who are actively applying them. One application form covers all 2026 EA Global events. AI safety community heavily represented.

#alignment#governance effective-altruism
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.8,
  "topic_relevance": 0.7,
  "time_proximity": 0.979874213836478,
  "community_signal": 1,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 1,
  "source_count": 2
}

Australian AI Safety Forum 2026

conference ★ 0.89 CFP closes Jun 15, 2026 ?
📅 Jul 7, 2026 – Jul 8, 2026 📍 Sydney, Australia via Australian AI Safety Forum

Two-day interdisciplinary forum bringing together researchers, policymakers, industry leaders, and civil society to advance AI safety work in Australia. Grounded in the International AI Safety Report, focusing on measurement, evaluation, and governance of AI systems. Features keynotes, workshops, panels, and networking sessions exploring technical AI safety, governance approaches, risk assessment, and cross-sector dialogue. Organized by Gradient Institute and sponsored by the Australian Government.

#evals#governance#evaluation#measurement-science#alignment#evaluations#policy interdisciplinary
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.75,
  "topic_relevance": 0.95,
  "time_proximity": 0.7886792452830189,
  "community_signal": 0.7,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 1,
  "source_count": 1
}

GovAI DC Summer Fellowship 2026

fellowship ★ 0.96 Apps closed Mar 1, 2026 ?
📅 Jun 8, 2026 – Aug 28, 2026 📍 Washington DC, USA via GovAI - Centre for the Governance of AI

Three-month bipartisan fellowship designed to launch or accelerate impactful careers in American AI governance and policy. Participants deepen understanding of the field, connect with network of experts, and build skills and professional profile. $21,000 stipend. Alumni have secured positions at leading AI companies (DeepMind, OpenAI, Anthropic).

#governance#policy fellowship policy governance DC
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 1,
  "time_proximity": 0.929559748427673,
  "community_signal": 0.9,
  "speaker_org_signal": 1,
  "is_deadline_open": 0,
  "source_count": 2
}

GovAI Summer Fellowship 2026 - Research Track

fellowship ★ 0.96 Apps closed Jan 4, 2026 ?
📅 Jun 8, 2026 – Aug 28, 2026 📍 London, UK via GovAI - Centre for the Governance of AI

Three-month fellowship where fellows conduct independent research on an AI governance topic of their choice with mentorship from leading experts. £12,000 stipend. GovAI was founded to help decision-makers navigate the transition to advanced AI through rigorous research and talent fostering. Alumni have secured positions at DeepMind, OpenAI, Anthropic.

#governance#policy fellowship research governance London
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 1,
  "time_proximity": 0.929559748427673,
  "community_signal": 0.9,
  "speaker_org_signal": 1,
  "is_deadline_open": 0,
  "source_count": 2
}

CBAI Summer Research Fellowship 2026

fellowship ★ 0.95 Apps closed Apr 12, 2026 ?

Nine-week AI safety research fellowship run by Cambridge Boston Alignment Initiative. Accepts 30 fellows (undergraduate, Master's, PhD students, postdocs, and recent graduates). Includes $10,000 stipend, accommodation in Harvard dorms, meals, workspace access, and up to $10,000 in compute credits. Applications reviewed on rolling basis through four-stage process. International students on OPT/CPT eligible but visa sponsorship not available.

#alignment#interpretability#governance#evals#biosecurity#safety-research fellowship research Cambridge Harvard stipend housing compute-credits
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.9345911949685535,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

MATS Summer 2026 Fellowship

fellowship ★ 0.91
📅 Jun 1, 2026 – Aug 31, 2026 📍 Berkeley / London, USA / UK · Hybrid via MATS: ML Alignment & Theory Scholars

ML Alignment and Theory Scholars summer cohort. 12-week fellowship running early June to late August with in-person cohorts in Berkeley and London. Applications are closed, but Expressions of Interest are still being collected. Top-performing fellows may extend their research for an additional 6-12 months through the London-based extension program, which runs from September onwards with continued funding and mentorship.

#alignment#mechanistic-interpretability#governance#interpretability
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.95,
  "topic_relevance": 1,
  "time_proximity": 0.969811320754717,
  "community_signal": 0.9,
  "speaker_org_signal": 0.95,
  "is_deadline_open": 0,
  "source_count": 1
}

Workshop on Assurance and Verification of AI Development (AVID)

workshop ★ 0.88 Apps closed May 1, 2026 ?
📅 May 17, 2026 📍 San Francisco, USA via FAR AI - Foundational AI Research

Workshop on secure and verifiable AI development, bringing together researchers, builders, and funders across ML, hardware security, systems, cryptography, and computer security. Focuses on verification techniques for AI safety. Colocated with the IEEE Security and Privacy conference. Organized by FAR.AI.

#evals#alignment#safety-research#security workshop verification cryptography hardware-security
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.9,
  "topic_relevance": 0.85,
  "time_proximity": 0.9428571428571428,
  "community_signal": 0.65,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

Anthropic Fellows Program July 2026

fellowship ★ 0.86
📅 Jul 1, 2026 – Oct 31, 2026 📍 In-person via Anthropic Alignment Blog

Four-month AI safety research fellowship. Fellows receive weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD, funding for compute (~$15k/month), and close mentorship from Anthropic researchers. Designed to accelerate AI safety research and foster research talent. Application deadline May 3, 2026.

#alignment#technical-safety#interpretability
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.95,
  "topic_relevance": 0.95,
  "time_proximity": 0.8138364779874214,
  "community_signal": 0.9,
  "speaker_org_signal": 0.95,
  "is_deadline_open": 0,
  "source_count": 1
}

ARENA 8.0

fellowship ★ 0.84
📅 May 25, 2026 – Jun 26, 2026 📍 London, United Kingdom via ARENA: Alignment Research Engineering Accelerator

Alignment Research Engineering Accelerator intensive bootcamp in London. Five-week in-person programme at LISA (London Initiative for Safe AI) in Shoreditch. ARENA runs 2-3 bootcamps per year, providing intensive training in alignment research engineering. Applications to ARENA 8.0 are now closed.

#alignment#interpretability#technical-safety#mechanistic-interpretability
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 0.95,
  "time_proximity": 0.9714285714285714,
  "community_signal": 0.85,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}
📅 Jun 5, 2026 – Jun 7, 2026 📍 London, UK via Foresight Institute

Flagship conference by Foresight Institute gathering leading scientists, entrepreneurs, funders, and policymakers to explore frontiers of science and technology. Includes AI safety track among broader frontier science topics. 40-year-old organization focused on transformative technology. Three-day event open for registration.

#alignment#governance frontier-science multi-track
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.75,
  "topic_relevance": 0.8,
  "time_proximity": 0.949685534591195,
  "community_signal": 0.7,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 1,
  "source_count": 1
}
📅 Aug 3, 2026 – Aug 7, 2026 📍 Berkeley, United States via ILIAD - Theoretical AI Alignment Conference

Five-day multi-track unconference bringing together 100+ researchers focused on theoretical AI alignment with mathematical emphasis. Unconference format where participants can propose and lead their own sessions. Focus areas include Singular Learning Theory, Agent Foundations, Causal Incentives, Computational Mechanics, Safety-by-Debate, and Scalable Oversight. Free to attend with limited onsite accommodations. Financial support for travel and lodging available on needs basis. Organized by Iliad, an umbrella organization for applied mathematics research in alignment.

#alignment#theoretical-foundations#agent-foundations#formal-foundations
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 1,
  "time_proximity": 0.6528301886792452,
  "community_signal": 0.85,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

Seoul Alignment Workshop 2026

workshop ★ 0.82

Part of the ongoing Alignment Workshop series organized by FAR.AI, bringing together global leaders to deepen collective understanding of potential risks from Artificial General Intelligence and collaboratively explore effective strategies for mitigating these risks. Focus on alignment research and AGI safety.

#alignment#governance#control#interpretability#technical-safety#agi-risk#evals
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.9,
  "topic_relevance": 0.95,
  "time_proximity": 0.7937106918238994,
  "community_signal": 0.8,
  "speaker_org_signal": 0.9,
  "is_deadline_open": 0,
  "source_count": 1
}

Pluralistic Alignment @ ICML 2026

workshop ★ 0.79
📅 Jul 11, 2026 📍 Seoul, South Korea via Pluralistic Alignment Workshop at ICML

Annual workshop at ICML bringing together researchers from diverse fields to address how AI systems can be aligned with humanity's varied and often conflicting values. Examines technical, philosophical, and societal aspects of pluralistic AI. Accepts works in progress, position papers, policy papers, and academic papers (4-8 content pages, ICML template). Double-blind review process with three reviewers per paper. Focus on integration of diverse perspectives into AI alignment frameworks.

#alignment#governance#pluralistic-alignment#pluralistic-values#measurement-science ICML-workshop interdisciplinary
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.7685534591194969,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

OpenAI Safety Fellowship 2026

fellowship ★ 0.79
📅 Sep 14, 2026 – Feb 5, 2027 📍 Berkeley, USA · Hybrid via OpenAI Safety Fellowship, Constellation Astra Fellowship

OpenAI safety fellowship partnering with Constellation, running September 2026 through February 2027. Prioritizes research in safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. In-person workspace at Constellation in Berkeley with remote participation permitted. Provides monthly stipend, compute resources, API credits, and mentorship from OpenAI staff. Fellows must produce substantial research deliverables (paper, benchmark, or dataset). Welcomes diverse fields including computer science, social science, cybersecurity, privacy, and HCI.

#alignment#safety-research#governance#evals#control#safety-evals#robustness#oversight#safety-evaluation remote-allowed API-credits
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 1,
  "time_proximity": 0.44150943396226416,
  "community_signal": 0.85,
  "speaker_org_signal": 0.95,
  "is_deadline_open": 0,
  "source_count": 1
}

CAMBRIA Summer 2026 - July Cohort

fellowship ★ 0.79
📅 Jul 6, 2026 – Jul 24, 2026 📍 New York, United States via CAMBRIA - Cambridge Bootcamp for Research in Interpretability and Alignment

Three-week ML upskilling bootcamp focused on AI safety, interpretability, and reinforcement learning based on the ARENA curriculum. Run by Cambridge Boston Alignment Initiative in Manhattan, hosted by Collider. Includes housing, meals, dedicated teaching assistants, and travel support. Prerequisites include Python familiarity and comfort with multivariable calculus and linear algebra.

#interpretability#alignment#technical-safety
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.7937106918238994,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

CAMBRIA 2026 May Cohort

fellowship ★ 0.78
📅 May 18, 2026 – Jun 5, 2026 📍 Cambridge, United States via CAMBRIA - Cambridge Bootcamp for Research in Interpretability and Alignment

Three-week ML upskilling bootcamp focused on AI safety, interpretability, and reinforcement learning based on the ARENA curriculum. Run by Cambridge Boston Alignment Initiative in Cambridge, MA. May 18 to June 5 cohort (mostly outside the astronomical summer window; CBAI brands all 2026 cohorts as Summer 2026 internally). Includes housing, meals, 24/7 office access in Harvard Square, and dedicated teaching assistants. Prerequisites: Python familiarity and comfort with multivariable calculus and linear algebra. Travel support provided. CBAI does not publish a closing date for this cohort, so no specific deadline is recorded here.

#interpretability#alignment#technical-safety
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.7714285714285715,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

Anthropic Fellows Program May 2026

fellowship ★ 0.78
📅 May 1, 2026 – Aug 31, 2026 📍 In-person via Anthropic Alignment Blog

Four-month AI safety research fellowship. Fellows receive weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD, funding for compute (~$15k/month), and close mentorship from Anthropic researchers. Designed to accelerate AI safety research and foster research talent.

#alignment#technical-safety#interpretability
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.95,
  "topic_relevance": 0.95,
  "time_proximity": 0.3666666666666667,
  "community_signal": 0.9,
  "speaker_org_signal": 0.95,
  "is_deadline_open": 0,
  "source_count": 1
}

CAIS AI and Society Fellowship 2026

fellowship ★ 0.77 Apps closed Mar 24, 2026
📅 Jun 1, 2026 – Aug 31, 2026 📍 San Francisco, USA via Center for AI Safety

Three-month research program investigating societal impacts of advanced AI and the institutions and policies that could help societies respond well. Organized by the Center for AI Safety. Application deadline March 24. Focus on how current AI systems work, their societal-scale risks, and how to manage them.

#governance#policy#alignment fellowship governance policy research
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 0.85,
  "time_proximity": 0.969811320754717,
  "community_signal": 0.75,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

Pivotal Research Fellowship 2026 Q3

fellowship ★ 0.77
📅 Jun 29, 2026 – Aug 28, 2026 📍 London, United Kingdom via Pivotal Research Fellowship

Nine-week AI safety research fellowship in London with optional extensions up to 6 months. Full-time, in-person program at LISA. Provides £6,000-£8,000 stipend (senior fellows £8,000), travel coverage, £2,000 housing stipend for non-London residents, meals and compute resources. Weekly one-on-one mentorship with established researchers, research management support, co-working space with weekday meals, workshops and expert Q&A sessions, career planning. Seven cohorts completed with 129 alumni. 70-90% of applicants receive extension offers. Alumni at Anthropic, Google DeepMind, UK AISI, GovAI. Quarterly cohorts.

#alignment#governance#technical-safety#mechanistic-interpretability
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.75,
  "topic_relevance": 0.95,
  "time_proximity": 0.8289308176100629,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

Cambridge ERA:AI Fellowship 2026

fellowship ★ 0.77
📅 Jul 6, 2026 – Sep 11, 2026 📍 Cambridge, United Kingdom via ERA Cambridge: Existential Risk Alliance Fellowship

Ten-week fellowship in Cambridge, UK targeting researchers and entrepreneurs focused on mitigating risks from frontier AI. Builds talent for AI safety and governance research through opportunities in technical research, governance, and technical AI governance. Provides competitive stipend, meals during working hours, transportation, visa and lodging coverage, weekly mentorship from expert researchers, 30+ events throughout the fellowship, dedicated research management support, and compute resources. Welcomes talented individuals at any career stage motivated to contribute to AI safety and governance research.

#alignment#governance#technical-safety
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.7937106918238994,
  "community_signal": 0.75,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 0,
  "source_count": 1
}

Foresight Vision Weekend UK 2026

conference ★ 0.76
📅 Jun 5, 2026 – Jun 7, 2026 📍 London, United Kingdom via Foresight Institute

Flagship event gathering leading scientists, entrepreneurs, funders, and policymakers to explore the frontiers of science and technology. Includes AI safety track. Part of Foresight Institute's Vision Weekend series. 40-year-old organization focused on transformative technology.

#governance#frontier-science#technical-safety
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.75,
  "topic_relevance": 0.7,
  "time_proximity": 0.9446540880503145,
  "community_signal": 0.65,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 1,
  "source_count": 1
}

CAMBRIA Summer 2026 - August Cohort

fellowship ★ 0.76
📅 Aug 10, 2026 – Aug 28, 2026 📍 Cambridge, United States via CAMBRIA - Cambridge Bootcamp for Research in Interpretability and Alignment

Three-week ML upskilling bootcamp focused on AI safety, interpretability, and reinforcement learning based on the ARENA curriculum. Run by Cambridge Boston Alignment Initiative in Cambridge, MA. Includes housing, meals, 24/7 office access in Harvard Square, and dedicated teaching assistants. Prerequisites include Python familiarity and comfort with multivariable calculus and linear algebra. Travel support provided.

#interpretability#alignment#technical-safety
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.6176100628930817,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}
📅 Jul 13, 2026 – Oct 31, 2026 📍 Cambridge, United Kingdom · Hybrid via MARS - Mentorship for Alignment Researchers at CAISH

Stage 1 of MARS V 2026: the open general application for prospective participants seeking placement on AI safety research teams. Now closed (deadline 2026-05-03); strong candidates were invited to Stage 2 (mentor selection). Mentorship for Alignment Researchers programme operated by Cambridge AI Safety Hub. Matches students and early-career researchers with experienced researchers from AI labs, think tanks, and academia. In-person kick-off week in Cambridge (July 13-26, choose either July 13-19 or July 20-26), then remote research phase August-September (8-10 weeks part-time). Unpaid but fully funded: $2k+ compute budget, Claude Max (5x) for technical streams, travel funding, accommodation, meals, research management support.

#alignment#technical-safety#mentorship#control#interpretability#evals#governance
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.8,
  "topic_relevance": 0.95,
  "time_proximity": 0.7584905660377359,
  "community_signal": 0.75,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 0,
  "source_count": 1
}

MARS V 2026: Stage 2 Mentor Selection

fellowship ★ 0.75
📅 Jul 13, 2026 – Oct 31, 2026 📍 Cambridge, United Kingdom · Hybrid via MARS - Mentorship for Alignment Researchers at CAISH

Stage 2 of MARS V 2026: invited Stage-1 candidates browse mentor projects and apply directly to preferred mentors. Final deadline 2026-05-10. NOT a separate open call; only Stage-1-invited applicants may submit. Mentorship for Alignment Researchers programme operated by Cambridge AI Safety Hub. Same downstream programme as Stage 1: in-person kick-off (July 13-19 or 20-26), then remote phase Aug-Sept (8-10 weeks part-time), $2k+ compute budget, Claude Max (5x), travel and accommodation funded.

#alignment#technical-safety#mentorship#control#interpretability#evals#governance
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.8,
  "topic_relevance": 0.95,
  "time_proximity": 0.7584905660377359,
  "community_signal": 0.75,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 0,
  "source_count": 1
}
📅 Jun 18, 2026 – Jun 19, 2026 📍 Hybrid via UNIDIR - United Nations Institute for Disarmament Research

United Nations Institute for Disarmament Research global conference addressing artificial intelligence through the lens of international security and ethical considerations. Two-day hybrid event. UNIDIR focuses on AI governance, disarmament, and international security policy. In scope given the safety community's interest in AI governance and international AI policy.

#governance#policy#international-security
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.8,
  "time_proximity": 0.8842767295597485,
  "community_signal": 0.65,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

BlueDot Incubator Week June 2026

hackathon ★ 0.73
📅 Jun 1, 2026 – Jun 5, 2026 📍 London, UK via BlueDot Impact

Five-day intensive programme for AI safety founders going from idea to funded. Successful pitches receive £50k in equity-free seed funding. Part of BlueDot Impact's incubator and rapid-funding initiatives supporting concrete AI safety work.

#alignment#governance startup incubator
Salience signals
{
  "type_weight": 0.65,
  "source_trust": 0.9,
  "topic_relevance": 0.85,
  "time_proximity": 0.9647798742138365,
  "community_signal": 0.75,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 0,
  "source_count": 1
}
📅 Jul 10, 2026 📍 Seoul, South Korea via ICML: Safety-related Workshops

Second Workshop on Technical AI Governance Research at ICML 2026, focusing on technical approaches to AI governance, policy, and regulation. Part of the main conference workshop track.

#governance#policy ICML workshop governance technical-governance
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.7383647798742139,
  "community_signal": 0.75,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 0,
  "source_count": 1
}
📅 Jul 10, 2026 📍 Seoul, South Korea via ICML: Safety-related Workshops

Workshop at ICML 2026 focused on identifying, diagnosing, and fixing failure modes in agentic AI systems. Covers reproducible triggers for failures, diagnostic tracing methods, and verified repair approaches. Highly relevant to AI safety and robustness.

#evals#alignment ICML workshop failure-modes agents diagnostics
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.7383647798742139,
  "community_signal": 0.75,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 0,
  "source_count": 1
}
📅 Jul 11, 2026 📍 Seoul, South Korea via ICML: Safety-related Workshops

Second Workshop on Agents in the Wild focusing on safety and security of AI agents deployed in real-world environments. Addresses challenges in ensuring safe and secure operation of autonomous agents. Part of ICML 2026 workshop track.

#alignment#evals ICML workshop agents safety security
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.7333333333333334,
  "community_signal": 0.75,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 0,
  "source_count": 1
}
📅 Jul 3, 2026 – Jul 4, 2026 📍 San Diego, USA via EvalEval Coalition

Workshop addressing tensions between model developers and evaluation researchers, emphasizing methodological rigor, sociotechnical perspectives, and community collaborations. Topics include evaluation methodology (validity, reliability, metrics, reproducibility), infrastructure and stakeholders (tooling, costs, transparency), and sociotechnical impacts (bias, privacy, labor, environmental effects, cultural considerations). Non-archival by default with optional archival publication. Organized by researchers from ETH Zurich, MIT, Hugging Face, and University of Edinburgh.

#evals#safety-research#measurement ACL-workshop two-day
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.75,
  "topic_relevance": 0.9,
  "time_proximity": 0.8088050314465409,
  "community_signal": 0.7,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 0,
  "source_count": 1
}

Astra Fellowship Fall 2026

fellowship ★ 0.71
📅 Sep 14, 2026 – Feb 5, 2027 📍 Berkeley, United States via Constellation Astra Fellowship

Fully funded, in-person program pairing senior advisors with emerging talent on 5-month technical, governance, strategy, and field-building projects. $8,400 monthly stipend, ~$15K/month research budget for empirical fellows (compute), workspace at Berkeley research center, weekly mentorship from experts, visa support for international applicants. Applications for Fall 2026 cohort closed May 3rd. Strong placement rates at safety orgs.

#alignment#governance#technical-safety
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.43647798742138366,
  "community_signal": 0.85,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

ICML 2026

conference ★ 0.71
📅 Jul 6, 2026 – Jul 11, 2026 📍 Seoul, South Korea via ICML: Safety-related Workshops

43rd International Conference on Machine Learning, premier venue for AI and ML research. July 6: Expo/Tutorial Day; July 7-9: Main Conference; July 10-11: Workshops. Has established policies for LLM use in peer review, research ethics, and reviewer support. Safety-related workshops include Mechanistic Interpretability, Pluralistic Alignment, and Technical AI Governance Research workshops.

#alignment#interpretability#evals#ml-research#mechanistic-interpretability#governance major-conference
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.75,
  "time_proximity": 0.7937106918238994,
  "community_signal": 0.8,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 0,
  "source_count": 1
}

BlueDot AI Safety Evals Paper Reading Club

reading-group ★ 0.70
📅 May 5, 2026 – Jun 30, 2026 📍 Virtual via BlueDot Impact Events Calendar (Luma)

Weekly AI safety evaluations paper reading club hosted by BlueDot Impact. Meets every Tuesday at 4:00 PM UTC to discuss evaluation methodologies, safety benchmarks, and measurement frameworks. Open to all interested in AI safety evals research.

#evals#alignment weekly paper-discussion
Salience signals
{
  "type_weight": 0.45,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.4285714285714286,
  "community_signal": 0.7,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 1,
  "source_count": 1
}

MAIA AI Safety Fundamentals Summer 2026

reading-group ★ 0.69
📅 May 22, 2026 – Jul 17, 2026 📍 Virtual via MAIA - MIT AI Alignment

Eight-week virtual reading group run by MIT AI Alignment (MAIA). Topics include AI's trajectory, misalignment, technical safety, policy, and careers in the field. Two hours per week commitment. Run as small groups facilitated by MAIA members. Free, no prior AI background required. Applications open through May 22. MAIA is an MIT student group conducting AI alignment research with membership in the hundreds, supported by CBAI.

#alignment#governance#technical-safety
Salience signals
{
  "type_weight": 0.45,
  "source_trust": 0.8,
  "topic_relevance": 0.9,
  "time_proximity": 0.8857142857142857,
  "community_signal": 0.75,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 0,
  "source_count": 1
}

Agentic AI Summit 2026

conference ★ 0.69

Summit bringing together academic leaders, entrepreneurs, AI experts, venture capitalists, and policymakers to discuss the future of AI and Agentic AI. Call for Papers and Startup Spotlight applications open. In-person and livestream available.

#alignment#evals#governance
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.75,
  "time_proximity": 0.6477987421383647,
  "community_signal": 0.75,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 0,
  "source_count": 1
}

Secure & Sovereign AI Workshop 2026

workshop ★ 0.68
📅 Jul 18, 2026 – Jul 19, 2026 📍 Berlin, Germany via Foresight Institute

Technical workshop by Foresight Institute bringing together top talent to address bottlenecks in frontier science and technology advancement. Focus on secure AI systems and AI sovereignty. Two-day workshop open for registration.

#governance#evals#alignment#safety-research#security#control#technical-safety technical Berlin
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.75,
  "topic_relevance": 0.85,
  "time_proximity": 0.7333333333333334,
  "community_signal": 0.7,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 0,
  "source_count": 1
}

AI Risk Content Hackathon

hackathon ★ 0.68
📅 Jun 6, 2026 📍 London, UK via BlueDot Impact Events Calendar (Luma)

One-day hackathon organized by BlueDot Impact focused on AI risk content creation. Brings together community members to develop educational and outreach materials related to AI safety and existential risk. Part of BlueDot's ongoing series of community events building the workforce needed to safely navigate AGI.

#alignment#governance#evals content-creation
Salience signals
{
  "type_weight": 0.65,
  "source_trust": 0.85,
  "topic_relevance": 0.75,
  "time_proximity": 0.9446540880503145,
  "community_signal": 0.75,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 0,
  "source_count": 1
}
📅 May 22, 2026 – May 24, 2026 📍 Hybrid via Apart Research

Three-day event bringing together researchers and engineers to prototype verification tools for AI-generated code. Co-organized with Atlas Computing. Top teams can apply for a four-month SPS Fellowship following the hackathon. Focus on secure program synthesis and verification of AI-generated code, relevant to alignment and safety assurance.

#alignment#control#safety-research#evals#security#automated-research#safety-applications#technical-safety#evaluations#governance hybrid sprint
Salience signals
{
  "type_weight": 0.65,
  "source_trust": 0.85,
  "topic_relevance": 0.8,
  "time_proximity": 0.8857142857142857,
  "community_signal": 0.7,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 0,
  "source_count": 1
}

SPAR Spring 2026: Mentee Application

fellowship ★ 0.66
📅 Feb 16, 2026 – May 16, 2026 📍 Virtual via SPAR - Supervised Program for Alignment Research

Part-time remote research fellowship pairing aspiring researchers with 130+ experienced mentors from Google DeepMind, RAND, Apollo Research, MATS, UK AISI, and other top organizations for three-month AI alignment projects. Participants commit 5-40 hours weekly. Covers project expenses including compute and API/LLM access. Culminates in virtual Demo Day with prizes totaling $7,000. Optional continuation beyond May 16. Mentee application track: aspiring researchers (undergraduate, graduate/PhD students, and professionals at different experience levels without requiring prior research experience) apply to be paired with mentors. Mentee application deadline 2026-01-14 (passed).

#alignment#governance#evals#safety-research#interpretability#technical-safety remote mentorship
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0,
  "community_signal": 0.8,
  "speaker_org_signal": 0.9,
  "is_deadline_open": 0,
  "source_count": 1
}

SPAR Spring 2026: Mentor Application

fellowship ★ 0.66
📅 Feb 16, 2026 – May 16, 2026 📍 Virtual via SPAR - Supervised Program for Alignment Research

Part-time remote research fellowship pairing aspiring researchers with 130+ experienced mentors from Google DeepMind, RAND, Apollo Research, MATS, UK AISI, and other top organizations for three-month AI alignment projects. Participants commit 5-40 hours weekly. Covers project expenses including compute and API/LLM access. Culminates in virtual Demo Day with prizes totaling $7,000. Optional continuation beyond May 16. Mentor application track: experienced researchers from Google DeepMind, RAND, Apollo Research, MATS, UK AISI etc. apply to mentor a project. Mentor application deadline 2025-12-05 (passed).

#alignment#governance#evals#safety-research#interpretability#technical-safety remote mentorship
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0,
  "community_signal": 0.8,
  "speaker_org_signal": 0.9,
  "is_deadline_open": 0,
  "source_count": 1
}

Global South AIS Hackathon

hackathon ★ 0.66
📅 Jun 19, 2026 – Jun 21, 2026 📍 Hybrid via Apart Research

Weekend AI safety hackathon focused on Global South participation and perspectives. Hybrid format allowing both online and in-person participation. Organized by Apart Research as part of their 55+ research sprints series with 6,000+ participants across 200+ global locations.

#alignment#safety-research#evals#governance
Salience signals
{
  "type_weight": 0.65,
  "source_trust": 0.85,
  "topic_relevance": 0.8,
  "time_proximity": 0.8490566037735849,
  "community_signal": 0.7,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 0,
  "source_count": 1
}

International Programme on AI Evaluation 2026

fellowship ★ 0.64 Apps closed Jan 15, 2026 ?
📅 Feb 1, 2026 – May 29, 2026 📍 Valencia, Spain · Hybrid via International Programme on AI Evaluation

First global academic programme dedicated to AI evaluation, combining technical depth with policy and governance. 150-hour expert diploma covering capabilities and safety evaluations. Includes 90 hours online instruction, 20 hours hands-on courses, and 40-hour in-person capstone week in Valencia. Faculty from Cambridge, Stanford, Princeton, EU AI Office, UK AI Safety Institute, FAR AI, Apollo Research. Targets professionals joining AI Safety Institutes, government agencies, and industry research labs.

#evals#safety-research#governance#policy fellowship evals academic hybrid diploma funded prestigious
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0,
  "community_signal": 0.7,
  "speaker_org_signal": 0.9,
  "is_deadline_open": 0,
  "source_count": 1
}

NeurIPS 2026

conference ★ 0.63
📅 Dec 6, 2026 – Dec 13, 2026 📍 In-person via NeurIPS: Safety-related Workshops

Conference on Neural Information Processing Systems held across three locations simultaneously. Main paper track, evaluations and datasets track, position papers track, competitions, and workshops. Workshop application deadline June 6. Safety-related workshops typically include AI safety, alignment, and robustness topics. Major venue for ML research with substantial AI safety community participation.

#interpretability#evals#alignment#ml-research#safety-research#mechanistic-interpretability major-conference multi-location
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.75,
  "time_proximity": 0.17162162162162165,
  "community_signal": 0.85,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 0,
  "source_count": 1
}
📅 Aug 3, 2026 – Aug 6, 2026 📍 New York, United States via Cognitive Computational Neuroscience (CCN)

Ninth annual Cognitive Computational Neuroscience conference bringing together researchers in cognitive science, neuroscience, and artificial intelligence to explore computations underlying complex behavior. Topics span from brain information processing to AI interpretability. Keynote speakers include Brenden Lake (Princeton), Ila Fiete (MIT), Kenji Doya (OIST), Doris Tsao (Berkeley), and Alona Fyshe (Alberta). Relevant for AAE practitioners drawing on computational cognitive neuroscience, predictive coding, and measurement frameworks applicable to LLMs.

#measurement-science#cognitive-science#interpretability
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.8,
  "topic_relevance": 0.7,
  "time_proximity": 0.6528301886792452,
  "community_signal": 0.6,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 0,
  "source_count": 1
}

UNIDIR Cyber Stability Conference 2026

conference ★ 0.52
📅 May 4, 2026 – May 5, 2026 📍 Geneva, Switzerland · Hybrid via UNIDIR - United Nations Institute for Disarmament Research

Two-day conference on global cooperation for cybersecurity resilience and stability. Organized by United Nations Institute for Disarmament Research. Addresses international frameworks for cyber governance and security cooperation.

#governance#cyber-security UN disarmament
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.6,
  "time_proximity": 0.4,
  "community_signal": 0.5,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 0,
  "source_count": 1
}
📅 Jun 30, 2026 – Jul 3, 2026 📍 Santiago, Chile via Association for the Scientific Study of Consciousness (ASSC)

29th annual meeting of the Association for the Scientific Study of Consciousness. Academic society promoting rigorous research directed toward understanding the nature, function, and underlying mechanisms of consciousness. Includes members from cognitive science, medicine, neuroscience, philosophy, and other relevant disciplines. AAE attendees follow this meeting for its measurement and metacognition sessions relevant to LLM interpretability. Registration and submissions currently open.

#consciousness#measurement-science#cognitive-science
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.6,
  "topic_relevance": 0.65,
  "time_proximity": 0.8188679245283019,
  "community_signal": 0.5,
  "speaker_org_signal": 0.5,
  "is_deadline_open": 0,
  "source_count": 1
}
📅 Jun 15, 2026 – Jun 27, 2026 📍 New York, USA via Machine Learning Summer School (MLSS) - Columbia

Two-week intensive summer school at Columbia University covering machine learning topics including mechanistic interpretability, alignment/safety, RAG & agents, and LLM systems. Approximately 200 PhD students participate alongside faculty and industry speakers. In-scope due to dedicated alignment and mechanistic interpretability tracks.

#interpretability#alignment
Salience signals
{
  "type_weight": 0.35,
  "source_trust": 0.75,
  "topic_relevance": 0.7,
  "time_proximity": 0.869182389937107,
  "community_signal": 0.5,
  "speaker_org_signal": 0.6,
  "is_deadline_open": 0,
  "source_count": 1
}
📅 May 5, 2026 📍 Berkeley, USA via AI Safety Awareness Group Oakland (Meetup)

A free, accessible workshop hosted by AI Safety Awareness Group Oakland exploring AI's trajectory and societal impact. No technical background required. Features live demonstrations of current AI systems, interactive forecasting activities, and discussions about AI's implications for work, relationships, and society over the next 1-5 years.

#governance#alignment
Salience signals
{
  "type_weight": 0.45,
  "source_trust": 0.7,
  "topic_relevance": 0.7,
  "time_proximity": 0.5714285714285714,
  "community_signal": 0.6,
  "speaker_org_signal": 0.5,
  "is_deadline_open": 0,
  "source_count": 1
}