Applied Adversarial Epistemics Tracker

Upcoming events

Events, training programs, fellowships, mixers, and CFPs across Applied Adversarial Epistemics (AAE), instruments for epistemic access to model cognition, and the broader AI safety, alignment, and governance community. Refreshed weekly; sorted by salience.

Recently added

NeurIPS 2026

conference ★ 0.67 CFP closes May 4, 2026
📅 Dec 6, 2026 – Dec 13, 2026 📍 In-person via NeurIPS: Safety-related Workshops

Neural Information Processing Systems 2026, held across three satellite locations. Major ML conference with multiple tracks, including workshops, position papers, and evaluations & datasets. Safety-related workshops are listed separately. Paper deadline May 5-7; workshop applications close June 7.

#interpretability #evals #alignment #ml-research #safety-research major-conference multi-location
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.7,
  "time_proximity": 0.17081081081081081,
  "community_signal": 0.75,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 1,
  "source_count": 1
}
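
Each entry's ★ score is plausibly a weighted combination of these signals. Below is a minimal sketch of one way such a score could be computed and used to order the list; the field names match the JSON above, but the weights, the multi-source bonus, and the aggregation rule are illustrative assumptions rather than the tracker's published formula, so the output will not exactly reproduce the ★ values shown.

# Hypothetical salience scorer. Signal names match the "Salience signals"
# JSON blocks in this tracker; the weights are assumptions for illustration.

SIGNAL_WEIGHTS = {
    "type_weight": 0.20,
    "source_trust": 0.10,
    "topic_relevance": 0.25,
    "time_proximity": 0.15,
    "community_signal": 0.10,
    "speaker_org_signal": 0.10,
    "is_deadline_open": 0.10,
}

def salience(signals: dict) -> float:
    """Weighted average of the [0, 1] signals, plus a small corroboration bonus."""
    base = sum(w * signals[name] for name, w in SIGNAL_WEIGHTS.items())
    base /= sum(SIGNAL_WEIGHTS.values())
    # Each source beyond the first adds a little trust, capped at +0.05.
    bonus = min(0.05, 0.02 * (signals.get("source_count", 1) - 1))
    return round(min(1.0, base + bonus), 2)

# Example: the NeurIPS 2026 signals from the entry above.
neurips = {
    "type_weight": 1, "source_trust": 0.85, "topic_relevance": 0.7,
    "time_proximity": 0.17081081081081081, "community_signal": 0.75,
    "speaker_org_signal": 0.75, "is_deadline_open": 1, "source_count": 1,
}
print(salience(neurips))  # one score in [0, 1]; the page sorts entries by it, descending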

ICML 2026 Workshop on Mechanistic Interpretability

workshop ★ 0.95 CFP closes May 8, 2026

Annual mechanistic interpretability workshop at ICML bringing together researchers to discuss advances in understanding neural network internals. High-quality venue for circuit tracing, probe development, SAE work, and manifold interpretability research. Organized by leading researchers in the field.

#interpretability #alignment #circuit-tracing #sparse-autoencoders #mechanistic-interpretability #instrument-science ICML workshop mechanistic-interpretability ICML-workshop third-iteration
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.9,
  "topic_relevance": 0.95,
  "time_proximity": 0.7685534591194969,
  "community_signal": 0.85,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

Pluralistic Alignment @ ICML 2026

workshop ★ 0.75 CFP closes May 8, 2026
📅 Jul 11, 2026 📍 Seoul, South Korea via Pluralistic Alignment Workshop at ICML

Workshop on Pluralistic AI: Aligning with the Diversity of Human Values. Invites submissions of 4-8 pages following the ICML 2026 template, made through OpenReview. All papers appear on the workshop website but remain non-archival. Focus on integrating diverse perspectives into AI alignment frameworks.

#alignment #governance #pluralistic-alignment #pluralistic-values ICML-workshop interdisciplinary
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.7635220125786164,
  "community_signal": 0.75,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 0,
  "source_count": 1
}

BlueDot Technical AI Safety Project Sprint May 2026

workshop ★ 0.92 Apps close May 10, 2026
📅 May 18, 2026 – Jun 21, 2026 📍 Virtual via BlueDot Impact

5-week project-based course for engineers and early researchers to work with an AI safety expert on a contribution to AI safety research or engineering. Includes mentorship, regular check-ins, and a published write-up. Covers alignment, mechanistic interpretability, evaluations, red-teaming, AI control, and scalable oversight.

#alignment #interpretability #evals
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.9,
  "topic_relevance": 0.9,
  "time_proximity": 0.8,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

TAIS 2026 - Technical AI Safety Conference

conference ★ 0.95 Reg closes May 13, 2026

One-day technical AI safety conference at Oxford. Third iteration organized by Oxford Martin AI Governance Initiative and Noeon Research. Free admission, open to attendees from all backgrounds regardless of prior research experience. Previous talks available on YouTube.

#alignment #governance #safety-research #evals #interpretability conference technical Oxford free one-day
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.9,
  "topic_relevance": 0.95,
  "time_proximity": 0.6857142857142857,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

CAMBRIA Bootcamp May 2026

fellowship ★ 0.80 Apps close May 18, 2026
📅 May 18, 2026 – Jun 5, 2026 📍 Cambridge, USA via CAMBRIA - Cambridge Bootcamp for Research in Interpretability and Alignment

CAMBRIA is a 3-week ML upskilling bootcamp focused on interpretability and RL, based on the ARENA curriculum. Participants receive housing, meals, 24/7 office access in Harvard Square, dedicated teaching assistants, and travel support. Run by Cambridge Boston Alignment Initiative.

#interpretability #alignment #control #mechanistic-interpretability bootcamp ARENA-curriculum
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 1,
  "time_proximity": 0.8,
  "community_signal": 0.8,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 0,
  "source_count": 1
}

EA Global: London 2026

conference ★ 0.82 Apps close May 20, 2026
📅 May 29, 2026 – May 31, 2026 📍 London, UK via EA Global London 2026 , EA Forum Events

EA Global London 2026 conference, organized by the Centre for Effective Altruism. Three-day gathering with heavy AI safety attendance. Applications are open; the default ticket price is £500, with reduced-price and free tickets available. In scope per the social-event rationale given in the AAE codex.

#alignment #governance effective-altruism
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.8,
  "topic_relevance": 0.7,
  "time_proximity": 0.979874213836478,
  "community_signal": 0.8,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 1,
  "source_count": 1
}

Secure Program Synthesis Hackathon 2026

hackathon ★ 0.80 Reg closes May 21, 2026
📅 May 22, 2026 – May 24, 2026 📍 Hybrid via Apart Research

A weekend hackathon focused on secure program synthesis, exploring AI safety challenges in automated code generation. Part of Apart Research's ongoing AI safety sprint series with 42+ community-driven research events.

#alignment #control #safety-research #evals #security #automated-research #safety-applications hybrid sprint
Salience signals
{
  "type_weight": 0.65,
  "source_trust": 0.85,
  "topic_relevance": 0.85,
  "time_proximity": 0.9142857142857143,
  "community_signal": 0.7,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 1,
  "source_count": 1
}

Pivotal Research Fellowship 2026 Q3

fellowship ★ 0.70 Early-bird ends May 22, 2026
📅 Jun 29, 2026 – Aug 28, 2026 📍 London, UK via Pivotal Research Fellowship

9-week AI safety research fellowship based in London, with potential 6-month extensions (70-90% acceptance rate). Provides a £6K-£8K stipend, travel support, a £2K housing allowance, meals, and compute resources. 129 alumni across 7 completed cohorts. Focus on technical AI safety research.

#alignment #interpretability #governance #evals #safety-research #technical-safety London stipend quarterly
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.75,
  "topic_relevance": 0.9,
  "time_proximity": 0.8238993710691824,
  "community_signal": 0.7,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 0,
  "source_count": 1
}

ICML 2026

conference ★ 0.74 Early-bird ends May 24, 2026
📅 Jul 6, 2026 – Jul 11, 2026 📍 Seoul, South Korea via ICML: Safety-related Workshops

International Conference on Machine Learning 2026. Main conference July 7-9 with tutorials July 6 and workshops July 10-11. Safety-related workshops listed separately. Major ML venue with growing AI safety presence.

#alignment #interpretability #evals #ml-research major-conference
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.65,
  "time_proximity": 0.7886792452830189,
  "community_signal": 0.7,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 1,
  "source_count": 1
}

BlueDot Incubator Week June 2026

hackathon ★ 0.85 Apps close May 26, 2026
📅 Jun 1, 2026 – Jun 5, 2026 📍 London, UK via BlueDot Impact

Five-day intensive programme taking AI safety founders from idea to funding. Successful pitches receive £50k in equity-free seed funding. Part of BlueDot Impact's incubator and rapid-funding initiatives supporting concrete AI safety work.

#alignment #governance startup incubator
Salience signals
{
  "type_weight": 0.65,
  "source_trust": 0.9,
  "topic_relevance": 0.85,
  "time_proximity": 0.9647798742138365,
  "community_signal": 0.75,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 1,
  "source_count": 1
}

ILIAD 2026 - Theoretical AI Alignment Conference

conference ★ 0.92 Apps close Jun 1, 2026
📅 Aug 3, 2026 – Aug 7, 2026 📍 Berkeley, USA via ILIAD - Theoretical AI Alignment Conference

5-day unconference bringing together 100+ researchers focused on theoretical AI alignment, with a mathematical emphasis. Topics include Singular Learning Theory, Agent Foundations, Causal Incentives, Computational Mechanics, Safety-by-Debate, and Scalable Oversight. Free attendance with travel support available. Organized by the Iliad umbrella organization.

#alignment #theory #interpretability #agent-foundations #formal-foundations #theoretical-alignment #theoretical-foundations conference unconference theoretical free mathematical
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.6477987421383647,
  "community_signal": 0.8,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 1,
  "source_count": 1
}

Cambridge ERA:AI Fellowship 2026

fellowship ★ 0.88 Apps close Jun 1, 2026

10-week Cambridge-based fellowship on existential AI risk across three domains: technical AI safety, governance, and technical AI governance. Fully funded with stipend, meals, transportation, visa, and lodging. Mentorship from expert researchers, 30+ events, and dedicated research management support. Open to talented individuals at any career stage.

#alignment #governance #evals #safety-research #technical-safety Cambridge-UK stipend
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.7886792452830189,
  "community_signal": 0.75,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 1,
  "source_count": 1
}

Foresight Vision Weekend UK 2026

conference ★ 0.73 Reg closes Jun 4, 2026
📅 Jun 5, 2026 – Jun 7, 2026 📍 London, UK via Foresight Institute

Foresight Institute's flagship conference gathering leading scientists, entrepreneurs, funders, and policymakers to explore the frontiers of science and technology, and plan for flourishing futures. The event includes tracks on AI safety and secure AI technologies.

#governance #alignment #ai-safety #frontier-tech #frontier-technology #policy networking flagship
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.75,
  "topic_relevance": 0.8,
  "time_proximity": 0.9446540880503145,
  "community_signal": 0.75,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 0,
  "source_count": 1
}

AI Risk Content Hackathon

hackathon ★ 0.76 Reg closes Jun 6, 2026
📅 Jun 6, 2026 📍 London, UK via BlueDot Impact Events Calendar (Luma)

One-day hackathon focused on creating AI risk content and educational materials. Part of BlueDot Impact's community-building initiatives to improve public understanding of AI safety challenges.

#alignment #governance content-creation
Salience signals
{
  "type_weight": 0.65,
  "source_trust": 0.85,
  "topic_relevance": 0.75,
  "time_proximity": 0.939622641509434,
  "community_signal": 0.7,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 1,
  "source_count": 1
}

UNIDIR Global Conference on AI, Security and Ethics 2026

conference ★ 0.81 Reg closes Jun 7, 2026

United Nations Institute for Disarmament Research conference addressing the implications of artificial intelligence for international peace and security. Part of UNIDIR's Security and Technology Programme. An international AI policy and disarmament venue, in scope per AI governance community interests.

#governance #policy #ai-ethics #international-policy UN international governance hybrid Geneva international-policy
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.75,
  "time_proximity": 0.879245283018868,
  "community_signal": 0.65,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 1,
  "source_count": 1
}

Australian AI Safety Forum 2026

conference ★ 0.83 CFP closes Jun 15, 2026
📅 Jul 7, 2026 – Jul 8, 2026 📍 Sydney, Australia via Australian AI Safety Forum

Two-day interdisciplinary AI safety forum grounded in the International AI Safety Report. Brings together researchers, government officials, industry practitioners, and civil society to discuss measurement, evaluation, governance, and Australia's role in global AI safety. Organized by Gradient Institute with government and research institution support.

#evals #governance #evaluation #measurement-science #alignment interdisciplinary
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.75,
  "topic_relevance": 0.9,
  "time_proximity": 0.7836477987421384,
  "community_signal": 0.65,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 1,
  "source_count": 1
}

Seoul Alignment Workshop 2026

workshop ★ 0.80 Reg closes Jul 5, 2026

Part of FAR.AI's ongoing Alignment Workshop series bringing together global leaders in academia and industry to explore strategies for mitigating risks from Artificial General Intelligence. Focus on technical alignment research and safety frameworks.

#alignment #governance #control #interpretability #technical-safety
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.9,
  "topic_relevance": 0.95,
  "time_proximity": 0.7886792452830189,
  "community_signal": 0.75,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

CAMBRIA Bootcamp July 2026

fellowship ★ 0.89 Apps close Jul 6, 2026
📅 Jul 6, 2026 – Jul 24, 2026 📍 Manhattan, USA via CAMBRIA - Cambridge Bootcamp for Research in Interpretability and Alignment

Three-week ML upskilling bootcamp for AI safety focusing on interpretability and RL, based on ARENA curriculum. Fully funded with housing, meals, 24/7 office access, and dedicated teaching assistants. Run by Cambridge Boston Alignment Initiative.

#interpretability #alignment #control #mechanistic-interpretability bootcamp ARENA-curriculum
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.7886792452830189,
  "community_signal": 0.75,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 1,
  "source_count": 1
}

Agentic AI Summit 2026

conference ★ 0.80 CFP closes Jul 15, 2026

Summit bringing together academic leaders, entrepreneurs, AI experts, venture capitalists, and policymakers to discuss the future of AI and Agentic AI. Call for Papers and Startup Spotlight applications open. In-person and livestream available.

#alignment #evals #governance
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.75,
  "time_proximity": 0.6477987421383647,
  "community_signal": 0.75,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 1,
  "source_count": 1
}

Secure & Sovereign AI Workshop 2026

workshop ★ 0.74 Reg closes Jul 17, 2026
📅 Jul 18, 2026 – Jul 19, 2026 📍 Berlin, Germany via Foresight Institute

Technical workshop hosted by Foresight Institute bringing together experts to address bottlenecks in secure and sovereign AI systems. Focus on security challenges and governance frameworks for AI development.

#governance #evals #alignment #safety-research #security #control technical Berlin
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.75,
  "topic_relevance": 0.8,
  "time_proximity": 0.7283018867924529,
  "community_signal": 0.65,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 1,
  "source_count": 1
}

OpenAI Safety Fellowship 2026

fellowship ★ 0.78 Early-bird ends Jul 25, 2026
📅 Sep 14, 2026 – Feb 5, 2027 📍 Berkeley, USA · Hybrid via OpenAI Safety Fellowship , Constellation Astra Fellowship

OpenAI Safety Fellowship supports external researchers, engineers, and practitioners to pursue rigorous, high-impact research on safety and alignment of advanced AI systems. Priority areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety, agentic oversight, and high-severity misuse domains. Fellows receive monthly stipend, compute resources, ongoing mentorship from OpenAI staff, API credits, and workspace at Constellation in Berkeley (remote participation allowed).

#alignment #safety-research #governance #evals #control #safety-evals #robustness #oversight #safety-evaluation remote-allowed API-credits
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 1,
  "time_proximity": 0.43647798742138366,
  "community_signal": 0.85,
  "speaker_org_signal": 0.9,
  "is_deadline_open": 0,
  "source_count": 1
}

CAMBRIA Bootcamp August 2026

fellowship ★ 0.86 Apps close Aug 10, 2026
📅 Aug 10, 2026 – Aug 28, 2026 📍 Cambridge, USA via CAMBRIA - Cambridge Bootcamp for Research in Interpretability and Alignment

Three-week ML upskilling bootcamp for AI safety focusing on interpretability and RL, based on ARENA curriculum. Fully funded with housing, meals, 24/7 office access, and dedicated teaching assistants. Run by Cambridge Boston Alignment Initiative.

#interpretability #alignment #control #mechanistic-interpretability bootcamp ARENA-curriculum
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.6125786163522012,
  "community_signal": 0.75,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 1,
  "source_count": 1
}

Astra Fellowship Fall 2026

fellowship ★ 0.74 Apps close Sep 26, 2026
📅 Sep 14, 2026 – Feb 5, 2027 📍 Berkeley, USA via Constellation Astra Fellowship , OpenAI Safety Fellowship

Astra is a fully funded, in-person fellowship program operating from Constellation's Berkeley research center. Fellows work on technical, governance, strategy, and field-building projects with senior mentors. Benefits include an $8,400 monthly stipend, a ~$15K/month research budget for empirical fellows, visa support, workspace access, weekly mentorship, and placement services. An extension period is available through the end of June.

#alignment #control #evals #governance #safety-research #interpretability #strategy fellowship empirical governance strategy high-stipend compute-budget
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.43647798742138366,
  "community_signal": 0.85,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

Anthropic Fellows Program July 2026

fellowship ★ 0.96 Apps closed May 3, 2026
📅 Jul 20, 2026 – Nov 20, 2026 📍 Hybrid via Anthropic Alignment Blog

Four-month Anthropic Fellows Program providing funding, compute (~$15k/month), and close mentorship from Anthropic researchers. Weekly stipend of $3,850 USD / £2,310 / $4,300 CAD. Focus on scalable oversight, adversarial robustness, interpretability, AI welfare, and safety evaluations. Over 40% of first cohort fellows joined Anthropic full-time.

#alignment #interpretability #control #evals #adversarial-robustness fellowship research Anthropic
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.95,
  "topic_relevance": 0.95,
  "time_proximity": 0.7182389937106919,
  "community_signal": 0.85,
  "speaker_org_signal": 0.95,
  "is_deadline_open": 1,
  "source_count": 1
}

CBAI Summer Research Fellowship 2026

fellowship ★ 0.93 Apps closed Apr 12, 2026

Nine-week AI safety research fellowship for 30 fellows, with a $10,000 stipend, housing in Harvard dorms, and 24/7 office access in Harvard Square. Weekly 1-2 hour individual mentorship from researchers at Harvard, MIT, and Northeastern. Up to $10,000 in compute credits per fellow, plus conference submission support, weekly speaker events, networking, and workshops. Rolling applications with a 4-stage process: form, 15-minute interview, mentor-specific tasks, and mentor interview. International students with OPT/CPT are eligible; full in-person participation is required (18+ only).

#alignment #interpretability #governance #evals #biosecurity #safety-research fellowship research Cambridge Harvard stipend housing compute-credits
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.9245283018867925,
  "community_signal": 0.75,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

MATS Summer 2026 Fellowship

fellowship ★ 0.90 Apps closed Jan 18, 2026
📅 Jun 1, 2026 – Aug 31, 2026 📍 Berkeley, USA via MATS: ML Alignment & Theory Scholars

MATS is a 12-week research fellowship for scholars working on AI alignment and theory with mentorship from leading researchers. Fellows receive $15k stipend, $12k compute resources, private housing, catered meals, research manager support, and access to office space in Berkeley and London. Extension opportunities available for top performers.

#alignment #interpretability #governance #theory #control #safety-research #technical-safety fellowship mentorship MATS research high-prestige stipend
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.95,
  "topic_relevance": 1,
  "time_proximity": 0.9647798742138365,
  "community_signal": 0.9,
  "speaker_org_signal": 0.9,
  "is_deadline_open": 0,
  "source_count": 1
}

Workshop on Assurance and Verification of AI Development (AVID)

workshop ★ 0.88 Apps closed May 1, 2026
📅 May 17, 2026 📍 San Francisco, USA via FAR AI - Foundational AI Research

Workshop on secure and verifiable AI development, bringing together researchers, builders, and funders across ML, hardware security, systems, cryptography, and computer security. Focuses on verification techniques for AI safety. Co-located with the IEEE Security and Privacy conference. Organized by FAR.AI.

#evals #alignment #safety-research #security workshop verification cryptography hardware-security
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.9,
  "topic_relevance": 0.85,
  "time_proximity": 0.9428571428571428,
  "community_signal": 0.65,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

CAIS AI and Society Fellowship 2026

fellowship ★ 0.85 Apps closed Mar 24, 2026
📅 Jun 1, 2026 – Aug 21, 2026 📍 San Francisco, USA via Center for AI Safety

The CAIS AI and Society Fellowship is a fully-funded three-month research program for scholars in economics, law, international relations, and adjacent disciplines to investigate how advanced AI may reshape social, economic, geopolitical, and legal systems. Fellows receive $25,000 stipend, covered travel to San Francisco, daily meals, and work with significant autonomy defining their own research directions at CAIS offices. Features regular guest speakers from Stanford, law schools, and international affairs experts.

#governance #policy fellowship governance policy research
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 0.85,
  "time_proximity": 0.959748427672956,
  "community_signal": 0.8,
  "speaker_org_signal": 0.9,
  "is_deadline_open": 0,
  "source_count": 2
}

ARENA 8.0

fellowship ★ 0.85 Apps closed Apr 1, 2026

ARENA (Alignment Research Engineer Accelerator) 8.0 is a 4-5 week intensive bootcamp providing skills, community, and confidence for technical AI safety contributions. Fully funded, including travel, visa, accommodation, and meals. Based on an established curriculum with 2-3 cohorts per year.

#alignment #interpretability #control #technical-safety #evals bootcamp technical intensive
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 0.95,
  "time_proximity": 1,
  "community_signal": 0.85,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

GovAI DC Summer Fellowship 2026

fellowship ★ 0.79 Apps closed Mar 1, 2026
📅 Jun 8, 2026 – Aug 28, 2026 📍 Washington DC, USA via GovAI - Centre for the Governance of AI

3-month fellowship to launch or accelerate impactful careers in American AI governance and policy. Fellows conduct independent research projects under expert mentorship while building professional networks and developing policy expertise. Focus areas include public policy, political science, engineering, economics, biosecurity, cybersecurity, China studies, and risk management. Prioritizes bipartisan engagement, rigorous analysis, and practical policy relevance. $21,000 stipend plus travel support, weekday lunches, and DC office space. US work authorization required.

#governance #policy fellowship policy governance DC
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 0.9,
  "time_proximity": 0.89937106918239,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

GovAI Summer Fellowship 2026 - Research Track

fellowship ★ 0.79 Apps closed Jan 4, 2026
📅 Jun 8, 2026 – Aug 28, 2026 📍 London, UK via GovAI - Centre for the Governance of AI

3-month fellowship for conducting independent research on AI governance topics. Fellows receive mentorship from field experts, participate in seminars and Q&A sessions, and build professional networks. Research outputs may include reports, white papers, journal articles, op-eds, or blog posts. £12,000 stipend plus travel support and weekday lunches. Open to candidates from government, academia, industry, or civil society with expertise in policy, political science, computer science, economics, or risk management. Visa sponsorship available.

#governance #policy fellowship research governance London
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 0.9,
  "time_proximity": 0.89937106918239,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

Anthropic Fellows Program May 2026

fellowship ★ 0.78 Apps closed Apr 15, 2026
📅 May 1, 2026 – Aug 31, 2026 📍 Hybrid via Anthropic Alignment Blog

4-month empirical AI safety research fellowship with Anthropic. Fellows work on scalable oversight, adversarial robustness, AI control, model organisms, mechanistic interpretability, AI security, and model welfare. Includes $3,850/week stipend, ~$15k/month compute, and close mentorship from Anthropic researchers. Over 80% of first cohort fellows produced papers.

#alignment #interpretability #control #evals #adversarial-robustness fellowship research Anthropic applications-open-may-cohort
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.95,
  "topic_relevance": 0.95,
  "time_proximity": 0.3666666666666667,
  "community_signal": 0.9,
  "speaker_org_signal": 0.95,
  "is_deadline_open": 0,
  "source_count": 1
}

MARS V 2026

fellowship ★ 0.75
📅 Jul 13, 2026 – Oct 31, 2026 📍 Cambridge, UK · Hybrid via MARS - Mentorship for Alignment Researchers at CAISH

Part-time hybrid mentorship programme matching exceptional students and early-career researchers with experienced AI safety researchers. Includes a one-week in-person kick-off in Cambridge, a remote research phase in August-September, and a final output in October. 8-15+ hours weekly, 2-4 researchers per mentor. Provides $2K+ compute, Claude Max, accommodation, meals, travel funding, and research manager support. Past research has been published at venues including NeurIPS and ICLR.

#alignment #interpretability #evals #governance mentorship
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.8,
  "topic_relevance": 0.95,
  "time_proximity": 0.7534591194968554,
  "community_signal": 0.75,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 0,
  "source_count": 1
}

Technical AI Governance Workshop @ ICML 2026

📅 Jul 10, 2026 📍 Seoul, South Korea via ICML: Safety-related Workshops

Second Workshop on Technical AI Governance Research at ICML 2026, focusing on technical approaches to AI governance, policy, and regulation. Part of the main conference workshop track.

#governance #policy ICML workshop governance technical-governance
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.7383647798742139,
  "community_signal": 0.75,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 0,
  "source_count": 1
}

Workshop on Failure Modes in Agentic AI Systems @ ICML 2026

📅 Jul 10, 2026 📍 Seoul, South Korea via ICML: Safety-related Workshops

Workshop at ICML 2026 focused on identifying, diagnosing, and fixing failure modes in agentic AI systems. Covers reproducible triggers for failures, diagnostic tracing methods, and verified repair approaches. Highly relevant to AI safety and robustness.

#evals #alignment ICML workshop failure-modes agents diagnostics
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.7383647798742139,
  "community_signal": 0.75,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 0,
  "source_count": 1
}

Second Workshop on Agents in the Wild @ ICML 2026

📅 Jul 11, 2026 📍 Seoul, South Korea via ICML: Safety-related Workshops

Second Workshop on Agents in the Wild focusing on safety and security of AI agents deployed in real-world environments. Addresses challenges in ensuring safe and secure operation of autonomous agents. Part of ICML 2026 workshop track.

#alignment #evals ICML workshop agents safety security
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.7333333333333334,
  "community_signal": 0.75,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 0,
  "source_count": 1
}

ACL 2026 Workshop on Evaluating AI in Practice

workshop ★ 0.72 CFP closed Mar 12, 2026
📅 Jul 3, 2026 – Jul 4, 2026 📍 San Diego, USA via EvalEval Coalition

Two-day workshop surfacing practical insights from across the AI evaluation ecosystem. Addresses tensions between model developers and evaluation researchers. Topics: evaluation methodology & measurement theory; infrastructure, cost, and stakeholders; sociotechnical impacts. Full (6-8 pages), short (≤4 pages), and tiny (≤2 pages) papers are welcome. Non-archival by default, with an archival opt-in. Dual submissions allowed. At least one author must present in person. Authors are expected to review submissions.

#evals #safety-research #measurement ACL-workshop two-day
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.75,
  "topic_relevance": 0.9,
  "time_proximity": 0.7987421383647799,
  "community_signal": 0.7,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 0,
  "source_count": 1
}

AI Safety Evals Paper Reading Club

📅 May 5, 2026 – Jun 30, 2026 📍 Virtual via BlueDot Impact Events Calendar (Luma)

Weekly AI safety evaluations paper reading club hosted by BlueDot Impact. Meets every Tuesday at 4:00 PM UTC to discuss evaluation methodologies, safety benchmarks, and measurement frameworks. Open to all interested in AI safety evals research.

#evals #alignment weekly paper-discussion
Salience signals
{
  "type_weight": 0.45,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.4285714285714286,
  "community_signal": 0.7,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 1,
  "source_count": 1
}

Global South AIS Hackathon

hackathon ★ 0.66
📅 Jun 19, 2026 – Jun 21, 2026 📍 Hybrid via Apart Research

Weekend AI safety hackathon focused on Global South participation and perspectives. Hybrid format allowing both online and in-person participation. Organized by Apart Research as part of their 55+ research sprints series with 6,000+ participants across 200+ global locations.

#alignment #safety-research #evals #governance
Salience signals
{
  "type_weight": 0.65,
  "source_trust": 0.85,
  "topic_relevance": 0.8,
  "time_proximity": 0.8490566037735849,
  "community_signal": 0.7,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 0,
  "source_count": 1
}

SPAR - Supervised Program for Alignment Research

SPAR is a part-time remote research program pairing aspiring researchers with experienced mentors from Google DeepMind, RAND, Apollo Research, MATS, UK AISI, and other organizations for three-month projects. Participants dedicate 5–40 hours/week depending on availability. The program culminates in a Demo Day where mentees present findings. Well-established programme with a broad mentor base.

#alignment #governance #evals #safety-research #interpretability #technical-safety remote mentorship
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

International Programme on AI Evaluation 2026

fellowship ★ 0.65 Apps closed Jan 15, 2026
📅 Feb 1, 2026 – May 29, 2026 📍 Valencia, Spain · Hybrid via International Programme on AI Evaluation

World's first academic programme dedicated to AI evaluation combining technical depth with policy and governance perspectives. 150 hours total: 90 hours online (lectures, networking, activities), 20 hours hands-on courses, 40 hours in-person capstone week in Valencia. Cohort of 40 top global participants. Fully funded scholarships available. Graduates receive 15 ECTS Expert Diploma from ValgrAI. Faculty from Cambridge, Stanford, Princeton, EU AI Office, UK AI Safety Institute, FAR AI, Apollo Research. Funded by Coefficient Giving.

#evals #safety-research #governance #policy fellowship evals academic hybrid diploma funded prestigious
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0,
  "community_signal": 0.75,
  "speaker_org_signal": 0.9,
  "is_deadline_open": 0,
  "source_count": 1
}

CCN 2026 - Cognitive Computational Neuroscience

conference ★ 0.60 CFP closed Apr 30, 2026
📅 Aug 3, 2026 – Aug 6, 2026 📍 New York, USA via Cognitive Computational Neuroscience (CCN)

9th annual Cognitive Computational Neuroscience conference bringing together researchers in cognitive science, neuroscience, and AI. Focus on mechanistic interpretability, neural modeling, and brain-AI comparison. AAE attendees follow this venue for predictive-coding, metacognition, and signal-detection-theory measurement work applicable to LLMs.

#interpretability #cognitive-science #control #mechanistic-interpretability #formal-foundations #instrument-science #computational-neuroscience #measurement-science cog-neuro measurement-science cognitive-science
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.8,
  "topic_relevance": 0.7,
  "time_proximity": 0.6477987421383647,
  "community_signal": 0.6,
  "speaker_org_signal": 0.65,
  "is_deadline_open": 0,
  "source_count": 1
}

UNIDIR Cybersecurity Conference 2026

📅 May 4, 2026 – May 5, 2026 📍 Geneva, Switzerland · Hybrid via UNIDIR - United Nations Institute for Disarmament Research

UN Institute for Disarmament Research conference addressing contemporary cybersecurity challenges. Part of UNIDIR's broader work on security and technology initiatives. In scope per AAE's sociotechnical threat surface and international AI policy interests.

#governance #cyber-security UN disarmament
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.65,
  "time_proximity": 0.4285714285714286,
  "community_signal": 0.5,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

Machine Learning Summer School (MLSS) 2026 at Columbia

📅 Jun 15, 2026 – Jun 27, 2026 📍 New York, USA via Machine Learning Summer School (MLSS) - Columbia

Two-week intensive summer school at Columbia University covering machine learning topics including mechanistic interpretability, alignment/safety, RAG & agents, and LLM systems. Approximately 200 PhD students participate alongside faculty and industry speakers. In-scope due to dedicated alignment and mechanistic interpretability tracks.

#interpretability #alignment
Salience signals
{
  "type_weight": 0.35,
  "source_trust": 0.75,
  "topic_relevance": 0.7,
  "time_proximity": 0.869182389937107,
  "community_signal": 0.5,
  "speaker_org_signal": 0.6,
  "is_deadline_open": 0,
  "source_count": 1
}

AI Safety Awareness Workshop - Oakland

📅 May 5, 2026 📍 Berkeley, USA via AI Safety Awareness Group Oakland (Meetup)

A free, accessible workshop hosted by AI Safety Awareness Group Oakland exploring AI's trajectory and societal impact. No technical background required. Features live demonstrations of current AI systems, interactive forecasting activities, and discussions about AI's implications for work, relationships, and society over the next 1-5 years.

#governance #alignment
Salience signals
{
  "type_weight": 0.45,
  "source_trust": 0.7,
  "topic_relevance": 0.7,
  "time_proximity": 0.5714285714285714,
  "community_signal": 0.6,
  "speaker_org_signal": 0.5,
  "is_deadline_open": 0,
  "source_count": 1
}