Applied Adversarial Epistemics Tracker

Upcoming events

Events, training, fellowships, mixers, and CFPs across Applied Adversarial Epistemics, instruments for epistemic access to model cognition, and the broader AI safety, alignment, and governance community. Refreshed weekly. Sorted by salience.

Anthropic Fellows Program July 2026

fellowship ★ 0.96 Apps close May 3, 2026
📅 Jul 20, 2026 – Nov 20, 2026 📍 Hybrid via Anthropic Alignment Blog

Anthropic's four-month fellowship program for AI safety research. Weekly stipend of $3,850 USD / £2,310 GBP / $4,300 CAD, plus ~$15k/month compute budget and close mentorship from Anthropic researchers. Priority areas include scalable oversight, adversarial robustness, and interpretability. Application deadline May 3.

#alignment #interpretability #control #evals #adversarial-robustness · fellowship · research · Anthropic
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.95,
  "topic_relevance": 0.95,
  "time_proximity": 0.6930817610062893,
  "community_signal": 0.85,
  "speaker_org_signal": 0.95,
  "is_deadline_open": 1,
  "source_count": 1
}
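
How these signals combine into the ★ score shown on each entry is not published in this tracker; the sketch below is an illustrative assumption only. It treats the six [0, 1] signals as a weighted sum plus a small boost while the deadline is open. The weights, the boost, and the function name are hypothetical, not the tracker's actual formula.

# Hypothetical salience scorer for the signal blocks in this tracker.
# All weights below are assumptions; the real formula is unpublished.

SIGNAL_WEIGHTS = {
    "type_weight": 0.20,
    "source_trust": 0.15,
    "topic_relevance": 0.25,
    "time_proximity": 0.15,
    "community_signal": 0.10,
    "speaker_org_signal": 0.15,
}  # weights sum to 1.0 so the base score stays in [0, 1]

def salience(signals: dict) -> float:
    """Combine per-event [0, 1] signals into a single [0, 1] salience score."""
    base = sum(signals[k] * w for k, w in SIGNAL_WEIGHTS.items())
    if signals.get("is_deadline_open"):
        base = min(1.0, base + 0.05)  # assumed open-deadline boost
    return round(base, 2)

anthropic_fellows = {
    "type_weight": 0.8, "source_trust": 0.95, "topic_relevance": 0.95,
    "time_proximity": 0.6930817610062893, "community_signal": 0.85,
    "speaker_org_signal": 0.95, "is_deadline_open": 1, "source_count": 1,
}
print(salience(anthropic_fellows))  # 0.92 under these assumed weights,
                                    # close to, but not matching, the listed ★ 0.96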

OpenAI Safety Fellowship 2026

fellowship ★ 0.89 Apps close May 3, 2026
📅 Sep 14, 2026 – Feb 5, 2027 📍 Berkeley, USA · Hybrid via OpenAI Safety Fellowship, Constellation Astra Fellowship

Fellowship partnering with Constellation where fellows work with OpenAI mentors and a peer cohort to produce substantial research output (paper, benchmark, or dataset). Focus areas: safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety, agentic oversight, high-severity misuse. Monthly stipends, compute resources, mentorship, and API credits provided. Remote participation permitted. No access to OpenAI internal systems. Values research ability, technical judgment, and execution over credentials.

#alignment #safety-research #governance #evals #control #safety-evals #robustness #oversight · remote-allowed · API-credits
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 0.95,
  "time_proximity": 0.43144654088050316,
  "community_signal": 0.85,
  "speaker_org_signal": 0.95,
  "is_deadline_open": 1,
  "source_count": 1
}

MARS V - Mentorship for Alignment Research Students

fellowship ★ 0.86 Apps close May 3, 2026
📅 Jul 13, 2026 – Oct 31, 2026 📍 Cambridge, UK · Hybrid via MARS - Mentorship for Alignment Researchers at CAISH

Part-time research programme pairing teams of 2-4 with experienced mentors to produce published AI safety research. Includes prework (June), one-week in-person kick-off (July 13-19 or 20-26), and 8-10 week remote phase (Aug-Oct). Provides $2,000+ compute budget, Claude Max (5x) for technical teams, dedicated research manager, and travel funding. 8-15+ hours weekly commitment. Operated by Cambridge AI Safety Hub.

#alignment #safety-research #interpretability · mentorship · hybrid-format
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.8,
  "topic_relevance": 0.95,
  "time_proximity": 0.7484276729559749,
  "community_signal": 0.7,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 1,
  "source_count": 1
}

Astra Fellowship Fall 2026

fellowship ★ 0.86 Apps close May 3, 2026
📅 Sep 14, 2026 – Feb 5, 2027 📍 Berkeley, USA via Constellation Astra Fellowship, OpenAI Safety Fellowship

Fully funded 5-month in-person fellowship with two specialized tracks: Empirical Stream (ML research on alignment, control, evals, oversight) and Strategy & Governance Stream (for those already familiar with catastrophic AI risk). Monthly stipend of $8,400, research budget (~$15K/month compute for empirical fellows), weekly mentorship from senior experts, visa support, workspace access, and placement services. Strong track record placing fellows at safety orgs.

#alignment #control #evals #governance #safety-research #interpretability · fellowship · empirical · governance · strategy · high-stipend · compute-budget
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.43144654088050316,
  "community_signal": 0.85,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

Pivotal Research Fellowship 2026 Q3

fellowship ★ 0.85 Apps close May 3, 2026
📅 Jun 29, 2026 – Aug 28, 2026 📍 London, UK via Pivotal Research Fellowship

Full-time AI safety research fellowship in London with weekly 1-on-1s with established researchers. Stipend of £6,000–£8,000 (senior fellows £8,000), plus coverage for travel, housing (£2,000 for non-London residents), meals at LISA, and research compute. Dedicated management support and access to LISA's co-working space. 70-90% of recent fellows received extensions providing up to 6 additional months of funding. Four-stage evaluation process. Open to anyone 18+ regardless of academic background.

#alignment #interpretability #governance #evals #safety-research · London · stipend · quarterly
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.75,
  "topic_relevance": 0.95,
  "time_proximity": 0.8188679245283019,
  "community_signal": 0.7,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 1,
  "source_count": 1
}

NeurIPS 2026

conference ★ 0.73 CFP closes May 4, 2026
📅 Dec 6, 2026 – Dec 13, 2026 📍 In-person via NeurIPS: Safety-related Workshops

Premier machine learning conference with multiple safety-related workshops. Multi-location format across Sydney, Atlanta, and Paris. Paper abstracts are due May 4 and full papers May 6; author notifications go out September 24. Workshop applications are accepted until June 6. Cross-reference the workshop list for safety-specific content.

#interpretability #evals #alignment #ml-research #safety-research · major-conference · multi-location
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.75,
  "time_proximity": 0.17,
  "community_signal": 0.85,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 1,
  "source_count": 1
}

ICML 2026 Workshop on Mechanistic Interpretability

workshop ★ 1.00 CFP closes May 8, 2026

Third iteration of the mechanistic interpretability workshop at ICML, developing principled methods to analyze and understand neural network internals (weights and activations). CFP open via OpenReview. Organized by researchers from Google DeepMind, Harvard, Northeastern, Imperial College London, and others.

#interpretability #alignment #circuit-tracing #sparse-autoencoders #mechanistic-interpretability · ICML · workshop · mechanistic-interpretability · ICML-workshop · third-iteration
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.9,
  "topic_relevance": 0.98,
  "time_proximity": 0.7635220125786164,
  "community_signal": 0.9,
  "speaker_org_signal": 0.95,
  "is_deadline_open": 1,
  "source_count": 1
}

Pluralistic Alignment @ ICML 2026

workshop ★ 0.89 CFP closes May 8, 2026
📅 Jul 11, 2026 📍 Seoul, South Korea via Pluralistic Alignment Workshop at ICML

Workshop on integrating diverse perspectives into AI alignment frameworks across philosophy, ML, HCI, social sciences, and policy, exploring how to align AI systems with human preferences and societal values. 4-8 page papers (plus unlimited refs) via OpenReview with double-blind review. Non-archival venue. Welcomes researchers from computer and social science backgrounds.

#alignment #governance #pluralistic-alignment · ICML-workshop · interdisciplinary
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.7584905660377359,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

TAIS 2026 - Technical AI Safety Conference

conference ★ 0.98 Reg closes May 13, 2026

Free one-day conference bringing together researchers and professionals working on technical AI safety. Third iteration, and the first held in the UK (previous iterations ran in 2024 and 2025). Organized by the Oxford Martin AI Governance Initiative and Noeon Research. Welcomes participants from all backgrounds regardless of research experience. Registration via Luma.

#alignment #governance #safety-research #evals #interpretability · conference · technical · Oxford · free · one-day
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.9,
  "topic_relevance": 0.95,
  "time_proximity": 0.7142857142857143,
  "community_signal": 0.85,
  "speaker_org_signal": 0.9,
  "is_deadline_open": 1,
  "source_count": 1
}

CAMBRIA Bootcamp May 2026

fellowship ★ 0.90 Apps close May 18, 2026
📅 May 18, 2026 – Jun 5, 2026 📍 Cambridge, USA via CAMBRIA - Cambridge Bootcamp for Research in Interpretability and Alignment

Three-week ML upskilling bootcamp for AI safety, focusing on interpretability and RL based on ARENA curriculum. Operated by Cambridge Boston Alignment Initiative with housing, meals, 24/7 office access, dedicated teaching assistants, and travel support.

#interpretability #alignment #control · bootcamp · ARENA-curriculum
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.8285714285714285,
  "community_signal": 0.7,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

EA Global: London 2026

conference ★ 0.87 Apps close May 20, 2026
📅 May 29, 2026 – May 31, 2026 📍 London, UK via EA Global London 2026

Effective Altruism Global conference connecting experts and peers to collaborate on projects and tackle global challenges. EA Global events are designed for individuals with a solid understanding of core EA ideas who are actively applying them. A single application form covers all 2026 EA Global events. AI safety is a major track at EAG events.

#alignment #governance
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.8,
  "topic_relevance": 0.75,
  "time_proximity": 0.969811320754717,
  "community_signal": 0.85,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 1,
  "source_count": 1
}

Secure Program Synthesis Hackathon 2026

hackathon ★ 0.85 Reg closes May 21, 2026
📅 May 22, 2026 – May 24, 2026 📍 Hybrid via Apart Research

Apart Research hackathon focused on secure program synthesis. Hybrid format with online and in-person participation options. Part of Apart's monthly sprint and hackathon series, which has attracted 3,000+ participants across 42 sprints globally.

#alignment #control #safety-research · hybrid · sprint
Salience signals
{
  "type_weight": 0.65,
  "source_trust": 0.85,
  "topic_relevance": 0.85,
  "time_proximity": 0.9428571428571428,
  "community_signal": 0.75,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

ICML 2026

conference ★ 0.82 Early-bird ends May 24, 2026
📅 Jul 6, 2026 – Jul 11, 2026 📍 Seoul, South Korea via ICML: Safety-related Workshops

Premier machine learning conference with Expo/Tutorial Day (July 6), Main Conference (July 7-9), and Workshops (July 10-11). Workshops announced April 6; cross-reference the full workshop list for safety-related content. Early registration pricing ends May 24.

#alignment #interpretability #evals #ml-research · major-conference
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.7,
  "time_proximity": 0.7836477987421384,
  "community_signal": 0.85,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 1,
  "source_count": 1
}

ILIAD 2026 - Theoretical AI Alignment Conference

conference ★ 0.93 Apps close Jun 1, 2026
📅 Aug 3, 2026 – Aug 7, 2026 📍 Berkeley, USA via ILIAD - Theoretical AI Alignment Conference

Five-day multi-track unconference bringing together 100+ researchers in theoretical AI alignment, with a mathematical emphasis. The unconference format allows participants to propose and lead sessions. Research areas: Singular Learning Theory, Agent Foundations, Causal Incentives, Computational Mechanics. Free to attend with limited needs-based travel funding. Limited onsite accommodation available. Organized by Iliad, an umbrella organization for applied mathematics research in alignment.

#alignment #theory #interpretability #agent-foundations #formal-foundations #theoretical-alignment · conference · unconference · theoretical · free · mathematical
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.6427672955974842,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

Cambridge ERA:AI Fellowship 2026

fellowship ★ 0.90 Apps close Jun 1, 2026

10-week fellowship focused on technical AI safety, AI governance, or technical AI governance. Fellows work on individual research projects with weekly mentorship from expert mentors while participating in 30+ events throughout Cambridge. Competitive stipend for all fellows with full coverage of meals during working hours, transport, visa, and lodging expenses. Welcomes talented individuals at any career stage motivated to contribute to AI safety and governance research.

#alignment #governance #evals #safety-research · Cambridge-UK · stipend
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.7836477987421384,
  "community_signal": 0.75,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

Foresight Vision Weekend UK 2026

conference ★ 0.78 Reg closes Jun 4, 2026
📅 Jun 5, 2026 – Jun 7, 2026 📍 London, UK via Foresight Institute

Foresight Institute's flagship event gathering leading scientists, entrepreneurs, funders, and policymakers to explore the frontiers of science and technology, including AI safety. Foresight is a 40-year-old organization focused on transformative technology. Part of the Vision Weekend series held globally.

#governance #alignment #ai-safety #frontier-tech · networking · flagship
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.75,
  "topic_relevance": 0.7,
  "time_proximity": 0.939622641509434,
  "community_signal": 0.7,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 1,
  "source_count": 1
}

AI Risk Content Hackathon

hackathon ★ 0.77 Reg closes Jun 6, 2026
📅 Jun 6, 2026 📍 London, UK via BlueDot Impact Events Calendar (Luma)

A 1-day AI safety video creation sprint hosted by BlueDot Impact in London. Participants create AI risk content to help communicate key safety concepts to wider audiences. Part of BlueDot's community-building efforts to support the AI safety ecosystem.

#alignment #governance
Salience signals
{
  "type_weight": 0.65,
  "source_trust": 0.85,
  "topic_relevance": 0.75,
  "time_proximity": 0.9345911949685535,
  "community_signal": 0.65,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

UNIDIR Global Conference on AI, Security and Ethics 2026

conference ★ 0.71 Reg closes Jun 7, 2026

UN Institute for Disarmament Research global conference on AI policy and its ethical implications. Focuses on AI governance, security, and ethics at the international level. In scope per AAE's governance track and international AI policy interests. Hosted at UN headquarters in Geneva.

#governance #policy #ai-ethics #international-policy · UN · international · governance · hybrid · Geneva
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.75,
  "time_proximity": 0.8742138364779874,
  "community_signal": 0.65,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

Seoul Alignment Workshop 2026

workshop ★ 0.79 Reg closes Jul 5, 2026

Part of FAR.AI's ongoing Alignment Workshop series. A gathering of global leaders exploring effective strategies for mitigating risks from advanced AI systems.

#alignment #governance #control #interpretability
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.9,
  "topic_relevance": 0.9,
  "time_proximity": 0.7836477987421384,
  "community_signal": 0.8,
  "speaker_org_signal": 0.9,
  "is_deadline_open": 0,
  "source_count": 1
}

CAMBRIA Bootcamp July 2026

fellowship ★ 0.89 Apps close Jul 6, 2026
📅 Jul 6, 2026 – Jul 24, 2026 📍 Manhattan, USA via CAMBRIA - Cambridge Bootcamp for Research in Interpretability and Alignment

Three-week ML upskilling bootcamp for AI safety, focusing on interpretability and RL based on ARENA curriculum. Operated by Cambridge Boston Alignment Initiative with housing, meals, 24/7 office access, dedicated teaching assistants, and travel support. Manhattan cohort hosted by Collider.

#interpretability #alignment #control · bootcamp · ARENA-curriculum
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.7836477987421384,
  "community_signal": 0.7,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

Agentic AI Summit 2026

conference ★ 0.80 CFP closes Jul 15, 2026

Summit bringing together academic leaders, entrepreneurs, AI experts, venture capitalists, and policymakers to discuss the future of AI and Agentic AI. Call for Papers and Startup Spotlight applications open. In-person and livestream available.

#alignment #evals #governance
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.75,
  "time_proximity": 0.6477987421383647,
  "community_signal": 0.75,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 1,
  "source_count": 1
}

Secure & Sovereign AI Workshop 2026

workshop ★ 0.75 Reg closes Jul 17, 2026
📅 Jul 18, 2026 – Jul 19, 2026 📍 Berlin, Germany via Foresight Institute

Technical workshop bringing together top talent to address challenges in secure AI. Part of Foresight Institute's secure AI focus area. Two-day technical deep-dive for researchers and practitioners working on AI security and sovereignty challenges.

#governance #evals #alignment #safety-research #security · technical · Berlin
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.75,
  "topic_relevance": 0.8,
  "time_proximity": 0.7232704402515724,
  "community_signal": 0.65,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 1,
  "source_count": 1
}

CAMBRIA Bootcamp August 2026

fellowship ★ 0.85 Apps close Aug 10, 2026
📅 Aug 10, 2026 – Aug 28, 2026 📍 Cambridge, USA via CAMBRIA - Cambridge Bootcamp for Research in Interpretability and Alignment

Three-week ML upskilling bootcamp for AI safety, focusing on interpretability and RL based on ARENA curriculum. Operated by Cambridge Boston Alignment Initiative with housing, meals, 24/7 office access, dedicated teaching assistants, and travel support.

#interpretability #alignment #control · bootcamp · ARENA-curriculum
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.6075471698113207,
  "community_signal": 0.7,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

CBAI Summer Research Fellowship 2026

fellowship ★ 0.93 Apps closed Apr 12, 2026

Nine-week AI safety research fellowship for 30 fellows with $10,000 stipend, housing in Harvard dorms, 24/7 office access in Harvard Square. Weekly 1-2 hour individual mentorship from researchers at Harvard, MIT, Northeastern. Up to $10,000 in compute credits per fellow, conference submission support, weekly speaker events, networking, workshops. Rolling application with 4-stage process: form, 15-min interview, mentor-specific tasks, mentor interview. International students with OPT/CPT eligible, full in-person participation required (18+ only).

#alignment #interpretability #governance #evals #biosecurity #safety-research · fellowship · research · Cambridge · Harvard · stipend · housing · compute-credits
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0.9245283018867925,
  "community_signal": 0.75,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

MATS Summer 2026 Fellowship

fellowship ★ 0.88 Apps closed Jan 18, 2026
📅 Jun 1, 2026 – Aug 31, 2026 📍 Berkeley, USA via MATS: ML Alignment & Theory Scholars

12-week intensive fellowship with mentorship from leading AI safety researchers. Fellows receive $15k stipend, $12k compute budget, housing, meals, and workspace. Top-performing fellows may extend for additional 6 months with continued funding. Includes research management support, seminars, and networking events. Also offered in London cohort.

#alignment #interpretability #governance #theory #control #safety-research · fellowship · mentorship · MATS · research · high-prestige · stipend
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.95,
  "topic_relevance": 0.95,
  "time_proximity": 0.959748427672956,
  "community_signal": 0.9,
  "speaker_org_signal": 0.95,
  "is_deadline_open": 0,
  "source_count": 1
}

Workshop on Assurance and Verification of AI Development (AVID)

workshop ★ 0.88 Apps closed May 1, 2026
📅 May 17, 2026 📍 San Francisco, USA via FAR AI - Foundational AI Research

Workshop on secure and verifiable AI development, bringing together researchers, builders, and funders across ML, hardware security, systems, cryptography, and computer security. Focuses on verification techniques for AI safety. Colocated with IEEE Security and Privacy conference. Organized by FAR.AI.

#evals #alignment #safety-research #security · workshop · verification · cryptography · hardware-security
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.9,
  "topic_relevance": 0.85,
  "time_proximity": 0.9428571428571428,
  "community_signal": 0.65,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 1,
  "source_count": 1
}

ARENA 8.0

fellowship ★ 0.86 Apps closed Apr 1, 2026

4-5 week intensive in-person bootcamp focused on alignment research engineering. Covers travel, visa, accommodation, meals, drinks and snacks. Three-stage selection process: application form (under 90 min), coding assessment (6 questions, 1 hour), and 30-minute interview. One of 2-3 bootcamps run annually.

#alignment #interpretability #control · bootcamp · technical · intensive
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 0.95,
  "time_proximity": 0.9949685534591195,
  "community_signal": 0.85,
  "speaker_org_signal": 0.9,
  "is_deadline_open": 0,
  "source_count": 1
}

CAIS AI and Society Fellowship 2026

fellowship ★ 0.85 Apps closed Mar 24, 2026
📅 Jun 1, 2026 – Aug 21, 2026 📍 San Francisco, USA via Center for AI Safety

The CAIS AI and Society Fellowship is a fully funded three-month research program for scholars in economics, law, international relations, and adjacent disciplines to investigate how advanced AI may reshape social, economic, geopolitical, and legal systems. Fellows receive a $25,000 stipend, covered travel to San Francisco, and daily meals, and work with significant autonomy to define their own research directions at CAIS offices. Features regular guest speakers from Stanford, law schools, and international affairs experts.

#governance #policy · fellowship · governance · policy · research
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 0.85,
  "time_proximity": 0.959748427672956,
  "community_signal": 0.8,
  "speaker_org_signal": 0.9,
  "is_deadline_open": 0,
  "source_count": 2
}

GovAI DC Summer Fellowship 2026

fellowship ★ 0.79 Apps closed Mar 1, 2026
📅 Jun 8, 2026 – Aug 28, 2026 📍 Washington DC, USA via GovAI - Centre for the Governance of AI

3-month fellowship to launch or accelerate impactful careers in American AI governance and policy. Fellows conduct independent research projects under expert mentorship while building professional networks and developing policy expertise. Focus areas include public policy, political science, engineering, economics, biosecurity, cybersecurity, China studies, and risk management. Prioritizes bipartisan engagement, rigorous analysis, and practical policy relevance. $21,000 stipend plus travel support, weekday lunches, and DC office space. US work authorization required.

#governance #policy · fellowship · policy · governance · DC
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 0.9,
  "time_proximity": 0.89937106918239,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

GovAI Summer Fellowship 2026 - Research Track

fellowship ★ 0.79 Apps closed Jan 4, 2026
📅 Jun 8, 2026 – Aug 28, 2026 📍 London, UK via GovAI - Centre for the Governance of AI

3-month fellowship for conducting independent research on AI governance topics. Fellows receive mentorship from field experts, participate in seminars and Q&A sessions, and build professional networks. Research outputs may include reports, white papers, journal articles, op-eds, or blog posts. £12,000 stipend plus travel support and weekday lunches. Open to candidates from government, academia, industry, or civil society with expertise in policy, political science, computer science, economics, or risk management. Visa sponsorship available.

#governance #policy · fellowship · research · governance · London
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 0.9,
  "time_proximity": 0.89937106918239,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

Anthropic Fellows Program May 2026

fellowship ★ 0.77 Apps closed Apr 15, 2026
📅 May 1, 2026 – Jul 31, 2026 📍 Hybrid via Anthropic Alignment Blog

Pilot fellowship designed to accelerate AI safety research and foster research talent in the field. Fellows work on steering and controlling future powerful AI systems and evaluating associated risks.

#alignment #interpretability #control #evals #adversarial-robustness · fellowship · research · Anthropic · applications-open-may-cohort
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.95,
  "topic_relevance": 0.95,
  "time_proximity": 0.3888888888888889,
  "community_signal": 0.85,
  "speaker_org_signal": 0.95,
  "is_deadline_open": 0,
  "source_count": 1
}

Workshop on Technical AI Governance Research @ ICML 2026

📅 Jul 10, 2026 📍 Seoul, South Korea via ICML: Safety-related Workshops

Second Workshop on Technical AI Governance Research at ICML 2026, focusing on technical approaches to AI governance, policy, and regulation. Part of the main conference workshop track.

#governance #policy · ICML · workshop · governance · technical-governance
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.7383647798742139,
  "community_signal": 0.75,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 0,
  "source_count": 1
}
📅 Jul 10, 2026 📍 Seoul, South Korea via ICML: Safety-related Workshops

Workshop at ICML 2026 focused on identifying, diagnosing, and fixing failure modes in agentic AI systems. Covers reproducible triggers for failures, diagnostic tracing methods, and verified repair approaches. Highly relevant to AI safety and robustness.

#evals #alignment · ICML · workshop · failure-modes · agents · diagnostics
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.7383647798742139,
  "community_signal": 0.75,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 0,
  "source_count": 1
}

Second Workshop on Agents in the Wild @ ICML 2026

📅 Jul 11, 2026 📍 Seoul, South Korea via ICML: Safety-related Workshops

Second Workshop on Agents in the Wild focusing on safety and security of AI agents deployed in real-world environments. Addresses challenges in ensuring safe and secure operation of autonomous agents. Part of ICML 2026 workshop track.

#alignment #evals · ICML · workshop · agents · safety · security
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.7333333333333334,
  "community_signal": 0.75,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 0,
  "source_count": 1
}

ACL 2026 Workshop on Evaluating AI in Practice

workshop ★ 0.72 CFP closed Mar 12, 2026
📅 Jul 3, 2026 – Jul 4, 2026 📍 San Diego, USA via EvalEval Coalition

Two-day workshop surfacing practical insights from across the AI evaluation ecosystem. Addresses tensions between model developers and evaluation researchers. Topics: evaluation methodology & measurement theory, infrastructure/cost/stakeholders, sociotechnical impacts. Full (6-8p), short (≤4p), tiny (≤2p) papers welcome. Non-archival by default with archival opt-in. Dual submissions allowed. At least one author must present in-person. Authors expected to review submissions.

#evals #safety-research #measurement · ACL-workshop · two-day
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.75,
  "topic_relevance": 0.9,
  "time_proximity": 0.7987421383647799,
  "community_signal": 0.7,
  "speaker_org_signal": 0.8,
  "is_deadline_open": 0,
  "source_count": 1
}

CCN 2026 - Cognitive Computational Neuroscience

conference ★ 0.69 CFP closed Apr 30, 2026
📅 Aug 3, 2026 – Aug 6, 2026 📍 New York, USA via Cognitive Computational Neuroscience (CCN)

9th annual forum for researchers in cognitive science, neuroscience, and AI who study the computational foundations of complex behavior. Focus areas relevant to AAE: measuring and expanding representational competencies of modern AI systems, mechanistic interpretability of DNNs, representation learning and representational alignment, and understanding the commonalities and differences between biological and artificial intelligent systems. Keynote speakers from Princeton, MIT, Okinawa Institute, UC Berkeley, and the University of Alberta.

#interpretability #cognitive-science #control #mechanistic-interpretability · cog-neuro · measurement-science
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.8,
  "topic_relevance": 0.85,
  "time_proximity": 0.6427672955974842,
  "community_signal": 0.65,
  "speaker_org_signal": 0.75,
  "is_deadline_open": 0,
  "source_count": 1
}

Global South AIS Hackathon

hackathon ★ 0.66
📅 Jun 19, 2026 – Jun 21, 2026 📍 Hybrid via Apart Research

Weekend AI safety hackathon focused on Global South participation and perspectives. Hybrid format allowing both online and in-person participation. Organized by Apart Research as part of its research sprint series (55+ sprints, 6,000+ participants across 200+ global locations).

#alignment #safety-research #evals #governance
Salience signals
{
  "type_weight": 0.65,
  "source_trust": 0.85,
  "topic_relevance": 0.8,
  "time_proximity": 0.8490566037735849,
  "community_signal": 0.7,
  "speaker_org_signal": 0.7,
  "is_deadline_open": 0,
  "source_count": 1
}

International Programme on AI Evaluation 2026

fellowship ★ 0.65 Apps closed Jan 15, 2026
📅 Feb 1, 2026 – May 29, 2026 📍 Valencia, Spain · Hybrid via International Programme on AI Evaluation

World's first academic programme dedicated to AI evaluation combining technical depth with policy and governance perspectives. 150 hours total: 90 hours online (lectures, networking, activities), 20 hours hands-on courses, 40 hours in-person capstone week in Valencia. Cohort of 40 top global participants. Fully funded scholarships available. Graduates receive 15 ECTS Expert Diploma from ValgrAI. Faculty from Cambridge, Stanford, Princeton, EU AI Office, UK AI Safety Institute, FAR AI, Apollo Research. Funded by Coefficient Giving.

#evals #safety-research #governance #policy · fellowship · evals · academic · hybrid · diploma · funded · prestigious
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0,
  "community_signal": 0.75,
  "speaker_org_signal": 0.9,
  "is_deadline_open": 0,
  "source_count": 1
}
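
The time_proximity of 0 in the block above is consistent with the programme having already started (Feb 1) by the time of the current refresh. A minimal sketch of one plausible normalization, assuming a linear decay from 1.0 (starts now) to 0.0 (already started, or beyond a fixed horizon); the tracker's actual horizon and decay curve are not published, so both are assumptions here.

from datetime import date

def time_proximity(start: date, today: date, horizon_days: int = 365) -> float:
    """Map an event's start date into [0, 1]: 1.0 = starts today,
    0.0 = already started or further out than the horizon.
    Hypothetical normalization; the tracker's real formula is unpublished."""
    days_out = (start - today).days
    if days_out < 0:  # event already underway
        return 0.0
    return max(0.0, 1.0 - days_out / horizon_days)

# The Feb 1, 2026 programme start yields 0.0 against a May 2026 refresh date.
print(time_proximity(date(2026, 2, 1), date(2026, 5, 1)))  # -> 0.0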

Part-time, remote research fellowship pairing aspiring AI safety researchers with experienced mentors from Google DeepMind, RAND, Apollo Research, MATS, UK AISI for three-month projects. Flexible 5-40 hours/week commitment with 130+ mentors for Spring 2026. Concludes with Demo Day featuring posters, talks, and career fair.

#alignment #governance #evals #safety-research #interpretability · remote · mentorship
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.95,
  "time_proximity": 0,
  "community_signal": 0.75,
  "speaker_org_signal": 0.9,
  "is_deadline_open": 0,
  "source_count": 1
}
📅 May 4, 2026 – May 5, 2026 📍 Geneva, Switzerland · Hybrid via UNIDIR - United Nations Institute for Disarmament Research

UN Institute for Disarmament Research conference addressing contemporary cybersecurity challenges. Part of UNIDIR's broader work on security and technology initiatives. In scope per AAE's sociotechnical threat surface and international AI policy interests.

#governance #cyber-security · UN · disarmament
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.65,
  "time_proximity": 0.4285714285714286,
  "community_signal": 0.5,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 1
}

Machine Learning Summer School (MLSS) 2026 - Columbia

📅 Jun 15, 2026 – Jun 27, 2026 📍 New York, USA via Machine Learning Summer School (MLSS) - Columbia

Two-week intensive summer school at Columbia University covering machine learning topics including mechanistic interpretability, alignment/safety, RAG & agents, and LLM systems. Approximately 200 PhD students participate alongside faculty and industry speakers. In-scope due to dedicated alignment and mechanistic interpretability tracks.

#interpretability #alignment
Salience signals
{
  "type_weight": 0.35,
  "source_trust": 0.75,
  "topic_relevance": 0.7,
  "time_proximity": 0.869182389937107,
  "community_signal": 0.5,
  "speaker_org_signal": 0.6,
  "is_deadline_open": 0,
  "source_count": 1
}
📅 May 5, 2026 📍 Berkeley, USA via AI Safety Awareness Group Oakland (Meetup)

A free, accessible workshop hosted by AI Safety Awareness Group Oakland exploring AI's trajectory and societal impact. No technical background required. Features live demonstrations of current AI systems, interactive forecasting activities, and discussions about AI's implications for work, relationships, and society over the next 1-5 years.

#governance #alignment
Salience signals
{
  "type_weight": 0.45,
  "source_trust": 0.7,
  "topic_relevance": 0.7,
  "time_proximity": 0.5714285714285714,
  "community_signal": 0.6,
  "speaker_org_signal": 0.5,
  "is_deadline_open": 0,
  "source_count": 1
}