conference
★ 0.73 CFP closes May 4, 2026
Premier machine learning conference with multiple safety-related workshops. Multi-location format across Sydney, Atlanta, and Paris. Workshop applications accepted until June 6. Paper abstract submission May 4, full papers May 6. Author notifications September 24. Cross-reference workshop list for safety-specific content.
#interpretability#evals#alignment#ml-research#safety-research major-conference multi-location
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.17,
"community_signal": 0.85,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}
workshop
★ 0.85 CFP closes May 8, 2026
The third iteration of the mechanistic interpretability workshop at ICML brings together diverse perspectives from the community to discuss recent advances, build common understanding, and chart future directions in mechanistic interpretability, the study of neural network internals and decision-making processes. Organized by researchers from Google DeepMind, Harvard, Northeastern, Imperial College London, Oxford, and Yonsei University.
#interpretability#alignment#circuit-tracing#sparse-autoencoders#mechanistic-interpretability#instrument-science ICML workshop mechanistic-interpretability ICML-workshop third-iteration
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.7685534591194969,
"community_signal": 0.85,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}
workshop
★ 0.75 CFP closes May 8, 2026
Workshop on Pluralistic AI: Aligning with the Diversity of Human Values. The workshop invites submissions of 4-8 pages following ICML 2026 template through OpenReview. All papers appear on the workshop website but remain non-archival. Focus on integrating diverse perspectives into AI alignment frameworks.
#alignment#governance#pluralistic-alignment#pluralistic-values ICML-workshop interdisciplinary
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7635220125786164,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
workshop
★ 0.92 Apps close May 10, 2026
5-week project-based course for engineers and early researchers to work with an AI safety expert on a contribution to AI safety research or engineering. Includes mentorship, regular check-ins, and a published write-up. Covers alignment, mechanistic interpretability, evaluations, red-teaming, AI control, and scalable oversight.
#alignment#interpretability#evals
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.8,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.86 Apps close May 10, 2026
Part-time research programme pairing teams of 2-4 with experienced mentors to produce published AI safety research. Includes prework (June), one-week in-person kick-off (July 13-19 or 20-26), and 8-10 week remote phase (Aug-Oct). Provides $2,000+ compute budget, Claude Max (5x) for technical teams, dedicated research manager, and travel funding. 8-15+ hours weekly commitment. Operated by Cambridge AI Safety Hub.
#alignment#safety-research#interpretability mentorship hybrid-format
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.8,
"topic_relevance": 0.95,
"time_proximity": 0.7484276729559749,
"community_signal": 0.7,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.96 Reg closes May 13, 2026
Annual technical AI safety conference at Oxford organized by Oxford Martin AI Governance Initiative and Noeon Research. Third iteration. Free to attend. First time in the UK; previous editions in 2024 and 2025. Registration open via Luma. Welcomes attendees from all backgrounds regardless of prior research experience.
#alignment#governance#safety-research#evals#interpretability conference technical Oxford free one-day
Salience signals
{
"type_weight": 1,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.6857142857142857,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.80 Apps close May 18, 2026
CAMBRIA is a 3-week ML upskilling bootcamp focused on interpretability and RL, based on the ARENA curriculum. Participants receive housing, meals, 24/7 office access in Harvard Square, dedicated teaching assistants, and travel support. Run by Cambridge Boston Alignment Initiative.
#interpretability#alignment#control#mechanistic-interpretability bootcamp ARENA-curriculum
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 1,
"time_proximity": 0.8,
"community_signal": 0.8,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
conference
★ 0.81 Apps close May 20, 2026
EA Global London 2026 connects experts and peers to collaborate on projects and tackle global challenges. Designed for individuals with a solid understanding of core EA ideas who are actively applying them. AI safety is heavily represented given the EA community's focus on existential risk. Applications are open for all 2026 EA Global events.
#alignment#governance
Salience signals
{
"type_weight": 1,
"source_trust": 0.8,
"topic_relevance": 0.7,
"time_proximity": 0.979874213836478,
"community_signal": 0.85,
"speaker_org_signal": 0.6,
"is_deadline_open": 1,
"source_count": 1
}
hackathon
★ 0.77 Reg closes May 21, 2026
A hackathon organized by Apart Research focused on secure program synthesis and AI safety. The event addresses challenges in automated code generation, verification, and security considerations in AI-assisted programming.
#alignment#control#safety-research#evals#security#automated-research#safety-applications hybrid sprint
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.9142857142857143,
"community_signal": 0.7,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.74 Early-bird ends May 22, 2026
Pivotal Research Fellowship is a quarterly AI safety research program in London. Fellows receive a £6,000–£8,000 stipend, travel coverage, a £2,000 housing stipend for non-London participants, meals, and compute resources. The program includes weekly one-on-one mentorship, research management support, in-person workspace at LISA, and expert workshops and speaker sessions. Extension opportunities of up to 6 months are available to 70–90% of participants.
#alignment#interpretability#governance#evals#safety-research#technical-safety London stipend quarterly
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.75,
"topic_relevance": 0.95,
"time_proximity": 0.8238993710691824,
"community_signal": 0.75,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}
conference
★ 0.82 Early-bird ends May 24, 2026
Premier machine learning conference with Expo/Tutorial Day (July 6), Main Conference (July 7-9), and Workshops (July 10-11). Workshops announced April 6. Safety-related workshops should be cross-referenced from full workshop list. Early registration pricing before May 24.
#alignment#interpretability#evals#ml-research major-conference
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.7,
"time_proximity": 0.7836477987421384,
"community_signal": 0.85,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.82 Apps close Jun 1, 2026
ILIAD is a 5-day multi-track unconference focused on theoretical AI alignment research with 100+ researchers. The unconference format means participants can propose and lead their own sessions. Topics include Singular Learning Theory, Agent Foundations, Causal Incentives, and Computational Mechanics. Free to attend, with limited onsite bedrooms and needs-based financial support for travel and housing.
#alignment#theory#interpretability#agent-foundations#formal-foundations#theoretical-alignment#theoretical-foundations conference unconference theoretical free mathematical
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 1,
"time_proximity": 0.6477987421383647,
"community_signal": 0.85,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.78 Apps close Jun 1, 2026
ERA is a 10-week fellowship in Cambridge, UK for talented individuals at any career stage working on technical AI safety, AI governance, or technical AI governance. Fellows work on individual research projects with weekly mentorship from established researchers. The program provides competitive stipend, meals during working hours, full coverage of transport/visas/lodging, 30+ events, compute resources, and dedicated research management support.
#alignment#governance#evals#safety-research#technical-safety Cambridge-UK stipend
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.7886792452830189,
"community_signal": 0.8,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
conference
★ 0.73 Reg closes Jun 4, 2026
Foresight Institute's flagship conference gathering leading scientists, entrepreneurs, funders, and policymakers to explore the frontiers of science and technology, and plan for flourishing futures. The event includes tracks on AI safety and secure AI technologies.
#governance#alignment#ai-safety#frontier-tech#frontier-technology#policy networking flagship
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.8,
"time_proximity": 0.9446540880503145,
"community_signal": 0.75,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}
hackathon
★ 0.77 Reg closes Jun 6, 2026
A 1-day AI safety video creation sprint hosted by BlueDot Impact in London. Participants create AI risk content to help communicate key safety concepts to wider audiences. Part of BlueDot's community building efforts to support the AI safety ecosystem.
#alignment#governance
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.9345911949685535,
"community_signal": 0.65,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.68 Reg closes Jun 7, 2026
UNIDIR's global conference addressing artificial intelligence's implications for international peace and security. Part of UNIDIR's Security and Technology Programme focusing on AI governance, ethics, and international policy frameworks.
#governance#policy#ai-ethics#international-policy UN international governance hybrid Geneva
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.879245283018868,
"community_signal": 0.6,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}
workshop
★ 0.85 Reg closes Jul 5, 2026
Part of the ongoing Alignment Workshop series organized by FAR.AI, bringing together global leaders to explore strategies for mitigating risks from Artificial General Intelligence (AGI). The workshop focuses on technical alignment approaches and safety strategies.
#alignment#governance#control#interpretability#technical-safety
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.7886792452830189,
"community_signal": 0.85,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.80 Apps close Jul 6, 2026
CAMBRIA is a 3-week ML upskilling bootcamp focused on interpretability and RL, based on the ARENA curriculum. Participants receive housing, meals, 24/7 office access, dedicated teaching assistants, and travel support. Run by Cambridge Boston Alignment Initiative.
#interpretability#alignment#control#mechanistic-interpretability bootcamp ARENA-curriculum
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 1,
"time_proximity": 0.7886792452830189,
"community_signal": 0.8,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
conference
★ 0.80 CFP closes Jul 15, 2026
Summit bringing together academic leaders, entrepreneurs, AI experts, venture capitalists, and policymakers to discuss the future of AI and Agentic AI. Call for Papers and Startup Spotlight applications open. In-person and livestream available.
#alignment#evals#governance
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.6477987421383647,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}
workshop
★ 0.75 Reg closes Jul 17, 2026
Technical workshop bringing together top talent to address challenges in secure AI. Part of Foresight Institute's secure AI focus area. Two-day technical deep-dive for researchers and practitioners working on AI security and sovereignty challenges.
#governance#evals#alignment#safety-research#security technical Berlin
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.75,
"topic_relevance": 0.8,
"time_proximity": 0.7232704402515724,
"community_signal": 0.65,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.78 Early-bird ends Jul 25, 2026
OpenAI Safety Fellowship supports external researchers, engineers, and practitioners to pursue rigorous, high-impact research on safety and alignment of advanced AI systems. Priority areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety, agentic oversight, and high-severity misuse domains. Fellows receive monthly stipend, compute resources, ongoing mentorship from OpenAI staff, API credits, and workspace at Constellation in Berkeley (remote participation allowed).
#alignment#safety-research#governance#evals#control#safety-evals#robustness#oversight#safety-evaluation remote-allowed API-credits
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.43647798742138366,
"community_signal": 0.85,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.77 Apps close Aug 10, 2026
CAMBRIA is a 3-week ML upskilling bootcamp focused on interpretability and RL, based on the ARENA curriculum. Participants receive housing, meals, 24/7 office access in Harvard Square, dedicated teaching assistants, and travel support. Run by Cambridge Boston Alignment Initiative.
#interpretability#alignment#control#mechanistic-interpretability bootcamp ARENA-curriculum
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 1,
"time_proximity": 0.6125786163522012,
"community_signal": 0.8,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.74 Apps close Sep 26, 2026
Astra is a fully funded, in-person fellowship program operating from Constellation's Berkeley research center. Fellows work on technical, governance, strategy, and field-building projects with senior mentors. Benefits include $8,400 monthly stipend, ~$15K/month research budget for empirical fellows, visa support, workspace access, weekly mentorship, and placement services. Extension period available through the end of June.
#alignment#control#evals#governance#safety-research#interpretability#strategy fellowship empirical governance strategy high-stipend compute-budget
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.43647798742138366,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.93 Apps closed Apr 12, 2026
Nine-week AI safety research fellowship for 30 fellows with $10,000 stipend, housing in Harvard dorms, 24/7 office access in Harvard Square. Weekly 1-2 hour individual mentorship from researchers at Harvard, MIT, Northeastern. Up to $10,000 in compute credits per fellow, conference submission support, weekly speaker events, networking, workshops. Rolling application with 4-stage process: form, 15-min interview, mentor-specific tasks, mentor interview. International students with OPT/CPT eligible, full in-person participation required (18+ only).
#alignment#interpretability#governance#evals#biosecurity#safety-research fellowship research Cambridge Harvard stipend housing compute-credits
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.9245283018867925,
"community_signal": 0.75,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.90 Apps closed Jan 18, 2026
MATS is a 12-week research fellowship for scholars working on AI alignment and theory with mentorship from leading researchers. Fellows receive $15k stipend, $12k compute resources, private housing, catered meals, research manager support, and access to office space in Berkeley and London. Extension opportunities available for top performers.
#alignment#interpretability#governance#theory#control#safety-research#technical-safety fellowship mentorship MATS research high-prestige stipend
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 1,
"time_proximity": 0.9647798742138365,
"community_signal": 0.9,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}
workshop
★ 0.88 Apps closed May 1, 2026
Workshop on secure and verifiable AI development, bringing together researchers, builders, and funders across ML, hardware security, systems, cryptography, and computer security. Focuses on verification techniques for AI safety. Colocated with IEEE Security and Privacy conference. Organized by FAR.AI.
#evals#alignment#safety-research#security workshop verification cryptography hardware-security
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.9428571428571428,
"community_signal": 0.65,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.86 Apps closed Apr 1, 2026
ARENA 8.0 is an intensive 4-5 week in-person bootcamp focused on technical AI safety education. The program aims to provide talented individuals with the skills, community, and confidence to contribute directly to technical AI safety. ARENA covers travel, visa, accommodation, and meals for all participants.
#alignment#interpretability#control#technical-safety bootcamp technical intensive
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 1,
"community_signal": 0.85,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.85 Apps closed Mar 24, 2026
The CAIS AI and Society Fellowship is a fully-funded three-month research program for scholars in economics, law, international relations, and adjacent disciplines to investigate how advanced AI may reshape social, economic, geopolitical, and legal systems. Fellows receive $25,000 stipend, covered travel to San Francisco, daily meals, and work with significant autonomy defining their own research directions at CAIS offices. Features regular guest speakers from Stanford, law schools, and international affairs experts.
#governance#policy fellowship governance policy research
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.959748427672956,
"community_signal": 0.8,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 2
}
fellowship
★ 0.84 Apps closed May 3, 2026
4-month empirical AI safety research fellowship with Anthropic. Fellows work on scalable oversight, adversarial robustness, AI control, model organisms, mechanistic interpretability, AI security, and model welfare. Includes $3,850/week stipend, ~$15k/month compute, and close mentorship from Anthropic researchers. Applications were due May 3, 2026.
#alignment#interpretability#control#evals#adversarial-robustness fellowship research Anthropic
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.7182389937106919,
"community_signal": 0.9,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.79 Apps closed Mar 1, 2026
3-month fellowship to launch or accelerate impactful careers in American AI governance and policy. Fellows conduct independent research projects under expert mentorship while building professional networks and developing policy expertise. Focus areas include public policy, political science, engineering, economics, biosecurity, cybersecurity, China studies, and risk management. Prioritizes bipartisan engagement, rigorous analysis, and practical policy relevance. $21,000 stipend plus travel support, weekday lunches, and DC office space. US work authorization required.
#governance#policy fellowship policy governance DC
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.89937106918239,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.79 Apps closed Jan 4, 2026
3-month fellowship for conducting independent research on AI governance topics. Fellows receive mentorship from field experts, participate in seminars and Q&A sessions, and build professional networks. Research outputs may include reports, white papers, journal articles, op-eds, or blog posts. £12,000 stipend plus travel support and weekday lunches. Open to candidates from government, academia, industry, or civil society with expertise in policy, political science, computer science, economics, or risk management. Visa sponsorship available.
#governance#policy fellowship research governance London
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.89937106918239,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.78 Apps closed Apr 15, 2026
4-month empirical AI safety research fellowship with Anthropic. Fellows work on scalable oversight, adversarial robustness, AI control, model organisms, mechanistic interpretability, AI security, and model welfare. Includes $3,850/week stipend, ~$15k/month compute, and close mentorship from Anthropic researchers. Over 80% of first cohort fellows produced papers.
#alignment#interpretability#control#evals#adversarial-robustness fellowship research Anthropic applications-open-may-cohort
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.3666666666666667,
"community_signal": 0.9,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 1
}
Interdisciplinary forum grounded in the science of AI safety as explored in the International AI Safety Report. The annual event in Sydney addresses measurement, evaluation, and governance of AI systems. Focuses on questions like 'How do we measure what AI systems can do, and what counts as a meaningful safety evaluation?'
#evals#governance#evaluation#measurement-science
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.9,
"time_proximity": 0.7836477987421384,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}
Second Workshop on Technical AI Governance Research at ICML 2026, focusing on technical approaches to AI governance, policy, and regulation. Part of the main conference workshop track.
#governance#policy ICML workshop governance technical-governance
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7383647798742139,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
Workshop at ICML 2026 focused on identifying, diagnosing, and fixing failure modes in agentic AI systems. Covers reproducible triggers for failures, diagnostic tracing methods, and verified repair approaches. Highly relevant to AI safety and robustness.
#evals#alignment ICML workshop failure-modes agents diagnostics
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7383647798742139,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
Second Workshop on Agents in the Wild focusing on safety and security of AI agents deployed in real-world environments. Addresses challenges in ensuring safe and secure operation of autonomous agents. Part of ICML 2026 workshop track.
#alignment#evals ICML workshop agents safety security
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7333333333333334,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
workshop
★ 0.72 CFP closed Mar 12, 2026
Two-day workshop surfacing practical insights from across the AI evaluation ecosystem. Addresses tensions between model developers and evaluation researchers. Topics: evaluation methodology & measurement theory; infrastructure, cost, and stakeholders; sociotechnical impacts. Full (6-8 pages), short (≤4 pages), and tiny (≤2 pages) papers are welcome. Non-archival by default with an archival opt-in. Dual submissions allowed. At least one author must present in person. Authors are expected to review submissions.
#evals#safety-research#measurement ACL-workshop two-day
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.75,
"topic_relevance": 0.9,
"time_proximity": 0.7987421383647799,
"community_signal": 0.7,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
Weekend AI safety hackathon focused on Global South participation and perspectives. Hybrid format allowing both online and in-person participation. Organized by Apart Research as part of its series of 55+ research sprints with 6,000+ participants across 200+ global locations.
#alignment#safety-research#evals#governance
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.8,
"time_proximity": 0.8490566037735849,
"community_signal": 0.7,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
conference
★ 0.66 CFP closed Apr 30, 2026
The 9th annual Cognitive Computational Neuroscience conference featuring keynote speakers and oral presentations with papers presented as posters, plus community-proposed programming including GACs (Generative Adversarial Collaborations) and K&Ts (Keynote-and-Tutorial presentations). Relevant to AAE attendees for predictive-coding, metacognition, and signal-detection-theory measurement work applicable to LLMs.
#interpretability#cognitive-science#control#mechanistic-interpretability#formal-foundations#instrument-science#computational-neuroscience#measurement-science cog-neuro measurement-science
Salience signals
{
"type_weight": 1,
"source_trust": 0.8,
"topic_relevance": 0.75,
"time_proximity": 0.6477987421383647,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}
SPAR is a part-time remote research program pairing aspiring researchers with experienced mentors from Google DeepMind, RAND, Apollo Research, MATS, UK AISI, and other organizations for three-month projects. Participants dedicate 5–40 hours/week depending on availability. The program culminates in a Demo Day where mentees present findings. A well-established program with a broad mentor base.
#alignment#governance#evals#safety-research#interpretability#technical-safety remote mentorship
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.65 Apps closed Jan 15, 2026
World's first academic programme dedicated to AI evaluation combining technical depth with policy and governance perspectives. 150 hours total: 90 hours online (lectures, networking, activities), 20 hours hands-on courses, 40 hours in-person capstone week in Valencia. Cohort of 40 top global participants. Fully funded scholarships available. Graduates receive 15 ECTS Expert Diploma from ValgrAI. Faculty from Cambridge, Stanford, Princeton, EU AI Office, UK AI Safety Institute, FAR AI, Apollo Research. Funded by Coefficient Giving.
#evals#safety-research#governance#policy fellowship evals academic hybrid diploma funded prestigious
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0,
"community_signal": 0.75,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}
Foresight Institute's flagship conference bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontiers of science and technology. Features tracks on AI safety, security, and other emerging technologies. Part of Foresight's 40-year mission advancing transformative technology.
#alignment#governance frontier-science multi-track
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.65,
"time_proximity": 0.919496855345912,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_deadline_open": 0,
"source_count": 1
}
UN Institute for Disarmament Research conference addressing contemporary cybersecurity challenges. Part of UNIDIR's broader work on security and technology initiatives. In scope per AAE's sociotechnical threat surface and international AI policy interests.
#governance#cyber-security UN disarmament
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.65,
"time_proximity": 0.4285714285714286,
"community_signal": 0.5,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
Two-week intensive summer school at Columbia University covering machine learning topics including mechanistic interpretability, alignment/safety, RAG & agents, and LLM systems. Approximately 200 PhD students participate alongside faculty and industry speakers. In-scope due to dedicated alignment and mechanistic interpretability tracks.
#interpretability#alignment
Salience signals
{
"type_weight": 0.35,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.869182389937107,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_deadline_open": 0,
"source_count": 1
}
A free, accessible workshop hosted by AI Safety Awareness Group Oakland exploring AI's trajectory and societal impact. No technical background required. Features live demonstrations of current AI systems, interactive forecasting activities, and discussions about AI's implications for work, relationships, and society over the next 1-5 years.
#governance#alignment
Salience signals
{
"type_weight": 0.45,
"source_trust": 0.7,
"topic_relevance": 0.7,
"time_proximity": 0.5714285714285714,
"community_signal": 0.6,
"speaker_org_signal": 0.5,
"is_deadline_open": 0,
"source_count": 1
}
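Each entry's ★ value appears to summarize the salience signals listed beneath it, but this digest does not state how the signals are combined. The sketch below shows one plausible way to fold the JSON fields into a single 0-1 score; the weights, the deadline bonus, and the function name salience_score are illustrative assumptions, not the digest's actual formula.

# Minimal sketch, assuming a weighted average of the salience signals.
# The weights and the deadline bonus are illustrative assumptions; the
# digest does not disclose its actual scoring formula.

SIGNAL_WEIGHTS = {
    "type_weight": 0.20,
    "source_trust": 0.15,
    "topic_relevance": 0.25,
    "time_proximity": 0.15,
    "community_signal": 0.15,
    "speaker_org_signal": 0.10,
}

def salience_score(signals: dict) -> float:
    """Collapse one entry's salience signals into a 0-1 score."""
    base = sum(signals[name] * weight for name, weight in SIGNAL_WEIGHTS.items())
    # Hypothetical small boost for entries whose deadline is still open.
    bonus = 0.05 if signals.get("is_deadline_open") else 0.0
    return round(min(base + bonus, 1.0), 2)

# Example: the signals from the first entry above.
example = {
    "type_weight": 1,
    "source_trust": 0.85,
    "topic_relevance": 0.75,
    "time_proximity": 0.17,
    "community_signal": 0.85,
    "speaker_org_signal": 0.8,
    "is_deadline_open": 1,
    "source_count": 1,
}
print(salience_score(example))  # prints the sketch's score, not necessarily the digest's 0.73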