conference
★ 0.41 CFP closes May 6, 2026
Annual Neural Information Processing Systems conference, held across three satellite locations: Sydney, Atlanta, and Paris. Main conference for machine learning research. Individual safety-related workshops are in scope when announced; generic ML sessions are out of scope for the AAE tracker.
#interpretability#evals#alignment#ml-research#safety-research#mechanistic-interpretability#machine-learning major-conference multi-location
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.4,
"time_proximity": 0.17162162162162165,
"community_signal": 0.5,
"speaker_org_signal": 0.3,
"is_deadline_open": 1,
"source_count": 1
}
workshop
★ 0.89 CFP closes May 8, 2026
Annual mechanistic interpretability workshop at ICML 2026. Welcomes work that furthers mechanistic interpretability, including research on feature geometry, circuit analyses, interpretability for practical applications, safety, and scientific discovery. Topics include evidence for cognitive phenomena such as latent reasoning, implicit planning, search algorithms, internal world models, beliefs, personas, and reasoning processes. Also covers identifying and debugging undesirable model behaviors, developing safer models, and detecting alignment faking or emergent misalignment. Non-archival workshop.
#interpretability#alignment#circuit-tracing#sparse-autoencoders#mechanistic-interpretability#instrument-science#adversarial-robustness ICML workshop mechanistic-interpretability ICML-workshop third-iteration
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.7735849056603774,
"community_signal": 0.9,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 2
}
workshop
★ 0.85 CFP closes May 8, 2026
Workshop on Pluralistic AI: Aligning with the Diversity of Human Values at ICML 2026. Submissions should be anonymized papers of 4 to 8 pages, following the ICML 2026 template, submitted through OpenReview. Acceptance notifications on May 22, 2026.
#alignment#governance#pluralistic-alignment#pluralistic-values#measurement-science ICML-workshop interdisciplinary
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7685534591194969,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}
workshop
★ 0.75 CFP closes May 8, 2026
Workshop on Pluralistic AI: Aligning with the Diversity of Human Values. Covers technical, philosophical, and societal aspects of pluralistic AI, spanning machine learning, human-computer interaction, philosophy, social sciences, and policy studies. Featured speakers include Mitchell Gordon (MIT), Atoosa Kasirzadeh (Carnegie Mellon University), and Xiaoyuan Yi (Microsoft Research Asia). Accepts various formats including position papers, works in progress, and policy papers.
#alignment#governance
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.8,
"time_proximity": 0.7685534591194969,
"community_signal": 0.75,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 2
}
workshop
★ 0.92 Apps close May 10, 2026
5-week project-based course for engineers and early researchers to work with an AI safety expert on a contribution to AI safety research or engineering. Includes mentorship, regular check-ins, and a published write-up. Covers alignment, mechanistic interpretability, evaluations, red-teaming, AI control, and scalable oversight.
#alignment#interpretability#evals
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.8,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.90 Apps close May 10, 2026
Stage 2 of the MARS V application process, open only to candidates invited after Stage 1 review. This is the mentor-selection phase for invited applicants. Same program structure as Stage 1: part-time hybrid mentorship combining a one-week in-person sprint with a remote research phase.
#alignment#technical-safety#mentorship#control#interpretability#evals#governance
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.8,
"topic_relevance": 1,
"time_proximity": 0.7584905660377359,
"community_signal": 0.7,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.75 Apps close May 10, 2026
A three-day conference assembling individuals committed to effective altruism, featuring presentations on cutting-edge research, workshops to enhance decision-making and execution, and extensive networking opportunities. Opens with a Friday evening reception, followed by full-day programming Saturday and Sunday, concluding around 6pm Sunday. Topics include AI safety, global health, and other pressing global challenges.
#alignment#governance#evals effective-altruism
Salience signals
{
"type_weight": 1,
"source_trust": 0.8,
"topic_relevance": 0.7,
"time_proximity": 0.9849056603773585,
"community_signal": 0.85,
"speaker_org_signal": 0.6,
"is_deadline_open": 0,
"source_count": 2
}
conference
★ 0.44 CFP closes May 10, 2026
The 29th annual meeting of the Association for the Scientific Study of Consciousness brings together researchers from around the world to share the latest findings in the scientific study of consciousness. Topics include empirical, theoretical, and philosophical investigations into neural correlates of consciousness and subjective experience. Relevant to AAE for metacognition and measurement-science approaches applicable to LLM interpretability work.
#consciousness#measurement-science#cognitive-science#metacognition
Salience signals
{
"type_weight": 1,
"source_trust": 0.6,
"topic_relevance": 0.5,
"time_proximity": 0.8238993710691824,
"community_signal": 0.3,
"speaker_org_signal": 0.2,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.51 Early-bird ends May 24, 2026
International Conference on Machine Learning in Seoul. Main conference for machine learning research. Individual safety-related workshops are in scope when announced; generic ML sessions are out of scope for the AAE tracker. Workshop days are July 10-11.
#alignment#interpretability#evals#ml-research#mechanistic-interpretability#governance#machine-learning major-conference
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.4,
"time_proximity": 0.7937106918238994,
"community_signal": 0.5,
"speaker_org_signal": 0.3,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.93 Apps close Jun 1, 2026
5-day, multi-track unconference with 100+ researchers focused on theoretical AI alignment. Covers mathematical approaches including Singular Learning Theory, Agent Foundations, Causal Incentives, Computational Mechanics, Safety-by-Debate, and Scalable Oversight. Free to attend with limited needs-based funding available for travel and accommodation.
#alignment#theoretical-foundations#agent-foundations#formal-foundations#formal-methods
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 1,
"time_proximity": 0.6528301886792452,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.79 Reg closes Jun 7, 2026
Multi-stakeholder conference convening diplomats, policymakers, academics, civil society, industry, and research laboratories to examine artificial intelligence's implications for international peace and security. Building on the 2025 inaugural event, discussions address AI governance, military applications, and policy frameworks grounded in international law. Abstracts accepted for technology solutions, policy approaches, and concrete use-cases.
#governance#sociotechnical-threats
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.8,
"time_proximity": 0.8842767295597485,
"community_signal": 0.7,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 2
}
conference
★ 0.76 CFP closes Jun 15, 2026
Two-day interdisciplinary forum grounded in the science of AI safety. Brings together researchers, policymakers, and practitioners from research, government, industry, and civil society. Program includes keynote presentations, panel discussions, parallel workshops, lightning talks, and structured networking. Explores technical AI safety challenges, governance approaches, risk assessment, and evaluations.
#evals#governance#evaluation#measurement-science#alignment#evaluations#policy#technical-safety interdisciplinary
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 1,
"time_proximity": 0.7886792452830189,
"community_signal": 0.65,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.96 Apps closed Mar 1, 2026
Three-month bipartisan fellowship designed to launch or accelerate impactful careers in American AI governance and policy. Participants deepen their understanding of the field, connect with a network of experts, and build skills and a professional profile. $21,000 stipend. Alumni have secured positions at leading AI companies (DeepMind, OpenAI, Anthropic).
#governance#policy fellowship policy governance DC
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.929559748427673,
"community_signal": 0.9,
"speaker_org_signal": 1,
"is_deadline_open": 0,
"source_count": 2
}
fellowship
★ 0.96 Apps closed Jan 4, 2026
Three-month fellowship where fellows conduct independent research on an AI governance topic of their choice with mentorship from leading experts. £12,000 stipend. GovAI was founded to help decision-makers navigate the transition to advanced AI through rigorous research and talent fostering. Alumni have secured positions at DeepMind, OpenAI, and Anthropic.
#governance#policy fellowship research governance London
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.929559748427673,
"community_signal": 0.9,
"speaker_org_signal": 1,
"is_deadline_open": 0,
"source_count": 2
}
fellowship
★ 0.96 Apps closed Jan 18, 2026
The largest MATS program to date, with 120 fellows and 100 mentors. Fellows work full-time on AI alignment, security, or governance projects across five tracks: Empirical, Policy and Strategy, Theory, Technical Governance, and Compute Infrastructure. Fellows receive weekly mentor meetings and research seminars 2-3 times per week. Historically, ~70% of fellows receive 6-12 month extensions with funding for stipends, housing, and compute. Fellows are connected with mentors or organizational research groups including Anthropic's Alignment Science team, UK AISI, Redwood Research, ARC, and LawZero.
#alignment#mechanistic-interpretability#governance#interpretability#evals
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.969811320754717,
"community_signal": 0.95,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 2
}
fellowship
★ 0.95 Apps closed Apr 12, 2026
Nine-week AI safety research fellowship run by the Cambridge Boston Alignment Initiative. Accepts 30 fellows (undergraduate, Master's, and PhD students, postdocs, and recent graduates). Includes a $10,000 stipend, accommodation in Harvard dorms, meals, workspace access, and up to $10,000 in compute credits. Applications are reviewed on a rolling basis through a four-stage process. International students on OPT/CPT are eligible, but visa sponsorship is not available.
#alignment#interpretability#governance#evals#biosecurity#safety-research fellowship research Cambridge Harvard stipend housing compute-credits
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.9345911949685535,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.92 Apps closed Mar 8, 2026
A 4-5 week ML bootcamp with a focus on AI safety, providing talented individuals with ML engineering skills, community, and the confidence to contribute directly to technical AI safety. Covers transformers and interpretability, reinforcement learning, and model evaluations, and concludes with a capstone project. ARENA alumni go on to become MATS scholars, LASR participants, and AI safety engineers at Apollo Research, METR, and UK AISI. First week (May 25-June 1) is an optional review of Neural Network Fundamentals.
#alignment#interpretability#technical-safety#mechanistic-interpretability#evals
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.9714285714285714,
"community_signal": 0.9,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 2
}
fellowship
★ 0.90 Apps closed May 3, 2026
A four-month fellowship providing fellows with a weekly stipend of $3,850 USD / £2,310 GBP / $4,300 CAD, funding for compute (~$15k/month), and close mentorship from Anthropic researchers. Program focuses on AI safety research with an emphasis on ability to execute on research rather than formal credentials. Seeks individuals who code well in Python and can take ambiguous problems and make concrete progress on them.
#alignment#technical-safety#interpretability#evals
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.7232704402515724,
"community_signal": 0.9,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 2
}
Part of FAR.AI's ongoing Alignment Workshop series, bringing together global leaders in academia and industry to examine potential risks from AGI and collaboratively explore effective strategies for mitigating these risks. Focuses on critical topics including model evaluations, interpretability, robustness, scalable oversight, and AI governance. Off-the-record by default, with speakers choosing whether to present on-the-record. Continues a series previously held in San Diego, Singapore, Vienna, New Orleans, San Francisco, and London.
#alignment#governance#control#interpretability#technical-safety#agi-risk#evals#agi-safety
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.7937106918238994,
"community_signal": 0.85,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 2
}
workshop
★ 0.88 Apps closed May 1, 2026
Workshop on secure and verifiable AI development, bringing together researchers, builders, and funders across ML, hardware security, systems, cryptography, and computer security. Focuses on verification techniques for AI safety. Colocated with IEEE Security and Privacy conference. Organized by FAR.AI.
#evals#alignment#safety-research#security workshop verification cryptography hardware-security
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.9428571428571428,
"community_signal": 0.65,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.84 Apps closed Mar 24, 2026
A three-month fellowship for scholars in economics, law, international relations, and adjacent fields to research the societal impacts of rapid AI progress. Fellows work with significant autonomy, defining and pursuing their own research directions on how advanced AI may reshape social, economic, geopolitical, and legal systems. Includes regular guest speaker events and networking with CAIS researchers and Bay Area academics. Inaugural year, building on the successful 2023 philosophy fellowship.
#governance#policy#alignment fellowship governance policy research
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.969811320754717,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 2
}
fellowship
★ 0.84 Apps closed May 3, 2026
Part-time research programme pairing teams of 2-4 with experienced mentors to produce published AI safety research. Program includes one-week in-person kick-off at Cambridge AI Safety Hub (July 13-19 or 20-26), followed by 8-10 weeks of remote research culminating in publications. Targets university students and early-career researchers with coding and research experience. CAISH covers travel, accommodation, meals, office space, $2k+ compute budget, and Claude Max access. Mentors include researchers from Redwood Research, Google DeepMind, RAND, and Anthropic.
#alignment#interpretability#evals
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.8,
"topic_relevance": 0.95,
"time_proximity": 0.7584905660377359,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 2
}
Flagship conference by Foresight Institute gathering leading scientists, entrepreneurs, funders, and policymakers to explore the frontiers of science and technology. Includes an AI safety track among broader frontier science topics. 40-year-old organization focused on transformative technology. Three-day event open for registration.
#alignment#governance frontier-science multi-track
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.8,
"time_proximity": 0.949685534591195,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}
Free, one-day AI safety event organized by the Oxford Martin AI Governance Initiative and Noeon Research. Welcomes attendees from all backgrounds regardless of prior research experience. Third iteration, and the first held in the UK following previous editions in 2024 and 2025.
#alignment#governance#technical-safety
Salience signals
{
"type_weight": 1,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.6571428571428571,
"community_signal": 0.75,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
Three-week ML upskilling bootcamp focused on AI safety, interpretability, and reinforcement learning based on the ARENA curriculum. Run by Cambridge Boston Alignment Initiative in Manhattan, hosted by Collider. Includes housing, meals, dedicated teaching assistants, and travel support. Prerequisites include Python familiarity and comfort with multivariable calculus and linear algebra.
#interpretability#alignment#technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.7937106918238994,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.79 Apps closed May 3, 2026
OpenAI's safety fellowship program in partnership with Constellation. Physical workspace offered in Berkeley at Constellation, though remote participation is also permitted. Program runs approximately 5 months from September through February.
#alignment#technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.44150943396226416,
"community_signal": 0.8,
"speaker_org_signal": 1,
"is_deadline_open": 0,
"source_count": 1
}
Three-week ML upskilling bootcamp focused on AI safety, interpretability, and reinforcement learning based on the ARENA curriculum. Run by Cambridge Boston Alignment Initiative in Cambridge, MA. May 18 to June 5 cohort (mostly outside the astronomical summer window; CBAI brands all 2026 cohorts as Summer 2026 internally). Includes housing, meals, 24/7 office access in Harvard Square, and dedicated teaching assistants. Prerequisites: Python familiarity and comfort with multivariable calculus and linear algebra. Travel support provided. CBAI does not publish a closing date for this cohort, so no specific deadline is recorded here.
#interpretability#alignment#technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.7714285714285715,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
10-week Cambridge-based fellowship welcoming talented individuals at any career stage working on technical AI safety research, AI governance and policy, or technical AI governance. Provides competitive stipend, meals during working hours, full coverage of transport/visas/lodging, expert mentorship, research management support, compute resources, and 30+ community events throughout the program.
#alignment#governance#technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 1,
"time_proximity": 0.7937106918238994,
"community_signal": 0.7,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
Four-month AI safety research fellowship. Fellows receive a weekly stipend of $3,850 USD / £2,310 GBP / $4,300 CAD, funding for compute (~$15k/month), and close mentorship from Anthropic researchers. Designed to accelerate AI safety research and foster research talent.
#alignment#technical-safety#interpretability
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.3666666666666667,
"community_signal": 0.9,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.77 Apps closed May 3, 2026
Part-time hybrid mentorship programme for alignment researchers, run by Cambridge AI Safety Hub. Combines one-week in-person sprint (July 13-26) with remote research phase (August to October). Provides $2k+ compute budgets and Claude Max access. Features 24+ mentors from Redwood Research, Google DeepMind, RAND, and universities across technical AI safety and governance tracks. Approximately 4 months total duration.
#alignment#technical-safety#mentorship#control#interpretability#evals#governance
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.8,
"topic_relevance": 1,
"time_proximity": 0.7584905660377359,
"community_signal": 0.7,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
Flagship event gathering leading scientists, entrepreneurs, funders, and policymakers to explore the frontiers of science and technology. Includes AI safety track. Part of Foresight Institute's Vision Weekend series. 40-year-old organization focused on transformative technology.
#governance#frontier-science#technical-safety
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.9446540880503145,
"community_signal": 0.65,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}
Three-week ML upskilling bootcamp focused on AI safety, interpretability, and reinforcement learning based on the ARENA curriculum. Run by Cambridge Boston Alignment Initiative in Cambridge, MA. Includes housing, meals, 24/7 office access in Harvard Square, and dedicated teaching assistants. Prerequisites include Python familiarity and comfort with multivariable calculus and linear algebra. Travel support provided.
#interpretability#alignment#technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.6176100628930817,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
A three-day hackathon where teams prototype tools for verifying AI-generated code. Participants focus on one of four tracks: specification elicitation, specification validation, spec-driven development, or adversarial robustness testing for formal methods tools. Target participants include software engineers with expertise in proof engineering, fuzzing, model checking, or related areas. Top teams receive invitations to apply for the Secure Program Synthesis Fellowship (June-October 2026) with mentorship, compute resources, and API credits. Organized by Apart Research and Atlas Computing.
#alignment#control#safety-research#evals#security#automated-research#safety-applications#technical-safety#evaluations#governance#verification#code-safety#adversarial-robustness#scaling-infrastructure hybrid sprint
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.85,
"time_proximity": 0.8857142857142857,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 2
}
fellowship
★ 0.74 Apps closed Jan 14, 2026
Part-time remote research fellowship enabling aspiring AI safety and policy researchers to work on impactful research projects with professionals. Spring 2026 offers 130+ projects (50% increase from Fall 2025) across AI safety, governance, and security. Mentors from Google DeepMind, RAND, Apollo Research, MATS, UK AISI, MIRI, Goodfire, and universities like Cambridge, Harvard, Oxford, MIT. Projects require 5-20 hours per week. Topics include neural circuit breakers for deception detection, emergent misalignment, reasoning model organisms, and autonomous AI governance.
#alignment#governance#evals#safety-research#interpretability#technical-safety remote mentorship
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0,
"community_signal": 0.9,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 2
}
fellowship
★ 0.73 Apps closed May 3, 2026
9-week AI safety research fellowship in London with expert mentorship, offering up to 6-month extensions. Provides £6,000 to £8,000 stipend plus coverage for travel, housing (£2,000 for non-London fellows), meals, and compute resources. 70 to 90% of fellows in recent cohorts received extensions for continued work beyond the initial period.
#alignment#governance#technical-safety#mechanistic-interpretability
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.75,
"topic_relevance": 1,
"time_proximity": 0.8289308176100629,
"community_signal": 0.65,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
Five-day intensive programme taking AI safety founders from idea to funded. Successful pitches receive £50k in equity-free seed funding. Part of BlueDot Impact's incubator and rapid-funding initiatives supporting concrete AI safety work.
#alignment#governance startup incubator
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.9647798742138365,
"community_signal": 0.75,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}
Second Workshop on Technical AI Governance Research at ICML 2026, focusing on technical approaches to AI governance, policy, and regulation. Part of the main conference workshop track.
#governance#policy ICML workshop governance technical-governance
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7383647798742139,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
Workshop at ICML 2026 focused on identifying, diagnosing, and fixing failure modes in agentic AI systems. Covers reproducible triggers for failures, diagnostic tracing methods, and verified repair approaches. Highly relevant to AI safety and robustness.
#evals#alignment ICML workshop failure-modes agents diagnostics
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7383647798742139,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
Second Workshop on Agents in the Wild focusing on safety and security of AI agents deployed in real-world environments. Addresses challenges in ensuring safe and secure operation of autonomous agents. Part of ICML 2026 workshop track.
#alignment#evals ICML workshop agents safety security
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7333333333333334,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
Fully funded, in-person program pairing senior advisors with emerging talent on 5-month technical, governance, strategy, and field-building projects. $8,400 monthly stipend, ~$15K/month research budget for empirical fellows (compute), workspace at a Berkeley research center, weekly mentorship from experts, and visa support for international applicants. Applications for the Fall 2026 cohort closed May 3. Strong placement rates at safety organizations.
#alignment#governance#technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.43647798742138366,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
Weekly AI safety evaluations paper reading club hosted by BlueDot Impact. Meets every Tuesday at 4:00 PM UTC to discuss evaluation methodologies, safety benchmarks, and measurement frameworks. Open to all interested in AI safety evals research.
#evals#alignment weekly paper-discussion
Salience signals
{
"type_weight": 0.45,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.4285714285714286,
"community_signal": 0.7,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}
Eight-week virtual reading group run by MIT AI Alignment (MAIA). Topics include AI's trajectory, misalignment, technical safety, policy, and careers in the field. Two hours per week commitment. Run in small groups facilitated by MAIA members. Free; no prior AI background required. Applications open through May 22. MAIA is an MIT student group conducting AI alignment research with membership in the hundreds, supported by CBAI.
#alignment#governance#technical-safety
Salience signals
{
"type_weight": 0.45,
"source_trust": 0.8,
"topic_relevance": 0.9,
"time_proximity": 0.8857142857142857,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
Summit bringing together academic leaders, entrepreneurs, AI experts, venture capitalists, and policymakers to discuss the future of AI and Agentic AI. Call for Papers and Startup Spotlight applications open. In-person and livestream available.
#alignment#evals#governance
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.6477987421383647,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
One-day hackathon organized by BlueDot Impact focused on AI risk content creation. Brings together community members to develop educational and outreach materials related to AI safety and existential risk. Part of BlueDot's ongoing series of community events building the workforce needed to safely navigate AGI.
#alignment#governance#evals content-creation
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.9446540880503145,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
Part-time remote research fellowship pairing aspiring researchers with 130+ experienced mentors from Google DeepMind, RAND, Apollo Research, MATS, UK AISI, and other top organizations for three-month AI alignment projects. Participants commit 5-40 hours weekly. Covers project expenses including compute and API/LLM access. Culminates in a virtual Demo Day with prizes totaling $7,000, with optional continuation beyond May 16. Mentor application track: experienced researchers from the same organizations apply to mentor a project; the mentor application deadline was 2025-12-05 (passed).
#alignment#governance#evals#safety-research#interpretability#technical-safety remote mentorship
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0,
"community_signal": 0.8,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}
Weekend AI safety hackathon focused on Global South participation and perspectives. Hybrid format allowing both online and in-person participation. Organized by Apart Research as part of their 55+ research sprints series with 6,000+ participants across 200+ global locations.
#alignment#safety-research#evals#governance
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.8,
"time_proximity": 0.8490566037735849,
"community_signal": 0.7,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
workshop
★ 0.65 CFP closed Mar 12, 2026
Workshop at ACL 2026 focusing on AI evaluation in practice, centering the tensions and collaborations between model developers and evaluation researchers. Accepts full papers (6-8 pages), short papers (up to 4 pages), or tiny papers/extended abstracts (up to 2 pages). Authors are expected to serve as reviewers.
#evals#safety-research#measurement#evaluation-methodology ACL-workshop two-day
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.75,
"topic_relevance": 0.85,
"time_proximity": 0.8088050314465409,
"community_signal": 0.6,
"speaker_org_signal": 0.65,
"is_deadline_open": 0,
"source_count": 1
}
Foresight Institute flagship event gathering leading scientists, entrepreneurs, funders, and policymakers to explore the frontiers of science and technology. Multiple tracks including AI safety topics.
#frontier-science#ai-safety
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.949685534591195,
"community_signal": 0.6,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.64 Apps closed Jan 15, 2026
First global academic programme dedicated to AI evaluation, combining technical depth with policy and governance. 150-hour expert diploma covering capabilities and safety evaluations. Includes 90 hours online instruction, 20 hours hands-on courses, and 40-hour in-person capstone week in Valencia. Faculty from Cambridge, Stanford, Princeton, EU AI Office, UK AI Safety Institute, FAR AI, Apollo Research. Targets professionals joining AI Safety Institutes, government agencies, and industry research labs.
#evals#safety-research#governance#policy fellowship evals academic hybrid diploma funded prestigious
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0,
"community_signal": 0.7,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}
Technical workshop focusing on secure AI topics. Brings together top talent to solve the bottlenecks holding back progress in secure and sovereign AI systems.
#governance#evals#alignment#safety-research#security#control#technical-safety#ai-security technical Berlin
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.75,
"topic_relevance": 0.8,
"time_proximity": 0.7333333333333334,
"community_signal": 0.6,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
9th annual Cognitive Computational Neuroscience conference. Primarily single-track, featuring keynote speakers and oral presentations. Paper submissions are presented as posters, with select papers chosen for oral presentation. Relevant to AAE for predictive-coding, metacognition, and signal-detection-theory measurement work applicable to LLMs.
#measurement-science#cognitive-science#interpretability#neuroscience
Salience signals
{
"type_weight": 1,
"source_trust": 0.8,
"topic_relevance": 0.6,
"time_proximity": 0.6528301886792452,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_deadline_open": 0,
"source_count": 1
}
Two-day conference on global cooperation for cybersecurity resilience and stability. Organized by the United Nations Institute for Disarmament Research. Addresses international frameworks for cyber governance and security cooperation.
#governance#cyber-security UN disarmament
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.6,
"time_proximity": 0.4,
"community_signal": 0.5,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}
Two-week intensive summer school at Columbia University covering machine learning topics including mechanistic interpretability, alignment/safety, RAG & agents, and LLM systems. Approximately 200 PhD students participate alongside faculty and industry speakers. In-scope due to dedicated alignment and mechanistic interpretability tracks.
#interpretability#alignment
Salience signals
{
"type_weight": 0.35,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.869182389937107,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_deadline_open": 0,
"source_count": 1
}
A free, accessible workshop hosted by AI Safety Awareness Group Oakland exploring AI's trajectory and societal impact. No technical background required. Features live demonstrations of current AI systems, interactive forecasting activities, and discussions about AI's implications for work, relationships, and society over the next 1-5 years.
#governance#alignment
Salience signals
{
"type_weight": 0.45,
"source_trust": 0.7,
"topic_relevance": 0.7,
"time_proximity": 0.5714285714285714,
"community_signal": 0.6,
"speaker_org_signal": 0.5,
"is_deadline_open": 0,
"source_count": 1
}
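Note on the ★ salience scores: each entry's star value presumably aggregates the fields in its "Salience signals" blob, but this listing does not state the formula. The sketch below is a minimal, hypothetical Python example of how such a score could be computed from one of these JSON blobs, assuming a simple weighted sum with a small corroboration bonus; SIGNAL_WEIGHTS and the salience() helper are illustrative assumptions, not the tracker's actual implementation, and the output will not generally reproduce the ★ values shown above.

import json

# Hypothetical field weights -- illustrative only; the tracker's real weighting
# is not published in this listing.
SIGNAL_WEIGHTS = {
    "type_weight": 0.15,
    "source_trust": 0.10,
    "topic_relevance": 0.30,
    "time_proximity": 0.15,
    "community_signal": 0.10,
    "speaker_org_signal": 0.10,
    "is_deadline_open": 0.05,
}

def salience(signals: dict) -> float:
    """Weighted sum of signal fields, plus a small bonus for extra corroborating sources."""
    base = sum(w * float(signals.get(k, 0.0)) for k, w in SIGNAL_WEIGHTS.items())
    bonus = 0.05 * min(int(signals.get("source_count", 1)) - 1, 1)
    return round(min(base + bonus, 1.0), 2)

# Example: one of the signals blobs from this section, parsed as JSON.
example = json.loads("""
{
  "type_weight": 0.85,
  "source_trust": 0.9,
  "topic_relevance": 0.95,
  "time_proximity": 0.77,
  "community_signal": 0.9,
  "speaker_org_signal": 0.85,
  "is_deadline_open": 0,
  "source_count": 2
}
""")

print(salience(example))  # a score in [0, 1]; not expected to match the listed stars exactly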