conference
★ 0.41 CFP closes May 6, 2026
Neural Information Processing Systems annual conference, held across three satellite locations: Sydney, Atlanta, and Paris. The main conference for machine learning research. Individual safety-related workshops are in scope when announced; generic ML sessions are out of scope for the AAE tracker.
#interpretability #evals #alignment #ml-research #safety-research #mechanistic-interpretability #machine-learning major-conference multi-location
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.4,
"time_proximity": 0.17162162162162165,
"community_signal": 0.5,
"speaker_org_signal": 0.3,
"is_deadline_open": 1,
"source_count": 1
}
workshop
★ 0.98 CFP closes May 8, 2026
Annual mechanistic interpretability workshop at ICML. High-quality venue for mech interp research. Continuing the series from ICML 2024 and NeurIPS 2025. Submissions accepted through OpenReview.
#interpretability #alignment #circuit-tracing #sparse-autoencoders #mechanistic-interpretability #instrument-science ICML workshop mechanistic-interpretability ICML-workshop third-iteration
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.7735849056603774,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
workshop
★ 0.85 CFP closes May 8, 2026
Workshop on Pluralistic AI: Aligning with the Diversity of Human Values at ICML 2026. Submissions should be anonymized papers of 4 to 8 pages following the ICML 2026 template, submitted through OpenReview. Acceptance notifications go out on May 22, 2026.
#alignment #governance #pluralistic-alignment #pluralistic-values #measurement-science ICML-workshop interdisciplinary
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7685534591194969,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}
workshop
★ 0.92 Apps close May 10, 2026
5-week project-based course for engineers and early researchers to work with an AI safety expert on a contribution to AI safety research or engineering. Includes mentorship, regular check-ins, and a published write-up. Covers alignment, mechanistic interpretability, evaluations, red-teaming, AI control, and scalable oversight.
#alignment #interpretability #evals
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.8,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.90 Apps close May 10, 2026
Stage 2 of the MARS V application process, open only to candidates invited after Stage 1 review. This is the mentor-selection phase for invited applicants. The program structure is the same as in the Stage 1 listing: part-time hybrid mentorship combining a one-week in-person sprint with a remote research phase.
#alignment #technical-safety #mentorship #control #interpretability #evals #governance
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.8,
"topic_relevance": 1,
"time_proximity": 0.7584905660377359,
"community_signal": 0.7,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.44 CFP closes May 10, 2026
The 29th annual meeting of the Association for the Scientific Study of Consciousness brings together researchers from around the world to share the latest findings in consciousness science. Topics include empirical, theoretical, and philosophical investigations into neural correlates of consciousness and subjective experience. Relevant to AAE for metacognition and measurement-science approaches applicable to LLM interpretability work.
#consciousness #measurement-science #cognitive-science #metacognition
Salience signals
{
"type_weight": 1,
"source_trust": 0.6,
"topic_relevance": 0.5,
"time_proximity": 0.8238993710691824,
"community_signal": 0.3,
"speaker_org_signal": 0.2,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.94 Apps close May 20, 2026
Major Effective Altruism conference with significant AI safety attendance, run by the Centre for Effective Altruism. Designed for individuals with a solid understanding of core EA ideas who are actively applying them. One application form covers all 2026 EA Global events.
#alignment #governance effective-altruism
Salience signals
{
"type_weight": 1,
"source_trust": 0.8,
"topic_relevance": 0.7,
"time_proximity": 0.979874213836478,
"community_signal": 1,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 2
}
conference
★ 0.51 Early-bird ends May 24, 2026
International Conference on Machine Learning in Seoul. The main conference for machine learning research. Individual safety-related workshops are in scope when announced; generic ML sessions are out of scope for the AAE tracker. Workshop days are July 10-11.
#alignment #interpretability #evals #ml-research #mechanistic-interpretability #governance #machine-learning major-conference
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.4,
"time_proximity": 0.7937106918238994,
"community_signal": 0.5,
"speaker_org_signal": 0.3,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.93 Apps close Jun 1, 2026
5-day, multi-track unconference with 100+ researchers focused on theoretical AI alignment. Covers mathematical approaches including Singular Learning Theory, Agent Foundations, Causal Incentives, Computational Mechanics, Safety-by-Debate, and Scalable Oversight. Free to attend with limited needs-based funding available for travel and accommodation.
#alignment #theoretical-foundations #agent-foundations #formal-foundations #formal-methods
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 1,
"time_proximity": 0.6528301886792452,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.76 CFP closes Jun 15, 2026
Two-day interdisciplinary forum grounded in the science of AI safety. Brings together researchers, policymakers, and practitioners from research, government, industry, and civil society. Program includes keynote presentations, panel discussions, parallel workshops, lightning talks, and structured networking. Explores technical AI safety challenges, governance approaches, risk assessment, and evaluations.
#evals #governance #evaluation #measurement-science #alignment #evaluations #policy #technical-safety interdisciplinary
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 1,
"time_proximity": 0.7886792452830189,
"community_signal": 0.65,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.99 Apps closed May 3, 2026
Four-month AI safety research fellowship providing funding, compute, and direct mentorship from Anthropic researchers. Fellows work on real safety and security projects. Includes weekly stipend of $3,850 USD and compute funding of approximately $15,000 per month. Over 40% of fellows from the first cohort subsequently joined Anthropic full-time.
#alignment #technical-safety #interpretability #evals
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 1,
"time_proximity": 0.7232704402515724,
"community_signal": 0.8,
"speaker_org_signal": 1,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.96 Apps closed Mar 1, 2026
Three-month bipartisan fellowship designed to launch or accelerate impactful careers in American AI governance and policy. Participants deepen their understanding of the field, connect with a network of experts, and build skills and a professional profile. $21,000 stipend. Alumni have secured positions at leading AI companies (DeepMind, OpenAI, Anthropic).
#governance #policy fellowship policy governance DC
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.929559748427673,
"community_signal": 0.9,
"speaker_org_signal": 1,
"is_deadline_open": 0,
"source_count": 2
}
fellowship
★ 0.96 Apps closed Jan 4, 2026
Three-month fellowship in which fellows conduct independent research on an AI governance topic of their choice, with mentorship from leading experts. £12,000 stipend. GovAI was founded to help decision-makers navigate the transition to advanced AI through rigorous research and talent development. Alumni have secured positions at DeepMind, OpenAI, Anthropic.
#governance #policy fellowship research governance London
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.929559748427673,
"community_signal": 0.9,
"speaker_org_signal": 1,
"is_deadline_open": 0,
"source_count": 2
}
fellowship
★ 0.95 Apps closed Apr 12, 2026
Nine-week AI safety research fellowship run by the Cambridge Boston Alignment Initiative. Accepts 30 fellows (undergraduate, Master's, and PhD students, postdocs, and recent graduates). Includes a $10,000 stipend, accommodation in Harvard dorms, meals, workspace access, and up to $10,000 in compute credits. Applications are reviewed on a rolling basis through a four-stage process. International students on OPT/CPT are eligible, but visa sponsorship is not available.
#alignment #interpretability #governance #evals #biosecurity #safety-research fellowship research Cambridge Harvard stipend housing compute-credits
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.9345911949685535,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
ML Alignment and Theory Scholars summer cohort. 12-week fellowship running early June to late August with in-person cohorts in Berkeley and London. Applications are closed, but Expressions of Interest are still being collected. Top-performing fellows may extend their research for an additional 6-12 months through a London-based extension program with continued funding and mentorship; the extension phase runs from September onwards for accepted fellows.
#alignment #mechanistic-interpretability #governance #interpretability
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 1,
"time_proximity": 0.969811320754717,
"community_signal": 0.9,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 1
}
workshop
★ 0.88 Apps closed May 1, 2026
Workshop on secure and verifiable AI development, bringing together researchers, builders, and funders across ML, hardware security, systems, cryptography, and computer security. Focuses on verification techniques for AI safety. Colocated with IEEE Security and Privacy conference. Organized by FAR.AI.
#evals #alignment #safety-research #security workshop verification cryptography hardware-security
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.9428571428571428,
"community_signal": 0.65,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
One-month intensive bootcamp providing talented individuals with the skills, community, and confidence to contribute directly to technical AI safety. In-person at LISA in Shoreditch, London. ARENA covers all major costs including travel, visas, accommodation, and meals for participants. Applications currently closed.
#alignment #interpretability #technical-safety #mechanistic-interpretability
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.9714285714285714,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
Flagship conference by Foresight Institute gathering leading scientists, entrepreneurs, funders, and policymakers to explore frontiers of science and technology. Includes AI safety track among broader frontier science topics. 40-year-old organization focused on transformative technology. Three-day event open for registration.
#alignment #governance frontier-science multi-track
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.8,
"time_proximity": 0.949685534591195,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}
Part of the ongoing Alignment Workshop series organized by FAR AI. Aims to deepen collective understanding of potential risks from Artificial General Intelligence and collaboratively explore effective strategies for mitigating these risks. Scheduled alongside ICML 2026 in Seoul.
#alignment #governance #control #interpretability #technical-safety #agi-risk #evals #agi-safety
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.7937106918238994,
"community_signal": 0.75,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}
Free, one-day AI safety event organized by the Oxford Martin AI Governance Initiative and Noeon Research. Welcomes attendees from all backgrounds regardless of prior research experience. This is the third iteration and the first held in the UK, following the 2024 and 2025 editions.
#alignment #governance #technical-safety
Salience signals
{
"type_weight": 1,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.6571428571428571,
"community_signal": 0.75,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
Three-week ML upskilling bootcamp focused on AI safety, interpretability, and reinforcement learning based on the ARENA curriculum. Run by Cambridge Boston Alignment Initiative in Manhattan, hosted by Collider. Includes housing, meals, dedicated teaching assistants, and travel support. Prerequisites include Python familiarity and comfort with multivariable calculus and linear algebra.
#interpretability #alignment #technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.7937106918238994,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.79 Apps closed May 3, 2026
OpenAI's safety fellowship program in partnership with Constellation. Physical workspace offered in Berkeley at Constellation, though remote participation is also permitted. Program runs approximately 5 months from September through February.
#alignment #technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.44150943396226416,
"community_signal": 0.8,
"speaker_org_signal": 1,
"is_deadline_open": 0,
"source_count": 1
}
Three-week ML upskilling bootcamp focused on AI safety, interpretability, and reinforcement learning based on the ARENA curriculum. Run by Cambridge Boston Alignment Initiative in Cambridge, MA. May 18 to June 5 cohort (mostly outside the astronomical summer window; CBAI brands all 2026 cohorts as Summer 2026 internally). Includes housing, meals, 24/7 office access in Harvard Square, and dedicated teaching assistants. Prerequisites: Python familiarity and comfort with multivariable calculus and linear algebra. Travel support provided. CBAI does not publish a closing date for this cohort, so no specific deadline is recorded here.
#interpretability #alignment #technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.7714285714285715,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
10-week Cambridge-based fellowship welcoming talented individuals at any career stage working on technical AI safety research, AI governance and policy, or technical AI governance. Provides competitive stipend, meals during working hours, full coverage of transport/visas/lodging, expert mentorship, research management support, compute resources, and 30+ community events throughout the program.
#alignment #governance #technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 1,
"time_proximity": 0.7937106918238994,
"community_signal": 0.7,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
Four-month AI safety research fellowship. Fellows receive a weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD, funding for compute (~$15k/month), and close mentorship from Anthropic researchers. Designed to accelerate AI safety research and foster research talent.
#alignment #technical-safety #interpretability
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.3666666666666667,
"community_signal": 0.9,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.77 Apps closed Mar 24, 2026
Three-month research program investigating societal impacts of advanced AI and the institutions and policies that could help societies respond well. Organized by the Center for AI Safety. Application deadline March 24. Focus on how current AI systems work, their societal-scale risks, and how to manage them.
#governance #policy #alignment fellowship governance policy research
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.969811320754717,
"community_signal": 0.75,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.77 Apps closed May 3, 2026
Part-time hybrid mentorship programme for alignment researchers, run by the Cambridge AI Safety Hub. Combines a one-week in-person sprint (July 13-26) with a remote research phase (August to October). Provides $2k+ compute budgets and Claude Max access. Features 24+ mentors from Redwood Research, Google DeepMind, RAND, and universities, across technical AI safety and governance tracks. Approximately 4 months total duration.
#alignment #technical-safety #mentorship #control #interpretability #evals #governance
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.8,
"topic_relevance": 1,
"time_proximity": 0.7584905660377359,
"community_signal": 0.7,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
Flagship event gathering leading scientists, entrepreneurs, funders, and policymakers to explore the frontiers of science and technology. Includes AI safety track. Part of Foresight Institute's Vision Weekend series. 40-year-old organization focused on transformative technology.
#governance #frontier-science #technical-safety
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.9446540880503145,
"community_signal": 0.65,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}
Three-week ML upskilling bootcamp focused on AI safety, interpretability, and reinforcement learning based on the ARENA curriculum. Run by Cambridge Boston Alignment Initiative in Cambridge, MA. Includes housing, meals, 24/7 office access in Harvard Square, and dedicated teaching assistants. Prerequisites include Python familiarity and comfort with multivariable calculus and linear algebra. Travel support provided.
#interpretability #alignment #technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.6176100628930817,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.73 Apps closed May 3, 2026
9-week AI safety research fellowship in London with expert mentorship, offering extensions of up to 6 months. Provides a £6,000 to £8,000 stipend plus coverage for travel, housing (£2,000 for non-London fellows), meals, and compute resources. 70 to 90% of fellows in recent cohorts received extensions for continued work beyond the initial period.
#alignment #governance #technical-safety #mechanistic-interpretability
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.75,
"topic_relevance": 1,
"time_proximity": 0.8289308176100629,
"community_signal": 0.65,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
Five-day intensive programme taking AI safety founders from idea to funded. Successful pitches receive £50k in equity-free seed funding. Part of BlueDot Impact's incubator and rapid-funding initiatives supporting concrete AI safety work.
#alignment #governance startup incubator
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.9647798742138365,
"community_signal": 0.75,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}
Second Workshop on Technical AI Governance Research at ICML 2026, focusing on technical approaches to AI governance, policy, and regulation. Part of the main conference workshop track.
#governance #policy ICML workshop governance technical-governance
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7383647798742139,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
Workshop at ICML 2026 focused on identifying, diagnosing, and fixing failure modes in agentic AI systems. Covers reproducible triggers for failures, diagnostic tracing methods, and verified repair approaches. Highly relevant to AI safety and robustness.
#evals #alignment ICML workshop failure-modes agents diagnostics
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7383647798742139,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
Second Workshop on Agents in the Wild focusing on safety and security of AI agents deployed in real-world environments. Addresses challenges in ensuring safe and secure operation of autonomous agents. Part of ICML 2026 workshop track.
#alignment #evals ICML workshop agents safety security
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7333333333333334,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
Fully funded, in-person program pairing senior advisors with emerging talent on 5-month technical, governance, strategy, and field-building projects. $8,400 monthly stipend, ~$15K/month research budget for empirical fellows (compute), workspace at a Berkeley research center, weekly mentorship from experts, and visa support for international applicants. Applications for the Fall 2026 cohort closed May 3. Strong placement rates at safety orgs.
#alignment #governance #technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.43647798742138366,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
Weekly AI safety evaluations paper reading club hosted by BlueDot Impact. Meets every Tuesday at 4:00 PM UTC to discuss evaluation methodologies, safety benchmarks, and measurement frameworks. Open to all interested in AI safety evals research.
#evals #alignment weekly paper-discussion
Salience signals
{
"type_weight": 0.45,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.4285714285714286,
"community_signal": 0.7,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}
Eight-week virtual reading group run by MIT AI Alignment (MAIA). Topics include AI's trajectory, misalignment, technical safety, policy, and careers in the field. Two hours per week commitment. Run in small groups facilitated by MAIA members. Free, no prior AI background required. Applications open through May 22. MAIA is an MIT student group conducting AI alignment research, with membership in the hundreds, supported by CBAI.
#alignment #governance #technical-safety
Salience signals
{
"type_weight": 0.45,
"source_trust": 0.8,
"topic_relevance": 0.9,
"time_proximity": 0.8857142857142857,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
Summit bringing together academic leaders, entrepreneurs, AI experts, venture capitalists, and policymakers to discuss the future of AI and Agentic AI. Call for Papers and Startup Spotlight applications open. In-person and livestream available.
#alignment #evals #governance
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.6477987421383647,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
One-day hackathon organized by BlueDot Impact focused on AI risk content creation. Brings together community members to develop educational and outreach materials related to AI safety and existential risk. Part of BlueDot's ongoing series of community events building the workforce needed to safely navigate AGI.
#alignment #governance #evals content-creation
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.9446540880503145,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
UNIDIR conference on artificial intelligence security and ethics considerations. Part of UNIDIR's Security and Technology Programme. UNIDIR conducts research on disarmament and international security, with AI being a key focus area alongside cyber security, space security, and conventional weapons control.
#governance #policy #international-security #ai-security #ethics
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.8842767295597485,
"community_signal": 0.5,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}
Part-time remote research fellowship pairing aspiring researchers with 130+ experienced mentors from Google DeepMind, RAND, Apollo Research, MATS, UK AISI, and other top organizations for three-month AI alignment projects. Participants commit 5-40 hours weekly. Covers project expenses including compute and API/LLM access. Culminates in a virtual Demo Day with prizes totaling $7,000. Optional continuation beyond May 16. Mentee application track: aspiring researchers (undergraduate and graduate/PhD students, and professionals at various experience levels; no prior research experience required) apply to be paired with mentors. Mentee application deadline 2026-01-14 (passed).
#alignment #governance #evals #safety-research #interpretability #technical-safety remote mentorship
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0,
"community_signal": 0.8,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}
Part-time remote research fellowship pairing aspiring researchers with 130+ experienced mentors from Google DeepMind, RAND, Apollo Research, MATS, UK AISI, and other top organizations for three-month AI alignment projects. Participants commit 5-40 hours weekly. Covers project expenses including compute and API/LLM access. Culminates in a virtual Demo Day with prizes totaling $7,000. Optional continuation beyond May 16. Mentor application track: experienced researchers from Google DeepMind, RAND, Apollo Research, MATS, UK AISI, and similar organizations apply to mentor a project. Mentor application deadline 2025-12-05 (passed).
#alignment #governance #evals #safety-research #interpretability #technical-safety remote mentorship
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0,
"community_signal": 0.8,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}
Weekend AI safety hackathon focused on Global South participation and perspectives. Hybrid format allowing both online and in-person participation. Organized by Apart Research as part of their 55+ research sprints series with 6,000+ participants across 200+ global locations.
#alignment #safety-research #evals #governance
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.8,
"time_proximity": 0.8490566037735849,
"community_signal": 0.7,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
workshop
★ 0.65 CFP closed Mar 12, 2026
Workshop at ACL 2026 focusing on AI evaluation in practice, centering the tensions and collaborations between model developers and evaluation researchers. Accepts full papers (6-8 pages), short papers (up to 4 pages), or tiny papers/extended abstracts (up to 2 pages). Authors expected to serve as reviewers.
#evals #safety-research #measurement #evaluation-methodology ACL-workshop two-day
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.75,
"topic_relevance": 0.85,
"time_proximity": 0.8088050314465409,
"community_signal": 0.6,
"speaker_org_signal": 0.65,
"is_deadline_open": 0,
"source_count": 1
}
Foresight Institute flagship event gathering leading scientists, entrepreneurs, funders, and policymakers to explore the frontiers of science and technology. Multiple tracks including AI safety topics.
#frontier-science #ai-safety
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.949685534591195,
"community_signal": 0.6,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.64 Apps closed Jan 15, 2026
First global academic programme dedicated to AI evaluation, combining technical depth with policy and governance. 150-hour expert diploma covering capabilities and safety evaluations. Includes 90 hours online instruction, 20 hours hands-on courses, and 40-hour in-person capstone week in Valencia. Faculty from Cambridge, Stanford, Princeton, EU AI Office, UK AI Safety Institute, FAR AI, Apollo Research. Targets professionals joining AI Safety Institutes, government agencies, and industry research labs.
#evals #safety-research #governance #policy fellowship evals academic hybrid diploma funded prestigious
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0,
"community_signal": 0.7,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}
Technical workshop focusing on secure AI topics. Brings together top talent to solve the bottlenecks holding back progress in secure and sovereign AI systems.
#governance #evals #alignment #safety-research #security #control #technical-safety #ai-security technical Berlin
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.75,
"topic_relevance": 0.8,
"time_proximity": 0.7333333333333334,
"community_signal": 0.6,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
Three-day event bringing researchers and engineers together to prototype tools for verifying AI-generated code. Co-organized with Atlas Computing. Top teams can apply to a four-month SPS Fellowship following the hackathon. Hybrid format with online and in-person participation options.
#alignment #control #safety-research #evals #security #automated-research #safety-applications #technical-safety #evaluations #governance #verification #code-safety hybrid sprint
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.8857142857142857,
"community_signal": 0.6,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
9th annual Cognitive Computational Neuroscience conference. Primarily single-track, featuring keynote speakers and oral presentations. Paper submissions are presented as posters, with select papers chosen for oral presentation. Relevant to AAE for predictive-coding, metacognition, and signal-detection-theory measurement work applicable to LLMs.
#measurement-science #cognitive-science #interpretability #neuroscience
Salience signals
{
"type_weight": 1,
"source_trust": 0.8,
"topic_relevance": 0.6,
"time_proximity": 0.6528301886792452,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_deadline_open": 0,
"source_count": 1
}
Two-day conference on global cooperation for cybersecurity resilience and stability. Organized by the United Nations Institute for Disarmament Research (UNIDIR). Addresses international frameworks for cyber governance and security cooperation.
#governance #cyber-security UN disarmament
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.6,
"time_proximity": 0.4,
"community_signal": 0.5,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}
Two-week intensive summer school at Columbia University covering machine learning topics including mechanistic interpretability, alignment/safety, RAG & agents, and LLM systems. Approximately 200 PhD students participate alongside faculty and industry speakers. In-scope due to dedicated alignment and mechanistic interpretability tracks.
#interpretability #alignment
Salience signals
{
"type_weight": 0.35,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.869182389937107,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_deadline_open": 0,
"source_count": 1
}
A free, accessible workshop hosted by AI Safety Awareness Group Oakland exploring AI's trajectory and societal impact. No technical background required. Features live demonstrations of current AI systems, interactive forecasting activities, and discussions about AI's implications for work, relationships, and society over the next 1-5 years.
#governance #alignment
Salience signals
{
"type_weight": 0.45,
"source_trust": 0.7,
"topic_relevance": 0.7,
"time_proximity": 0.5714285714285714,
"community_signal": 0.6,
"speaker_org_signal": 0.5,
"is_deadline_open": 0,
"source_count": 1
}
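A note on the ★ scores: each score is presumably derived from that entry's Salience signals block, but this listing does not publish the aggregation formula. The sketch below is only an illustration of one plausible scheme: a weighted average of the six soft signals, dampened when the deadline has closed and nudged upward when more than one source confirms the entry. The weights, the dampening factor, and the source bonus are assumptions chosen for illustration, not values recovered from the data.

# Illustrative sketch only: the AAE tracker's real scoring formula is
# not published in this listing. All weights below are assumed.
SIGNALS = {
    "type_weight": 1,
    "source_trust": 0.85,
    "topic_relevance": 0.4,
    "time_proximity": 0.17162162162162165,
    "community_signal": 0.5,
    "speaker_org_signal": 0.3,
    "is_deadline_open": 1,
    "source_count": 1,
}

# Assumed relative importance of the six soft signals (sums to 1.0).
WEIGHTS = {
    "type_weight": 0.10,
    "source_trust": 0.15,
    "topic_relevance": 0.30,
    "time_proximity": 0.15,
    "community_signal": 0.15,
    "speaker_org_signal": 0.15,
}

def salience(signals: dict) -> float:
    # Weighted average of the soft signals.
    score = sum(WEIGHTS[k] * signals[k] for k in WEIGHTS)
    # Closed-deadline entries above still carry nonzero stars, so a
    # closed deadline plausibly dampens rather than zeroes the score.
    if not signals["is_deadline_open"]:
        score *= 0.9
    # A second confirming source might add a small bonus
    # (cf. the source_count: 2 entries above).
    if signals["source_count"] > 1:
        score = min(1.0, score + 0.05)
    return round(score, 2)

print(salience(SIGNALS))  # 0.49 under these assumed weights

Running this on the first entry's signals yields 0.49 rather than the displayed 0.41, which confirms the tracker's real weights differ from the illustrative ones; the point is only the shape of the computation, not its exact parameters.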