conference
★ 0.69 CFP closes May 4, 2026
Major ML conference with three satellite locations (Sydney, Atlanta, Paris). Abstract submissions due May 4, full papers May 6, workshop proposals June 6. Safety-related workshops are typically included. Track the workshop list for mechanistic interpretability, alignment, and safety topics when workshop decisions are announced (Sept 29).
#interpretability #evals #alignment #ml-research #safety-research #mechanistic-interpretability major-conference multi-location
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.7,
"time_proximity": 0.17081081081081081,
"community_signal": 0.8,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}
workshop
★ 0.97 CFP closes May 8, 2026
Workshop motivated by the fact that neural networks continue to grow in influence and capability while understanding the mechanisms behind their decisions remains a fundamental scientific challenge. Brings together researchers to discuss advances in mechanistic interpretability, focused on analyzing model internals to understand behavior and the underlying computation. Submission deadline May 8, 2026. Organized by researchers from Google DeepMind, Harvard, Northeastern, Imperial College London, Oxford, and others.
#interpretability #alignment #circuit-tracing #sparse-autoencoders #mechanistic-interpretability #instrument-science ICML workshop mechanistic-interpretability ICML-workshop third-iteration
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.7685534591194969,
"community_signal": 0.9,
"speaker_org_signal": 0.9,
"is_deadline_open": 1,
"source_count": 1
}
workshop
★ 0.85 CFP closes May 8, 2026
Workshop exploring how to integrate diverse, often conflicting human values and perspectives into AI alignment frameworks, addressing the gap left by current methods that struggle to capture the full spectrum of real-world values across diverse populations. 4-8 page submissions, non-archival. Topics include philosophical frameworks, ML methods for annotation disagreements, HCI design, consensus-building, policy implications, and real-world applications.
#alignment #governance #pluralistic-alignment #pluralistic-values #measurement-science ICML-workshop interdisciplinary
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.85,
"time_proximity": 0.7635220125786164,
"community_signal": 0.8,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.94 Apps close May 10, 2026
Mentorship programme pairing teams of 2-4 with experienced mentors to produce published AI safety research. 1-week in-person kick-off (July 13-19 or 20-26), followed by 8-10 weeks of remote part-time work (Aug-Sept). $2,000+ compute budget, Claude Max access, travel funding and accommodation, dedicated research manager, office space and catering. 8-15+ hours/week commitment. Research areas: control, interpretability, evaluations, governance/policy. Operated by Cambridge AI Safety Hub.
#alignment #technical-safety #mentorship #control #interpretability #evals #governance
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.8,
"topic_relevance": 1,
"time_proximity": 0.7534591194968554,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
workshop
★ 0.92 Apps close May 10, 2026
5-week project-based course for engineers and early researchers to work with an AI safety expert on a contribution to AI safety research or engineering. Includes mentorship, regular check-ins, and a published write-up. Covers alignment, mechanistic interpretability, evaluations, red-teaming, AI control, and scalable oversight.
#alignment #interpretability #evals
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.8,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 1.00 Reg closes May 13, 2026
Free one-day AI safety conference. Third iteration, and the first held in the UK. Organized by the Oxford Martin AI Governance Initiative and Noeon Research. Registration is open on Luma. Welcomes attendees from all backgrounds regardless of prior research experience. Paper submissions accepted. Previous content available on YouTube.
#alignment #governance #safety-research #evals #interpretability conference technical Oxford free one-day
Salience signals
{
"type_weight": 1,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.6857142857142857,
"community_signal": 0.9,
"speaker_org_signal": 0.9,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.91 Apps close May 17, 2026
Three-week ML upskilling bootcamp for AI safety focusing on interpretability and RL, based on the ARENA curriculum. Run by the Cambridge Boston Alignment Initiative in Cambridge, MA. Provides housing, meals, 24/7 office access in Harvard Square, dedicated teaching assistants, and travel support. Prerequisites include Python familiarity and comfort with multivariable calculus and linear algebra concepts. Applications for Summer 2026 are open.
#interpretability #alignment #technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.8,
"community_signal": 0.8,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.94 Apps close May 20, 2026
Major Effective Altruism conference with significant AI safety attendance. Run by the Centre for Effective Altruism. Designed for individuals with a solid understanding of core EA ideas who are actively applying them. One application form covers all 2026 EA Global events. The AI safety community is heavily represented.
#alignment #governance effective-altruism
Salience signals
{
"type_weight": 1,
"source_trust": 0.8,
"topic_relevance": 0.7,
"time_proximity": 0.979874213836478,
"community_signal": 1,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 2
}
hackathon
★ 0.94 Reg closes May 21, 2026
Three-day hackathon bringing researchers and engineers together to prototype tools for verifying what AI systems write. Co-organized with Atlas Computing. Top teams are invited to apply to the four-month SPS Fellowship that follows. Focus on program synthesis security and verification of AI-generated code.
#alignment #control #safety-research #evals #security #automated-research #safety-applications #technical-safety #evaluations #governance hybrid sprint
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.9142857142857143,
"community_signal": 0.8,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 2
}
fellowship
★ 0.78 Apps close May 22, 2026
AI safety research fellowship at LISA London. £6,000-£8,000 stipend (£8,000 for Senior Fellows), travel covered, £2,000 housing support for non-London residents, meals and compute included. Four-stage selection. 70-90% extension rate in recent cohorts. 129 alumni across 7 completed cohorts. Open to anyone 18+ from diverse backgrounds who is committed to safe AI development.
#alignment #governance #technical-safety #mechanistic-interpretability
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.75,
"topic_relevance": 1,
"time_proximity": 0.8238993710691824,
"community_signal": 0.8,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}
reading-group
★ 0.75 Apps close May 22, 2026
8-week virtual reading group covering AI's trajectory, misalignment, technical safety, policy, and careers in the field. Run in small groups with MAIA facilitators. 2 hours per week commitment. Free. No prior AI experience required, though preference is given to MIT students. Curriculum structured around the AI Safety Fundamentals program.
#alignment #governance #technical-safety
Salience signals
{
"type_weight": 0.45,
"source_trust": 0.8,
"topic_relevance": 0.85,
"time_proximity": 0.9142857142857143,
"community_signal": 0.7,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.98 Apps close May 24, 2026
4-5 week in-person ML bootcamp focused on AI safety at the London Initiative for Safe AI (LISA). Provides talented individuals with ML engineering skills, community, and confidence to contribute to technical AI safety. Covers travel, visas, and accommodation, and provides meals. Alumni go on to MATS, LASR, and Pivotal, and to positions at Apollo Research, METR, and UK AISI.
#alignment #interpretability #technical-safety #mechanistic-interpretability
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 1,
"community_signal": 1,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 2
}
conference
★ 0.80 Early-bird ends May 24, 2026
Major ML conference. Expo and tutorials July 6, main conference July 7-9, workshops July 10-11. Workshops announced April 6. Author notification April 30; early registration ends May 24. Track safety-related workshops (mechanistic interpretability, alignment) when the full workshop list is published.
#alignment #interpretability #evals #ml-research #mechanistic-interpretability major-conference
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.7,
"time_proximity": 0.7886792452830189,
"community_signal": 0.8,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}
hackathon
★ 0.85 Apps close May 26, 2026
Five-day intensive programme taking AI safety founders from idea to funding. Successful pitches receive £50k in equity-free seed funding. Part of BlueDot Impact's incubator and rapid-funding initiatives supporting concrete AI safety work.
#alignment #governance startup incubator
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.9647798742138365,
"community_signal": 0.75,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.97 Apps close Jun 1, 2026
5-day multi-track unconference bringing together 100+ researchers focused on theoretical AI alignment. Topics include Singular Learning Theory, Agent Foundations, Causal Incentives, Computational Mechanics. Participants can propose and lead own sessions. Free to attend. Financial assistance for travel/accommodation available on needs basis. Application deadline June 1. Organized by Iliad (umbrella for applied mathematics research in alignment).
#alignment #theoretical-foundations #agent-foundations #formal-foundations
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 1,
"time_proximity": 0.6477987421383647,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.69 CFP closes Jun 1, 2026
9th annual Cognitive Computational Neuroscience conference bringing together researchers in cognitive science, neuroscience, and artificial intelligence. Primarily single-track, featuring keynote speakers and oral presentations. Paper submissions are presented as posters, with select submissions chosen for oral presentations. Community-proposed programming includes Generative Adversarial Collaborations (GACs) and Keynote-and-Tutorial presentations. AAE attendees follow this conference for predictive-coding, metacognition, and signal-detection-theory measurement work applicable to LLMs.
#measurement-science #cognitive-science #interpretability
Salience signals
{
"type_weight": 1,
"source_trust": 0.8,
"topic_relevance": 0.7,
"time_proximity": 0.6477987421383647,
"community_signal": 0.6,
"speaker_org_signal": 0.6,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.60 CFP closes Jun 1, 2026
29th annual meeting of the Association for the Scientific Study of Consciousness, an academic society promoting rigorous research directed toward understanding the nature, function, and underlying mechanisms of consciousness. Members come from cognitive science, medicine, neuroscience, philosophy, and other relevant disciplines. AAE attendees follow this meeting for measurement/metacognition sessions relevant to LLM interpretability. Registration and submissions are currently open.
#consciousness #measurement-science #cognitive-science
Salience signals
{
"type_weight": 1,
"source_trust": 0.6,
"topic_relevance": 0.65,
"time_proximity": 0.8188679245283019,
"community_signal": 0.5,
"speaker_org_signal": 0.5,
"is_deadline_open": 1,
"source_count": 1
}
hackathon
★ 0.76 Reg closes Jun 6, 2026
One-day hackathon focused on creating AI risk content and educational materials. Part of BlueDot Impact's community-building initiatives to improve public understanding of AI safety challenges.
#alignment #governance content-creation
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.939622641509434,
"community_signal": 0.7,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.73 CFP closes Jun 15, 2026
A two-day interdisciplinary forum grounded in the science of AI safety, bringing together researchers, policymakers, and practitioners from various sectors to discuss Australia's role in AI governance and safety. Features keynote presentations, panel discussions, parallel workshops, lightning talks, and structured networking. Organized by Gradient Institute with backing from the Australian Government Department of Industry, Science and Resources.
#evals #governance #evaluation #measurement-science #alignment #evaluations #policy interdisciplinary
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.8,
"time_proximity": 0.7836477987421384,
"community_signal": 0.6,
"speaker_org_signal": 0.5,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.96 Apps close Jun 30, 2026
10-week fellowship in Cambridge welcoming researchers in Technical AI Safety, AI Governance, and Technical AI Governance. Individual research projects with weekly mentorship from expert researchers, 30+ events, access to compute resources. Competitive stipend, meals during working hours, transport/visas/lodging fully covered. Open to talented individuals from around the world at any career stage. Community-building with dedicated Community Health Manager.
#alignment #governance #technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 1,
"time_proximity": 0.7886792452830189,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
workshop
★ 1.00 Reg closes Jul 5, 2026
FAR.AI alignment workshop series convening top ML researchers, AI practitioners, and policy experts in Seoul. Core topics include model evaluations, interpretability, robustness, scalable oversight, and AI governance. Part of FAR.AI's international workshop series following the London workshop.
#alignment #governance #control #interpretability #technical-safety #agi-risk #evals
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.7886792452830189,
"community_signal": 0.85,
"speaker_org_signal": 0.95,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.91 Apps close Jul 5, 2026
Three-week ML upskilling bootcamp for AI safety focusing on interpretability and RL, based on the ARENA curriculum. Run by the Cambridge Boston Alignment Initiative in Manhattan. Provides housing, meals, 24/7 office access, dedicated teaching assistants, and travel support. Prerequisites include Python familiarity and comfort with multivariable calculus and linear algebra concepts. Applications for Summer 2026 are open.
#interpretability #alignment #technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.7886792452830189,
"community_signal": 0.8,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.80 CFP closes Jul 15, 2026
Summit bringing together academic leaders, entrepreneurs, AI experts, venture capitalists, and policymakers to discuss the future of AI and Agentic AI. Call for Papers and Startup Spotlight applications open. In-person and livestream available.
#alignment #evals #governance
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.6477987421383647,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}
workshop
★ 0.69 Reg closes Jul 17, 2026
Technical workshop bringing together top talent to solve the bottlenecks holding back progress at the frontier of science and technology. Part of Foresight Institute's offerings. Focus on secure and sovereign AI systems.
#governance #evals #alignment #safety-research #security #control #technical-safety technical Berlin
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.75,
"topic_relevance": 0.75,
"time_proximity": 0.7283018867924529,
"community_signal": 0.6,
"speaker_org_signal": 0.65,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.77 Early-bird ends Jul 25, 2026
Fellowship running September 14, 2026 through February 5, 2027, supporting researchers pursuing rigorous, high-impact research on safety and alignment of advanced AI systems. Fellows work in peer group setting with mentorship from OpenAI staff. Workspace offered at Constellation in Berkeley, remote participation permitted. Provides monthly stipends, computational resources, ongoing guidance. Requires meaningful research deliverables (papers, benchmarks, datasets). Welcomes diverse academic backgrounds. Partnering with Constellation.
#alignment #safety-research #governance #evals #control #safety-evals #robustness #oversight #safety-evaluation remote-allowed API-credits
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.43647798742138366,
"community_signal": 0.85,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.87 Apps close Aug 9, 2026
Three-week ML upskilling bootcamp for AI safety focusing on interpretability and RL, based on the ARENA curriculum. Run by the Cambridge Boston Alignment Initiative in Cambridge, MA. Provides housing, meals, 24/7 office access in Harvard Square, dedicated teaching assistants, and travel support. Prerequisites include Python familiarity and comfort with multivariable calculus and linear algebra concepts. Applications for Summer 2026 are open.
#interpretability #alignment #technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.6125786163522012,
"community_signal": 0.8,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.96 Apps closed Mar 1, 2026
Three-month bipartisan fellowship designed to launch or accelerate impactful careers in American AI governance and policy. Participants deepen understanding of the field, connect with network of experts, and build skills and professional profile. $21,000 stipend. Alumni have secured positions at leading AI companies (DeepMind, OpenAI, Anthropic).
#governance #policy fellowship policy governance DC
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.929559748427673,
"community_signal": 0.9,
"speaker_org_signal": 1,
"is_deadline_open": 0,
"source_count": 2
}
fellowship
★ 0.96 Apps closed Jan 4, 2026
Three-month fellowship where fellows conduct independent research on AI governance topic of their choice with mentorship from leading experts. £12,000 stipend. GovAI was founded to help decision-makers navigate the transition to advanced AI through rigorous research and talent fostering. Alumni have secured positions at DeepMind, OpenAI, Anthropic.
#governance #policy fellowship research governance London
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.929559748427673,
"community_signal": 0.9,
"speaker_org_signal": 1,
"is_deadline_open": 0,
"source_count": 2
}
fellowship
★ 0.94 Apps closed Jan 18, 2026
12-week research fellowship pairing aspiring alignment researchers with leading mentors. $15,000 stipend, $12,000 compute budget, private housing and catered meals. In-person cohorts in Berkeley and London. Successful fellows may extend research for 6-12 months. Open to diverse backgrounds with strong technical aptitude and motivation toward AI safety.
#alignment #mechanistic-interpretability #governance
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 1,
"time_proximity": 0.9647798742138365,
"community_signal": 1,
"speaker_org_signal": 1,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.93 Apps closed Apr 12, 2026
Nine-week AI safety research fellowship for 30 fellows with $10,000 stipend, housing in Harvard dorms, 24/7 office access in Harvard Square. Weekly 1-2 hour individual mentorship from researchers at Harvard, MIT, Northeastern. Up to $10,000 in compute credits per fellow, conference submission support, weekly speaker events, networking, workshops. Rolling application with 4-stage process: form, 15-min interview, mentor-specific tasks, mentor interview. International students with OPT/CPT eligible, full in-person participation required (18+ only).
#alignment #interpretability #governance #evals #biosecurity #safety-research fellowship research Cambridge Harvard stipend housing compute-credits
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.9245283018867925,
"community_signal": 0.75,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.91 Apps closed Mar 24, 2026
Three-month research program investigating societal impacts of advanced AI and institutions/policies for responding to challenges. In-person at CAIS San Francisco offices with opportunities to participate in events with researchers from CAIS and external Bay Area institutions, including regular guest speakers, workshops, and social gatherings. Application deadline March 24.
#governance #policy #alignment fellowship governance policy research
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.9647798742138365,
"community_signal": 0.9,
"speaker_org_signal": 1,
"is_deadline_open": 0,
"source_count": 1
}
workshop
★ 0.88 Apps closed May 1, 2026
Workshop on secure and verifiable AI development, bringing together researchers, builders, and funders across ML, hardware security, systems, cryptography, and computer security. Focuses on verification techniques for AI safety. Colocated with IEEE Security and Privacy conference. Organized by FAR.AI.
#evals #alignment #safety-research #security workshop verification cryptography hardware-security
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.9428571428571428,
"community_signal": 0.65,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.86 Apps closed May 3, 2026
Four-month AI safety research fellowship. Fellows receive weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD, funding for compute (~$15k/month), and close mentorship from Anthropic researchers. Designed to accelerate AI safety research and foster research talent. Application deadline May 3, 2026.
#alignment #technical-safety #interpretability
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.8138364779874214,
"community_signal": 0.9,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 1
}
United Nations Institute for Disarmament Research conference addressing artificial intelligence alongside international security and ethical considerations. International AI policy and disarmament-related conferences are in scope given the safety community's broader governance interests.
#governance #policy #international-security
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.879245283018868,
"community_signal": 0.7,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.78 Apps closed Apr 30, 2026
Four-month AI safety research fellowship. Fellows receive weekly stipend of 3,850 USD / 2,310 GBP / 4,300 CAD, funding for compute (~$15k/month), and close mentorship from Anthropic researchers. Designed to accelerate AI safety research and foster research talent.
#alignment #technical-safety #interpretability
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.3666666666666667,
"community_signal": 0.9,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 1
}
Flagship event gathering leading scientists, entrepreneurs, funders, and policymakers to explore the frontiers of science and technology. Includes an AI safety track. Part of Foresight Institute's Vision Weekend series; the 40-year-old organization focuses on transformative technology.
#governance #frontier-science #technical-safety
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.9446540880503145,
"community_signal": 0.65,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}
Foresight Institute flagship event gathering leading scientists, entrepreneurs, funders, and policymakers to explore the frontiers of science and technology, including AI safety. The 40-year-old organization focuses on transformative technology and runs a dedicated AI safety track.
#alignment #governance frontier-science multi-track
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.9446540880503145,
"community_signal": 0.65,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}
Second Workshop on Technical AI Governance Research at ICML 2026, focusing on technical approaches to AI governance, policy, and regulation. Part of the main conference workshop track.
#governance #policy ICML workshop governance technical-governance
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7383647798742139,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
Workshop at ICML 2026 focused on identifying, diagnosing, and fixing failure modes in agentic AI systems. Covers reproducible triggers for failures, diagnostic tracing methods, and verified repair approaches. Highly relevant to AI safety and robustness.
#evals #alignment ICML workshop failure-modes agents diagnostics
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7383647798742139,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
Second Workshop on Agents in the Wild focusing on safety and security of AI agents deployed in real-world environments. Addresses challenges in ensuring safe and secure operation of autonomous agents. Part of the ICML 2026 workshop track.
#alignment #evals ICML workshop agents safety security
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7333333333333334,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.71 Apps closed May 3, 2026
Fully funded, in-person program pairing senior advisors with emerging talent on 5-month technical, governance, strategy, and field-building projects. $8,400 monthly stipend, ~$15K/month research budget for empirical fellows (compute), workspace at Berkeley research center, weekly mentorship from experts, visa support for international applicants. Applications for Fall 2026 cohort closed May 3rd. Strong placement rates at safety orgs.
#alignment #governance #technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.43647798742138366,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
Weekly AI safety evaluations paper reading club hosted by BlueDot Impact. Meets every Tuesday at 4:00 PM UTC to discuss evaluation methodologies, safety benchmarks, and measurement frameworks. Open to all interested in AI safety evals research.
#evals #alignment weekly paper-discussion
Salience signals
{
"type_weight": 0.45,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.4285714285714286,
"community_signal": 0.7,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.69 Apps closed Jan 14, 2026
Part-time remote mentorship program (5-40 hours weekly) matching aspiring researchers with experienced mentors from Google DeepMind, RAND, Apollo Research, MATS, UK AISI, etc. for 3-month projects. Mentor applications Nov 5-Dec 5, mentee applications Dec 17-Jan 14. Culminates at Demo Day with prizes and presentations to representatives from METR, Redwood Research, GovAI. Optional continuation after May 16. Open to undergrads, grad students, and professionals; no prior research experience required.
#alignment #governance #evals #safety-research #interpretability #technical-safety remote mentorship
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 1,
"time_proximity": 0,
"community_signal": 0.85,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}
Weekend AI safety hackathon focused on Global South participation and perspectives. Hybrid format allowing both online and in-person participation. Organized by Apart Research as part of their 55+ research sprints series with 6,000+ participants across 200+ global locations.
#alignment #safety-research #evals #governance
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.8,
"time_proximity": 0.8490566037735849,
"community_signal": 0.7,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.65 Apps closed Jan 15, 2026
World's first academic programme dedicated to AI evaluation combining technical depth with policy and governance perspectives. 150 hours total: 90 hours online (lectures, networking, activities), 20 hours hands-on courses, 40 hours in-person capstone week in Valencia. Cohort of 40 top global participants. Fully funded scholarships available. Graduates receive 15 ECTS Expert Diploma from ValgrAI. Faculty from Cambridge, Stanford, Princeton, EU AI Office, UK AI Safety Institute, FAR AI, Apollo Research. Funded by Coefficient Giving.
#evals #safety-research #governance #policy fellowship evals academic hybrid diploma funded prestigious
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0,
"community_signal": 0.75,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}
workshop
★ 0.64 CFP closed Mar 12, 2026
Two-day workshop focused on tensions between model developers and evaluation researchers, surfacing practical insights from across the evaluation ecosystem. Organized by EvalEval Coalition, hosted by Hugging Face, University of Edinburgh, and EleutherAI. Accepts full papers (6-8 pages), short papers (up to 4 pages), and tiny papers (up to 2 pages). Two-way anonymized review process.
#evals #safety-research #measurement ACL-workshop two-day
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.75,
"topic_relevance": 0.8,
"time_proximity": 0.8037735849056604,
"community_signal": 0.6,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
Two-day conference on global cooperation for cybersecurity resilience and stability. Organized by United Nations Institute for Disarmament Research. Addresses international frameworks for cyber governance and security cooperation.
#governance #cyber-security UN disarmament
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.6,
"time_proximity": 0.4,
"community_signal": 0.5,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}
Two-week intensive summer school at Columbia University covering machine learning topics including mechanistic interpretability, alignment/safety, RAG & agents, and LLM systems. Approximately 200 PhD students participate alongside faculty and industry speakers. In-scope due to dedicated alignment and mechanistic interpretability tracks.
#interpretability #alignment
Salience signals
{
"type_weight": 0.35,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.869182389937107,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_deadline_open": 0,
"source_count": 1
}
A free, accessible workshop hosted by AI Safety Awareness Group Oakland exploring AI's trajectory and societal impact. No technical background required. Features live demonstrations of current AI systems, interactive forecasting activities, and discussions about AI's implications for work, relationships, and society over the next 1-5 years.
#governance #alignment
Salience signals
{
"type_weight": 0.45,
"source_trust": 0.7,
"topic_relevance": 0.7,
"time_proximity": 0.5714285714285714,
"community_signal": 0.6,
"speaker_org_signal": 0.5,
"is_deadline_open": 0,
"source_count": 1
}
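Each entry above carries the same eight-field salience-signal block alongside its ★ score. This digest does not state how the score is derived from the signals, so the sketch below is a minimal, hypothetical aggregation (an equal-weight average of the six graded signals plus small bonuses for an open deadline and corroborating sources), included only to illustrate how such a roll-up could work; the function name salience_score and every weight are assumptions, not the digest's actual method.

# Hypothetical sketch, not the digest's real formula: one way the
# salience signals shown above could be rolled up into a single score.
# All weights and bonus terms below are illustrative assumptions.

def salience_score(signals: dict) -> float:
    graded = [
        "type_weight",
        "source_trust",
        "topic_relevance",
        "time_proximity",
        "community_signal",
        "speaker_org_signal",
    ]
    # Assumed equal weighting over the six graded signals.
    base = sum(signals[k] for k in graded) / len(graded)
    # Assumed small bonuses for an open deadline and for extra sources.
    bonus = 0.05 * signals.get("is_deadline_open", 0)
    bonus += 0.05 * min(signals.get("source_count", 1) - 1, 2)
    return round(min(base + bonus, 1.0), 2)

# Example: the first entry's signals give roughly 0.76 under these assumed
# weights, while the digest lists 0.69, so the real weighting clearly differs.
print(salience_score({
    "type_weight": 1,
    "source_trust": 0.85,
    "topic_relevance": 0.7,
    "time_proximity": 0.17081081081081081,
    "community_signal": 0.8,
    "speaker_org_signal": 0.75,
    "is_deadline_open": 1,
    "source_count": 1,
}))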