conference
0.96 Reg closes May 13, 2026
Third annual Technical AI Safety Conference. Free admission. Registration open via Luma. Organized by Oxford Martin AI Governance Initiative and Noeon Research. Sponsored by MATS, supported by IASEAI and Foresight Institute. First UK edition of the conference.
#alignment #governance #technical-safety #control #evals
Salience signals
{
"type_weight": 1,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.5428571428571429,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}

fellowship
0.83 Apps close May 15, 2026
Three-week ML upskilling bootcamp for AI safety, focusing on interpretability and RL, based on ARENA curriculum. Run by Cambridge Boston Alignment Initiative. Includes housing, meals, 24/7 office access, and dedicated teaching assistants. Requires familiarity with Python and comfort with multivariable calculus and linear algebra.
#interpretability #alignment #technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.6571428571428571,
"community_signal": 0.7,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}

conference
0.84 Apps close May 20, 2026
EA Global conference in London featuring heavy AI safety attendance and programming. Organized by Centre for Effective Altruism. Travel support available for those who need it. In scope as a key social and community gathering for the AI safety community.
#alignment #governance #evals effective-altruism
Salience signals
{
"type_weight": 1,
"source_trust": 0.8,
"topic_relevance": 0.75,
"time_proximity": 0.9714285714285714,
"community_signal": 0.8,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}

hackathon
0.75 Reg closes May 21, 2026
Three-day hackathon bringing together researchers and engineers to prototype tools for verifying AI-generated code. Co-organized with Atlas Computing. Top teams are invited to apply to the four-month SPS Fellowship that follows.
#alignment #control #safety-research #evals #security #automated-research #safety-applications #technical-safety #evaluations #governance #verification #code-safety #adversarial-robustness #scaling-infrastructure hybrid sprint
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.8,
"time_proximity": 0.7714285714285715,
"community_signal": 0.65,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}

conference
0.79 Early-bird ends May 24, 2026
Premier gathering of professionals dedicated to the advancement of machine learning. July 6: Expo/Tutorials, July 7-9: Main Conference, July 10-11: Workshops. Early registration pricing available until May 24. Check workshop list for safety-related workshops including Mechanistic Interpretability and Pluralistic Alignment.
#alignment #interpretability #evals #ml-research #mechanistic-interpretability #governance #machine-learning major-conference
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.7,
"time_proximity": 0.8138364779874214,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}

conference
0.92 Apps close Jun 1, 2026
Five-day, multi-track unconference with 100+ researchers. Topics include Singular Learning Theory, Agent Foundations, Causal Incentives, Computational Mechanics, Safety-by-Debate, and Scalable Oversight. Free to attend, with limited travel/accommodation reimbursement available. Venue has onsite bedrooms available for booking. Organized by Iliad, an umbrella organization for applied mathematics research in alignment.
#alignment #theoretical-foundations #agent-foundations #formal-foundations #formal-methods #control
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.6729559748427673,
"community_signal": 0.8,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}

conference
0.67 CFP closes Jun 6, 2026
NeurIPS 2026 across three satellite locations: Sydney, Australia; Atlanta, Georgia; and Paris, France. Main conference papers, evaluations & datasets track, position papers, reproducibility submissions, competitions, workshops, and tutorials. General Chairs: Hsuan-Tien Lin (National Taiwan University) and Razvan Pascanu (Google DeepMind, Mila). Check workshop list for safety-related tracks.
#interpretability #evals #alignment #ml-research #safety-research #mechanistic-interpretability #machine-learning major-conference multi-location
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.7,
"time_proximity": 0.17486486486486488,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}

conference
0.79 Reg closes Jun 7, 2026
United Nations Institute for Disarmament Research conference on AI governance and security implications. In scope as an international AI policy and disarmament conference relevant to the safety community's broader governance interests. 20K+ event participants annually across UNIDIR conferences.
#governance #sociotechnical-threats #policy
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.9044025157232705,
"community_signal": 0.6,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}

conference
0.80 CFP closes Jun 15, 2026
Two-day interdisciplinary forum exploring technical AI safety, governance frameworks, risk assessment, evaluations and testing. Features keynotes, workshops, panels, and networking sessions bringing together researchers, policymakers, and industry practitioners. Organized by Gradient Institute (Australian charity) with support from Australian Government Department of Industry, Science and Resources.
#evals #governance #evaluation #measurement-science #alignment #evaluations #policy #technical-safety interdisciplinary
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.85,
"time_proximity": 0.8088050314465409,
"community_signal": 0.6,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}

fellowship
1.00 Apps close Jun 30, 2026
Four-month AI safety research fellowship. Fellows receive $3,850 USD weekly stipend, ~$15k monthly compute funding, and close mentorship from Anthropic researchers. Research areas include scalable oversight, adversarial robustness, and other safety topics. No PhD required. Successful fellows come from physics, mathematics, computer science, cybersecurity, and other quantitative backgrounds.
#alignment #technical-safety #interpretability #evals #control
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.7433962264150944,
"community_signal": 0.9,
"speaker_org_signal": 0.95,
"is_deadline_open": 1,
"source_count": 2
}

fellowship
0.87 Apps close Jun 30, 2026
10-week fellowship for researchers and entrepreneurs focused on mitigating risks from frontier AI. Welcomes talented individuals at any career stage motivated to contribute to AI safety and governance research. Comprehensive support including competitive stipend, meals during working hours, transportation coverage, visa assistance, and lodging. Research focus: Technical AI Safety, AI Governance, and Technical AI Governance.
#alignment #governance #technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.8138364779874214,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}

workshop
0.73 Reg closes Jul 2, 2026
Workshop on tensions between model developers and evaluation researchers, addressing methodological rigor, sociotechnical perspectives, and community collaborations in AI evaluation practices. Accepts full papers (6-8 pages), short papers (up to 4 pages), and tiny papers (up to 2 pages). In-person only, at least one author per accepted paper must attend. Includes shared task for building standardized LLM evaluation database.
#evals #safety-research #measurement #evaluation-methodology ACL-workshop two-day
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.8,
"topic_relevance": 0.9,
"time_proximity": 0.8289308176100629,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}

fellowship
0.86 Apps close Jul 3, 2026
Three-week ML upskilling bootcamp for AI safety, focusing on interpretability and RL, based on ARENA curriculum. Run by Cambridge Boston Alignment Initiative. Includes housing, meals, and dedicated teaching assistants. Requires familiarity with Python and comfort with multivariable calculus and linear algebra.
#interpretability #alignment #technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.8138364779874214,
"community_signal": 0.7,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}

workshop
0.95 Reg closes Jul 5, 2026
Part of FAR.AI's ongoing Alignment Workshop series, bringing together global leaders in academia and industry to discuss AGI risks and mitigation strategies. Organized by FAR.AI.
#alignment #governance #control #interpretability #technical-safety #agi-risk #evals #agi-safety
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.8138364779874214,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}

fellowship
0.82 Apps close Aug 7, 2026
Three-week ML upskilling bootcamp for AI safety, focusing on interpretability and RL, based on ARENA curriculum. Run by Cambridge Boston Alignment Initiative. Includes housing, meals, 24/7 office access, and dedicated teaching assistants. Requires familiarity with Python and comfort with multivariable calculus and linear algebra.
#interpretability #alignment #technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.6377358490566037,
"community_signal": 0.7,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}

workshop
0.99 CFP closed May 8, 2026
Annual mechanistic interpretability workshop at ICML. Following workshops at ICML 2024 and NeurIPS 2025, this edition brings together diverse perspectives from the community to discuss recent advances, build common understanding and chart future directions. Submit on OpenReview.
#interpretability #alignment #circuit-tracing #sparse-autoencoders #mechanistic-interpretability #instrument-science #adversarial-robustness ICML workshop mechanistic-interpretability ICML-workshop third-iteration
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.7937106918238994,
"community_signal": 0.85,
"speaker_org_signal": 0.9,
"is_deadline_open": 1,
"source_count": 1
}

fellowship
0.96 Apps closed Mar 1, 2026
Three-month bipartisan fellowship designed to launch or accelerate impactful careers in American AI governance and policy. Participants deepen their understanding of the field, connect with a network of experts, and build skills and a professional profile. $21,000 stipend. Alumni have secured positions at leading AI companies (DeepMind, OpenAI, Anthropic).
#governance #policy fellowship policy governance DC
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.929559748427673,
"community_signal": 0.9,
"speaker_org_signal": 1,
"is_deadline_open": 0,
"source_count": 2
}

fellowship
0.96 Apps closed Jan 4, 2026
Three-month fellowship where fellows conduct independent research on an AI governance topic of their choice, with mentorship from leading experts. £12,000 stipend. GovAI was founded to help decision-makers navigate the transition to advanced AI through rigorous research and fostering talent. Alumni have secured positions at DeepMind, OpenAI, Anthropic.
#governance #policy fellowship research governance London
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.929559748427673,
"community_signal": 0.9,
"speaker_org_signal": 1,
"is_deadline_open": 0,
"source_count": 2
}

fellowship
0.95 Apps closed Apr 12, 2026
Nine-week AI safety research fellowship run by Cambridge Boston Alignment Initiative. Accepts 30 fellows (undergraduate, Master's, PhD students, postdocs, and recent graduates). Includes $10,000 stipend, accommodation in Harvard dorms, meals, workspace access, and up to $10,000 in compute credits. Applications reviewed on rolling basis through four-stage process. International students on OPT/CPT eligible but visa sponsorship not available.
#alignment #interpretability #governance #evals #biosecurity #safety-research fellowship research Cambridge Harvard stipend housing compute-credits
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.9345911949685535,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}

fellowship
0.92 Apps closed Mar 8, 2026
A 4-5 week ML bootcamp with a focus on AI safety, providing talented individuals with the ML engineering skills, community, and confidence to contribute directly to technical AI safety. Covers transformers and interpretability, reinforcement learning, and model evaluations, and concludes with a capstone project. ARENA alumni go on to become MATS scholars, LASR participants, and AI safety engineers at Apollo Research, METR, and UK AISI. First week (May 25-June 1) is an optional review of Neural Network Fundamentals.
#alignment #interpretability #technical-safety #mechanistic-interpretability #evals
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.9714285714285714,
"community_signal": 0.9,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 2
}

workshop
0.92 Apps closed May 10, 2026
5-week project-based course for engineers and early researchers to work with an AI safety expert on a contribution to AI safety research or engineering. Includes mentorship, regular check-ins, and a published write-up. Covers alignment, mechanistic interpretability, evaluations, red-teaming, AI control, and scalable oversight.
#alignment #interpretability #evals
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.8,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}

fellowship
0.88 Apps closed Jan 18, 2026
10-week AI safety research fellowship with optional 6-12 month extension in London. $1,250 weekly stipend plus $2k weekly compute resources. Five research tracks: Technical Governance, Empirical, Policy & Strategy, Theory, and Compute Infrastructure. Applications are closed, but expressions of interest (EOI) are still being collected. Welcomes diverse backgrounds with strong motivation to contribute to AI safety.
#alignment #mechanistic-interpretability #governance #interpretability #evals
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.989937106918239,
"community_signal": 0.9,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}

workshop
0.88 Apps closed May 1, 2026
Workshop on secure and verifiable AI development, bringing together researchers, builders, and funders across ML, hardware security, systems, cryptography, and computer security. Focuses on verification techniques for AI safety. Colocated with IEEE Security and Privacy conference. Organized by FAR.AI.
#evals #alignment #safety-research #security workshop verification cryptography hardware-security
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.9428571428571428,
"community_signal": 0.65,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}

workshop
0.85 CFP closed May 8, 2026
Workshop on Pluralistic AI: Aligning with the Diversity of Human Values at ICML 2026. Submissions should be anonymized papers of 4 to 8 pages, following the ICML 2026 template, via OpenReview. Acceptance notifications on May 22, 2026.
#alignment #governance #pluralistic-alignment #pluralistic-values #measurement-science ICML-workshop interdisciplinary
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7685534591194969,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}

fellowship
0.84 Apps closed Mar 24, 2026
A three-month fellowship for scholars in economics, law, international relations, and adjacent fields to research the societal impacts of rapid AI progress. Fellows work with significant autonomy, defining and pursuing their own research directions on how advanced AI may reshape social, economic, geopolitical, and legal systems. Includes regular guest speaker events and networking with CAIS researchers and Bay Area academics. Inaugural year, building on successful 2023 philosophy fellowship.
#governance #policy #alignment fellowship governance policy research
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.969811320754717,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 2
}

fellowship
0.84 Apps closed May 3, 2026
Part-time research programme pairing teams of 2-4 with experienced mentors to produce published AI safety research. Program includes one-week in-person kick-off at Cambridge AI Safety Hub (July 13-19 or 20-26), followed by 8-10 weeks of remote research culminating in publications. Targets university students and early-career researchers with coding and research experience. CAISH covers travel, accommodation, meals, office space, $2k+ compute budget, and Claude Max access. Mentors include researchers from Redwood Research, Google DeepMind, RAND, and Anthropic.
#alignment #interpretability #evals
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.8,
"topic_relevance": 0.95,
"time_proximity": 0.7584905660377359,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 2
}

Flagship conference by Foresight Institute gathering leading scientists, entrepreneurs, funders, and policymakers to explore frontiers of science and technology. Includes AI safety track among broader frontier science topics. 40-year-old organization focused on transformative technology. Three-day event open for registration.
#alignment #governance frontier-science multi-track
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.8,
"time_proximity": 0.949685534591195,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}

fellowship
0.83 Apps closed Apr 15, 2026
Four-month AI safety research fellowship. Fellows receive $3,850 USD weekly stipend, ~$15k monthly compute funding, and close mentorship from Anthropic researchers. Research areas include scalable oversight, adversarial robustness, and other safety topics. No PhD required. Successful fellows come from physics, mathematics, computer science, cybersecurity, and other quantitative backgrounds.
#alignment #technical-safety #interpretability #control
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.3111111111111111,
"community_signal": 0.9,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 2
}

workshop
0.83 CFP closed May 8, 2026
Workshop on pluralistic AI alignment, addressing technical, philosophical, and societal dimensions. Topics include ML methods for handling diverse values, HCI design, philosophical frameworks, and policy considerations. Papers 4-8 pages following ICML 2026 template. Non-archival. At least one author must commit to reviewing submissions.
#alignment #governance
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.85,
"time_proximity": 0.7886792452830189,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}

fellowship
0.78 Apps closed May 3, 2026
External researchers and engineers conduct work on AI safety and alignment. Focus areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, and agentic oversight. Monthly stipend, compute resources, API credits, and ongoing mentorship from OpenAI staff. Workspace in Berkeley at partner organization Constellation; remote participation permitted.
#alignment #technical-safety #evals #control
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.46163522012578617,
"community_signal": 0.9,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 1
}

Flagship event gathering leading scientists, entrepreneurs, funders, and policymakers to explore the frontiers of science and technology. Includes AI safety track. Part of Foresight Institute's Vision Weekend series. 40-year-old organization focused on transformative technology.
#governance #frontier-science #technical-safety
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.9446540880503145,
"community_signal": 0.65,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}

Five-day intensive programme for AI safety founders going from idea to funded. Successful pitches receive £50k in equity-free seed funding. Part of BlueDot Impact's incubator and rapid-funding initiatives supporting concrete AI safety work.
#alignment #governance startup incubator
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.9647798742138365,
"community_signal": 0.75,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}

Second Workshop on Technical AI Governance Research at ICML 2026, focusing on technical approaches to AI governance, policy, and regulation. Part of the main conference workshop track.
#governance #policy ICML workshop governance technical-governance
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7383647798742139,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}

Workshop at ICML 2026 focused on identifying, diagnosing, and fixing failure modes in agentic AI systems. Covers reproducible triggers for failures, diagnostic tracing methods, and verified repair approaches. Highly relevant to AI safety and robustness.
#evals #alignment ICML workshop failure-modes agents diagnostics
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7383647798742139,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}

Second Workshop on Agents in the Wild focusing on safety and security of AI agents deployed in real-world environments. Addresses challenges in ensuring safe and secure operation of autonomous agents. Part of ICML 2026 workshop track.
#alignment #evals ICML workshop agents safety security
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7333333333333334,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}

fellowship
0.72 Apps closed May 3, 2026
Part-time, hybrid research programme matching exceptional students and early-career researchers with experienced mentors. Features in-person kick-off week (July 13-19 or July 20-26), followed by 8-10 weeks remote research phase. Teams of 2-4 participants work on publishable AI safety research. Includes $2k+ compute budget, Claude Max, accommodation, meals, and travel funding for in-person sprint.
#alignment #technical-safety #mentorship #control #interpretability #evals #governance
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.8,
"topic_relevance": 0.9,
"time_proximity": 0.7786163522012579,
"community_signal": 0.7,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}

fellowship
0.72 Apps closed May 10, 2026
Stage 2 of MARS V selection process, for invited candidates only. Part-time, hybrid research programme matching exceptional students and early-career researchers with experienced mentors. Includes $2k+ compute budget, Claude Max, accommodation, meals, and travel funding.
#alignment #technical-safety #mentorship #control #interpretability #evals #governance
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.8,
"topic_relevance": 0.9,
"time_proximity": 0.7786163522012579,
"community_signal": 0.7,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}

Fully funded, in-person program pairing senior advisors with emerging talent on 5-month technical, governance, strategy, and field-building projects. $8,400 monthly stipend, ~$15K/month research budget for empirical fellows (compute), workspace at Berkeley research center, weekly mentorship from experts, visa support for international applicants. Applications for Fall 2026 cohort closed May 3rd. Strong placement rates at safety orgs.
#alignment #governance #technical-safety
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.43647798742138366,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}

fellowship
0.71 Apps closed May 3, 2026
Nine-week summer fellowship with potential extensions up to 6 months. Participants receive £6,000-£8,000 stipend, travel coverage to London, £2,000 housing assistance, meals at LISA workspace, and compute resources. 70-90% of recent fellows received extensions. Weekly mentorship from established researchers. 129 alumni total across 7 cohorts with 9.1/10 peer recommendation rating.
#alignment #governance #technical-safety #mechanistic-interpretability #interpretability
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.75,
"topic_relevance": 0.9,
"time_proximity": 0.8490566037735849,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}

Weekly AI safety evaluations paper reading club hosted by BlueDot Impact. Meets every Tuesday at 4:00 PM UTC to discuss evaluation methodologies, safety benchmarks, and measurement frameworks. Open to all interested in AI safety evals research.
#evals #alignment weekly paper-discussion
Salience signals
{
"type_weight": 0.45,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.4285714285714286,
"community_signal": 0.7,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}

Eight-week virtual reading group run by MIT AI Alignment (MAIA). Topics include AI's trajectory, misalignment, technical safety, policy, and careers in the field. Two hours per week commitment. Led by small groups facilitated by MAIA members. Free, no prior AI background required. Applications open through May 22. MAIA is an MIT student group conducting AI alignment research, with membership in the hundreds, supported by CBAI.
#alignment #governance #technical-safety
Salience signals
{
"type_weight": 0.45,
"source_trust": 0.8,
"topic_relevance": 0.9,
"time_proximity": 0.8857142857142857,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}

Summit bringing together academic leaders, entrepreneurs, AI experts, venture capitalists, and policymakers to discuss the future of AI and Agentic AI. Call for Papers and Startup Spotlight applications open. In-person and livestream available.
#alignment #evals #governance
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.6477987421383647,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}

One-day hackathon organized by BlueDot Impact focused on AI risk content creation. Brings together community members to develop educational and outreach materials related to AI safety and existential risk. Part of BlueDot's ongoing series of community events building the workforce needed to safely navigate AGI.
#alignment #governance #evals content-creation
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.9446540880503145,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}

Part-time remote research fellowship pairing aspiring researchers with 130+ experienced mentors from Google DeepMind, RAND, Apollo Research, MATS, UK AISI, and other top organizations for three-month AI alignment projects. Participants commit 5-40 hours weekly. Covers project expenses including compute and API/LLM access. Culminates in virtual Demo Day with prizes totaling $7,000. Optional continuation beyond May 16. Mentor application track: experienced researchers apply to mentor a project; mentor application deadline 2025-12-05 (passed).
#alignment #governance #evals #safety-research #interpretability #technical-safety remote mentorship
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0,
"community_signal": 0.8,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}

Weekend AI safety hackathon focused on Global South participation and perspectives. Hybrid format allowing both online and in-person participation. Organized by Apart Research as part of their 55+ research sprints series with 6,000+ participants across 200+ global locations.
#alignment #safety-research #evals #governance
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.8,
"time_proximity": 0.8490566037735849,
"community_signal": 0.7,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}

Foresight Institute flagship event gathering leading scientists, entrepreneurs, funders, and policymakers to explore the frontiers of science and technology. Multiple tracks including AI safety topics.
#frontier-science #ai-safety
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.949685534591195,
"community_signal": 0.6,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}

fellowship
0.64 Apps closed Jan 15, 2026
First global academic programme dedicated to AI evaluation, combining technical depth with policy and governance. 150-hour expert diploma covering capabilities and safety evaluations. Includes 90 hours online instruction, 20 hours hands-on courses, and 40-hour in-person capstone week in Valencia. Faculty from Cambridge, Stanford, Princeton, EU AI Office, UK AI Safety Institute, FAR AI, Apollo Research. Targets professionals joining AI Safety Institutes, government agencies, and industry research labs.
#evals #safety-research #governance #policy fellowship evals academic hybrid diploma funded prestigious
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0,
"community_signal": 0.7,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}

Technical workshop focusing on secure AI topics. Brings together top talent to solve the bottlenecks holding back progress in secure and sovereign AI systems.
#governance #evals #alignment #safety-research #security #control #technical-safety #ai-security technical Berlin
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.75,
"topic_relevance": 0.8,
"time_proximity": 0.7333333333333334,
"community_signal": 0.6,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}

fellowship
0.61 Apps closed Jan 14, 2026
Part-time, remote research fellowship pairing aspiring researchers (undergrad, graduate, PhD students, professionals) with experienced mentors from Google DeepMind, RAND, Apollo Research, MATS, UK AISI, etc. Three months of structured research, 5-40 hours per week flexible commitment. Culminates in Demo Day with presentations and career fair. 130+ mentors for Spring 2026. Project expenses covered (compute, APIs, LLMs) but no personal compensation.
#alignment #governance #evals #safety-research #interpretability #technical-safety remote mentorship
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}

9th annual Cognitive Computational Neuroscience conference. Primarily single-track, featuring keynote speakers and oral presentations. Paper submissions are presented as posters, with select papers chosen for oral presentation. ACO attendees follow this conference for predictive-coding, metacognition, and signal-detection-theory measurement work applicable to LLMs.
#measurement-science #cognitive-science #interpretability #neuroscience
Salience signals
{
"type_weight": 1,
"source_trust": 0.8,
"topic_relevance": 0.6,
"time_proximity": 0.6528301886792452,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_deadline_open": 0,
"source_count": 1
}

Two-week intensive summer school at Columbia University covering machine learning topics including mechanistic interpretability, alignment/safety, RAG & agents, and LLM systems. Approximately 200 PhD students participate alongside faculty and industry speakers. In-scope due to dedicated alignment and mechanistic interpretability tracks.
#interpretability #alignment
Salience signals
{
"type_weight": 0.35,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.869182389937107,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_deadline_open": 0,
"source_count": 1
}

conference
0.44 CFP closed May 10, 2026
The 29th annual meeting of the Association for the Scientific Study of Consciousness brings together researchers from around the world to share the latest findings in the scientific study of consciousness. Topics include empirical, theoretical, and philosophical investigations into neural correlates of consciousness and subjective experience. Relevant to ACO for metacognition and measurement-science approaches applicable to LLM interpretability work.
#consciousness #measurement-science #cognitive-science #metacognition
Salience signals
{
"type_weight": 1,
"source_trust": 0.6,
"topic_relevance": 0.5,
"time_proximity": 0.8238993710691824,
"community_signal": 0.3,
"speaker_org_signal": 0.2,
"is_deadline_open": 1,
"source_count": 1
}
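
How the scores combine: the headline score on each entry (e.g. 0.96) appears to be derived from the salience-signal values above, but the feed does not state its formula. A minimal sketch in Python of one plausible scoring function, assuming a simple weighted average; the weight values and the choice to ignore source_count are illustrative assumptions, not the feed's actual method:

import json

# Hypothetical weights -- assumptions for illustration, not the feed's formula.
SIGNAL_WEIGHTS = {
    "type_weight": 0.20,
    "source_trust": 0.15,
    "topic_relevance": 0.25,
    "time_proximity": 0.15,
    "community_signal": 0.10,
    "speaker_org_signal": 0.10,
    "is_deadline_open": 0.05,
}

def salience(signals: dict) -> float:
    """Weighted average of the signal values; source_count is ignored here."""
    total = sum(SIGNAL_WEIGHTS.values())
    return round(sum(signals[k] * w for k, w in SIGNAL_WEIGHTS.items()) / total, 2)

# Signals from the first entry in this list:
example = json.loads('''{
    "type_weight": 1,
    "source_trust": 0.9,
    "topic_relevance": 1,
    "time_proximity": 0.5428571428571429,
    "community_signal": 0.85,
    "speaker_org_signal": 0.85,
    "is_deadline_open": 1,
    "source_count": 1
}''')
print(salience(example))  # 0.89 under these assumed weights (the feed shows 0.96)

Under these assumed weights the sketch does not reproduce the displayed 0.96, which suggests the real formula uses different weights or a nonlinear combination; the code is only meant to show how the per-signal values could feed a single ranking score.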