fellowship
★ 1.00 Apps close May 3, 2026
9-week AI safety research fellowship with expert mentorship. Features weekly 1-on-1s with established researchers, in-person co-working, financial support (£6,000–£8,000 stipend), and the opportunity for extensions of up to 6 months, granted to 70–90% of recent cohorts. Includes travel, housing (£2,000 for non-London fellows), meals, and compute. 7 cohorts completed with 129 alumni.
#alignment #interpretability
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.75,
"topic_relevance": 0.9,
"time_proximity": 0.8138364779874214,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}

fellowship
★ 1.00 Apps close May 3, 2026
Anthropic's four-month fellowship program for AI safety research. Weekly stipend of $3,850 USD / £2,310 GBP / $4,300 CAD, plus ~$15k/month compute budget and close mentorship from Anthropic researchers. Priority areas include scalable oversight, adversarial robustness, and interpretability. Application deadline May 3.
#alignment #interpretability #control #evals #adversarial-robustness fellowship research Anthropic
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.6930817610062893,
"community_signal": 0.85,
"speaker_org_signal": 0.95,
"is_deadline_open": 1,
"source_count": 1
}

fellowship
★ 0.85 Apps close May 3, 2026
External research program supporting independent safety and alignment work on advanced AI systems. Focus areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. Participants are mentored by OpenAI staff, collaborate within their cohort, and deliver substantial research outputs (papers, benchmarks, or datasets). Provides stipends, compute resources, and mentorship.
#alignment #safety-research #governance #evals #control
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.42641509433962266,
"community_signal": 0.85,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 1
}

fellowship
★ 0.82 Apps close May 3, 2026
Five-month fully funded fellowship pairing emerging talent with expert mentors on technical, governance, strategy, and field-building projects. Provides monthly stipends of $8,400, research budgets (~$15K/month for empirical fellows), workspace access, mentorship, and career placement support. Two streams: Empirical (ML research in alignment, control, evals, oversight) and Strategy & Governance.
#alignment #control #evals #governance #safety-research #interpretability fellowship empirical governance strategy
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.42641509433962266,
"community_signal": 0.8,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}

conference
★ 0.70 CFP closes May 6, 2026
Fortieth Annual Conference on Neural Information Processing Systems. Multi-track interdisciplinary annual meeting featuring invited talks, demonstrations, symposia, oral and poster presentations, professional exposition, tutorials, and topical workshops. Safety-related workshops typically included.
#interpretability #evals #alignment
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.7,
"time_proximity": 0.1691891891891892,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}

workshop
★ 1.00 CFP closes May 8, 2026
Workshop advancing understanding of neural network internals through mechanistic interpretability research. Gathers perspectives from academic, industry, and independent research communities to discuss recent progress and future directions. Understanding mechanisms behind neural network decisions remains a fundamental scientific challenge.
#interpretability #alignment #circuit-tracing #sparse-autoencoders ICML workshop mechanistic-interpretability
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 1,
"time_proximity": 0.7584905660377359,
"community_signal": 0.85,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}

workshop
★ 1.00 CFP closes May 8, 2026
Workshop on integrating diverse perspectives, values, and expertise into pluralistic AI alignment frameworks. Explores approaches to multi-objective alignment that address value conflicts in pluralistic societies.
#alignment
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7534591194968554,
"community_signal": 0.75,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}

conference
★ 1.00 Reg closes May 13, 2026
Free one-day AI safety conference hosted by the Oxford Martin AI Governance Initiative and Noeon Research. Welcomes researchers and professionals from all backgrounds interested in discussing AI safety, regardless of prior experience. Third iteration of the conference series (previous editions were held in Tokyo in 2024 and 2025).
#alignment #governance #safety-research #evals #interpretability conference technical Oxford free
Salience signals
{
"type_weight": 1,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.7428571428571429,
"community_signal": 0.75,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}

conference
★ 1.00 Apps close May 20, 2026
Effective Altruism Global conference connecting experts and peers to collaborate on projects and tackle global challenges. EA Global is designed for individuals with a solid understanding of core EA ideas who are actively applying them. A single application form covers all 2026 EA Global events. AI safety is a major track at EAG events.
#alignment #governance
Salience signals
{
"type_weight": 1,
"source_trust": 0.8,
"topic_relevance": 0.75,
"time_proximity": 0.969811320754717,
"community_signal": 0.85,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}

hackathon
★ 0.99 Reg closes May 21, 2026
Weekend hackathon focused on secure program synthesis research. Organized by Apart Research as part of their monthly sprint series.
#alignment #control
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.8,
"time_proximity": 0.9714285714285714,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}

conference
★ 0.78 Early-bird ends May 24, 2026
Forty-Third International Conference on Machine Learning. Premier gathering for professionals advancing machine learning, artificial intelligence, statistics, and data science, with applications in computer vision, computational biology, speech recognition, and robotics. Safety-related workshops typically included.
#alignment #interpretability
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.7,
"time_proximity": 0.7786163522012579,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}

fellowship
★ 1.00 Apps close Jun 1, 2026
10-week fully funded fellowship for researchers and entrepreneurs focused on mitigating risks from frontier AI. Program supports work across technical AI safety, AI governance, and technical AI governance. Fellows receive a competitive stipend; full coverage of meals, transport, visas, and lodging; weekly mentorship; 30+ hosted events; and access to compute resources.
#alignment #governance #evals
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7786163522012579,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}

conference
★ 1.00 Apps close Jun 1, 2026
Five-day multi-track conference bringing together 100+ researchers in theoretical AI alignment. Uses an unconference format in which participants can propose and lead sessions. Topics include Singular Learning Theory, Agent Foundations, Causal Incentives, Computational Mechanics, Safety-by-Debate, and Scalable Oversight. Free to attend, with limited financial support for accommodation and travel.
#alignment #theory #interpretability #agent-foundations #formal-foundations conference unconference theoretical free
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.6377358490566037,
"community_signal": 0.8,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}

conference
★ 0.97 Reg closes Jun 4, 2026
Flagship conference gathering leading scientists, entrepreneurs, funders, and policymakers to explore frontiers of science and technology. Includes AI safety track among other emerging technology topics.
#governance #alignment
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.9345911949685535,
"community_signal": 0.7,
"speaker_org_signal": 0.7,
"is_deadline_open": 1,
"source_count": 1
}

conference
★ 1.00 Reg closes Jun 7, 2026
United Nations Institute for Disarmament Research (UNIDIR) conference addressing AI governance and ethics at the international level. Part of UNIDIR's Security and Technology Programme, which focuses on the implications of artificial intelligence for international peace and security.
#governance #policy UN international governance hybrid
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.8,
"time_proximity": 0.869182389937107,
"community_signal": 0.7,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}

workshop
★ 1.00 Reg closes Jul 5, 2026
Part of the ongoing Alignment Workshop series, this gathering brings together global leaders in academia and industry to deepen understanding of potential risks from Artificial General Intelligence (AGI) and explore mitigation strategies.
#alignment #governance #control #interpretability
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.7786163522012579,
"community_signal": 0.8,
"speaker_org_signal": 0.9,
"is_deadline_open": 1,
"source_count": 1
}

conference
★ 0.99 CFP closes Jul 15, 2026
Summit bringing together academic leaders, entrepreneurs, AI experts, venture capitalists, and policymakers to discuss the future of AI and agentic AI. Call for Papers and Startup Spotlight applications are open. Attendance available in person and via livestream.
#alignment #evals #governance
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.6477987421383647,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}

workshop
★ 0.99 Reg closes Jul 17, 2026
Technical workshop bringing together top talent to address bottlenecks in advancing secure and sovereign AI. Focus on technical challenges in AI safety and security.
#governance #evals #alignment
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.75,
"topic_relevance": 0.85,
"time_proximity": 0.7182389937106919,
"community_signal": 0.75,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}

conference
★ 0.96 CFP closed Apr 30, 2026
9th Annual Conference on Cognitive Computational Neuroscience, for researchers in cognitive science, neuroscience, and artificial intelligence focused on understanding the computations underlying complex behavior. Features keynote speakers (Brenden Lake, Ila Fiete, Kenji Doya, Doris Tsao, Alona Fyshe), oral presentations, poster sessions, symposia, and tutorials. Focus areas include brain information processing, AI system representations, biological vs. artificial intelligence, machine learning for neuroscience, and deep neural network interpretability.
#interpretability #cognitive-science
Salience signals
{
"type_weight": 1,
"source_trust": 0.8,
"topic_relevance": 0.75,
"time_proximity": 0.6377358490566037,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}

fellowship
★ 0.92 Apps closed Apr 1, 2026
ARENA's 8th cohort is a 4-5 week in-person AI safety bootcamp at LISA in Shoreditch, London. Covers technical skills for alignment research engineering. Provides travel, visa expenses, accommodation, and meals for all participants. Applications are now closed.
#alignment #interpretability bootcamp technical intensive
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.9748427672955975,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 2
}

fellowship
★ 0.92 Apps closed Jan 18, 2026
12-week in-person research fellowship connecting talented researchers with top mentors in AI alignment, transparency, and security. Fellows conduct research and attend talks, workshops, and networking events. Top performers can extend for 6 additional months. Provides $15k stipend, $12k compute budget, private housing, catered meals, office workspace, and mentorship support. Applications are closed, but expressions of interest are being collected.
#alignment #interpretability #governance #theory #control fellowship mentorship MATS research
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.9547169811320755,
"community_signal": 0.9,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}

Foresight Institute's flagship conference bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontiers of science and technology. Features tracks on AI safety, security, and other emerging technologies. Part of Foresight's 40-year mission advancing transformative technology.
#alignment #governance frontier-science multi-track
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.65,
"time_proximity": 0.919496855345912,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_deadline_open": 0,
"source_count": 1
}

fellowship
★ 0.87 Apps closed Mar 1, 2026
3-month fellowship to launch or accelerate impactful careers in American AI governance and policy. Fellows conduct independent research projects under expert mentorship while building professional networks and developing policy expertise. Focus areas include public policy, political science, engineering, economics, biosecurity, cybersecurity, China studies, and risk management. Prioritizes bipartisan engagement, rigorous analysis, and practical policy relevance. $21,000 stipend plus travel support, weekday lunches, and DC office space. US work authorization required.
#governance #policy fellowship policy governance DC
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.89937106918239,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}

fellowship
★ 0.87 Apps closed Jan 4, 2026
3-month fellowship for conducting independent research on AI governance topics. Fellows receive mentorship from field experts, participate in seminars and Q&A sessions, and build professional networks. Research outputs may include reports, white papers, journal articles, op-eds, or blog posts. £12,000 stipend plus travel support and weekday lunches. Open to candidates from government, academia, industry, or civil society with expertise in policy, political science, computer science, economics, or risk management. Visa sponsorship available.
#governance #policy fellowship research governance London
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.89937106918239,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}

fellowship
★ 0.86 Apps closed Apr 12, 2026
Cambridge Boston Alignment Initiative's intensive nine-week summer fellowship for AI safety and biosecurity research. Fellows work on technical AI safety projects with expert mentorship in Cambridge, MA. Fully funded program covering housing, stipend, and research support.
#alignment #interpretability #governance #evals #biosecurity fellowship research Cambridge Harvard
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.85,
"time_proximity": 0.9044025157232705,
"community_signal": 0.7,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 2
}

fellowship
★ 0.85 Apps closed Mar 24, 2026
Three-month fully funded research fellowship for scholars in economics, law, international relations, and related fields, focusing on the societal impacts of advanced AI and the institutions and policies needed for an effective response. Fellows receive a $25,000 stipend, covered travel, daily meals, and access to CAIS expertise and the Bay Area network. Emphasizes producing publicly shareable research on AI's impact on economic distribution, corporate accountability, and geopolitical competition.
#governance #policy fellowship governance policy research
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.9345911949685535,
"community_signal": 0.75,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}

fellowship
★ 0.85 Apps closed Apr 15, 2026
Pilot fellowship initiative designed to accelerate AI safety research and foster research talent in the field. Fellows work on steering and controlling future powerful AI systems and evaluating associated risks.
#alignment #interpretability #control #evals #adversarial-robustness fellowship research Anthropic applications-open-may-cohort
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.3888888888888889,
"community_signal": 0.85,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 1
}

workshop
★ 0.84 Apps closed May 1, 2026
Workshop on secure and verifiable AI development, bringing together researchers, builders, and funders across ML, hardware security, systems, cryptography, and computer security. Focuses on verification techniques for AI safety. Colocated with IEEE Security and Privacy conference. Organized by FAR.AI.
#evals #alignment #safety-research #security workshop verification cryptography hardware-security
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.9428571428571428,
"community_signal": 0.65,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}

Second Workshop on Technical AI Governance Research at ICML 2026, focusing on technical approaches to AI governance, policy, and regulation. Part of the main conference workshop track.
#governance #policy ICML workshop governance technical-governance
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7383647798742139,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}

Workshop at ICML 2026 focused on identifying, diagnosing, and fixing failure modes in agentic AI systems. Covers reproducible triggers for failures, diagnostic tracing methods, and verified repair approaches. Highly relevant to AI safety and robustness.
#evals #alignment ICML workshop failure-modes agents diagnostics
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7383647798742139,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}

Second Workshop on Agents in the Wild focusing on safety and security of AI agents deployed in real-world environments. Addresses challenges in ensuring safe and secure operation of autonomous agents. Part of ICML 2026 workshop track.
#alignment #evals ICML workshop agents safety security
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7333333333333334,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}

workshop
★ 0.82 CFP closed Mar 12, 2026
Workshop examining tensions between model developers and evaluation researchers in the assessment of AI systems. Addresses gaps in coverage across methodological rigor, sociotechnical considerations, scalability, and community-informed approaches. Features a call for papers and a shared task for building a unified, standardized database of LLM evaluations. Co-located with ACL 2026.
#evals
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.75,
"topic_relevance": 0.9,
"time_proximity": 0.7937106918238994,
"community_signal": 0.75,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}

fellowship
★ 0.77 Apps closed Jan 15, 2026
World's first academic programme dedicated to AI evaluation. 150-hour initiative combining technical depth with policy and governance perspectives. Includes 90 hours of online work (lectures, networking, activities), 20 hours of hands-on courses, and a 40-hour in-person capstone week. Awards a 15 ECTS Expert Diploma. 40 top global participants receive fully funded scholarships. Faculty from Cambridge, Stanford, Princeton, Oxford, MIT, the EU AI Office, the UK AI Safety Institute, Google DeepMind, and Microsoft Research.
#evals #safety-research #governance fellowship evals academic hybrid
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0,
"community_signal": 0.8,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}

Weekend AI safety hackathon focused on Global South participation and perspectives. Hybrid format allowing both online and in-person participation. Organized by Apart Research as part of their 55+ research sprints series with 6,000+ participants across 200+ global locations.
#alignment #safety-research #evals #governance
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.8,
"time_proximity": 0.8490566037735849,
"community_signal": 0.7,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}

Two-day conference on cybersecurity and technology policy covering AI security topics. Part of UNIDIR's Security and Technology Programme addressing the implications of AI for international peace and security. In scope for the AAE tracker, as international AI policy and disarmament fall within the governance community's concerns.
#governance
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.65,
"time_proximity": 0.5428571428571429,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_deadline_open": 0,
"source_count": 1
}

Two-week intensive summer school at Columbia University covering machine learning topics including mechanistic interpretability, alignment/safety, RAG & agents, and LLM systems. Approximately 200 PhD students participate alongside faculty and industry speakers. In scope due to its dedicated alignment and mechanistic interpretability tracks.
#interpretability #alignment
Salience signals
{
"type_weight": 0.35,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.869182389937107,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_deadline_open": 0,
"source_count": 1
}

A free, accessible workshop hosted by AI Safety Awareness Group Oakland exploring AI's trajectory and societal impact. No technical background required. Features live demonstrations of current AI systems, interactive forecasting activities, and discussions about AI's implications for work, relationships, and society over the next 1-5 years.
#governance #alignment
Salience signals
{
"type_weight": 0.45,
"source_trust": 0.7,
"topic_relevance": 0.7,
"time_proximity": 0.5714285714285714,
"community_signal": 0.6,
"speaker_org_signal": 0.5,
"is_deadline_open": 0,
"source_count": 1
}
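
Each entry's ★ score is derived from its salience signals, but this listing does not document the tracker's actual formula. Below is a minimal sketch of one plausible weighted combination, using the same field names as the JSON blocks above; the coefficients, the linear form, and the cap at 1.0 are illustrative assumptions, not the tracker's method.

import json

# Assumed coefficients -- the listing does not publish the real weighting.
# They sum to 1.0 so the combined score stays in [0, 1].
COEFFICIENTS = {
    "type_weight": 0.15,
    "source_trust": 0.15,
    "topic_relevance": 0.25,
    "time_proximity": 0.20,
    "community_signal": 0.10,
    "speaker_org_signal": 0.10,
    "is_deadline_open": 0.05,
}

def salience(signals: dict) -> float:
    """Linearly combine an entry's signals into one score, capped at 1.0.

    source_count is deliberately ignored here; how the tracker uses it
    (if at all) is not documented in this listing.
    """
    score = sum(w * signals.get(name, 0.0) for name, w in COEFFICIENTS.items())
    return round(min(score, 1.0), 2)

# Signals copied verbatim from the Anthropic fellowship entry above.
entry = json.loads('''{
  "type_weight": 0.8,
  "source_trust": 0.95,
  "topic_relevance": 0.95,
  "time_proximity": 0.6930817610062893,
  "community_signal": 0.85,
  "speaker_org_signal": 0.95,
  "is_deadline_open": 1,
  "source_count": 1
}''')
print(salience(entry))  # 0.87 under these assumed weights

Running the sketch on that entry yields roughly 0.87, while the tracker displays ★ 1.00 for it; the number of entries pinned at exactly 1.00 suggests the real scoring applies additional boosting, normalization, or capping beyond a simple linear blend like this one.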