fellowship
★ 0.96 Apps close May 3, 2026
Anthropic's four-month fellowship program for AI safety research. Weekly stipend of $3,850 USD / £2,310 GBP / $4,300 CAD, plus ~$15k/month compute budget and close mentorship from Anthropic researchers. Priority areas include scalable oversight, adversarial robustness, and interpretability. Application deadline May 3.
#alignment #interpretability #control #evals #adversarial-robustness fellowship research Anthropic
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.6930817610062893,
"community_signal": 0.85,
"speaker_org_signal": 0.95,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.95 Apps close May 3, 2026
MARS (Mentorship for Alignment Researchers at CAISH) is a part-time research programme pairing teams of 2-4 participants with experienced mentors from DeepMind, RAND, Apollo Research, MATS, and UK AISI to produce published AI safety research. Includes one-week in-person kick-off in Cambridge (July 13-19 or 20-26), followed by an 8-10 week remote phase. Focus on AI control, interpretability, evaluations, robustness, and governance. Fully funded with $2,000+ compute budgets.
#alignment #interpretability #evals #governance
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.8,
"topic_relevance": 0.95,
"time_proximity": 0.7484276729559749,
"community_signal": 0.75,
"speaker_org_signal": 0.9,
"is_deadline_open": 1,
"source_count": 2
}
fellowship
★ 0.95 Apps close May 3, 2026
OpenAI Safety Fellowship supports external researchers pursuing rigorous AI safety work, with mentorship from OpenAI researchers and a partnership with Constellation. Fellows work on safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving methods, agentic oversight, and misuse prevention. Includes monthly stipend, compute resources, API credits, and physical workspace in Berkeley. Fellows must produce substantial research output such as a paper, benchmark, or dataset.
#alignment #safety-research #governance #evals #control
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.43144654088050316,
"community_signal": 0.85,
"speaker_org_signal": 0.95,
"is_deadline_open": 1,
"source_count": 2
}
fellowship
★ 0.93 Apps close May 3, 2026
Fully funded five-month in-person fellowship at Constellation's Berkeley research center. Fellows work on frontier AI safety projects with mentorship from senior experts and support from Constellation's research management team. Monthly stipend of $8,400, research budget (~$15K monthly for compute), visa support, weekly mentorship, career support, and placement services. Focus on empirical ML research (alignment, control, evaluations, oversight) and strategy & governance. Over 80% of first cohort now works in AI safety at Redwood Research, METR, Anthropic, OpenAI, and Google DeepMind.
#alignment #control #evals #governance #safety-research #interpretability fellowship empirical governance strategy
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.43144654088050316,
"community_signal": 0.85,
"speaker_org_signal": 0.9,
"is_deadline_open": 1,
"source_count": 2
}
fellowship
★ 0.91 Apps close May 3, 2026
A 9-week AI safety research fellowship based in London at LISA, with potential 6-month extensions. Fellows receive weekly one-on-one mentorship with established researchers, in-person workspace with weekday meals, £6,000-£8,000 stipend, travel coverage, £2,000 housing allowance for non-London fellows, and compute resources. Selection includes written application, video interview, mentor-specific work task, and personal interview. Alumni work at Google DeepMind, Anthropic, and UK AISI.
#alignment #interpretability #governance #evals
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.75,
"topic_relevance": 0.9,
"time_proximity": 0.8188679245283019,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 2
}
conference
★ 0.60 CFP closes May 6, 2026
Fortieth Annual Conference on Neural Information Processing Systems. Three locations: Sydney, Australia (Dec 6-12), Atlanta, Georgia (Dec 8-13), Paris, France (Dec 9-13). Multiple submission tracks including main papers, workshops, position papers, competitions, and new Evaluations & Datasets Track. Paper abstract deadline May 4, full papers May 6, workshop applications June 6, author notifications Sept 24. General Chairs: Hsuan-Tien Lin (National Taiwan University) and Razvan Pascanu (Google DeepMind, Mila). In-scope for AAE tracker primarily through safety-related workshops.
#interpretability #evals #alignment
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.7,
"time_proximity": 0.17,
"community_signal": 0.8,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
workshop
★ 0.82 CFP closes May 8, 2026
Third annual mechanistic interpretability workshop at ICML. Focuses on developing principled methods to analyze and understand a model's internals (weights and activations). Brings together researchers from academia, industry, and the independent research community to discuss recent advances in mechanistic interpretability. Submission deadline May 8, 2026 (AoE) via OpenReview. Organized by researchers from Google DeepMind, Harvard, Northeastern University, Imperial College London, and others.
#interpretability #alignment #circuit-tracing #sparse-autoencoders ICML workshop mechanistic-interpretability
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.7635220125786164,
"community_signal": 0.85,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}
workshop
★ 0.72 CFP closes May 8, 2026
Workshop exploring "Aligning with the Diversity of Human Values", addressing how to integrate diverse perspectives into AI alignment frameworks that handle conflicting values across populations. Accepts 4-8 page papers (plus unlimited references/appendices) following the ICML 2026 template via OpenReview. Welcomes works in progress, position papers, policy papers, and academic papers. Seeks interdisciplinary contributions spanning machine learning, human-computer interaction, philosophy, social sciences, and policy studies. All submissions must include at least one author who agrees to serve as a reviewer. Paper deadline May 8, acceptances May 22, camera-ready June 10.
#alignment #governance
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.85,
"time_proximity": 0.7584905660377359,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
conference
★ 0.98 Reg closes May 13, 2026
Free, one-day event focused on technical discussions of AI safety. Third iteration of the Technical AI Safety conference series. Welcomes researchers and professionals from all backgrounds, regardless of prior experience. Organized jointly by the Oxford Martin AI Governance Initiative and Noeon Research. Registration open via Luma. Past conference recordings available on YouTube.
#alignment #governance #safety-research #evals #interpretability conference technical Oxford free
Salience signals
{
"type_weight": 1,
"source_trust": 0.9,
"topic_relevance": 0.95,
"time_proximity": 0.7142857142857143,
"community_signal": 0.85,
"speaker_org_signal": 0.9,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.91 Apps close May 18, 2026
CAMBRIA (Cambridge Bootcamp for Research in Interpretability and Alignment) is a 3-week intensive ML upskilling bootcamp for AI safety researchers, focusing on mechanistic interpretability and reinforcement learning. Based on the ARENA curriculum, participants receive housing, meals, 24/7 office access, and dedicated teaching assistants at Harvard Square.
#interpretability #alignment
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.8285714285714285,
"community_signal": 0.75,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.87 Apps close May 20, 2026
Effective Altruism Global conference connecting experts and peers to collaborate on projects and tackle global challenges. EA Global events are designed for individuals with a solid understanding of core EA ideas who are actively applying them. Applications are open for all 2026 EA Global events via a single application form. AI safety is a major track at EAG events.
#alignment #governance
Salience signals
{
"type_weight": 1,
"source_trust": 0.8,
"topic_relevance": 0.75,
"time_proximity": 0.969811320754717,
"community_signal": 0.85,
"speaker_org_signal": 0.75,
"is_deadline_open": 1,
"source_count": 1
}
hackathon
★ 0.81 Reg closes May 21, 2026
Apart Research hackathon focused on secure program synthesis and AI safety. Three-day hybrid event with online and in-person participation options. Part of Apart Research's ongoing series of AI safety research sprints and hackathons with mentorship and collaborative research opportunities.
#alignment #control
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.8,
"time_proximity": 0.9428571428571428,
"community_signal": 0.7,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.72 Early-bird ends May 24, 2026
Forty-Third International Conference on Machine Learning. Schedule: July 6 Expo/Tutorial Day, July 7-9 Main Conference, July 10-11 Workshops. Workshop details announced April 6, 2026. Early registration deadline May 24, 2026. In-scope for AAE tracker primarily through safety-related workshops (Mechanistic Interpretability, Pluralistic Alignment, Technical AI Governance Research workshops confirmed).
#alignment #interpretability #evals
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.7836477987421384,
"community_signal": 0.8,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 1
}
conference
★ 0.94 Apps close Jun 1, 2026
5-day, multi-track conference bringing together researchers in theoretical AI alignment. Unconference format where participants can propose and lead their own sessions. Free to attend. Focuses on mathematical approaches to alignment, covering topics like Singular Learning Theory, Agent Foundations, and Causal Incentives. Limited financial assistance available for travel and accommodation on needs-based basis. Application deadline June 1. Limited onsite bedrooms available for booking after registration. Organized by Iliad, an umbrella organization for applied mathematics research in AI alignment.
#alignment #theory #interpretability #agent-foundations #formal-foundations conference unconference theoretical free
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.6427672955974842,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.76 Apps close Jun 1, 2026
10-week fellowship in Cambridge, UK targeting researchers and entrepreneurs working on mitigating risks from frontier AI. Three research areas: Technical AI Safety (ensuring advanced systems have appropriate safeguards), Governance (international cooperation and regulatory frameworks for frontier AI), and Technical AI Governance (intersection of technology and policy). Provides competitive stipend, meals, transport, visas, lodging, mentorship from expert researchers, 30+ events over the fellowship period, dedicated research management support, and compute resources.
#alignment #governance #evals
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7836477987421384,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
conference
★ 0.73 Reg closes Jun 4, 2026
Foresight Institute's flagship conference gathering leading scientists, entrepreneurs, funders, and policymakers to explore the frontiers of science and technology. Features multiple tracks including AI safety, with discussions on beneficial AGI development, alignment research, and transformative technology governance. 40-year-old organization supporting ambitious interdisciplinary science through grants, prizes, fellowships, and events.
#governance #alignment
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.939622641509434,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 2
}
hackathon
★ 0.77 Reg closes Jun 6, 2026
A 1-day AI safety video creation sprint hosted by BlueDot Impact in London. Participants create AI risk content to help communicate key safety concepts to wider audiences. Part of BlueDot's community building efforts to support the AI safety ecosystem.
#alignment #governance
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.9345911949685535,
"community_signal": 0.65,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.78 Reg closes Jun 7, 2026
UNIDIR's Global Conference on AI, Security and Ethics brings together international policymakers, researchers, and practitioners to address how artificial intelligence intersects with international peace, security, and disarmament. Part of UNIDIR's mandate as a UN research institute examining emerging technology domains within security contexts. Topics include AI governance, autonomous weapons systems, and ethical frameworks for AI deployment in security contexts.
#governance #policy UN international governance hybrid
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.8,
"time_proximity": 0.8742138364779874,
"community_signal": 0.6,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 2
}
workshop
★ 0.79 Reg closes Jul 5, 2026
Part of FAR.AI's ongoing Alignment Workshop series. Gathering of global leaders exploring effective strategies for mitigating risks from advanced AI systems. Organized by FAR.AI.
#alignment #governance #control #interpretability
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.7836477987421384,
"community_signal": 0.8,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.90 Apps close Jul 6, 2026
CAMBRIA (Cambridge Bootcamp for Research in Interpretability and Alignment) is a 3-week intensive ML upskilling bootcamp for AI safety researchers, focusing on mechanistic interpretability and reinforcement learning. Based on the ARENA curriculum, this Manhattan cohort is hosted at Collider with full housing, meals, and dedicated teaching support.
#interpretability #alignment
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.7836477987421384,
"community_signal": 0.75,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
conference
★ 0.80 CFP closes Jul 15, 2026
Summit bringing together academic leaders, entrepreneurs, AI experts, venture capitalists, and policymakers to discuss the future of AI and Agentic AI. Call for Papers and Startup Spotlight applications open. In-person and livestream available.
#alignment #evals #governance
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.75,
"time_proximity": 0.6477987421383647,
"community_signal": 0.75,
"speaker_org_signal": 0.8,
"is_deadline_open": 1,
"source_count": 1
}
workshop
★ 0.70 Reg closes Jul 17, 2026
Technical workshop hosted by Foresight Institute focused on secure AI development and sovereign AI considerations. Explores technical approaches to building AI systems with robust security properties and frameworks for maintaining control over AI development and deployment. Registration open. Part of Foresight's broader program supporting beneficial transformative technology.
#governance #evals #alignment
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.75,
"topic_relevance": 0.8,
"time_proximity": 0.7232704402515724,
"community_signal": 0.65,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 2
}
fellowship
★ 0.87 Apps close Aug 10, 2026
CAMBRIA (Cambridge Bootcamp for Research in Interpretability and Alignment) is a 3-week intensive ML upskilling bootcamp for AI safety researchers, focusing on mechanistic interpretability and reinforcement learning. Based on the ARENA curriculum, participants receive housing, meals, 24/7 office access, and dedicated teaching assistants at Harvard Square.
#interpretability #alignment
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0.6075471698113207,
"community_signal": 0.75,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.88 Apps closed Jan 18, 2026
12-week research fellowship from early June to late August with an extension phase beginning in September. Berkeley and London cohorts. $15,000 stipend from AI Safety Support, $12,000 compute budget, private housing, and catered meals provided. Top performers can extend for 6–12 months with continued funding and mentorship. Five specialized tracks: Empirical research, Policy and strategy, Theory, Technical governance, Compute infrastructure. Applications for Summer 2026 are closed; expressions of interest are being collected. Application period: Dec 16 - Jan 18; final offers: late March/early April.
#alignment #interpretability #governance #theory #control fellowship mentorship MATS research
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.959748427672956,
"community_signal": 0.9,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 1
}
workshop
★ 0.88 Apps closed May 1, 2026
Workshop on secure and verifiable AI development, bringing together researchers, builders, and funders across ML, hardware security, systems, cryptography, and computer security. Focuses on verification techniques for AI safety. Colocated with IEEE Security and Privacy conference. Organized by FAR.AI.
#evals #alignment #safety-research #security workshop verification cryptography hardware-security
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.9428571428571428,
"community_signal": 0.65,
"speaker_org_signal": 0.85,
"is_deadline_open": 1,
"source_count": 1
}
fellowship
★ 0.85 Apps closed Mar 24, 2026
The CAIS AI and Society Fellowship is a fully funded three-month research program for scholars in economics, law, international relations, and adjacent disciplines to investigate how advanced AI may reshape social, economic, geopolitical, and legal systems. Fellows receive a $25,000 stipend, covered travel to San Francisco, and daily meals, and work with significant autonomy in defining their own research directions at CAIS offices. Features regular guest speakers from Stanford, law schools, and the international affairs community.
#governance #policy fellowship governance policy research
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.85,
"time_proximity": 0.959748427672956,
"community_signal": 0.8,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 2
}
fellowship
★ 0.82 Apps closed Apr 1, 2026
ARENA 8.0 alignment research engineering accelerator cohort in London. Five-week intensive in-person programme at the London Initiative for Safe AI (LISA). Three-stage application process: initial form (under 90 minutes), coding test (six questions, 1-hour limit), and 30-minute interview. Applications for this cohort are now closed; interested candidates can submit expression of interest for future iterations.
#alignment #interpretability bootcamp technical intensive
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.9949685534591195,
"community_signal": 0.85,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.79 Apps closed Mar 1, 2026
3-month fellowship to launch or accelerate impactful careers in American AI governance and policy. Fellows conduct independent research projects under expert mentorship while building professional networks and developing policy expertise. Focus areas include public policy, political science, engineering, economics, biosecurity, cybersecurity, China studies, and risk management. Prioritizes bipartisan engagement, rigorous analysis, and practical policy relevance. $21,000 stipend plus travel support, weekday lunches, and DC office space. US work authorization required.
#governance #policy fellowship policy governance DC
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.89937106918239,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.79 Apps closed Jan 4, 2026
3-month fellowship for conducting independent research on AI governance topics. Fellows receive mentorship from field experts, participate in seminars and Q&A sessions, and build professional networks. Research outputs may include reports, white papers, journal articles, op-eds, or blog posts. £12,000 stipend plus travel support and weekday lunches. Open to candidates from government, academia, industry, or civil society with expertise in policy, political science, computer science, economics, or risk management. Visa sponsorship available.
#governance #policy fellowship research governance London
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.9,
"topic_relevance": 0.9,
"time_proximity": 0.89937106918239,
"community_signal": 0.8,
"speaker_org_signal": 0.85,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.79 Apps closed Apr 12, 2026
Cambridge Boston Alignment Initiative's intensive nine-week summer fellowship for AI safety and biosecurity research. Fellows work on technical AI safety projects with expert mentorship in Cambridge, MA. Fully funded program covering housing, stipend, and research support.
#alignment #interpretability #governance #evals #biosecurity fellowship research Cambridge Harvard
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.85,
"time_proximity": 0.9044025157232705,
"community_signal": 0.7,
"speaker_org_signal": 0.8,
"is_deadline_open": 0,
"source_count": 2
}
fellowship
★ 0.77 Apps closed Apr 15, 2026
AI safety research fellowship pilot initiative designed to accelerate AI safety research and foster research talent in the field. Fellows work on steering and controlling future powerful AI systems and evaluating associated risks.
#alignment #interpretability #control #evals #adversarial-robustness fellowship research Anthropic applications-open-may-cohort
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.95,
"topic_relevance": 0.95,
"time_proximity": 0.3888888888888889,
"community_signal": 0.85,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 1
}
Second Workshop on Technical AI Governance Research at ICML 2026, focusing on technical approaches to AI governance, policy, and regulation. Part of the main conference workshop track.
#governance #policy ICML workshop governance technical-governance
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7383647798742139,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
Workshop at ICML 2026 focused on identifying, diagnosing, and fixing failure modes in agentic AI systems. Covers reproducible triggers for failures, diagnostic tracing methods, and verified repair approaches. Highly relevant to AI safety and robustness.
#evals #alignment ICML workshop failure-modes agents diagnostics
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7383647798742139,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
fellowship
★ 0.73 Apps closed Jan 15, 2026
The world's first academic programme dedicated to AI Evaluation, representing an initial step toward establishing AI Evaluation & Safety as a formal academic discipline. 40 globally-selected participants receive fully funded scholarships for 90 hours of online instruction, 20 hours of hands-on courses, and a 40-hour in-person capstone week in Valencia. Faculty from Cambridge, Stanford, Princeton, EU AI Office, UK AISI, FAR AI, Apollo Research, and Google DeepMind. Graduates receive 15 ECTS Expert Diploma from ValgrAI. Feb-May 2026 cohort.
#evals #safety-research #governance #policy fellowship evals academic hybrid
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.95,
"time_proximity": 0,
"community_signal": 0.8,
"speaker_org_signal": 0.95,
"is_deadline_open": 0,
"source_count": 2
}
Second Workshop on Agents in the Wild focusing on safety and security of AI agents deployed in real-world environments. Addresses challenges in ensuring safe and secure operation of autonomous agents. Part of ICML 2026 workshop track.
#alignment #evals ICML workshop agents safety security
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0.7333333333333334,
"community_signal": 0.75,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
workshop
★ 0.69 CFP closed Mar 12, 2026
Two-day workshop at ACL 2026 focused on surfacing practical insights from across the evaluation ecosystem. Examines tensions between model developers and evaluation researchers. Three main themes: evaluation methodology and measurement theory, evaluation infrastructure/costs/stakeholders, and assessing sociotechnical impacts of generative AI systems. Accepts full papers (6-8 pages), short papers (up to 4 pages), and tiny papers/extended abstracts (up to 2 pages). Two-way anonymized review required. Submission via OpenReview.
#evals
Salience signals
{
"type_weight": 0.85,
"source_trust": 0.75,
"topic_relevance": 0.85,
"time_proximity": 0.7987421383647799,
"community_signal": 0.7,
"speaker_org_signal": 0.75,
"is_deadline_open": 0,
"source_count": 1
}
Weekend AI safety hackathon focused on Global South participation and perspectives. Hybrid format allowing both online and in-person participation. Organized by Apart Research as part of their 55+ research sprints series with 6,000+ participants across 200+ global locations.
#alignment #safety-research #evals #governance
Salience signals
{
"type_weight": 0.65,
"source_trust": 0.85,
"topic_relevance": 0.8,
"time_proximity": 0.8490566037735849,
"community_signal": 0.7,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
SPAR's largest round yet, with 130+ projects across AI safety, governance, and security. Part-time remote research program pairing aspiring researchers with experienced mentors for three-month projects. Application deadline January 14, 2026. Programme runs February 16 to May 16. Open to undergraduates, graduate students, PhD candidates, and professionals. Projects typically require 5–20 hours/week. Mentors from Google DeepMind, RAND, Apollo Research, MATS, SecureBio, UK AISI, Forethought, AEI, MIRI, Goodfire, Rethink Priorities, LawZero, SaferAI, and Mila, plus universities like Cambridge, Harvard, Oxford, and MIT. Covers technical (ML, CS, math, physics, biology, cybersecurity) and policy/governance (law, IR, public policy, political science, economics) backgrounds. Biosecurity projects offered for the first time in Spring 2026.
#alignment #governance #evals
Salience signals
{
"type_weight": 0.8,
"source_trust": 0.85,
"topic_relevance": 0.9,
"time_proximity": 0,
"community_signal": 0.85,
"speaker_org_signal": 0.9,
"is_deadline_open": 0,
"source_count": 1
}
conference
★ 0.60 CFP closed Apr 30, 2026
9th Annual Cognitive Computational Neuroscience Conference at NYU. Brings together researchers from cognitive science, neuroscience, and artificial intelligence focused on understanding the computations that underlie complex behavior. Primarily single-track format with keynote speakers (Brenden M. Lake, Ila Fiete, Kenji Doya, Doris Tsao, Alona Fyshe), oral presentations, and community-proposed programming including Generative Adversarial Collaborations (GACs) and Keynote-and-Tutorial sessions. In-scope for AAE tracker through predictive-coding, metacognition, and signal-detection-theory measurement work applicable to LLMs.
#interpretability #cognitive-science #control
Salience signals
{
"type_weight": 1,
"source_trust": 0.8,
"topic_relevance": 0.7,
"time_proximity": 0.6427672955974842,
"community_signal": 0.6,
"speaker_org_signal": 0.7,
"is_deadline_open": 0,
"source_count": 1
}
Foresight Institute's flagship conference bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontiers of science and technology. Features tracks on AI safety, security, and other emerging technologies. Part of Foresight's 40-year mission advancing transformative technology.
#alignment #governance frontier-science multi-track
Salience signals
{
"type_weight": 1,
"source_trust": 0.75,
"topic_relevance": 0.65,
"time_proximity": 0.919496855345912,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_deadline_open": 0,
"source_count": 1
}
Two-day conference on cybersecurity and technology policy covering AI security topics. Part of UNIDIR's Security and Technology Programme addressing implications of AI for international peace and security. In-scope for AAE tracker as international AI policy/disarmament falls within the governance community's concerns.
#governance
Salience signals
{
"type_weight": 1,
"source_trust": 0.85,
"topic_relevance": 0.65,
"time_proximity": 0.5428571428571429,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_deadline_open": 0,
"source_count": 1
}
Two-week intensive summer school at Columbia University covering machine learning topics including mechanistic interpretability, alignment/safety, RAG & agents, and LLM systems. Approximately 200 PhD students participate alongside faculty and industry speakers. In-scope due to dedicated alignment and mechanistic interpretability tracks.
#interpretability #alignment
Salience signals
{
"type_weight": 0.35,
"source_trust": 0.75,
"topic_relevance": 0.7,
"time_proximity": 0.869182389937107,
"community_signal": 0.5,
"speaker_org_signal": 0.6,
"is_deadline_open": 0,
"source_count": 1
}
A free, accessible workshop hosted by AI Safety Awareness Group Oakland exploring AI's trajectory and societal impact. No technical background required. Features live demonstrations of current AI systems, interactive forecasting activities, and discussions about AI's implications for work, relationships, and society over the next 1-5 years.
#governance #alignment
Salience signals
{
"type_weight": 0.45,
"source_trust": 0.7,
"topic_relevance": 0.7,
"time_proximity": 0.5714285714285714,
"community_signal": 0.6,
"speaker_org_signal": 0.5,
"is_deadline_open": 0,
"source_count": 1
}
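Note on the ★ scores: each entry's score appears to be derived from its salience signals, but the exact combination rule is not stated anywhere in this tracker. The Python sketch below only illustrates one plausible way such signals could be folded into a single number; the field weights, the salience_score function, and the multi-source bonus are assumptions made for illustration, not the tracker's actual formula.

# Hypothetical sketch: fold an entry's salience signals into one score.
# The weights below are assumed for illustration; the tracker's real
# scoring rule is not documented in this listing.

SIGNAL_WEIGHTS = {
    "type_weight": 0.15,
    "source_trust": 0.15,
    "topic_relevance": 0.25,
    "time_proximity": 0.15,
    "community_signal": 0.10,
    "speaker_org_signal": 0.10,
    "is_deadline_open": 0.10,
}

def salience_score(signals: dict) -> float:
    """Weighted average of the listed signals, with a small bonus when
    more than one independent source reports the event."""
    base = sum(SIGNAL_WEIGHTS[key] * signals[key] for key in SIGNAL_WEIGHTS)
    bonus = 0.02 * (signals.get("source_count", 1) - 1)
    return round(min(base + bonus, 1.0), 2)

# Example using the Anthropic fellowship signals from the first entry above:
example = {
    "type_weight": 0.8, "source_trust": 0.95, "topic_relevance": 0.95,
    "time_proximity": 0.69, "community_signal": 0.85,
    "speaker_org_signal": 0.95, "is_deadline_open": 1, "source_count": 1,
}
print(salience_score(example))  # ~0.88 under these assumed weights (the listed score is 0.96)

A real scorer likely treats is_deadline_open and time_proximity less linearly (a closed deadline should drop an entry's urgency sharply); the sketch keeps everything as a simple weighted sum for readability.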