Applied Adversarial Epistemics Tracker

Upcoming events

Events, training, fellowships, mixers, and CFPs across Applied Adversarial Epistemics — instruments for epistemic access to model cognition — and the broader AI safety, alignment, and governance community. Refreshed weekly. Sorted by salience.

Recently added


ICML 2026 Workshop on Mechanistic Interpretability

workshop ★ 1.00 CFP closes May 8, 2026

Annual mechanistic interpretability workshop at ICML 2026 in Seoul. Focuses on developing principled methods to analyze and understand model internals (weights and activations). Brings together researchers from academia, industry, and independent research groups. CFP deadline May 8 (AoE). Follows successful previous editions at ICML 2024 and NeurIPS 2025.

#interpretability #alignment #circuit-tracing #sparse-autoencoders · ICML · workshop · mechanistic-interpretability
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.9,
  "topic_relevance": 0.95,
  "time_proximity": 0.7433962264150944,
  "community_signal": 0.85,
  "speaker_org_signal": 0.85,
  "is_cfp_open": 1,
  "source_count": 2
}
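
Each ★ score appears to be derived from the signals above, but the page does not publish the aggregation formula. The sketch below is a minimal illustration only, assuming a clamped weighted average: the signal names match the JSON, while the weights, the clamp to [0, 1], and the exclusion of source_count are assumptions, not the tracker's actual method.

# A minimal sketch, not the tracker's published formula. Signal names match
# the "Salience signals" JSON above; the weights, the clamp, and treating
# source_count as a corroboration count rather than a scored signal are
# assumptions.
WEIGHTS = {
    "type_weight": 0.20,
    "source_trust": 0.15,
    "topic_relevance": 0.25,
    "time_proximity": 0.15,
    "community_signal": 0.10,
    "speaker_org_signal": 0.10,
    "is_cfp_open": 0.05,  # binary flag treated as a 0/1 signal
}

def salience(signals: dict) -> float:
    """Weighted average of the scored signals, clamped to [0, 1]."""
    score = sum(w * float(signals.get(name, 0.0)) for name, w in WEIGHTS.items())
    return round(min(1.0, max(0.0, score)), 2)

print(salience({
    "type_weight": 0.85, "source_trust": 0.9, "topic_relevance": 0.95,
    "time_proximity": 0.74, "community_signal": 0.85,
    "speaker_org_signal": 0.85, "is_cfp_open": 1,
}))  # ≈ 0.87 under these assumed weights; the page itself shows ★ 1.00

Since the entry above displays ★ 1.00, the real aggregation is evidently more generous than this particular weighting; treat the sketch as a reading aid for the JSON, not a reconstruction.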

Second Pluralistic Alignment Workshop

workshop ★ 1.00 CFP closes May 3, 2026

Workshop at ICML 2026 exploring pluralistic AI: aligning with the diversity of human values. Accepts 4-8 page papers plus unlimited references. Topics span machine learning, philosophy, HCI, the social sciences, and policy, covering methods for pluralistic ML training, handling value conflicts, and approaches to representing diverse societal values. CFP deadline May 3, camera-ready June 10. Non-archival format accepting position papers, works in progress, policy papers, and academic papers.

#alignment #governance · ICML · workshop · alignment · pluralistic
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.85,
  "topic_relevance": 0.8,
  "time_proximity": 0.7383647798742139,
  "community_signal": 0.7,
  "speaker_org_signal": 0.75,
  "is_cfp_open": 1,
  "source_count": 2
}
📅 Jul 20, 2026 – Nov 20, 2026 📍 Hybrid via Anthropic Alignment Blog

Anthropic's four-month fellowship program for AI safety research. Weekly stipend of $3,850 USD / £2,310 GBP / $4,300 CAD, plus ~$15k/month compute budget and close mentorship from Anthropic researchers. Priority areas include scalable oversight, adversarial robustness, and interpretability. Application deadline May 3.

#alignment #interpretability #control #evals #adversarial-robustness · fellowship · research · Anthropic
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.95,
  "topic_relevance": 0.95,
  "time_proximity": 0.6930817610062893,
  "community_signal": 0.85,
  "speaker_org_signal": 0.95,
  "is_cfp_open": 1,
  "source_count": 1
}

Agentic AI Summit 2026

conference ★ 1.00
📅 Aug 1, 2026 – Aug 2, 2026 📍 Berkeley, USA via Berkeley Center for Responsible, Decentralized Intelligence (RDI)

Large-scale summit hosted by Berkeley's Center for Responsible, Decentralized Intelligence covering the full technological stack of agentic AI including foundation models, agent frameworks, evaluation methods, infrastructure, and deployment. Explicitly addresses critical safety and security challenges with a goal to advance responsible and secure agentic AI systems. Expected 5,000+ in-person attendees plus global livestream.

#alignment #evals #governance
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.8,
  "time_proximity": 0.6327044025157232,
  "community_signal": 0.75,
  "speaker_org_signal": 0.85,
  "is_cfp_open": 1,
  "source_count": 1
}
ILIAD - Theoretical AI Alignment Conference

📅 Aug 3, 2026 – Aug 7, 2026 📍 Berkeley, USA via ILIAD - Theoretical AI Alignment Conference

5-day multi-track unconference bringing together researchers in theoretical AI alignment. Covers mathematical approaches including Singular Learning Theory, Agent Foundations, Causal Incentives, Computational Mechanics, and Scalable Oversight. Unconference format where participants can propose and lead sessions. Free to attend. Application deadline June 1. Limited needs-based travel and accommodation funding is available, and a limited number of onsite bedrooms can be booked after registration.

#alignment #theory #interpretability #agent-foundations #formal-foundations · conference · unconference · theoretical · free
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.6226415094339622,
  "community_signal": 0.8,
  "speaker_org_signal": 0.8,
  "is_cfp_open": 1,
  "source_count": 1
}

Astra Fellowship Fall 2026

fellowship ★ 1.00
📅 Sep 14, 2026 – Feb 5, 2027 📍 Berkeley, USA via Constellation Astra Fellowship, OpenAI Safety Fellowship

Fully funded 5-month in-person AI safety fellowship pairing senior advisors with emerging talent on technical, governance, strategy, and field-building projects. Monthly stipend of $8,400, ~$15k compute budget for empirical fellows, and workspace at Constellation Berkeley. Applications close Sept 26, onboarding completes Dec 31, and the program runs Jan 5 – Mar 31, 2027, with extensions through June 30. Over 80% of the first cohort now work full-time in AI safety.

#alignment #control #evals #governance #safety-research #interpretability · fellowship · empirical · governance · strategy
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.41132075471698115,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_cfp_open": 1,
  "source_count": 2
}

OpenAI Safety Fellowship 2026

fellowship ★ 1.00
📅 Sep 14, 2026 – Feb 5, 2027 📍 Berkeley, USA · Hybrid via OpenAI Safety Fellowship, Constellation Astra Fellowship

OpenAI's safety fellowship program (Sept 2026 - Feb 2027) for researchers pursuing work on safety and alignment of advanced AI systems. Priority areas include safety evaluation, ethics, robustness, scalable mitigations, privacy-preserving safety methods, agentic oversight, and high-severity misuse domains. Monthly stipend, compute support, mentorship, and workspace in Berkeley. Application deadline May 3, notification July 25.

#alignment #safety-research #governance #evals
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 0.95,
  "time_proximity": 0.41132075471698115,
  "community_signal": 0.85,
  "speaker_org_signal": 0.95,
  "is_cfp_open": 1,
  "source_count": 2
}

UNIDIR's global conference bringing together diplomats, policymakers, military experts, industry leaders, academia, civil society, and research labs to discuss AI security and ethics. Part of UNIDIR's broader work on international AI policy, disarmament, and governance, directly relevant to the AI safety community's governance interests.

#governance #policy · UN · international · governance · hybrid
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.75,
  "time_proximity": 0.8540880503144654,
  "community_signal": 0.6,
  "speaker_org_signal": 0.8,
  "is_cfp_open": 1,
  "source_count": 1
}
📅 Jun 29, 2026 – Aug 28, 2026 📍 London, UK via Pivotal Research Fellowship

Quarterly AI safety research fellowship based in London. Fellows pursue independent research projects with mentorship from alignment researchers. Applications for Q3 2026 cohort due May 3.

#alignment #interpretability
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.75,
  "topic_relevance": 0.85,
  "time_proximity": 0.7987421383647799,
  "community_signal": 0.65,
  "speaker_org_signal": 0.7,
  "is_cfp_open": 1,
  "source_count": 1
}

ARENA 8.0

fellowship ★ 0.92

ARENA's 8th cohort is a 4-5 week in-person AI safety bootcamp at LISA in Shoreditch, London. Covers technical skills for alignment research engineering. Provides travel, visa expenses, accommodation, and meals for all participants. Applications are now closed.

#alignment #interpretability · bootcamp · technical · intensive
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 0.9,
  "time_proximity": 0.9748427672955975,
  "community_signal": 0.85,
  "speaker_org_signal": 0.85,
  "is_cfp_open": 0,
  "source_count": 2
}

MATS Summer 2026 Fellowship

fellowship ★ 0.92
📅 Jun 1, 2026 – Aug 31, 2026 📍 Berkeley, USA via MATS — ML Alignment & Theory Scholars

12-week ML Alignment & Theory Scholars fellowship program connecting talented researchers with top mentors in AI alignment, interpretability, and governance. Research areas include AI control, scalable oversight, model organisms, model internals, model welfare, security, policy at the intersection of AI compute, geopolitics, and cybersecurity, and agent foundations. $15k stipend, $12k compute budget, housing, meals, travel, and a dedicated research manager. Applications closed. ~75% of scholars continue into a 6-12 month extension. Average satisfaction rating of 9.4/10.

#alignment #interpretability #governance #theory #control · fellowship · mentorship · MATS · research
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.95,
  "topic_relevance": 0.95,
  "time_proximity": 0.939622641509434,
  "community_signal": 0.9,
  "speaker_org_signal": 0.95,
  "is_cfp_open": 0,
  "source_count": 1
}

NeurIPS 2026

conference ★ 0.92 CFP closes May 6, 2026
📅 Dec 6, 2026 – Dec 13, 2026 📍 In-person via NeurIPS — Safety-related Workshops

Neural Information Processing Systems 2026, held across three satellite locations: Sydney, Atlanta, and Paris. Features an Evaluations & Datasets track, workshops, competitions, and safety-related tracks. Abstract deadline May 4, full paper deadline May 6, author notifications Sept 24. In-scope for safety-related workshops and evaluations-track submissions.

#interpretability #evals #alignment
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.75,
  "time_proximity": 0.16675675675675677,
  "community_signal": 0.8,
  "speaker_org_signal": 0.75,
  "is_cfp_open": 1,
  "source_count": 1
}

TAIS 2026 - Technical AI Safety Conference

conference ★ 0.91 CFP closes May 1, 2026
📅 May 14, 2026 📍 Oxford, UK via Technical AI Safety (TAIS) Conference

Technical AI Safety conference at the Oxford Martin School. Free admission. The third iteration and the first held in the UK (previous editions ran in 2024 and 2025). Welcomes researchers and professionals from all backgrounds interested in AI safety, regardless of prior research experience. Organized by the Oxford Martin AI Governance Initiative and Noeon Research. Registration now open.

#alignment #governance #safety-research #evals #interpretability · conference · technical · Oxford · free
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.9,
  "topic_relevance": 0.95,
  "time_proximity": 0.8285714285714285,
  "community_signal": 0.85,
  "speaker_org_signal": 0.85,
  "is_cfp_open": 0,
  "source_count": 1
}
📅 Jun 5, 2026 – Jun 7, 2026 📍 London, UK via Foresight Institute

Foresight Institute's flagship conference bringing together leading scientists, entrepreneurs, funders, and policymakers to explore frontiers of science and technology. Features tracks on AI safety, security, and other emerging technologies. Part of Foresight's 40-year mission advancing transformative technology.

#alignment #governance · frontier-science · multi-track
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.75,
  "topic_relevance": 0.65,
  "time_proximity": 0.919496855345912,
  "community_signal": 0.5,
  "speaker_org_signal": 0.6,
  "is_cfp_open": 1,
  "source_count": 1
}

Seoul Alignment Workshop 2026

workshop ★ 0.88

One-day alignment workshop bringing together global leaders in academia and industry to deepen collective understanding of AGI risks and explore mitigation strategies. Part of FAR.AI's global Alignment Workshop series following events in London, San Diego, Singapore, Vienna, New Orleans, and San Francisco.

#alignment #governance #control #interpretability
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.9,
  "topic_relevance": 0.95,
  "time_proximity": 0.7635220125786164,
  "community_signal": 0.8,
  "speaker_org_signal": 0.9,
  "is_cfp_open": 0,
  "source_count": 1
}
International Programme on AI Evaluation

📅 Feb 1, 2026 – May 29, 2026 📍 Valencia, Spain · Hybrid via International Programme on AI Evaluation

The inaugural International Programme on AI Evaluation is a 150-hour academic programme combining online lectures, hands-on courses, and an in-person capstone week in Valencia. With 40 fully-funded scholarships, it addresses the critical shortage of experts in AI evaluation needed by AI Safety Institutes and leading labs worldwide. Participants receive a 15 ECTS Expert Diploma.

#evals #safety-research #governance · fellowship · evals · academic · hybrid
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.85,
  "time_proximity": 0,
  "community_signal": 0.6,
  "speaker_org_signal": 0.85,
  "is_cfp_open": 1,
  "source_count": 1
}
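
The time_proximity values behave like a countdown: the in-progress Valencia programme above sits at 0, while events a few days out sit near 1. One plausible scheme is linear decay over a fixed look-ahead horizon with a floor at 0 once an event has started; the sketch below is only an illustration, and the 159-day horizon and the per-entry scoring date are assumptions (other entries on this page imply different horizons or scoring dates).

# A minimal sketch of one plausible "time_proximity" scheme: linear decay
# from 1.0 toward 0.0 over a fixed horizon, floored at 0 once an event has
# started. HORIZON_DAYS and the scoring date are assumptions for
# illustration, not the tracker's published parameters.
from datetime import date

HORIZON_DAYS = 159  # assumed look-ahead window

def time_proximity(event_start: date, scored_on: date) -> float:
    days_out = (event_start - scored_on).days
    if days_out <= 0:
        return 0.0  # in progress or past, cf. the Feb 1 programme above
    return max(0.0, 1.0 - days_out / HORIZON_DAYS)

# Reproduces the Pivotal Research Fellowship entry above
# (0.7987421383647799) if that entry was scored on May 28, 2026:
print(time_proximity(date(2026, 6, 29), date(2026, 5, 28)))

The zero for the Valencia programme is consistent with this floor: its Feb 1 start predates any plausible scoring date on a weekly-refreshed page.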

GovAI DC Summer Fellowship 2026

fellowship ★ 0.87
📅 Jun 8, 2026 – Aug 28, 2026 📍 Washington DC, USA via GovAI - Centre for the Governance of AI

3-month fellowship to launch or accelerate impactful careers in American AI governance and policy. Fellows conduct independent research projects under expert mentorship while building professional networks and developing policy expertise. Focus areas include public policy, political science, engineering, economics, biosecurity, cybersecurity, China studies, and risk management. Prioritizes bipartisan engagement, rigorous analysis, and practical policy relevance. $21,000 stipend plus travel support, weekday lunches, and DC office space. US work authorization required.

#governance #policy · fellowship · policy · governance · DC
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 0.9,
  "time_proximity": 0.89937106918239,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_cfp_open": 0,
  "source_count": 1
}
📅 Jun 8, 2026 – Aug 28, 2026 📍 London, UK via GovAI - Centre for the Governance of AI

3-month fellowship for conducting independent research on AI governance topics. Fellows receive mentorship from field experts, participate in seminars and Q&A sessions, and build professional networks. Research outputs may include reports, white papers, journal articles, op-eds, or blog posts. £12,000 stipend plus travel support and weekday lunches. Open to candidates from government, academia, industry, or civil society with expertise in policy, political science, computer science, economics, or risk management. Visa sponsorship available.

#governance #policy · fellowship · research · governance · London
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 0.9,
  "time_proximity": 0.89937106918239,
  "community_signal": 0.8,
  "speaker_org_signal": 0.85,
  "is_cfp_open": 0,
  "source_count": 1
}

Cambridge Boston Alignment Initiative's intensive nine-week summer fellowship for AI safety and biosecurity research. Fellows work on technical AI safety projects with expert mentorship in Cambridge, MA. Fully funded program covering housing, stipend, and research support.

#alignment #interpretability #governance #evals #biosecurity · fellowship · research · Cambridge · Harvard
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.85,
  "topic_relevance": 0.85,
  "time_proximity": 0.9044025157232705,
  "community_signal": 0.7,
  "speaker_org_signal": 0.8,
  "is_cfp_open": 0,
  "source_count": 2
}
📅 May 1, 2026 – Aug 31, 2026 📍 Hybrid via Anthropic Alignment Blog

Anthropic's four-month fellowship program for AI safety research. Weekly stipend of $3,850 USD / £2,310 GBP / $4,300 CAD, plus ~$15k/month compute budget and close mentorship from Anthropic researchers. Priority areas include scalable oversight, adversarial robustness, and interpretability.

#alignment #interpretability #control #evals #adversarial-robustness · fellowship · research · Anthropic · applications-open-may-cohort
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.95,
  "topic_relevance": 0.95,
  "time_proximity": 0.4571428571428572,
  "community_signal": 0.85,
  "speaker_org_signal": 0.95,
  "is_cfp_open": 0,
  "source_count": 1
}
📅 Jun 1, 2026 – Aug 21, 2026 📍 San Francisco, USA via Center for AI Safety

Three-month fully-funded research fellowship for scholars in economics, law, international relations, and related fields, focusing on the societal impacts of advanced AI and the institutions and policies needed to respond effectively. Fellows receive a $25,000 stipend, covered travel, daily meals, and access to CAIS expertise and the Bay Area network. Emphasizes producing publicly shareable research on AI's impact on economic distribution, corporate accountability, and geopolitical competition.

#governance #policy · fellowship · governance · policy · research
Salience signals
{
  "type_weight": 0.8,
  "source_trust": 0.9,
  "topic_relevance": 0.85,
  "time_proximity": 0.9345911949685535,
  "community_signal": 0.75,
  "speaker_org_signal": 0.9,
  "is_cfp_open": 0,
  "source_count": 1
}
📅 May 17, 2026 📍 San Francisco, USA via FAR AI - Foundational AI Research

Workshop on secure and verifiable AI development, bringing together researchers, builders, and funders across ML, hardware security, systems, cryptography, and computer security. Focuses on verification techniques for AI safety. Co-located with the IEEE Symposium on Security and Privacy. Organized by FAR.AI.

#evals #alignment #safety-research #security · workshop · verification · cryptography · hardware-security
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.9,
  "topic_relevance": 0.85,
  "time_proximity": 0.9428571428571428,
  "community_signal": 0.65,
  "speaker_org_signal": 0.85,
  "is_cfp_open": 0,
  "source_count": 1
}
Second Workshop on Technical AI Governance Research

📅 Jul 10, 2026 📍 Seoul, South Korea via ICML — Safety-related Workshops

Second Workshop on Technical AI Governance Research at ICML 2026, focusing on technical approaches to AI governance, policy, and regulation. Part of the main conference workshop track.

#governance #policy · ICML · workshop · governance · technical-governance
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.7383647798742139,
  "community_signal": 0.75,
  "speaker_org_signal": 0.7,
  "is_cfp_open": 0,
  "source_count": 1
}
📅 Jul 10, 2026 📍 Seoul, South Korea via ICML — Safety-related Workshops

Workshop at ICML 2026 focused on identifying, diagnosing, and fixing failure modes in agentic AI systems. Covers reproducible triggers for failures, diagnostic tracing methods, and verified repair approaches. Highly relevant to AI safety and robustness.

#evals #alignment · ICML · workshop · failure-modes · agents · diagnostics
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.7383647798742139,
  "community_signal": 0.75,
  "speaker_org_signal": 0.7,
  "is_cfp_open": 0,
  "source_count": 1
}
Second Workshop on Agents in the Wild

📅 Jul 11, 2026 📍 Seoul, South Korea via ICML — Safety-related Workshops

Second Workshop on Agents in the Wild, focusing on the safety and security of AI agents deployed in real-world environments. Addresses challenges in ensuring safe and secure operation of autonomous agents. Part of the ICML 2026 workshop track.

#alignment #evals · ICML · workshop · agents · safety · security
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.85,
  "topic_relevance": 0.9,
  "time_proximity": 0.7333333333333334,
  "community_signal": 0.75,
  "speaker_org_signal": 0.7,
  "is_cfp_open": 0,
  "source_count": 1
}

Global South AIS Hackathon

hackathon ★ 0.77
📅 Jun 19, 2026 – Jun 21, 2026 📍 Hybrid via Apart Research

Weekend AI safety hackathon focused on Global South participation and perspectives. Hybrid format allowing both online and in-person participation. Organized by Apart Research as part of its research sprint series, which has run 55+ sprints with 6,000+ participants across 200+ global locations.

#alignment #safety-research #evals #governance
Salience signals
{
  "type_weight": 0.65,
  "source_trust": 0.85,
  "topic_relevance": 0.8,
  "time_proximity": 0.8490566037735849,
  "community_signal": 0.7,
  "speaker_org_signal": 0.7,
  "is_cfp_open": 0,
  "source_count": 1
}

Foresight Vision Weekend UK 2026

conference ★ 0.73
📅 Jun 5, 2026 – Jun 7, 2026 📍 London, UK via Foresight Institute

Flagship conference gathering leading scientists, entrepreneurs, funders, and policymakers to explore frontiers of science and technology including AI safety. Foresight is a 40-year-old organization focused on transformative technology. Registration available.

#governance #alignment
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.75,
  "topic_relevance": 0.65,
  "time_proximity": 0.919496855345912,
  "community_signal": 0.6,
  "speaker_org_signal": 0.65,
  "is_cfp_open": 0,
  "source_count": 1
}
📅 Aug 2, 2026 📍 San Diego, USA via EvalEval Coalition

Workshop at ACL 2026 focused on AI evaluation in practice, centering tensions and collaborations between model developers and evaluators as generative AI systems are increasingly integrated into real-world products and decision-making pipelines. Organized by EvalEval Coalition.

#evals #governance
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.75,
  "topic_relevance": 0.8,
  "time_proximity": 0.6276729559748427,
  "community_signal": 0.6,
  "speaker_org_signal": 0.65,
  "is_cfp_open": 0,
  "source_count": 1
}
📅 Jul 18, 2026 – Jul 19, 2026 📍 Berlin, Germany via Foresight Institute

Two-day workshop on AI safety and security in Berlin. Focus on secure and sovereign AI systems. Organized by Foresight Institute. Registration available.

#governance #evals #alignment
Salience signals
{
  "type_weight": 0.85,
  "source_trust": 0.75,
  "topic_relevance": 0.75,
  "time_proximity": 0.7031446540880504,
  "community_signal": 0.6,
  "speaker_org_signal": 0.65,
  "is_cfp_open": 0,
  "source_count": 1
}
📅 Aug 3, 2026 – Aug 6, 2026 📍 New York, USA via Cognitive Computational Neuroscience (CCN)

9th annual Cognitive Computational Neuroscience conference at NYU. Forum for discussion among researchers in cognitive science, neuroscience, and AI on computations underlying complex behavior. In-scope for AAE due to neural network interpretability, predictive coding, and metacognition measurement sessions. Single-track format with keynotes from Brenden Lake, Ila Fiete, Kenji Doya, Doris Tsao, and Alona Fyshe.

#interpretability #cognitive-science
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.8,
  "topic_relevance": 0.7,
  "time_proximity": 0.6226415094339622,
  "community_signal": 0.55,
  "speaker_org_signal": 0.7,
  "is_cfp_open": 0,
  "source_count": 1
}
📅 May 4, 2026 – May 5, 2026 📍 Geneva, Switzerland · Hybrid via UNIDIR - United Nations Institute for Disarmament Research

Two-day conference on cybersecurity and technology policy covering AI security topics. Part of UNIDIR's Security and Technology Programme addressing implications of AI for international peace and security. In-scope for AAE tracker as international AI policy/disarmament falls within governance community's concerns.

#governance
Salience signals
{
  "type_weight": 1,
  "source_trust": 0.85,
  "topic_relevance": 0.65,
  "time_proximity": 0.5428571428571429,
  "community_signal": 0.5,
  "speaker_org_signal": 0.6,
  "is_cfp_open": 0,
  "source_count": 1
}
📅 Jun 15, 2026 – Jun 27, 2026 📍 New York, USA via Machine Learning Summer School (MLSS) - Columbia

Two-week intensive summer school at Columbia University covering machine learning topics including mechanistic interpretability, alignment/safety, RAG & agents, and LLM systems. Approximately 200 PhD students participate alongside faculty and industry speakers. In-scope due to dedicated alignment and mechanistic interpretability tracks.

#interpretability #alignment
Salience signals
{
  "type_weight": 0.35,
  "source_trust": 0.75,
  "topic_relevance": 0.7,
  "time_proximity": 0.869182389937107,
  "community_signal": 0.5,
  "speaker_org_signal": 0.6,
  "is_cfp_open": 0,
  "source_count": 1
}
📅 May 5, 2026 📍 Berkeley, USA via AI Safety Awareness Group Oakland (Meetup)

A free, accessible workshop hosted by AI Safety Awareness Group Oakland exploring AI's trajectory and societal impact. No technical background required. Features live demonstrations of current AI systems, interactive forecasting activities, and discussions about AI's implications for work, relationships, and society over the next 1-5 years.

#governance #alignment
Salience signals
{
  "type_weight": 0.45,
  "source_trust": 0.7,
  "topic_relevance": 0.7,
  "time_proximity": 0.5714285714285714,
  "community_signal": 0.6,
  "speaker_org_signal": 0.5,
  "is_cfp_open": 0,
  "source_count": 1
}