Applied Adversarial Epistemics Tracker

Sources & transparency

Every event listed on this site comes from one of the sources below. The agent self-reports each weekly run so you can see what changed and why.

⚠ 3 active sources failing

Active (54)

org trust 0.85 last scraped Apr 29, 2026 ok

First global academic programme focused on AI evaluations. Funded by Coefficient Giving. Faculty from Cambridge, Stanford, Princeton, EU AI Office, UK AI Safety Institute, FAR AI, Apollo Research.

active
newsletter trust 0.75

Weekly newsletter aggregating AI safety events globally. Good cross-reference source.

active
aggregator trust 0.85 last scraped Apr 29, 2026 ok

Annual AI governance workshop at AAAI. Third iteration. Focus on alignment, morality, law, and design. Organized by researchers from IBM, ETH Zurich, and other institutions.

active
aggregator trust 0.80

Workshop on AI persuasion and manipulation capabilities. Aligned with governance/safety concerns.

active
org trust 0.95 last scraped Apr 29, 2026 no-events

The UK government's AI Safety Institute (AISI). Hosts evaluation workshops and publishes calls for collaboration.

active
aggregator trust 0.95 last scraped Apr 29, 2026 rate-limited

Tightly scoped to alignment research.

active
org trust 0.95 last scraped Apr 29, 2026 ok

Anthropic's alignment research blog. High-trust source for safety research and fellowship announcements.

active
org trust 0.90 last scraped Apr 29, 2026 no-events

Look for hackathons, fellowships, and ControlConf-style events.

active
org trust 0.85 last scraped Apr 29, 2026 ok

Organizes AI safety research sprints and hackathons. 55+ sprints with 6,000+ participants across 200+ global locations.

active
org trust 0.90 last scraped Apr 29, 2026 no-events

Apollo Research feed tracked specifically for eval-design and scheming-detection workshops. Distinct from the apolloresearch.ai/blog feed.

active
org trust 0.90 last scraped Apr 29, 2026 no-events

AI safety org focused on evaluations and deceptive alignment. Watches for cohort calls and workshop announcements.

active
org trust 0.90 last scraped Apr 29, 2026 ok

Cohort program; CFPs run several times a year.

active
org trust 0.75

UC Berkeley student community for AI safety. Holds Decal course and research program.

active
org trust 0.85 last scraped Apr 29, 2026 ok

UC Berkeley center focused on responsible and secure AI systems. Hosts Agentic AI Summit and other safety-focused events.

active
org trust 0.90 last scraped Apr 29, 2026 no-events

Runs AI safety fellowships and AI governance courses. Cohort-based.

active
aggregator trust 0.70

Boston community group for AI safety discussions. Regular meetups.

active
org trust 0.90 last scraped Apr 29, 2026 ok

Workshops, hackathons, ML Safety Newsletter cross-references.

active
org trust 0.85 last scraped Apr 29, 2026 no-events

Cambridge, MA-based 501(c)(3) non-profit founded in 2022. Runs research fellowships in AI alignment and governance.

active
org trust 0.85

CBAI's AIxBiosecurity Summer Fellowship program. Distinct from main CBAI page.

active
aggregator trust 0.80 last scraped Apr 29, 2026 ok

Annual cog-sci × ML conference. Tracked for predictive-coding, metacognition, and signal-detection-theory measurement work applicable to LLMs.

active
org trust 0.85 last scraped Apr 29, 2026 ok

Constellation's Astra Fellowship program. Fellows work on AI safety projects with expert mentorship. Strong placement rates at safety orgs.

active
aggregator trust 0.90 last scraped Apr 29, 2026 ok

Annual conference on AI control organized by Redwood Research and FAR.AI. Focuses on reducing risks from misalignment through safeguards that work even when AI models are trying to undermine them.

active
org trust 0.85 last scraped Apr 29, 2026 no-events

Filter to safety/alignment specifically.

active
aggregator trust 0.80 last scraped Apr 29, 2026 no-events

Mixed AI-safety and EA-broad events. Filter aggressively.

active
org trust 0.85 last scraped Apr 29, 2026 no-events

EleutherAI mech-interp working group. Active reading-group + paper-discussion cadence.

active
aggregator trust 0.75

Coalition focused on AI evaluation practices. Runs workshops at major conferences.

active
org trust 0.90 last scraped Apr 29, 2026 ok

Foundational AI Research organization. Runs workshops on AI safety verification, secure compute, and other technical safety topics. FAR Labs provides coworking space and events for the Bay Area AI safety community.

active
org trust 0.85 last scraped Apr 29, 2026 ok

Bay Area coworking + event venue for AI safety / interpretability researchers. Regular talks and reading groups.

active
org trust 0.75 last scraped Apr 29, 2026 ok

Foresight Institute hosts Vision Weekend events with frontier science and technology tracks, including AI safety. A 40-year-old organization focused on transformative technology.

active
org trust 0.75 last scraped Apr 29, 2026 no-events

Sociotechnical risk and threat-actor analysis for advanced AI. Source for counter-autonomous / facilitation-forensics events.

active
org trust 0.90 last scraped Apr 29, 2026 no-events

Mechanistic interpretability research org. Watches for SAE / probe / circuit tracing announcements.

active
org trust 0.90 last scraped Apr 29, 2026 no-events

Centre for the Governance of AI. Runs summer fellowships in DC and London focused on AI governance and policy. High-trust research organization.

active
aggregator trust 0.85 last scraped Apr 29, 2026 ok

Annual; cross-reference workshop list with safety keywords.

active
aggregator trust 0.85 last scraped Apr 29, 2026 ok

Annual theoretical AI alignment conference. Unconference format. Organized by Iliad, an umbrella organization for applied mathematics research in alignment.

active
aggregator trust 0.90 last scraped Apr 29, 2026 error

Primary AI-safety-adjacent event aggregator. High signal-to-noise.

active
org trust 0.85

London coworking space and community hub for AI safety. Hosts ARENA and mixers.

active
aggregator trust 0.70 last scraped Apr 29, 2026 no-events

Lu.ma is the de facto calendar for the safety community's informal events (mixers, salons, hackathons). JS-heavy — fetch with web_fetch_rendered.

active
aggregator trust 0.70 last scraped Apr 29, 2026 no-events

Lu.ma alignment-tagged events. Use web_fetch_rendered.

active
org trust 0.80

MIT student group conducting AI alignment research. Runs reading groups and ML bootcamps. Supported by CBAI. Membership in the hundreds.

active
org trust 0.95 last scraped Apr 29, 2026 ok

Fellowship cohort and CFPs.

active
aggregator trust 0.90 last scraped Apr 29, 2026 ok

Annual mechanistic interpretability workshop at ICML. High-quality venue for mech interp research. Organized by leading researchers in the field.

active
aggregator trust 0.65 last scraped Apr 29, 2026 error

Meetup.com aggregator for AI safety meetups. JS-heavy — use web_fetch_rendered.

active
org trust 0.90 last scraped Apr 29, 2026 no-events

Evaluations org. Watches for hires + workshop announcements.

active
newsletter trust 0.90 last scraped Apr 29, 2026 no-events

Often the first to mention new workshops and CFPs.

active
aggregator trust 0.75

Annual ML summer school covering alignment, mechanistic interpretability, and LLM systems.

active
aggregator trust 0.85 last scraped Apr 29, 2026 ok

Annual; cross-reference workshop list with safety keywords.

active
org trust 0.90 last scraped Apr 29, 2026 ok

OpenAI's safety fellowship program, partnering with Constellation. High-trust source for AI safety research fellowships.

active
org trust 0.75

AI safety research fellowship based in London. Quarterly cohorts.

active
aggregator trust 0.85 last scraped Apr 29, 2026 ok

Annual workshop at ICML on pluralistic AI alignment. Focus on integrating diverse perspectives into AI alignment frameworks.

active
aggregator trust 0.85 last scraped Apr 29, 2026 ok

Annual workshop on representational alignment at ICLR. Focus on alignment among artificial and biological information processing systems.

active
org trust 0.90 last scraped Apr 29, 2026 no-events

Control / interpretability focus. Watches for ControlConf, REMIX, etc.

active
aggregator trust 0.75 last scraped Apr 29, 2026 no-events

Cross-disciplinary conference on decision making, learning, and cognition. Increasingly hosts LLM-cog-sci crossover work.

active
aggregator trust 0.90 last scraped Apr 29, 2026 ok

Annual technical AI safety conference at Oxford. Third iteration. Organized by Oxford Martin AI Governance Initiative and Noeon Research.

active
org trust 0.85 last scraped Apr 29, 2026 ok

United Nations Institute for Disarmament Research. International AI policy and disarmament conferences are in scope given the safety community's broader governance interests.

active
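Several conference aggregators above carry the note "cross-reference workshop list with safety keywords." A minimal sketch of that filter, assuming simple substring matching; the keyword list is illustrative, not the tracker's actual list:

```python
# Keyword filter for conference workshop titles, as described in the
# "cross-reference workshop list with safety keywords" notes above.
# SAFETY_KEYWORDS is an illustrative set; the agent's real list is not
# shown on this page.
SAFETY_KEYWORDS = {
    "alignment", "interpretability", "evaluation", "governance",
    "red-teaming", "safety", "control", "deception",
}

def is_safety_relevant(title: str) -> bool:
    """True if a workshop title matches any safety keyword."""
    t = title.lower()
    return any(kw in t for kw in SAFETY_KEYWORDS)
```

Substring matching keeps the sketch short but will over-match (e.g. "biosafety" contains "safety"); a production filter would likely tokenize first.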

Candidates (1)

Sources discovered by the agent and pending confirmation. Auto-promoted to active when confidence is high; demoted if they yield no events for 60+ days.
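The lifecycle rule above can be sketched as a small state transition. The 60-day window comes from the text; the 0.80 confidence threshold and the "deprecated" target state are illustrative assumptions, not values stated on this page:

```python
# Sketch of the candidate-source lifecycle described above.
PROMOTE_CONFIDENCE = 0.80  # assumed cutoff for "confidence is high"
DEMOTE_AFTER_DAYS = 60     # "no events for 60+ days"

def next_status(confidence: float, days_without_events: int) -> str:
    """Decide a candidate source's next status after a weekly run."""
    if confidence >= PROMOTE_CONFIDENCE:
        return "active"
    if days_without_events >= DEMOTE_AFTER_DAYS:
        return "deprecated"
    return "candidate"
```

For example, a candidate at confidence 0.9 is promoted regardless of event yield, while one stuck at 0.6 with 61 quiet days is demoted.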

aggregator trust 0.60 discovered Apr 29, 2026

No events found. Site doesn't show upcoming conferences. Would need dedicated events page or corroboration before promoting to active.

candidate

Deprecated / broken (3)

aggregator Meetup group no longer exists as of 2026-04-29
broken
aggregator DNS resolution failed in run-5 (2026-04-29). Domain appears expired. Replace with a working London AI safety community URL when one is identified; in the meantime Lu.ma + LessWrong cover London events.
broken
org Domain does not exist. Verify correct URL or deprecate.
broken