III

Systemic Safety & Alignment

AI Watch Lab

Establishing rigorous liability testing and certification frameworks for the secure adoption of enterprise and agentic AI — accountability at the level institutions can rely on.

Plate IV: Vigilance, 2025


§02 — Brief

The AI Watch Lab is a research program in systemic safety — the failure modes that emerge not from a single model but from the interaction of agentic systems, the data they consume, and the institutions that deploy them.

Its outputs are deliberately practical: liability frameworks suitable for adoption by insurers and regulators, certification protocols for enterprise AI, and red-team methodologies that survive peer review.

The Lab maintains a public registry of incidents and near-misses — a clearinghouse for the failure data that the field, in its current state, otherwise discards.

§03 — Questions

What the program is asking.

  1. What evidence does an enterprise need before delegating a high-stakes decision to an autonomous agent?

  2. How do we certify behavior that depends on context the certifier cannot fully observe?

  3. What does liability look like when an agentic system causes harm by composition, not by single fault?

§05 — Apply

Proposals welcome.

The Foundation accepts grant proposals, partnership inquiries, and fellowship applications from researchers whose work intersects with this program's agenda.

Plate XII: Application, 2025