III
AI Watch Lab
Establishing rigorous liability testing and certification frameworks for the secure adoption of enterprise and agentic AI — accountability at the level institutions can rely on.
The AI Watch Lab is a research program in systemic safety — the failure modes that emerge not from a single model but from the interaction of agentic systems, the data they consume, and the institutions that deploy them.
Its outputs are deliberately practical: liability frameworks suitable for adoption by insurers and regulators, certification protocols for enterprise AI, and red-team methodologies that survive peer review.
The Lab maintains a public registry of incidents and near-misses — a clearinghouse for the failure data that the field, in its current state, otherwise discards.
The questions the program is asking.
01
What evidence does an enterprise need before delegating a high-stakes decision to an autonomous agent?
02
How do we certify behavior that depends on context the certifier cannot fully observe?
03
What does liability look like when an agentic system causes harm by composition rather than by a single fault?
Proposals welcome.
The Foundation accepts grant proposals, partnership inquiries, and fellowship applications from researchers whose work intersects with this program.