II
Cogs
Developing emotional learning frameworks for humanoid robotics — ensuring that as machines move into our homes and clinics, the interaction is safe, empathetic, and human-led.
Cogs studies the affective layer of humanoid robotics — the signals a machine should attend to, the responses it should withhold, and the disclosures it owes the people it interacts with.
The program treats empathy not as a feature to be added but as a system property to be designed for. That means measurable behavioral outcomes, third-party audits of decision policies, and explicit consent flows for any deployment involving vulnerable populations.
Cogs is conducted in partnership with academic robotics labs and a small number of clinical research sites. No commercial deployment is considered until a peer-reviewed safety case has been published.
What the program is asking.
01
What does it mean for a humanoid robot to be honest about its own uncertainty?
02
When should an affect-aware system defer to a human, and how is that boundary engineered?
03
How do we evaluate empathic behavior without anthropomorphizing the machine in the metric itself?
Proposals welcome.
The Foundation accepts grant proposals, partnership inquiries, and fellowship applications from researchers whose work intersects with this program.