Speaking requests are accepted from enterprise Trust and Safety teams, regulatory convenings, academic centers, and design leadership groups.








Building shared vocabulary across AI and child safety
In the 2000s, being 'culturally competent' was the platinum standard; in the 2010s, being 'trauma-informed' was considered best practice.
Now European regulators want something harder: proof that AI products reflect both developmental psychology and cultural context in real time. Leadership teams need shared vocabulary to meet these standards before enforcement begins.

Transitional Object Liability
Children form attachment bonds with AI-enabled toys that respond, remember, and adapt. When the service terminates, the subscription lapses, or developers update the model's personality, the child experiences the loss of a relationship. To a child in the preoperational stage, the AI was alive. Its disappearance is bereavement.
Find out more
Pediatric Administrative Routing
AI systems increasingly screen children for developmental conditions, authorize treatments, and determine special education eligibility. Pediatric classification errors compound across developmental time. A false negative on autism screening at age 4 means missed early intervention windows. The error propagates through years that cannot be recovered.
Find out more
Affect Analytics Encoding Bias
NYC Local Law 144 already requires bias audits for automated employment decision tools. Extend that logic to automated educational assessment tools, and you have the regulatory template that California, Illinois, and the EU are converging toward.
Find out more
Executive Function Displacement
The Character.AI litigation has established that "foreseeable psychological harm from product design" is a viable theory of liability. Executive function displacement is a more diffuse harm, but still a plausible tort.
Find out more