Our Work

EdTech platforms using your model's vision capabilities to assess student engagement through facial expression and voice analysis are encoding culturally specific assumptions about how attention is displayed. Students from backgrounds with different expression norms are systematically flagged as disengaged. This is not a content moderation problem; it is a civil rights problem. When OCR investigates the school district, discovery will surface your model's role in the classification pipeline.

Audience: Applied science teams building multimodal capabilities. Policy teams at companies licensing vision and audio models to EdTech. Heads of Trust and Safety at companies whose models are deployed in K-12 environments without use-case-specific safety evaluation.

Why it matters now: NYC Local Law 144 already requires bias audits for automated employment decision tools. Extend that logic to automated educational assessment tools and you have the regulatory template that California, Illinois, and the EU are converging toward. Your model's affect classification was not trained on developmental populations; your EdTech customers are deploying it on developmental populations. The gap between your training distribution and their deployment distribution is your liability surface.
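To make the Local Law 144 analogy concrete, here is a minimal sketch of the arithmetic such a bias audit performs, adapted to an engagement classifier. It assumes the audit treats "not flagged as disengaged" as the favorable outcome, analogous to a selection, and computes each demographic group's impact ratio the way LL144 audits do for selection rates (a group's rate divided by the most favorably treated group's rate). The `impact_ratios` helper and the sample data are hypothetical, and the 0.8 line is the EEOC four-fifths rule of thumb, not a statutory threshold.

```python
from collections import Counter

def impact_ratios(records):
    """records: iterable of (group, flagged) pairs, where flagged is True
    if the engagement classifier marked the student as disengaged.

    Treats "not flagged" as the favorable outcome, analogous to the
    selection rate in an NYC Local Law 144 bias audit, and returns each
    group's impact ratio relative to the most favorably treated group.
    """
    totals, favorable = Counter(), Counter()
    for group, flagged in records:
        totals[group] += 1
        if not flagged:
            favorable[group] += 1
    rates = {g: favorable[g] / totals[g] for g in totals}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Hypothetical audit data: (demographic group, flagged-as-disengaged).
sample = [("A", False)] * 90 + [("A", True)] * 10 \
       + [("B", False)] * 70 + [("B", True)] * 30

for group, ratio in impact_ratios(sample).items():
    # 0.8 is the EEOC four-fifths rule of thumb, not an LL144 mandate.
    note = "  <- below 0.8, presumptive disparity" if ratio < 0.8 else ""
    print(f"group {group}: impact ratio {ratio:.2f}{note}")
```

In this toy data, group B is flagged as disengaged three times as often as group A, which surfaces as an impact ratio of about 0.78, the kind of number a published audit would have to report.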

Date: November 2024


Liability is accumulating in the gap between today's silence and tomorrow's enforcement. Legacy tools detect bullying; they do not detect developmental displacement or parasocial attachment.

We provide the clinical taxonomy you need to measure vertical harms before the 2026 high-risk mandates bind. Define the standard, or have it defined for you.
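The clinical taxonomy itself is the offering and is not reproduced here, but as an illustration only, a machine-readable form of such a taxonomy might look like the following sketch. The only category names taken from this page are the two harms named above; `Severity`, `HarmFinding`, and every field name are hypothetical placeholders, not the published standard.

```python
from dataclasses import dataclass, field
from enum import Enum

class HarmCategory(Enum):
    # The two vertical harms named on this page; a real taxonomy
    # would enumerate more, with clinical definitions for each.
    DEVELOPMENTAL_DISPLACEMENT = "developmental_displacement"
    PARASOCIAL_ATTACHMENT = "parasocial_attachment"

class Severity(Enum):
    MONITOR = 1
    ELEVATED = 2
    INTERVENE = 3

@dataclass
class HarmFinding:
    """One measured instance of a vertical harm.

    All field names are illustrative placeholders, chosen so that
    findings can be counted, audited, and reported per category.
    """
    category: HarmCategory
    severity: Severity
    session_id: str
    evidence: list = field(default_factory=list)

# Hypothetical usage: record a single measured finding.
finding = HarmFinding(
    category=HarmCategory.PARASOCIAL_ATTACHMENT,
    severity=Severity.ELEVATED,
    session_id="example-session",
    evidence=["sustained one-to-one attachment language across sessions"],
)
print(finding.category.value, finding.severity.name)
```

The point of the structure is the thesis of this page: harms you can name and type are harms you can measure, and measured harms are auditable before a mandate forces the question.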
