Governing the Machine
As generative AI adoption scales, the legal focus has shifted from "how to use it" to "who is liable when it fails." For in-house counsel, the reality is stark: the machine cannot be held liable, but you can.

The authors of Governing the Machine emphasize that accountability in AI is not one-size-fits-all; it exists on a spectrum dictated by the system's complexity and risk (p. 165). Navigating that spectrum requires moving beyond passive oversight toward a rigorous Human-in-the-Loop (HITL) framework. While low-consequence uses might require only initial approval, higher-risk legal applications demand active human involvement. As noted in Governing the Machine, oversight can be categorized into three approaches:

Human in Control (HIC): Humans retain ultimate authority and approve every decision made by the AI.

Human on the Loop (HOTL): The AI operates with autonomy, but humans maintain oversight and can intervene when necessary.

Human in the Loop (HITL): A collaborative ‘human plus ...