Governing the Machine

As generative AI adoption scales, the legal focus has shifted from "how to use it" to "who is liable when it fails." For in-house counsel, the reality is stark: the machine cannot be held liable, but you can be.

The authors of Governing the Machine emphasize that accountability in AI is not one-size-fits-all; it exists on a spectrum dictated by the system's complexity and risk (p. 165). Navigating this requires moving beyond passive oversight toward a rigorous Human-in-the-Loop (HITL) framework.

While low-consequence uses might only require initial approval, higher-risk legal applications demand active human involvement. As noted in Governing the Machine, we can categorize this into three approaches:

  • Human in Control (HIC): Humans retain ultimate authority and approve every decision made by the AI.
  • Human on the Loop (HOTL): The AI operates with autonomy, but humans maintain oversight and can intervene when necessary.
  • Human in the Loop (HITL): A collaborative "human plus machine" model where AI assists rather than replaces (p. 165).

For legal work, where accuracy is non-negotiable, the HITL model is the current best practice to ensure that a human remains accountable for the functioning and outputs of the AI system (p. 166).

The Danger of Automation Bias

The greatest threat to a lawyer’s professional standing is automation bias: the propensity to place undue trust in AI-generated outputs. This bias diminishes a lawyer's attention to validating results, leading them to overlook errors or follow faulty recommendations without critical evaluation (p. 52).

Recent Canadian case law, such as Ko v. Li (2025 ONSC 2766), underscores this risk. In that case, a lawyer faced contempt proceedings after their staff submitted a factum containing AI-generated "hallucinations." The court's rebuke serves as a stark reminder: you cannot delegate your ethical duty of competence to an algorithm.

The Rise of Agentic AI: New Controls Required

The next frontier is "agentic AI": systems that can execute tasks across different environments on their own. These demand even stricter technical and policy controls, including:

  • Restrictive Action Ranges: Setting default behaviors and mandating HITL before an agent takes an action.
  • Fail-Safe "Off" Switches: Ensuring an organization can disable agents immediately if they begin to fail (p. 166).
  • Attributability: Assigning unique, immutable identifiers to agents so their activities can be traced back to the originating entity (p. 166).
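The three controls above are policy requirements, but they map naturally onto system design. The following is a minimal illustrative sketch, not an implementation from the book; all class and method names are hypothetical:

```python
import uuid

class GovernedAgent:
    """Sketch of the three agentic-AI controls: a restrictive action
    range, a fail-safe off switch, and an immutable identifier for
    attributability."""

    # Restrictive action range: everything not listed is refused by default.
    ALLOWED_ACTIONS = {"draft_summary", "search_caselaw"}

    def __init__(self, owner: str):
        self.agent_id = str(uuid.uuid4())  # attributability: unique, immutable ID
        self.owner = owner                 # originating entity
        self.enabled = True
        self.audit_log = []                # traceable record of every action

    def kill(self):
        """Fail-safe off switch: disable the agent immediately."""
        self.enabled = False

    def act(self, action: str, approve) -> str:
        """Attempt an action; `approve` is a human-in-the-loop callback
        that must return True before anything executes."""
        if not self.enabled:
            return "refused: agent disabled"
        if action not in self.ALLOWED_ACTIONS:
            return f"refused: '{action}' outside allowed range"
        if not approve(action):
            return "refused: human reviewer declined"
        self.audit_log.append((self.agent_id, self.owner, action))
        return f"executed: {action}"

agent = GovernedAgent(owner="in-house-legal")
print(agent.act("draft_summary", approve=lambda a: True))  # executed
print(agent.act("file_motion", approve=lambda a: True))    # refused: outside range
agent.kill()
print(agent.act("draft_summary", approve=lambda a: True))  # refused: disabled
```

The key design choice mirrors the book's framing: the agent defaults to refusal, a human gate sits before every execution, and every completed action is logged against an identifier that traces back to the owning entity.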

Building a Culture of Contestability

True accountability requires contestability: the ability for individuals (or clients) to question or challenge decisions made by an AI system. For lawyers, this means providing clear pathways for redress when AI-driven decisions are perceived as incorrect, unfair, or biased (p. 166).

The Bottom Line: As the authors of Governing the Machine argue, a human must remain accountable. By treating AI as a high-performance clerk that requires constant, skeptical supervision, you protect not only your clients but your professional license.

Reference

Eitel-Porter, Ray. Governing the Machine. 2025. 272 pages. ISBN 9781399426299 / 139942629X.