Posts

Featured

Governing the Machine

As generative AI adoption scales, the legal focus has shifted from "how to use it" to "who is liable when it fails." For in-house counsel, the reality is that the machine cannot be held liable, but you can. The authors of Governing the Machine emphasize that accountability in AI is not one-size-fits-all; it exists on a spectrum dictated by the system's complexity and risk (p. 165). Navigating this requires moving beyond passive oversight toward a rigorous Human-in-the-Loop (HITL) framework. While low-consequence uses might only require initial approval, higher-risk legal applications demand active human involvement. Governing the Machine categorizes this involvement into three approaches:

- Human in Control (HIC): Humans retain ultimate authority and approve every decision made by the AI.
- Human on the Loop (HOTL): The AI operates with autonomy, but humans maintain oversight and can intervene when necessary.
- Human in the Loop (HITL): A collaborative ‘human plus ...

Latest posts

A Proactive Practitioner’s Guide to Section 11(b) of the Charter

On Strategists and Strategy: Collected Essays, 2014–2024

Why Language Models Hallucinate

Swimming up Niagara Falls!: The Battle to Get Disability Rights Added to the Canadian Charter of Rights and Freedoms

How to Think About AI: A Guide for the Perplexed

May Contain Lies: How Stories, Statistics, and Studies Exploit Our Biases – And What We Can Do about It

Contracting and Contract Law in the Age of Artificial Intelligence

Smart, Not Loud: How to Get Noticed at Work for All the Right Reasons

AI: Limits and Prospects of Artificial Intelligence

Legal Guide to Emerging Technologies