How to Think About AI: A Guide for the Perplexed

Richard Susskind's How to Think About AI: A Guide for the Perplexed offers an examination of artificial intelligence that is especially relevant for those working in law. Among his many observations, two insights do the most to clarify the challenges and opportunities facing legal practitioners: (1) the prevalent "AI Fallacy" and the "not-us thinking" that accompanies it, and (2) the distinction between automation, innovation, and elimination in how AI reshapes work.

First, Susskind exposes the "AI Fallacy": the mistaken belief that machines must replicate human processes (thinking, reasoning, empathizing) to achieve high-level performance (p. 53). This fallacy underpins much of the legal profession's "not-us thinking," in which lawyers readily acknowledge AI's transformative potential in other fields but resist its deep integration into law (p. 50). Consider Susskind's own revelation when GPT-4 drafted a column in his style, causing a "shiver" and prompting him to question his future as a columnist (p. 52). The point is not that AI mimics human intuition, but that it can produce outcomes that rival or exceed human output, challenging the very notion of what counts as "expertise" when it is not rooted in a "human process." The crucial realization is that, just as patients ultimately want "health" rather than doctors, clients do not inherently want "judgement" or "lawyers"; they want to "avoid legal problems" (p. 62). This shifts the focus from the internal workings of legal practice to the external delivery of desired results, irrespective of the "mind" behind them.

This reorientation leads to a second critical insight: the three-tiered impact of AI as automation, innovation, and elimination. Most discussions in law remain fixated on automation, merely computerizing existing tasks such as document review or legal research for greater efficiency (p. 25). Susskind argues that this is a "major error" that vastly underestimates AI's true transformative power (p. 74). The deeper potential lies in innovation, where AI enables entirely new approaches to problem-solving that were previously inconceivable. Online Dispute Resolution (ODR), for instance, is not simply automated litigation but an innovative way to achieve justice outcomes through radically different processes, often removing the need for physical court appearances or oral advocacy (p. 75). This directly challenges the lawyer who believes their job is safe because "no robot could stand up and plead in court"; the very need for such advocacy might be eliminated (p. 76).

Even more profound is elimination, where AI removes the problem altogether. Just as the motor car ended the "Great Manure Crisis" of the 1890s by removing the need for horses, AI can enable "dispute avoidance" in law, preventing legal problems from arising in the first place and thereby eliminating the very need for dispute resolution (p. 73). The "not-us thinking" that clings to the perceived intrinsic value of the human "process" (e.g., the lawyer's "judgement") thus becomes a blind spot, obscuring AI's capacity to deliver the desired "outcome" through entirely new, non-human methods, or to eliminate the problem that required the human expert in the first place (p. 24).

The implications for modern society are profound. If professions, particularly law, remain mired in "not-us thinking" and the "AI Fallacy," they risk becoming obsolete not because AI mimics them, but because it outmaneuvers them by addressing societal needs through fundamentally different, more effective means (p. 52). This necessitates radical structural change within institutions such as justice systems (p. 83). It calls for "vision-based restructuring," in which we ask not "What is the future of X (e.g., judges, lawyers)?" but "How in the future will we solve the problems to which X is our current best answer?" (p. 84).

The implications of the "AI Fallacy" and the spectrum of automation, innovation, and elimination extend beyond policy or economic shifts; they challenge the very cognitive frameworks through which we comprehend reality and human achievement. Our ingrained "process-thinking" and defensive "not-us thinking" reveal our biases, hindering our ability to grasp AI's capacity to yield outcomes ("quasi-judgement," "quasi-empathy," "quasi-creativity"; p. 61) that equal or surpass human capabilities yet arise from non-human processes. Consequently, this era demands a philosophical shift, one that urges us to question those common-sense conceptions. As Susskind suggests, we must engage in "what-if-AGI? thinking" to stretch and stress-test our views and instincts about AI and our future, without being inhibited by technological myopia or the belief that current limitations will persist (p. 110). Only through this open-minded contemplation can we balance "the benefits and threats of artificial intelligence—saving humanity with and from AI—[which] is the defining challenge of our age" (p. xvi).

How to Think About AI: A Guide for the Perplexed

Susskind, Richard E.

2025, Book, xvi, 202 pages

ISBN-10: 0198941927; ISBN-13: 9780198941927

