Regulating AI: Critical Issues and Choices

Amidst an ongoing debate over regulatory approaches to artificial intelligence (AI), automated decision-making (ADM), and algorithms in the legal sector, a recent report by the Law Commission of Ontario (LCO), Regulating AI: Critical Issues and Choices, highlights the striking contrast between Canada's rights-centered approach and the United States' rapid-deployment approach.

The report contrasts former US President Trump's 2019 Executive Order on Maintaining American Leadership in Artificial Intelligence with the Government of Canada's Directive on Automated Decision-Making.

Addressing the American Executive Order first, the LCO's report highlights that:

Significantly, the “Memorandum for the Heads of Executive Departments and Agencies” accompanying this Executive Order states “Federal agencies must avoid regulatory or non-regulatory actions that needlessly hamper AI innovation and growth,” clearly prioritizing innovation over rights protection. The Executive Order even states that “agencies must avoid a precautionary approach” (pg 17; emphasis added).

By way of contrast, the Government of Canada's Directive "explicitly aligns innovation with public trust and rights protection, rather than conceptualizing them in conflict with one another" (pg 18). The report cites the Directive's description, which states:

The Government of Canada is increasingly looking to utilize artificial intelligence to make, or assist in making, administrative decisions to improve service delivery. The Government is committed to doing so in a manner that is compatible with core administrative law principles such as transparency, accountability, legality, and procedural fairness. Understanding that this technology is changing rapidly, this Directive will continue to evolve to ensure that it remains relevant.

The Canadian approach, orienting innovation around rights, seems the better one. It aligns with the role of law that Joshua A. T. Fairfield envisions as an 'adaptive social technology': law that situates itself alongside technology and advances in parallel with it, thereby informing what we wish our society to be.

The rights-centered approach is also in keeping with the work of Boris Babic et al. in "When Machine Learning Goes Off the Rails," collected in HBR's 10 Must Reads 2022: The Definitive Management Ideas of the Year from Harvard Business Review. There, the authors similarly advance the importance of addressing moral risk in innovation through responsible algorithm design (pg 137). They suggest a strategy of treating AI and ADM systems as if they were human (pg 142): subjecting new systems to randomized controlled trials to ensure safety, efficacy, and fairness prior to rollout, and evaluating the quality of AI and ADM decisions by comparing them with decisions made by humans in comparable situations. Governments that fail to heed these moral risks will likely struggle to gain traction when implementing such systems.

To read more from the report, including the LCO's comprehensive framework for governmental protection of human rights, due process, and public participation in the use of AI and ADM systems, see the full report here.

Regulating AI: Critical Issues and Choices
Law Commission of Ontario
April 14, 2021, 58 pages
