The Inaugural HKU Technology Law Symposium

Rising to Legal Challenges in the Age of Artificial Intelligence

Monday, July 15, 2019
9:00am – 6:00pm

Academic Conference Room, 11/F Cheng Yu Tung Tower
University of Hong Kong Faculty of Law

Frank Pasquale, Piper & Marbury Professor of Law, University of Maryland Carey School of Law

Keynote Speech

Data-Driven Duties in the Development of Artificial Intelligence

Corporations will increasingly attempt to substitute artificial intelligence (AI) and robotics for human labor. This evolution will create novel situations for tort law to address. However, tort will be only one of several types of law at play in the deployment of AI. Regulators will try to forestall problems by setting standards, and corporate lawyers will attempt to deflect liability via contractual disclaimers and exculpatory clauses. The interplay of tort, contract, and regulation will not just allocate responsibility ex post, spreading the costs of accidents among those developing and deploying AI, their insurers, and those they harm. This matrix of legal rules will also deeply influence the development of AI, including the industrial organization of firms and capital’s and labor’s relative shares of productivity and knowledge gains.

This talk begins by describing torts that may arise from the deployment of AI and robotics (and some that have already arisen). The focus is on one particular type of failing: the use of inaccurate or inappropriate data in training sets used for machine learning. Inspired by common analogies of algorithms to recipes, I explore the degree to which patterns of liability for spoiled or poisonous food may also inform our eventual treatment of inaccurate or inappropriate data in AI systems. Health privacy regulation also provides important lessons for assuring the appropriateness and quality of data used in patient care, randomized trials, and observational research. The history of both health data and food regulation is instructive: egregious failures not only give rise to tort liability, but also catalyze regulatory commitments to prevent the problems that sparked that liability, which in turn help create new standards of care.

It is wise to preserve the complementarity of tort law and regulation, rather than opting to radically diminish or increase the role of either of these modalities of social order (as preemption, sweeping exculpatory clauses, or deregulation might do). AI law and policy should create ongoing incentives for a diverse and inclusive set of individuals to both understand and control the development and application of automation. Without imposing robust legal duties on the developers of AI, there is little chance of ensuring accountable technological development in this field. By focusing on the fundamental inputs to AI—the data used to promote machine learning—both judges and policymakers can channel the development of AI to respect, rather than evade, core legal values of fairness, due process, and equal treatment.
