Why a Right to an Explanation in AI May Not Be the Remedy You Are Looking For

Lilian Edwards
11 March 2019, 14.00 - 15.00, IF 4.31/4.33

Abstract

This paper will discuss the underlying problems around algorithmic governance, and in particular the alleged new "right to an explanation"[1] in the General Data Protection Regulation, arts 22 or 15, which is in fact not new but has existed in the DPD since 1995[2]. The problems with the right are numerous and range from the legal to the practical. Legal problems include the controversial nature of the right itself; doubts as to its timing and scope; numerous exceptions; and especially a carve-out from the right for the protection of trade secrets and intellectual property. Technical problems include that no one really knows how to present in a "meaningful" way to non-experts what goes on in the innards of a modern machine learning algorithm, which largely runs on correlation rather than causation. While there is now considerable academic literature on these issues, a number of key policy issues have not yet really been ventilated. Are legal remedies concerning algorithmic transparency really best found in data protection law? Is transparency a useful remedy at all, given the historic failure of notice and choice in privacy? Are individualistic legal remedies suitable for problems causing harms to society as a whole, or to particular groups, rather than noticeable harms to discrete individuals? And are technological fixes such as "fairness-aware" algorithms really the best solutions to acute social problems of resources and political ideology?

Most of the seminar is drawn from Edwards and Veale, "Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For" (2017) 16 Duke Law & Technology Review 18, at https://scholarship.law.duke.edu/cgi/viewcontent.cgi?referer=&httpsredir=1&article=1315&context=dltr; also available at https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2972855

[1] See B Goodman and S Flaxman, "EU regulations on algorithmic decision making and a 'right to an explanation'", 2016 ICML Workshop on Human Interpretability in Machine Learning (WHI 2016), New York, USA.
[2] Data Protection Directive, art 12(a).