The academic debate on transparency requirements in the GDPR: a brief overview

January 27th, 2018

Written by Merle Temme, a European Law School alumna whose paper on algorithmic transparency was nominated for the European Data Protection Law Review (EDPL) Young Scholars Award.

Algorithms (sequences of instructions telling a computer what to do) are becoming deeply entrenched in contemporary societies. When designed well, they are incredibly useful tools for accomplishing a great variety of tasks, simplifying human life in many different ways. Their use is not, however, uncontroversial, especially when algorithms are employed in automated decision-making (ADM) and thus make decisions with potentially life-changing consequences for individuals, without any (or with only marginal) human intervention.

It is by now well known that, like humans, algorithms can carry implicit biases and may well deliver discriminatory results. Remedies do exist – for instance, having developers factor positive social values (such as fair and equal treatment) into the algorithm at the design stage and, where those values are violated, enforcing them through anti-discrimination legislation. Rendering a system both fair and efficient, however, requires extra care and attention, and such an effort costs time and money. Operators of ADM may therefore easily be tempted to rely on less well-designed – albeit cheaper – ADM systems.

The European Union legislature decided to tackle this problem in 2016 by regulating the way in which the data forming the basis of the algorithm’s decision is processed. The EU’s overhaul of its data protection regime, the General Data Protection Regulation (GDPR), will have to be applied by the Member States from May 2018 onwards. The GDPR provides for rules such as transparency requirements, which apply to human and automated decision-making alike, but it also features special provisions pertinent to ADM alone. Not only is this intended to address the abovementioned accountability issue; greater transparency is also supposed to help the human subjects of ADM better understand which factors underpin the decisions that affect them and how the system can be held accountable.

The GDPR has been praised as ambitious and designed to bring about substantial change, aiming to make Europe ‘fit for the digital age’, but it has at the same time been criticised for being vague and ambiguous – a hybrid legal instrument mixing many aspects of a directive and a regulation. In name it is a regulation, directly applicable across the EU, albeit one that leaves many aspects to be regulated by the Member States, a feature typical of European directives.

This ambiguity has spawned an interesting debate among researchers on how the GDPR’s transparency requirements are to be interpreted in so far as ADM is concerned. Goodman & Flaxman – in a rather brief paper – entered the scene in summer 2016 by identifying a ‘right to explanation’ as the most important expression of algorithmic transparency in the GDPR, without, however, providing a strong line of argumentation for this statement or even identifying a legal basis for such a right. They identify the right to explanation as a more fully-fledged version of the right established by the Data Protection Directive of 1995 (which from May onwards will be superseded by the GDPR) and argue, first, that an algorithm can ‘only be explained if the trained model can be articulated and understood by a human’. Secondly, they hold that any adequate explanation would, at a minimum, ‘provide an account of how input features relate to predictions, allowing one to answer questions such as: Is the model more or less likely to recommend a loan if the applicant is a minority? Which features play the largest role in prediction?’.
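To give a concrete sense of what such an ‘account of how input features relate to predictions’ might look like in practice, the short sketch below fits a simple, interpretable loan-approval model on invented data and then answers Goodman & Flaxman’s two example questions from its learned parameters. It is purely illustrative and not taken from their paper: the data, feature names and model choice (a scikit-learn logistic regression) are assumptions made here for the sake of the example.

```python
# Illustrative sketch only: hypothetical loan data and feature names,
# not drawn from Goodman & Flaxman's paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical applicants: income (in 10k EUR), credit history length (years),
# and a binary protected attribute (e.g. minority status).
n = 1000
income = rng.normal(5, 2, n)
history = rng.normal(10, 4, n)
minority = rng.integers(0, 2, n)
X = np.column_stack([income, history, minority])

# Synthetic approval decisions that depend mostly on income and credit history.
logits = 0.8 * income + 0.3 * history - 0.5 * minority - 6
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression().fit(X, y)

# "Which features play the largest role in prediction?"
# For a linear model, the coefficients give a direct (if simplified) answer.
for name, coef in zip(["income", "credit_history", "minority"], model.coef_[0]):
    print(f"{name:>15}: {coef:+.2f}")

# "Is the model more or less likely to recommend a loan if the applicant is
# a minority?" Compare two otherwise identical applicants.
applicants = np.array([[5.0, 10.0, 0], [5.0, 10.0, 1]])
probs = model.predict_proba(applicants)[:, 1]
print(f"approval probability, non-minority: {probs[0]:.2f}, minority: {probs[1]:.2f}")
```

Whether even this modest level of insight is available for more complex systems in practice, and whether the GDPR actually requires it, is precisely what the subsequent papers dispute.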

Wachter, Mittelstadt & Floridi took up the gauntlet and argued, on the basis of the structure of the regulation and its drafting history, that the evidence for a right to explanation is inconclusive. Instead, they propose an alternative ‘right to be informed’ about certain aspects of the decision-making process (e.g. the purpose and legal basis of the processing). First, they claim that even if a right to explanation existed, restrictions and carve-outs in the GDPR would render its field of application very limited. Secondly, they set out the central point of their paper, the degree to which ADM can be explained in the first place: Wachter et al. make a distinction between how general or specific the explanation could be and at what point in time it would take place, only to conclude that the sole possible interpretation would be a very general explanation of ‘system functionality’ (what they name the right to be informed).

A very recent paper by Selbst & Powles, however, describes Wachter et al.’s analysis as an ‘overreaction’ to Goodman & Flaxman’s paper that ‘distorts the debate’. Their central point of critique is Wachter et al.’s analytical framework, namely the model they use to explain the degree to which the inner workings of ADM can be explained. According to Selbst & Powles, that model is nonsensical and rooted in ‘a lack of technological understanding’.  Interestingly, most of their paper is focused on debunking that model and a detailed explanation of why it does not correspond to computer programming reality. Only then do they turn to the legal text itself. By applying a holistic method of interpretation, they conclude that the regulation, requiring ‘meaningful information about the logic involved’ (in an automated decision), must contain ‘something like’ a right to explanation in order to enable the data subject to exercise her rights under the GDPR and human rights law.

How this will play out in practice will become clear once the GDPR becomes applicable in a few months and European courts have the opportunity to weigh in and decide how to interpret it in the disputes laid before them. The development of this debate so far – from purely legal arguments (Goodman & Flaxman) to a more technical analysis (Wachter, Mittelstadt & Floridi) and the rebuttal of the latter (Selbst & Powles) – is, however, remarkable: it indicates that, on a topic as complex as algorithmic transparency, legal knowledge alone is no longer enough. To win the argument, the lawyer or legal researcher of the future (or rather, the present) must have conceptual knowledge of the technology he seeks to assess – be it to criticise, regulate or use it. Not only does understanding technology and writing about it in a ‘not purely legal’ way add credibility to one’s own analysis; reproaching someone’s ‘lack of technological understanding’ may also become the most effective tool in rebutting a colleague’s arguments.
