Monthly Archives: January, 2018

The academic debate on transparency requirements in the GDPR: a brief overview

January 27th, 2018

Written by Merle Temme, a European Law School alumna whose paper on algorithmic transparency was nominated for the European Data Protection Law Review (EDPL) Young Scholars Award.

Algorithms (sequences of instructions telling a computer what to do) are becoming deeply entrenched in contemporary societies. When designed well, they are incredibly useful tools in accomplishing a great variety of tasks, simplifying human life in many different ways. Their use is not, however, uncontroversial, especially when algorithms are being used in automated decision-making (ADM) and therefore make decisions that have potentially life-changing consequences for individuals without any (or only marginal) human intervention.

It is by now well known that, like humans, algorithms can carry implicit biases and may well deliver discriminatory results. Remedies do exist – for instance, having developers factor in positive social values (like fair and equal treatment) already at the design stage of the algorithm and, in case of a violation of these values, enforcing them through anti-discrimination legislation. Rendering a system both fair and efficient, however, requires extra care and attention, and such an effort costs time and money. Operators of ADM may therefore easily be tempted to rely on less well-designed – albeit cheaper – ADM systems.
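To make the idea of building fairness checks into the design stage slightly more concrete, here is a minimal sketch (my own illustration, in Python, with hypothetical column names such as "group" and "approved") of how a developer might screen a decision system's outputs for disparate impact by comparing selection rates across groups:

import pandas as pd

# Minimal sketch: screening a decision system's outcomes for disparate impact.
# The data and the column names ("group", "approved") are hypothetical.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

# Share of positive decisions per group.
selection_rates = decisions.groupby("group")["approved"].mean()

# Disparate-impact ratio: lowest selection rate divided by the highest.
# A ratio far below 1 (e.g. under the informal "80% rule" used in
# US employment-discrimination practice) signals that the system may
# treat groups unequally and deserves closer review.
ratio = selection_rates.min() / selection_rates.max()
print(selection_rates)
print(f"Disparate-impact ratio: {ratio:.2f}")

Such a check does not by itself establish discrimination in the legal sense, but it gives designers an early warning that the system's outputs warrant scrutiny.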

The European Union legislature decided to tackle this problem in 2016 by regulating the way in which the data forming the basis of an algorithm’s decision is processed. The EU’s overhaul of its data protection regime, the General Data Protection Regulation (GDPR), will have to be applied across the Member States from May 2018 onwards. The GDPR provides for rules such as transparency requirements, which are applicable to human and automated decision-making alike, but also features special provisions pertinent to ADM alone. Not only is this intended to address the abovementioned accountability issue; greater transparency is also supposed to help human subjects of ADM better understand what factors underpin the decisions that affect them and how the system can be held accountable.

The GDPR is praised as ambitious and designed to bring about substantial change, aiming to make Europe ‘fit for the digital age’, but it has at the same time been criticised for being vague and ambiguous – a hybrid legal instrument mixing aspects of a directive and a regulation. In name it is a regulation, directly applicable across the board in the EU, albeit one that leaves many aspects to be regulated by the Member States; a feature typical of European directives.

This ambiguity has spawned an interesting debate among researchers on how the GDPR’s transparency requirements are to be interpreted in so far as ADM is concerned. Goodman & Flaxman – in a rather brief paper – entered the scene in summer 2016 by identifying a ‘right to explanation’ as the most important expression of algorithmic transparency in the GDPR, without, however, providing a strong line of argumentation for this claim or even identifying a legal basis for such a right. They present the right to explanation as a more fully-fledged version of the right established by the Data Protection Directive of 1995 (which from May onwards will be superseded by the GDPR) and argue, first, that an algorithm can ‘only be explained if the trained model can be articulated and understood by a human’. Secondly, they hold that any adequate explanation would, at a minimum, ‘provide an account of how input features relate to predictions, allowing one to answer questions such as: Is the model more or less likely to recommend a loan if the applicant is a minority? Which features play the largest role in prediction?’.
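To illustrate what such an ‘account of how input features relate to predictions’ could look like in practice, here is a minimal sketch (my own example, not drawn from Goodman & Flaxman) using permutation importance from scikit-learn on synthetic loan-style data; the feature names and figures are invented for illustration only:

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

# Synthetic loan-style data: feature names and the label rule are invented.
rng = np.random.default_rng(0)
n = 1000
income = rng.normal(50, 15, n)   # hypothetical applicant income
debt = rng.normal(20, 8, n)      # hypothetical existing debt
X = np.column_stack([income, debt])
# Hypothetical "loan granted" label, driven mostly by income minus debt.
y = (income - debt + rng.normal(0, 5, n) > 28).astype(int)

model = LogisticRegression().fit(X, y)

# Permutation importance: how much does model accuracy drop when each
# feature is randomly shuffled? Larger drops mean the feature plays a
# larger role in the model's predictions.
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)
for name, drop in zip(["income", "debt"], result.importances_mean):
    print(f"{name}: mean accuracy drop when permuted = {drop:.3f}")

Answering the first of Goodman & Flaxman’s questions – whether the model is more or less likely to recommend a loan for members of a minority group – would additionally require comparing the model’s predictions across those groups.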

Wachter, Mittelstadt & Floridi took up the gauntlet and argued, on the basis of the structure of the regulation and its drafting history, that the evidence for a right to explanation is inconclusive. Instead, they propose an alternative ‘right to be informed’ about certain aspects of the decision-making process (e.g. the purpose and legal basis of the processing). First, they claim that even if a right to explanation existed, restrictions and carve-outs in the GDPR would render its field of application very limited. Second, they set out the central point of their paper: the degree to which ADM can be explained in the first place. Wachter et al. distinguish between how general or specific an explanation could be and at what point in time it would be given, only to conclude that the only feasible interpretation is a very general explanation of ‘system functionality’ (what they name the right to be informed).

A very recent paper by Selbst & Powles, however, describes Wachter et al.’s analysis as an ‘overreaction’ to Goodman & Flaxman’s paper that ‘distorts the debate’. Their central point of critique is Wachter et al.’s analytical framework, namely the model used to explain the degree to which the inner workings of ADM can be explained. According to Selbst & Powles, that model is nonsensical and rooted in ‘a lack of technological understanding’. Interestingly, most of their paper is devoted to debunking that model and to a detailed explanation of why it does not correspond to the reality of computer programming. Only then do they turn to the legal text itself. Applying a holistic method of interpretation, they conclude that the regulation, which requires ‘meaningful information about the logic involved’ (in an automated decision), must contain ‘something like’ a right to explanation in order to enable the data subject to exercise her rights under the GDPR and human rights law.

The way this will play out in practice will become clear once the GDPR becomes applicable in a few months and European courts have the opportunity to weigh in and decide how to interpret it in the disputes laid before them. The development of this debate so far – from purely legal arguments (Goodman & Flaxman) to a more technical analysis (Wachter, Mittelstadt & Floridi) and the rebuttal of the latter (Selbst & Powles) – is, however, remarkable: it indicates that on a topic as complex as algorithmic transparency, legal knowledge alone is no longer enough. To win the argument, the lawyer or legal researcher of the future (or rather, the present) must have conceptual knowledge of the technology they seek to assess – be it to criticise, regulate or use it. Not only does understanding technology and writing about it in a ‘not purely legal’ way add credibility to one’s own analysis; reproaching someone for a ‘lack of technological understanding’ may also become the most effective tool in rebutting a colleague’s arguments.

In the shoes of a hackathon participant

January 21st, 2018

Written by Anette Piirsalu, a European Law Bachelor student at Maastricht University, Faculty of Law. Anette is interested in the interplay of law, technology and business. She plans to continue her career in privacy matters, and possibly do a further degree in ICT.

When I heard about the Brightlands Hackathon that took place at the end of November, I was immediately very excited and determined to participate. I had heard of the event before, and it seemed like a very fun experience. However, as the event came closer, these emotions were gradually replaced by a feeling of discomfort at doing something completely different. After all, I am just a law student. What would I do at a hackathon? I imagined there would be a bunch of IT and business people who could probably contribute to the projects far better than I could. So I doubted for a long time whether to actually sign up or not. Yet, as a last-minute decision, I still decided to sign up and just see what would happen!

On Friday evening I got on a train to Heerlen. Already on the way I met other students from Maastricht who were also going to the hackathon, and when we arrived at the campus, I thought to myself: “so far, so good”. The event started with idea pitching – everyone who already had an idea could present it to the rest. Others could then join the ideas they believed in most. Of course, I did not arrive there with an idea. I was just there to see what the weekend would bring. As it turned out, a lot of others had thought the same, so only a few people pitched their ideas. However, after hearing those ideas, many were inspired and came up with their own on the spot. In the end, ten teams were formed and the work began! We moved to our group room and started to develop our idea of what we wanted to do. This was definitely far from easy! The first evening was the most frustrating. We ended up ditching our original idea, yet we did not manage to come up with any new idea that would really solve the issues we were thinking about. So at the end of the first day, when I went to sleep around 3 AM, I was a bit uncertain about how the next day would go. However, Saturday started off great – we came up with the idea first thing in the morning and everything from then on went super smoothly. We had a very nice group dynamic and worked very well together. It was very exciting to develop the product and come up with a business plan. Every once in a while, different coaches would stop by and give us new techniques on how to continue. I think those techniques and methods were super useful and something I definitely took with me from the event.

On Sunday morning – the day of the pitching – you could already feel the excitement in the air. Everybody was putting the final touches on their presentations. I had not come to the event with the idea of winning; I did not really think about it at all before Sunday. However, after all our hard work, I did find myself thinking that we could actually win this. On the other hand, listening to the other teams, I heard several very strong ideas. Our pitch was the last one, so we had to wait nervously for the other nine teams to present before we got our turn. The pitch itself went by very quickly, followed by a huge feeling of relief. We had done everything we could; now it was just time to wait… And then the time came for the announcements. I cannot even describe the feeling I had when they announced us as the winners! All I know is that I was very glad I had decided to participate. I met amazing people and had such an incredible experience, which helped me figure out what I want to do in the future. So I recommend that all of you just come to the Rethinking Justice Hackathon and see what happens! I am 100% sure that you will not leave disappointed and that you will make memories that last a lifetime!

Will I ever be to blame with a level 5 vehicle?

January 14th, 2018

Written by Doryane Lemeunier, a European Law Bachelor’s student at Maastricht University, Faculty of Law. Doryane has an international background, having spent two years at the United World College in Mostar, and is currently developing her skills in a more national domain at the University of Glasgow. She is furthermore interested in pursuing her studies in the field of EU competition law at the University of Amsterdam.

Enter a world of technology and magic. Some people dream of flying, but ask around and even more people will tell you: “I wish I could teleport…”

Could that become a reality as the world’s technology evolves so rapidly? Scientists managed to teleport information in 2014, in an experiment carried out under the supervision of Dr Hanson at TU Delft’s Kavli Institute of Nanoscience.

If that ever happens, I will make sure to write an article in due time about how the law deals with that sort of technology. For now, we may not yet be able to teleport, but for the last couple of years the automobile industry has been moving towards an automated era, creating cars that can drive themselves. Wouldn’t that also call for new laws in the fields of tort and traffic? So far, little has been concretised, since fully self-driving cars have not yet made their appearance on public roads.

As the Society of Automotive Engineers (SAE International) points out, there are different levels at which a car is considered to have some degree of automation:

Level 0 – No Automation: “the full-time performance by the human driver of all aspects of the dynamic driving tasks, even when enhanced by warnings or intervention systems”

Level 1 – Driver Assistance: “the driving mode-specific execution by a driver assistance system of either steering or acceleration/deceleration using information about the driving environment and with the expectation that the human driver perform all remaining aspects of the dynamic driving task”

Level 2 – Partial Automation: “the driving mode-specific execution by one or more driver assistance systems of both steering and acceleration/deceleration using information about the driving environment and with the expectation that the human driver perform all remaining aspects of the dynamic driving task”

Level 3 – Conditional Automation: “the driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task with the expectation that the human driver will respond appropriately to a request to intervene”

Level 4 – High Automation: “the driving mode-specific performance by an automated driving system of all aspects of the dynamic driving task, even if a human driver does not respond appropriately to a request to intervene”

Level 5 – Full Automation: “the full-time performance by an automated driving system of all aspects of the dynamic driving task under all roadway and environmental conditions that can be managed by a human driver”


The Volkswagen Group has unveiled its first concept Level 5 autonomous vehicle. Named Sedric, it’s envisioned as capable of operating any driving mode in any environmental condition, allowing passengers to sit back and enjoy the ride.

Compared to everything we have had so far, this is a big technological leap, and the law will indeed have to keep up with it.

Articles about liability have already been published. But let’s sit back and look at all the potential scenarios. Could it mean that a level 5 autonomous vehicle could pick up the children from school? Could it mean that one could fall asleep or get some work done while the car transports them somewhere? Could it mean no more driver’s licences? Could it really mean one could get in the car drunk?!

These are questions that should be considered carefully by the legislature, since many such situations will probably arise once these cars are on the road. The human mind being far from flawless, and defects in manufacturing and design being possible, who will be to blame? Who will take the fall?

Michael I. Krauss has already written about the direction tort law should take in this respect. In his view, there are three possible types of defect:

  • Manufacturing defects, in which case the manufacturer should be held liable for having marketed a product that did not perform as advertised.
  • Informational defects, where manufacturers should again be held liable only if they acted negligently – that is, if a reasonable manufacturer could have provided a better warning. Beyond this, however, nothing more is said about informational defects. What if the car has given clear warnings but the user has not noticed them – would the user be held responsible?
  • Design defects, in which case, similarly to informational defects, the rule should be based on negligence.

Nothing is mentioned with regard to the liability of the user when it comes to safeguarding or controlling the vehicle in case of a defect. Can a user, then, ever be blamed when using a level 5 car?

It seems that the rules imagined so far do not cover every scenario. Suppose a person comes back drunk from a party and uses the car to get home; before the ride, the car warns the user of an informational defect, but, being drunk, the user does not notice and an accident follows. Should the user then be held liable?

Take the same situation, but this time a child over 15 gets in the car, does not necessarily understand what the car is trying to warn him about, and drives off.

Should there be any more safeguards? Should there still be a minimum age? Should there still be a sobriety level to be respected?

What happens if the police try to stop the car for a stop and search – how will the car notice if the user is sleeping? Should that involve new devices, for example electronic devices that make sure the car pulls over? Wouldn’t that undermine the security of the system if put into the wrong hands?

Many questions are still to be answered; self-driving cars are definitely a security topic that the legislature should think through very carefully before putting them on the road.

Geneva Motor Show 2017: VW Group unveils ‘Sedric’

You Don’t Need to Be a Superhero to Be in the Justice League: Rethinking Justice Hackathon (3-4 March 2018)

January 3rd, 2018

Why Do We Need Another Hackathon?

Making the world a better place is easier said than done. That is why every contribution to the noble cause embraced by humanity through the Sustainable Development Goals matters: success is the sum of small efforts. Ours is a shared world: citizens, businesses, states and institutions all face the same risks and challenges, and so there is a constant need for society to innovate – to find better ways of doing things. Ideally, that innovation brings about more justice in the world. What we mean by justice is simply more fairness in the way in which citizens, civil society, businesses and public institutions interact with one another.

While thinking about a broad theme has its advantages, we want to create a nurturing environment and mindset in which someone with an idea can go ahead and do something about it. This is how the Rethinking Justice Hackathon came to life: students, staff and alumni from Maastricht University, as well as friends from industry, coming together in a 24-hour hackathon to celebrate free thinking and enthusiastic doing.

We want to hold a hackathon that celebrates rethinking justice in all its dimensions: individual, social, commercial, political or cultural. To accommodate this wide range of options while making sure that our participants can focus their attention on narrower topics, we created four different challenges. Each challenge is led by a Partner who operates in a field relevant to the theme of the hackathon and who will be actively involved in pre-hackathon events. The four Partners we are collaborating with for this edition are: The Hague Institute for the Innovation of Law (Social Justice challenge); eBay (E-Commerce Conflicts challenge); Dubai International Financial Centre (DIFC) Courts (Courts of the Future challenge); and Maastricht University’s Institute of Data Science (Data-Driven Justice challenge). Each of the Partners will host a workshop so that participants can understand how to relate to the challenges from the perspective of their own disciplines and expertise, while also immersing themselves in our Partners’ way of thinking:

– for The Hague (HiiL) and Brussels (eBay), we will arrange transportation so that students can jump on a bus and be brought to our partners’ venues;

– for Maastricht (IDS), the workshop will be held in the IDS HQ at the Brightlands Health Campus;

– we will also facilitate an online workshop given by DIFC Courts.

Hackathons as Education at Maastricht University

As one of the youngest Dutch universities, Maastricht has always stood out because of its Problem-Based Learning (PBL) pedagogy: departing from real-life problems and learning by doing, either through independent inquiry or group collaboration. For this reason, we consider hackathons and PBL to be a match made in heaven: creativity, leadership, perseverance, empathy, communication – all of these 21st-century skills that are so central to modern work have friendly roots in the pedagogical concepts of a Maastricht University education.

Real Interdisciplinarity

We want this event to take the shape of a hackathon because we believe in the creativity and stimulation it generates. Because of its nature, we expect participants to engage all their prior knowledge not only in coming up with concepts but in actually starting to execute them, as far as that is possible. For teams to come up with well-rounded ideas, interdisciplinarity is one aspect we are heavily invested in: we want to attract enough interest from disciplines that complement each other and can gain a lot of mutual benefit, which in turn raises the quality of the presented projects.

Our main goal is to ensure the highest quality possible for an educational hackathon experience. For this to happen, participants must gather as much prior knowledge as possible, and together with the Partners of the challenges we will facilitate that knowledge acquisition. To that end, we will also run an additional series of workshops on the day of the hackathon itself, on a need-to-train basis. Teams encountering issues while working on the challenges will be able to get tailored coaching for their problem-solving needs. For such workshops, we will issue online badges (Badgr), so participants can display their skills mastery on social media.

Registration and Team Division

Participants register individually and rank the four challenges according to preference. We will match their preferences across disciplines and backgrounds to ensure as much team diversity as possible, and create teams which will be allocated to the different challenges.

We have 100 places open for participants, who need to apply individually, and if selected, will be placed in teams on the basis of indicated preferences, so we can make sure everyone gets to enjoy a REAL interdisciplinary experience. We will announce the Hackathon participants on 3 February! Final participants will be charged a €24 participation fee upon confirmation (that’s investing €1 per hour in a 24-hour skills training event and getting a lot of free food, drinks and road trips in return).

Outcomes

The main purpose of this hackathon is educational (hackathons = PBL). However, apart from the experience itself that we aim to offer our participants, and apart from the value of this activity for their employability, we want to make sure that good ideas can be followed up on. To this end, we will choose the two best projects, one in social innovation and one in commercial innovation:

The Hague Institute for the Innovation of Law will offer further coaching to members of the first winning team with a view to applying to the Innovating Justice Accelerator, where they can go on to win €20,000 from the Dutch Ministry of Justice to develop their project. The Brightlands Techruption Incubator will offer guidance and counselling to members of the second winning team, to explore whether the project can be incubated at Brightlands as a start-up. All participants will receive an online badge for their overall participation in the Hackathon.

Get ready for a challenging, intense, creative and satisfying justice-hacking experience! Check us out and don’t forget to apply by 1 February!
