Posts by multitasker

The role of the STOA – Where do MEPs get their data when it comes to technology?

February 17th, 2018 – Posted in Uncategorized

Written by Doris Bogunović who shares regular blog posts with us on the role of the European institutions working on issues related to technology. She has a legal background, a keen interest in technology as well as experience with both the Court of Justice of the European Union and the European Parliament.

The common claim that Europe is led by technocrats might be somewhat unjustified, at least when it comes to the European Parliament. Members of the Parliament are often not experts in the areas they deal with, and they work with consulting bodies that help them in the decision-making and policy-forming processes. The most common way for an MEP to stay informed is through a well-organised team in his or her cabinet. Every MEP has an office staff, usually comprised of permanent assistants, contractual agents and interns. Since every MEP is a member of various delegations, conferences, committees or sub-committees (in which he or she represents the interests of the Member State in which he or she was elected), this staff is in charge of research on all of the topics the MEP works on within those bodies. But where else do MEPs get their information when it comes to technology specifically? The answer: from a consulting body called STOA.

STOA, the European Parliament's Office for Scientific and Technological Options Assessment, was officially launched in March 1987. By 2003 the Office had its own set of rules, and today it serves the European Parliament (EP) on a permanent basis, carrying out the important task of providing independent and impartial information on science and technology to the Parliament's Committees and other parliamentary bodies – specifically, researching scientific and technological developments and opportunities, as well as their risks and implications.

STOA's activities consist of conducting Technology Assessment and Scientific Foresight projects and of organising workshops, expert discussions and visits to scientific and technical institutions. Any EP Member or EP body may submit a proposal to the STOA Panel for STOA activities to be carried out.

There are 25 members of the STOA Panel appointed for a renewable two-and-a-half-year period by nine EP Committees:

  1. Committee on Industry, Research and Energy/ITRE
  2. Committee on Employment and Social Affairs/EMPL
  3. Committee on the Environment, Public Health and Food Safety/ENVI
  4. Committee on the Internal Market and Consumer Protection/IMCO
  5. Committee on Transport and Tourism/TRAN
  6. Committee on Agriculture and Rural Development/AGRI
  7. Committee on Legal Affairs/JURI
  8. Committee on Culture and Education/CULT
  9. Committee on Civil Liberties, Justice and Home Affairs/LIBE

The Panel is expected to meet at least six times a year.

STOA's thematic priority areas are:

  1. eco-efficient transport and modern energy solutions
  2. sustainable management of natural resources
  3. potential and challenges of the internet
  4. health and new technologies in the life sciences
  5. science policy, communication and global networking

The Vice-President of the European Parliament responsible for STOA is the Spanish politician Ramón Luis Valcárcel Siso, a member of the EPP group (Christian Democrats). He is not only an ardent supporter of digitalisation but also a rather down-to-earth opponent of misogyny and hate speech, even when it comes from conservative forces.

Furthermore, STOA cooperates with bodies beyond the Parliament, such as the European Parliamentary Technology Assessment (EPTA) network. EPTA is a network of technology assessment (TA) institutions set up in 1990 to advise parliaments across Europe. Today it has 13 full members, one of them being STOA. EPTA organises annual conferences and promotes cooperation between parliamentary bodies. EPTA members (individual institutions like STOA) act as permanent advisers to their parliaments, helping with the decision-making process by carrying out technology assessment studies on the parliaments' behalf.

EPTA's goal is to 'provide impartial and high quality accounts and reports of developments in issues such as for example bioethics and biotechnology, public health, environment and energy, ICTs, and R&D policy'. A good example of EPTA's work is the 2017 Mobility Report on mobility pricing in different countries and their future plans for tackling mobility issues, compiled from contributions submitted by EPTA members.

What do you think – is Europe full of technocrats? How does it work in your country? Should parliamentarians be specialists in certain areas or just well-informed decision makers? How objective do you think the information delivered to them is? Are the reports they draft available to the public as well?

Self-driving vehicles: time for lawyers to keep their eyes on the road

February 11th, 2018 – Posted in Uncategorized

Written by Caroline Calomme, Technolawgeeks’ co-founder and product manager for connected car services at Be-Mobile. This blog post is based on a speech given at ‘L’intelligence artificielle au coeur de l’entreprise’ organized by CMS Belgium in October 2017 and what I’ve learned from my wonderful new colleagues.

We've always feared disruptive inventions in the field of transport. That's nothing new. The first trains? People believed that the journey would melt their bodies. The passengers wouldn't be able to breathe at such a high speed and their eyes would be damaged. The train rides could cause instant insanity. Even worse, the trains would make women's uteruses fly out. Trains weren't the only mode of transport fueling the public's anxiety. In the United Kingdom, the first cars were banned from travelling faster than 2 mph (3.2 km/h) in the city. Even bicycles were considered extremely dangerous. Those who dared to try this engine of death ran the risk of suffering from a terrible medical condition: the bicycle face. The speed and "the unconscious effort to maintain one's balance" would leave you disfigured and scarred for life. In comparison to this, our reactions to autonomous cars seem almost reasonable.

Do we really understand the technology though? To many, artificial intelligence and mobility are synonymous with self-driving vehicles. It's the first picture that comes to mind. Yet we're only at the very start, with not-so-glamorous – although very practical – applications such as parking assistance, speed adaptation or lane centering. Autonomous vehicles fascinate us but we shouldn't confuse the potential of this technology with the reality. Just because they're featured in movies and TV shows doesn't mean we've gotten that far. You'll still have to wait a while before taking a nap in your car after a long day at work. If you don't believe me, have a look at those articles in Wired, Forbes, TechCrunch or the Huffington Post.

Of course, this doesn't mean that we shouldn't start reflecting on the policy implications (see also Doryane's post). When they're introduced on the market, self-driving cars will disrupt insurance schemes as we know them and raise serious ethical concerns. Do we want to build cars which obey every single rule on the road? Or shall we program them to know when it's best not to follow rules to the letter? In the first scenario, we'll need to clarify the hierarchies between the obligations to avoid situations where it's impossible to obey the law without breaking another rule (yes, this is an actual possibility since the driving code is still written by fallible humans). In the second scenario where the vehicles are taught to think like us, there's a chance that they'll also do the math: costs of the fine < benefits from driving faster… There's still a lot to think about!
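To make the first scenario a little more concrete, here is a purely illustrative sketch (a toy example of my own, not any manufacturer's actual logic) of how conflicting obligations could be resolved through an explicit hierarchy. The rule names and priorities are invented for the illustration.

```python
# Purely illustrative: resolving conflicting traffic rules through an
# explicit priority ranking. Rule names and priorities are invented,
# not taken from any real driving code.

RULE_PRIORITY = {
    "avoid_collision": 0,      # most serious rule to break
    "yield_to_emergency": 1,
    "respect_speed_limit": 2,
    "keep_right": 3,           # least serious in this toy hierarchy
}

def choose_action(candidate_actions):
    """Pick the candidate whose most serious violation is least serious.

    Each candidate is a (action_name, violated_rules) pair; when no
    action is fully rule-compliant, the hierarchy decides.
    """
    def severity(violated_rules):
        if not violated_rules:
            return len(RULE_PRIORITY)   # no violation at all: best score
        return min(RULE_PRIORITY[rule] for rule in violated_rules)

    return max(candidate_actions, key=lambda action: severity(action[1]))

# Staying in lane would break "avoid_collision"; swerving only breaks
# "keep_right", so the hierarchy picks the swerve.
print(choose_action([
    ("stay_in_lane", ["avoid_collision"]),
    ("swerve_right", ["keep_right"]),
]))
```

Deciding what that ranking should look like, and who gets to write it, is of course exactly where the legal and ethical debate starts.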

Nonetheless, we tend to focus so much on the vehicles that we forget the infrastructure. That's unfortunate because that's where we'll see technological advances happening in the near future. Cars don't interact only with one another, but also with road signs, traffic lights and much more (check out the European Commission's website for more information on vehicle-to-infrastructure policies). Here's a very concrete example. Today, we have dynamic boards on the road. Sometimes, they indicate a new maximum speed, due to road works for instance. Instead of only sending information to the cars, the boards can also receive information from them (for the techies among you, it's of course a figure of speech). Imagine the next step: displaying the ideal speed at which cars should be driving based on the current traffic flow. This decreases the probability of accidents caused by hurried commuters and ensures that drivers do not slow down for no reason.
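As a thought experiment, here is a minimal sketch of how such a board could turn the speeds reported by vehicles upstream into a displayed advisory speed. The smoothing factor, rounding and thresholds are all made up for the illustration.

```python
# Hypothetical sketch of a variable speed advisory: derive the speed
# shown on a dynamic board from speeds reported by vehicles upstream.
# All thresholds below are invented for illustration.

def advisory_speed(reported_speeds_kmh, legal_limit_kmh=120, floor_kmh=50):
    """Suggest a display speed based on the current traffic flow.

    Returns a value rounded down to the nearest 10 km/h, clamped
    between a safety floor and the legal limit.
    """
    if not reported_speeds_kmh:
        return legal_limit_kmh                 # no data: show the limit

    average = sum(reported_speeds_kmh) / len(reported_speeds_kmh)
    # Advise slightly below the observed average to smooth the flow
    # and dampen stop-and-go waves.
    suggestion = max(floor_kmh, min(legal_limit_kmh, average * 0.9))
    return int(suggestion // 10 * 10)

# Dense but still-moving traffic: the board drops from 120 to 70.
print(advisory_speed([82, 75, 80, 78, 85]))    # -> 70
```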

True, we don't need the infrastructure; we have apps. But let's not forget that almost 20% of the population is over 65. True, we can't always rely on the drivers following the advice. It's a fact that until we remove the drivers from the equation by sending the information directly to an intelligent vehicle, we'll unfortunately need to count on common sense (this video on how ghost jams start illustrates why that doesn't always work). On the bright side, there's already a lot of data to learn from: the correlation between the number of trucks and the decrease in speed, the exits and times where traffic slows down the most, the impact of the weather, etc. If we took a step back, we'd realize that artificial intelligence can also help us reduce traffic jams even before controlling the vehicles (some inspiration here). While a vehicle parking itself in a crowded city when you go shopping has its perks, this tangible progress would already have a great impact, as anyone who needs to drive to work can probably attest.
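For a flavour of what learning from that data can mean long before any vehicle is controlled, here is a small hypothetical sketch that fits a linear model linking the share of trucks and the rainfall on a road segment to its average speed. The observations are synthetic; only the method (ordinary least squares) is real.

```python
# Hypothetical sketch: estimate how the share of trucks and rainfall
# relate to the average speed on a road segment, via ordinary least
# squares. The observations below are synthetic.
import numpy as np

# Each row: [share of trucks (0-1), rainfall (mm/h)]; target: speed (km/h)
features = np.array([
    [0.05, 0.0],
    [0.10, 0.0],
    [0.20, 2.0],
    [0.30, 0.0],
    [0.15, 5.0],
    [0.25, 3.0],
])
speeds = np.array([112.0, 108.0, 95.0, 92.0, 99.0, 90.0])

# Add an intercept column and solve the least-squares problem.
design = np.column_stack([np.ones(len(features)), features])
coef, *_ = np.linalg.lstsq(design, speeds, rcond=None)
intercept, per_truck_share, per_mm_rain = coef

print(f"baseline speed          ~ {intercept:.1f} km/h")
print(f"effect of +10% trucks   ~ {per_truck_share * 0.10:+.1f} km/h")
print(f"effect of 1 mm/h rain   ~ {per_mm_rain:+.1f} km/h")
```

Even a model this crude hints at where and when congestion builds, which is exactly the kind of insight traffic managers can act on today.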

But let's get back to the key role of the infrastructure. Have you ever waited in front of a red traffic light at a crossroad where all the other traffic lights also happen to be red? And where the light is green for pedestrians although they're nowhere near the crossroad? If only you could let the traffic light know this makes no sense… That wouldn't be very practical because every driver would send requests and ask for priority. But what about vehicles transporting dangerous goods? Ambulances? Public buses which are already 10 minutes behind schedule? Even better, the traffic light could detect that 10 vehicles are waiting in one direction while there's only one vehicle in the other direction and could take this into account. It could also recognize an elderly or disabled person who needs a little bit more time to cross (on that note, 'there's an app for that').
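Here is a toy sketch of the kind of logic such a traffic light could run, with a queue length and a set of priority flags per approach. The weights are invented, and a real controller would obviously operate under far stricter safety and fairness constraints.

```python
# Toy sketch of a priority-aware traffic light: pick which approach
# gets the green phase based on queue length plus bonuses for special
# vehicles. The weights are invented for illustration.

PRIORITY_BONUS = {
    "ambulance": 1000,        # emergency services always win
    "dangerous_goods": 50,
    "delayed_bus": 20,
}

def pick_green(approaches):
    """approaches maps a name to {'queue': waiting vehicles,
    'special': list of priority flags}; returns the name to turn green."""
    def score(state):
        total = state["queue"]
        for flag in state.get("special", []):
            total += PRIORITY_BONUS.get(flag, 0)
        return total

    return max(approaches, key=lambda name: score(approaches[name]))

# Ten cars wait northbound and only one eastbound, but the eastbound
# vehicle is an ambulance, so it gets the green phase.
print(pick_green({
    "north": {"queue": 10, "special": []},
    "east": {"queue": 1, "special": ["ambulance"]},
}))   # -> "east"
```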

It’s time to shift the policy and legal debate to the real world. While I admire the willingness not to be outrun by technology once again, policymakers and legal experts might be overlooking fundamental advances that are a lot easier to implement than self-driving vehicles and also raise questions of liability, cybersecurity, public procurement, intellectual property, competition law, data protection, etc.

Do I own my avatar and its belongings?

February 4th, 2018 – Posted in Uncategorized

Written by Kristopher Badurek, a Bachelor student of Maastricht University’s European Law School. He is a tech enthusiast who puts emphasis on interdisciplinarity and self-development.

Massively Multiplayer Online (MMO) games continue to maintain their popularity, with new games spawning more frequently than ever. And yet, certain problems have always plagued players of such games. Many players have, at least once, logged into their accounts only to find their beloved equipment gone. At times like this, many questions run through the heads of troubled players. One of them is: did I actually own that equipment?

Generally, all objects usable by the players, be it swords, blocks or currency, as well as the avatars themselves, fall under the umbrella term of virtual property. They are believed to be more than simple code, mostly due to their perceived fungibility and their ability to create a sense of personal attachment. The term virtual property seems to imply that it is a type of property. From the lay perspective, it can indeed be perceived as property: it can be possessed, used and enjoyed by the players. However, the situation is not as simple as it may initially seem.

Legally speaking, virtual property does resemble property to some extent. In his heavily influential work, Joshua Fairfield bases the similarity to real property on three factors: rivalrousness, persistence and interconnectivity. Both in virtual and in real life, owning a piece of property comes with the possibility of excluding others from its use. Whether it's a real house or one in Second Life, its owner may invite some guests and decline others' visits as he deems fit. Both types of property are also persistent. Like its real counterpart, a virtual house will not disappear without a trace for no reason, even after the player turns off his computer. It is still there, somewhere, waiting for its owner to return. Lastly, both virtual and real property are interconnected. Upon inviting a friend to a house, be it real or virtual, both the player and his friend will be able to experience the same objects in the same place, even though the friend does not own them.

However, unlike real property, all virtual property is burdened with certain limitations. It is those limitations that effectively block its legal recognition as property. They stem from the terms of service, which every player must accept before entering a virtual world. This so-called End-User License Agreement (EULA) controls virtually every action of the user. Most of these licenses pre-emptively require a waiver of any potential right to any virtual property that the player may amass over the time spent in game. This does not change the fact that the player is allowed to use, enjoy, and sometimes even profit from his virtual property, but at the end of the day, those objects are still factually owned by the virtual world's developers. In other words, virtual property is simply not owned by the player, and any rights to it the player may have are derived from the license granted by the developers.

The academic debate on transparency requirements in the GDPR: a brief overview

January 27th, 2018 – Posted in Uncategorized

Written by Merle Temme, a European Law School alumna whose paper on algorithmic transparency was nominated for the European Data Protection Law Review (EDPL) Young Scholars Award.

Algorithms (sequences of instructions telling a computer what to do) are becoming deeply entrenched in contemporary societies. When designed well, they are incredibly useful tools in accomplishing a great variety of tasks, simplifying human life in many different ways. Their use is not, however, uncontroversial, especially when algorithms are being used in automated decision-making (ADM) and therefore make decisions that have potentially life-changing consequences for individuals without any (or only marginal) human intervention.

It is by now well known that, like humans, algorithms can carry implicit biases and may well deliver discriminatory results. Remedies do exist – for instance, having developers factor in positive social values (like fair and equal treatment) already at the design stage of the algorithm and, in case of a violation of these values, enforcing them through anti-discrimination legislation. Rendering a system both fair and efficient, however, requires extra care and attention and such an effort will cost time and money. Operators of ADM may therefore easily be tempted to rely on less well-designed – albeit cheaper – ADM systems.
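To give a sense of what factoring fair and equal treatment in at the design stage can look like in its simplest form, here is a hedged sketch of a demographic-parity check comparing approval rates across two groups. The decisions are made up, and the 0.8 threshold merely echoes the informal 'four-fifths rule' sometimes used as a rough screen for disparate impact.

```python
# Hedged sketch: a demographic-parity check on the outputs of an
# automated decision-making system. The decisions below are synthetic;
# the 0.8 threshold echoes the informal "four-fifths rule".

def approval_rate(decisions):
    """Share of positive decisions (1 = approved, 0 = rejected)."""
    return sum(decisions) / len(decisions)

def parity_ratio(group_a, group_b):
    """Ratio between the two groups' approval rates (1.0 = parity)."""
    rate_a, rate_b = approval_rate(group_a), approval_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% approved

ratio = parity_ratio(group_a, group_b)
print(f"parity ratio: {ratio:.2f}")
if ratio < 0.8:
    print("potential disparate impact -- flag the system for review")
```

A check like this is obviously no substitute for a legal assessment, but it shows that at least some fairness values can be written down and tested before a system ever goes live.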

The European Union legislature decided to tackle this problem in 2016 by regulating the way in which the data forming the basis of the algorithm's decision is being processed. The EU's overhaul of its data protection regime, the General Data Protection Regulation (GDPR), will have to be applied by the Member States from May 2018 onwards. The GDPR provides for rules such as transparency requirements, which are applicable to human and automatic decision-making alike, but also features special provisions which are pertinent to ADM alone. Not only is this intended to address the abovementioned accountability issue; greater transparency is also supposed to help human subjects of ADM to better understand what factors underpin the decisions that affect them and how the system can be held accountable.

The GDPR is praised as ambitious and designed to bring about substantial change, aiming to make Europe 'fit for the digital age', but it has at the same time been criticised for being vague and ambiguous – a hybrid legal instrument mixing many aspects of a directive and a regulation. In name it is a regulation, directly applicable across the board in the EU, albeit one that leaves many aspects to be regulated by the Member States – a feature typical of European directives.

This ambiguity has spawned an interesting debate among researchers on how the GDPR’s transparency requirements are to be interpreted in so far as ADM is concerned. Goodman & Flaxman – in a rather brief paper  – entered the scene in summer 2016 by identifying a ‘right to explanation’ as the most important expression of algorithmic transparency in the GDPR, without, however, providing a strong line of argumentation for this statement or even identifying a legal basis for such a right. They identify the right to explanation as a more fully-fledged version of the right established by the Data Protection Directive of 1995 (which from May onwards will be superseded by the GDPR) and argue first, that an algorithm can ‘only be explained if the trained model can be articulated and understood by a human’. Secondly, they hold that any adequate explanation would, at a minimum, ‘provide an account of how input features relate to predictions, allowing one to answer questions such as: Is the model more or less likely to recommend a loan if the applicant is a minority? Which features play the largest role in prediction?’.
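To make that minimum a little more tangible, here is a hypothetical sketch of the kind of per-feature account Goodman & Flaxman seem to have in mind, applied to a toy linear loan-scoring model. The weights and applicant values are invented; attributing the score to each feature as weight times deviation from the average applicant is just one simple way of showing which features play the largest role.

```python
# Hypothetical sketch: explain one decision of a toy linear loan-scoring
# model by showing how much each feature pushed the score up or down
# relative to the average applicant. Weights and values are invented.

WEIGHTS = {"income": 0.6, "debt_ratio": -1.2, "years_employed": 0.3}
BASELINE = {"income": 0.0, "debt_ratio": 0.0, "years_employed": 0.0}

def explain(applicant, weights=WEIGHTS, baseline=BASELINE):
    """Return per-feature contributions to the score, largest first."""
    contributions = {
        name: weights[name] * (applicant[name] - baseline[name])
        for name in weights
    }
    return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)

# Inputs are assumed to be standardised, so 0 means "average applicant".
applicant = {"income": -0.5, "debt_ratio": 1.8, "years_employed": 0.2}
for feature, contribution in explain(applicant):
    print(f"{feature:>15}: {contribution:+.2f}")
# The high debt ratio pulled this applicant's score down the most.
```

Whether the GDPR actually requires anything like this output is, as the next two papers show, precisely what is in dispute.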

Wachter, Mittelstadt & Floridi took up the gauntlet and argued, on the basis of the structure of the regulation and its drafting history, that the evidence for a right to explanation is inconclusive. Instead, they propose an alternative ‘right to be informed’ about certain aspects of the decision-making process (e.g. the purpose and legal basis of the processing). First, they claim that even if a right to explanation existed, restrictions and carve-outs in the GDPR would render its field of application very limited. Secondly, they set out the central point of their paper, the degree to which ADM can be explained in the first place: Wachter et al. make a distinction between how general or specific the explanation could be and at what point in time it would take place, only to conclude that the sole possible interpretation would be a very general explanation of ‘system functionality’ (what they name the right to be informed).

A very recent paper by Selbst & Powles, however, describes Wachter et al.’s analysis as an ‘overreaction’ to Goodman & Flaxman’s paper that ‘distorts the debate’. Their central point of critique is Wachter et al.’s analytical framework, namely the model they use to explain the degree to which the inner workings of ADM can be explained. According to Selbst & Powles, that model is nonsensical and rooted in ‘a lack of technological understanding’.  Interestingly, most of their paper is focused on debunking that model and a detailed explanation of why it does not correspond to computer programming reality. Only then do they turn to the legal text itself. By applying a holistic method of interpretation, they conclude that the regulation, requiring ‘meaningful information about the logic involved’ (in an automated decision), must contain ‘something like’ a right to explanation in order to enable the data subject to exercise her rights under the GDPR and human rights law.

The way this will play out in practice will become clear once the GDPR becomes applicable in a few months and European courts have the opportunity to weigh in and decide how to interpret it in the disputes laid before them. The development of this debate so far – from purely legal arguments (Goodman & Flaxman) to a more technical analysis (Wachter, Mittelstadt & Floridi) and the rebuttal of the latter (Selbst & Powles) – is, however, remarkable: it indicates that on a topic as complex as algorithmic transparency, legal knowledge alone is no longer enough. To win the argument, the lawyer or legal researcher of the future (or rather, the present) must have conceptual knowledge of the technology he seeks to assess – be it to criticize, regulate, or use it. Not only does understanding technology and writing about it in a 'not purely legal' way add credibility to one's own analysis; reproaching someone for a 'lack of technological understanding' may also become the most effective tool in rebutting a colleague's arguments.

In the shoes of a hackathon participant

January 21st, 2018 – Posted in Uncategorized

Written by Anette Piirsalu, a European Law Bachelor student at Maastricht University, Faculty of Law. Anette is interested in the interplay of law, technology and business. She plans to continue her career in privacy matters, and possibly do a further degree in ICT.

When I heard about the Brightlands Hackathon that took place at the end of November, I was immediately very excited and determined to participate. I had heard of the event before, and it seemed like a very fun experience. However, as the event came closer, these emotions were gradually replaced by a feeling of discomfort about doing something completely different. After all, I am just a law student. What would I do at a hackathon? I imagined there would be a bunch of IT and business people who could probably contribute to the projects far better than I could. Thus, I had serious doubts about whether to actually sign up or not. Yet, as a last-minute decision, I decided to sign up and just see what would happen!

On Friday evening I got on a train to Heerlen. Already on the way I met other students from Maastricht who were also going to the hackathon, and when we arrived at the campus, I thought to myself "so far, so good". The event started with idea pitching – everyone who already had an idea could present it to the rest. Others could then join the ideas they believed in the most. Of course, I did not arrive there with an idea. I was just there to see what the weekend would bring. As it turned out, a lot of others had thought the same, so only a few people pitched their ideas. However, after hearing the different ideas, many were inspired and came up with ideas on the spot. In the end, ten teams were formed and the work began! We moved to our group room and started to develop our idea of what we wanted to do. This was definitely far from easy! The first evening was the most frustrating. We ended up ditching our original idea, but we did not manage to come up with any new idea that would really solve the issues we were thinking about. So, at the end of the first day, when I went to sleep around 3AM, I was a bit uncertain about how the next day would go. However, Saturday started off great – we came up with our idea first thing in the morning and everything from then on went super smoothly. We had a very nice group dynamic and we worked very well together. It was very exciting to develop the product and come up with a business plan. Every once in a while, different coaches would stop by and give us new techniques on how to continue. I think those different techniques and methods were super useful and something I definitely took with me from the event.

On Sunday morning – the day of the pitching – you could already feel the excitement in the air. Everybody was putting the final touches on their presentations. I did not come to the event with the idea of winning; I did not really think about it at all before Sunday. However, after all our hard work, I did find myself thinking that we could actually win this. On the other hand, while listening to the other teams, I heard several very strong ideas. Our pitch was the last one, so we had to wait nervously for all the other nine teams to present before we got our turn. The pitch itself went by very quickly, followed by a huge feeling of relief. We had done everything we could; now it was just time to wait… And then the time came for the announcements. I cannot even describe the feeling I had when they announced us as winners! All I know is that I was very glad that I had decided to participate. I met amazing people and had such an incredible experience, which helped me figure out what I want to do in the future. So I recommend that all of you just come to the Rethinking Justice Hackathon and see what happens! I am 100% sure that you will not leave disappointed and that you will make memories that last a lifetime!
