This evening we had a good exchange of thoughts on artificial intelligence and machine learning (ML). ML has a long history, but computing power is now sufficient and recent innovations have made it practical. It is becoming more widely used, with an impact on many parts of life. Overall, our conversation was not so much about easy solutions (there are none); we mainly discussed the new questions ML raises.
The transparency and control that GDPR provides are positive things in themselves. It may be hard to reach the same level of fairness with ML, but we shouldn't be as pessimistic as Greenfield.
How do we design in the context of ML? Is it possible to make explainable machines at all? For rule-based systems it is, as the rules applied can be reconstructed. But in general it may be hard to explain an ML system unless it was designed to be explainable. Could a better strategy be to design an experience or dialogue with the machine that gives insight into the choices it makes? Do we need to regulate that? And should regulation target ML as a process, or its outcomes?
It is clear that there is often bias in these systems, introduced either by the makers or by the data they are fed. A good strategy is, of course, to have more diverse makers. This also relates to culture: approaches differ by region. Just think of the cultural differences between the US (business-centered), China (government-centered), and Europe (which tries to combine all kinds of backgrounds).
Thinking about design methods for ML, we touched on Value Sensitive Design. Making sure we consider top-priority values and multiple stakeholders could be a recipe to 'control' the black box and ensure it behaves the right way. Ultimately it is a question of who is responsible for assuring this. Depending on the situation, we may prefer a human-in-the-loop, human-out-of-the-loop, or human-on-the-loop system. Think of Tesla marketing self-driving cars as autonomous, while in reality the driver is still responsible and should at least be on the loop.
The whole evening was an interactive session. We recorded the conversation and asked all attendees whether they were okay with publishing the footage. Hence the short text here.
Watch the video recording
As we learnt in one of the earlier editions on GDPR/AVG, transparency and control over personal data are key concepts in the regulations. These might, however, be hard to implement. Various experts have expressed concerns about the applicability of these regulations to AI and about the impact of these systems on decision making.
So what can we contribute as tech workers?
We are happy to have Maaike Harbers joining us to set the stage with an interview and conversation.
Maaike Harbers is a research professor in Artificial Intelligence & Society at Knowledge Center Creating 010, and a senior lecturer in the Creative Media and Game Technologies program, both at Rotterdam University of Applied Sciences. Her work focuses on artificial intelligence, ethics and design. She studies how designers can create interactive, intelligent technology in a responsible way by accounting for the ethical implications of their concepts at design time. She received a PhD from Utrecht University in 2011 on the topic of Explainable Artificial Intelligence.
Afterwards we will discuss the topics addressed in groups and share insights, before closing with drinks.
We meet at Sensor Lab in Utrecht on Monday, June 4. We start at 19:00 and wrap up around 21:00 (doors open at 18:30). The admission fee is 5 euros, payable at the door.
To RSVP, send an email to firstname.lastname@example.org. Hope to see you there!