On February 19, 2020, the EU Commission published a White Paper on Artificial Intelligence (AI). In the paper the Commission describes the possibilities and challenges of AI and presents options for future regulation.
At the same time, the Commission launched a public consultation to which everyone is invited to respond by 14 June. Join in and speak up – for artificial intelligence without discrimination and a clear ban on facial recognition in public places!
It is not always easy to get involved in European debates and legislation. That’s why I have prepared this hopefully helpful guide on how to respond to the consultation:
Participation: All EU citizens can participate.
Allow time: Replying can take between 10 minutes and an hour, depending on how much you want to answer. If you are short on time, you can limit yourself to objecting to facial recognition in public places.
Preliminary research: It is recommended to read the EU Commission’s White Paper first; here is the English version.
Create an account: If you have never responded to a consultation before, you first have to create an account on the Commission’s website:
Fill in: The consultation questionnaire can be filled in online here:
You can change the language at the top of the page and then indicate in the questionnaire that you are answering in English. You can answer as a citizen, association, research institution, consumer organisation or company. You can choose whether your answers should be public or anonymous. The questionnaire consists of multiple-choice questions, which makes it easier to answer but also simplifies some complex questions.
Own comments: The form offers additional optional fields (500 characters maximum) that can be used for further remarks. Here are the most important points that could be entered there:
In “Section 2 – an ecosystem for trust”, which deals with the risks of AI, you can explain where and how AI can discriminate. Our current rules are not sufficient to effectively prevent discrimination.
Second, it is worth commenting on the Commission’s inadequate approach to risk assessment. In its White Paper, the Commission makes things too easy for itself by confining high-risk applications to specific sectors behind high thresholds. This is too narrow, because all systems that affect fundamental rights or decide people’s access to resources or to social participation (such as elections or online discussions and fora) are high-risk systems. A much more differentiated approach is needed: the more damage artificial intelligence can do to individuals or society, the stricter the requirements and regulatory rules should be. A good model here would be, for example, the proposal of the German Data Ethics Commission.
The part dealing with biometric systems – i.e. how cameras with facial recognition can be used – is also important. Here you could argue for a clear ban in public places, because facial recognition poses a far too great threat to our fundamental rights while offering no gain in public security.
AI should be trustworthy and secure in order to guarantee European values, rights and freedoms. In the section on enforcement, you could therefore point out that we need a much stronger enforcement mechanism and a European monitoring and supervisory authority with its own competences, role and powers to assist and relieve national authorities.
For more background information on this subject in German, I recommend my commentary in the Tagesspiegel.