Three proposals for artificial intelligence in Europe

Why was I cut off from welfare this month? Why are certain videos on YouTube recommended to me? Why didn’t my resume make it to the final round of an application process? Why is a product advertised to me in the online shop? Why is my credit score worse than that of my husband? The answer to all these questions can be: because an algorithm decided so.

Since artificial intelligence (AI) and algorithmic decisions increasingly influence our everyday lives, the European Parliament is currently working on several own-initiative reports – that is, texts calling on the Commission to present a legislative proposal. I am rapporteur – in other words, the author of the report – in my Committee on the Internal Market and Consumer Protection. Last week, my draft opinion containing proposals on the ethical aspects of artificial intelligence was published (PDF).

The text proposes that ethical guidelines and binding rules must go hand in hand. We need clear laws in Europe so that everyone can understand how and where artificial intelligence is used, what the consequences are, and how we can defend ourselves against discriminatory or simply wrong decisions made by machines. This is therefore about protecting people, our rights and our freedoms. Furthermore, such rules will increase confidence in technology and create a single internal market within Europe, one which will stand out from other world markets because of its values.

In this way, “AI made in Europe” can not only safeguard fundamental rights but also become a real competitive advantage – for example, by stipulating for the first time in law that artificial intelligence must not discriminate and must not violate our privacy, data protection, freedom of expression or human dignity.

My contribution to the work of Parliament consists of the following three proposals:

  1. A risk-based approach
    AI systems need to be assessed in advance on the basis of their potential harm to the individual as well as to society as a whole – although systems that may affect an individual’s access to resources or participation in social processes must never be placed in the lowest risk category. This category is reserved for AI products that neither affect nor endanger people. The determination of the risk of AI should be based on a combination of the severity of the potential harm and the probability of its occurrence. An increased risk potential of AI must be accompanied by a higher degree of regulatory intervention and stricter requirements for companies. I consider the risk-based approach proposed by the Commission in its White Paper of February 2020 inadequate, because it relies only on a list of high-risk areas and, within those, the concrete use case of an application to distinguish problematic from non-problematic AI. Too many critical applications are not covered by the proposed approach.

  2. No discrimination
    So far, there are no binding rules anywhere in the world that prohibit discrimination by AI. While international human rights instruments and the EU Charter of Fundamental Rights exist, I propose, more specifically, that new AI systems must be tested for their potential to discriminate or to violate other rights before they are applied.

  3. A European authority
    Each Member State should designate a national authority responsible for supervision. In order to avoid fragmentation of the internal market, I propose the creation of a new European committee which, as an independent body, can ensure uniform application of the AI rules throughout the European Union.

Where do we go from here?

The Committee on Legal Affairs is responsible for this European Parliament report. My opinion, which I am negotiating on behalf of the Committee on the Internal Market and Consumer Protection (IMCO), is incorporated into this report. The result is a Parliament resolution. It will be put to the vote in the autumn and will then express the opinion of Parliament. It serves as a guideline for the EU Commission, which is planning a draft law on rules for artificial intelligence (2021). My own-initiative report is thus a piece of the puzzle in which part of the future law can already be seen.

The planned timeline for the further procedure:

  • Draft opinion in IMCO Committee: 14 or 18 May
  • Deadline for amendments from the other groups: 19 May
  • Amendments in IMCO Committee: 29 June
  • Voting in IMCO Committee: 6 or 7 July

The next step is for the report to be voted on in the Legal Affairs Committee in September and then hopefully adopted by all MEPs in plenary session in autumn.

Further reading:

Link to my draft opinion for IMCO:


Procedure on EP website:

EU Commission High-Level Expert Group on Artificial Intelligence: Ethics Guidelines on Artificial Intelligence

EU Commission High-Level Expert Group on Artificial Intelligence: Policy and Investment Recommendations

Opinion (Gutachten) of the German Data Ethics Commission (Datenethikkommission):

Dr. Katharina Zweig:

vzbv: Für künstliche Intelligenz, die den Menschen dient (For artificial intelligence that serves people)

Recommendation CM/Rec(2020)1 of the Council of Europe:

AccessNow: Artificial Intelligence and Human Rights


AI Now Institute: Discriminating Systems, Gender, Race and Power in AI

UN Special Rapporteur on extreme poverty and human rights: Digital welfare states and human rights

Dr. Karen Yeung: A study of the implications of advanced digital technologies (including AI systems) for the concept of responsibility within a human rights framework

Michael Veale: A Critical Take on the Policy Recommendations of the EU High-Level Expert Group on Artificial Intelligence
