The results of my report on ethical aspects of AI

I am delighted that one of my objectives, which I had already set out during the election campaign, is now stated explicitly in my first report for the European Parliament: We are opening the black box of AI and preventing discrimination!

Today, the Committee on the Internal Market and Consumer Protection (IMCO) in the European Parliament confirmed all the compromise amendments in my report, with far-reaching demands for future standards in the ethics of artificial intelligence, robotics and related technologies. All the content has now been finalised, and the overall text will be confirmed again by the committee tomorrow. The report sets out the European Parliament's outline for future legislation. It will feed into the report of the JURI Committee, which will be put to the vote in September and is expected to go to plenary in October.

We have sharpened the ethical requirements so that AI can be used more fairly and more socially, for the benefit of humanity. Data sets must be representative, and regulators should have free access to the code in Europe to prevent discrimination, surveillance, economic exploitation and manipulation of opinion. We are thus opening the black box of AI: in the future, we will be able to examine algorithms much more easily. A European committee will coordinate supervision. At the same time, we want to give consumers more rights.

The main achievements of the report at a glance:

  • Non-discrimination: We do not need machines that reproduce or even reinforce prejudice. That is why we set high standards for high-quality, representative training data. We demand diverse teams of developers and engineers, and we ensure that the decision-making levels of the European centre of expertise on AI are diverse and gender-balanced.
  • More supervision and competence: Supervisory authorities will gain direct access to companies’ documentation, code and data sets. A European committee made up of representatives of the member states will coordinate the monitoring. In this way, we create a level playing field. A competence centre will provide guidance, assessment and expertise to all national authorities.
  • More substantial rights for consumers: Users of AI applications should be adequately informed about the existence, reasoning and possible results and effects of algorithmic systems. They should learn how the system’s decisions can be reviewed, meaningfully challenged and corrected, and how to enforce their rights. In this way, we switch off the AI autopilot and put consumers back in the driver’s seat. Consumer protection organisations should be adequately funded for educational work, and researchers should have API access to AI systems to analyse their effects independently.
  • More sensitive risk assessment: Companies’ legal obligations should increase gradually with the identified risk level of an AI application. In the lowest category, no additional legal obligations are necessary. However, any algorithmic system that can harm people, potentially violate an individual’s rights or deny someone access to public services should never be placed in the lowest category. The tiered model must be regulated in a binding and uniform manner throughout the EU, and risk classifications must be reviewed regularly.