Artificial Intelligence: The time for ethics is over

The following article was originally published at Euractiv on 29 June 2020. Organising ethical debates has long been an efficient way for industry to delay and avoid hard regulation. Europe now needs strong, enforceable rights for its citizens, writes Green MEP Alexandra Geese.

If the rules are too weak, there is too great a risk that our rights and freedoms will be undermined. This currently applies to all applications of artificial intelligence, which up to now have been governed only by non-binding ethical principles and values. In this legislation, Europe has the chance to adopt a legal framework for AI with clear rules. We need strong instruments to protect our fundamental rights and democracy.

In the past few years, governments the world over have been busy setting up special committees, councils and expert groups to discuss the ethics of artificial intelligence: the High-Level Expert Group on Artificial Intelligence appointed by the European Commission, the expert group on AI in Society of the Organisation for Economic Co-operation and Development (OECD), the Advisory Council on the Ethical Use of Artificial Intelligence and Data in Singapore, and the Select Committee on Artificial Intelligence of the UK House of Lords. In the United States, the Obama administration put together a roadmap for AI policy.

The city of New York, which was one of the first to set up a special council, the “Automated Decision System Task Force”, took more than a year even to agree on a definition of automated decision-making. It has still not managed to draw up an overview of all the AI systems in use in the city.

At the same time, many companies have gratefully accepted the free pass to self-regulation by establishing ethics boards, writing guidelines, and sponsoring research on topics like algorithmic bias or “fairness in artificial intelligence”.

Over recent years, the corporate sector has been building one “ethics washing machine” after another. Facebook has funded AI ethicists at the TU Munich in Germany, while Google, SAP and Microsoft have all adopted ethical guiding principles and codes.

I don’t want to question these supposedly good intentions, but it is obvious that the focus on ethical debates has long been an efficient way for the industry to buy time and avoid hard regulation. Relying on ethics and self-regulation has too often proved insufficient to hold companies to account when strong enforcement and independent oversight mechanisms, provided by an institutional framework, are lacking.

In recent years, we have seen plenty of evidence that our current laws are insufficient to protect against discrimination: the burden of proof still rests on the victim, who may not even be aware that an algorithm is discriminating against them, and liability for damage cannot always be established along the complex supply chain of an AI system.

People who are most likely to be discriminated against by AI systems are also those least likely to have the financial means or the confidence to file a lawsuit with an uncertain outcome. Automated decision-making tools can often exacerbate the racism and sexism entrenched in our societies.

Just think of the COMPAS system that was used in the US to predict whether defendants would reoffend – it was found to discriminate against black defendants. In a recent study by the Alan Turing Institute in London and the University of Oxford, a research team demonstrated that current laws in Europe are insufficient to protect people against harm done by flawed algorithms.

There are plenty of examples illustrating that a soft-law approach via ethics, self-regulation and corporate social responsibility fails dramatically almost every time. For instance, The Intercept revealed in 2018 that Google was developing a censored version of its search engine for the Chinese government – in direct violation of its own ethical AI principles.

It will not be easy for European policymakers to write a law that is to the point, contains targeted measures, keeps pace with technology and does not cause collateral damage to anyone.

But this challenge is also an opportunity for us in Europe. We now have the chance to be the first continent to put humanity at the centre of digital policies, while bearing in mind that powerful interests will keep pushing the “ethics agenda”. We need strong, enforceable rights for the users of future AI systems, because these systems have an ever-growing influence on all aspects of our lives.

This is why the legislative process needs to be as participative and inclusive as possible. The EU Commission has just held a public consultation to prepare its proposal for the European approach to AI. In theory, citizens could respond, but the consultation process is unfortunately not very accessible: respondents are asked to set up an account before they can answer a questionnaire containing a long list of very technical questions.

EU policymakers should, therefore, find other fora and proactively involve those groups most affected by AI and less likely to sit at the table where decisions are being made. This would also help increase transparency and build trust in EU decision-making.

Only then can we make sure that we adopt rules that make our rights and freedoms under the Treaties and the Charter enforceable, instead of simply adopting ethical guidance in the name of ‘principles and values’.

Picture: CC0 via pixabay/PeteLinforth
