
EU committee calls for all-out ban on the use of social scoring for lending decisions

30 September 2021

AI in Europe: not all decisions can be reduced to ones and zeros, says the European Economic and Social Committee (EESC).

In two reports on draft EU legislation on artificial intelligence, the EESC calls for an all-out ban on social scoring in the EU and for a complaint and redress mechanism for people who have suffered harm from an AI system.

At its September plenary, the EESC welcomed the proposed Artificial Intelligence Act (AIA) and Coordinated Plan on AI.

Social scoring and redress: there's the rub

In the EESC's view, one of the weaknesses of the proposals lies in the area of "social scoring". The Committee flags up the danger of this practice gaining currency in Europe, as it is doing in China, where the government can go so far as to deny people access to public services.

The draft AIA does include a ban on social scoring by public authorities in Europe, but the EESC would like to see it extended to private and semi-private organisations, to rule out such uses as, for instance, establishing whether an individual is eligible for a loan or a mortgage. The EESC sees no place in the EU for scoring the trustworthiness of its people based on their social behaviour or personality characteristics, regardless of who is doing the scoring.

"It is important that the AIA halts the current trajectory of public and private actors using ever more information to assess, categorise and score us," says Catelijne Muller, rapporteur of the EESC opinion on the AIA and author of the EESC's first, trail-blazing opinion on AI in 2017. "The AIA should attempt to draw a clear line between what is considered 'social scoring' and what can be considered an acceptable form of evaluation for a certain purpose. That line can be drawn where the information used for the assessment is not reasonably relevant or proportionate."

The EESC also points out the dangers of listing "high-risk" AI, warning that this listing approach can normalise and mainstream quite a number of AI practices that are still heavily criticised. Biometric recognition, including emotion or affect recognition, where a person's facial expressions, tone of voice, posture and gestures are analysed to predict future behaviour, detect lies and even gauge whether someone is likely to be successful in a job, would be allowed. So would assessing, scoring and even firing workers based on AI, or assessing students in exams – a practice which crept in during the pandemic and which students have judged extremely invasive, with AI systems tracking their eye movements in front of screens, keystrokes, background noise, etc.

In addition, the proposed requirements for high-risk AI cannot always mitigate the harm to health, safety and fundamental rights that these practices pose. Hence the need for a complaint and redress mechanism for people suffering harm from AI systems. The EESC flags up this gap and asks the Commission to set up such a mechanism, so that Europeans have the right to challenge decisions taken solely by an algorithm.

More generally, in the EESC’s view, the AIA fails to spell out that the promise of AI lies in enhancing human decision making and human intelligence. It works on the premise that, once the requirements for medium- and high-risk AI are met, AI can largely replace human decision making.

"The AIA lacks notions such as the human prerogative of decision making, the need for human agency and autonomy, the strength of human-machine collaboration and the full involvement of stakeholders," says Catelijne Muller.

"We at the EESC have always advocated a human-in-command approach to AI, because not all decisions can be reduced to ones and zeros. Many have a moral component, serious legal implications and major societal impacts, for instance in law enforcement and the judiciary, social services, housing, financial services, education and labour regulations. Are we really ready to allow AI to replace human decision making even in critical processes like law enforcement and the judiciary?"
