MEPs vote in raft of amendments to EU AI Act | Computer Weekly (2023)

MEPs in two European Parliament committees have overwhelmingly voted for a raft of amendments to the Artificial Intelligence Act (AIA), including a number of bans on “intrusive and discriminatory” systems, but there are still concerns around lingering loopholes and the potential for state overreach.

The list of prohibited systems deemed to represent “an unacceptable level of risk to people’s safety” now includes the use of live facial recognition in publicly accessible spaces; biometric categorisation systems using sensitive characteristics; and the use of emotion recognition in law enforcement, border management, the workplace and educational institutions.

Members of the Committees for Internal Market and Consumer Protection (IMCO) and for Civil Liberties, Justice and Home Affairs (LIBE) also opted for a complete ban on predictive policing systems (including both individual and place-based profiling, the latter of which was not previously included), and on the indiscriminate scraping of biometric data from social media or CCTV footage to create facial recognition databases.

While retrospective remote biometric identification systems are now also prohibited, MEPs kept an exception for law enforcement, which would apply only to the prosecution of serious crimes and only after official judicial authorisation.

On top of prohibitions, the MEPs also voted to expand the definition of what is considered “high risk” to include AI systems that harm people’s health, safety, fundamental rights or the environment, as well as measures to boost the accountability and transparency of AI deployers.

This includes an obligation to perform fundamental rights impact assessments before deploying high-risk systems – assessments that public authorities will have to publish – and an expansion of the AIA’s publicly viewable database of high-risk systems to also include those deployed by public bodies.

Completely new measures around “foundation” models and generative AI systems have also been introduced. Their creators will be obliged to assess a range of risks related to their systems – including the potential for environmental damage and whether they guarantee the protection of fundamental rights – and will be forced to disclose “a sufficiently detailed summary of the use of training data protected” by copyright laws.

“It is crucial to build citizens’ trust in the development of AI, to set the European way for dealing with the extraordinary changes that are already happening, as well as to steer the political debate on AI at the global level,” said AIA co-rapporteur Brando Benifei. “We are confident our text balances the protection of fundamental rights with the need to provide legal certainty to businesses and stimulate innovation in Europe.”

However, the amendments only represent a “draft negotiating mandate” for the European Parliament, and are still subject to a plenary vote of the entire Parliament in mid-June 2023. Following this vote, closed-door trilogue negotiations will begin between the European Parliament, the European Council and the European Commission – all of which have adopted different positions.

Daniel Leufer, a senior policy analyst at Access Now, said, for example, that the Council’s position provides for a much wider range of exemptions for the use of AI by law enforcement and immigration authorities, adding: “It’s hard to know what’s a real position that someone’s not going to move from.”

Initial reactions

Responding to the amendments, the Computer & Communications Industry Association (CCIA Europe) – whose members include the likes of Meta, Google, Amazon, BT, Uber, Red Hat and Intel, among many other tech firms – said that although there were some “useful improvements”, such as the definition of AI being aligned to that of the Organisation for Economic Co-operation and Development (OECD), “other changes introduced by Parliament mark a clear departure from the AI Act’s actual objective, which is promoting the uptake of AI in Europe”.

It specifically claimed that “useful AI applications would now face stringent requirements, or might even be banned” due to the “broad extension” of prohibited and high-risk use cases: “By abandoning the risk-based structure of the act, Members of the European Parliament dropped the ambition to support AI innovation.”

CCIA Europe’s policy manager, Boniface de Champris, added that the association is now calling on “EU lawmakers to maintain the AI Act’s risk-based approach in order to ensure that AI innovation can flourish in the European Union.

“The best way for the EU to inspire other jurisdictions is by ensuring that new regulation will enable, rather than inhibit, the development of useful AI practices.”

Tim Wright, a tech and AI regulatory partner at London law firm Fladgate, similarly noted that the AIA “may take the edge off” European AI companies’ ability to innovate.

“US-based AI developers will likely steal a march on their European competitors given news that the EU parliamentary committees have green-lit its ground-breaking AI Act, where AI systems will need to be categorised according to their potential for harm from the outset,” he said.

“The US tech approach (think Uber) is typically to experiment first and – once market and product fit is established – to retrofit to other markets and their regulatory framework. This approach fosters innovation, whereas EU-based AI developers will need to take note of the new rules and develop systems and processes which may take the edge off their ability to innovate.

“The UK is adopting a similar approach to the US, although the proximity of the EU market means that UK-based developers are more likely to fall into step with the EU ruleset from the outset; however, the potential to experiment in a safe space – a regulatory sandbox – may prove very attractive.”

Civil society groups that have been campaigning around the AIA, on the other hand, welcomed a number of the new amendments, but warned there are still a number of issues, particularly around industry self-assessment and carve-outs for national security or law enforcement.

Griff Ferris, senior legal and policy officer at non-governmental organisation Fair Trials – which has been explicitly calling for a ban on the use of AI and other automated systems to “predict” criminal behaviour since September 2021 – described the prohibition of predictive policing as a “landmark result” that will protect people from an “incredibly harmful, unjust and discriminatory” practice.

“We’ve seen how the use of these systems repeatedly criminalises people, even whole communities, labelling them as criminals based on their backgrounds. These systems automate injustice, exacerbating and reinforcing racism and discrimination in policing and the criminal justice system, and feeding systemic inequality in society,” he said.

“The EU Parliament has taken an important step in voting for a ban on these systems, and we urge them to finish the job at the final vote in June.”

Ella Jakubowska, senior policy adviser at European Digital Rights (EDRi), added: “We are delighted to see Members of the European Parliament stepping up to prohibit so many of the practices that amount to biometric mass surveillance. With this vote, the EU shows it is willing to put people over profits, freedom over control, and dignity over dystopia.”

Leufer similarly welcomed the two committees’ amendments, which he said better protect people’s rights: “Important changes have been made to stop harmful applications like dangerous biometric surveillance and predictive policing, as well as increasing accountability and transparency requirements for deployers of high-risk AI systems.

“However, lawmakers must address the critical gaps that remain, such as a dangerous loophole in Article 6’s high-risk classification process.”

Self-assessment

Speaking with Computer Weekly ahead of the vote, Leufer said Article 6 was previously amended by the European Council to exempt from the high-risk list (contained in Annex III of the AIA) any systems that would be “purely accessory”, which would essentially allow AI providers to opt out of the regulation based on a self-assessment of whether their applications are high risk or not.

“I don’t know who is selling an AI system that does one of the things in Annex III, but that is purely accessory to decision-making or outcomes,” he said. “The big danger is that if you leave it to a provider to decide whether or not their system is ‘purely accessory’, they’re hugely incentivised to say that it is and to just opt out of following the regulation.”

Leufer said the Parliament text voted on by the two committees includes “something much worse… which is to allow providers to do a self-assessment to see if they actually pose a significant risk”.

EDRi shared similar concerns around Article 6, noting it would incentivise under-classification and provide a basis for companies to argue that they should not be subject to the AIA’s requirements for high-risk systems.

“Unfortunately, the Parliament is proposing some very worrying changes relating to what counts as ‘high-risk’ AI,” said Sarah Chander, a senior policy adviser at EDRi. “With the changes in the text, developers will be able to decide if their system is ‘significant’ enough to be considered high risk, a major red flag for the enforcement of this legislation.”

On high-risk classifications generally, Conor Dunlop, the European public policy lead at the Ada Lovelace Institute, told Computer Weekly that the requirements placed on high-risk systems – including the need for quality data sets, technical documentation, transparency, human oversight, et cetera – should already be industry standard practices.

“There’s been a lot of pushback from industry to say that this is overly burdensome,” he said, adding that a solution would be to simply open more systems up to third-party assessments and conformity checks: “I think that would compel safer development and deployment.”

State overreach

Regarding the prohibitions on live and retrospective facial recognition, Leufer added that while the Parliament has deleted all the exemptions for the former, it has not done so for the latter, which can still be used by law enforcement with judicial authorisation.

“Any exception means that the infrastructure needs to be there for use in those exceptional circumstances. Either that requires permanent infrastructure being installed in a public space, or it requires the purchase of mobile infrastructure,” he said. “They’re not going to leave it sitting around for three years and not use it, it’s going to be incentivised to show results that it was a worthwhile investment, and it will lead to overuse.”

Pointing to a joint opinion on the AIA published by two pan-European data protection authorities – the European Data Protection Board and the European Data Protection Supervisor – Leufer added that those bodies called for a ban on remote biometric identification in any context, and clearly stated that both live and retrospective facial recognition are incompatible with Europe’s data protection laws.

“It’s already illegal, we [at Access Now] have been saying that for a long time, so it would be good if the AI Act put it to rest and had an explicit prohibition,” he said. “Anything less than a full ban is actually worse than not having anything, because it could be seen as providing a legal basis for something that’s already illegal.”

Leufer added that part of the problem is that lawmakers have fallen into the trap of seeing live facial recognition as somehow more dangerous than retrospective facial recognition: “There is something visceral about being matched on the spot by this thing and then having the instant intervention, but I really think the retrospective is much more dangerous, as it weaponises historic CCTV footage, photos, all of this content that’s lying around, to just destroy anonymity.”

There are also concerns about the AIA’s blanket exemptions for national security and military purposes, which would allow AI to be developed and deployed in those contexts with no restrictions on its use.

In a conversation with Computer Weekly about the ethical justifications of military AI, Elke Schwarz – an associate professor of political theory at Queen Mary University of London and author of Death Machines: The Ethics of Violent Technologies – for example, described the AIA’s approach to military AI as “a bit of a muddle”.

This is because while military AI systems are exempt from the requirements if specifically designed for military purposes, the vast majority of AI systems are developed in the private sector for other uses and then transferred into the military domain afterwards.

“Palantir works with the NHS and works with the military, you know, so they have two or three core products of AI systems that obviously change based on different data and contexts, but ultimately it’s a similar logic that applies,” she said.

“Most big ambitious AI regulations end up weirdly bracketing the military aspect. I think there’s also a big lobby not to regulate, or let the private sector regulate ultimately, which is not very effective usually.”

In a legal opinion prepared for the European Center for Not-for-Profit Law in late 2022, Douwe Korff, emeritus professor of international law at London Metropolitan University, said: “The attempts to exclude from the new protections, in sweeping terms, anything to do with AI in national security, defence and transnational law enforcement contexts, including research into as well as the ‘design, development and application of’ artificial intelligence systems used for those purposes, also by private companies, are pernicious: if successful, they would make the entire military-industrial-political complex a largely digital rights-free zone.”

Describing the national security exemption as “a huge potential loophole”, Ferris also noted it would “undermine all other protections” in the AIA, “particularly in the context of migration, policing, and criminal justice, because those are all issues which governments see as issues of national security”.

Access Now and EDRi are also calling for the national security and military exemptions to be dropped from the AIA.

Read more about artificial intelligence

  • TUC says government is failing to protect workers from AI harms: TUC issues warning about artificial intelligence leading to more widespread workplace discrimination if the right checks are not put in place.
  • Lords AI weapons committee holds first evidence session: In first evidence session of Lords AI weapons committee, expert witnesses unpack claims that artificial intelligence in weapon systems will help military organisations to improve their compliance with international humanitarian law.
  • MPs warned of AI arms race to the bottom: Expert tells Parliamentary committee that tech companies developing artificial intelligence are cutting corners and placing safety on the backburner, opening up ‘enormous risks’ for the future of AI.
