EU Unveils Proposed Strict AI Regulation
Measure Would Ban Use of Biometrics for Surveillance
The European Union has officially proposed a strict new regulation on artificial intelligence that would ban the use of "real time" biometrics for surveillance, citing privacy concerns. The regulation would prohibit the use of facial recognition and other biometrics in public places.
Under the proposed regulation, violators would face fines of up to $36 million or, if the offender is a company, up to 6% of its total worldwide annual revenue, whichever is higher.
The European Commission unveiled the proposal Wednesday. The draft must now be reviewed by the European Parliament and the European Council. It could be subject to complaints, investigations and referral to the European Court of Justice, and amendments eventually could be added.
Under the proposed regulation, "high-risk" AI systems that potentially pose significant risks to health and safety or fundamental privacy rights would be subject to scrutiny before they are put on the market and throughout their life cycle. The regulation calls for a mandatory risk management system, strict data and data governance requirements, technical documentation and record-keeping requirements, and postmarket monitoring and reporting of incidents.
Military use of AI systems would be exempt from this scrutiny.
Specific Safeguards Proposed
The proposed regulation would apply to "institutions, offices, bodies and agencies" in the EU that are "acting as a provider or user of an AI system."
"The proposal sets harmonized rules for the development, placement on the market and use of AI systems in the [EU] following a proportionate risk-based approach," the draft regulation notes.
"Certain particularly harmful AI practices" would be prohibited. For example, the proposal notes, "specific restrictions and safeguards are proposed in relation to certain uses of remote biometric identification systems for the purpose of law enforcement."
The proposed regulation would be enforced through a governance system that builds on existing cooperation systems between member states.
"We want to use more Trustworthy AI! For health, fighting climate change, convenience in everyday life - if we can trust AI not to put our fundamental rights at risk. This is today’s proposal - and that we become excellent in developing #TrustworthyAI," tweeted European Commissioner Margrethe Vestager.
The proposed regulation lacks clarity on permitted use cases for low-risk AI applications, says Ray Walsh, digital privacy expert at ProPrivacy, a digital privacy campaigning and research organization.
Under the proposal, AI systems such as chatbots that are considered "low-risk" would face fewer restrictions.
"Use of AI in lower-risk public sector systems still leaves the public at considerable risk, especially considering that people don’t usually have the opportunity to decline being targeted with automated systems that leverage AI as they move around in public," Walsh says. "The European Commission must now hash out the draft proposal with member EU countries, which means that there is still some way to go before it is finalized. We can only hope that any remaining vagueness in the draft that leaves too much room for interpretation is disputed by member states to ensure that the final version is more robust."
Last week, following a leak of a version of the proposed regulation, Daniel Leufer, a European policy analyst, said it lacked clarity and contained several loopholes, including the lack of limitations on AI use by the military (see: Draft EU Regulation Proposes Curbs on AI for Surveillance).
But Gregory Cardiet, security engineering director at Vectra AI - a security firm that provides AI-based threat detection - said the regulation would create a level playing field for AI vendors.
"Some companies are today building a technical advantage that is so great that it is very unlikely that other companies will ever be able to catch up due to the immense gap that they have acquired already," Cardiet notes. "For that reason, it is critical that a sort of control is being provided by governments in order to ensure that a few companies are not getting too much control of our world."
Over the past several years, the use of facial recognition and other AI technologies has stoked privacy concerns in Europe and elsewhere. Concerns include data harvesting, unauthorized tracking and the misuse of data for credential theft and identity fraud.
In August 2019, a developer's use of facial recognition software around the Kings Cross railway station in London sparked controversy over potential violations of the privacy provisions of the EU's General Data Protection Regulation, and the project was abandoned (see: Facial Recognition Use in UK Continues to Stir Controversy). Also in 2019, Sweden's Data Protection Authority issued its first GDPR fine after a school launched a facial recognition pilot program to track students' attendance without proper consent.
In March 2020, the American Civil Liberties Union filed a Freedom of Information Act lawsuit against the U.S. Department of Homeland Security and three of its agencies in an effort to learn more about how the department uses facial recognition technology at airports and the country's borders (see: ACLU Files Lawsuit Over Facial Recognition at US Airports).
In the lawsuit, the ACLU alleged that the agencies' increasing use of facial recognition could pose "profound civil-liberties concerns" and enable "persistent government surveillance on a massive scale."