France Embraces AI Video Surveillance Ahead of Olympics

NICE, France — There’s one thing you should know before visiting this charming gateway to the French Riviera.

This sun-drenched Mediterranean resort, the scene of a horrifying terrorist attack in 2016, has become what its mayor calls “the most monitored city in France” and a laboratory for the revolution in law enforcement powered by artificial intelligence.

A total of 4,200 cameras have been deployed in public spaces, the equivalent of one camera for every 81 residents. These are not your old surveillance cameras. Some are equipped with thermal imaging and other sensors, and they feed into a command center where AI software detects not only minor violations, like someone parking illegally or entering a public park after hours, but also attempts to access school buildings. The system can also alert officers to potentially suspicious activity.
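
The article does not describe the internals of Nice’s software. As a loose illustration of how a command center might turn detector events into alerts by rule, here is a minimal Python sketch; every name in it (CameraEvent, should_alert, the zone labels, the assumed park hours) is hypothetical rather than drawn from the city’s actual system.

    from dataclasses import dataclass
    from datetime import datetime, time

    # Hypothetical event record a camera-side detector might send to a
    # command center. Nice's real data model is not described in the article.
    @dataclass
    class CameraEvent:
        camera_id: str
        kind: str       # e.g. "vehicle_parked", "person_in_park", "person_at_school_door"
        zone: str       # e.g. "no_parking", "public_park", "school_perimeter"
        timestamp: datetime

    PARK_OPEN, PARK_CLOSE = time(7, 0), time(20, 0)  # assumed park opening hours

    def should_alert(event: CameraEvent) -> bool:
        """Apply rules of the kind the article describes: illegal parking,
        entering a park after hours, attempted access to a school building."""
        if event.kind == "vehicle_parked" and event.zone == "no_parking":
            return True
        if event.kind == "person_in_park" and event.zone == "public_park":
            return not (PARK_OPEN <= event.timestamp.time() <= PARK_CLOSE)
        if event.kind == "person_at_school_door" and event.zone == "school_perimeter":
            return True
        return False

    # Example: a person detected in a public park at 11:15 p.m. is flagged.
    evt = CameraEvent("cam-042", "person_in_park", "public_park",
                      datetime(2024, 7, 1, 23, 15))
    print(should_alert(evt))  # True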


The city has trialled facial recognition software that is accurate enough to tell the difference between identical twins.

Another system tested this year on Nice’s iconic Promenade des Anglais used an algorithm that could alert authorities to irregular vehicle and pedestrian movements in real time. Officials here say such a system could have allowed police to spot more quickly the assailant who, in 2016, drove a 19-ton truck into crowds along the beachfront promenade, killing 86 people and injuring more than 100.

“There are people who have declared war on us. We cannot win wars by using peaceful weapons,” Mayor Christian Estrosi said. “Artificial intelligence is the best defensive weapon we have.”

More broadly, France is moving to introduce extensive algorithmic video surveillance as it prepares to host the 2024 Olympics, including technology that can detect sudden crowd movements, abandoned objects, and people lying on the ground. Officials say such tools could be key to thwarting attacks like the bombing at the 1996 Summer Olympics in Atlanta.

But this futuristic (some would say Orwellian) embrace of AI-powered policing is taking hold in a part of the world that is home to some of the strongest digital privacy protections and that seeks to take the lead in regulating AI.

“They are putting us all under the omnidirectional surveillance of AI,” said Félix Tréguer, co-founder of the French digital rights group La Quadrature du Net.

Travelers to France will not find facial recognition cameras installed in their hotel rooms, as Chinese authorities did to confirm guests’ identities ahead of the Asian Games in September. But Western governments are increasingly leveraging AI as a crime-fighting tool.

In the United States, police have partnered with companies such as New York-based Clearview AI, which develops facial recognition algorithms and has amassed a database of more than 20 billion photos taken from the internet. The system helped identify rioters in the January 6, 2021, attack on the U.S. Capitol, but the company also faces privacy lawsuits and concerns about racial profiling and false arrests.


In the U.K., an early adopter of CCTV surveillance, the government has encouraged police chiefs to double the number of retrospective facial recognition searches they conduct, and to consider live facial recognition to find people on police watch lists in places like football stadiums.

And France is not alone on the European continent in deploying AI for security. Along Venice’s waterways, for example, feeds from the existing camera network are piped into an AI-powered control center whose software can recognize the shape and size of boats, even in light refracted off the water, to monitor speed and safety. Algorithms are also used to analyze data from sensors in the city’s busiest tourist areas, with the ability to detect sudden crowd movements that could indicate an attack. In one case last year, Venice police used AI to scan footage and track a distinctive jacket, helping them find and arrest a group of men allegedly involved in a stabbing.

Regulation of AI surveillance

The European Union has gone further than any other Western power in trying to rein in social media and enforce privacy in the digital age, enacting landmark regulations that could lead to investigations, fines and management changes at leading U.S. technology companies such as Google and Meta. Earlier this month, the EU reached a historic agreement on a new AI law that classifies risks, increases transparency and imposes financial penalties on tech companies that violate its rules.

But even as the EU seeks to regulate civilian use of AI and ban the riskiest systems, European governments have fought to preserve their own ability to use the technology. The AI Act nearly collapsed amid French-led demands to carve out exceptions for the use of AI in law enforcement.


The compromise would require judicial approval for the use of biometrics. Facial recognition technology could be applied to recorded video only to identify people convicted of, or suspected of committing, serious crimes. Real-time surveillance could occur only in limited situations, such as tracking kidnapping victims or terrorism suspects. Searches would violate the AI Act if they categorized targets by political affiliation, ethnicity or gender identity.

“If you want to find someone wearing a red shirt, you can do that,” said Brando Benifei, one of the two members of the European Parliament who spearheaded the bill. “But you cannot, for classification purposes, collect biometric data on every Black person because you are looking for a Black terrorist, or on everyone wearing a [Palestinian] kaffiyeh because of their politics.”

Stick figure solution

Many European countries are devising ways to stay ahead of the curve on AI while circumventing rules that ban the mass use of biometric data and facial recognition.

In privacy-conscious Germany, where memories linger of intrusions by the Nazi-era and Cold War secret police, authorities tested an AI algorithm developed by the Fraunhofer Institute in one of Hamburg’s most crime-prone areas. The system detects and flags for police a variety of movements, including kicking, punching, aggressive and defensive postures, lying down, pushing, and running. But the images shown to police look like stick figures: the people captured on camera are anonymized.
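
The article does not detail how the Fraunhofer system works internally. As a rough sketch of the general idea it describes, pose estimation rendered as anonymized stick figures instead of raw video, here is a minimal example using the open-source MediaPipe and OpenCV libraries; the input file name is hypothetical, and unlike a real deployment, MediaPipe’s Pose solution tracks only one person per frame.

    import cv2
    import mediapipe as mp

    mp_pose = mp.solutions.pose
    mp_drawing = mp.solutions.drawing_utils

    cap = cv2.VideoCapture("camera_feed.mp4")  # hypothetical recorded feed
    with mp_pose.Pose(static_image_mode=False) as pose:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # Estimate body pose on the RGB version of the frame.
            results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            # Draw only the skeleton on a black canvas of the same size,
            # so the operator sees a stick figure rather than the person.
            canvas = frame * 0
            if results.pose_landmarks:
                mp_drawing.draw_landmarks(canvas, results.pose_landmarks,
                                          mp_pose.POSE_CONNECTIONS)
            cv2.imshow("anonymized view", canvas)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()

A production system would then classify sequences of these keypoints (kicking, punching, lying down and so on) before alerting officers; that step is omitted here.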

Coronavirus tracking app encounters resistance in privacy-focused Europe

Nikolai Kinne, head of the Hamburg Police Department’s Intelligent Surveillance Project, said the software is “not interested in gender, skin color or any other special personality traits of the individual.”

Activists argue that even this kind of monitoring can be troubling: people may act more self-consciously, or avoid certain areas altogether, if they know a camera might flag them.

Konstantin Macher of the German digital rights group Digitalcourage said this “drives and pressures people into doing what they think is expected of them.” “I think this takes away the beauty of human behavior and encourages us to behave in a routine, robotic way,” he said.

A push to expand AI surveillance in France

France is betting big on AI-assisted security for the Olympics. Hundreds of smart cameras will monitor crowds in and around Paris. The new law continues to ban facial recognition in most cases, but expands the legal application of algorithmic video surveillance for at least six months before, during and after the Games.

Many observers see this as a pilot that could be extended indefinitely if the technology is accepted by the public and proves to work. A poll conducted around the time the law was adopted showed overwhelming support, with 89 percent of respondents backing smart cameras in stadiums, 81 percent on public transport and 74 percent on public roads.

“We know this battle will be lost,” said Paul Cassia, an activist and law professor with the Paris-based Organization for the Defense of Constitutional Freedoms. “We all use smartphones. There are cameras everywhere now. People ask for these kinds of measures, and when something happens, they say: ‘It’s your fault, you didn’t do enough to protect us.’”


Mayor Estrosi says more leeway is essential. His city was granted permission to experiment with facial recognition during its carnival in 2019, but the rules were so strict that the test could be applied only to volunteers walking in certain areas. Another experiment, involving biometric portals at a local high school, was deemed unduly intrusive by the French data protection authority.

The mayor is advocating for broader use of AI. Among other things, the city is introducing experimental technology on buses and streetcars that can detect redness in passengers’ faces and alert officers to possible health emergencies or other sources of stress.

The city says about 18 percent of all police cases are now solved with the help of smart cameras. Estrosi argues that number could be even higher. “AI is everywhere except where it really helps us,” he said. “I need to use facial recognition to keep my city safe. The software is ready and it’s there.”

Virgil Demoustier in Paris, Kate Brady in Berlin and Stefano Pitrelli in Rome contributed to this report.