International facial-recognition company Clearview AI has been ordered to cease collecting images in Australia and “destroy” images it collected amid increasing scrutiny of the technology.
The Office of the Australian Information Commissioner (OAIC) ruled on Wednesday that Clearview had breached Australians’ privacy by taking biometric information from the internet and sharing it through its platform.
Commissioner Angelene Falk said Clearview had collected personal and sensitive information without consent and by unfair means, and had failed to adequately inform Australians of its actions.
Clearview AI holds a database of more than 3 billion images scraped from publicly available sources, including social media. Users, including law enforcement agencies, can upload an image of a person, and the service then returns other images of the same person.
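Conceptually, a service like this works by converting each face photo into a numeric "template" (an embedding) and then searching a database for nearby templates. The sketch below is a hypothetical illustration of that nearest-neighbour idea using toy vectors and cosine similarity; the function names, data and threshold are invented for illustration and do not reflect Clearview AI's actual implementation.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two embedding vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def find_matches(query_embedding, database, threshold=0.8):
    """Return (image_id, score) pairs whose templates resemble the query,
    best match first. In a real system the embeddings would come from a
    face-recognition model, not hand-written toy vectors."""
    matches = []
    for image_id, embedding in database.items():
        score = cosine_similarity(query_embedding, embedding)
        if score >= threshold:
            matches.append((image_id, score))
    return sorted(matches, key=lambda m: m[1], reverse=True)

# Toy "templates" standing in for real biometric embeddings.
db = {
    "photo_a": [0.9, 0.1, 0.2],
    "photo_b": [0.1, 0.95, 0.3],
}
query = [0.88, 0.12, 0.21]
print(find_matches(query, db))  # photo_a is the only match above threshold
```

The point of the sketch is that, once a template exists in the database, any new photo of that person can retrieve it, which is why the Commissioner stressed that biometric identifiers "cannot be reissued or cancelled".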
“The covert collection of this kind of sensitive information is unreasonably intrusive and unfair,” Falk said.
“It carries significant risk of harm to individuals, including vulnerable groups such as children and victims of crime, whose images can be searched on Clearview AI’s database.
“By its nature, this biometric identity information cannot be reissued or cancelled and may also be replicated and used for identity theft. Individuals featured in the database may also be at risk of misidentification.”
She said Australians using social media did not expect their images would be collected by commercial entities to form biometric templates for other purposes.
Clearview AI provided trials of its tools to some Australian police forces, including the Australian Federal Police, between October 2019 and March 2020.
The OAIC is now examining whether the AFP’s trial use of the technology complied with the Australian Government Agencies Privacy Code.
Facebook ends facial-recognition tech
Facebook also announced on Tuesday it would end its facial-recognition service, citing “societal concerns”. It will delete more than 1 billion users’ facial-recognition templates.
To Digital Rights Watch program lead Samantha Floreani, the Facebook action and Clearview probe mark a “global shift in attitude” towards facial-recognition technology.
Amazon, Microsoft and IBM all reined in their facial-recognition technology in 2020 due to privacy and algorithmic bias concerns.
“It’s really clear that people are becoming more aware of the risks and they’re becoming increasingly uncomfortable with it, and the companies are beginning to pay attention to those issues,” Floreani told Yahoo Finance.
What’s the difference between Clearview, Facebook and my phone’s facial recognition?
It can be difficult to grasp the potential risks associated with facial-recognition technology, especially given that relatively harmless forms of it are already pervasive, Floreani said.
For example, many phone users will already be using facial recognition to unlock their devices.
“That’s a pretty fine use of the technology. It stays on the device, it’s convenient [and] it’s pretty innocuous,” she said.
“[But] when we think about how it can be used in law-enforcement contexts, it can become quite scary - especially if you are someone who is Black or a person of colour, or if you’re an activist - those kinds of intersections - that’s where the potential for harm really grows to an alarming level.”
Current facial-recognition technology has been found to carry gender and racial biases, ingrained in the artificial intelligence models during training.
Robert Williams, a Black American man, was wrongfully arrested in early 2020 after his driver’s licence photo was incorrectly matched by facial-recognition software, in a case that came to light in June that year.
A 2018 study found that facial-recognition systems misidentified American women of colour at around 40 times the rate of white men.
More recently, testing by the US National Institute of Standards and Technology (NIST) also found higher error rates for some population groups.
The ramifications extended beyond law enforcement, Floreani said.
For example, a person sitting a remote exam that requires facial recognition to sign in may face additional hurdles.
Clearview AI defends its facial recognition technology
Clearview AI has defended its technology and plans to appeal the OAIC determination at the Administrative Appeals Tribunal.
In a statement, CEO Hoan Ton-That said that while he respected the work done by the OAIC, he was disheartened by the finding.
"I look forward to engaging in conversation with leaders and lawmakers to fully discuss the privacy issues, so the true value of Clearview AI’s technology, which has proven so essential to law enforcement, can continue to make communities safe," Ton-That said.
Attorney for Clearview AI Mark Love also defended Clearview's work.
"Not only has the Commissioner’s decision missed the mark on the manner of Clearview AI’s operation, the Commissioner lacks jurisdiction," Love said.
"To be clear, Clearview AI has not violated any law nor has it interfered with the privacy of Australians. Clearview AI does not do business in Australia, does not have any Australian users."
Clearview AI has largely marketed its products based on their use for law enforcement, describing itself as offering "a tool to help generate high-quality investigative leads" on its website.
Meanwhile, the Australian Human Rights Commission (AHRC) in July warned against NSW police officers’ plans to trial a controversial federal government facial-recognition system.
The centralised database, dubbed “the Capability”, would compile licence, immigration and passport photos collected by state and federal agencies into a national database that could then be used in criminal investigations.
The Federal Parliament’s Joint Standing Committee on Intelligence and Security rejected legislation allowing the sharing of these photos in 2019.
The legislation is yet to be reintroduced to Parliament.
However, NSW Police states on its website that it is already participating in a trial of the facial-matching service.
“Australian law enforcement has demonstrated an appetite to be using facial-recognition technology,” Floreani said.
Photos collected by NSW agencies have not yet been made available to the centralised database, NSW Police says on its website, although Victoria, Tasmania and South Australia do provide photos.
Noting the OAIC and AHRC moves to curb some forms of facial-recognition technology, Floreani said she was hopeful the “vital” protections would continue to strengthen.
“That being said, there is also a lot of desire from the Australian Government to really push that ‘tough on crime’ line, and the use of facial recognition locks into that political agenda.
"So I’m certainly keeping an eye on all of that."