Brittany Kaiser hit international headlines when she blew the whistle on the data scandal involving her former employer, Cambridge Analytica, in which millions of Facebook users’ data was taken without their consent, a story that broke in 2018.
The data analytics company harvested personal information, such as where users lived and which pages they liked, and used it to build psychological profiles of users’ characteristics and personality traits, which were then used to target them with political advertising for the Trump and Brexit campaigns.
Now, Brittany Kaiser tells Verizon Media’s Identity Decoded webinar that alarm bells were ringing from the start.
“There were quite a few red flags over the years, starting with every time I would call up our law firm for advice on a new data project in a new country,” Kaiser said.
“I usually got more questions about why I was creating so many invoices, instead of why I was investigating further how we could be compliant on projects that clients are asking us for.”
Kaiser revealed the data collection provider didn’t have an in-house data protection officer until 2018, the year she became a whistleblower.
“That’s definitely something that raised alarms for a while, but there’s only so many times you can say something internally before you have to ask for extra help.”
And though the world hammered Cambridge Analytica for its data collection, Kaiser estimated at least 40,000 other companies held the same dataset.
“When the story started hitting the papers about a Facebook dataset that had been collected the year before I joined Cambridge Analytica, I thought, ‘Wow, is that what the world is getting into a fuss about?’”
“The Facebook dataset was collected by Cambridge, and at least 40,000 other companies. And is now perhaps on millions of databases around the world. That Facebook data set was one small part of a big database, and some of the bigger problems are about how that data was used, not just how it was collected.”
It was at this point, Kaiser said, that she realised how little global understanding there was of the data industry’s size and complexity, and of how people have continually been manipulated into giving their data away.
“Most consumers don't really understand what their personal data is and how much information has been collected about them since they had their very first device,” Kaiser said.
“So starting to hear these stories in the papers brought a little bit of an awakening about how digitally illiterate we are.”
But while we might be more observant when it comes to our data, Kaiser believes we could still be plagued by the same issues that arose in the 2016 election.
“There’s a rise of fake news and disinformation campaigns that are paid for by both local and foreign entities during election time,” she said.
“We still haven't figured out how to enforce our electoral laws on platforms like Facebook.
“We allow politicians to say whatever they'd like uncensored, but they're allowed to not comply with community standards that you and I have to agree to. I think it’s quite difficult during election time to police that.”
Own your data
Now, Kaiser’s mission is to help educate others to own their data.
“We weren't told everything that you're typing and searching for is being recorded and traded around the world without your explicit consent or knowledge, and that's starting to change these days but trust me, none of us grew up learning in depth about what that means,” she said.
In the years following the Cambridge Analytica scandal, Kaiser has launched the Own Your Data Foundation to teach digital literacy.
She’s also an advocate for an ‘opt-in’ data collection process, and believes people should be paid for handing over their data.
“Big tech companies and the way that advertising and communications [operate] in general has been built upon a foundation of personal data that was taken without full transparency and consent,” Kaiser said.
“It doesn't really have to continue to be that way, because we can start to educate people about what their data is, be transparent with them about what data we want to collect, and get their explicit opt-in consent.
“And, I believe people should also be fairly compensated for the value that they're producing every day by bringing us datasets that help our companies run.”
Facial recognition data could pose bigger risks
In February, the controversial facial recognition tool Clearview AI was hacked, exposing its client list.
The tool, created by Australian-born entrepreneur Hoan Ton-That, allows users to upload a picture of a person and see public photos of that person, along with links to where those photos appeared.
Cambridge Analytica’s data scandal will pale in comparison to the potential breaches to come from apps like Clearview, Kaiser said.
“There's a complete grey area and lack of international agreement on what we should and shouldn't allow AI and robotics to do to us in the future, and on exactly what the development of data science should look like,” Kaiser said.
“So we need some moral and ethical guidelines, an AI code of ethics or an Internet Bill of Rights.”