‘We are less protected’ due to AI, says Cambridge Analytica whistleblower on protecting our data

Cambridge Analytica whistleblower Brittany Kaiser says that online data protection has scarcely improved since she testified to the UK Parliament in 2018 that millions of people may have had their Facebook data harvested.

The scandal broke in 2018, when it emerged that the Facebook data of more than 87 million people may have been collected through a personality quiz app.

Though it is unclear how that data was used, Kaiser said that Cambridge Analytica did “chargeable work” for Donald Trump’s election campaign and the Leave.EU Brexit campaign. Both organisations said there was no contract signed to work with the analytics company.

“I wish that I could say that it has gotten better. I would say there are now many parts of the world where people are starting to be legally protected, which wasn’t the case in 2018,” Kaiser told Euronews Next.

Though she said there are more data protection laws now, referencing Europe’s General Data Protection Regulation (GDPR), she noted that there are still no federal data protection laws in the US, and that if individuals want to take legal action against companies using their data without permission, it takes a lot of time in the courtroom.

“If you don’t want to spend a lot of time in a courtroom or dealing with the law, I would say technically we are less protected because the technology is so much better at targeting,” she added. 

Artificial intelligence (AI) is also making things worse, especially when it comes to election interference, she said.

“The rise of generative AI has made it so much easier to make things look like they are real. Back in the Cambridge Analytica days, we had very, very basic algorithmic creation of content that the world was using at that time,” she said.

“It’s nothing like what exists today, where you can mimic reality because the AI has become so good,” she said. 

Kaiser said that despite working to advocate for digital rights, even she has been fooled by AI-generated images circulating online. 

On one occasion, she thought something awful was happening in New York when she was momentarily convinced that an AI-generated image of a car being set alight in Manhattan was real.

“Knowing that it’s difficult to even find a real picture of something anymore, I would say that can very easily be abused for politics or for commercial purposes – or for whatever purposes people want to use,” she said. 

“We’re still in the scenario where, you know, a lot of our intelligence agencies are saying that there’s just as much money being spent by Russia, China, and Iran on disinformation communications,” she added.

“But the technology is so much better that the impact is more and the money goes further to whatever aim it has”.

AI in elections

Last year, more than 60 countries headed to the polls in what was a super-cycle of global elections. Research is limited on whether AI played a role in them.

But a recent paper by the Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute found that during the UK’s general election, there were 16 viral cases of AI disinformation or deepfakes.

The researchers also analysed the US election and found examples of AI-generated disinformation. 

These included AI bot farms mimicking US voters and allegations against immigrants, the latter leading to viral AI-enabled content that was referenced by some political candidates and received widespread media coverage.

While the paper said there was not enough evidence that AI-enabled disinformation had a measurable impact on the US presidential election results, it did add that this AI content did shape US election discourse by “amplifying other forms of disinformation and inflaming political debates”.

Aside from elections, Kaiser said one of her biggest concerns is the contracts that governments are signing with the Big Tech AI companies, such as the ChatGPT maker OpenAI.

“Because governments are willing to try to use AI, a lot of them are licensing AI products from large companies that are closed source, black boxes, and our data and our government data is just going into these closed source for-profit systems, and there are no protections,” she said.

Open source generally means that a software’s source code is publicly available for anyone to use, modify, and distribute, and, in the case of AI, that the data used to train the model is often shared as well.

Closed source AI, by contrast, means the code and the data the AI is trained on are kept under the company’s full control and ownership.

However, there could be national security implications of having large open-source AI models in the hands of anyone who can code.

“OpenAI has a vast majority of government AI contracts and for all of our personal data, as well as all of the government data from all these different government agencies and departments, all going into OpenAI’s corporate servers, it creates an even bigger liability,” Kaiser said.

She added that this is similar to how Cambridge Analytica and Facebook were over a decade ago, but in OpenAI’s case, it is now “a lot more data, especially sensitive data, that is being fed into those systems”.

Kaiser is now pushing for governments, at both the federal and state levels, to work with more open-source AI companies.

She said open-source is “essential for civilian-facing government agencies, especially now when it’s become popular for the first time for the public to be able to audit what the government is doing with databases and with data”.

She argued that open-source AI systems would be a more ethical way to gain public trust as governments start to adopt AI, particularly in the US, where there is no federal legislation to protect people from how their data might be used in these systems.

Kaiser has recently taken on a new role at ElizaOS, an open-source AI platform that builds AI agents. She is leading the company’s new subsidiary, which helps the US public sector build open-source AI technology.

‘Common sense regulation’

Kaiser is hopeful that the Trump administration may take data protection more seriously. 

“This particular government seems quite set on having serious federal technology policy and engaging with technologists and hiring the technologists to run a lot of government departments,” she said. 

“So I’m hopeful that that means that we’ll actually see something happening in the US.

“It would be quite great if we could finally see federal legislation to protect American citizens and to protect our rights in the face of growing tech adoption”.

However, at the AI Action Summit in Paris in February, the US made it clear that it sees overregulation as a threat to innovation, and that it intends to lead by not putting extraneous regulation on these technologies.

“Excessive regulation of the AI sector could kill a transformative sector just as it’s taking off,” US Vice President JD Vance said.

Kaiser does not see this as hindering her ambitions for US data protections or open-source AI; rather, she sees regulation as a balancing act.

“I certainly don’t agree with [OpenAI CEO] Sam Altman when he says that we should allow any data to be used for our models so that we can be competitive. I think that’s very ‘move fast and break things’ for me,” she said. 

However, she also does not think that extraneous regulation will help, because it ends up not being technically implementable, as she argued happened with some components of Europe’s GDPR.

“But I do believe that common sense regulation that is co-written with technologists so that it’s easily implementable… would be a good thing for Americans and for the economy,” she said.
