• The world was shocked to learn that British political consulting firm Cambridge Analytica had harvested the personal data of at least 50 million Facebook users without their consent.
• Artificial Intelligence (AI) can also be used to influence elections and threaten the integrity of democracies worldwide.
• Examples of how AI can potentially threaten democracies include large language models that generate text indistinguishable from human writing, as well as deepfakes that can fabricate videos of public figures and manipulate public opinion.
Cambridge Analytica Scandal
In 2018, the world was shocked to learn that British political consulting firm Cambridge Analytica had harvested the personal data of at least 50 million Facebook users without their consent and used it to influence elections in the United States and abroad. An undercover investigation by Channel 4 News captured footage of the firm’s then-CEO, Alexander Nix, suggesting the firm had no qualms about deliberately misleading the public to support its political clients, saying: “It sounds a dreadful thing to say, but these are things that don’t necessarily need to be true. As long as they’re believed”.
Threats Posed by AI
The scandal was a wake-up call about the dangers of both social media and big data, and about how fragile democracy can be in the face of rapid technological change. According to Trish McCluskey, associate professor at Deakin University, artificial intelligence (AI) could make it much easier for bad actors to spread disinformation and threaten elections around the world. Examples of how AI can threaten democracies include large language models such as OpenAI’s ChatGPT, which can generate text indistinguishable from human writing, and deepfakes, which can fabricate videos of public figures such as presidential candidates to manipulate public opinion.
Rapid Technological Advancement
While it is still generally easy to tell when a video is a deepfake, the technology is advancing rapidly, and such fakes will eventually become indistinguishable from reality. A deepfake video of former FTX CEO Sam Bankman-Fried that linked to a phishing website, released earlier this year, shows the technique is already being weaponised. Similarly, the Pentagon’s chief digital officer, Craig Martell, warns that generative AI language models like ChatGPT could become the “perfect tool” for disinformation if not monitored carefully, because they lack contextual grounding and people may take words fabricated by AI as fact.
Regulatory Measures Needed?
As technology advances, so too must the regulatory measures put in place by governments around the globe, to ensure that technologies such as AI algorithms are not abused or manipulated for malicious or unethical ends, such as influencing an election result or manipulating consumer behaviour and opinions on particular topics or candidates. It is more important than ever for governments and organisations to establish laws and standards regulating the use, application, and development of AI algorithms, ensuring transparency and accountability while maintaining privacy and security on all fronts.
Conclusion
In conclusion, recent scandals involving the use and abuse of private user data by companies such as Cambridge Analytica, combined with advances in artificial intelligence, particularly the creation of deepfakes, have raised growing concerns that technology could be used to undermine democratic processes and erode citizens’ trust in government institutions. So far, most of the solutions proposed have been preventative rather than reactive: measures that help protect against potential misuse of these systems, and policies that govern them, to ensure the safety of citizens in both online and offline contexts.