OpenAI, the company behind ChatGPT, has disclosed in a report that its AI models have been used in several attempts to generate fake content aimed at influencing elections, as well as in cybercrime operations.

The US-based tech startup, backed by Microsoft, revealed that cybercriminals are leveraging AI tools, including ChatGPT, to produce fake articles and social media comments, as well as to create and debug malware.
So far in 2024, OpenAI has neutralized over 20 incidents where its technology was misused, including shutting down ChatGPT accounts in August that were generating content related to U.S. elections. In July, OpenAI also banned accounts in Rwanda that were producing election-related comments for social media platform X (formerly Twitter).
OpenAI noted that none of these attempts achieved viral engagement or significant audience traction. Even so, concerns continue to grow over the use of AI tools to spread fake content, particularly in the lead-up to the U.S. presidential election in November.
The U.S. Department of Homeland Security has also raised alarms about foreign interference from Russia, Iran, and China, which are believed to be using AI to circulate divisive or misleading information ahead of the Nov. 5 elections.
Last week, OpenAI further solidified its standing as one of the world’s most valuable private companies after securing $6.6 billion in a recent funding round. Since its launch in November 2022, ChatGPT has amassed 250 million weekly active users, according to a Reuters news report.
Previous examples of AI in election campaigns
AI has been increasingly used in election campaigns, influencing voter behavior and shaping the political landscape. Here are some real-world examples, compiled with the help of OpenAI’s ChatGPT, where AI played a role in influencing elections:
#1 Cambridge Analytica and the 2016 U.S. Presidential Election
How AI Was Used: Cambridge Analytica, a British political consulting firm, gained unauthorized access to millions of Facebook users’ data. The firm processed this data through machine learning algorithms to build detailed psychographic profiles of voters, using AI-powered models to segment them by personality traits, political preferences, and behavioral tendencies.
Impact: These AI-driven insights allowed highly targeted ads, messages, and misinformation to be delivered to different voter groups, aiming to manipulate emotions and sway opinions. While it’s debated how much this influenced the outcome, it did create controversy and raised concerns about data privacy and AI ethics in elections.
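To make the segmentation idea concrete, here is a minimal Python sketch of clustering synthetic voters by trait scores with scikit-learn. The data, feature names, and cluster count are invented for illustration; this is a simplified stand-in, not Cambridge Analytica’s actual pipeline.

```python
# A minimal, illustrative sketch of trait-based voter segmentation.
# Synthetic data and feature names are hypothetical; this is NOT
# Cambridge Analytica's actual pipeline.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Each row is one (synthetic) voter; columns are Big Five-style trait
# scores plus a crude issue-engagement score, all on arbitrary scales.
n_voters = 500
traits = rng.normal(loc=0.0, scale=1.0, size=(n_voters, 5))
engagement = rng.uniform(0, 1, size=(n_voters, 1))
features = np.hstack([traits, engagement])

# Standardize features so no single trait dominates the distance metric.
X = StandardScaler().fit_transform(features)

# Cluster voters into a handful of psychographic segments.
kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)

for seg in range(4):
    size = int((kmeans.labels_ == seg).sum())
    print(f"segment {seg}: {size} voters")
```

In a real campaign pipeline, each resulting segment would then be matched with differently framed messaging, which is precisely where the ethical concerns described above arise.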
#2 Social Media Bots in the 2016 U.S. and Brexit Campaigns
How AI Was Used: During the 2016 U.S. election and the Brexit referendum, AI-driven bots played a significant role on social media platforms like Twitter and Facebook. These bots were designed to mimic real user behavior and spread political messages, often amplifying polarizing content.
Impact: AI bots shared and promoted misinformation, created echo chambers, and amplified divisive narratives. This activity potentially influenced public opinion by creating the illusion of widespread support for certain views or candidates, affecting undecided voters.
#3 Deepfake Technology in Political Ads (India, 2020)
How AI Was Used: In the 2020 Delhi Legislative Assembly elections, the Bharatiya Janata Party (BJP) used deepfake technology to create videos of their candidate, Manoj Tiwari, speaking in multiple languages. AI-generated deepfake videos allowed Tiwari to address non-Hindi-speaking voters in their native languages without ever recording the message himself.
Impact: This AI innovation helped the party reach a broader audience, breaking language barriers and personalizing outreach efforts. While it was not illegal, it sparked discussions about the potential for deepfake technology to be used in more deceptive ways in future elections.
#4 AI-Powered Voter Sentiment Analysis in Brazil (2018)
How AI Was Used: During Brazil’s 2018 presidential election, both leading candidates, Jair Bolsonaro and Fernando Haddad, used AI-driven analytics to gauge voter sentiment on social media. Bolsonaro’s campaign employed AI tools to track public opinion in real time, allowing them to adjust their strategy and target ads based on trending issues and voter sentiment.
Impact: This real-time analysis allowed the Bolsonaro campaign to tailor their messages to align with what voters cared about most at the moment, increasing engagement and support. It also helped in spreading viral content and shaping public narratives on platforms like WhatsApp and Facebook.
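As a rough illustration of the idea, here is a toy lexicon-based sentiment scorer in Python. The word lists are invented, and real campaign tools relied on far richer models and live data streams.

```python
# A toy, lexicon-based sentiment scorer -- a vastly simplified stand-in
# for the real-time social media sentiment tools described above.
# The word lists are illustrative, not a real campaign lexicon.
POSITIVE = {"great", "hope", "strong", "support", "win", "honest"}
NEGATIVE = {"corrupt", "weak", "fail", "crime", "lies", "fear"}

def sentiment_score(post: str) -> float:
    """Return a score in [-1, 1]: positive minus negative word share."""
    words = post.lower().split()
    if not words:
        return 0.0
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return (pos - neg) / len(words)

posts = [
    "Great rally today, so much hope and support!",
    "Another corrupt deal, more lies and fear.",
]
for p in posts:
    print(f"{sentiment_score(p):+.2f}  {p}")
```

Aggregating such scores over thousands of posts per hour is what lets a campaign spot a shift in voter mood quickly enough to adjust its messaging.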
#5 AI-Generated Political Campaign Messages in 2019 European Parliament Elections
How AI Was Used: Some political parties in the European Union used AI tools to generate personalized campaign messages for voters during the 2019 European Parliament elections. AI systems analyzed voters’ social media activity, demographic data, and browsing patterns to create messages that resonated with each voter’s specific concerns.
Impact: This personalized messaging approach allowed campaigns to engage voters on a more personal level, increasing the likelihood of voter turnout and support. However, it also raised concerns about the ethical implications of micro-targeting voters with AI-driven content, as it could be used to manipulate vulnerable or undecided voters.
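The mechanics of such micro-targeting can be illustrated with a toy template-based personalizer. The voter records, concerns, and message templates below are all invented; real systems inferred concerns from social media and demographic data at much larger scale.

```python
# A toy sketch of concern-based message personalization. Voter records
# and templates are invented; real systems inferred concerns from
# social media activity and demographics, not a hand-written table.
TEMPLATES = {
    "jobs": "Our plan creates local jobs in {region}.",
    "climate": "We will cut emissions and protect {region}'s environment.",
    "security": "Safer streets for families in {region}.",
}

voters = [
    {"name": "A. Jansen", "region": "Groningen", "top_concern": "jobs"},
    {"name": "M. Rossi", "region": "Veneto", "top_concern": "climate"},
]

for v in voters:
    message = TEMPLATES[v["top_concern"]].format(region=v["region"])
    print(f"To {v['name']}: {message}")
```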
#6 The Use of AI to Combat Disinformation in 2020 U.S. Election
How AI Was Used: In response to the growing threat of election disinformation, platforms like Facebook, Google, and Twitter deployed AI algorithms to detect and remove fake news and misleading content during the 2020 U.S. presidential election. AI-driven fact-checking tools were employed to flag suspicious content and prevent its viral spread.
Impact: AI played a critical role in limiting the spread of disinformation and reducing the influence of fake news on voters. However, some critics argue that the algorithms were not perfect and sometimes flagged legitimate content, creating tension between free speech and the need for content moderation.
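One simple form of such flagging is matching new posts against a database of already-debunked claims. The sketch below uses plain string similarity from Python’s standard library; the claim list and threshold are invented, and production systems rely on learned embeddings plus human fact-checkers rather than raw string matching.

```python
# A toy sketch of flagging posts that closely match known debunked claims.
# The claim database and threshold are invented for illustration.
from difflib import SequenceMatcher

DEBUNKED_CLAIMS = [
    "ballots were counted twice in several counties",
    "voting machines switched millions of votes",
]

def flag_post(post: str, threshold: float = 0.6) -> bool:
    """Return True if the post closely resembles a known debunked claim."""
    post_l = post.lower()
    return any(
        SequenceMatcher(None, post_l, claim).ratio() >= threshold
        for claim in DEBUNKED_CLAIMS
    )

posts = [
    "BREAKING: ballots were counted twice in several counties!",
    "Long lines at my polling place this morning.",
]
for p in posts:
    print(flag_post(p), "-", p)
```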
#7 AI-Powered Predictive Analytics in Political Fundraising (Obama Campaign, 2012)
How AI Was Used: In Barack Obama’s 2012 reelection campaign, AI-driven predictive analytics were employed to optimize political fundraising efforts. The campaign used machine learning algorithms to predict which potential donors were most likely to contribute and how much they would be willing to give.
Impact: The AI-driven approach allowed the Obama campaign to raise significant amounts of money by targeting specific voter groups with highly effective fundraising messages. The use of AI to optimize donation strategies led to more efficient resource allocation and a substantial financial advantage.
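A minimal sketch of this kind of donor-propensity modeling, using logistic regression on synthetic data, is shown below. The features and labels are invented; the actual 2012 campaign models were far richer and are not public in this form.

```python
# A minimal sketch of donor-propensity modeling with logistic regression.
# All data here is synthetic and the feature set is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 2000

# Hypothetical features: past donation count, email open rate, age.
past_donations = rng.poisson(1.5, n)
open_rate = rng.uniform(0, 1, n)
age = rng.integers(18, 90, n)
X = np.column_stack([past_donations, open_rate, age])

# Synthetic label: donors with history and engagement donate again more often.
logit = 0.8 * past_donations + 2.0 * open_rate - 2.5
y = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Rank held-out prospects by predicted probability of donating.
probs = model.predict_proba(X_test)[:, 1]
top = np.argsort(probs)[::-1][:5]
print("top prospect donation probabilities:", np.round(probs[top], 2))
```

Ranking prospects by predicted probability lets a campaign spend its outreach budget on the contacts most likely to give, which is the efficiency gain the Obama campaign is credited with.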
These examples highlight both the potential benefits and risks of using AI in election processes. While AI can enhance campaign efficiency and voter engagement, it also raises ethical questions around manipulation, misinformation, and privacy.
How to prevent AI-generated disinformation
As AI technologies become more advanced, concerns about their potential misuse in influencing elections have grown. In response, various safeguards are being developed to prevent AI-generated disinformation and manipulation during electoral processes. Here are some key strategies and safeguards being implemented to mitigate these risks:
# AI Detection Tools
Developing AI systems to detect AI-generated content: AI companies, including OpenAI, are working on sophisticated models capable of distinguishing human-generated content from machine-generated text. These tools help platforms identify and flag misleading or false information, especially in the form of articles, social media posts, or comments.
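One common approach is to train a text classifier on examples of human and machine writing. The sketch below is deliberately tiny: the example texts and labels are invented, and real detectors use much larger datasets plus model-specific signals such as token log-probabilities.

```python
# A deliberately tiny sketch of classifier-based AI-content detection.
# The training texts and labels are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "In conclusion, it is important to note that elections matter.",
    "Furthermore, one must consider the multifaceted implications.",
    "lol did u see the debate last night, total mess",
    "grabbing coffee before the polls open, line is huge already",
]
labels = [1, 1, 0, 0]  # 1 = AI-generated (invented), 0 = human (invented)

detector = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                         LogisticRegression())
detector.fit(texts, labels)

# Probability that a new text is AI-generated, per this toy model.
print(detector.predict_proba(
    ["It is important to consider the implications of this policy."]
)[:, 1])
```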
Watermarking and content labeling: Some AI-generated content, especially text or images, can be “watermarked” or labeled to indicate that it was created by an AI. This helps users identify inauthentic content and reduces the spread of misinformation.
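Statistical watermarking of model outputs is an active research area; a simpler, related idea is attaching a cryptographically signed provenance label to generated content. The sketch below uses Python’s standard hmac library; the key handling and label schema are hypothetical.

```python
# A minimal sketch of provenance labeling: attach a signed "AI-generated"
# label to content so platforms can verify it later. This is metadata
# labeling, not statistical token watermarking; the key and schema
# are hypothetical.
import hashlib
import hmac
import json

SECRET_KEY = b"hypothetical-shared-key"  # real systems use managed key infrastructure

def label_content(text: str) -> dict:
    digest = hmac.new(SECRET_KEY, text.encode(), hashlib.sha256).hexdigest()
    return {"text": text, "generator": "ai", "signature": digest}

def verify_label(record: dict) -> bool:
    expected = hmac.new(SECRET_KEY, record["text"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

record = label_content("This summary was produced by a language model.")
print(json.dumps(record, indent=2))
print("label verifies:", verify_label(record))

# Any edit to the text invalidates the signature, flagging tampering.
record["text"] += " (edited)"
print("after tampering:", verify_label(record))
```

Because any edit invalidates the signature, a platform checking such labels can detect both tampered labels and content whose claimed provenance does not verify.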
# Collaboration with Governments and Regulators
Working with election oversight bodies: AI companies are collaborating with government agencies, election commissions, and cybersecurity organizations to monitor and curb the use of AI in election tampering. This includes sharing information about malicious actors and collaborating on AI detection efforts.
Regulatory frameworks: Governments are increasingly adopting regulations around the use of AI, particularly in elections. The European Union’s AI Act, which entered into force in 2024, for example, establishes rules that require transparency in AI usage, especially in sensitive areas like elections.
# Platform Moderation and Policies
Social media partnerships: Platforms like X (formerly Twitter), Facebook, and YouTube are working with AI companies to identify and remove fake or misleading content. Enhanced moderation tools use AI to flag content that could mislead or manipulate voters.
Account suspension and bans: As OpenAI has done, platforms are proactively banning or suspending accounts involved in creating and distributing election-related disinformation. This includes both automated bot accounts and human actors using AI tools for malicious purposes.
# User Education and Awareness
Promoting digital literacy: Educating the public on how to spot AI-generated disinformation is crucial. Many organizations are working on awareness campaigns to teach users how to critically assess the information they encounter online, especially during election cycles.
Verification features: Some platforms offer verification tools that help users confirm the legitimacy of information, including whether it comes from verified sources or has been flagged for potential manipulation.
# Transparency in AI Model Use
Disclosing AI-generated content: Companies like OpenAI are taking steps to be transparent about how AI is used in various fields, including elections. By openly reporting attempts to misuse AI, they provide accountability and encourage responsible AI development.
# Limiting AI Model Accessibility
Access restrictions: Certain advanced AI models are not made publicly available or have access limitations to prevent their misuse. For example, models that can generate deepfakes or realistic images may be restricted to verified users or used in controlled environments to limit their potential for creating disinformation.
# Research and Continuous Improvement
Ongoing research into election-related AI misuse: AI developers, academic researchers, and independent organizations are continuously studying how AI can be misused in elections. This helps improve existing safeguards and predict new threats that may emerge with advancing technology.
AI auditing: Regular audits of AI systems help ensure they are not being used for malicious purposes, and they allow companies to make adjustments to their models to prevent exploitation.
# International Cooperation
Global coordination against election interference: Since election disinformation often crosses borders, international cooperation is critical. Countries are working together to share intelligence on foreign actors that use AI to meddle in elections and develop joint strategies to combat the threat.
These safeguards are part of a larger effort to ensure that elections remain free from manipulation and that AI is used responsibly rather than as a tool for undermining democratic processes.
Baburajan Kizhakedath