2024 Elections Around the World and Social Media’s Blind Eye to Election Integrity


The glamour of social media has surpassed the influence of conventional media in our daily lives. It is a platform where people share moments from lunch outings to relationship status updates. Like ordinary citizens, politicians have joined social media to communicate their messages directly to the public. Originally, feeds were filled with simple content from friends and family; now, however, content that users never "liked" can quietly flood their feeds. This overload of content profoundly influences people's behavior. What is concerning is that much of it consists of information wrapped in sentiment, with the potential to sway masses of decisions toward bias and irrationality. Yet the giant social media companies have failed miserably to contain it.

Deep Concerns

Movements that erupted on social media cemented its place as a powerful force in shaping political discourse and influencing elections worldwide. We know how a viral video of the self-immolation of Mohamed Bouazizi, a 26-year-old street vendor in Tunisia, helped spark the Arab Spring in 2010. Yet with the rise of social media we have also witnessed deep concerns about misinformation, disinformation, manipulation, outright fabrication, foreign interference, election interference, political advertising, and voter suppression. In the post-Arab Spring era, these multifaceted challenges have surfaced in elections around the world, including the 2016 US election and the 2019 elections in India and the UK. Now, Artificial Intelligence (AI) is taking the reins in creating and posting political content autonomously, adding further complexity to these concerns.


AI, in the form of chatbots, is commonly used on social media platforms for auto-responding and posting political content. Unlike humans, AI has taken the problems social media poses for elections to the next level. For example, in March 2023 an Instagram page, "Julina AI Art", shared an AI-generated image of two prominent politicians, Barack Obama and Angela Merkel, enjoying themselves together at the beach. The post also contained a "poetry slam" about their outing, generated by an AI chatbot. The duo appeared eerily realistic. Chatbots are drastically altering the nature of political campaign ads, and the information reaching users is no longer necessarily entirely true.

As we head into 2024, citizens in around 64 countries, along with the European Union, representing almost 49% of the world's population, are gearing up for elections. According to We Are Social, active social media user identities have surpassed the 5 billion mark this year. With so many elections on the calendar, these companies have pledged substantial funds to safeguard the information integrity of elections.

However, big social media platforms, including Facebook, YouTube, and Twitter, have been observed deviating from their commitment to safeguarding election integrity. Additionally, other platforms such as WhatsApp, Telegram, Snapchat, and LinkedIn differ from the familiar landscapes of Facebook, YouTube, Twitter, and Instagram, and pose new challenges for proper accountability and careful oversight.

Therefore, with limited regulation and new risks that did not exist before the advent of AI, maintaining election integrity digitally in the upcoming global elections will be a significant challenge.

How actively do these platforms safeguard election integrity?

To contain such problems, the big platforms rely on community guidelines and reporting processes. Content moderation, fact-checking, taking down viral videos, and detecting AI-generated content are the mechanisms used by almost all social media platforms to address these issues. This piecemeal approach seems credible but is not significant enough to mitigate the problems holistically. Accountable Tech, in its report Democracy by Design: Social Media's Policy Scores, scored ten leading online platforms (Facebook, Instagram, Threads, YouTube, TikTok, Snapchat, Discord, LinkedIn, Nextdoor, and X, formerly Twitter) on election preparedness. Unfortunately, it found that, on average, none of these platforms scored even 50% against the Democracy by Design recommendations.

This might be because these companies do not take the concerns seriously. The Center for Democracy & Technology (CDT), in its report Seismic Shifts, highlighted the harassment, assault, and job threats faced by employees and independent researchers for exposing platform failings and countering disinformation. With such issues in mind, Democracy by Design aims to balance election integrity and freedom of expression through a content-agnostic approach. It favors substantive product design to make platforms resilient to misinformation, fake news, and AI-generated deepfakes. The framework rests on a three-pronged approach, bolstering resilience, countering election manipulation, and leaving "paper trails", that provides common ground between freedom of expression and election integrity. It has appealed to these companies to adopt the framework, which is straightforward to implement, but they have yet to accept it.

Almost 2.5 billion users are engaging with political content on social media this year. More users on a platform means more revenue growth. Meta's platforms and Google are reportedly expected to see political ad spending grow 156% from 2020. "Campaigns and issue advocacy groups are shifting more spending to digital channels in line with the wider changes to the contours of the ad market," said Peter Newman, forecasting director at Insider Intelligence. Overriding their own commitments may simply reflect the companies' prioritization of revenue.

Sidestepping Responsibility

Major social media companies have failed to apply their own policies robustly and equitably across the globe. Meta, formerly Facebook, has proved ineffective at combating hate and disinformation in languages other than English; the #YaBastaFacebook campaign revealed the failure to enforce Meta's policies on non-English content. In another case, since Elon Musk took over as CEO of Twitter, the platform has disbanded its trust and safety teams, declined to ban extremist accounts, and removed labels that alerted users to state-affiliated accounts. A sharp increase in hate speech, online trolling, and harassment has been evident as a result.

YouTube committed to restricting recommendations of content containing misinformation, and TikTok likewise promised to keep political ads off its platform; however, their enforcement claims ring hollow in Free Press's investigation of the state of platform integrity at major social tech companies, the 2022 report "Empty Promises." It reviewed the enforcement policies of the four largest social media platforms and found that all of them failed to take adequate measures against these problems. Meta, Twitter, and YouTube significantly rolled back concrete measures they had once decided to implement. The platforms abandoned dozens of commitments: they stopped moderating "Big Lie" content, weakened political ad policies, weakened privacy policies regarding AI access, imposed limits on user fact-checking, rolled back deadnaming policies, weakened penalties for users who violate platform policies, laid off content moderators and trust and safety teams, reinstated Trump, and reinstated or re-monetized previously suspended dangerous accounts. Such recommendations are simply not taken seriously by these companies.

Governments' Battles with the Challenges of Social Media Platforms

Governments around the world are trying to tackle these challenges responsibly. In the US, the Federal Election Commission and Congress have both put forward recommendations to crack down on the use of deepfakes in political ads. However, the probability of such a bill passing the federal legislature is thin.

Indonesia is among Asia's top users of social media and a leading user of Facebook, Instagram, and TikTok. Experts say that "buzzers", professionals who shape public opinion on social media, make money during elections by spreading hate speech and creating polarization through "black campaigns" that engage social media users. They cannot be stopped because of weak data disclosure policies.

India faces similar problems. Professional social media operatives, especially those spreading information, remain unaccountable. Recently, an order from the Indian government to block certain accounts on 'X' became a new flashpoint between the platform and the government. The two are at loggerheads over how to balance safeguarding the information integrity of elections with freedom of expression, and as long as the disagreement continues, the government can hardly hope to resolve the matter. @ECISVEEP, an initiative of the Election Commission of India, is struggling to combat deepfakes on emerging platforms such as Telegram, Snapchat, and WhatsApp. These platforms have remained unaccountable in many cases, yet they play a significant role in disseminating information to a large segment of the population. Analysts say the Election Commission has failed to keep pace with the shifting nature of election campaigns on social media.
In the UK, the Department for Science, Innovation and Technology's Secretary, Michelle Donelan, held a roundtable with leaders of the social media companies Google, Meta, X, TikTok, and Snapchat to discuss the spread of antisemitism, violent content, and misinformation, yet the public is still waiting for a concrete outcome. The UK government found in mid-2022 that 70% of social media users had encountered problems with harmful content.

A country might frame laws to combat these problems, but weak enforcement leaves the ball in the platforms' court, and the platforms remain inattentive.

The outlook for safeguarding information integrity in the 2024 elections around the world is bleak. Social media platforms have yet to prepare themselves to tackle these problems. Whether safeguarding election integrity simply ranks low or generating revenue takes priority, it is ultimately the democratic process that suffers.

Dr. Md Anis Akhtar is an expert in electoral policy in India. He currently serves as an Assistant Professor in the Department of Political Science at a college affiliated with Vidyasagar University, West Bengal, India. Follow him at https://twitter.com/ForStrateg6211
