Image Description: The photo shows a sketch of a green leaf split into four pieces. Each section bears a social media platform logo: Instagram, Twitter, Facebook, YouTube. The leaf is set against a dark background, and black arrows point from each section to the one below it.
Social media has become an open platform for many of us to discuss politics, culture and entertainment and to expose ourselves to diverse issues and viewpoints. Our digital transition has enabled us to stay updated on current events through online news articles, search engines, podcasts and social media platforms — all conveniently accessible and compiled in a single device.
However, without sounding like a boomer denouncing social media for rotting our brains, there is danger in relying on social media as one’s sole news source. One prominent issue on social media platforms is the rise of automated accounts (bots), which threaten the open discourse these platforms strive to foster. Social media bots can post content and interact with other users without direct human involvement. While bots can be useful for automated news updates and replying to comments, there is a danger of them being used to alter perceptions of political discourse, manipulate rating systems and spread misinformation.
Sandra González-Bailón, associate professor of communication at the University of Pennsylvania, recently discussed the role of bots in skewing news visibility at an event hosted by the UCLA Luskin Department of Public Policy. She presented data on the presence of Twitter bots in controversial political discussions, and their numbers are astonishingly larger than those of human, non-automated accounts.
Specifically, Pew Research Center, a think tank based in Washington, D.C., estimates that “66% of tweeted links to popular news and current events websites are posted by bots.” Its findings show that most of the shared links lead to popular political sites with centrist or mixed ideology, meaning bots do not generate a significant political bias. This must mean that bots are harmless, right?
Wrong. Max Weiss, a Harvard undergraduate, recently experimented with a text-generation program to create 1,000 comments to respond to a Medicaid issue. The increasingly sophisticated technology was able to generate unique comments that all sounded like actual people advocating for a policy stance. These comments successfully deceived the Medicaid.gov administrators, showing how bots can be easily mistaken for humans with genuine concerns.
Many of us are likely aware of the political discourse that took place on Twitter around the 2016 U.S. presidential election. Studies show that around a fifth of all tweets related to the election came from bots. While deeper analyses reveal that these bots did not necessarily affect the election results, they did alter Twitter users’ perception of public sentiment. So when we thought a certain political stance enjoyed broad public support, that impression could very well have been the product of bot activity. This hinders our access to meaningful political discourse and social interaction with actual people on social media platforms.
Bots can have fake personas (e.g., names, photos, bios), and as mentioned earlier, they can easily blend in with human comments and posts. While there has always been fear surrounding artificial intelligence (AI) technology, those fears are quickly becoming reality. Without careful regulation of automated accounts, AI-driven personas may eventually be able to undetectably contribute to political debates, pose as individuals and send personalized texts and letters to elected officials; rather than implanting bias in online debates, these bots may be drowning out any actual productive discourse online.
González-Bailón proposed that more research must be conducted to better differentiate unverified bots from verified bots (those that push out legitimate news). She also emphasized that even verified Twitter accounts spread misinformation and that we are more susceptible to believing them because of the “verified” label. Therefore, we must continue to analyze how we verify social media accounts and how the public is influenced by verified accounts.
Developing and standardizing better authentication methods will help regulate bots on social media, but because anonymous speech is so important to online discourse, balancing authenticity with privacy will be difficult.
If anything, it is important to acknowledge the limits of online political discourse, even though social media has been liberating for many unheard voices and underrepresented issues. Face-to-face conversations are still important to have. They can increase our empathy toward different communities because they tend to be much more personal and sincere (how many times have we seen hateful comments that we know these commenters would not have the guts to say in person?).
Without cutting social media out of our lives entirely, let’s try to reduce our dependence on it as our sole source of information and news. Taking a step back from social media and its heated comment threads can be our next cool trend.