The proliferation of bots in journalism raises a host of issues, chief among them hard questions about objectivity, neutrality, and bias in newsfeed algorithms. ‘Bot’ has become a buzzword, often shorthand for an impersonal, purely algorithmic entity that can quietly distort how news is interpreted and represented. The rise of algorithmic newsfeed bias, and the struggle to distinguish bots from human journalists, demands attention now more than ever.
Algorithmic bias occurs when a seemingly neutral system produces outputs that are prejudiced or create skewed impressions, often because of flaws or blind spots in the algorithm’s design. Research has shown that algorithmic bias can influence public opinion and deepen political polarization well beyond cyberspace. In news, algorithmic bias can distort the apparent importance, relevance, or credibility of certain stories, undermining the democratic process by gatekeeping access to information.
Internet companies like Google, Facebook, and Twitter have built algorithms that control what appears in people’s news feeds, potentially biasing how people interpret and respond to news stories. Designed to tailor content to each user’s individual preferences, these algorithms inadvertently help create “echo chambers” and “filter bubbles”, in which users mostly see content that reinforces what they already engage with.
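To make that mechanism concrete, here is a minimal sketch of how an engagement-driven ranker can narrow a feed. The topics, stories, and scoring rule are all invented for illustration; real newsfeed algorithms weigh far richer signals, but the feedback loop is the same: the more a user clicks one kind of story, the more of it the ranking surfaces.

```python
# A minimal sketch of how engagement-driven personalization can narrow a
# news feed into a "filter bubble". Topics, stories, and the scoring rule
# are hypothetical; real ranking systems weigh far richer signals.
from collections import Counter

STORIES = [
    {"id": 1, "topic": "politics"},
    {"id": 2, "topic": "politics"},
    {"id": 3, "topic": "science"},
    {"id": 4, "topic": "sports"},
    {"id": 5, "topic": "science"},
]

def rank_feed(stories, click_history):
    """Score each story by how often the user clicked its topic before."""
    topic_counts = Counter(story["topic"] for story in click_history)
    return sorted(stories, key=lambda s: topic_counts[s["topic"]], reverse=True)

# Simulate a user whose first click is a politics story. Each round, the
# top-ranked story gets clicked, and the ranking tilts further toward it.
history = [STORIES[0]]
for _ in range(3):
    feed = rank_feed(STORIES, history)
    history.append(feed[0])

print([s["topic"] for s in feed])  # politics now dominates the top of the feed
```

Even this toy loop shows the dynamic: a single early click is enough to push one topic to the top of the ranking and keep it there.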
Detecting bot-driven bias is no easy task, mainly because there is no recognizable face behind the output. Researchers, however, have begun developing tools and techniques to identify these non-human news producers and curb their influence.
One such tool is Botometer, developed by the Observatory on Social Media at Indiana University. This machine learning tool analyzes a Twitter account’s behavior, examining its tweeting patterns, friend network, and content, to estimate how likely the account is to be automated. It represents a step forward in identifying algorithmically generated bias.
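For a sense of how such a tool works under the hood, consider a toy version of the general approach: a supervised classifier trained on account-level behavioral features. This is not Botometer’s actual model, which uses a far larger feature set; the features, training data, and labels below are invented for illustration.

```python
# A toy illustration of the supervised-learning approach behind tools like
# Botometer: train a classifier on account-level features, then score new
# accounts. All features and data here are invented for the sketch.
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features per account: tweets per day, follower/friend ratio,
# fraction of tweets containing links.
X_train = [
    [300.0, 0.01, 0.95],   # high-volume, link-heavy account -> labeled bot
    [250.0, 0.05, 0.90],
    [4.0,   1.20, 0.10],   # moderate, conversational account -> labeled human
    [8.0,   0.80, 0.20],
]
y_train = [1, 1, 0, 0]     # 1 = bot, 0 = human

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score an unseen account. The output is a bot-likelihood, not a verdict,
# which is also how Botometer reports its results.
unseen = [[120.0, 0.02, 0.85]]
print(model.predict_proba(unseen)[0][1])  # probability the account is a bot
```

The classifier outputs a probability rather than a verdict, leaving the final judgment call to a human, which matches how Botometer presents its scores.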
Pioneering organizations are also exploring ways to integrate bots into journalism ethically. The Associated Press, for instance, uses software called Wordsmith to generate news stories, such as corporate earnings reports, from structured data. The organization mandates human involvement in newsgathering and verification, however, to preserve objectivity and neutrality.
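The underlying technique is typically template-driven natural language generation: structured data is slotted into pre-written sentence frames. The sketch below illustrates that general idea; it is not Automated Insights’ actual engine, and the template, field names, and figures are invented.

```python
# A minimal sketch of template-driven story generation, the general technique
# behind tools like Wordsmith. The template, fields, and values are invented.

TEMPLATE = (
    "{company} reported earnings of ${eps:.2f} per share for {quarter}, "
    "{comparison} analyst expectations of ${expected:.2f}."
)

def generate_story(data):
    """Fill the template, deriving wording from the structured figures."""
    beat = data["eps"] >= data["expected"]
    return TEMPLATE.format(
        company=data["company"],
        quarter=data["quarter"],
        eps=data["eps"],
        expected=data["expected"],
        comparison="beating" if beat else "missing",
    )

# Example input resembling a structured earnings feed (values invented).
print(generate_story({
    "company": "Acme Corp",
    "quarter": "Q3 2023",
    "eps": 1.42,
    "expected": 1.30,
}))
```

Because the prose is fixed in advance, quality control shifts from editing the output to vetting the template and the data feed, which is one reason human verification remains essential.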
Technology giants are likewise working to tackle algorithmic bias. Google, for instance, has been transparent about the steps it is taking to reduce bias in its systems, including hiring more diverse engineering teams and applying bias-aware machine learning practices when refining its search engine.
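One concrete practice in this vein is auditing a model’s outputs for disparities across user groups before deployment. The sketch below computes a simple demographic-parity gap; it is a generic fairness check, not Google’s internal tooling, and the predictions and group labels are invented.

```python
# A generic fairness audit: compare positive-prediction rates across groups.
# A large gap flags the model for review; data here is invented.

def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rates between the groups."""
    rates = {}
    for g in set(groups):
        members = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return max(rates.values()) - min(rates.values())

# Hypothetical model outputs (1 = surfaced in top results) for users in
# two groups.
preds  = [1, 1, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"parity gap: {demographic_parity_gap(preds, groups):.2f}")  # 0.50
```

A gap near zero suggests the model treats the groups similarly on this metric; a large gap is a signal to investigate, not proof of bias.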
These efforts are making headway in the battle against algorithmic bias in news feeds. But the tussle between bots and journalists, which reflects a broader tension between technology and human judgment in newsgathering and dissemination, is far from over.
Debate still rages over the precise role bots should play in journalism. The ability of bots to process vast amounts of data is a gold mine in the era of big-data journalism, but quality control remains tricky. At the heart of journalism lies a commitment to truth and accuracy that, for now, still requires human judgment.
The cost of letting unchecked bots infiltrate journalism is steep. Until an effective gatekeeping mechanism is in place, distinguishing bots from human journalists must remain a top priority, not only for the industry waging the battle but for those affected by its outcome: readers worldwide.
1. Swire-Thompson, B. and Lazer, D., 2020. Public Health and Online Misinformation: Challenges and Recommendations. Annual Review of Public Health, 41(1), pp. 433-451.
2. The Observatory on Social Media, Indiana University.
3. Associated Press.
4. Google AI, 2020. Responsible AI Practices.
5. Napoli, P., 2019. Social Media and the Public Interest: Media Regulation in the Disinformation Age. Columbia University Press.