For several years now, newsrooms have been taking advantage of rapid advances in artificial intelligence (AI) to automate the creation of news content, optimize news delivery, generate analysis, and much more. Just as it has reached into most aspects of life, AI has opened opportunities for media and journalism to explore new approaches to data management and storytelling. Yet amid all its perks, AI has raised a critical concern among stakeholders in media and journalism: the challenge of trust in AI-generated news content.
News and journalism thrive on trust, a factor of utmost importance in an age of ‘fake news’ hysteria. Trust is built on accuracy, fact-checking, clarity, context, fairness, and, not least, transparency. While readers generally take these attributes for granted in human-authored news, can we be as confident about AI-generated content?
AI-written news brings efficiency and scalability, with systems such as Automated Insights’ Wordsmith powering automated stories at news organizations including the Associated Press and Yahoo. Notably, Google has backed AI technology used to draft full news articles. OpenAI’s GPT-3 displayed a remarkable leap in this domain, penning articles that can be difficult to distinguish from those written by human authors.
The technology is impressive, for sure. It is fast, reliable, emotion-free, and consistent. Yet these same strengths can double as drawbacks when it comes to journalism. Can an AI understand irony, interpret humor, or recognize fabricated claims? Can it uphold the ethical standards that human journalists pledge to maintain?
Alan Rusbridger, former editor-in-chief of The Guardian, notes, “The ethical dimension is missing in AI. AI can generate data and even conclusions, but can it take responsibility for them?” The risk that AI-written news could spread misinformation, or be exploited to disseminate propaganda, is a significant concern. AI systems still require human oversight to ensure content quality and uphold ethical standards.
Additionally, transparency and attribution in AI-created content remain blurry. News consumers are accustomed to linking news stories to specific authors, editors, and outlets, and they draw a sense of credibility and authenticity from human-authored content. The impersonality of AI-generated news, and the lack of an emotional connection to it, raise the question of how news organizations can build trust in an automated news environment.
Experts believe the answer lies in striking a strategic balance between AI and human involvement in news creation. AI should be leveraged for its strengths, such as data crunching, while humans manage areas that require discretion, ethical judgement, and creativity.
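To make that division of labor concrete, here is a minimal sketch of an editorial routing rule in Python. Everything in it is illustrative rather than drawn from any real newsroom: the `Draft` fields, the confidence threshold, and the assumption that routine data-driven copy (earnings reports, sports results) can be auto-published while everything else waits for a human editor.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    """A story draft moving through the newsroom pipeline (illustrative)."""
    headline: str
    body: str
    source: str        # "ai" for machine-generated drafts, "human" otherwise
    confidence: float  # the generator's self-reported confidence, 0.0-1.0

def needs_human_review(draft: Draft, threshold: float = 0.9) -> bool:
    """Route every AI draft to an editor unless it is routine, data-driven
    copy (earnings, sports results) produced with high confidence."""
    routine = any(tag in draft.headline.lower()
                  for tag in ("earnings", "recap", "results"))
    return draft.source == "ai" and not (routine and draft.confidence >= threshold)

def route(draft: Draft) -> str:
    """Decide where a draft goes next."""
    if needs_human_review(draft):
        return "queued_for_editor"  # judgement, ethics, creativity stay human
    return "auto_published"         # scalable data-crunching stays automated

# A machine-written earnings recap sails through; an AI-drafted
# political analysis is held for a human editor.
print(route(Draft("Q3 earnings beat estimates", "...", "ai", 0.97)))  # auto_published
print(route(Draft("What the election means", "...", "ai", 0.97)))     # queued_for_editor
```

The point of the sketch is the shape of the policy, not its details: automation handles the high-volume, formulaic work, and anything requiring judgement defaults to a person.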
Moreover, several proponents argue that technology itself can help build trust in AI-generated news. Deploying emerging technologies like blockchain to create an ‘immutable record of work’, one that makes the provenance of each story transparent and auditable, could be a feasible way to address the trust challenge.
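As a rough illustration of that idea, the sketch below implements a tiny append-only hash chain in Python: each published article’s text is hashed, tagged with the system that generated it and the editor who approved it, and linked to the previous record, so any retroactive edit breaks verification. This is a simplified stand-in for a real blockchain; the record fields and the use of SHA-256 over canonical JSON are assumptions made for the example.

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a record's canonical JSON form."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

class ProvenanceLedger:
    """Append-only chain of article records; each entry commits to the
    previous one, so tampering with history invalidates the chain."""

    def __init__(self):
        self.chain = []

    def append(self, article_text: str, generator: str, reviewer: str) -> dict:
        entry = {
            "content_hash": hashlib.sha256(article_text.encode("utf-8")).hexdigest(),
            "generator": generator,  # the system that drafted the piece
            "reviewer": reviewer,    # the human editor who signed off
            "timestamp": time.time(),
            "prev_hash": self.chain[-1]["entry_hash"] if self.chain else None,
        }
        entry["entry_hash"] = record_hash(entry)
        self.chain.append(entry)
        return entry

    def verify(self) -> bool:
        """Recompute every hash link; False if any record was altered."""
        prev = None
        for entry in self.chain:
            body = {k: v for k, v in entry.items() if k != "entry_hash"}
            if entry["prev_hash"] != prev or record_hash(body) != entry["entry_hash"]:
                return False
            prev = entry["entry_hash"]
        return True

# Append two stories, then confirm the chain is intact.
ledger = ProvenanceLedger()
ledger.append("Full text of story one...", generator="nlg-system", reviewer="j.doe")
ledger.append("Full text of story two...", generator="gpt-3", reviewer="j.doe")
print(ledger.verify())  # True; altering any stored field makes this False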
In conclusion, while AI-generated news content is undeniably an exciting development, the trust challenge in accepting this automated journalism cannot be disregarded. As we continue to explore AI’s potential in newsrooms, we must ensure that we uphold the values at the heart of journalism: accuracy, fairness, balance, and, most significantly, trust.
This exploration is only the beginning, and media experts, researchers, and technologists must continually debate, revisit, and redefine the rules of AI in journalism.
Sources:
1. Google – AI in Journalism
2. Associated Press – AI News Automation
3. The Guardian – AI-Generated Journalism
4. OpenAI – GPT-3
5. Automated Insights – Wordsmith AI Technology
6. The Conversation – The Trust Challenge in AI Journalism
7. Reuters Digital News Report 2017 – Fighting Fakes: The Role of Trust and AI