Is ChatGPT Undermining Journalistic Integrity?
By Talia Kennedy
In recent years, artificial intelligence (AI) tools like ChatGPT have made waves across various industries. One of the most significant impacts outside of the technology sector has been in the media and journalism sectors, where AI is increasingly being used to generate content, assist with research, and streamline news reporting. As these tools become more prevalent, concerns about their effect on journalistic integrity have emerged. Could ChatGPT and similar AI systems be undermining the values that journalism is built upon, such as accuracy, accountability, and trust?
The opening paragraph of this article was written entirely by ChatGPT, with no editing. As you can see, AI can lay out an argument and summarise the facts of a discussion. However, OpenAI's model has some significant flaws that impact its efficacy in journalism and ultimately undermine the integrity on which journalism is founded. Whilst AI has been used in journalism for several years to automate processes and enhance productivity, it is the rise of generative AI that poses the more significant challenge.
The integrity of journalism is challenged by AI in three core ways: accuracy; transparency and accountability; and intellectual property. Readers trust news outlets to provide factual and well-researched information, a trust built on a history of reliability earned through rigorous fact-checking and evaluation of sources. AI can produce an authoritative-sounding article, but it is far from immune to falsehoods. This stems from two core factors: the source material that large language models (LLMs) are trained on and the predictive model on which AI operates. OpenAI has been challenged by Noyb, a European data protection advocacy group, on the basis that the EU's GDPR requires information held about individuals to be accurate. ChatGPT, however, has been known to produce falsehoods about individual people and cannot say where the data that informs these outputs comes from (Noyb, 2024). The outputs of AI depend on the inputs it is fed: give it disinformation and it will reproduce it (White, 2024). ChatGPT, unlike a human, is unable to assess a source's validity and verify its claims. Simply put, it lacks the editorial oversight that journalists spend their careers developing.
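To make that predictive mechanism concrete, below is a deliberately simplified sketch in Python: a toy bigram model, nothing like OpenAI's actual system, with an invented miniature training text used purely for illustration. It learns only which word tends to follow which, so everything it generates is driven by statistical plausibility rather than by whether a claim is true.

    from collections import Counter, defaultdict

    # Invented miniature "training corpus", for illustration only.
    training_text = (
        "the report said the mayor opened the bridge . "
        "the report said the mayor opened the museum . "
        "the report said the mayor opened the bridge ."
    )

    # Count which word follows each word in the training data.
    follows = defaultdict(Counter)
    words = training_text.split()
    for current, nxt in zip(words, words[1:]):
        follows[current][nxt] += 1

    def predict_next(word):
        # Return the statistically most likely continuation, true or not.
        return follows[word].most_common(1)[0][0]

    # Generate a fluent-sounding string purely from word statistics:
    # no step checks whether the resulting claim is accurate.
    output = ["the"]
    for _ in range(7):
        output.append(predict_next(output[-1]))
    print(" ".join(output))

The sketch never consults a source or verifies a fact; scaled up by billions of parameters, the same principle is why an LLM can sound authoritative while being wrong.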
AI also presents the issue of accountability. Journalists attach their names and reputations to their work and are responsible for the information they provide. In contrast, AI cannot provide a rationale for the particular statistics or viewpoints it includes; crucially, it does not hold an opinion or have the ability to ‘decide’ in any meaningful way. If a news outlet presents incorrect information in an article generated by ChatGPT, who is at fault? ChatGPT's inability to properly defend its work fundamentally undermines journalistic integrity and poses a threat to the credibility of any news outlet that employs it.
Lastly, intellectual property. As a large language model, ChatGPT does not create original work, which leaves it in murky water where copyright is concerned. The New York Times (NYT) sued OpenAI over the use of Times articles to train its GPT large language models (Hope, 2024). The Times’s core allegation is that OpenAI infringed its copyrights through the unlicensed and unauthorised use and reproduction of Times works when training its models. Because ChatGPT uses its source material to predict the most plausible string of words for a natural-sounding sentence, it can reproduce verbatim, sentence-long passages lifted from other articles. ChatGPT taking credit for others' work has been considered by some to be plagiarism, and it raises serious concerns about the transparency of its working process as well as the financial question of crediting original authors appropriately. One of the Times’s key concerns in its case against OpenAI is that AI may reproduce or paraphrase articles that are otherwise behind a paywall, allowing readers to circumvent paying for the service. Moreover, having been trained on the outlet's own reporting, these AI models now compete with it (Grynbaum & Mac, 2023).
However, it is this lack of originality that most clearly limits the threat AI poses to journalism. Madhumita Murgia, AI Editor at the Financial Times (FT), argues that originality is what keeps large language models like ChatGPT from being a viable threat to journalism (Adami, 2023). Whilst ChatGPT can synthesise information and inform reporting, it is still unable to deliver the more developed take on a subject that readers look for. Murgia states she is still ‘really optimistic about the original human voice’, and I agree (Adami, 2023). Until ChatGPT can apply a critical lens to its own work and, in doing so, form original and innovative opinions, it will not be able to satisfy readers the way human writing can. Ultimately, in its current form, AI is only a threat to journalistic integrity if it is used improperly or irresponsibly.
Bibliography
Adami, M., 2023. Is ChatGPT a Threat or an Opportunity for Journalism? 5 AI Experts Weigh In. [Online] Available at: https://reutersinstitute.politics.ox.ac.uk/news/chatgpt-threat-or-opportunity-journalism-five-ai-experts-weigh (Accessed October 2024).
Grynbaum, M. & Mac, R., 2023. The Times Sues OpenAI and Microsoft Over A.I. Use of Copyrighted Work. [Online] Available at: https://www.nytimes.com/2023/12/27/business/media/new-york-times-open-ai-microsoft-lawsuit.html (Accessed 13 October 2024).
Hope, A., 2024. NYT v. OpenAI: The Times’s About-Face. [Online] Available at: https://harvardlawreview.org/blog/2024/04/nyt-v-openai-the-timess-about-face/ (Accessed October 2024).
Noyb, 2024. ChatGPT provides false information about people, and OpenAI can’t correct it. [Online] Available at: https://noyb.eu/en/chatgpt-provides-false-information-about-people-and-openai-cant-correct-it (Accessed October 2024).
White, J., 2024. See How Easily A.I. Chatbots Can Be Taught to Spew Disinformation. [Online] Available at: https://www.nytimes.com/interactive/2024/05/19/technology/biased-ai-chatbots.html?searchResultPosition=20 (Accessed 13 October 2024).