Ask the AI chatbot for a written piece and it is quick to produce an answer. Be it a biology question or an essay on the Gutenberg Bible, the interface seems to know it all, automatically generating writing that, though mediocre, is enough to get past inattentive readers. Since ChatGPT is well known as a tool that students and members of our community use to assist with assignments, we decided to highlight the obscured side of how it works using the following request: “How does China’s ascension as a global power link to sovereignty?”
Repetitiveness and lack of depth
Though extremely quick to answer, ChatGPT fails to address larger questions. When used for schoolwork or minor tasks, it provides answers that are vague yet assertive enough to pass. It can also be argued that OpenAI’s model presents a biased view of topics rather than an impartial one. Another issue is repetition: the tool reuses the same words and concepts multiple times, ultimately weakening the writing and making it read as sloppy.
Ghost sources
Last month, after being contacted by a user who had asked ChatGPT to cite the sources used in a previous response, the renowned newspaper The Guardian reported that the tool had fabricated citations of its articles. Though not always the case, this is how ChatGPT handles most such requests. Despite appearing credible, with a newspaper name and domain in the citation, these links will most likely lead you to a page that says “404 – Page not found”. Another common practice: even when given a specific source for which a citation must be created, the tool supplies a convincing facade of author and date despite having no access to that information. When asked about one of these fabricated articles, the journalist who had supposedly written it confirmed that he had no record of it. This occurrence is not exclusive to that one user; it affects nearly everyone who makes such a request. In our own experiment, the first source is real, but the rest of the bibliography ChatGPT created consists of ghost sources:
- “China’s Belt and Road Initiative.” Council on Foreign Relations, https://www.cfr.org/backgrounder/chinas-massive-belt-and-road-initiative. Accessed 2 June 2023.
- “China’s Expanding Influence in the United Nations.” The Diplomat, https://thediplomat.com/2019/05/chinas-expanding-influence-in-the-united-nations/. Accessed 2 June 2023.
- “China’s Growing Economic Power: Implications for the United States.” U.S.-China Economic and Security Review Commission, https://www.uscc.gov/sites/default/files/Research/China%27s%20Growing%20Economic%20Power%20-%20Implications%20for%20the%20United%20States.pdf. Accessed 2 June 2023.
- “China’s Sovereignty Dilemma.” The Diplomat, https://thediplomat.com/2019/12/chinas-sovereignty-dilemma/. Accessed 2 June 2023.
- “South China Sea: What’s at Stake?” Council on Foreign Relations, https://www.cfr.org/backgrounder/south-china-sea. Accessed 2 June 2023.
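Readers can spot many ghost sources with a quick check of their own: request each cited URL and see whether it resolves. Below is a minimal Python sketch of that check (not part of our original experiment), using only the standard library; the URLs are copied from the bibliography above. Keep in mind that a link resolving is necessary but not sufficient, since a live page may still not match the fabricated title.

```python
# Minimal sketch for spot-checking a ChatGPT bibliography: request each
# cited URL and report its HTTP status. A 404 is a strong hint of a
# ghost source. URLs below are copied from the bibliography above.
import urllib.request
import urllib.error

CITED_URLS = [
    "https://www.cfr.org/backgrounder/chinas-massive-belt-and-road-initiative",
    "https://thediplomat.com/2019/05/chinas-expanding-influence-in-the-united-nations/",
    "https://thediplomat.com/2019/12/chinas-sovereignty-dilemma/",
    "https://www.cfr.org/backgrounder/south-china-sea",
]

def check(url: str) -> str:
    # Some sites block Python's default user agent, so send a browser-like one.
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            return f"{resp.status} OK"
    except urllib.error.HTTPError as e:
        # 404 means the page does not exist: a likely ghost source.
        return f"{e.code} {'ghost source?' if e.code == 404 else e.reason}"
    except urllib.error.URLError as e:
        return f"unreachable ({e.reason})"

for url in CITED_URLS:
    print(f"{check(url):<30} {url}")
```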
The final question: is it worth it?
Countless accounts have been written about generative AI’s tendency to manufacture facts and events. These problems are particularly troubling for trusted news organizations and journalists, whose inclusion lends legitimacy and weight to a persuasively written fantasy. The question for responsible news organizations is simple and urgent: what can this technology do right now, and how can it benefit responsible reporting at a time when the wider information ecosystem is already under pressure from misinformation, polarization, and ill-intentioned actors? In the search for a balanced use of generative artificial intelligence, we must emphasize the importance of human-sourced information and peer-reviewed work.
Featured image generated by the artificial intelligence Midjourney with the prompt “Forest of digital obstacles painted by Salvador Dali”.