ChatGPT, a new artificial intelligence chatbot, is stirring up controversy, not just because of what it is and what it can do, but because of what may be coming.
There have been other AI chatbots, but ChatGPT has been touted as coming close to holding a conversation with a human. It has been programmed to be friendly, with bigoted language supposedly nixed from its vocabulary. For some reason, this has caused its own controversy, but that’s a topic for another day. I want to focus on its use to write articles, student papers, stories and poetry.
What does that mean for schools, journalism or art?
To delve into these questions, Professor of Journalism and Media Studies John V. Pavlik began to do some research. According to an article by the Rutgers School of Communication and Information, Pavlik asked ChatGPT, “What is the meaning of journalism and its function?” The chatbot answered him immediately. Its response, according to Pavlik, was three well-written, coherent paragraphs that he felt he would have a hard time improving on.
Pavlik found ChatGPT wrote in generalities, in complete sentences that seemed authoritative and were presented as common knowledge. However, no sources were cited. Citing sources may be something ChatGPT can eventually be programmed to do; a competing tool, Perplexity.ai, already does. Pavlik’s research identified a multitude of ways ChatGPT could benefit both journalism and media education, but the list of ways it might pose problems, even dangers, was longer still. One positive, his research found, is that ChatGPT can potentially be incredibly useful as a reference or search tool. It can provide background information or ideas, in much the same way students and journalists already use Google.
Comparing ChatGPT to other tools such as Microsoft Excel, Pavlik said, “We use spreadsheets to help us with numbers, so we don’t have to do the math ourselves all the time, and there is nothing wrong with letting the computer do the adding and subtracting.”
At the top of the list of concerns are issues surrounding plagiarism, ethics, bias, and intellectual integrity.
As AI services become more and more mainstream, teachers have already caught students using the chatbot to write papers. Some schools are considering returning to the days of handwritten assignments done in the classroom with little or no access to computers.
AI has caused issues in newsrooms as well. The Washington Post recently published an article reviewing the debacle at tech news website CNET after the site published a series of AI-written stories.
“CNET’s robot-written copy is generally indistinguishable from the human-produced kind, although it’s not exactly snappy or scintillating. It’s, well, robotic: serviceable but plodding, pocked by cliches, lacking humor or sass or anything resembling emotions or idiosyncrasies,” Paul Farhi wrote, explaining that CNET had to issue corrections to a number of the bot-generated articles.
For journalists, a tool like ChatGPT might be useful for gaining background knowledge on a story and prepping for interviews, but the results, according to the International Journalism Network, cannot always be trusted. The sentences don’t always make sense. Just like spell check, AI may get confused by context or syntax. In a glitch dubbed “hallucination,” entire segments of bot-generated stories have been found to be nonsensical, if not flat-out false.
Future developments may curb the glitches and make hallucinations less frequent. Even when that day arrives, I argue journalists and editors will still be needed, and there can be no replacing artists. That kind of work stems from a place artificial intelligence will never have – a heart.