The conversation about the uses of artificial intelligence (AI) systems in business, art and media has taken a new turn since the free version of “ChatGPT” launched in November. The program is a language model that produces eerily human-like responses from text-based prompts input by a user. It can produce poetry, academic papers and even cover letters. That has presented new and accessible ways for everyone to use AI. June Wan, an editor for ZDNET, a business technology news website, called ChatGPT “scary good” at writing product reviews, for example.
The “GPT” in its name stands for Generative Pre-trained Transformer, referencing the “deep learning technology that powers it,” according to the website of OpenAI, the company behind ChatGPT. Some experts describe it as a “breakthrough” in AI systems, citing its ability to “generate replies that are often more nuanced than those created by other AI systems,” according to a January blog on the TrendInTech website.
In fact, ChatGPT can describe its own capabilities with striking fluency. When asked to describe itself, it responded that what makes it remarkable “is its ability to generate text that sounds like it was written by a human.” That is because it has been “fine-tuned specifically for conversational interactions and can generate text with all the quirks, nuances and subtleties of natural language.”
The sophistication of the system presents a long list of new possibilities for a wide range of users. Students can use it as a homework aid. Business owners can use it to write announcement emails or even advertising copy. However, as with any new system, many issues still need to be ironed out. Veterinary aid website Daily Paws compared it to a toddler “that still has plenty to learn” when it comes to pet care.
What can it do?
ChatGPT and other language model AIs work by being fed a large amount of data and using computing techniques that allow them to form coherent sentences, explained Sindhu Sundar of Business Insider. This allows the program to recognize words in context and respond in a way that “mimics speech patterns while dispatching an encyclopedic knowledge,” explained Sundar. That also differentiates it from a search engine like Google or Bing. While search engines browse the internet for web pages that might be relevant to a user’s query, ChatGPT cannot search the internet, said Sabrina Ortiz of ZDNET. Instead, it uses data it has been trained on to generate responses. That includes specific books, articles, and websites.
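Sundar’s description, a system that learns which words tend to follow which from its training data, can be illustrated with a toy bigram model. This is a drastically simplified sketch for illustration only; actual models like GPT use neural networks with billions of parameters, not simple word counts:

```python
from collections import defaultdict

def train_bigrams(text):
    """Count which word tends to follow which in the training text."""
    counts = defaultdict(lambda: defaultdict(int))
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(counts, word):
    """Return the word most often seen after `word` in training."""
    followers = counts.get(word)
    if not followers:
        return None  # the model has never seen this word
    return max(followers, key=followers.get)

# A tiny made-up "training corpus"
corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(predict_next(model, "the"))  # prints "cat" — it followed "the" most often
```

The key point the toy model shares with ChatGPT is that it predicts what is *plausible* given past patterns, not what is *true*, which is why such systems can sound authoritative while being wrong.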
ChatGPT opens the door to several possibilities. While interacting with ChatGPT can “evoke the experience of chatting online” for older millennials who grew up in online chat rooms, wrote Sundar, ChatGPT likely is capable of giving more in-depth answers. Beyond answering simple questions, ChatGPT can compose essays, describe art in great detail, create AI art prompts and have philosophical conversations, wrote Ortiz, and it can perform other complex operations that might take humans years of study.
For instance, ChatGPT can code software. As a result, some “underground hacking communities” have been toying with ways to use ChatGPT to create malware and computer viruses, wrote Danny Palmer of ZDNET.
It can even produce articles, as Samantha Delouya of Business Insider found out. Delouya asked the program to write an article she had already written herself, about a factory reducing production due to growing costs. ChatGPT “spit out 200 words in less than 10 seconds, and the result was alarmingly convincing,” she reported. At first glance, ChatGPT may have even done a better job than Delouya herself. The program “included additional details, like the city where the plant is located and a roughly accurate number of workers that would face layoffs.”
However, Delouya quickly realized how unhuman-like ChatGPT is. Aside from a few writing clichés she admitted she is sometimes guilty of herself, Delouya called the piece “nearly pitch-perfect, except for one glaring issue.” The program had faked quotes by the company’s CEO in the article. While the quotes “sounded convincingly like what a CEO might say when faced with the difficult decision to lay off workers, it was all made up,” she wrote.
While this is a significant breach of journalism ethics, it is only a small part of the problem. ChatGPT’s purpose is to mimic, not to be truthful. “These models are trying to come up with text that is plausible according to their model. Something that seems like the kind of thing that would be written. They’re not necessarily trying to be truthful,” said Vincent Conitzer, a computer science and AI professor at Carnegie Mellon University.
This lapse stems partly from ChatGPT’s knowledge cutoff: its training data ends in 2021, yet the article Delouya asked the program to write was about events in December, well past anything the program could actually know.
So journalists can breathe a sigh of relief knowing their jobs are likely secure, at least for now.
How can I use it?
Despite its quirks and limitations, ChatGPT can still help streamline some operations, especially for small businesses. For enterprises, “chatbots such as ChatGPT have the potential to automate mundane tasks or enhance complex communications, such as creating email sales campaigns, fixing computer code or improving customer support,” wrote Lucas Mearian of Computerworld, a business technology news website. However, “precautions are needed,” he added.
For example, instead of giving ChatGPT free rein, businesses can use it to make adjustments. Management consulting company Gartner predicted in its 2022 report that enterprises might use ChatGPT to “augment or create content, manipulate text in emails to soften language or take a particular tone, and to summarize or simplify content.” Another method is to use it to produce “draft texts” that “meet the length and style desired, which can then be reviewed by the user,” said the report, noting the software could be a starting point for marketing descriptions, letters of recommendation, essays, manuals or instructions, training guides, social media or news posts.
One of the main ways ChatGPT and other language model software can help small businesses is by supplementing customer service. By using the model, companies can “generate responses for their own customer service chatbots, so they can automate many tasks typically done by humans and radically improve response time,” wrote Bernard Marr of Forbes.
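In its simplest form, the automation Marr describes amounts to routing customer queries to appropriate responses. The sketch below illustrates the idea with a plain keyword router and a made-up FAQ table; in the setup Marr describes, the reply would instead come from a language model, so treat every name and answer here as a hypothetical placeholder:

```python
def answer_customer(query, faq):
    """Return the canned answer whose keywords best match the query."""
    query_words = set(query.lower().split())
    best_reply, best_score = None, 0
    for keywords, reply in faq:
        score = len(query_words & keywords)  # count overlapping keywords
        if score > best_score:
            best_reply, best_score = reply, score
    # Fall back to a human when nothing matches
    return best_reply or "Let me connect you with a human agent."

# Hypothetical FAQ entries: (keywords, canned response)
faq = [
    ({"refund", "return"}, "Refunds are processed within 5 business days."),
    ({"shipping", "delivery"}, "Standard shipping takes 3-7 days."),
]
print(answer_customer("When will my delivery arrive?", faq))
# prints "Standard shipping takes 3-7 days."
```

A model-backed chatbot improves on this by handling phrasings the keyword table never anticipated, which is the “radically improve response time” gain Marr points to.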
According to a 2018 Opus Research report, 35% of consumers want to see more companies using chatbots, and 48% of customers reported not caring about whether the response comes from a human or a chatbot. For companies still trying to make the switch to chatbots, ChatGPT can be a good tool to start training chatbots. The ability to use ChatGPT to train chatbots can be a double-edged sword, though, if “your competitors successfully leverage the technology and your company doesn’t,” warned Marr.
ChatGPT also is capable of generating insights on customer preferences based on its conversations, according to the LinkedIn newsletter Chat GPT for B2B Businesses. It can “collect data about customer preferences and behavior, providing valuable insights that can help inform your marketing and product development strategies,” according to the newsletter.
In addition to content creation and customer service, ChatGPT can also help HR professionals sift through CVs and cover letters. Scot Chrisman of Entrepreneur magazine said recruiters could just copy and paste cover letters into ChatGPT and ask it to search for specific words and criteria. “It will comb through text to determine if candidates have relevant experience, possibly avoiding the need to hire outside recruiters and certainly saving time,” he said.
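The screening Chrisman describes, checking cover letters against specific words and criteria, can be reduced to a keyword match, sketched below without any AI at all. The letter text and criteria are invented for illustration; a chatbot adds value over this baseline by recognizing relevant experience even when the exact keywords are absent:

```python
def screen_cover_letter(text, required_keywords):
    """Flag which of the recruiter's criteria appear in a cover letter."""
    found = {kw: kw.lower() in text.lower() for kw in required_keywords}
    return found, all(found.values())

letter = "I have five years of Python experience and led an agile team."
criteria = ["Python", "agile", "machine learning"]
matches, qualifies = screen_cover_letter(letter, criteria)
print(matches)    # {'Python': True, 'agile': True, 'machine learning': False}
print(qualifies)  # False — not every criterion was found
```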
While the possibilities may be tempting, it is crucial to recognize that ChatGPT is in its beta phase. OpenAI “acknowledges the current limitations of the AI, including the potential to occasionally generate incorrect information or biased content,” wrote Marr.
Additionally, there are privacy concerns. ChatGPT is susceptible to cybersecurity attacks and could be used to spread malicious content or viruses, according to Marr. He added that it could also be misused to “manipulate people into divulging personal information using the chatbot, then use that information for fraudulent purposes or for targeted phishing attacks.”
Is this ethical?
There’s a long list of issues with the ethics of making and using such a program. “We’re at the beginning of a broader societal transformation,” said Brian Christian, a computer scientist and author of “The Alignment Problem,” a book about ethical concerns surrounding AI systems.
For starters, as Delouya discovered, ChatGPT is concerned with sounding truthful, not necessarily being truthful. It is “essentially a more powerful, fancier version of the predictive text system on our phones,” explained Brian Chen of The New York Times.
Additionally, ChatGPT is unaware of time and of recent developments in technology or any other field. To illustrate this, Christian uses a programming example: ask ChatGPT to write code, he explained, and it will produce code that looks as if it were written years ago. Because code is constantly updated to address security vulnerabilities, code written with a chatbot could be buggy or insecure, he said.
There also are data privacy concerns when using programs such as ChatGPT, ethicists warn. “The data that you enter into an AI app is potentially not at all entirely private to you and you alone,” Lance Eliot, an expert on AI and machine learning, wrote for Forbes in January. That is because AI makers might use the information to continue to improve their AI services.
Indeed, AI software has been utilizing the work of humans for a while now, raising copyright concerns. That is a significant problem with Dall-E, for example, ChatGPT’s visual arts cousin. With Dall-E, if you ask it to produce a painting of the pyramids in the style of Vincent van Gogh, it will come up with an alarmingly convincing result. That is also evident with ChatGPT when you ask it for poems or song lyrics. I asked ChatGPT to write a song about falafel in the style of Michael Jackson, for example, and the program delivered well.
That leads to an important question: “Who ultimately owns content generated by ChatGPT and other AI platforms?” asked Forbes’ Joe McKendrick. When it comes to intellectual property, the model for ChatGPT “is trained on a corpus of created works and it is still unclear what the legal precedent may be for reuse of this content if it was derived from the intellectual property of others,” noted Bern Elliot, an analyst at Gartner.
If the legality of the content is murky, the ethics of AI software is even murkier. Not only are copyright issues a factor, but so are the multitude of uses that continue to blur the lines between humans and machines. How can a teacher grade an assignment by a student who used ChatGPT? Can a company ethically use art by Dall-E for commercial purposes? To what extent can you utilize ChatGPT and remain ethical? The debates about these issues are very much ongoing, and there is no answer yet on how to use such systems wisely and ethically, as Eliot, Chen, McKendrick, and other experts agree. However, as the software improves and the conversation continues, we may start seeing some answers.