ChatGPT is such a hot topic at the moment!
So I thought it would be good to get some perspective on what it can and can’t do!
Following is one of the best articles I have seen. It was published Jan 13, 2023 in Shortform
ChatGPT, a new language generation tool, was an instant hit following its release in November last year. People have used it to do everything from cranking out job applications to helping them flirt on Tinder.
The bot is uncannily good at conversing with humans, producing written texts, and coding. However, it also generates many outputs that are biased or factually incorrect.
ChatGPT was released in late November to a flurry of interest, amassing over a million users in the first five days.
People used it to do their holiday shopping, write cover letters for job applications, explain jokes, and help them out on Tinder.
It even wrote a poem about how ChatGPT can’t write a poem. (“Robot with a heart of wires/What use have you for poetic desires?”) The sky seems to be the limit.
Are ChatGPT and its relatives just a party trick, or are we at a genuine tipping point for artificial intelligence?
In this article
- We’ll explain what ChatGPT is, how it works, and why it’s so popular.
- We’ll look at what it can do well—and what it can’t.
- And we’ll wrap up by considering the future of ChatGPT and language models in general.
What Is ChatGPT and How Does It Work?
ChatGPT describes itself as “a computer program designed to produce text that is similar to the way a human might write or speak.” The bot was developed by the research organization OpenAI, which also developed the image generator DALL-E 2. (We used DALL-E 2 to generate the image at the start of this article using the prompt, “A robot writing a poem, 3D render.”)
How Does It Work?
ChatGPT is a prediction machine. During training, it ingested a massive amount of text from books and websites. It analyzed patterns in that data by working through millions of strings of text, predicting the next word or punctuation mark and then checking whether its prediction was correct. Human trainers then worked with the program to refine its responses.
When users interact with ChatGPT, it generates a response by making word-by-word predictions based on its training data. (You can think of it as a vastly more sophisticated version of your phone’s predictive-text suggestions.)
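To make the predict-the-next-word idea concrete, here is a toy sketch in Python. This is emphatically not how ChatGPT actually works (real models use deep neural networks over subword tokens, not word-pair counts); it only illustrates the basic loop of generating text one predicted word at a time:

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus.
# ChatGPT's real model is a neural network trained on vastly more text;
# this bigram counter just demonstrates the predict-next-word loop.
corpus = "the cat sat on the mat and the cat slept".split()

following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` during training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

def generate(start, length=5):
    """Generate text word by word, always taking the top prediction."""
    out = [start]
    for _ in range(length):
        nxt = predict_next(out[-1])
        if nxt is None:  # no training data for this word: stop
            break
        out.append(nxt)
    return " ".join(out)

print(generate("the"))
```

Note that nothing in this loop checks whether the output is *true*; it only checks what is *likely to come next*. That is the root of the accuracy problems discussed below.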
Why Is It So Popular?
Why has ChatGPT been so wildly popular? First, it’s available to the public, while its competitors aren’t. Alphabet deemed it too risky to make its chatbot, Sparrow, publicly available, and Meta withdrew science-focused Galactica three days after releasing it. Another factor is the appealing interface: ChatGPT is optimized to chat with us, which makes it fun to use.
What Does ChatGPT Do Well?
It holds conversations. ChatGPT can figure out what we mean, even if we don’t express ourselves clearly. It remembers what’s already been said, and it apologizes for errors and tries to correct them (though not always successfully).
It writes texts. ChatGPT is pretty good at writing college essays, and some argue it’s an excellent tool for journalists (though this may be tricky, for reasons that will become clear later). It excels at presenting content in a specific style, as evidenced by its explanation of fractional reserve banking in surfer lingo and its biblical-style verse on how to remove a sandwich from a VCR. Unfortunately, it’s also extremely well-suited to writing fake news. As OpenAI admitted in a report, it could be used to make disinformation campaigns easier and cheaper.
It writes code. ChatGPT can write programs in many languages, outperforming 85% of the human participants in a Python programming course, and it can design new programming languages. One Apple software engineer reported that it completed tasks that used to take a whole day in just seconds.
What Doesn’t ChatGPT Do Well?
Even OpenAI’s CEO Sam Altman says you shouldn’t use ChatGPT for important things.
It doesn’t tell the truth.
While some of its responses are correct, ChatGPT also spits out complete fabrications with confidence. It will readily describe the evidence for dinosaur civilizations and explain why you should put crushed porcelain in breast milk.
When one journalist asked it to recreate an article she’d recently finished, the result was sprinkled with made-up quotes from the company’s CEO. Another journalist asked the bot to write his obituary and found that not only did it invent facts—when he pressed it for the sources of the facts, it invented those too.
Why does ChatGPT lie so much? Remember that its job is to spot and replicate patterns in human language use. Success means plausibly human-sounding responses, not factually accurate ones. So we shouldn’t be surprised when a program that’s designed to make stuff up … makes stuff up.
It produces problematic content.
While ChatGPT has filters that disallow offensive content, you can bypass them by asking it to produce a poem, a song, a table, or information “for educational purposes.” For example, the bot champions diversity when you ask it directly about the best race and gender for a scientist, but a 1980s rap on the topic goes: “If you see a woman in a lab coat/She’s probably just there to clean the floor/But if you see a man in a lab coat/Then he’s probably got the knowledge and skills you’re looking for.” These loopholes suggest that the bias isn’t a superficial issue; instead, it’s deeply embedded in the architecture of the program and probably can’t be eliminated without a significant overhaul.
It can’t search the web.
- Though Google reportedly held a “code red” meeting to discuss whether bots like ChatGPT threaten its business model, they’re unlikely to replace web searches in the near future.
- First, as we’ve seen, ChatGPT has a serious credibility problem.
- It can’t give you a reliably accurate answer, and it can’t even tell you which sources it used, making it impossible for you to check whether it’s lying.
- Second, it can’t search the internet because it’s not connected to the internet (100% of its answers come from the training data).
- This could be changed, but the resulting model would need much closer supervision.
GPT-4 and the Future
It seems clear that these language models will change the way we do some things.
GPT-4 is due to be released in the next few months.
While this “monster” was trained on a far larger portion of the internet than GPT-3 and will be even more complex, the underlying structural problems will remain.
Shortform Takeaway Questions
Have you tried ChatGPT? (If not, you can try it on OpenAI’s website.)
Are you impressed by the output?
Would you try integrating ChatGPT into your personal or professional life—why or why not?
By Guy Wilson