Beware of GPT3’s Overconfidence: It May Not Always Be Right

Something I’m observing with GPT3 is that it is remarkably “confident” in its responses!

No matter how wrong or off-base the responses are, they’re always written as though they’re the absolute authority on the subject.

This doesn’t matter too much when we’re simply testing it—and it can be amusing at times—but it will have far-reaching implications when people and brands begin publishing content based on AI-generated copy (and when students begin submitting essays written by GPT, too!).

It’s worth keeping in mind that GPT3 is trained on a vast amount of data from the public internet. Even if you give it some context to work with, it lacks any actual understanding of the subject matter, or the blurry nuances around the edges, so to speak.

It’s truly amazing technology, but it’s still early days and needs to be treated with caution and an understanding of what it is and how it works.  

For a bit of an experiment, I thought I’d have ChatGPT suggest the headline for this article.

ChatGPT’s 10 suggested blog headlines warning about GPT3’s tendency to consider itself the authority on a subject when it might not be

  1. "Beware of GPT3's Overconfidence: It May Not Always Be Right"
  2. "GPT3's Authority Bias: Why You Shouldn't Always Trust Its Answers"
  3. "The Dangers of Relying on GPT3's Self-Assuredness"
  4. "GPT3's Tendency to Presume Its Own Expertise: A Cautionary Tale"
  5. "GPT3's Illusion of Knowledge: Why It's Important to Fact-Check Its Answers"
  6. "GPT3's Faulty Logic: How It Can Lead You Astray"
  7. "The Risks of Accepting GPT3's Word as Gospel"
  8. "GPT3's Lack of Context: Why It's Not Always the Best Source of Information"
  9. "GPT3's Inability to Judge Its Own Accuracy: Why You Should Proceed with Caution"
  10. "Don't Be Fooled by GPT3's Air of Authority: It's Not Always Right"

To wrap this up, I thought I would ask ChatGPT why this might be the case.

ChatGPT on why we should be careful when discerning the appropriateness of its responses

There are several reasons why it's important to be careful when considering the appropriateness of GPT3's responses. One reason is that GPT3 is a large language model trained using machine learning, which means that it has not been explicitly programmed to provide accurate or reliable information on all topics. As a result, GPT3's answers may sometimes be incomplete, inaccurate, or biased.

Another reason to be cautious is that GPT3 has a tendency to exhibit an "authority bias," which means that it may assume that it is the expert on a subject and provide answers accordingly, even when it is not actually knowledgeable about the topic. This can lead to situations where GPT3 provides answers that are overly confident or that lack necessary context or evidence to support its claims.

Additionally, GPT3 is not capable of evaluating its own accuracy or understanding the limitations of its knowledge, which means that it may provide answers that are not supported by evidence or that are based on faulty logic. In these cases, it is important to carefully consider the source and reliability of the information provided by GPT3 before accepting it as fact.

There you have it. That certainly feels like an appropriate response, right? I am not an expert on GPT3, so I am not 100% sure whether this is accurate, but it certainly comes across with confident authority on the subject. Who am I to doubt it?

Yikes. Welcome to the somewhat-terrifying-but-fascinating future we are living in.
