What Generative AI is Not

Written by Omelia Tennant and Jill Ballard
August 7, 2023 • 2 minute read


Given the many definitions describing what Generative AI (GenAI) is, it’s also essential to consider what it’s not. A holistic understanding that recognizes its limitations and myths can guide more informed decision-making in all contexts.

What It’s Not

Although GenAI offers many advantages, it also has significant limitations. For large language models (LLMs) in particular, GenAI tools are

  • Not a replacement for human intelligence—human prompting and existing information are required to generate content.
  • Not an arbiter of quality.
    • Generated content is not always accurate and can include fabricated information, referred to as hallucinations, which are presented as factual or authoritative.
    • Generated content can be limited in scope and may not be current. For example, the dataset used by the current (mid-2023) iteration of ChatGPT only accesses information through September 2021.
  • Not self-aware or able to self-reflect in its content generation.
  • Not reliable for fact-checking, as these tools draw on texts without proper referencing.
  • Not unbiased. In fact, GenAI tools often amplify and exacerbate existing biases in generated content.
  • Not robust. Responses can be overly condensed or incomplete.
  • Not able to reproduce generated content consistently.
  • Not protective of privacy. GenAI tools may access unprotected personal or sensitive information, which can put individuals' privacy at risk.
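The inconsistency noted above stems from how LLMs typically choose each next token: they sample from a probability distribution rather than always picking the single most likely word. A minimal Python sketch (using a toy vocabulary and made-up scores, not any real model) illustrates how the same input can produce different outputs across runs:

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Convert raw scores into a probability distribution.
    # Higher temperature flattens the distribution, increasing variability.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_token(tokens, logits, temperature, rng):
    # Draw one token according to the temperature-scaled probabilities.
    probs = softmax(logits, temperature)
    return rng.choices(tokens, weights=probs, k=1)[0]

# Toy stand-ins for a model's next-token scores (purely illustrative).
tokens = ["cat", "dog", "fish"]
logits = [2.0, 1.5, 0.5]

# Identical input, different random states: the sampled sequences can
# differ, which is the source of run-to-run inconsistency.
run_a = [sample_token(tokens, logits, 0.8, random.Random(1)) for _ in range(5)]
run_b = [sample_token(tokens, logits, 0.8, random.Random(2)) for _ in range(5)]
```

With temperature set to 0 (or greedy decoding), the highest-scoring token always wins and output becomes repeatable; most deployed chat tools sample at a nonzero temperature, trading consistency for variety.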

We are all GenAI users in some capacity. Due diligence in understanding the technology’s limitations and risks is recommended, and increasingly so as GenAI integration expands into a wider range of applications.