AI hallucinations

Don't trust AIs: they invent things

However easy Artificial Intelligence tools such as OpenAI's make it to create and revise content for websites and other uses, they simply cannot be trusted.

ChatGPT and similar tools belong to a class of AI systems called Large Language Models (LLMs). They work by predicting the next "word" (token) in a sequence, and they do that remarkably well. But they have no understanding of what they "write". None at all.
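To make the idea concrete, here is a toy sketch of next-word prediction: a bigram model built from a tiny made-up corpus. Real LLMs use neural networks trained on billions of tokens, but the principle is the same, and so is the limitation: the model picks a statistically likely next token with no notion of whether the resulting sentence is true.

```python
from collections import Counter, defaultdict

# A tiny made-up corpus (illustrative only).
corpus = ("the chicken lays an egg . "
          "the cow gives milk . "
          "the chicken is a bird").split()

# Count which word follows which.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` in the corpus."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))      # -> 'chicken' (most frequent follower)
print(predict_next("chicken"))  # the model has never "seen" a chicken;
                                # it only counts which words co-occur
```

The model produces plausible-looking continuations purely from word statistics. Ask it about something outside its data, or something that doesn't exist, and it will still happily produce an answer.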

For instance, asked "How can I recognize a chicken egg from a cow egg?", ChatGPT answered (as of April 2023):

One way to recognize a chicken egg from a cow egg is by their size and color. Chicken eggs are typically smaller and come in shades of white, brown, or light green, while cow eggs (also known as "duck eggs") are larger and have a thicker shell, usually in shades of white or off-white. Additionally, the yolk of a cow egg is typically larger and richer in flavor than a chicken egg yolk.

Another way to tell the difference is by cracking the egg open. A chicken egg will have a clear white and a yellow yolk, while a cow egg will have a slightly blueish white and a larger, richer yolk that is more orange in color.

It's worth noting that cow eggs are less common in most Western countries and are typically only available at specialty stores or from local farmers. If you're unsure about the type of egg you have, you can always ask the person who sold or gave it to you for more information.

That's funny in a way, but not when that sort of nonsense appears on your website, or on your client's website.

AIs invent things: don't trust them blindly

Always, absolutely always, carefully read and double-check any content created through an AI provider's API.

Note that, in our experience, this kind of nonsensical invention mostly happens when generating new content from scratch.

Translating from one language to another, or revising existing content (fixing spelling, shortening, summarizing, adding keywords, ...) is much more reliable, because the transformation starts from content you provide (which therefore presumably already makes some sense!). A sketch of such a revision call follows.
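Here is a minimal sketch of that safer, transformation-style usage, assuming the openai Python package as it existed around April 2023 (pre-1.0 API); the model name, prompt, and input text are illustrative, not a recommendation.

```python
import openai

openai.api_key = "sk-..."  # placeholder: your own API key

# Content supplied by you, so the model only has to transform it,
# not invent facts from scratch.
source_text = "Chiken eggs come in white and brown shades."

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system",
         "content": "Fix spelling and grammar. Do not add new facts."},
        {"role": "user", "content": source_text},
    ],
    temperature=0,  # keep the output close to the input for revision tasks
)

revised = response["choices"][0]["message"]["content"]
print(revised)  # still deserves a human read-through before publication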