Nightmares are Dreams too

While hallucinating AI sounds like something straight out of a novel by William Gibson, the concept is real, and almost everyone who has played with AI-based chatbots has encountered the flaw for themselves. AI “hallucinations” occur when AI systems create something that looks very convincing but has no basis in the real world, or that arises in response to bad data inputs.
This could show up as a picture of a person with too few or too many arms and legs, a document with made-up references, or just straight nonsense.
Psychedel-ai
AI developers borrowed the word ‘hallucinations’ from human psychology, and that was no accident. Human hallucinations are perceptions of something not actually present in the environment. And the mechanism is very close to dreaming.
While AI hallucination can result in some entertaining output, chatbots hallucinating convincing fakes can lead to anything from misunderstandings to misinformation. AI hallucinating medical solutions could lead to even less desirable outcomes. So it’s a real live issue. But notably, artificial intelligence hallucinating things into existence, along with the fundamentals of the way it responds – by statistically ‘guessing’ word and sentence structures or visual expressions based on its training data – may be revealing some fundamental insights about the way these models work.
Generative AI is not rational
Over the decades, AI’s depiction in pop culture, such as Skynet in The Terminator, has anchored the notion of AI being rational. The story often follows a similar pattern: humans program the AI to do good, but buried somewhere are conflicting instructions, which it follows to the absolute letter. Chaos and destruction ensue.
But generative artificial intelligence is actually a lot more complicated and less predictable than it has typically been depicted. Some of the most powerful approaches to artificial intelligence are deliberately modelled on the architecture of the human brain.
Large language models built with deep learning have layers of interconnected networks, allowing the model to provide coherent answers to complicated questions. That process creates plenty of room for both mistakes and creativity. So, for example, because ChatGPT and similar large language systems build sentences based on training in the statistical relationships between words, the longer the piece you ask them to write, the greater the chance it spirals off in some really odd directions.
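To make that statistical guessing concrete, here’s a deliberately tiny sketch of the principle (a toy word-pair model, nowhere near how ChatGPT is actually built): each next word is sampled from frequencies learned from the text, and every extra word is another roll of the dice.

```python
import random
from collections import defaultdict

# Toy 'training': record which word tends to follow which (a bigram model).
# Real LLMs learn vastly richer relationships, but the principle is similar:
# the next word is *sampled* from a learned distribution, not reasoned out.
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug".split()

following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start: str, length: int = 10) -> str:
    word, output = start, [start]
    for _ in range(length):
        candidates = following.get(word)
        if not candidates:                 # nothing ever followed this word
            break
        word = random.choice(candidates)   # statistical guess, weighted by frequency
        output.append(word)
    return " ".join(output)

print(generate("the"))
# e.g. "the cat sat on the rug the dog sat on the mat" -- plausible-looking,
# but every extra word is another guess, which is why long outputs drift.
```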
AI leader Yann LeCun likens GPT-4 to “at best approximating the functions of Wernicke and Broca’s areas of the brain”, two regions that are well established as processing speech. And in that sense, through its guessing ability rather than its reasoning ability, it could be seen as a sort of almost intuitive ‘System One’ simulation of a human.
Of course, so much depends on the input. We often ask ChatGPT a really straightforward question, along the lines of “In 20 words or less, describe an image representing this: [input long text such as a blog summary]”. We might use this, for example, to feed into Midjourney to create a ‘mood board’ for a blog.
But what happens if there is no [input]? You would expect it to respond with ‘Sorry, no data’. But it doesn’t. It hallucinates an answer… indeed, a whole variety of answers!
It’s doing something. Is it dreaming…?
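For anyone who wants to see this for themselves, a sketch along these lines is all it takes, assuming the current OpenAI Python client and an illustrative choice of model; the prompt is the one above, with the [input] simply left off.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# The usual prompt, but with nothing appended where the [input] should be.
PROMPT = "In 20 words or less, describe an image representing this: "

# Ask the same empty question several times and compare the 'answers'.
for run in range(5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # illustrative model choice, not a recommendation
        messages=[{"role": "user", "content": PROMPT}],
    )
    print(f"Run {run + 1}: {response.choices[0].message.content}")
```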

Similarly, we sometimes use image recognition AIs for tagging concepts in photos (e.g. dogs, cats, landscapes and so on). In the example below, we uploaded a blank white square. Again, the expectation is that it would return ‘no data’. Instead, it provided the following list of tags.

What’s happening (by analogy) is that this image-tagging AI is staring at a blank piece of paper and trying to obey its instruction to make some kind of sense of it. Its ‘goal’ is to return tags, so that is what it has to try to do. However, the numbers are supposed to be the probabilities that each tag is accurate. Is there really a 98% probability that a blank white square is ‘far-out’, or a 91% probability that it is ‘crazy’…?
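You can reproduce the same behaviour with any off-the-shelf classifier, not just the tagging service we used. The sketch below is a stand-in under stated assumptions (a single-label ImageNet model from torchvision, so its scores are softmax probabilities rather than per-tag confidences), but it makes the same point: given a featureless white square and no ‘no data’ option, the model still ranks its best guesses.

```python
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

# A pretrained classifier plus its matching preprocessing and label list.
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()
labels = weights.meta["categories"]

# A completely blank white square: there is genuinely nothing to see here.
blank = Image.new("RGB", (224, 224), "white")

with torch.no_grad():
    probs = model(preprocess(blank).unsqueeze(0)).softmax(dim=1)[0]

# The model has no way to say 'no data', so it ranks its best guesses anyway.
values, indices = probs.topk(5)
for p, idx in zip(values.tolist(), indices.tolist()):
    print(f"{labels[idx]}: {p:.1%}")
```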
Nightmares are Dreams too…
This ‘dreaming’ seems perfectly harmless at first sight, albeit slightly disconcerting. But sometimes the results are genuinely nightmarish – especially when it comes to generative visual AI, which creates a more visceral response in us than mere text. So let’s look at some examples of how this has manifested in response to real live prompts…

In sum…
You could well argue that in none of these cases is it actually wrong in its dream-state depiction of the prompts – just deeply unsettling. After all, our nightmares are dreams too.
As the models improve, the likelihood is that these issues will get fixed in the end, not least because finding ways to reduce these AIs’ hallucinations is critical to the quality assurance of AI-based services, and to their credibility. The AI world’s dreams and nightmares may be just a small bump in the long road of development, but it’s certainly a timely reminder of the importance of maintaining vigilance, checking the outputs, and ensuring our technology is actually doing what we need it to.
Our next experiment will be to try to establish the ‘hallucination rate’. For example, freaky bunny ears emerged at around iteration 40 of the prompt. A T. rex popped up around iteration 100. But it’s a bit like rolling a die with 100 sides: you may get a given number on the first roll, or not until the 200th! So we’re going to test a few prompts at 1,000-2,000 iterations each and see if we can quantify this, and report back.
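That die-rolling intuition is easy to sanity-check before burning any API credits. The toy simulation below assumes an artefact that appears on roughly 1 in 100 iterations (a made-up rate, purely for illustration) and shows how much the first appearance, and the estimated rate, can swing between runs, which is why we’re planning 1,000-2,000 iterations per prompt.

```python
import random

ASSUMED_RATE = 0.01   # hypothetical: artefact appears on ~1 in 100 iterations
ITERATIONS = 1000     # planned sample size per prompt

def simulate(iterations: int, rate: float):
    """Return (iteration of first appearance, estimated rate) for one run."""
    hits = [i + 1 for i in range(iterations) if random.random() < rate]
    first = hits[0] if hits else None
    return first, len(hits) / iterations

random.seed(42)  # reproducible illustration
for run in range(5):
    first, estimate = simulate(ITERATIONS, ASSUMED_RATE)
    print(f"Run {run + 1}: first appearance at iteration {first}, "
          f"estimated rate {estimate:.1%}")
# Like the 100-sided die: sometimes the artefact shows up almost immediately,
# sometimes not for hundreds of iterations -- hence the need for large samples.
```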
More as it happens…
* * * * * * * * * * * *
We’ll be writing more on this and other AI-related subjects shortly; meanwhile, please do get in touch if you found this interesting…
Get in touch today to learn more about what Signoi’s AI-driven analysis software can do for you.
Please contact us at hello@signoi.com