AI Hallucinations and How to Avoid Them

AI hallucinations are one of the most frustrating and time-consuming parts of building AI applications. Not only is the AI wrong, but it's convincing, which makes things much trickier. Here are some tips on how to reliably avoid hallucinations.

December 15th, 2023 · Max Brodeur-Urbas, Co-Founder/CEO

What is an AI Hallucination?

This might be obvious, but a hallucination is the AI (an LLM, specifically) making s*$% up. It is responding to your prompts with nonsense and doing so with confidence. That convincingness is the real source of the issue. If there were an easy way to spot hallucinations, you could solve the problem by catching and handling errors like any other software system. The only effective way to deal with hallucinations is prevention.

Why Does AI Hallucinate?

I like to imagine AI (LLMs) as a rock climber and you (the engineer) as its belayer. You're responsible for setting the holds and mapping out a route based on the model's skill level, guiding it verbally through the tricky sections, and giving it a cheeky pull up the wall when you expect it to struggle. Hallucinations are the result of expecting too much from the AI at a given move in the climb. It will do anything to please you, even if that means cheating a little bit.

Hallucinations = Skill Issue

Hallucinations are caused by one of two skill issues. Either the model isn't capable enough to complete the task, or you didn't guide it effectively. Failing to guide it can take many shapes, but it's generally about context, since that's all you really have control over. It's possible you overloaded it with context, didn't give it enough, or didn't give it the right context, among other things. People are quick to blame the model for hallucinating but rarely evaluate their own approach critically enough.

Here are 4 techniques I use that help ensure the AI never feels the need to cheat and make things up.

Reduce the Problem Space

Wherever possible, simplify the problem being given to the AI. Put those rock climbing holds as close together as possible so that it's almost impossible to fail. Breaking one difficult move up into 4 smaller moves makes things much more reliable. This may mean making more AI calls and the system being slower overall, but for most business use cases, being reliable and correct is all that matters.

If you have a task where you'd like the AI to extract 3 pieces of data from a source and generate an analysis, break it up into 4 much smaller steps. Extract each piece of data independently in parallel, concatenate the results, and pass that into the final analysis step. This is one of many ways to simplify a problem like this, but as long as you're breaking things up into simpler steps, you're heading in the right direction.
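Here's a minimal sketch of that decomposition in Python. The `call_llm` helper is a placeholder for whatever model API you use, and the field names are made up for illustration:

```python
# A minimal sketch of the decomposition above. `call_llm` is a placeholder for
# whatever model API you use; the field names are made up.
from concurrent.futures import ThreadPoolExecutor

def call_llm(prompt: str) -> str:
    """Placeholder for your model call (OpenAI, Anthropic, a local model, etc.)."""
    raise NotImplementedError

def extract_field(source: str, field: str) -> str:
    # One small, focused prompt per field instead of one big "do everything" prompt.
    return call_llm(f"From the text below, extract only the {field}.\n\n{source}")

def extract_and_analyze(source: str, fields: list[str]) -> str:
    # Steps 1-3: extract each piece of data independently, in parallel.
    with ThreadPoolExecutor() as pool:
        values = list(pool.map(lambda f: extract_field(source, f), fields))

    # Step 4: concatenate the results and pass them into the final analysis step.
    facts = "\n".join(f"{name}: {value}" for name, value in zip(fields, values))
    return call_llm(f"Write a short analysis using only these facts:\n\n{facts}")
```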

If you simplify your tasks enough, you'll find they've become so trivial that much less capable models are now viable solutions. In the example above, you may need 3 extra calls, but if the model you're now able to use is 20x less expensive, you're reducing costs while increasing reliability!

Context-Saturated Prompting

It is your job to ensure there is no room for confusion from the model's perspective. In our climbing analogy, this is you yelling hyper-specific instructions up the wall while sitting back a bit in your harness to pull the model through the move. When I say provide the correct context, I don't mean simply giving it a bit of background info. I mean absolutely saturating your prompt with the most relevant information possible: providing relevant examples, explaining common pitfalls, clarifying key terms, dynamically injecting relevant chunks, spelling out failure scenarios, etc.

This part of the process is where the real engineering comes in. This is the 'prompt' engineering people say they're doing, but I think it would be better labeled 'context' engineering. It's essential not to cut corners here, because insufficient context is the perfect opportunity for the AI to fill in the blanks with perfectly plausible-sounding hallucinations.
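Here's a rough sketch of what a saturated prompt builder might look like. The retrieval helper, key-term definitions, example answer, and failure instruction are all invented for illustration, not a fixed recipe:

```python
# A rough sketch of a "saturated" prompt builder. The retrieval function,
# key-term definitions, example, and failure instruction are invented for
# illustration.
def build_prompt(task: str, question: str, retrieve_chunks) -> str:
    chunks = retrieve_chunks(question, k=3)  # dynamically inject the most relevant chunks
    return "\n\n".join([
        f"Task: {task}",
        # Clarify key terms so the model can't guess at definitions.
        "Key terms:\n- 'churned account': no login in the last 90 days",
        # Show what a good answer looks like.
        "Example:\nQ: How many churned accounts were there in Q3?\nA: 42 (from the Q3 retention table)",
        # Spell out common pitfalls and the failure scenario explicitly.
        "Common pitfalls:\n- Do not infer values that are not present in the context.",
        "If the answer is not in the context, reply exactly: NOT FOUND",
        # Inject only the most relevant source material, not the whole document.
        "Context:\n" + "\n---\n".join(chunks),
        f"Question: {question}",
    ])
```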

Add Validation Steps

This part is difficult since hallucinations are inherently convincing, but basic sanity checks can go a long way. What they look like is extremely dependent on the task at hand, but there's almost always some imaginative check that gives you non-zero confidence in the answer.

  • If it's data extraction, validate the data and perform a keyword search on the source to confirm it was actually present.
  • If it's content generation, use a cheaper model to sanity check the structure.
  • If it's quoting sources, scrape the source and confirm the quote actually appears there.
  • If you're expecting JSON responses, parse the JSON and confirm it matches your schema.

These sorts of steps near the end of your process give you the opportunity to trigger whatever fail-safe you've built. Whatever your backup plan is, it's better to be warned than to have a hallucination quietly slip by you.
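As an illustration, here's a minimal sketch of two of those checks: the keyword search for extracted data and the JSON confirmation. The required keys and the fail-safe hook are hypothetical:

```python
# A minimal sketch of two of the validation checks above. The required keys
# and the fallback handler are hypothetical; swap in whatever your pipeline expects.
import json

def extraction_grounded(extracted: dict[str, str], source: str) -> bool:
    """Keyword-search the source: every extracted value should literally appear in it."""
    return all(value.lower() in source.lower() for value in extracted.values())

def valid_json_response(raw: str, required_keys: set[str]) -> bool:
    """Confirm the response parses as JSON and contains the fields you expect."""
    try:
        data = json.loads(raw)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and required_keys.issubset(data.keys())

# Usage: flag suspicious responses for your fail-safe instead of passing them along.
# if not extraction_grounded(result, source_text):
#     handle_possible_hallucination(result)  # hypothetical backup plan
```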

Use as Little AI as Possible

Your ultimate goal is to get the climber to the top of the wall. If you can let them take a break and just pull them past certain sections, take that opportunity. The less AI you use, the fewer chances there are for hallucinations and the cheaper things get! Oftentimes people try to use AI for absolutely every part of their process but don't consider that a simple Python script or API call could have done most of the job. The less AI you use, the cheaper your solution gets and the more budget you have left for the sections that genuinely can't be solved without AI.
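As a toy example, something as boring as a regex can handle the common case, with the model only as a fallback. This is just a sketch, and the fallback function is a placeholder:

```python
# A toy example: pull email addresses with a regex and only fall back to a
# model call for the messy cases. The fallback function is a placeholder.
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def ask_model_for_emails(text: str) -> list[str]:
    """Placeholder for an LLM call; only reached when the regex finds nothing."""
    raise NotImplementedError

def extract_emails(text: str) -> list[str]:
    matches = EMAIL_RE.findall(text)
    if matches:
        return matches  # plain Python did the job: free, fast, and it can't hallucinate
    return ask_model_for_emails(text)
```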

Within AgentHub, our typical automations are ~10% AI and ~90% infrastructure. That infrastructure is loading data from third parties, taking user input, and formatting data in preparation for the expensive AI steps. We specialize in helping you define these rigid workflows (climbing routes) for your AI to reliably follow. If you have a task you've been hoping to hand over to AI, we can help you automate it.

The only perfect way to avoid hallucinations is to avoid using AI altogether. The best contraception is abstinence, as they say (I don't). If you are using AI, however, practice safe development, and hopefully these tips help.
