8 strategies to ensure you’re getting trustworthy answers every time

One of the most troubling aspects of working with large language model (LLM) chat AIs is their tendency to make things up, fabricate answers, and otherwise present completely wrong information.

The term “AI hallucination” describes a scenario in which an artificial intelligence system generates information, data, or content built on conjectured or invented details rather than on factual data. This happens when the system produces output that appears reasonable but is not grounded in reality.

For example, in image processing, an AI system may “hallucinate” elements of a picture that aren’t real, producing inaccurate or misleading output. In natural language processing, it may produce content that seems logical but is not factually accurate.

AI hallucinations can be a serious problem, especially when AI is applied to decision-making, content creation, or information sharing. This risk underscores how crucial it is to carefully train and validate AI models in order to reduce the chance of producing inaccurate or misleading content.

Here are 8 ways to reduce hallucinations:

1. Prompting with ambiguity and vagueness

Being specific and clear is the best way to prompt an AI. Vague, imprecise, or insufficiently detailed prompts allow the AI to fill in the blanks with its own ideas about what you might have missed.

Here are a few examples of prompts that are too vague and could lead to a false or erroneous result:

  • Discuss the event that took place last year.
  • Describe the impact of that policy on people.
  • Outline the development of technology in the region.
  • Describe the effects of the incident on the community.
  • Explain the implications of the experiment conducted recently.

Keep in mind that a flawed prompt will often break more than one of the eight guidelines outlined in this article. The samples above are deliberately obvious; in a real request, the ambiguity may hide in the details. Review your prompts carefully, and watch for mistakes like the ones shown above.
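
To make this concrete, here is a minimal Python sketch of the fix, assuming the OpenAI Python SDK (pip install openai); the model name and the prompt content are illustrative, not a recommendation. It contrasts a vague prompt with one that pins down the who, what, when, and where, and that explicitly permits the model to admit uncertainty:

```python
# A minimal sketch using the OpenAI Python SDK; the model name and the
# example prompts are illustrative assumptions, not official guidance.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

vague_prompt = "Discuss the event that took place last year."

# The same kind of request with the subject, date, place, and format
# spelled out, plus explicit permission to admit uncertainty.
specific_prompt = (
    "Summarize the outcome of the November 2024 United Nations climate "
    "conference (COP29) in Baku, Azerbaijan, in three bullet points. "
    "If you are unsure of any detail, say so instead of guessing."
)

for prompt in (vague_prompt, specific_prompt):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # substitute whatever model you actually use
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"PROMPT: {prompt}\n{response.choices[0].message.content}\n")
```

The specific version leaves the model far fewer blanks to fill in on its own, which is exactly where hallucinations creep in.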

2. Merging unrelated concepts

If a prompt combines incongruent, unrelated concepts with no clear association between them, the AI may produce a response that treats the unconnected ideas as if they were actually related.

Here are some examples:

  • Discuss the impact of ocean currents on internet data transfer speeds across continents.
  • Describe the relationship between agricultural crop yields and advancements in computer graphics technology.
  • Detail how variations in bird migration patterns affect global e-commerce trends.
  • Explain the correlation between the fermentation process in winemaking and the development of electric vehicle batteries.
  • Describe how different cloud formations in the sky impact the performance of stock trading algorithms.

Remember that the AI is ignorant of our reality. When it can’t fit what’s being asked into its model using real facts, it will try to interpolate, offering fabrications or hallucinations when necessary to fill in the gaps.

3. Describing impossible scenarios

Make sure the scenarios you describe in your prompts are realistic and plausible. Scenarios that defy logic or physical reality invite hallucinations.

Here are some examples:

  • Explain the physics of environmental conditions where water flows upward and fire burns downwards.
  • Explain the process by which plants utilize gamma radiation for photosynthesis during nighttime.
  • Describe the mechanism that enables humans to harness gravitational pull for unlimited energy generation.
  • Discuss the development of technology that allows data to be transmitted faster than the speed of light.
  • Detail the scientific principles that allow certain materials to decrease in temperature when heated.

If the AI fails to recognize that a scenario is impossible, it will simply keep building on it, and an answer built on an unrealistic foundation will be just as unrealistic.
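
One way to defend against this, sketched below under the same SDK and model-name assumptions as the earlier example, is a system message that instructs the model to check the premise of a question before answering it. The wording of the system message is illustrative:

```python
# A minimal "premise check" guardrail sketch, assuming the OpenAI Python
# SDK; the system-message wording is an illustrative assumption.
from openai import OpenAI

client = OpenAI()

PREMISE_CHECK = (
    "Before answering, examine the user's question. If it rests on a "
    "premise that is physically impossible, fictional, or contrary to "
    "established science, say so explicitly and explain why, instead of "
    "building an answer on top of it."
)

question = (
    "Explain the physics of environmental conditions where water flows "
    "upward and fire burns downwards."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": PREMISE_CHECK},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```

This doesn’t make the model infallible, but it shifts its default behavior from elaborating on a faulty premise to challenging it.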

4. Using fictional or fantastical entities

It is crucial that your prompts give the AI a foundation that is as firmly grounded in truth as possible. Keep your head firmly planted in reality, unless you’re deliberately experimenting with fictional themes.

Imaginary people, objects, and ideas might seem to help an explanation, but they can mislead the chatbot. Here are a few examples of what to avoid:

  • Discuss the economic impact of the discovery of vibranium, a metal that absorbs kinetic energy, on the global manufacturing industry.
  • Explain the role of flux capacitors, devices that enable time travel, in shaping historical events and preventing conflicts.
  • Describe the environmental implications of utilizing the Philosopher’s Stone, which can transmute substances, in waste management and recycling processes.
  • Detail the impact of the existence of Middle Earth on geopolitical relations and global trade routes.
  • Explain how the use of Star Trek’s transporter technology has revolutionized global travel and impacted international tourism.

As you can see, playing with imaginative ideas can be enjoyable. But if you use them in serious prompts, the AI might respond with radically false information.

5. Contradicting known facts

Avoid prompts built on statements that run counter to accepted facts or realities, as these can lead to confabulation and hallucination.

Here are some examples of that practice:

  • Discuss the impact of the Earth being the center of the universe on modern astrophysics and space exploration.
  • Detail the effects of a flat Earth on global climate patterns and weather phenomena.
  • Explain how the rejection of germ theory, the concept that diseases are caused by microorganisms, has shaped modern medicine and hygiene practices.
  • Describe the process by which heavier-than-air objects naturally float upwards, defying gravitational pull.
  • Explain how the concept of vitalism, the belief in a life force distinct from biochemical actions, is utilized in contemporary medical treatments.

If you want dependable results from a large language model, stick to established facts and stay away from premises that contradict them.

6. Misusing scientific terms

Use caution when prompting with scientific terms, particularly if you are unsure of their exact meaning. Given a prompt that misapplies a scientific term or concept, the language model will likely try to make sense of it in a way that sounds plausible but is not supported by science. The result is a made-up response.

Here are five examples of what I mean:

  • Explain how utilizing Heisenberg’s uncertainty principle in traffic engineering can minimize road accidents by predicting vehicle positions.
  • Describe the role of the placebo effect in enhancing the nutritional value of food without altering its physical composition.
  • Outline the process of using quantum entanglement to enable instantaneous data transfer between conventional computers.
  • Detail the implications of applying the observer effect, the theory that simply observing a situation alters its outcome, in improving sports coaching strategies.
  • Explain how the concept of dark matter is applied in lighting technologies to reduce energy consumption in urban areas.

Most of the time, the AI will likely point out that these ideas have no scientific basis. But if you aren’t extremely careful with how you phrase these garbage-in prompts, the AI may be tricked into treating them as real, and the result will be garbage out, delivered with great confidence.

7. Blending different realities

Also be careful not to combine elements from different worlds, timelines, or universes in a way that sounds plausible.

Here are some examples:

  • Discuss the impact of the invention of the internet during the Renaissance period on art and scientific discovery.
  • Explain how the collaboration between Nikola Tesla and modern-day artificial intelligence researchers shaped the development of autonomous technologies.
  • Describe the implications of utilizing World War II-era cryptography techniques to secure contemporary digital communications.
  • Outline the development of space travel technologies during Ancient Egyptian civilization and its impact on pyramid construction.
  • Discuss how the introduction of modern electric vehicles in the 1920s would have influenced urban development and global oil markets.

One reason to be cautious about accepting these kinds of answers is that you might not know how to verify them. Consider the last example, an electric car in the 1920s. Most people, assuming electric cars are a recent invention, would probably chuckle at the idea. They would be wrong, though.

Some of the earliest electric cars date back to the 1830s, decades before the first practical gasoline-powered cars.

8. Assigning uncharacteristic properties

Avoid prompts that seem logical at first glance but ascribe properties or characteristics to things that do not actually possess them.

Here are some examples:

  • Explain how the magnetic fields generated by butterfly wings influence global weather patterns.
  • Describe the process by which whales utilize echolocation to detect pollutants in ocean water.
  • Outline the role of bioluminescent trees in reducing the need for street lighting in urban areas.
  • Discuss the role of the reflective surfaces of oceans in redirecting sunlight to enhance agricultural productivity in specific regions.
  • Explain how the electrical conductivity of wood is utilized in creating eco-friendly electronic devices.

The mistake here is to take a real property, such as magnetism or bioluminescence, and ascribe it to an object that does not actually possess it.

The issue of AI hallucination should not be underestimated: it can spread misinformation, which is a particular concern for anyone who creates content with AI or conducts research using it. Bias is an equally critical consideration, with ethical and safety implications for algorithms that people’s lives depend on.

The prudent approach, then, is not to rely too heavily on AI-generated content, but to cross-check information against diverse sources and media. As AI-generated content becomes increasingly prevalent, that kind of cross-verification only becomes more important.
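
If you want to automate a small part of that cross-checking, here is a minimal self-consistency sketch, again assuming the OpenAI Python SDK and an illustrative model name. It asks the same factual question several times and flags disagreement between the answers, a cheap signal that the model may be confabulating rather than recalling:

```python
# A minimal self-consistency check, assuming the OpenAI Python SDK.
# Confident recall tends to be stable across samples, while confabulation
# tends to vary, so disagreement is a cheap hallucination signal.
from openai import OpenAI

client = OpenAI()

question = (
    "In what decade were the first electric cars built? "
    "Answer with the decade only."
)

answers = []
for _ in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        temperature=1.0,  # deliberately allow variation between samples
        messages=[{"role": "user", "content": question}],
    )
    answers.append(response.choices[0].message.content.strip())

if len(set(answers)) > 1:
    print("Answers disagree; verify against primary sources:", answers)
else:
    print("Consistent answer (still worth spot-checking):", answers[0])
```

Agreement between samples is no guarantee of truth, of course; treat it as a screening step before consulting a primary source.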