Misinformation, hallucinations and GenAI

To use generative artificial intelligence (GenAI) responsibly, make sure your agency can access high-quality information. This helps your agency avoid spreading misinformation or sharing hallucinations.

This guidance is aligned with the OECD AI principles.

Access high-quality information

While there are many benefits to using GenAI, it can generate inaccurate and incomplete outputs.

Having access to high-quality information is vital to support effective decision-making. It’s important that the government avoids contributing to misinformation by sharing inaccurate information.

Limits of GenAI

GenAI systems may not always comprehend:

  • real-world contexts
  • nuances in language
  • cultural references
  • intent.

GenAI systems may not have access to information that’s known to be true or reliable. Equip your teams with the skills and capabilities to use GenAI effectively, and make sure they never treat generated content as authoritative.

Misinformation, disinformation and hallucinations

The New Zealand government has identified disinformation as a core issue and a National Security Intelligence Priority. The Department of the Prime Minister and Cabinet (DPMC) defines:

  • Disinformation as false or modified information knowingly and deliberately shared to cause harm or achieve a broader aim
  • Misinformation as information that is false or misleading, though not created or shared with the direct intention of causing harm.

Strengthening resilience to disinformation — DPMC

Related to but distinct from misinformation and disinformation are hallucinations. The OECD defines hallucinations as when GenAI systems create incorrect yet convincing outputs.

Generative AI: Risks and unknowns — OECD.AI

Commit to avoiding hallucinations and incorrect information

  • Be specific and prescriptive in your query.
  • When using GenAI, you can include instructions for the model to respond with ‘I do not know’ when it is unsure (see the sketch after this list).
  • Data quality and representative data are key to avoiding hallucinations, alongside a reliable testing process.
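The sketch below illustrates how an ‘I do not know’ instruction and a specific, prescriptive query might look in practice. It uses the OpenAI Python SDK purely as an example — this guidance does not name any vendor or tool, and the system prompt, model name and question are illustrative assumptions, not a prescribed configuration.

```python
# Illustrative sketch only: shows a prescriptive query plus an 'I do not know'
# instruction. The OpenAI SDK, model name and prompt wording are assumptions,
# not part of the guidance.
from openai import OpenAI

client = OpenAI()  # assumes an API key is configured in the environment

SYSTEM_PROMPT = (
    "Answer only from information you are confident is accurate. "
    "If you are unsure, or the answer is not well established, reply exactly: "
    "'I do not know'. List the sources you relied on so they can be verified."
)

def ask(question: str) -> str:
    """Send a specific, prescriptive query with the uncertainty instruction."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        temperature=0,         # lower temperature discourages speculative answers
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

print(ask("Which agency leads New Zealand's work on disinformation resilience?"))
```

Even with instructions like these, generated answers still need to be verified against trusted sources before they are used or shared.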

Best practice to help avoid misinformation and hallucinations

We recommend that you consider these best practices to avoid misinformation and hallucinations when using GenAI.

Examine the impact

For each use case, assess the impact of using AI-generated content and the risks of misinformation.

Verify and cross-reference information

Check the quality of results produced by GenAI systems against trusted sources. This helps make sure the content generated is accurate.

Check reliability and truthfulness

Double-check the reliability and truthfulness of AI results before using the generated information. This lessens the risk of sharing misleading information.

Example scenario of misinformation, hallucination and GenAI

For some policy work, you’re using GenAI to learn about a topic you’re unfamiliar with. It produces information that looks robust.

Evaluate the references and citations provided by the system and check whether the sources are legitimate and appropriate. Cross-check the information with credible sources, experts or relevant communities.

Make sure your manager knows that you’ve used the tool. You should not publish the information until it has been fully verified and approved.

Be clear that GenAI was used to produce it, and that people can challenge those outputs. This will help maintain transparency, trust, and robust outcomes.
