Misinformation, hallucinations and GenAI
To use generative artificial intelligence (GenAI) responsibly, make sure your agency can access high-quality information. This helps your agency avoid spreading misinformation and hallucinations.
This guidance is aligned with the OECD AI principles.
Access high-quality information
While there are many benefits to using GenAI, it can generate inaccurate and incomplete outputs.
Having access to high-quality information is vital to support effective decision-making. It’s important that the government avoids contributing to misinformation by sharing inaccurate information.
Limits of GenAI
GenAI systems may not always comprehend:
- real-world contexts
- nuances in language
- cultural references
- intent.
GenAI systems may not have access to information that’s known to be true or reliable. Equip your teams with the skills and capabilities to use GenAI effectively, and to never treat generated content as authoritative.
Misinformation, disinformation and hallucinations
The New Zealand government has identified disinformation as a core issue and a National Security Intelligence Priority. The Department of the Prime Minister and Cabinet (DPMC) defines:
- disinformation as false or modified information knowingly and deliberately shared to cause harm or achieve a broader aim
- misinformation as information that is false or misleading, though not created or shared with the direct intention of causing harm.
Strengthening resilience to disinformation — DPMC
Related to but distinct from misinformation and disinformation are hallucinations. The OECD defines hallucinations as when GenAI systems create incorrect yet convincing outputs.
Generative AI: Risks and unknowns — OECD.AI
Commit to avoiding hallucinations and incorrect information
- Be specific and prescriptive in your query.
- When using GenAI, include instructions for the model to respond with ‘I do not know’ when it is unsure.
- High-quality, representative data is a key part of avoiding hallucinations, alongside a reliable testing process.
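The instruction-based approach above can be sketched in code. The following is a minimal illustration of building a chat-style prompt that tells a model to answer only from supplied context and to say ‘I do not know’ rather than guess. The function name, message structure and wording are assumptions for illustration, not an official template or a specific vendor’s API.

```python
# Illustrative sketch only: the prompt wording and message format
# are assumptions, not an official template.

def build_grounded_prompt(question: str, context: str) -> list[dict]:
    """Build a chat-style prompt that instructs the model to answer
    only from the supplied context, and to reply 'I do not know'
    rather than speculate."""
    system = (
        "Answer using only the information in the provided context. "
        "If the context does not contain the answer, reply exactly: "
        "'I do not know.' Do not speculate."
    )
    user = f"Context:\n{context}\n\nQuestion: {question}"
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user},
    ]

messages = build_grounded_prompt(
    question="What year was the guidance last reviewed?",
    context="The guidance covers the use of GenAI by public agencies.",
)
print(messages[0]["content"])
```

A prompt like this reduces, but does not eliminate, the risk of confident-sounding fabrication, so generated answers still need verification.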
Best practice to help avoid misinformation and hallucinations
We recommend that you consider these best practices to avoid misinformation and hallucinations when using GenAI.
Examine the impact
For each use case, assess the impact of using AI-generated content and the risks of misinformation.
Verify and cross-reference information
Check results produced by GenAI systems against trusted sources. This helps make sure the generated content is accurate.
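As a very rough sketch of cross-referencing, the code below flags generated statements whose key terms do not all appear in a set of trusted source texts. This keyword check is a naive first filter only, and human review against authoritative sources is still required; all names and data here are illustrative assumptions.

```python
# Naive cross-referencing sketch: flag generated statements whose
# key terms are not found in any trusted source text. This is a
# crude first filter, not a substitute for human verification.

def unsupported_statements(statements: list[str],
                           trusted_sources: list[str]) -> list[str]:
    """Return statements containing terms absent from all trusted
    sources, so they can be flagged for manual verification."""
    corpus = " ".join(trusted_sources).lower()
    flagged = []
    for statement in statements:
        # Keep words longer than 4 characters as rough "key terms".
        terms = [w.strip(".,").lower() for w in statement.split()
                 if len(w.strip(".,")) > 4]
        if any(term not in corpus for term in terms):
            flagged.append(statement)
    return flagged

sources = ["The agency published its GenAI guidance in consultation with DPMC."]
claims = [
    "The agency published GenAI guidance.",
    "The guidance mandates quarterly audits.",
]
print(unsupported_statements(claims, sources))
# Flags the second claim, which the trusted source does not support.
```

Real verification workflows would compare meaning rather than keywords, but the principle is the same: anything the trusted sources cannot support should be checked by a person before it is shared.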
Check reliability and truthfulness
Double-check the reliability and truthfulness of AI results before using the generated information. This lessens the risk of sharing misleading information.
More information — GenAI misinformation and hallucinations
- Misinformation and disinformation — OECD
- Strengthening resilience to disinformation — Department of the Prime Minister and Cabinet
Related guidance
- Governance
- Skills and capabilities
- Bias, discrimination and fairness
- Transparency and explainability