Bias, discrimination, fairness, equity and GenAI
Proactively make sure your agency’s use of generative artificial intelligence (GenAI) creates fairness and equity instead of biases and discrimination.
This guidance is aligned with the following OECD AI principles:
Biases in GenAI affects communities
Bias can disproportionally impact some community groups, such as:
- Māori
- Pacific Peoples
- disabled people
- LGBTIQ+ communities
- multicultural communities.
To make sure that use of GenAI is fair, you need processes to ensure its outputs are unbiased and do not worsen social, demographic, or cultural inequalities.
Be aware of your own biases. This might mean getting training or learning to spot unconscious bias, so you do not accidentally include bias throughout the AI lifecycle.
How biases and discrimination can happen in GenAI
In GenAI, harmful biases can be present in:
- text
- images
- audio
- video.
Bias perpetuates stereotypical or unfair treatment related to race, sex and gender, ethnicity, or other protected characteristics.
By identifying and mitigating bias and reducing harm at all stages, you’ll help GenAI to produce fairer outcomes.
Public GenAI systems are often trained on data that can reinforce existing biases, discrimination, and inequalities. This means there’s a risk that using AI could repeat these biases, leading to unfair, harmful, and unequal results.
Tips for best-practice fairness and equity for GenAI
- Write prompts to help reduce the potential for bias.
- You can configure some systems to flag inappropriate prompts.
Commit to fairness, equity and GenAI
Follow best practice approaches to create fairness and equity in your agency’s use of GenAI systems.
Involve Māori and other community partners and groups
Involve your partners from these groups at the appropriate level to manage potential impacts. Build your team’s capability to engage with iwi Māori.
Have diverse teams
Especially in the governance, deployment and use of GenAI, make sure these teams have diverse representation.
Think critically and validate all GenAI outputs
Check all GenAI results to reduce the potential for discrimination and keep reviewing over time to make sure biases are not developing. Refer to the AI Lifecycle and ensure rigorous evaluation at all stages of AI development and use.
A3 Summary: Responsible AI Guidance for the Public Service: GenAI (PDF 120KB)
More information — GenAI privacy related to fairness and equity
This guidance is a good option to help ensure privacy is one of the key priorities for your product, service, system or process.
The Privacy, Human Rights and Ethics Framework — Data.govt.nz
Related guidance
Utility links and page information
Last updated