
Bias, discrimination, fairness, equity and GenAI

Proactively make sure your agency’s use of generative artificial intelligence (GenAI) promotes fairness and equity rather than bias and discrimination.

Bias in GenAI affects communities

Bias can disproportionately affect some community groups, such as:

  • Māori
  • Pacific Peoples
  • disabled people
  • LGBTIQ+ communities
  • multicultural communities.

To make sure that use of GenAI is fair, you need processes to ensure its outputs are unbiased and do not worsen social, demographic, or cultural inequalities. 

Be aware of your own biases. This might mean getting training or learning to spot unconscious bias, so you do not accidentally introduce bias at any stage of the AI lifecycle.

How biases and discrimination can happen in GenAI

In GenAI, harmful biases can be present in:

  • text
  • images
  • audio
  • video.

Bias perpetuates stereotypes or unfair treatment based on race, sex and gender, ethnicity, or other protected characteristics.

By identifying and mitigating bias and reducing harm at all stages, you’ll help GenAI to produce fairer outcomes.

Public GenAI systems are often trained on data that reflects existing biases, discrimination, and inequalities. This means there’s a risk that using AI could reinforce these biases, leading to unfair, harmful, and unequal results.

Tips for best-practice fairness and equity for GenAI

  • Write prompts that help reduce the potential for bias.
  • Where the system supports it, configure it to flag inappropriate prompts (a sketch of both tips follows this list).
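
As a minimal illustration of both tips, the sketch below prepends bias-mitigation instructions to a user prompt and applies a simple keyword check before anything is sent to a model. The names and example terms here (BIAS_MITIGATION_INSTRUCTIONS, FLAGGED_TERMS, flag_prompt, build_prompt) are hypothetical; in practice, flagging is usually configured in the GenAI product itself rather than written by hand.

  # Illustrative sketch only: prompt wording, flag terms and helper names are assumptions.
  BIAS_MITIGATION_INSTRUCTIONS = (
      "Answer using neutral, inclusive language. Do not assume gender, "
      "ethnicity, age, ability or other personal characteristics unless "
      "they are stated in the question."
  )

  # Example terms an agency might choose to route for human review first.
  FLAGGED_TERMS = {"ethnicity", "religion", "disability", "gender"}

  def flag_prompt(user_prompt: str) -> bool:
      """Return True if the prompt mentions a term the agency wants reviewed."""
      words = set(user_prompt.lower().split())
      return bool(words & FLAGGED_TERMS)

  def build_prompt(user_prompt: str) -> str:
      """Prepend bias-mitigation instructions to the user's prompt."""
      return f"{BIAS_MITIGATION_INSTRUCTIONS}\n\nQuestion: {user_prompt}"

  if __name__ == "__main__":
      prompt = "Summarise the support needs of rural communities."
      if flag_prompt(prompt):
          print("Prompt flagged for human review before it is sent to the model.")
      else:
          print(build_prompt(prompt))

In practice, you would tune the instruction wording and the flag list with your community partners and keep reviewing both over time, in line with the checks described later on this page.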

Commit to fairness and equity in GenAI

Follow best practice approaches to create fairness and equity in your agency’s use of GenAI systems.

Involve Māori and other community partners and groups

Involve your partners from these groups at the appropriate level to manage potential impacts. Build your team’s capability to engage with iwi Māori.

Have diverse teams

Make sure the teams involved in the governance, deployment and use of GenAI have diverse representation.

Think critically and validate all GenAI outputs

Check all GenAI results to reduce the potential for discrimination and keep reviewing over time to make sure biases are not developing. Refer to the AI Lifecycle and ensure rigorous evaluation at all stages of AI development and use.

A3 Summary: Responsible AI Guidance for the Public Service: GenAI (PDF 120KB)

Example scenario of fairness, equity and GenAI

You’ve been using GenAI to source information that will be used to make decisions about who to prioritise for support.

You should ask the following to check the information:

  • an expert in the field and/or
  • members of the community that the information is about.

This helps you test that it’s correct, factual and unbiased.

You should also cross-check with official and authoritative sources about the potential for harm if the information is at risk of being incorrect or biased.

After completing your checks, you conclude that the sources are authoritative, accurate and balanced. Confident that your sources are not unintentionally biasing the output, you proceed to use the GenAI-provided sources in your work and test the outputs.

More information — GenAI privacy related to fairness and equity

This guidance can help you make privacy a key priority for your product, service, system or process.

The Privacy, Human Rights and Ethics Framework — Data.govt.nz
