Accountability, responsibility and GenAI

Humans must be involved when using generative artificial intelligence (GenAI). We recommend that you involve people with different perspectives in assessing GenAI outputs.

This guidance is aligned with the following OECD AI principle.

Principle 1.5: Accountability — OECD

Human oversight is essential for ethical GenAI use

While GenAI can increase productivity, it can also produce misleading, harmful or biased results. ‘Human-in-the-loop’ is an approach where human oversight is integrated across GenAI use. It ensures that humans remain an essential part of decision-making, working alongside GenAI.

Make sure you understand the data you provide to GenAI systems, and that you understand, check and agree with the outputs.

Combine the benefits of GenAI and human intelligence

GenAI lacks human understanding, empathy and compassion. It cannot understand context or know when something is wrong or harmful. But when humans and GenAI bring their unique strengths together, value can be gained while human values remain a priority.

When using GenAI tools for decision-making, make sure a human is involved. If your initial risk assessment shows that the situation is high-risk or high impact, ensure your use follows your agency’s risk and assurance framework and all controls are applied. If you’re using GenAI for a significant piece of work, an AI impact assessment will help you to make responsible choices.

Algorithm Impact Assessment Toolkit — Data.govt.nz

Commit to using different perspectives with GenAI

You can ask GenAI systems to consider content from different perspectives. You should also ask humans for different views, then verify the outputs with people who have knowledge or lived experience of those perspectives.

Encourage a culture of responsibility

Equip your teams to understand potential consequences. Encourage people to check that outputs are:

  • truthful
  • non-harmful
  • factual
  • lawful
  • non-discriminatory.

If outputs do not meet these standards, consider either:

  • not using GenAI or using a different system
  • increasing human oversight to an appropriate level.
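One way to make the sign-off on these checks explicit is to record each one before publishing. The sketch below is a hypothetical illustration, not an official framework; the field names simply mirror the bullet list above.

```python
# A minimal sketch of recording the output checks before publishing.
# Each field must be explicitly confirmed by a human reviewer.

from dataclasses import dataclass, fields

@dataclass
class OutputChecks:
    truthful: bool = False
    non_harmful: bool = False
    factual: bool = False
    lawful: bool = False
    non_discriminatory: bool = False

    def all_passed(self) -> bool:
        # Publish only when every check has been confirmed.
        return all(getattr(self, f.name) for f in fields(self))

checks = OutputChecks(truthful=True, non_harmful=True, factual=True,
                      lawful=True, non_discriminatory=True)
ready_to_publish = checks.all_passed()
```

Defaulting every check to `False` means nothing is publishable until a person has actively confirmed each criterion, which keeps the accountability with the human rather than the tool.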

Build evaluation and auditing processes

To oversee AI use and outputs, create processes and controls that help to build accountability and responsibility in your organisation.

Accountability, responsibility and GenAI scenario

You use GenAI to summarise a long document, containing only publicly available information, into a quick read for users of a government service. You’ll use this summary on the website to link to the longer document.

You check your agency’s GenAI policy and let your manager know you’re using the tool. You should also ask a colleague to review the summary, as a quality check.

You should not publish the summary until you’ve double-checked that all the content is accurate, culturally appropriate and no key context is missing. Once you’ve completed those checks, you can confidently publish a useful summary of your longer document for the public to use.

More information on assessing algorithms

For a framework to understand high-risk algorithms, check the following charter for algorithms in New Zealand.

Algorithm charter for Aotearoa New Zealand — Data.govt.nz

For ways to help organisations understand and assess the potential impacts of the algorithms they create or use, check the following toolkit.

Algorithm Impact Assessment Toolkit — Data.govt.nz
