Accountability, responsibility and GenAI
Humans must stay involved when using generative artificial intelligence (GenAI). We recommend that you seek different perspectives when assessing GenAI outputs.
This guidance is aligned with the following OECD AI principle.
Human oversight is essential for ethical GenAI use
While GenAI can increase productivity, it can also produce misleading, harmful or biased results. ‘Human-in-the-loop’ is an approach where human oversight is integrated throughout the use of GenAI. It ensures that humans remain an essential part of decision-making, working alongside GenAI.
Make sure you understand the data you provide to GenAI systems, and that you understand, check and agree with the outputs.
Combine the benefits of GenAI and human intelligence
GenAI lacks human understanding, empathy and compassion. It cannot understand context or know when something is wrong or harmful. But when humans and GenAI bring their unique strengths together, value can be gained while keeping human values a priority.
When using GenAI tools for decision-making, make sure a human is involved. If your initial risk assessment shows that the situation is high-risk or high impact, ensure your use follows your agency’s risk and assurance framework and all controls are applied. If you’re using GenAI for a significant piece of work, an AI impact assessment will help you to make responsible choices.
Algorithm Impact Assessment Toolkit — Data.govt.nz
Commit to using different perspectives with GenAI
You can ask GenAI systems to consider content from different perspectives. You should also seek different views from humans, checking with people who have knowledge of the relevant viewpoint.
Encourage a culture of responsibility
Equip your teams to understand potential consequences. Encourage people to check that outputs are:
- truthful
- non-harmful
- factual
- lawful
- non-discriminatory.
If outputs do not meet these standards, consider either:
- not using GenAI or using a different system
- increasing human oversight to an appropriate level.
Build evaluation and auditing processes
To oversee AI use and outputs, create processes and controls that help to build accountability and responsibility in your organisation.
Accountability, responsibility and GenAI scenario
More information on assessing algorithms
For a framework to understand high-risk algorithms, check the following charter for algorithms in New Zealand.
Algorithm charter for Aotearoa New Zealand — Data.govt.nz
For ways to help organisations understand and assess the potential impacts of the algorithms they create or use, check the following toolkit.
Algorithm Impact Assessment Toolkit — Data.govt.nz