Clear oversight and governance of agency use of GenAI not only supports responsible spending and timely delivery, it also enables safe, responsible, ethical and effective use.
This guidance is underpinned by the OECD AI principles.
Governance and assurance are closely related activities. Because some of the existing guidance refers to governance, it’s important in this context to understand the relationship between the 2 concepts.
There’s no single agreed definition of governance, but broadly it involves setting goals, allocating resources to achieve those goals, establishing systems to track progress and managing risk. Assurance is an independent assessment of the governance, risk management and control processes used to achieve a specified aim.
This means good governance will establish assurance mechanisms to support the delivery of outcomes for the task or area of responsibility of the governance group. An assurance process can also sit above this: a system that independently reviews aspects of the governed work, or the sufficiency of the governance itself.
Commit to good governance of GenAI
Agencies should develop and publicly share their GenAI policies and standards to guide their use of AI. Working together will help agencies lift their capability in using emerging technologies like GenAI.
Designate a responsible official to lead adoption of GenAI
In line with international best practice, we recommend public service agencies each designate a responsible senior official to guide the safe and secure adoption of GenAI systems and other emerging technology. This person could be the Chief Digital Officer, Chief Risk Officer or Chief Data Officer. Their role could include:
coordinating dialogue and collaboration between teams using GenAI systems
mitigating and managing risk, and overseeing assurance
establishing your agency’s guidance or rules to advance the appropriate use of GenAI
engaging in all-of-government AI forums and processes and keeping up to date with requirements as they evolve
acting as a central contact in your agency and coordinating with the Government Chief Digital Officer (GCDO) on issues such as technology procurement to ensure GenAI tools are secure, of high quality and properly supported over time
involving iwi Māori.
This will provide assurance to the public service that risks are being managed and legal requirements are being met, encouraging the adoption of AI to modernise public services. It will also provide assurance to the public that the public service is adopting AI responsibly and delivering better outcomes for all New Zealanders.
Conduct appropriate impact assessments for applications of GenAI
The GCDO is collaborating with agencies to develop an AI Assurance Regime. This will cover all forms of AI, though the approach to assurance will depend on the type of AI and the risk associated with its use. While the regime is under development, we encourage agencies to conduct risk assessments to identify, assess, document and manage sector-specific low-risk versus high-risk uses of AI systems.
Agencies should subject higher-risk uses of AI to more extensive oversight. This ensures the responsible and ethical use of data and limits potential problems such as privacy and security issues, or biased or factually incorrect outputs (such as ‘hallucinations’).
In cases where AI uses people’s personal information, take all necessary steps to protect their data and privacy, including assessing how data is managed and stored, and conducting a privacy impact assessment.
Always ensure accountable humans are involved in the application or use of GenAI systems and outputs. Decision-makers should have the necessary authority and skills to make informed choices.
Understanding and explaining GenAI can be challenging. Managers and leaders accountable for GenAI must articulate how and why it’s used and clarify any factors that have influenced their decisions. Ensure you have people across your organisation who can explain the technology itself.
Human oversight can minimise misleading or biased results and support functions such as:
impact and risk assessments
transparency and reporting
quality assurance.
Transparency and accountability
Openness, transparency and accountability are key to maintaining trust, confidence and integrity. Be transparent with the public, key stakeholders and partners about how you’re using GenAI for the benefit of New Zealand. Explain how personal data related to your use of GenAI systems will be collected and managed.
We strongly recommend publishing your AI use online for wider transparency and working with the responsible official to keep a register of AI use in your agency.
What to publish about your agency’s GenAI use
Agencies should publish information about their development and use of AI, barring reasonable exceptions such as classified use cases. This will help maintain transparency and trust in public service AI use. Agencies might consider publishing information about the type of AI they are using, what stage the project is at, the intent of use or the problem it is trying to solve, and an overview of how the system is being used and by whom.