Managing the risks of GenAI to the public service
There are several risks to public service use of GenAI. Learn how to use GenAI safely, ethically and in privacy-protecting ways.
Download a one-page summary or the full interim guidance in PDF format:
Interim GenAI guidance for the public service — Summary (PDF 230 KB)
Interim GenAI guidance for the public service — Full guidance (PDF 193 KB)
Note: This is interim advice and will be reviewed as AI and GenAI evolve and the risks are better understood.
The guidance on this page was last updated on 11 September 2023.
We strongly recommend you:
Do not use GenAI for
Datasets classified as SENSITIVE or above
The security risks, and the potential impacts if SENSITIVE[Footnote 1] or above datasets were to be compromised, could be catastrophic for our society, economy and public services.
Take all necessary steps to avoid inputting these types of at-risk datasets into GenAI tools.
Personal information in GenAI tools external to your environment
If personal information is compromised, the risks to people, and their trust and confidence in the government could be significant. Take all steps to avoid inputting personal information into GenAI tools that are outside your network.
We also recommend you:
Avoid using GenAI for
Personal and client data in your network
The government is held to a higher standard in respect of trust and confidence, so we recommend extreme caution where personal information is, or could be, involved.
Do not include personal information (particularly client information) when using GenAI, unless:
- it’s not possible to use non-personal or synthetic data, and
- all potential harms have been addressed.
Use cases without IT department approval (also known as shadow IT)
There could be a risk of unsanctioned use of GenAI by teams operating within your environment.
This may create a risk of technologies being used in ways that could:
- compromise security
- increase risk of data or privacy breaches
- add complexity to your technology environment in ways that could lead to service disruption or conflict.
Information withheld under the Official Information Act (OIA)
The risks for the integrity of the public service, and potential impacts if withheld information were to be accessed and/or inappropriately used, could be extremely damaging for public trust and confidence.
Take care when using GenAI with data that would satisfy OIA withholding grounds.
Business-critical information, systems or public-facing channels
GenAI can generate inaccurate and incomplete outputs and has the potential to perpetuate bias and mis- or disinformation.
AI systems can be complex to understand, causing issues where understanding decision-making processes is important. Avoid using GenAI for your agency’s essential systems and services.
Be aware of
The risks of both free and paid GenAI tools
Free GenAI tools may not have the same robust privacy and security controls as paid GenAI tools, and there could be potential for inputted data to be used for unauthorised purposes or in non-transparent ways. Quality and reliability may also be an issue.
Free and paid GenAI tools come with their own sets of risks. We recommend considering factors such as cost, availability, functionality, privacy, security, and maintenance and support when choosing between free and paid GenAI tools.
We recommend blocking access to GenAI tools through your network until this guidance has been applied.
While the benefit potential is substantial, there are several risks to public service use of GenAI that, if they were to materialise, could seriously damage public trust and be potentially harmful to the public. We recommend fully assessing and actively managing these risks, to support informed decisions on when and how your agency uses GenAI.
How to use GenAI safely within the public service
Robustly govern the use of GenAI
Consider your governance system and obtain senior approval of decisions relating to the use and application of GenAI in your agency context. Consider also appropriate involvement of your Te Tiriti (Treaty of Waitangi) partners.
We recommend that your agency develops a GenAI policy and standards to guide your agency’s use of GenAI, and shares this policy with the Government Chief Privacy Officer (GCPO).
Email the GCPO at: gcpo@dia.govt.nz
Because GenAI has the potential to cause significant harm, any input into it and use of its outputs should be robustly governed by accountable humans. It's important that GenAI outputs, and human-made decisions based on them, are fair, transparent and unbiased.
Governance of use and application of GenAI should be based on the principles of:
- safe and ethical use
- security and privacy by design
- Te Tiriti-based approaches aligned to the Government’s Procurement Rules
- openness, transparency and accountability.
Ongoing dialogue and collaboration between teams using GenAI tools and your agency’s functional leads[Footnote 2] will be important to effective governance of the use and application of GenAI tools.
Assess and manage privacy risks
Take all necessary steps to protect privacy. This includes completing a robust privacy impact assessment for any testing or use of GenAI, and obtaining senior approval of it.
Applying privacy by design principles can help to build trust in GenAI tools and systems by ensuring they are privacy-compliant and transparent, respect people’s privacy, and reduce the risk of privacy breaches.
Consider how Te Tiriti in relation to Māori privacy might apply to any use or application of GenAI.
Personal data
We recommend identifying privacy risks and minimising the use of personal data in GenAI tools. Anonymise data to reduce the risk of identification. Access controls, encryption and data minimisation should also be considered, to ensure that outputs are only accessible to those who are authorised to view or use them.
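As an illustration only, a minimal sketch of this kind of data minimisation in Python is below. The patterns and helper name are invented for the example; a real deployment would use a properly assessed PII-detection approach agreed through your privacy impact assessment.

```python
import re

# Hypothetical patterns for common personal identifiers. Real detection
# needs to be far more robust (for example, a dedicated PII-detection
# library) and reviewed through your privacy impact assessment.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b0\d{1,2}[ -]?\d{3}[ -]?\d{4}\b"),
}

def redact_personal_info(prompt: str) -> str:
    """Replace likely personal identifiers with placeholder tokens
    before the prompt leaves the agency's environment."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

# The redacted prompt is what would be sent to the GenAI tool.
print(redact_personal_info(
    "Summarise the complaint from jane.doe@example.com, ph 04 123 4567."
))
```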
Openness, transparency and accountability are key to maintaining trust, confidence and integrity. Be transparent with your stakeholders and the public about how you are using GenAI for the benefit of the public, and how personal data related to GenAI tools used will be collected, managed and used.
A privacy impact assessment (PIA) will help you to identify and manage privacy risks. Obtain senior approval of your PIA to ensure your senior leaders are properly informed. Also copy your PIA to the GCPO, to enable the GCPO to identify further support that agencies may need. Actively govern and manage the identified risks, and seek support from the GCPO if you are unsure of what to do, or if critical risks materialise.
Assess and control security risks
GenAI can increase the risk of security breaches. Attackers can use these tools to manipulate or socially engineer your staff, and they can potentially make it easier to produce malware. Managing these risks requires strong cybersecurity practices, such as a strong security culture around email.
Data spills
Third-party GenAI services are run by independent organisations that will likely have visibility of the queries or prompts you use to generate answers. You have limited visibility of where prompt information is stored and how it’s used. This creates risks relating to data spills, reverse-engineering of datasets and potential disclosure of sensitive information.
You also have limited control over how these services are run, how they’re configured, and how they store or use data. This is similar to the risk of many software-as-a-service cloud products you might consume. The difference is that GenAI is still a new field, and well-understood transparency and assurance tools (for example, SOC 2 or ISO 27001 reports) are not yet available to help you understand and manage these risks.
Insecure code generation
GenAI tools often create insecure code containing vulnerabilities and common mistakes. Agencies should not put code generated by GenAI into their production systems without thorough review.
You should always conduct testing and assurance processes to ensure that any outputs your staff use are checked and confirmed as correct and safe to use.
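As a hedged illustration of the kind of issue a review should catch, the first function below is typical of insecure generated code (building an SQL query by string interpolation, which allows SQL injection), and the second is a corrected, parameterised version. The table and column names are invented for the example.

```python
import sqlite3

# Typical of insecure GenAI-generated code: user input is interpolated
# directly into the SQL string, allowing SQL injection.
def find_user_insecure(conn: sqlite3.Connection, username: str):
    query = f"SELECT * FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# Reviewed version: a parameterised query lets the database driver
# handle escaping, closing the injection hole.
def find_user_safe(conn: sqlite3.Connection, username: str):
    query = "SELECT * FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```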
Through applying your agency’s normal security principles and processes, your developers and users of GenAI can help to ensure that AI systems are secure, trustworthy and resilient to potential threats.
Several GenAI tools have “opt out” functionality that lets you choose whether your data is retained for further training of the AI. Where possible, opt out of GenAI tools retaining and using your data for training purposes.
Consider Te Tiriti o Waitangi (the Treaty of Waitangi)
Work with Iwi Māori where GenAI may use Māori data and/or its use may impact Māori including services to Māori.
Māori communities could be at higher risk of bias and discrimination in the results and application of GenAI outputs, so work with your Te Tiriti partners to understand and actively manage the impacts of GenAI for Māori.
Māori representatives have expressed varying views, some strongly held, in respect of Government use of GenAI tools. There is heightened concern among Indigenous groups, in particular about discrimination at the hands of GenAI.[Footnote 3] We strongly recommend working with your Te Tiriti partners, where Māori data is involved and/or where Māori interests or outcomes could be affected through using GenAI.
We recommend understanding important context for Māori and the Crown, including why GenAI is being considered, how it could impact Māori including services to Māori, what Māori data might be involved and its status across tapu (sensitivity and risk) or noa (free from tapu), and how Māori Data Governance might apply.
Where Māori data is involved, we recommend aligning to your agency’s existing Māori-Crown relationship approach and leveraging existing engagement models. As part of this you may wish to consider working more actively with your Te Tiriti partners on the opportunities for sharing of decisions, and/or exploring of more Māori-led approaches.
Considering how your teams work with Māori for mutual benefit is recommended. You may wish to consider how engagements can be simple, equitable, safe and value-adding for Māori participants. We also recommend considering how to ensure your team has the capability needed to engage with Māori confidently and successfully.
Use AI ethically and ensure accuracy
Ethics should be at the centre of how the public service uses GenAI systems and tools. AI systems can perpetuate existing biases and discrimination if they’re not properly designed and used. GenAI systems should be designed to avoid discrimination, with outputs regularly checked for bias and other possible harms.
Validate outputs
We recommend validating and scrutinising GenAI outputs, to reduce the potential for discrimination against people and groups such as women, ethnic communities, older and disabled people.
Validate all outputs before using them in practice. There are many examples of GenAI making up information or returning misleading results — also known as 'hallucinations'.
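One lightweight way to support validation, sketched below in Python with invented function names, is to check whether each sentence of an output is grounded in the source material that was supplied to the tool, and route anything unsupported to a human reviewer. This is an illustration only, not a substitute for human review.

```python
def flag_unsupported_sentences(output: str, source_text: str) -> list[str]:
    """Naive grounding check: flag output sentences whose longer words
    mostly do not appear in the source material. Real validation would
    be more sophisticated, and always ends with human review."""
    source_lower = source_text.lower()
    flagged = []
    for sentence in output.split(". "):
        words = [w.strip(".,").lower() for w in sentence.split() if len(w) > 5]
        missing = [w for w in words if w not in source_lower]
        # If most of the sentence's longer words are absent from the
        # source, treat the sentence as potentially hallucinated.
        if words and len(missing) / len(words) > 0.5:
            flagged.append(sentence)
    return flagged

# Any flagged sentences go to a human reviewer before the output is used.
```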
Optimise data and algorithms
Ensuring the accuracy of GenAI outputs is critical to trustworthy use of GenAI. High-quality outputs depend on high-quality training data. Cleansing, validating and quality-assuring data can help to ensure the accuracy and reliability of outputs.
Good algorithms also play a key role — if algorithms are poorly designed, the results they return could have errors, be inaccurate or inappropriate. Iterative testing, adjustment, validation, and monitoring are key to tuning and optimising the performance of algorithms.
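A minimal sketch of what basic cleansing and validation might look like before data is used with an AI tool is below. The column names and allowed labels are invented for illustration; real pipelines would add schema validation and bias checks appropriate to the dataset.

```python
import pandas as pd

def cleanse_training_data(df: pd.DataFrame) -> pd.DataFrame:
    """Basic quality checks before data is used with an AI tool:
    drop duplicates, remove rows missing required fields, and
    reject values outside the approved set."""
    required = ["record_id", "text", "label"]
    df = df.drop_duplicates(subset="record_id")
    df = df.dropna(subset=required)
    # Example range check: labels must come from the approved set.
    allowed_labels = {"approve", "decline", "refer"}
    df = df[df["label"].isin(allowed_labels)]
    return df.reset_index(drop=True)
```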
Be accountable
Always ensure accountable humans are making decisions in respect of the application or use of GenAI outputs, and that the decision-makers have the necessary authority and skills.
Human oversight and control are essential for ensuring GenAI is used ethically and responsibly. GenAI can produce misleading or biased results, so we strongly recommend human oversight of validating, verifying and interpreting AI outputs. Human governance over the data provided to AI tools and how the outputs are applied is also recommended.
Human decision-making on the overall risk assessment is also highly recommended. It’s important that the people in these decision-making roles have the necessary authority and capability to make these decisions on behalf of the agency.
Be transparent, including to the public
Be open and transparent in terms of what GenAI is being used for and why. Ensure processes are in place to respond to citizen requests to access and correct information.
Citizens are concerned about the ethical use of GenAI, and the public has expectations about how GenAI is being used, particularly by the public service. The public service is held to a higher standard, so consider your social licence and how to assure transparency, accountability and fairness in how your agency is using and applying GenAI, whether directly or as part of a wider technology solution.
Exercise caution when using publicly available AI
Be aware of the potential security, quality, intellectual property and supply chain risks of publicly available AI, and mitigate risks where possible before using AI tools. Publicly available AI tools sit outside an agency’s own environment. They are third-party AI platforms, tools or software that have not necessarily been risk-assessed to the standard expected for NZ government agencies, or that are not part of a commercial contract established through government assurance and procurement processes.
For example, some publicly available AI software may be developed by communities of developers, without assurance that the software is secure and free from vulnerabilities. This could lead to security breaches or other issues if the software is not properly tested and maintained within a controlled environment.
Further, the developers contributing to public AI software could have differing levels of experience, so quality and the absence of errors cannot always be assured.
Lastly, support and distribution licences for publicly available AI software vary, attracting some risk around intellectual property protection, performance, and levels of support and maintenance over time.
We recommend taking steps to assess the testing, maintenance, and governance of any type of AI software you are using to ensure it is secure, appropriate, of high quality, and properly supported over time.
Apply the Government procurement principles
There may be an increased risk of vendor lock-in, and of exposure if providers are using GenAI in the services they provide to you. Use the Government Procurement Rules and consider the mitigations needed when sourcing GenAI tools.
Some of your providers could be using, or could be planning to use, GenAI in the services they provide you. It’s essential to have visibility, and ideally control, over how your vendor is using and/or integrating GenAI into the solutions or services they provide you.
AI systems are often proprietary, and vendors can be reluctant to support integration and/or interoperability with other systems. The technology is also swiftly evolving, and as a result, capability can become quickly out of date.
Procurement teams should conduct market research on vendors and their offerings, considering vendor reputation, capability, pricing and the supply chain. We recommend a coordinated approach across agency functional leads[Footnote 4] to support this evaluation. For trustworthy use of AI, and ultimately value for taxpayer dollar, teams are encouraged to also consider including specific commercial protections in contracts with your vendors for:
- privacy, security and ethical risks
- technology obsolescence
- vendor lock-in
- reliance on third-party provided services/AI.
Test safely
Create guardrails and dedicated testing spaces, like sandboxes,[Footnote 5] for your teams to safely trial and learn to use GenAI.
Safely trialling GenAI is important to ensuring that AI systems and their outputs behave as expected, and do not cause unintended harm to people, communities, society, the economy and/or the environment. We suggest selecting appropriate lower-risk datasets to learn and trial safe use of AI tools, gauge data quality and test outputs before they are deployed. This should include testing under various conditions to identify issues, and to validate that models and training data are appropriate for use in New Zealand and will work in ways that are accurate and fair for New Zealanders.
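As an illustration, a sandbox trial can be as simple as a scripted harness that runs a fixed set of lower-risk test prompts through the tool and logs the outputs for review. The sketch below assumes a stand-in `generate` function for whatever sandboxed GenAI interface your agency uses.

```python
import csv
from datetime import datetime, timezone

def run_sandbox_trial(generate, test_prompts, out_path="trial_results.csv"):
    """Run each test prompt through the sandboxed GenAI tool and log
    the outputs so reviewers can assess accuracy, bias and fitness
    for New Zealand conditions before any wider deployment."""
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "prompt", "output"])
        for prompt in test_prompts:
            output = generate(prompt)  # stand-in for the sandboxed model call
            writer.writerow([datetime.now(timezone.utc).isoformat(), prompt, output])

# Example usage with a dummy model and lower-risk test data only:
# run_sandbox_trial(lambda p: "stub output", ["Summarise this public report..."])
```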
For enquiries, contact:
- Digital — Government Chief Digital Officer: gcdo@dia.govt.nz
- Data: datalead@stats.govt.nz
- Information Security — National Cyber Security Centre: info@ncsc.govt.nz
- Procurement: gcdo@dia.govt.nz
- Privacy (public service) — Government Chief Privacy Officer: gcpo@dia.govt.nz
- Privacy (all): enquiries@privacy.org.nz