Security and GenAI

Learn about the key security risks of introducing GenAI to your organisation. Develop a list of considerations to ensure GenAI software is covered by basic security measures.

This guidance is aligned with the OECD AI principles.

Security plays a vital role in enabling the reliability and resilience of GenAI systems. Like all digital systems, GenAI systems can be susceptible to security vulnerabilities and misconfigurations.

Fortunately, basic security measures can help achieve an acceptable level of risk. Treat GenAI as you would other software applications. This provides a key guardrail and helps build strong foundations for your use of GenAI.

Security risks of GenAI

The security risk posed by GenAI applications depends on several factors, such as:

  • the information the tool has access to
  • permitted users
  • when and how it was developed, whether in-house or procured from a third party
  • external sharing of data.

A security risk assessment should consider these factors to determine if the application is the right fit for your organisation.

Notable risks

Vendors introducing GenAI into existing and new applications

Software vendors are rapidly introducing GenAI into existing and new applications, and the capabilities on offer are constantly evolving.

New software versions should be evaluated and tested before they’re rolled out to ensure sufficient guardrails are in place. Contracts should require vendors to provide advance notification of material changes.

Unsanctioned GenAI being accessed in your agency

Be aware of the risk of unsanctioned GenAI applications being accessed by users in your organisation. These could include web applications or public GenAI systems outside your enterprise environment, and unapproved use could lead to data breaches.

Some GenAI applications retain user input to train large language models (LLMs) without offering a way to opt out, and often there's no way to have the data deleted. Prevent the use of unapproved systems by continuously monitoring for them and blocking as needed.
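To make continuous monitoring concrete, the sketch below shows one possible approach, assuming a simple web proxy log format. The log format, file name, and domain list are illustrative only; a real deployment would integrate with your proxy or secure web gateway and your agency's register of approved tools.

```python
# Illustrative sketch: flag proxy log entries that hit known public GenAI
# domains. The log format and domain list are hypothetical; adapt both to
# your proxy and your agency's approved-tools register.
import re
from pathlib import Path

# Hypothetical blocklist of public GenAI endpoints not approved for agency use.
UNSANCTIONED_DOMAINS = {
    "chat.example-genai.com",
    "api.example-llm.net",
}

# Assumes a simple "timestamp user domain" line format in the proxy log.
LOG_LINE = re.compile(r"^(?P<ts>\S+)\s+(?P<user>\S+)\s+(?P<domain>\S+)")

def flag_unsanctioned(log_path: str) -> list[dict]:
    """Return log entries that reference unsanctioned GenAI domains."""
    hits = []
    for line in Path(log_path).read_text().splitlines():
        match = LOG_LINE.match(line)
        if match and match.group("domain") in UNSANCTIONED_DOMAINS:
            hits.append(match.groupdict())
    return hits

if __name__ == "__main__":
    for hit in flag_unsanctioned("proxy.log"):
        print(f"{hit['ts']}: {hit['user']} accessed {hit['domain']}")
```

Flagged entries could feed an alerting pipeline or inform blocking rules at the gateway.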

The output of a GenAI system can also be a security risk

Assess the outputs of GenAI, especially if it's used for code generation. Do not trust generated code until it has been verified through quality control processes, such as code review and automated testing.
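As an illustration only, the sketch below shows a minimal automated gate for AI-generated Python code: it rejects a snippet that does not parse, then checks that the project's existing test suite still passes once the code is added. The function name, file-placement step, and the assumption that pytest is available are ours; such a gate complements, rather than replaces, human code review.

```python
# Minimal, illustrative quality gate for AI-generated Python code.
# Assumes pytest is installed and the project already has a test suite.
import ast
import subprocess
from pathlib import Path

def passes_quality_gate(generated_code: str, target_path: str) -> bool:
    """Reject generated code that fails basic checks or breaks the tests."""
    try:
        ast.parse(generated_code)  # reject anything that does not even parse
    except SyntaxError:
        return False
    # Place the generated code in the working tree, then run the test suite
    # so the change is exercised alongside the existing code.
    Path(target_path).write_text(generated_code)
    result = subprocess.run(["pytest", "-q"], capture_output=True)
    return result.returncode == 0
```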

Other security considerations

  • Consider an early discussion with your security team to help establish your agency’s level of preparedness.
  • Ensure your agency’s information management practices support data loss prevention measures, including sensitivity labelling and access management.
  • Ensure all GenAI systems used by your agency are certified and accredited before they're made available to users, as advised in the New Zealand Information Security Manual (NZISM) chapter 4, 'System Certification and Accreditation'. The certification process should validate that security controls, such as application monitoring, are in place to identify misuse and support investigations.
  • Ensure sensitive or classified information is not entered in public GenAI tools. Even information that does not identify a person on its own could be aggregated over time to re-identify them. For more information, see Privacy and GenAI. An illustrative screening sketch follows this list.
  • Ensure your staff receive adequate training and guidelines on the acceptable use of GenAI systems. Don't focus your security strategy solely on technology. This will help your staff make the right choices and know when and how to seek assistance if they spot a potential vulnerability.
  • Review NZISM chapter 14 covering ‘Software Security’ and the Guidelines for Secure AI System Development if your agency is developing GenAI systems or using GenAI for application development.
  • Consider ways in which GenAI applications can be used to improve the security posture of your agency.
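As one illustration of the point above about sensitive information, the sketch below screens a prompt for simple sensitive patterns before it can be sent to a public GenAI tool. The patterns and function name are hypothetical examples; a real deployment would rely on your agency's data loss prevention tooling and classification scheme rather than a hand-rolled filter.

```python
# Illustrative sketch: screen a prompt for sensitive patterns before it is
# sent to a public GenAI tool. The patterns below are examples only.
import re

# Hypothetical patterns for data that should never leave the agency.
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "NZ phone number": re.compile(r"\b0\d{1,2}[ -]?\d{3}[ -]?\d{4}\b"),
    "classification marking": re.compile(r"\b(IN CONFIDENCE|SENSITIVE|RESTRICTED)\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

if __name__ == "__main__":
    findings = screen_prompt("Please summarise this RESTRICTED briefing.")
    if findings:
        print("Blocked: prompt contains", ", ".join(findings))
```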

Example scenario of security and GenAI

An agency developed a public-facing AI tool to simplify access to services for people in New Zealand. During the testing phase, a misconfiguration was identified that allowed the GenAI chatbot to inadvertently retrieve and share privileged information.

Because sensitivity labels were already used to limit chatbot responses, the agency's information management practices supported a quick resolution. The issue was fixed before the tool was made public, ensuring information was managed appropriately, maintaining trust, and saving the agency from significant reputational damage.
