API guidelines — Part B: API security 2022
Learn about API (Application Programming Interface) security reference architecture and the technical details for implementing API security.
1. API security
1.1. Introduction
According to Gartner, by 2022 API abuses will be the most frequent attack vector for enterprise web application data breaches. As such, securing Representational State Transfer (REST) APIs is fundamental to the success of any API strategy or implementation. Any approach should include the following 3 key areas:
- domain of consideration
- domain of control
- identity[Footnote 1]-centric and holistic view.
1.1.1. Domain of consideration
Developing and securing RESTful APIs is more than just applying standards; it is a framework and state of mind that has to be understood and followed jointly by business owners, IT architects and developers.
The API security framework must be defined at the organisation and business level and should always consider who, how and what users and applications (both internal and external to an organisation) will interact with the APIs. These considerations should be defined at the beginning of any project and driven by a desired business outcome — for example, providing the public with real-time information about the location and address of their nearest general practitioner.
1.1.2. Domain of control
The domain of control contains the components that need to be developed and deployed, and that must work together to provide API security supporting:
- registered application developer access to the API
- authenticated and authorised consuming application access to the API or events
- protected communication between the API, the event broker and the consuming application to ensure confidentiality and integrity
- the ability for applications to act on behalf of a customer.
Figure 1 illustrates these domains of consideration and control working together to provide API security.
1.1.3. Identity-centric and holistic view
The security of APIs should not be seen as a bounded solution; it needs to be approached from a holistic perspective, incorporating the management and understanding of user identities across:
- enterprise security
- mobile security and application security
- API security.
For example, securing an API that is targeted at a mobile application is not just about applying an OAuth (Open Authorization) profile; it should also take into consideration how mobile devices and applications are managed and secured, and how the enterprise security framework (for example, authentication) can be leveraged.
People- or user-centric security frameworks are key to defining the required access policies and controls for APIs. The management of identity (including users, devices, servers and applications) should be central to any API security framework.
1.2. Definitions for APIs covered in these guidelines
This version of the API guidelines expands its remit beyond REST APIs to include 3 additional API types: GraphQL, AsyncAPI and gRPC.
These additional types of API are covered in Part C: API development 2022, but from a high-level perspective the following definitions apply to all 4 API types.
- Authentication
- The process of verifying the identity of a customer (or device) who presents identity credentials and authentication keys.
- Authentication authority
- A system entity that provides authentication services to ensure only permitted customers (or devices) gain access.
- Authorisation
- The process of verifying that a customer (or device) has the right to perform an action and what they are allowed to access.
- Availability
- The ability to minimise API downtime by implementing threat protection.
- Confidentiality
- The ability to ensure information that is sent between users, applications and servers is only visible to those authorised to use it.
- Consent management
- The process that manages the collection of user data, ensuring that the required policies are applied, and the required consent has been obtained from the user, allowing the user to understand how the data is used and to be able to opt out if required. This is being driven by many global privacy laws.
- Delegation
- When a user authorises another user (or device) to serve as his or her representative for a particular task.
- Delegated authorisation
- A framework that defines how an owner of a set of resources can grant access (delegate) to a designated user or consuming application to perform actions on some of those resources on the owner’s behalf, but without sharing their credentials.
- Federation
- The process that allows identity credentials to be leveraged and reused across multiple authentication authorities for authentication and / or Single Sign-On (SSO).
- Integrity
- The ability to ensure that information received has not been modified by a third party, also providing non-repudiation services.
- Provisioning
- The automated or manual service for aggregating and correlating identity data resulting in the creation of user (IT) accounts and the delivery of user metadata used by systems to define access policies and controls for services.
- Threat protection
- The service for protecting APIs (at the ingress and egress points of an organisation) from known threats (for example, OWASP (Open Web Application Security Project) Top Ten) by preventing misuse or loss of availability.
Note: Threat protection should also be addressed at the operating system hardening level and should be an integral part of the API software development.
- User-Managed Access (UMA)
- Developed to provide a user data delegation model that enables a resource owner to control the authorisation of data sharing and other protected-resource access made between online services on the owner’s behalf or with the owner’s authorisation by an autonomous requesting party.
- Zero Trust (ZT)
- An evolving set of cybersecurity paradigms that move network defences from static, network-based perimeters to focus on users, assets and resources. A Zero Trust Architecture (ZTA) uses ZT principles to plan enterprise infrastructure and workflows. ZT assumes there is no implicit trust granted to assets or user accounts based solely on their physical or network location.
1.3. Risks
APIs are another channel into an organisation’s resources and information. Most organisations are accustomed to exposing a web interface, with good control over what information is released via that interface.
APIs offer direct, machine-to-machine access to resources and information, which makes it less obvious when information is incorrectly exposed. It becomes increasingly important for internal business stakeholders to decide what information and resources should be released via this channel, and to whom.
The security risks that APIs introduce will be similar to the traditional risks experienced on any web channel (web sites and web applications), except that there:
- is an increased attack surface due to more ways in, multiple services to potentially exploit
- is a risk of inadvertently exposing backend data, backend architecture and backend applications
- is potential for greater consequences if your API is compromised or hijacked and serves up malicious payloads to consumers
- are greater privacy concerns where APIs involve personally identifiable information
- are risks of malware in uploaded files where inline scanning is absent or skipped due to its performance overhead
- is a risk of malformed APIs that are developed with limited or inappropriate security validation
- is a risk related to cloud- and container-based systems where security best practices are not applied.
Risks posed by APIs include loss of integrity, confidentiality, and availability of data, for example:
- loopholes retrieving API resources may offer access to more information than was intended (especially if fields requested are built straight into a database (DB) query)
- write operations offer a means of polluting data stores, feeding misinformation into a system
- write operations could be used to form a Denial of Service (DoS) attack by overloading the server or data store
- use of wildcards in search fields can shut down APIs and backend applications
- cross-site scripting attacks made possible by consuming applications not checking user inputs
- Structured Query Language (SQL) injection into consuming applications that cause database damage at the API backend
- parameter attacks such as Hyper Text Transfer Protocol (HTTP) Parameter Pollution (HPP)
- man-in-the-middle attacks, modifying API requests or responses leading to data eavesdropping or misinformation insertion
- subverting authentication or authorisation mechanisms to spoof messages from legitimate consumers
- credential leakage or stealing authentication tokens to obtain information illicitly
- system information leakage through API error messages revealing details about an API’s construction or underlying system makeup
- broken session IDs, keys and authentication create exposure to unauthorised access through authentication factors that are not functioning because of poor security design or technology bugs
- other broken resource identifiers, authentication and authorisation mechanisms, allowing attackers to exploit flaws to obtain access, either temporarily or permanently
- exposing too much information through the use of generic resource APIs rather than specialising APIs for each specific circumstance.
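Several of the risks above (SQL injection, and loopholes where requested fields are built straight into a database query) share a root cause: caller-supplied values being treated as query text. The following sketch, using Python's built-in sqlite3 module and a made-up practitioners table, contrasts the vulnerable approach with a parameterised query:

```python
import sqlite3

# In-memory database standing in for an API backend store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE practitioners (id INTEGER, name TEXT, city TEXT)")
conn.execute("INSERT INTO practitioners VALUES (1, 'Dr A', 'Wellington')")
conn.execute("INSERT INTO practitioners VALUES (2, 'Dr B', 'Auckland')")

def find_by_city_unsafe(city: str):
    # VULNERABLE: the caller-supplied value is concatenated straight into
    # the query text, so a crafted value is parsed as SQL.
    return conn.execute(
        f"SELECT name FROM practitioners WHERE city = '{city}'").fetchall()

def find_by_city(city: str):
    # SAFE: the value is bound as a parameter and never parsed as SQL.
    return conn.execute(
        "SELECT name FROM practitioners WHERE city = ?", (city,)).fetchall()

malicious = "x' OR '1'='1"
print(find_by_city_unsafe(malicious))  # leaks every row
print(find_by_city(malicious))         # returns no rows
```

The same principle applies regardless of language or database driver: build queries internally and pass user input only as bound parameters.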
1.3.1. Mitigation approach
API risks need to be mitigated in a number of ways. There is no single off-the-shelf security solution that can be dropped in to address all aspects of API security. APIs need to be secure by design — security needs to be built in from scratch and be considered within the context of existing protection mechanisms. The main areas that API security covers are:
- identity and access management to provide:
- authentication
- authorisation and delegated authority
- federation
- confidentiality
- integrity
- availability and threat detection
- logging, alerting and incident management.
This ensures that:
- consuming applications are known and can only access the API resources they are allowed to
- message content has not been tampered with between consumer and provider
- resources are reliably served by the intended provider when the consuming application makes a request
- the API will be available when needed, and not brought down by attacks from malicious consuming applications.
In order to address API security risks, a security framework is needed that encapsulates all these aspects of security.
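The integrity property above ("message content has not been tampered with between consumer and provider") can be illustrated with a keyed hash. This is a minimal sketch using Python's standard library; the shared key and payload are made up, and real deployments rely on TLS plus signed tokens rather than hand-rolled message tags:

```python
import hashlib
import hmac

SHARED_KEY = b"demo-key-exchanged-out-of-band"  # illustrative only

def sign(payload: bytes) -> str:
    # The provider attaches an HMAC-SHA256 tag to each message.
    return hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, tag: str) -> bool:
    # The consumer recomputes the tag; compare_digest avoids timing leaks.
    return hmac.compare_digest(sign(payload), tag)

msg = b'{"balance": 100}'
tag = sign(msg)
assert verify(msg, tag)                      # intact message accepted
assert not verify(b'{"balance": 999}', tag)  # tampered message rejected
```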
1.3.2. Zero Trust and decoupled environments
The Zero Trust Network Access (ZTNA) model has been talked about for a number of years. In the current environment it is now seen by many analysts as the direction organisations need to take. This section highlights areas of the model that can help mitigate risks related to APIs.
ZT architecture removes the concept of trusted internal and untrusted external networks and focuses on the policy of ‘never trust, always verify’.
The adoption of cloud services has changed this security model to one where all actors (employees, partners, and so on) require access controls no matter where, or from what, they are connecting.
ZT is seen as an architecture that is critical for organisations moving towards decoupled microservices and API architectures.
Microservices architectures are the backbone of many API services offered by organisations and have embraced the concepts of identity and how permissions are created and enforced between different services.
Every microservice requires an identity that can be confirmed, and the required permissions and policies applied that should be based on attributes and contextual access.
Some of the areas that need to be considered by an organisation when planning their implementation of ZT include:
- applying strong identification and authentication
- building a dynamic digital trust model in which trust is only valid for the current session
- constant evaluation
- always authenticate
- applying contextual authorisation (attributes, consent, location, time, behaviour)
- building in a digital risk capability that maps to a level of confidence and constantly re-evaluates
- leveraging identity and access management capability from identity proofing to adaptive authentication
- incorporating endpoint security
- transaction-level verification and continuous session validation
- ensuring data security is applied with reference to encryption and user privacy controls including consent management
- implementing strong auditing, logging, event reporting and forensics providing insight and behavioural patterns
- smart threat detection with machine learning
- injecting identity context into the API traffic (user, application, device)
- using JWTs (JSON (JavaScript Object Notation) Web Tokens) to provide secure and validated claims, which can also be encrypted
- applying fine-grained access at the egress point, allowing the enforcement point to allow, block, filter or modify the response
- propagating identity to backend services so they can make access decisions
- using JWTs to limit chatter in microservice environments
- securing all APIs and treating them as if they are public APIs.
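To make the JWT point above concrete, the following sketch mints and validates an HS256-signed JWT carrying user and device context, using only the Python standard library. The secret, claims and expiry are illustrative; production systems should use a vetted JOSE library, and typically asymmetric algorithms (RS256 / ES256) so that resource servers need only the public key:

```python
import base64
import hashlib
import hmac
import json
import time

SECRET = b"demo-secret"  # illustrative; real deployments use managed keys

def b64url(data: bytes) -> bytes:
    return base64.urlsafe_b64encode(data).rstrip(b"=")

def b64url_decode(data: str) -> bytes:
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def make_jwt(claims: dict) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = header + b"." + payload
    sig = b64url(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return (signing_input + b"." + sig).decode()

def verify_jwt(token: str) -> dict:
    header, payload, sig = token.split(".")
    signing_input = (header + "." + payload).encode()
    expected = b64url(
        hmac.new(SECRET, signing_input, hashlib.sha256).digest()).decode()
    if not hmac.compare_digest(expected, sig):
        raise ValueError("bad signature")
    claims = json.loads(b64url_decode(payload))
    if claims.get("exp", 0) < time.time():
        raise ValueError("token expired")
    return claims

# Identity context (user, device) injected into API traffic as signed claims.
token = make_jwt({"sub": "user-123", "device": "phone-7",
                  "exp": time.time() + 300})
print(verify_jwt(token)["sub"])  # -> user-123
```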
1.4. Security reference architecture
This section describes an API security reference architecture and its component parts to inform the construction of an API security framework. It is important to note that REST, gRPC, GraphQL and AsyncAPI are different architectural models for building synchronous and asynchronous APIs that can leverage the security controls (for example, OAuth 2.0 and OpenID Connect) defined in these guidelines, but they all have their own intrinsic security models (for example, throttling consideration in GraphQL) which are not covered in these guidelines.
1.4.1. Actors and security functional capabilities
Identity and access management defines the actors (users and devices) who interact with system components that manage and expose APIs. Figure 3 shows a typical model of API components (support stack) and actors. The actors and components are described in tables 1 and 2.
The components defined remain valid no matter what API architecture (internal, cloud, hybrid) is implemented.
Actors | Description |
---|---|
External users | |
Devices | |
Internal users | |
The core components of an API security framework (the development portal, manager and gateway) provide a grouping of functionality. These functions can be delivered by discrete applications, bespoke code development or commercial off-the-shelf (COTS) products, or by leveraging existing devices that can be configured to provide these functions or services.
Note: Some of the functionality may overlap or be combined into 1 or more products depending on the vendor used.
Table 2 lists the functions of a mature API delivery and security framework for an agency that is working with the development community. Together, these functions provide full support for the application developer building and developing consuming applications that will use the APIs exposed by the agency.
Depending on the requirements of the agency, some of these functions might not be required — for example, if the agency API exposed is purely for public consumption and only allows consuming applications to read information, then only a solution for enforcing threat protection (for example, DoS) might be required, and this could be delivered using an existing service protection capability.
Core components | Description |
---|---|
API portal | The API portal provides functions for internal and external application developers, and supports the development, build and test of consuming applications. |
API manager | The API manager covers the functions for creating, publishing and managing APIs. |
API gateway | The API gateway exposes APIs and enforces authentication, authorisation and threat protection at run time. |
Event broker | The event broker (or ‘broker’) is responsible for receiving events (also known as messages) from publishers (services) and delivering them to subscribers (services), that is, the consumers who have registered interest in events of that type. Brokers often store events until they are delivered, which is what makes event-driven architectures very resilient to failures. Examples of brokers are RabbitMQ, Apache Kafka and Solace. With the emergence of the AsyncAPI standard, event-driven architectures are becoming more prevalent. |
API documentation | OpenAPI (for REST APIs) and AsyncAPI (for message and event-based APIs) are documentation specifications in a machine-readable format. |
API monitoring and analytics | Business owners and security specialists need to be able to monitor the use of APIs. This helps adapt to changes in usage and demand. |
Credential stores | The credential stores are identity and key stores used to securely store credentials and keys. These stores are used by the API gateway for authorisation and authentication services. |
Figure 4 shows how the model can also be split with the API support stack duplicated — 1 set to support internal API usage and 1 set to support external use.
Authentication, authorisation, confidentiality, integrity and availability can be applied across the components in the support stack, depending on component capabilities.
The actual configuration and location of the API functional capabilities will vary depending on individual circumstances (for example, some capabilities may be internal, some may be in the cloud, where API development is outsourced then an ‘internal’ functional stack may belong to the outsourcer, and so on). Also, some components might not be required or can be developed in-house.
1.5. Building secure APIs
Building in security starts from the ground up, so development of APIs needs to be done with awareness of the API security risks associated with the resources and information being exposed, and with appropriate mitigations in place for these API security risks.
When developing an API it is advisable to carefully consider potential malicious use, especially:
- PUTs and POSTs — which change internal data and could be used to attack or misinform
- DELETEs — which could be used to remove the contents of an internal resource repository.
Standard secure coding practices are always recommended, in line with New Zealand Information Security Manual (NZISM) guidance.
Security by Design Principles according to OWASP — patchstack
But API development should take special note of:
- design-driven development (refer to Part C: API development 2022)
- OWASP Top Ten — OWASP: a summary of the standard attacks and mitigations
- REST Security Cheat Sheet — OWASP: REST-specific risks and how to prevent them — for example, input validation
- OWASP API Security Project — OWASP: top 10 API-specific risks and how to prevent them.
The OWASP Cheat Sheet Series provides cheat sheets on a variety of security-related subjects.
OWASP Cheat Sheet Series — OWASP
It is worth reviewing them to see if others may apply to your specific circumstances. Special note should be taken of the following where your API accepts input values as parameters:
- OWASP Input Validation Cheat Sheet — OWASP: a summary of input risks and mitigations
- OWASP Cross-Site Scripting Prevention Cheat Sheet — OWASP: how to escape inputs to prevent cross-site scripting
- OWASP SQL Injection Prevention Cheat Sheet — OWASP: ensuring database queries are built internally
- OWASP Query Parameterization Cheat Sheet — OWASP: examples of SQL injection and stored procedure vulnerabilities.
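As a concrete illustration of the allow-list approach the input validation cheat sheet recommends, the sketch below rejects unknown parameters and any value that fails its pattern. The parameter names and patterns are hypothetical:

```python
import re

# Allow-list patterns for each expected API parameter (illustrative names).
PARAM_RULES = {
    "patient_id": re.compile(r"^[A-Z]{3}[0-9]{4}$"),  # hypothetical ID format
    "page":       re.compile(r"^[0-9]{1,4}$"),
    "city":       re.compile(r"^[A-Za-z][A-Za-z '\-]{1,49}$"),
}

def validate_params(params: dict) -> dict:
    """Return a map of parameter name to error; empty means all valid."""
    errors = {}
    for name, value in params.items():
        rule = PARAM_RULES.get(name)
        if rule is None:
            errors[name] = "unexpected parameter"   # reject unknown inputs
        elif not rule.fullmatch(value):
            errors[name] = "invalid value"          # reject off-pattern values
    return errors

print(validate_params({"city": "Wellington", "page": "2"}))        # {}
print(validate_params({"city": "x' OR '1'='1", "debug": "true"}))  # both rejected
```

Allow-lists (defining what is valid) scale better than deny-lists (enumerating known-bad inputs), which attackers routinely bypass.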
It is also recommended that a security testing capability be incorporated into the development cycle that provides continuous, repeatable and automated tests to find security vulnerabilities in APIs and web applications during development and testing.
1.5.1. API security design principles
The following key principles should be applied when designing API security frameworks.
- Design with the objective that the API will eventually be accessible from the public internet, even if there are no plans to do so at the moment.
- Security first — build security into the API when being developed.
- Use a common authentication and authorisation pattern, preferably based on existing security components: avoid creating a bespoke solution for each API.
- Least privilege — access and authorisation should be assigned to API consumers based on the minimal amount of access they need to carry out the functions required, and strong authentication and authorisation models are applied.
- Maximise the entropy (randomness) of security credentials by using API keys rather than usernames and passwords for API authorisation, as high-entropy API keys are far more challenging for potential attackers to guess.
- Balance performance with security with reference to key life times and encryption / decryption overheads.
- Manage the exposure and lifetime of all APIs, and ensure all organisation APIs are covered by proactive scanning.
- Validate the content of all incoming messages, ensuring communications are secured (in other words, encrypted) and apply threat protection policies (for example, injection and throttling).
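The "maximise entropy" principle above can be sketched as follows: keys are generated from a cryptographically secure random source, only a hash is stored server-side, and comparison is constant-time. This is an illustrative fragment, not a complete key management scheme:

```python
import hashlib
import hmac
import secrets

def issue_api_key() -> tuple:
    """Return (key to hand to the consumer, hash to store server-side)."""
    key = secrets.token_urlsafe(32)  # 32 random bytes, ~256 bits of entropy
    return key, hashlib.sha256(key.encode()).hexdigest()

def check_api_key(presented: str, stored_hash: str) -> bool:
    # Hash the presented key and compare in constant time, so a database
    # leak exposes only hashes and comparison leaks no timing information.
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    return hmac.compare_digest(candidate, stored_hash)

key, stored = issue_api_key()
assert check_api_key(key, stored)
assert not check_api_key("guessed-key", stored)
```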
2. Usage patterns
Different API usage patterns require different authentication and authorisation models.
Note: The security components defined in the following diagrams are located, for simplicity, in the ‘trusted’ zone, for example, an area managed by an agency. It is possible that these components could reside in different zones that relate to varying levels of trust — for example, a Demilitarized Zone (DMZ).
2.1. Pattern 1: Internal use only
In pattern 1, an API is developed for internal use only by agency applications or systems.
Figure 5 shows there is the need to authenticate and authorise the internal user to the internal consuming application, and implement protection between the internal application and the API on the API gateway, which interacts with the backend application.
2.2. Pattern 2: Identifying an application developer
When an API is released for external use, the first interaction will be with application developers who want to try the API out. This will normally be via the API developer portal.
Figure 6 shows that the application developer needs to be authenticated to the API developer portal to register their new application and attain the relevant credentials, which are used to secure interactions with the new application during development. The developer has to agree to the conditions of use, and subsequent usage of the API can then be traced via the API keys (see pattern 4: identifying a consuming application).
2.3. Pattern 3: Anonymous consuming application
Pattern 3 applies when the API provider does not need to know which consuming applications are using their APIs.
Figure 7 shows a web application (the consuming application) on a web browser is unidentified (for example, has no API key) but can still use the API.
2.4. Pattern 4: Identifying a consuming application
Pattern 4 applies when the API provider needs to know which consuming applications are using their APIs (for communication, logging and analytics purposes).
Figure 8 shows a web application (the consuming application) on a web browser is authenticated (for example, has an API key, client secret, and so on) to use the API, but this is only used as a means of identification or registration.
2.5. Pattern 5: Authorising a system-to-system interaction (B2B)
The system-to-system model is where an API is being used to enable information sharing or integration with an external system — for example, a partner agency gaining access to supporting information.
Figure 9 shows an external application (the consuming system) needs to be authenticated to access the API.
In this model the aim is to ensure that only the correct consuming system has access to the API, and that the API is protected from malicious use. Business to business (B2B) models often carry sensitive information, so the consuming system needs to be authenticated to the provider for authorised access, confidentiality and integrity.
2.6. Pattern 6: Authorising a consuming application
Pattern 6 covers the case where different external consuming applications may be granted different levels of access to resources. The application’s access is not dependent on which customer is currently using the application, but on which application is using the API, for example, perhaps the developer for Application A pays a fee so Application A gets a different quality of service from the API than Application B.
Figure 10 shows 2 consuming applications, an application on a smartphone (consuming app A) and a web application on a web browser (consuming app B). Both consuming applications must be authenticated and authorised before accessing the API. This is normally enforced at the API gateway.
2.7. Pattern 7: Authorising a customer (delegated authority)
In pattern 7, external consuming applications may be granted different access to resources depending on which customer is currently using the application — for example, a learner authorises a mobile application to retrieve their own record of achievement.
Figure 11 shows the customer authenticates through the application (consuming application) on their smartphone device. The device and / or the application is already authorised to use the API. The customer logs in and authorises the device and / or application to access their information, for example, an internet banking application.
2.8. Pattern 8: Decoupled flow — Client Initiated Backchannel Authentication (CIBA)
In a traditional flow, the customer or end user is redirected to an authentication page. In pattern 8, shown in figure 12, the authentication and consent process is delegated to an authentication device of the end user. This process is performed via a back channel with a request and response. This flow decouples the authentication device from the traditional flow, so the consuming application or client can initiate the authentication and consent of an end user via an out-of-band mechanism.
2.9. Quick reference
The following provides a quick reference to identify the most appropriate authentication and authorisation model to use for the patterns defined. The models are explained in detail in subsequent sections.
The initial consideration for any API security framework should be to use the authorisation code grant type. The following are pointers for agencies when considering their requirements.
- It is good practice to use API keys as the basis of all system-to-system authentication, such as consuming applications to API.
- For pattern 6: authorising a consuming application, the client credentials grant type is recommended but the API keys model can be used instead.
- For pattern 7: authorising a customer (delegated authority):
- the authorisation code grant type is recommended where the customer is the resource owner, the provider (agency) controls the resource server, but the authorisation server is not owned by the provider or is elsewhere within the provider organisation
- where there is a strong desire to not interrupt the customer’s experience with the consuming application, it is appropriate to use client initiated backchannel authentication
- the resource owner password credentials grant type is appropriate where the customer is the resource owner, the provider (agency) controls the resource server, and the API gateway is the OAuth 2.0 authorisation server.
- Using only API keys for customer authentication is not recommended and should be a last resort.
- For pattern 1: internal use only, the authorisation code grant type is recommended, where practical. Otherwise, it may be appropriate to leverage the provider agency’s existing internal authentication and authorisation providers.
- For pattern 2: identifying an application developer, it is appropriate to leverage (or build) the capabilities of the developer portal (for example, username and password).
- Pattern 8: decoupled flow (CIBA) supports out-of-band authentication and consent approval. It is recommended for organisations that want to enable push notification for web-based services and applications.
- The authorisation code grant type has been enhanced with Proof Key for Code Exchange (PKCE), which addresses known attacks such as authorisation code interception. The draft OAuth 2.1 specification now recommends PKCE as the default for the authorisation code grant type.
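PKCE itself is straightforward to implement. The sketch below generates a code_verifier and its S256 code_challenge as defined in RFC 7636, and shows the check the authorisation server performs at token exchange:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple:
    """Generate a PKCE code_verifier and S256 code_challenge (RFC 7636)."""
    # The client creates a high-entropy verifier and keeps it secret...
    verifier = base64.urlsafe_b64encode(
        secrets.token_bytes(32)).rstrip(b"=").decode()
    # ...and sends only its SHA-256 hash with the authorisation request.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def challenge_matches(verifier: str, challenge: str) -> bool:
    """Authorisation server check at token exchange: an intercepted
    authorisation code is useless without the original verifier."""
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == challenge

verifier, challenge = make_pkce_pair()
assert challenge_matches(verifier, challenge)
assert not challenge_matches("stolen-but-wrong-verifier", challenge)
```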
3. API authentication and authorisation basics
Before looking at the technical solutions to API authentication and authorisation, this section describes the situations where authentication and authorisation are appropriate.
Authorisation and authentication are intrinsically linked inside the OAuth 2.0 framework, which in itself is regarded as synonymous with securing APIs. OAuth 2.0 uses its own terminology, which is worth becoming familiar with when adopting an OAuth 2.0 approach.
As the OAuth 2.0 framework is a commonly accepted approach to the securing of modern APIs (large companies like Google, Microsoft and Twitter use it), this section also covers an introduction to OAuth 2.0.
3.1. Authentication — Required
When securing APIs, authentication is required to identify the consumers and / or consuming applications that want to access or consume an API. Authentication enables the API provider to identify all consumers of an API and to confirm that the consumer requesting access is who they say they are. This does not automatically authorise them to access the APIs or the underlying resources.
Providers should define a registration process for each category of consumer, whether system or human.
The importance of knowing who is using an API cannot be overstated. This is critical when it comes time to implement aspects of the API service life cycle, such as service deprecation, or notification of an outage. It also enables the API provider to implement different service levels for different consumers. For example, commercial customers might have a higher request limit per day than customers not paying for the service.
Making application developers register for use of the API also means they must sign up to terms and conditions that define how they might use the data they get from the API, and that they agree to ensure that their consuming applications will behave in an acceptable and non-abusive manner.
3.1.1. Authentication techniques
Figure 13 lists the authentication techniques that can be used to secure APIs and Appendix B — Authentication provides guidance on when to use them.
There is a second security token option built on Security Assertion Markup Language (SAML), but it is only recommended if SAML is already in place within a particular sector (for example, education); otherwise SAML is seen as a deprecated model for REST APIs.
These guidelines do not cover the SAML option. For details on how SAML is used as an authentication token in OAuth 2:
3.2. OAuth 2.0 basics
OAuth 2.0 provides a more comprehensive and extensible approach to security than some of the basic authentication and authorisation mechanisms. Based on security tokens, it can be used for delegated authority such as enabling a mobile application to act on behalf of its user. For more details see Appendix C.
The IT industry perceives that any production-quality API security framework needs to be based on OAuth 2.0. In reality, OAuth 2.0 is a delegated authorisation framework, but it provides the foundations on which secure services can be built to deliver a complete security solution.
OAuth 2.0 requires some fundamental security components in order to work, and has its own terminology for describing these components and their roles.
Figure 14 illustrates these components and their roles:
- Resource owner — the person who has the right to grant a third party (for example, a consuming application) access to a protected resource (for example, information about themselves). Quite often the resource owner is the customer.
- Client (or client application) — a consuming application requesting access to a protected resource on behalf of a third party, for example, a mobile application on a user’s (third party) smartphone, or a web application accessed via a browser.
- OAuth server / authorisation server — provides a security token service / infrastructure for managing tokens. It is responsible for issuing:
- authorisation (code) grant — approval tokens driven by resource owner approval
- access tokens used by the API to authorise access
- refresh tokens that allow new access tokens to be requested by the client and re-issued within a specified timeframe.
- Authentication server — this is not a component of OAuth 2.0, or defined by OAuth 2.0, but needs to be considered when defining a complete OAuth 2.0 framework. This could be a simple login capability or managed by an identity service provider.
- Resource server — this hosts the protected resources (APIs and backend applications) that only allow authenticated and authorised clients by:
- checking the access token in each incoming API request
- validating the access token against the authorisation server and the permitted access rights.
OAuth 2.0 comes with 4 types of grant flows (how client applications can gain access tokens), each appropriate to different situations and solution requirements:
- client credentials
- resource owner password credentials
- authorisation code
- implicit.
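Of these, the client credentials grant is the simplest: the client authenticates directly to the authorisation server's token endpoint using its own credentials, with no resource owner involved. The following Python sketch builds the headers and body such a request would carry (the client ID, secret and scope values are hypothetical placeholders, not part of any real service):

```python
import base64
import urllib.parse

def build_client_credentials_request(client_id, client_secret, scope=None):
    """Build the headers and body of an OAuth 2.0 client credentials token request.

    Returns what would be POSTed to the authorisation server's token endpoint
    over TLS.
    """
    # The client authenticates itself with HTTP Basic auth (RFC 6749, s2.3.1).
    credentials = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
    params = {"grant_type": "client_credentials"}
    if scope:
        params["scope"] = scope
    return {
        "headers": {
            "Authorization": f"Basic {credentials}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        "body": urllib.parse.urlencode(params),
    }

request = build_client_credentials_request("my-client", "s3cret", scope="read")
print(request["body"])  # grant_type=client_credentials&scope=read
```

In practice this body is POSTed to the token endpoint over TLS, and the JSON response carries the access token the client then presents to the API.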
OAuth 2.0 is appropriate where there is a requirement for third party applications to access restricted resources. This should help mitigate the risks relating to:
- third-party applications storing user credentials (username and password)
- resource servers having to support user stores and password authentication
- resource owners not being able to define granular access to resources, including duration
- revoking credentials that are compromised.
Resource-based scopes and Proof Key for Code Exchange (PKCE), covered in the next 2 sections, are additional concepts that should be clearly understood, as they provide additional security controls. PKCE in particular is a security control that is expected to be mandated in future versions of the OAuth 2 specification.
3.2.1. Resource-based scopes (for coarse-grained access)
Access and refresh tokens provide confirmation that the end user has consented to delegate access rights to the client. Resource-based scopes (scopes), which are linked to the access token, provide an additional level of authorisation, each one defining a specific capability, for example ‘read’ document or ‘write’ document.
Scopes are approved by the end user and enforced by the backend API. The authorisation server validates the token, which contains the scopes that have been consented to, and validates the client is not exceeding its access rights.
3.2.2. Proof Key for Code Exchange (PKCE)
The authorisation code grant type is the recommended solution for most scenarios and one of the most secure options, especially for server-side applications, where the final access token required to call the resource APIs is protected by an encrypted channel between the authorisation server and the client application (residing on a server).
To obtain the access token the client application makes an authorisation request to the authorisation server and if the required information is provided and approved, the authorisation server presents the client application with a code that is then used to request the access token over the secure back channel.
The back channel link is normally secured using Mutual Transport Layer Security (MTLS), but the initial flow to obtain the code token is over Transport Layer Security (TLS) and in certain architectures can be intercepted in a man-in-the-middle attack.
PKCE was developed specifically to address this man-in-the-middle attack, known as the authorisation code interception attack. It is particularly relevant in mobile scenarios, where a malicious application (with elevated rights) installed on the same device as the client application could intercept the code and use it to obtain the access token.
PKCE works by having the client generate a random secret (the code verifier), known only to the client, from which a second value (the code challenge) is derived using a hashing method.
The code challenge (together with the hashing method used) is sent during the code flow and is stored by the authorisation server.
When the client exchanges the authorisation code for the token, it also includes the original code verifier, which the authorisation server validates against the stored code challenge before the access token is returned to the client.
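These PKCE mechanics can be sketched with standard library primitives, using the S256 challenge method defined in RFC 7636:

```python
import base64
import hashlib
import secrets

def make_pkce_pair():
    """Generate a PKCE code verifier and its S256 code challenge (RFC 7636)."""
    # Code verifier: a high-entropy random string known only to the client.
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # Code challenge: base64url(SHA-256(verifier)), sent with the initial
    # authorisation request and stored by the authorisation server.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

def server_validates(stored_challenge, presented_verifier):
    """Authorisation server check when the code is exchanged for a token."""
    digest = hashlib.sha256(presented_verifier.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode() == stored_challenge

verifier, challenge = make_pkce_pair()
assert server_validates(challenge, verifier)           # legitimate client
assert not server_validates(challenge, "stolen-code")  # attacker lacks the verifier
```

The client sends the challenge with the authorisation request and the verifier with the token request; the server-side check shown is what defeats an attacker who has intercepted only the code.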
3.3. Basic OAuth 2.0 implementation patterns
There are 3 primary implementation patterns:
- distributed resource and authorisation servers
- client credential grant flow
- authorisation code grant flow.
Figure 15 shows that in the patterns ‘client credential grant flow’ and ‘authorisation code grant flow’, the resource server resides with the authorisation server. But OAuth 2.0 also supports a distributed model, if needed, where the resource server and authorisation server are separate.
By adding an authentication server component into the API security framework, a number of additional implementation models can be considered. The patterns in figure 15 have separate authentication servers, for example, external identity service providers, Google, RealMe and internal authorisation services.
There is a simpler security architecture shown in figure 16 where the authentication server is housed on the same system as the authorisation and resource servers.
3.3.1. Distributed model — User-Managed Access (UMA)
The distributed pattern allows for multiple resource servers and authorisation servers, and is addressed by User-Managed Access (UMA). UMA is a delegation access protocol allowing third parties to have temporary access to resources that are approved (consented to) by the resource owner. UMA is driven by the Kantara Initiative and has been put into production by a number of vendors.
In figure 17, the resource owner can set predefined policies that define the relationships and access controls for all their resources across multiple resource servers: what can be accessed and by whom. This minimises their interaction in the current OAuth approval process and provides a central point of control.
A centralised UMA authorisation server then keeps track of all resource servers associated with a given resource owner.
Figure 17 shows an overview of UMA. The resource owner establishes a token-based trust relationship between the resource server and the authorisation server.
The resource owner also defines the access control policies that grant access to potential consumers. A client application, driven by the requesting party who is requesting access to resources owned by the resource owner, attempts to access a resource on a resource server but will be redirected to the UMA authorisation server.
The client application has to obtain a client key, client secret and access token from the UMA authorisation server before trying to get to the required resource on the resource server. The resource server will validate the access token and the permitted level of access with the authorisation server before allowing the client application to consume its required resources.
To support these interactions, the UMA authorisation server offers 2 APIs:
- protection API — used by resource servers for getting authorisation-request tokens and validating access tokens
- authorisation API — used by the client application to obtain a token for accessing a specific resource.
OAuth 2.0 provides a consent model that is driven via a synchronous process — UMA provides enhanced consent capability that is driven via asynchronous processes, which allows for:
- the sharing of information with parties and groups based on relationships
- managing requests from third parties
- monitoring shares across sources.
3.4. OpenID Connect
OpenID Connect is the recommended security profile for the use of OAuth 2.0 authentication security tokens when an API requires more secure authentication than offered solely by API keys — for example, when data is a 2-way flow between the consuming application and the API.
3.4.1. Basic principles
OpenID Connect is used to:
- enhance the process and user experience during the onboarding process
- provide an SSO capability
- secure the transfer of user data
- enrich the user experience
- provide a trust framework [integrity] for service and identity providers to share consented user data.
During the authorisation token exchange process a level of authentication is required. The authentication process is limited to what authentication services the authorisation server can support. For example, the resource owner authorisation process is normally supported out of the box by the authorisation server, using username and password.
These authentication services can be enhanced with authentication security tokens by implementing the OpenID Connect OAuth profile. For customers and internal users this can be achieved using a brokered or federated identity service.
Federated identity in the context of API security provides the ability to re-use user identities in an SSO way by providing a trust relationship between the identity service provider (OpenID Connect service) and the authorisation server. This establishment of trust between the identity provider and authorisation server is normally established using certificates and mutual authentication. All communication must be over TLS / MTLS to provide confidentiality and integrity controls.
OpenID Connect runs on top of OAuth 2.0 and is a lightweight RESTful framework for identity service interaction, to provide authorisation services. It allows claims (or attributes) about a user to be shared in a secure manner from an identity provider to the client application with the explicit consent of the user.
OpenID Connect uses all the flows, grant types and endpoints exposed by OAuth 2.0, and it adds the following capabilities to provide access to the user’s claims / attributes:
- an ID token
- a Userinfo endpoint.
The ID token is a JSON Web Token (JWT) that contains authenticated user information (and attributes) that the authorisation (OAuth 2.0 server) / authentication server (OpenID Connect server) provides to the client application.
This token can then be used to enforce finer grained access controls by providing additional attributes that can be used by the authorisation server to apply these policies.
The ID token has to be signed in order to address authenticity and integrity requirements. Additional parameters (relating to code, session and access token hashes) have also been defined to help address replay attacks.
The Userinfo endpoint can be called with the required access token to obtain the same claims / attributes (for example, first name) as provided in the ID token.
To address the risk of forged or stolen assertions, it is recommended that all communication is over TLS and tokens are at a minimum signed for authentication.
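To make these signing and validation steps concrete, the sketch below creates and checks an HMAC-signed (HS256) JWT using only the standard library. This is an illustrative assumption: production ID tokens are more commonly signed with asymmetric keys published by the identity provider, and a maintained JWT library should be used. The issuer, audience and key values here are hypothetical.

```python
import base64
import hashlib
import hmac
import json
import time

def b64url(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(text):
    return base64.urlsafe_b64decode(text + "=" * (-len(text) % 4))

def make_id_token(claims, key):
    """Create a signed JWT (HS256), as the OpenID Connect server would."""
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signature = hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(signature)}"

def validate_id_token(token, key, issuer, client_id):
    """Client-side checks: signature, issuer, audience and expiry."""
    header, payload, signature = token.split(".")
    expected = hmac.new(key, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(signature)):
        raise ValueError("invalid signature")
    claims = json.loads(b64url_decode(payload))
    if claims["iss"] != issuer or claims["aud"] != client_id:
        raise ValueError("wrong issuer or audience")
    if claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims

key = b"shared-secret"
token = make_id_token({"iss": "https://idp.example", "aud": "my-client",
                       "sub": "user-123", "exp": time.time() + 300}, key)
claims = validate_id_token(token, key, "https://idp.example", "my-client")
print(claims["sub"])  # user-123
```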
3.4.2. OpenID Connect grant flows
OpenID Connect builds on the existing OAuth 2.0 grant flows, and once implemented it is enacted using a specific scope (openid) in the initial authorisation call that the client makes to the OpenID Connect service.
There are a number of additional scopes that OpenID Connect introduces (for example, profile) that detail specific attributes (for example, name) that can be presented in the ID token. A client might request the profile scope, but it is the identity provider that determines what this will provide (for example, first and last name, phone and email address), and this has to be consented to by the user via the OAuth 2 consent process.
The request call also includes a response type that allows the application developer to request identity tokens and security tokens, depending on what information they want back and the type of client (for example, server-based or browser-based). The following link provides an excellent summary of the response types possible with OpenID Connect.
Diagrams of All The OpenID Connect Flows — Takahiko Kawasaki
The ID token not only includes identity information, it also includes additional technical claims that allow the client application to verify that the ID token is valid and came from the identity provider. It is highly recommended that a client validates the ID token (for example, signature, client ID and issuer). The ID token can also be encrypted to enhance the confidentiality and integrity of the information presented.
OpenID Connect works with all the existing OAuth 2.0 grant types and adds additional security controls to the authorisation code grant flow. The implicit flow in OAuth 2.0 is seen as less secure than the authorisation code flow and is now being discouraged. OpenID Connect provides a level of enhancement to the implicit flow, as the nonce and state are returned in a signed JWT, but even then it is really only recommended in login-only use cases.
Finally, there is the concept of a hybrid flow that uses the authorisation code flow as a base and (depending on what is required by the client and what is enabled by the authorisation server) allows additional tokens (ID tokens) to be issued during the flow. These are used both to provide secure identity information and to add confidentiality and integrity controls relating to state, nonce values and hashes.
Hybrid Flow Steps — OpenID Connect
A good example of where the hybrid flow is being mandated is in the management of consent in the Open Banking Consumer Data Rights specifications — GitHub.
3.4.3. OpenID Connect patterns
In the first OpenID Connect pattern, detailed in figure 18, the authentication and authorisation servers are conceptually hosted on different physical servers (the OpenID Connect server can be internal or external to the organisation). The user connects to the web application and is redirected to the OpenID Connect server for authorisation.
Once authorised, the user is redirected to the web application which, in turn, exchanges the ID token for an access token.
Note: It is the token flow and exchange that is key to understand in this model. The web application could be an application / API server and the interface to the authorisation and authentication server managed by the API gateway.
A trusted link is set up between the OpenID Connect server and the OAuth server (authorisation server) that also allows additional identity information to be provided in a JWT format via the Userinfo endpoint. This is important as the token exchange can result in the loss of user data that is required to provide fine-grained access.
If the OpenID Connect server was external, it would be the API gateway that would be responsible for the interface and token exchange process.
The application has to exchange the ID token for an access token in this pattern.
In the second pattern, shown in figure 19, the OpenID Connect server is run on the same server as the OAuth authorisation server, so the ID token and the access token can be issued at the same time. In both models, the OpenID Connect server can use Lightweight Directory Access Protocol (LDAP) for its backend user store, to provide authorisation for internal users.
One other model is for an external OpenID Connect pattern, where the OpenID Connect server is an external identity provider (for example, all-of-government service) and the API gateway is responsible for managing the token exchange, potentially also housing the authorisation server.
3.4.4. Health Relationship Trust (HEART) Working Group
The OpenID Foundation uses working groups to focus on specific problems, technology or specific marketplace sectors like Financial-Grade APIs (FAPI) or the Health sector HEART Working group.
Current Working Groups — OpenID Foundation
Health Relationship Trust (HEART) defines a set of profiles that enables patients to control how, when, and with whom their clinical data is shared.
HEART Working Group — OpenID Foundation
It also defines the interoperable process for systems to exchange patient-authorised healthcare data consistent with open standards, specifically Fast Healthcare Interoperability Resources (FHIR), OAuth, OpenID Connect, and UMA.
Two pertinent specifications are:
- HEART Profile for User-Managed Access 2.0 — OpenID Foundation
- HEART Profile for Fast Healthcare Interoperability Resources (FHIR) UMA 2 Resources — OpenID Foundation
3.4.5. CIBA flow
The OpenID Connect CIBA (Client Initiated Backchannel Authentication) flow is important because it adds ‘decoupled’ authorisation flows with 3 token delivery modes. Instead of using redirects through the browser, this model allows a user’s mobile device to be decoupled from the flow and the client application, and act as an authentication device on which the user authentication and the consent confirmations are performed.
The important point here is the client application and authorisation application / service do not have to run on the same device (for example, smartphone) or be linked.
In the CIBA flow the initial authorisation call is made to the new (OAuth2) backchannel authentication endpoint, and the authorisation server then delegates the authentication and consent approval tasks to the authentication device (smartphone) of the user, who will accept or deny the request.
The access token being sent to the client is managed by one of 3 flows:
- Poll — the client polls the authorisation server until the authorisation server has received the approval from the authentication device
- Ping — the client waits until it is notified by the authorisation server and then it requests the token
- Push — the authorisation server, when it receives approval from the authentication device, pushes the access, ID token and refresh token to the client.
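A simplified simulation of the ‘poll’ mode is sketched below, with a stand-in authorisation server. The class and function names here are illustrative only, not part of the CIBA specification:

```python
import itertools

class FakeAuthorisationServer:
    """Stand-in for the authorisation server's token endpoint (illustrative)."""
    def __init__(self, approves_after):
        self._calls = itertools.count(1)
        self._approves_after = approves_after

    def token(self, auth_req_id):
        # Until the user approves on their authentication device, the server
        # answers with the 'authorization_pending' error defined by CIBA.
        if next(self._calls) < self._approves_after:
            return {"error": "authorization_pending"}
        return {"access_token": "token-for-" + auth_req_id}

def poll_for_token(server, auth_req_id, max_attempts=10):
    """Client 'poll' mode: retry the token endpoint until approval, or give up."""
    for _ in range(max_attempts):
        response = server.token(auth_req_id)
        if "access_token" in response:
            return response["access_token"]
        # A real client would sleep for the server-supplied 'interval' here.
    raise TimeoutError("user never approved the request")

server = FakeAuthorisationServer(approves_after=3)
print(poll_for_token(server, "req-42"))  # token-for-req-42
```

The ‘ping’ and ‘push’ modes replace this retry loop with a notification from, or a direct delivery by, the authorisation server.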
3.5. Authorisation
Authorisation is the act of performing access control on a resource. Authorisation does not just cover the enforcement of access controls, but also the definition of those controls. This includes the access rules and policies, which should define the required level of access agreeable to both provider and consuming application. The foundation of access control is a provider granting or denying a consuming application and / or consumer access to a resource to a certain level of granularity.
In the authentication section the concepts of OAuth were introduced, and a number of authentication patterns were defined. This section focuses on authorisation and provides additional patterns that work with OAuth or provides an alternative.
3.5.1. Authorisation techniques
Authentication on its own does not necessarily provide permissions to access an API or application. It merely validates that you are who you say you are. If it is used for access control, it is an all or nothing control mechanism (for example, administration account).
Once a user is authenticated (for example, using username and password), an authorisation process will grant (or deny) them the right to perform an action or access to information. Normally this authorisation process is applied using either a coarse-grained or fine-grained access control process.
The normal model is to provide coarse-grained access at the API or API gateway request point, and fine-grained control at the backend services. But this model is changing as backend systems become more modular (for example, microservices) and less monolithic, which will result in a need for fine-grained authorisation support at the request point. The API gateway can support this either using OAuth 2.0 or via its own proprietary capability.
The 2 types of authorisation that are covered are:
- role-based or group-based authorisation — where membership in a role or group determines access rights for the consumer
- policy-based or attribute-based authorisation — where characteristics of the consumer are evaluated to determine access rights for the consumer.
3.5.2. Role-based access controls (RBAC)
In many organisations Active Directory (AD) provides the authentication services for users. AD groups are then used to provide authorisation. This is classed as Discretionary Access Control (DAC) — access to systems is granted by applying Access Control Lists (ACLs) directly to the user, or to the groups in which users reside.
Note: LDAP directories can also provide this service, and in many organisations are used to provide the same service AD delivers but for external users.
AD (or LDAP) groups are synonymous with roles and can be used to provide coarse-grained authorisation for APIs.
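A coarse-grained check of this kind can be sketched as a simple mapping from directory groups to API roles. The group names and roles below are hypothetical:

```python
# Hypothetical mapping from directory (AD / LDAP) groups to API roles.
GROUP_ROLES = {
    "cn=api-readers,ou=groups": {"reader"},
    "cn=api-admins,ou=groups": {"reader", "admin"},
}

def roles_for(user_groups):
    """Resolve the union of roles granted by a user's group memberships."""
    roles = set()
    for group in user_groups:
        roles |= GROUP_ROLES.get(group, set())
    return roles

def can_call(user_groups, required_role):
    """Coarse-grained check applied at the API or gateway request point."""
    return required_role in roles_for(user_groups)

assert can_call(["cn=api-admins,ou=groups"], "admin")
assert not can_call(["cn=api-readers,ou=groups"], "admin")
```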
3.5.2.1. Scopes (limited fine-grained access)
Based on the services (APIs) that are exposed, additional access controls can be applied using scopes. For example, a data service might provide ‘read’ and ‘write’ scopes that could be granted to a user based on the AD groups they were in.
When the client makes its authorisation request to the authorisation server, a scope parameter is included to define the scopes the client is requesting. Scopes can be used to limit the authorisation granted to the client by the resource owner. The available scopes are defined when the API service is designed. This is an important consideration as it can impact how the API service is structured, for example, a single service with multiple scopes or multiple services with single scopes. The developer has to ensure that the minimum privileges are granted to client applications to carry out the tasks (exposed by the client application and APIs) the user wishes the application to complete.
Scopes provide a level of coarse- or fine-grained access and represent specific access rights, for example, the ability to read a document or write a new document (or both), limited by the access token.
Scopes can be used alone to define coarse- or fine-grained access, but these scopes need to be defined and built into the client application / API being built. Consideration is needed to understand, for example, if:
- a single scope protects the service
- scopes are defined to protect fine-grained business functionality
- services should be divided into many smaller services with one scope each.
Once the token is issued to a client application, the access rights bound by the scope are encapsulated in the access token for the length of its validation period.
A client application may invite a user to authorise the application to act on behalf of the user. Using NZ Business Number (NZBN) as an example, the client application (an accounting application for example) may invite a user to authorise the client application to update primary business data on the user’s behalf. The application may ask “Do you want [client application] to be able to update your business information on the NZBN Directory?”.
Assuming the user grants the client application this privilege, the authorisation token returned by the API will have a scope that enables update access. This scope is represented by an identifier.
Example
The NZBN update scope would look like:
- https://api.nzbn.govt.nz/auth/pbd.readwrite
Whereas a read-only scope would be:
- https://api.nzbn.govt.nz/auth/pbd.core.readonly
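As an illustration, a backend API could enforce these scopes by mapping each operation to the scopes that permit it. The scope identifiers are taken from the NZBN example above; the operations, paths and mapping are hypothetical:

```python
# Scope identifiers from the NZBN example above.
READWRITE = "https://api.nzbn.govt.nz/auth/pbd.readwrite"
READONLY = "https://api.nzbn.govt.nz/auth/pbd.core.readonly"

# Hypothetical mapping of operations to the scopes that permit them.
REQUIRED_SCOPE = {
    ("GET", "/primary-business-data"): {READONLY, READWRITE},
    ("PUT", "/primary-business-data"): {READWRITE},
}

def is_authorised(granted_scopes, method, path):
    """Backend enforcement: the token's scopes must cover the operation."""
    allowed = REQUIRED_SCOPE.get((method, path), set())
    return bool(set(granted_scopes) & allowed)

assert is_authorised([READONLY], "GET", "/primary-business-data")
assert not is_authorised([READONLY], "PUT", "/primary-business-data")
```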
3.5.3. Attribute-based access controls
Attribute-based Access Control (ABAC) defines an access control process whereby access is granted based on policies that are built using attributes, for example, a policy might state that access to a specified resource is only permitted for users who are in sales or marketing, who are managers, during office hours only. ABAC provides fine-grained authorisation services.
The really important control that ABAC provides is the ability to apply context when making access decisions, for example, decisions can be based on the IP address of the device, the operating system of the client and the last known transactions of a client.
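The example policy above can be expressed directly in code. The sketch below is a toy policy evaluation function, not XACML; the attribute names, resource name and office hours are assumptions made for illustration:

```python
from datetime import time

def office_hours_policy(subject, resource, context):
    """Evaluate the example policy: sales or marketing managers, office hours only."""
    return (
        subject["department"] in {"sales", "marketing"}
        and subject["role"] == "manager"
        and time(9, 0) <= context["time"] <= time(17, 30)
        and resource == "quarterly-report"
    )

request = {
    "subject": {"department": "sales", "role": "manager"},
    "resource": "quarterly-report",
    # Context can carry further attributes (for example, device IP) for richer rules.
    "context": {"time": time(10, 15), "device_ip": "203.0.113.7"},
}
assert office_hours_policy(**request)  # permitted

after_hours = dict(request, context={"time": time(23, 0)})
assert not office_hours_policy(**after_hours)  # denied: outside office hours
```

In a real deployment such rules would live in a policy server (expressed in XACML or ALFA) rather than in application code.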
The recognised standard for ABAC is eXtensible Access Control Markup Language (XACML), which is based on Extensible Markup Language (XML).
3.5.4. API gateway
API gateways have been mentioned previously in the context of API protection. Most API gateways on the market provide support for OAuth 2.0 and can also provide authorisation (and authentication) services via a direct connection to:
- an identity store containing groups
- an identity access management system
- a policy server.
4. Security controls
This section looks at additional controls that should be considered and implemented when protecting APIs.
The 5 areas that should be considered are:
- confidentiality
- integrity
- availability
- threat protection
- logging and auditing.
Depending on the classification of the information that is presented in the APIs and the risk framework applied, different access controls will need to be applied.
Appendix F provides more detail about the security controls that can be applied.
4.1. Confidentiality and integrity
Confidentiality and integrity cover the handling of request and response data, both in transit and at rest. The aim is to protect the payload content from unauthorised access, manipulation or faking. An API request needs to be received intact by the API, with validation as to the source of the request. Untampered API responses need to be received by the consuming application, with confirmation that they are legitimately from the API.
4.2. Availability and threat protection
Availability in this context covers threat protection to minimise API downtime. It looks at how threats against exposed APIs can be mitigated using basic design principles, and how to apply protection against specific risks and threats.
Availability also covers scaling to meet demand and ensuring the hosting environments are stable. These levels of availability are addressed across the hardware and software stacks that support the delivery of APIs.
There are no specific standards for availability, but availability is normally addressed under business continuity and disaster recovery standards. These standards recommend a risk assessment approach to define the availability requirements. Find further information on business continuity and risks on the Standards New Zealand website.
For cloud services, Digital.govt.nz provides an assessment tool that includes risk assessment questions covering availability, business continuity and disaster recovery.
As mentioned in section 1.3. Risks, there are various types of risk that impact APIs. This includes threats to availability as well as confidentiality and integrity. Many threats can be mitigated as indicated in section 1.3.1. Mitigation approach and good secure coding practices using OWASP guidelines.
Where the resources being exposed by an API are sensitive — for example, not public data — it is advisable to perform:
- threat assessment — early on in the API development lifecycle
- penetration test — once an API is developed and published (testable).
There are also automated vulnerability testing tools that can be used to give an indication of vulnerabilities in web application designs.
4.3. Logging and alerting
Traditional logging, alerting and incident management practices also apply to APIs, along with additional considerations such as:
- correlating API requests with specific backend system activity and the resulting API responses to support end-to-end tracing
- identifying specific API requests from consumers to help resolve API consumer problems
- detecting events that may indicate a malicious attempt to access an API.
5. API security use case
Figures 22 and 23 illustrate a sequence of information exchange events and highlight where security controls are implemented.
5.1. High-level view
Figure 22 shows the high-level interaction of the actors and participant service providers in the use case of invoicing and paying ACC levies.
5.2. Detailed-level view
Figure 23 shows the detailed interaction of the actors and participant service providers.
6. Appendix A — Standards for securing RESTful APIs
Table 3 captures (current) security standards that should be part of any API security strategy, ordered by type.
Note: the provisioning standards[Footnote 2] are included to complete this standards table, enabling architects to use it to select the most appropriate option.
Standard | Description
---|---
**Provisioning standards** |
SCIM | System for Cross-domain Identity Management is a RESTful API-based framework for Real Time provisioning and, like SPML, moves away from batch-based and delta-based provisioning processes. It uses a RESTful API to manage the provisioning of users. As SCIM is a REST API framework, it should be secured using OAuth.
SPML | Service Provisioning Markup Language is an XML-based framework for facilitating the exchange of provisioning information (creates, updates and deletes) on user objects (for example, an LDAP directory). This is normally implemented to provide Just-in-Time (Real Time) provisioning. Although now regarded by most as a legacy standard, it is still supported by most vendors and used by niche service vendors.
**Security standards** |
2 (multi)-factor authentication | A method of confirming a user’s claimed identity (authentication) by using 2 or more pieces of evidence, relating to something they know, something they have and something they are.
ALFA | Abbreviated Language for Authorisation defines fine-grained authorisation rules in a JSON-like policy language. This helps remove one of the greatest barriers to implementing XACML: the complexity of writing the access control policies.
Authorisation standards | Provide a framework for controlling access to resources.
CIBA | Client Initiated Backchannel Authentication is a decoupled grant flow that allows the end user to use their mobile device to authenticate and approve transactions.
Claims | The contents of tokens; pieces of information asserted about a subject. For example, an ID token can contain a claim called mobile that asserts the subject’s mobile phone number.
Delegated authorisation | Provides a framework for delegating specified access rights to a third party.
Federation / authentication | Provides authentication and SSO services to customers, and secure transportation of authentication and authorisation information.
FIDO U2F | The FIDO Alliance Universal Second Factor is an open authentication standard that can be incorporated into an API security framework.
JSON security standards | A set of standards that provide security around the exchange of tokens, based on JSON.
JWA | JSON Web Algorithms defines the algorithms used for signing and encrypting JWTs.
JWE | The JSON Web Encryption standard provides confidentiality for JWT content and can be used with or without digital signatures.
JWK | JSON Web Key represents the cryptographic keys used for signing and encrypting JWTs. The algorithms for these are defined in JWA.
JWS | JSON Web Signature is a standard for signing JSON, thus providing a level of authority (where it has come from) and integrity, by proving the JWT has not been changed in transit.
JWT | A JSON Web Token is designed to be compact and provides trusted information that is used in the authentication process. Used in API security to pass identity information, specifically by OpenID Connect.
OpenID Connect | An interoperable authentication protocol based on the OAuth 2.0 framework. This is a relatively recent federation protocol that provides lightweight federation services. Like SAML, it provides SSO services and allows the secure exchange of user authentication data, but it is not as feature rich as SAML. As it is based on REST / JSON, it is perceived as the federation service of choice for mobile services.
OAuth 1.0a | Derived from the original OAuth 1.0 specification (Request for Comments (RFC) 5849), which provides a method for client applications to access resources on behalf of a resource owner. It is an authentication framework based around the exchange of signed tokens. It has now been made obsolete by OAuth 2.0.
OAuth 2.0 | An open standard for a delegated authorisation framework. It is not backward compatible with OAuth 1.0, but is modelled on the framework with the objective of providing greater flexibility, and defines specific credential (grant) flows. It is also based on token exchange, with the primary difference being that the tokens are secured by mandating TLS on all communication connections (RFC 6749), whereas OAuth 1.0 tokens are digitally signed.
OAuth 2.1 | An update to OAuth 2.0 that consolidates the base specification with subsequent best-practice guidance (for example, mandating PKCE for the authorisation code flow).
PKCE | Proof Key for Code Exchange provides enhanced security for the authorisation code flow.
SAML | Security Assertion Markup Language is an XML-based, open-standard data format for exchanging authentication and authorisation data between parties, in particular between an identity provider and a service provider. SAML is seen as complex but is regarded as a high-level security framework. As it is based on XML, SAML can have high payload overheads and can therefore cause performance issues in the mobile application space. It is still widely used but its uptake is declining. It is included in this standard to support existing New Zealand SAML instances (for example, education, RealMe).
TOTP / HOTP | Time-based One-Time Password and Hash-based Message Authentication Code (HMAC)-based One-Time Password can be used in any of the authentication processes that are part of gaining access to APIs. The requirements for these would be based on business requirements and risk analysis of the information or service being exposed by the API. These could be used to add a second factor to any existing or proposed authorisation mechanisms.
UMA | User-Managed Access is an OAuth-based access management protocol standard. It builds on OAuth 2.0 to provide additional delegated authorisation capabilities. Its key focus, for RESTful APIs, is to enable an individual (resource owner) to manage and define a set of access control policies that can be managed by an authorisation server, which controls access to a set of APIs.
XACML | The eXtensible Access Control Markup Language standard is XML-based and defines a fine-grained attribute-based access control policy language. It provides an architectural model and policy terminology that can be used to separate out the functions of any authorisation framework. As it is based on XML it is sometimes perceived as a legacy standard but, from a RESTful API perspective, it provides a fine-grained attribute-based authorisation framework (RFC 7061).
Industry-specific standards |
|
FAPI | Financial Grade API is an OpenID workgroup that now defines a set of specifications that enforces in specific legislation, for example, Open Banking (NZ) and Customer Data Rights (Australia) |
FHIR | Fast Healthcare Interoperability Resources is a standard describing data formats and elements and an API for exchanging electronic health records. |
NZ Trust Framework |
The framework that is driving NZ future Digital Identity direction. |
PNZ | Payments New Zealand is a government body that is driving the Open Banking initiative in New Zealand. |
7. Appendix B — Authentication
Find out more about different authentications used while implementing an API.
7.1. Anonymous authentication — Not recommended
Figure 24 shows an anonymous authentication model where the customer and the application they are using can gain access to backend API services without needing to authenticate in any way.
Anonymous authentication can be used when the risk associated with the API is negligible, for example, an API offering publicly available information. It can be used for internal APIs if the agency’s security policy allows.
The downside of this model is that it makes it difficult to gather effective analytics, and therefore to understand the implications of proposed changes to, and deprecation of, an API.
Although this model is not recommended, it sometimes has to be included to support the scenario where a web application makes calls, on the home or login page, to a backend application server that stores the pages and the related JavaScript files. In this case the API gateway provides the security, along with the due diligence required to ensure these pages and associated JavaScript files do not contain information that should be secured.
The anonymous authentication model should be protected against typical API vulnerabilities and threats, as listed on the OWASP web site. Typically, these relate to:
- throttling to prevent DoS attacks
- message analysis to block HTTP attacks and parameter attacks such as cross-site scripting, SQL injection, command injection and cross-site request forgery.
Note: If this approach is used, it might be appropriate to restrict access to the API based on other information (for example, based on IP address). This capability can be applied using an API gateway, or existing capabilities (for example, firewalls, load balancers) may be able to provide this level of protection.
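The throttling mentioned above is normally configured in the API gateway product itself, but the underlying mechanism can be illustrated with a simple token-bucket rate limiter. This is a hypothetical sketch, not gateway configuration:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allows `rate` requests per second,
    with bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # caller would respond with HTTP 429 Too Many Requests

bucket = TokenBucket(rate=5, capacity=10)  # 5 requests/second, bursts of 10
```

A gateway applies the same idea per API key, per IP address or per consuming application.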
7.2. Username and password authentication (direct authentication) — Not recommended
In the direct authentication model, illustrated in figure 25, the user is authenticated via an identity store using username and password (or hash) credentials over secure communications.
Username and password authentication is suitable for development purposes, during training or the initial stages of development, because it reduces the barrier to accessing the API. It could also be used for internal APIs, if the agency’s security policy allows, and may be suitable for customers if the identity risk associated with the API is low.
Internal users would likely use an internal LDAP directory (for example, Active Directory (AD)) while external users, for example customers, would require a separate identity store. The API gateway could provide the authentication services and, for external facing APIs, also threat protection services.
API security is provided by the application (web) server acting as a trusted subsystem with TLS links to the backend API server. The application / web server invokes the backend and provides the required user ID information, which can be in the form of a session token.
This model can be easy to implement but has many limitations including:
- an identity store like LDAP is required along with a full registration process for all user types (for example, applications and application developers)
- it cannot leverage a federated authentication model, so no SSO, requiring re-presentation of username and password at every step
- passwords would be in clear text — if direct authentication is used, TLS would be required to secure all communications
- it is open to brute force attacks
- passwords have low entropy, have to be reset and managed, and are difficult to revoke at a granular level.
This model can be used for testing and development purposes but is not recommended for production APIs — API keys are preferred.
Refer to the Identification Management Standards for guidance on customer authentication. If considering this model for internal users, the preference would be an SSO solution using Kerberos.
7.3. API keys authentication — Recommended
API keys are a digital authentication mechanism, with the API key taking the form of a long string of generated characters.
API keys are usually unique and can be assigned to an application, developer or user. The usual practice is for an application developer to obtain a key for their application from the API provider and utilise the key within their application.
To obtain an API key, the developer must undergo a registration process with the API provider. The steps involved in the registration process are dependent on the level of risk associated with the API.
As illustrated in figure 26, at run time, the consuming application automatically passes the API key to the API every time it requests an API resource. The API gateway validates the API key against the API key store (which can be part of the API gateway functionality or provided by another secure device) before allowing the consuming application access to the requested API (or set of APIs) and backend resources.
This model is similar to the username and password model, but it is the API gateway that can be responsible for creating and managing the API key and API secret, and for storing a copy in the API key store for validation, rather than redirecting to an identity store for policy validation and approval.
Username and password authorisation models can have high administration and response time overheads (relating to cryptographic functions). API keys are not linked to users and require no cryptographic functions. Like usernames and passwords, they come in pairs and are defined as:
- API key — public unique identifier — a 40+ random character string to authenticate the consuming application to the API
- API secret — private unique identifier — only known by the API gateway and used to validate the API key. The API secret in this model is not passed over the network.
API keys should be used wherever system-to-system authentication is needed (especially with a production level API). They are suitable for simple public APIs which do not need more complex authentication models. API keys should be used in preference to username and passwords because they are:
- more secure — greater entropy than passwords – random long string of characters
- speedier — API keys do not involve any hashing process, that is, the hashing process required for passwords.
However, the risk is that anyone with a copy of the API key can use it as though they were the legitimate consuming application. Hence, all communications should be over TLS, to protect the key in transit. The onus is on the application developer to properly protect their copy of the API key.
If the API key is embedded into the consuming application, it can be decompiled and extracted. If stored in plain-text files, keys can be stolen and reused for malicious purposes.
Note: API keys are recommended as they provide a level of security to public APIs that can help protect sites from screen scraping or provide the required information to throttle, or possibly bill, access to data.
Organisations need to carry out a risk analysis of the possible threats against the classification of the data that could be obtained and from this decide if API keys are required.
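The key lifecycle described above can be sketched as follows. This is a hypothetical illustration (in practice the gateway product manages generation and storage); it assumes the gateway stores only a hash of the secret and compares it in constant time:

```python
import hashlib
import hmac
import secrets

def issue_api_key() -> tuple[str, str]:
    """Generate an API key pair. The key identifies the consuming
    application; the secret is shown to the developer once."""
    api_key = secrets.token_urlsafe(30)     # 40-character public identifier
    api_secret = secrets.token_urlsafe(30)  # private, never sent over the network
    return api_key, api_secret

def store_secret(api_secret: str) -> str:
    # The key store holds a hash, never the secret itself.
    return hashlib.sha256(api_secret.encode()).hexdigest()

def validate(presented: str, stored_hash: str) -> bool:
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    # Constant-time comparison to avoid timing attacks.
    return hmac.compare_digest(candidate, stored_hash)
```

Generating keys with a cryptographically secure random source (here `secrets`) is what gives them the entropy advantage over passwords noted above.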
7.4. Certificates (mutual) authentication
When the API requires stronger authentication than offered solely by API keys, and the overhead of certificate management is warranted, use the certificate authentication model.
In the certificate (mutual) authentication model, illustrated in figure 27, internal and external parties are authenticated with each other. Both the consuming application and the API provider hold a digital certificate. The digital certificate can be trusted because it was issued by a mutually trusted certificate authority (CA). When the consuming application makes a request to the API, the server hosting the API presents its certificate to the consuming application. The application verifies the server’s certificate then sends its own certificate to the server. The server verifies the client certificate and mutual trust is achieved, allowing the consuming application to use the API.
Certificate authentication is:
- not recommended for customer authentication as there would be a high overhead in terms of certificate management
- recommended for consuming application to server (gateway) or API to backend communications (if needed).
For guidance, refer to the NZ Information Security Manual — Government Communications Security Bureau.
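A minimal sketch of how a service might require client certificates, using Python's standard `ssl` module; the certificate file names are illustrative placeholders, not prescribed paths:

```python
import ssl

# Server side: require and verify a client certificate (mutual TLS).
context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
context.minimum_version = ssl.TLSVersion.TLSv1_2
context.verify_mode = ssl.CERT_REQUIRED  # reject clients without a valid certificate

# Illustrative placeholders -- real deployments load actual files:
# context.load_cert_chain("server.crt", "server.key")   # server's own certificate
# context.load_verify_locations("trusted_ca.pem")       # CA trusted to issue client certs
```

The consuming application builds the mirror-image client context, loading its own certificate and the CA used to verify the server, which establishes the mutual trust described above.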
7.5. Developer authentication
Developer authentication will normally take place at the API portal when gaining access to APIs. The API portal will offer an authentication solution for developers, providing a username and password (possibly 2-factor) login process (see figure 28) and a user store.
Further details for this are not provided in these guidelines, but a vendor API portal will normally provide their own authentication solution and user store, or it can integrate with an existing identity and access management system.
Once the developer has logged into the API portal they can browse and discover the APIs available. API portals normally require the consuming application developer to:
- provide contact details like an email address
- register the application they are developing.
The API portal should provide registration services for the client application to use:
- API keys for basic authentication services and API monitoring
- OAuth services and the management of client IDs and client secrets (for applications)
- additional production authentication and authorisation services, for example, basic or certificate-based.
7.6. Multi-factor authentication (MFA)
An application could use multi-factor authentication (MFA) to enhance other authentication techniques to mitigate identity risks. For example, by requesting a fingerprint from the customer in addition to their username and password (or API key).
Often smartphone capabilities can be leveraged to provide this additional factor, but other options are available, like smart cards or hardware tokens.
8. Appendix C — OAuth 2.0
OAuth 2.0 is a token-based authorisation framework and is defined and implemented using grant flow type patterns.
8.1. Grant flow types
There are 4 grant flow types supported by OAuth 2.0. These define the different types of interaction a client application can perform to gain an ‘access token’ and thus access to the protected resource.
The different grant flow types are recommended for use in different situations for agencies implementing an API security framework. Recommendations are based on maximising the level of security for the APIs being exposed.
OAuth 2.0 introduced different grant types to provide organisations with the flexibility to support a variety of client application models. These specific models are not device driven — there is no specific device (for example, mobile) mapping to grant type; it is the level of risk an agency is prepared to support that needs to be defined to clarify which grant type should be selected.
For the application developer, the difference is in the infrastructure they need to provide, for example, the authorisation code grant flow type requires a managed server on which the client application runs.
The authorisation code grant is the most frequently used flow type as it is regarded as the most secure. It is covered in more depth in Appendix D. The implicit grant is the least secure and is no longer recommended.
These OAuth 2.0 grant flow types replace the 2-legged and 3-legged patterns used in OAuth 1.0.
There is now an RFC that defines a device authorisation grant flow (grant_type), developed to address ‘browserless’ devices such as smart TVs and printers; it uses a second device to connect to the authorisation server and approve the request via a polling process.
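Public clients using the authorisation code grant should also apply PKCE (RFC 7636, listed in the glossary). A sketch of deriving the S256 code challenge from a code verifier, using only the standard library:

```python
import base64
import hashlib
import secrets

# The client generates a one-time, high-entropy code verifier...
code_verifier = secrets.token_urlsafe(64)  # 43-128 characters per RFC 7636

# ...and sends its S256 challenge with the authorisation request.
digest = hashlib.sha256(code_verifier.encode("ascii")).digest()
code_challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# The verifier itself is sent only later, with the token request; the
# authorisation server recomputes the challenge and compares the two.
```

Because the verifier never appears in the front-channel redirect, an attacker who intercepts the authorisation code cannot redeem it.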
9. Appendix D — OAuth 2.0 and OpenID Connect tokens and credentials
Find out how OAuth 2.0 and OpenID Connect tokens and credentials are becoming a standard in access control.
Access tokens are becoming a standard form of access control without the need for passing credentials. Anyone with an access token (bearer token) is permitted access to the resource being controlled, which makes tokens a target for stealing or copying. Hence it is important to keep the lifetime of tokens as short as realistically possible, depending on the type of resource being exposed and business risk appetite.
The OAuth framework (RFC 6749 and 6750) relies heavily on TLS for the security of the bearer token. The following RFCs offer additional integrity and confidentiality capability that can be applied to the bearer token (access token):
- JSON Web Token (JWT) profile for OAuth 2.0 Client Authentication and Authorization Grants (RFC 7523)
- Proof-of-Possession Key Semantics for JSON Web Tokens (JWTs) (RFC 7800).
There are 2 types of token, opaque tokens and JWTs. Both the access and refresh token can be presented in either form.
Opaque tokens are produced by, and stored in, the authorisation server. Each one has a specific task in the OAuth 2.0 framework.
JWTs are produced and signed by the authorisation server but are not stored there. They are stored by the client application and, when used, are verified by their signature.
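That signature check can be illustrated with a minimal HS256 JWS sketch using only the standard library. This is for illustration; a real deployment would use a vetted JWT library, and authorisation servers typically sign with asymmetric keys (RS256 / ES256) rather than a shared secret:

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, key: bytes) -> str:
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, key: bytes) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid signature")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```

Any change to the header or payload invalidates the signature, which is what lets the resource server trust a JWT it has never stored.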
Table 4 provides a list of the main tokens and credentials that are used to provide authentication and authorisation services in the OAuth 2.0 framework.
Token description | Recommendations |
---|---|
Access tokens (Opaque) Also called bearer tokens. No additional identity checks are carried out once this has been issued. Used by the client application to access protected resources on the provider, and it can be signed. It is a random character string that also contains ‘scope’ information to allow additional access policies to be applied, for example, duration of access. It is granted by the resource owner via the authorisation code grant flow and enforced by the authorisation and resource servers. |
Required:
|
Refresh token (Opaque) Used to obtain new access tokens when the old one expires or is invalid. |
Required: The token is protected both in transit (TLS) and in storage (encryption). Recommended: The lifetime of the token is set to 24 hours. |
Opaque tokens vs JWTs Access and refresh tokens can be delivered as an opaque string or as a JWT that is signed and, if required, encrypted. Possible pros and cons of JWTs:
|
Recommended:
|
Authorisation code The authorisation server sends the authorisation code to the client after being granted consent by the resource owner. Used to authenticate the client. |
Recommended: The lifetime of this code is kept short; RFC 6749 recommends a maximum of 10 minutes. |
API key A 40+ random character string used in some scenarios to authenticate the client application to the API. |
Required: When implementing an authorisation solution that uses API keys. This is normally presented as an option in the developer API portal. |
Client ID When registering an OAuth client application with the API portal, a client ID is issued. Used when interacting with the resource server. |
Required: When implementing an OAuth 2.0 model that supports ‘authorisation code’ flow. |
Client secret Also provided when the OAuth client application is registered. This is used with the client ID when exchanging an authorisation code for an access token. |
Required: When implementing an OAuth 2.0 model that supports authorisation code flow. |
ID token ID tokens are JWTs that can be used to:
|
Recommended:
|
MTLS Mutual TLS (MTLS) allows a higher level of trust between different parties when exchanging credentials. |
Recommended:
|
JWT (private_key_JWT) OAuth 2.0 client applications can use client IDs and client secrets to authenticate to services (for example, token generation, user information and access revocation). RFC 7523 defines the concept of using JWTs for client authentication. This provides additional controls over the standard client ID and secret: it mandates that the JWT is signed and verified, that it contains the required claims, which are validated, and claims (for example, jti, the JWT ID) that are used to ensure the JWT is not used twice. |
Recommended: Organisations implement this form of client authentication when protecting information classified confidential and above. |
9.1. OAuth scenario (authorisation code grant flow type)
This is a hypothetical scenario to demonstrate a key OAuth pattern using the authorisation code grant flow type. In this scenario Inland Revenue (IR — previously called IRD) has developed a set of APIs that can be used by professional accounting firms to offer additional services to their customers.
The assumption here is that IR has an API gateway that offers:
- an API developer portal
- an OAuth authorisation server
- a resource server, exposing the APIs that can be called.
It is also assumed that IR is securing an API called ‘View IR Return’ with the authorisation code grant type.
9.1.1. Stage 1 client registration — Develop the application (third party)
The third party (in this case MyAccountantWebsite.com) develops a client application that will be exposed to their customers when they log in to the MyAccountantWebsite website. It will allow customers to authorise and set up delegated access for the MyAccountantWebsite application to view their IR returns, but without the customer having to provide their IR username and password to MyAccountantWebsite.
The application will use the APIs exposed by IR and is developed on a MyAccountantWebsite.com application (web) server that can securely store security credentials.
9.1.2. Stage 2 client registration — Register the (client) application
The developer needs to register as a user of the IR API portal. IR needs to verify that this person is allowed to register as a user. (This process is outside the scope of these guidelines and will vary depending on the sensitivity of the APIs exposed.)
As illustrated in figure 29, the authentication of the developer can be via:
- IR’s internal client login services
- an external identity service provider (OpenID Connect or SAML based, or a social network provider).
Once the developer has been approved and has been granted login credentials (username and password) they log onto the IR API portal and register their client application — this is carried out over a TLS secure link. The following information is provided:
- name of the application (by the developer)
- return URL (by the developer)
- client ID (by IR) — stored securely
- client secret (by IR) — stored securely.
The developer completes development of the client application on MyAccountantWebsite. The next steps detail how the MyAccountantWebsite customer uses the application.
Note: The values of the client ID and secret are represented as simple strings in figure 29. In reality these are long, randomly generated strings.
9.1.3. Stage 3 client registration — Customer sets new service and authorises access
As illustrated in figure 30, the customer logs into MyAccountantWebsite from their browser and clicks on the IR button (view IR), which should allow the user access to information presented by the ‘view IR return’ API. As there is no current access token stored for the client application to use, the user is redirected to the IR authorisation server ‘authorisation endpoint’ with the following information:
- IR URL A (authorisation endpoint URL)
- return URL (where the authorisation code will be sent)
- client ID = ABC (random long string that is used to identify the client application)
- scope = READ (defined by the application as to what the application can do)
- state = xyz123 (random long string that is used to mitigate man-in-the-middle attacks).
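The redirect described above amounts to constructing a URL carrying these parameters. A sketch, where the endpoint and values are illustrative, not real IR addresses:

```python
import secrets
from urllib.parse import urlencode

# Hypothetical authorisation endpoint for this scenario.
AUTHZ_ENDPOINT = "https://auth.ir.example/authorize"

# Generated fresh per request and stored by the client for later comparison.
state = secrets.token_urlsafe(32)

params = {
    "response_type": "code",   # authorisation code grant
    "client_id": "ABC",        # issued at client registration
    "redirect_uri": "https://myaccountantwebsite.example/callback",
    "scope": "READ",
    "state": state,            # mitigates man-in-the-middle / CSRF attacks
}
authorization_url = f"{AUTHZ_ENDPOINT}?{urlencode(params)}"
```

The customer's browser is then redirected to `authorization_url`, taking them to the IR login and consent pages in the next stage.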
9.1.4. Stage 4 client registration — Authentication and approval by the resource owner
Figure 31 illustrates that, as the client application does not have an access token for this customer or a session set up with IR, the customer is redirected to the IR login page. The customer then logs into IR and will be presented with a request to accept the scopes (in this case Accept READ) for MyAccountantWebsite access.
9.1.5. Stage 5 client registration — Provide an authentication code
Figure 32 illustrates that with the customer’s acceptance of the scope, the authorisation server sends an authorisation code to the client application (with the same state parameter for the client to validate).
9.1.6. Stage 6 client registration — Authorisation code is sent to token endpoint
Figure 33 illustrates that to gain access to API resources, the client application sends the authorisation code to the token endpoint (T) on the authorisation server, along with the client ID and client secret it received when the client application was registered. This is used for authentication of the client application to the authorisation server.
Note: The communication must be over TLS.
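The token request in this stage is a form-encoded POST over TLS. A sketch of its construction (endpoint, code and credential values are illustrative):

```python
import base64
from urllib.parse import urlencode

TOKEN_ENDPOINT = "https://auth.ir.example/token"  # hypothetical endpoint

client_id, client_secret = "ABC", "s3cr3t"        # issued at client registration

# Client authentication via HTTP Basic, per RFC 6749 section 2.3.1.
basic = base64.b64encode(f"{client_id}:{client_secret}".encode()).decode()
headers = {
    "Authorization": f"Basic {basic}",
    "Content-Type": "application/x-www-form-urlencoded",
}

body = urlencode({
    "grant_type": "authorization_code",
    "code": "auth_code_from_stage_5",   # placeholder for the real code
    "redirect_uri": "https://myaccountantwebsite.example/callback",
})
# POST `body` to TOKEN_ENDPOINT with `headers`; the JSON response contains
# access_token, refresh_token and expires_in (used in stage 7).
```

Because the client secret travels in this back-channel request, it must only ever be sent from the server-side client application, never from the browser.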
9.1.7. Stage 7 client registration — The access token is provided
As illustrated in figure 34, the authorisation server provides an access token back to the client application, along with a refresh token (for later use) and an expiry time for the access token.
9.1.8. Stage 8 client registration — The client application uses the access token to access the resource
As illustrated in figure 35, the client application presents the access token to the resource server at IR, which is verified by the authorisation server and the requested data is returned from the IR backend system via the view IR return API to the MyAccountantWebsite client application.
This completes the OAuth scenario as the client application has retrieved, and can use, the resource data returned from the API.
10. Appendix E — Authorisation
To implement Attribute-based Access Control (ABAC), the models currently defined use XACML.
XACML (developed by OASIS) provides a reference architecture, a request / response protocol and a policy language.
It is a highly distributed and loosely coupled architecture. It provides very useful generic definitions of the required components (services) that can be used to define any access control model.
It uses the following services to define the reference architecture:
- Policy Enforcement Point (PEP) — where the request to the resource is intercepted and policy applied (based on the decision made by the Policy Decision Point)
- Policy Decision Point (PDP) — this is the policy server to which the PEP sends the request for evaluation as to whether a user should or should not have access to a resource. The PDP has access to policy and can match the credentials and request against policy to make a permit / deny decision. It can also enforce policy-related obligations, for example, enhanced logging, notification and alerts, or re-routing to request additional authorisation process
- Policy Administration Point (PAP) — the interface where the policies are developed and defined
- Policy Information Point (PIP) — used to gather additional information about a user from identity stores or databases to provide additional attributes that are required by the PDP to validate the policy and apply the required outcome.
The links and flows between these services are detailed in figure 36.
XACML is generally perceived as being difficult to write policies in, but this is being addressed in 2 ways:
- OASIS is developing a request / response interface based on JSON and HTTP for XACML 3.0, version 1.0
- there is a JSON-based language called ALFA (Abbreviated Language for Authorization) that can be used to build XACML policies.
For XACML to support fine-grained access for APIs it requires a model such as illustrated in figure 37 (based on the scenario in Appendix D).
Figure 37 shows the following steps:
- The access token is obtained during the request and exchange process.
- The access token is presented during the resource request to the PEP, which also exposes the resource.
- The access request is passed to the authorisation server.
- The authorisation server verifies the access token and passes a XACML request to the PDP. This is where additional fine-grained access can be applied.
- The PDP authorises the PEP to allow access to the backend service.
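The PDP's permit / deny evaluation in step 4 can be illustrated with a minimal attribute-based policy check. This is a hypothetical sketch in plain code, not the XACML policy language itself; the attribute names mirror the scenario in Appendix D:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    subject: dict   # caller attributes, gathered via the PIP / token claims
    action: str     # for example "READ"
    resource: str   # for example "ir-return"

def pdp_decide(request: AccessRequest) -> str:
    """Tiny policy decision point: permit READ on ir-return only for
    verified subjects whose token carries the matching scope."""
    if (request.resource == "ir-return"
            and request.action == "READ"
            and "READ" in request.subject.get("scopes", [])
            and request.subject.get("verified", False)):
        return "Permit"
    return "Deny"
```

In the XACML architecture the PEP would intercept the API call, build this request, and enforce whatever decision the PDP returns.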
11. Appendix F — Security controls
Appendix F captures a number of key security controls that should be applied. It is recommended that organisations apply the relevant RFCs (or active draft documents) that relate to securing APIs.
11.1. Communications security (confidentiality) — Required
All communications to or from an API must be over TLS 1.3 or higher. Other versions of TLS and Secure Sockets Layer (SSL) should be disabled. This provides a recognised level of confidentiality that covers all communications between all components.
The consuming application must validate the TLS certificate chain when making requests to protected resources, including checking the Certificate Revocation List (CRL).
11.2. State (integrity)
State is also a parameter that can be used during the authorisation grant stage to provide a level of security to address possible man-in-the-middle attacks.
The state parameter is a string of random letters and numbers that is sent to the authorisation server by the client when requesting an authorisation code. It is sent back to the client with the authorisation code and should be verified by the client application to confirm the authenticity of the response — that it came from the authorisation server to which the request was sent.
Note: State is used to provide a level of integrity when using the standard format of bearer tokens. The confidence in the level of integrity can be increased if JWT tokens are used for bearer tokens.
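Verifying the returned state should use a constant-time comparison against the value stored when the request was made. A sketch, where the session dictionary stands in for the user's server-side session store:

```python
import hmac
import secrets

session = {}  # stands in for the user's server-side session

# When building the authorisation request, generate and store the state.
session["oauth_state"] = secrets.token_urlsafe(32)

def state_is_valid(returned_state: str) -> bool:
    """Check the state returned with the authorisation code.
    The stored value is removed so it can only be used once."""
    expected = session.pop("oauth_state", None)
    if expected is None:
        return False
    return hmac.compare_digest(expected, returned_state)
```

Making the state single use means a replayed callback, even with a previously valid state value, is rejected.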
11.3. Content encryption (confidentiality)
Use encryption if content needs only to be visible to specific consumer endpoints. However, if content only needs to be guaranteed untampered and / or from a specific source (for example, provider) then use content signing.
Content encryption enables all or part of a JavaScript Object Notation (JSON) payload to be readable only by the target consumer(s). This is useful where the content being carried by the API is sensitive, and the API request or response transits multiple stopping points. While TLS protects the payload in transit, it only applies to each point to point connection between components (for example, mobile app to API gateway). If transit components are not totally under the provider’s control, it can be worthwhile performing body encryption. It may be sensible to encrypt credit card details passed between consumer and provider backend systems.
It is also worth considering how much protection the information needs while at rest (for example, information received from consuming applications, caches) and whether some content should be stored encrypted.
Encryption is only worthwhile implementing when data sensitivity or data protection requirements drive it, as encryption is computationally intensive. It also makes it more difficult for protection mechanisms, such as API gateways, to validate and transform API content. When only the integrity of the content passed needs to be ensured, consider using content signing instead.
There are many existing ways of encrypting message content, built into code libraries and development tools. It is required that any content encryption adheres to the standard algorithms laid out in the New Zealand Information Security Manual (NZISM) (Hash-based Message Authentication Code (HMAC) algorithms).
11.4. Content signing (integrity)
Content signing is used to assure content integrity and proof of authorship. It can apply to the whole body of the JSON message or specific elements of that content, for example, credit card details. There are many approaches to content signing and the most appropriate approach is requirements dependent. Standard signing algorithms exist within coding libraries, and JWT has a body that can contain verifiable (signed) JSON fields. API gateways can also be configured to sign content objects in transit, if provided with an appropriate private key.
Signing has less of a computational overhead than encryption, but can still affect performance, so it is advisable that it be used only when and where needed.
For APIs, this is a developing area: there are 2 standards currently under development to address content signing:
- Message Authentication Code — OAuth 2.0 Message Authentication Code (MAC) tokens (draft)
- Proof of Possession — OAuth 2.0 Proof-of-Possession (PoP) Security Architecture (draft).
It is recommended that where bearer tokens are used, they should be signed using JSON Web Tokens (JWT) as defined in:
- JSON Web Token (JWT) RFC 7519
- JSON Web Signature (JWS) RFC 7515
- JSON Web Token (JWT) Profile for OAuth 2.0 Client Authentication and Authorization Grants RFC 7523.
11.5. Non-repudiation (integrity)
Non-repudiation covers the means to ensure that a consumer cannot deny making a request and, similarly, a provider cannot claim they did not send a response. To aid non-repudiation for APIs, it is important to ensure credentials are not shared between consumers and to perform comprehensive logging of API request / responses.
Digital signatures are useful for not just guaranteeing authenticity and integrity, but also supporting non-repudiation.
11.6. Availability and threat protection
Table 5 lists threat risk types and some recommended approaches to help mitigate these.
Threat | Mitigation (OWASP)
---|---
Exposure of inappropriate API methods to access services | Restrict each resource to the HTTP methods it actually requires and reject all others (for example, with HTTP 405 Method Not Allowed).
Denial of Service (DoS) attacks | Apply rate limiting and throttling policies, typically at the API gateway.
Malicious input, injection attacks and fuzzing | Validate all input against an allow-list and use message analysis to block injection attempts.
Cross-site request forgery | Use tokens with state and nonce parameters.
Cross-site scripting attacks | Validate input.
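The "validate input" mitigations are commonly implemented as an allow-list: any parameter name or value not explicitly permitted is rejected before it reaches business logic. A minimal sketch, with hypothetical rules for an NZBN lookup endpoint:

```python
import re

# Hypothetical allow-list rules for a simple lookup endpoint.
RULES = {
    "nzbn": re.compile(r"^\d{13}$"),  # New Zealand Business Number: 13 digits
    "region": re.compile(r"^[A-Za-z][A-Za-z '-]{1,49}$"),
}

def validate_params(params: dict) -> list:
    """Return a list of validation errors; an empty list means input is acceptable."""
    errors = [f"unexpected parameter: {name}"
              for name in params if name not in RULES]
    errors += [f"invalid value for {name}"
               for name, value in params.items()
               if name in RULES and not RULES[name].fullmatch(str(value))]
    return errors

assert validate_params({"nzbn": "9429000000000"}) == []
assert validate_params({"nzbn": "1; DROP TABLE"})  # injection attempt rejected
```

Rejecting unknown parameters outright (rather than ignoring them) also defends against HTTP parameter pollution.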
11.7. Token threat mitigation
Securing OAuth flows relies on the exchange of tokens between consuming applications and API backend servers. There is always a threat of these tokens being obtained illicitly, compromising the confidentiality and integrity of message content or the integrity of the token's sender. This risk also applies to the transfer of API keys.
Table 6 captures the main token threats and possible mitigation strategies.
Threat | Mitigation
---|---
Token manufacture or modification — fake tokens and man-in-the-middle attacks | Digital signing of tokens, for example JWS with JWT, or attaching a Message Authentication Code (MAC).
Token disclosure — man-in-the-middle attack — the access token is passed in clear text with no hashing, signing or encryption | Use transport-level encryption (TLS or mutual TLS) so tokens are never passed in clear text.
Token redirects | Sign tokens and validate their issuer and intended audience, so a token issued for one authorisation or resource server cannot be redirected to and accepted by another.
Token replay — where the threat actor copies an existing token (for example, refresh token or authorisation code) and reuses it on their own request | Use short-lived tokens, one-time-use authorisation codes and sender-constrained tokens so that a copied token is of limited value.
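Common mitigations for token replay are short token lifetimes and one-time use of token identifiers. A sketch of a token-acceptance check — the claim names follow JWT conventions (`exp`, `jti`), and the in-memory set stands in for a shared cache with TTL-based eviction:

```python
import time

seen_jtis = set()  # in production, a shared cache with TTL eviction

def accept_token(claims: dict, now=None) -> bool:
    """Reject expired tokens and replays of a previously seen token ID (jti)."""
    now = time.time() if now is None else now
    if claims.get("exp", 0) <= now:
        return False                   # expired
    if claims.get("jti") in seen_jtis:
        return False                   # replay of a one-time token
    seen_jtis.add(claims["jti"])
    return True

token = {"jti": "abc-123", "exp": time.time() + 300}
assert accept_token(token)        # first use accepted
assert not accept_token(token)    # replay rejected
```

Sender-constrained tokens (for example, the mutual TLS binding of RFC 8705) go further by making a stolen token unusable without the client's private key.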
API gateway capabilities can protect against many typical API vulnerabilities and threats. Typically, these relate to:
- throttling to prevent DoS attacks
- message analysis to block HTTP attacks — parameter attacks such as cross-site scripting, SQL injection, command injection and cross-site request forgery
- controlling egress of information via the API, aligned to set access permissions / policies.
API gateways can also provide access control to API functionality, if required.
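Gateway throttling is often implemented as a token bucket per consumer. A minimal sketch — the capacity and refill rate are illustrative, and a real gateway would key buckets by API key or client ID and share state across nodes:

```python
import time

class TokenBucket:
    """Simple per-consumer rate limiter: refuse requests once the bucket empties."""

    def __init__(self, capacity: int, refill_per_second: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill = refill_per_second
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Top up the bucket in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # caller should respond HTTP 429 Too Many Requests

bucket = TokenBucket(capacity=3, refill_per_second=0.5)
results = [bucket.allow() for _ in range(4)]
assert results == [True, True, True, False]
```

The bucket absorbs short bursts up to its capacity while enforcing the average rate, which is why this shape is common in gateway throttling policies.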
12. Appendix G — Internet Engineering Task Force (IETF) RFCs relating to OAuth 2.0
RFC number and title | High level description
---|---
RFC 6749 The OAuth 2.0 Authorization Framework | The core OAuth 2.0 RFC defining the authorisation framework.
RFC 6750 The OAuth 2.0 Authorization Framework: Bearer Token Usage | How to use bearer tokens in HTTP requests to access OAuth 2.0 protected resources. Any party in possession of a bearer token (a ‘bearer’) can use it to get access to the associated resources (without demonstrating possession of a cryptographic key). To prevent misuse, bearer tokens need to be protected from disclosure in storage and in transport. Note: RFC 8996 deprecates Transport Layer Security (TLS) 1.0 and 1.1 and recommends 1.2 or ideally 1.3 in OAuth 2.0 implementations.
RFC 6755 An IETF URN Sub-Namespace for OAuth | This document establishes an IETF Uniform Resource Name (URN) Sub-namespace for use with OAuth-related specifications.
RFC 6819 OAuth 2.0 Threat Model and Security Considerations | Security considerations for OAuth beyond those in the OAuth 2.0 specification, based on a comprehensive threat model for the OAuth 2.0 protocol.
RFC 7009 Token Revocation | This profile defines a revocation endpoint on the authorisation server to enable clients (consuming applications) to revoke their own access or refresh tokens. This is essential should a token get into the wrong hands and be used for malicious purposes. Before allowing a client to revoke an access token and / or associated refresh tokens, the authorisation server first validates the client’s credentials.
RFC 7519 JSON Web Token (JWT) | URL-safe means of representing claims to be transferred between 2 parties. The claims in a JWT are encoded as a JSON object that is used as the payload of a JSON Web Signature (JWS) structure or as the plaintext of a JSON Web Encryption (JWE) structure, enabling the claims to be digitally signed or integrity protected with a Message Authentication Code (MAC) and / or encryption.
RFC 7521 Assertion Framework for OAuth 2.0 Client Authentication and Authorization Grants | Common framework for OAuth 2.0 to interact with other identity systems using an assertion and to provide alternative client authentication mechanisms.
RFC 7522 Security Assertion Markup Language (SAML) 2.0 Profile for OAuth 2.0 Client Authentication and Authorization Grants | The use of a Security Assertion Markup Language (SAML) 2.0 Bearer Assertion as a means for requesting an OAuth 2.0 access token as well as for client authentication.
RFC 7523 JSON Web Token (JWT) Profile for OAuth 2.0 Client Authentication and Authorization Grants | Use of a JSON Web Token (JWT) Bearer Token as a means for requesting an OAuth 2.0 access token as well as for client authentication.
RFC 7591 OAuth 2.0 Dynamic Client Registration Protocol | Mechanisms for dynamically registering OAuth 2.0 clients with authorisation servers.
RFC 7592 OAuth 2.0 Dynamic Client Registration Management Protocol | Methods for the management of OAuth 2.0 dynamic client registrations for use cases in which the properties of a registered client may need to be changed during the lifetime of the client.
RFC 7636 Proof Key for Code Exchange by OAuth Public Clients | OAuth 2.0 public clients utilising the authorisation code grant are susceptible to the authorisation code interception attack. This specification describes the attack as well as a technique to mitigate against the threat through the use of Proof Key for Code Exchange.
RFC 7662 OAuth 2.0 Token Introspection | Method for a protected resource to query an OAuth 2.0 authorisation server to determine the active state of an OAuth 2.0 token and to determine meta-information about this token. Provides authorisation context of the token from the authorisation server to the protected resource.
RFC 7800 Proof-of-Possession Key Semantics for JSON Web Tokens (JWTs) | How to declare in a JWT that the presenter of the JWT possesses a particular proof-of-possession key and how the recipient can cryptographically confirm proof of possession of the key by the presenter.
RFC 8176 Authentication Method Reference Values | Establishes a registry for authentication method reference values, for example ‘face’ (facial recognition), ‘otp’ (one-time password), ‘pwd’ (password) and ‘sms’ (confirmation by text message).
RFC 8252 OAuth 2.0 for Native Apps | This RFC recommends external user agents like in-app browser tabs as the only secure and usable choice for OAuth, rather than embedded user agents.
RFC 8414 OAuth 2.0 Authorization Server Metadata | This defines how an OAuth 2.0 client can discover an authorisation server’s endpoints and capabilities via a metadata (discovery) endpoint.
RFC 8628 OAuth 2.0 Device Authorization Grant | The OAuth 2.0 device authorisation grant is designed for Internet-connected devices that either lack a browser to perform a user agent-based authorisation or are so input constrained that requiring the user to input text in order to authenticate during the authorisation flow is impractical. It enables OAuth clients on devices such as smart TVs, media consoles, digital picture frames and printers to obtain user authorisation to access protected resources by using a user agent on a separate device.
RFC 8693 OAuth 2.0 Token Exchange | Defines a protocol for a lightweight HTTP and JSON-based Security Token Service (STS) — covering requests of tokens from an authorisation server.
RFC 8705 OAuth 2.0 Mutual-TLS Client Authentication and Certificate-Bound Access Tokens | This document describes OAuth client authentication and certificate-bound access and refresh tokens using mutual TLS authentication with X.509 certificates. OAuth clients are provided a mechanism for authentication to the authorisation server using mutual TLS. OAuth authorisation servers are provided a mechanism for binding access tokens to a client’s mutual-TLS certificate, and OAuth protected resources are provided a method for ensuring that such an access token presented to it was issued to the client presenting the token.
RFC 8707 Resource Indicators for OAuth 2.0 | This document specifies an extension to the OAuth 2.0 Authorization Framework defining request parameters that enable a client to explicitly signal to an authorisation server the identity of the protected resource(s) to which it is requesting access.
RFC 8725 JSON Web Token Best Current Practices | This document updates RFC 7519 to provide actionable guidance leading to secure implementation and deployment of JWTs.
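To illustrate RFC 7636 from the table above: PKCE binds a token request to the original authorisation request by deriving a `code_challenge` from a random `code_verifier` using the S256 method (base64url of the SHA-256 digest). A minimal sketch:

```python
import base64
import hashlib
import secrets

def make_pkce_pair() -> tuple:
    """Generate a PKCE code_verifier and its S256 code_challenge (RFC 7636)."""
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The client sends `challenge` with the authorisation request and later proves
# possession by sending `verifier` with the token request; the server recomputes:
recomputed = base64.urlsafe_b64encode(
    hashlib.sha256(verifier.encode("ascii")).digest()).rstrip(b"=").decode()
assert recomputed == challenge
```

Because an intercepted authorisation code is useless without the matching `code_verifier`, PKCE defeats the code-interception attack the RFC describes.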
13. Appendix H — RFCs in development
These RFCs are pertinent to this guideline but are under development as at May 2022:
Specification | High level description
---|---
OAuth 2.0 JWT Secured Authorization Request | This document introduces the ability to send request parameters in a JSON Web Token (JWT) instead, which allows the request to be signed with JSON Web Signature (JWS) and encrypted with JSON Web Encryption (JWE) so that the integrity, source authentication and confidentiality properties of the authorisation request are attained.
JWT Response for OAuth Token Introspection | The introspection response, as specified in OAuth 2.0 Token Introspection, is a plain JSON object. This specification extends the token introspection endpoint with the capability to return responses as JWTs.
JSON Web Token (JWT) Profile for OAuth 2.0 Access Tokens | This specification defines a profile for issuing OAuth 2.0 access tokens in JWT format. Authorisation servers and resource servers from different vendors can leverage this profile to issue and consume access tokens in an interoperable manner.
The OAuth 2.1 Authorization Framework | This is an in-progress update to version 2.0. Key points to note are: PKCE is required for all clients using the authorisation code grant; redirect URIs must be compared using exact string matching; the implicit grant and the resource owner password credentials grant are omitted; bearer tokens must not be carried in URI query strings; and refresh tokens for public clients must be sender-constrained or one-time use.
OAuth 2.0 Security Best Current Practice | This document describes best current security practice for OAuth 2.0. It updates and extends the OAuth 2.0 Security Threat Model to incorporate practical experiences gathered since OAuth 2.0 was published and covers new threats relevant due to the broader application of OAuth 2.0.
OAuth 2.0 Rich Authorization Requests | This document specifies a new parameter ‘authorization_details’ that is used to carry fine-grained authorisation data in the OAuth authorisation request.
OAuth 2.0 Pushed Authorization Requests | Pushed Authorization Requests (PAR) enable OAuth clients to push the payload of an authorisation request directly to the authorisation server in exchange for a request URI value, which is used as reference to the authorisation request payload data in a subsequent call to the authorisation endpoint via the user agent.
OAuth 2.0 Authorization Server Issuer Identifier in Authorization Response | This specifies a new parameter ‘iss’ that is used to explicitly include the issuer identifier of the authorisation server in the authorisation response of an OAuth authorisation flow. If implemented correctly, the ‘iss’ parameter serves as an effective countermeasure to ‘mix-up attacks’, which aim to steal an authorisation code or access token by tricking the client into sending it to the attacker instead of the honest authorisation or resource server.
OAuth 2.0 Demonstrating Proof-of-Possession at the Application Layer | This specification describes a mechanism for sender-constraining OAuth 2.0 tokens via an application-level proof-of-possession mechanism (DPoP). A client demonstrates proof-of-possession of a public / private key pair by including a DPoP header (a JWT) in an HTTP request, which enables the authorisation server to bind issued tokens to the public part of the client’s key pair. This mechanism allows for the detection of replay attacks with access and refresh tokens.
OAuth 2.0 for Browser-Based Apps | This specification describes the current best practices for implementing OAuth 2.0 authorisation flows in applications executing in a browser, that is, applications dynamically downloaded and executed in a web browser, usually written in JavaScript, and sometimes referred to as single-page applications (SPAs). One of the key recommendations is the use of PKCE.
14. Glossary of terms
- Analytics
- Analytics in the context of these guidelines is the capturing and reporting of API usage.
- API (Application Programming Interface)
- An API is a piece of software that provides a way for other disparate pieces of software (applications, systems) to talk to one another.
- API catalogue
- The API delivery component that lists the APIs offered, along with their interface specification and guidance on how to gain access and use the APIs.
- API developer
- The organisation (or person) who creates the API and maintains the interface specification for the API. An API developer creates and documents an agency’s APIs. API developers generally have a good understanding of an agency’s function.
- API developer portal
- The API delivery component that allows API providers to engage with, onboard, educate and manage application developers whether inside or outside the organisation. These management activities will generally include registration, documentation, analytics and so on.
- API gateway
- The API delivery component that allows API providers to offer APIs to the outside world. This component (physical or virtual) hosts the APIs ready for consumers to access. It provides an API provider with the ability to control who has access to their APIs by enforcing policies for access control. Some API gateways also offer additional capabilities.
- API manager
- The API delivery component that allows API providers to control an API’s visibility and behaviour. It is usually exposed as a UI / console to internal staff only, as it is seen as an administration component. It offers a variety of capabilities, including API registration and catalogue administration.
- API provider
- The organisation that provides the API to expose a resource (information or functionality) for consumers.
- Application
- The software behind the API that provides the business logic for the resource.
- Application developer
- Software developer or organisation that builds consuming applications that use the API. An application developer needs to be able to discover, understand and access your APIs. They can be internal to your agency, developers that work with trusted partners, developers from other agencies or developers from the private sector.
- Consumers
- Customers, consuming applications and application developers who use the API.
- Consuming application
- Any application (on any device) that consumes or uses an API.
- Context
- Context in these guidelines generally refers to request context. For example, a JWT (JSON Web Token) may contain information about the customer initiating an API request.
- Customers
- People (or organisations) that use the consuming applications to access the API resources the API provider is offering.
- DELETE
- One of the most common HTTP methods, DELETE is used to delete a resource specified by its URI.
- Discovery
- The ability for application developers to find resources and associated APIs to use in their consuming applications.
- Interface specification
- Provides technical information to the application developer community about the API. It includes information about how the API works, any access control features and any error conditions.
- POST
- One of the most common HTTP methods for sending data to a server, the POST method creates a new resource. The resource it creates is subordinate to some other parent resource. When a new resource is POSTed to the parent, the API service will automatically associate the new resource with the parent by assigning it a new resource URI. In short, this method is used to create a new data entry.
- Product manager
- The product manager is usually a technical role. They understand an agency's API landscape and are owners of API management platforms.
- Product owner
- The product ownership function usually resides in a business area rather than technology. The role of the product owner is to understand the product that the agency is trying to deliver, shape and communicate the vision and delivery schedule for the product, represent the stakeholder and customer perspective, and to make decisions on the representation of the product in an API.
- Publish
- The act of releasing the interface specification and associated API to a location accessible by application developers.
- PUT
- One of the most common HTTP methods for sending data to a server, the PUT method is most often used to update an existing resource. To update a specific resource (which comes with a specific URI), call the PUT method to that resource URI with the request body containing the complete new version of the resource you are trying to update.
- Resource
- The information or functionality exposed by the API.
- State
- State defines the point in time record of an in-flight transaction. Some systems maintain ‘user state’ for a period of time to enable a transaction to be continued from the point of last recorded state. APIs are usually, but not always, considered stateless.
- Zero Trust Network Access (ZTNA)
- A product or service that creates an identity- and context-based, logical access boundary around an application or set of applications. The applications are hidden from discovery, and access is restricted via a trust broker to a set of named entities. The broker verifies the identity, context and policy adherence of the specified participants before allowing access and prohibits lateral movement elsewhere in the network. This removes application assets from public visibility and significantly reduces the surface area for attack.
15. Glossary of acronyms
- ABAC
- Attribute-based Access Control
- ACL
- Access Control List
- AD
- Active Directory
- ALFA
- Abbreviated Language for Authorization
- API
- Application Programming Interface
- CA
- Certificate Authority
- CIBA
- Client Initiated Backchannel Authentication
- COTS
- Commercial off-the-shelf
- CRL
- Certificate Revocation List
- DAC
- Discretionary Access Control
- DB
- Database
- DHE
- Diffie-Hellman Ephemeral
- DMZ
- Demilitarized Zone
- DoS
- Denial of Service
- DPoP
- Demonstrating Proof-of-Possession
- ECDHE
- Elliptic Curve DHE
- FAPI
- Financial Grade API
- FHIR
- Fast Healthcare Interoperability Resources
- FIDO U2F
- FIDO Alliance Universal Second Factor
- GETS
- Government Electronic Tender Service
- HEART
- Health Relationship Trust
- HMAC
- Hash-based Message Authentication Code
- HOTP
- HMAC-based One-Time Password
- HPP
- HTTP Parameter Pollution
- HTTP
- Hyper Text Transfer Protocol
- IETF
- Internet Engineering Task Force
- IP
- Internet Protocol
- JSON
- JavaScript Object Notation
- JWA
- JSON Web Algorithms
- JWE
- JSON Web Encryption
- JWK
- JSON Web Key
- JWS
- JSON Web Signature
- JWT
- JSON Web Token
- LDAP
- Lightweight Directory Access Protocol
- MAC
- Message Authentication Code
- MFA
- Multi-factor Authentication
- MTLS
- Mutual Transport Layer Security
- NZBN
- New Zealand Business Number
- NZISM
- New Zealand Information Security Manual
- OAuth
- Open Authorization
- OWASP
- Open Web Application Security Project
- PAP
- Policy Administration Point
- PDP
- Policy Decision Point
- PEP
- Policy Enforcement Point
- PIP
- Policy Information Point
- PKCE
- Proof Key for Code Exchange
- PNZ
- Payments New Zealand
- RAML
- RESTful API Modeling Language
- RBAC
- Role-based Access Control
- REST
- Representational State Transfer
- RFC
- Request for Comments
- RO
- Resource owner
- RS
- Resource server
- SAML
- Security Assertion Markup Language
- SCIM
- System for Cross-domain Identity Management
- SEO
- Search Engine Optimisation
- SLA
- Service Level Agreement
- SOAP
- Simple Object Access Protocol
- SPML
- Service Provisioning Markup Language
- SQL
- Structured Query Language
- SSL
- Secure Sockets Layer
- SSO
- Single Sign-On
- STS
- Security Token Service
- TLS
- Transport Layer Security (superseded SSL)
- TOTP
- Time-based One-time Password
- UMA
- User-Managed Access
- UI
- User interface
- URI
- Uniform Resource Identifier
- URL
- Uniform Resource Locator
- URN
- Uniform Resource Name
- WADL
- Web Application Description Language
- XACML
- eXtensible Access Control Markup Language
- XML
- eXtensible Markup Language
- ZT
- Zero Trust
16. Further reading
- HTTP 1.1 Standards RFCs — IETF:
- RFC7230 HTTP/1.1: Message Syntax and Routing
- RFC7231 HTTP/1.1: Semantics and Content
- RFC7232 HTTP/1.1: Conditional Requests
- RFC7233 HTTP/1.1: Range Requests
- RFC7234 HTTP/1.1: Caching
- RFC7235 HTTP/1.1: Authentication
- RFC7236 HTTP/1.1: Authentication Scheme Registrations
- RFC7237 HTTP/1.1: Method Registrations
- NZ Protective Security — Protective Security Requirement (PSR)
- OpenAPI Specification — GitHub
- OWASP API Security Project — OWASP
- OWASP REST Security — REST Security Cheat Sheet
- OWASP Secure Coding Principles — OWASP
- OWASP Top Ten Project — OWASP
- Reserved JavaScript Keywords — W3Schools
- REST API Resource Modelling — Thoughtworks
- Strategy for a Digital Public Service
- Using HTTP Methods for RESTful Services — REST API Tutorial
Last updated