The OAuth 2.0 specification (RFC 6749), together with its ecosystem of extension RFCs, describes a protocol that is, when implemented precisely, reasonably secure. The specification is explicit about its security requirements: exact redirect URI matching, state parameter for CSRF protection, token binding, audience restriction, and so on. The OAuth Security Best Current Practice document (RFC 9700, formerly draft-ietf-oauth-security-topics) enumerates known attacks and their mitigations in detail.
And yet, OAuth remains one of the most reliably exploitable surfaces in modern web applications. The reason is not that the protocol is flawed. It is that the protocol is complex enough that the gap between specification and implementation is consistently exploitable, and the consequences of that gap (token theft, account takeover, cross-tenant data access) are severe.
The pattern is remarkably consistent: the OAuth library or identity provider does the right thing by default, and then the implementing team makes a series of small, individually reasonable configuration decisions that collectively create an exploitable deployment.
Redirect URI Validation: The Most Exploited Seam
The redirect URI is the mechanism by which the authorization server returns the authorization code (or token, in the implicit flow) to the client application. The specification requires that the authorization server validate the redirect URI against the client's registered redirect URIs. The security assumption is that the authorization code is delivered only to the legitimate client.
The failure mode is permissive redirect URI validation. Instead of exact-match validation (https://app.example.com/callback and nothing else), authorization servers or client registrations allow:
- Wildcard subdomains: https://*.example.com/callback, exploitable if any subdomain has an open redirect or XSS vulnerability
- Path prefix matching: https://app.example.com/, exploitable by appending paths that trigger server-side redirects
- Scheme flexibility: allowing http:// when https:// was intended, exploitable on networks where HTTP traffic can be intercepted
- Localhost exceptions: http://localhost:*, exploitable through local port binding on shared machines
Each of these relaxations is typically motivated by developer convenience: "we need to support multiple environments," "our callback path varies by feature," "localhost is needed for development." But each one expands the set of URLs that can receive authorization codes, and an attacker who can redirect the code to a URL they control can exchange it for an access token.
The practical exploitation requires the attacker to find a way to make the authorization server redirect to an attacker-controlled URL. If redirect validation allows https://*.example.com/callback, the attacker needs XSS or an open redirect on any example.com subdomain. If it allows path prefix matching, the attacker needs an open redirect on any path under the prefix. If it allows http://, the attacker needs network-level interception. In practice, finding one of these secondary conditions in a large web application is not difficult.
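The safe validation rule is simple enough to express directly. A minimal sketch (the registration store and function names are hypothetical) of exact-match validation, showing the URLs the relaxations above would have accepted:

```python
# Hypothetical client registration: complete, exact URLs only.
REGISTERED_REDIRECT_URIS = {
    "client-123": {"https://app.example.com/callback"},
}

def redirect_uri_allowed(client_id: str, redirect_uri: str) -> bool:
    """Exact string comparison: no wildcards, no prefix matching,
    no scheme or port flexibility."""
    registered = REGISTERED_REDIRECT_URIS.get(client_id, set())
    return redirect_uri in registered

# Wildcard, prefix, or scheme-flexible matching would accept the last three;
# exact matching rejects them.
assert redirect_uri_allowed("client-123", "https://app.example.com/callback")
assert not redirect_uri_allowed("client-123", "https://evil.example.com/callback")
assert not redirect_uri_allowed("client-123", "http://app.example.com/callback")
assert not redirect_uri_allowed("client-123", "https://app.example.com/callback/extra")
```

The point of the sketch is that exact matching costs one set lookup; every relaxation trades that simplicity for an expanded attack surface.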
Authorization Code Interception Beyond Redirects
Even with correct redirect URI validation, authorization codes can leak through channels that the OAuth specification acknowledges but that implementations frequently ignore:
Referrer headers. When the browser follows the redirect URI, the full URL (including the authorization code in the query string) may be sent as the Referer header on subsequent requests: to third-party analytics scripts, to CDN resources, to any external resource loaded on the callback page. The mitigation is to set Referrer-Policy: no-referrer on the callback page and process the code immediately, but many implementations do not set this header.
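On the server side, this mitigation is a pair of response headers on the callback endpoint. A minimal WSGI sketch (the handler and page content are hypothetical):

```python
def callback_app(environ, start_response):
    # Hypothetical WSGI callback endpoint. The authorization code is consumed
    # server-side; the returned page loads no third-party resources.
    start_response("200 OK", [
        ("Content-Type", "text/html; charset=utf-8"),
        # Prevent the code-bearing URL from leaking via the Referer header
        # on any subsequent navigation or resource load.
        ("Referrer-Policy", "no-referrer"),
        # Defense in depth: keep caches and intermediaries from retaining
        # the code-bearing response.
        ("Cache-Control", "no-store"),
    ])
    return [b"<html><body>Signing you in...</body></html>"]
```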
Browser history and server logs. The authorization code appears in the URL, which means it is stored in browser history, proxy logs, and web server access logs. Any system with access to these logs (a browser extension, a corporate proxy, a log aggregation service) can extract codes. This is one reason the specification recommends short code lifetimes and single-use enforcement, but implementations that allow code reuse or long code validity windows remain common.
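Single-use enforcement and a short validity window are both a few lines at the token endpoint. A sketch assuming an in-memory store (a real authorization server would need shared, atomic storage; the 60-second TTL is an assumption, well under the specification's 10-minute ceiling):

```python
import secrets
import time

CODE_TTL_SECONDS = 60  # assumed lifetime; RFC 6749 recommends at most 10 minutes

_codes: dict[str, dict] = {}  # in-memory store, for illustration only

def issue_code(client_id: str, user_id: str) -> str:
    code = secrets.token_urlsafe(32)
    _codes[code] = {
        "client_id": client_id,
        "user_id": user_id,
        "expires": time.monotonic() + CODE_TTL_SECONDS,
    }
    return code

def redeem_code(code: str, client_id: str):
    # pop() enforces single use: a second redemption attempt fails even if
    # the code leaked via logs, history, or a Referer header. A failed
    # attempt also burns the code rather than leaving it redeemable.
    entry = _codes.pop(code, None)
    if entry is None or entry["client_id"] != client_id:
        return None
    if time.monotonic() > entry["expires"]:
        return None
    return entry["user_id"]
```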
Third-party scripts on the callback page. If the callback page includes third-party JavaScript (analytics, chat widgets, advertising), those scripts can read window.location and extract the authorization code directly. This is a variant of the third-party JavaScript trust problem applied specifically to the OAuth callback flow.
PKCE (Proof Key for Code Exchange, RFC 7636) mitigates code interception by binding the authorization code to a client-generated verifier. Even if the code leaks, it cannot be exchanged without the verifier, which only the legitimate client possesses. PKCE was originally designed for public clients (mobile apps, SPAs) where client secrets cannot be securely stored, but RFC 9700 recommends it for all clients, including confidential ones. The adoption of PKCE in server-side web applications has been slower than it should be, partly because many implementations predate the recommendation and partly because developers perceive it as unnecessary for confidential clients that already have a client secret.
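The mechanism is compact: the client hashes a random verifier, sends the hash (the challenge) with the authorization request, and presents the verifier only at the token exchange. A sketch of the S256 method from RFC 7636 (function names are illustrative):

```python
import base64
import hashlib
import secrets

def _b64url(data: bytes) -> str:
    # RFC 7636 uses base64url encoding without padding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def make_pkce_pair() -> tuple[str, str]:
    """Client side: generate a code_verifier and its S256 code_challenge."""
    verifier = _b64url(secrets.token_bytes(32))  # 43 chars, within 43-128 spec range
    challenge = _b64url(hashlib.sha256(verifier.encode("ascii")).digest())
    return verifier, challenge

def verify_pkce(verifier: str, challenge: str) -> bool:
    """Authorization server side, at the token endpoint: the presented
    verifier must hash to the challenge sent with the authorization request."""
    return _b64url(hashlib.sha256(verifier.encode("ascii")).digest()) == challenge
```

Because the verifier never appears in any URL, a leaked authorization code alone is useless to an attacker: the token exchange fails without it.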
Token Lifecycle Failures
The second major failure domain is token management after the OAuth flow completes. Access tokens and refresh tokens are bearer credentials: anyone who possesses them can use them. The security model depends on tokens being short-lived, narrowly scoped, and securely stored. In practice:
Access tokens with excessive lifetimes. Some implementations issue access tokens valid for hours or days, providing a long window for stolen tokens to be used. The specification does not mandate a specific lifetime, and the default in many identity providers is longer than necessary. A one-hour access token combined with a refresh token is a better pattern: the access token limits the window of abuse, and the refresh token (which should be rotated on each use) provides continuity.
Refresh tokens without rotation or binding. A refresh token that can be used indefinitely, from any client, without rotation, is a persistent credential equivalent to a password. If it leaks (through a database compromise, a log file, a stolen device), the attacker has long-term access to the victim's account. Refresh token rotation (issuing a new refresh token with each use and invalidating the old one) detects replay: if both the legitimate client and the attacker attempt to use the same refresh token, the rotation mismatch signals a compromise.
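The rotation-and-replay-detection logic is small. This in-memory sketch (class and field names are hypothetical) shows the key property: reuse of an already-rotated token revokes the whole grant:

```python
import secrets

class RefreshTokenStore:
    """Minimal sketch of refresh token rotation with replay detection."""

    def __init__(self):
        self._active = {}        # current token -> grant_id
        self._retired = {}       # previously rotated token -> grant_id
        self.revoked_grants = set()

    def issue(self, grant_id: str) -> str:
        token = secrets.token_urlsafe(32)
        self._active[token] = grant_id
        return token

    def rotate(self, token: str):
        """Exchange a refresh token for a new one, or None on failure."""
        if token in self._retired:
            # Replay of a retired token: either the client or an attacker is
            # holding a stale credential. Treat the grant as compromised.
            self.revoked_grants.add(self._retired[token])
            return None
        grant_id = self._active.pop(token, None)
        if grant_id is None or grant_id in self.revoked_grants:
            return None
        self._retired[token] = grant_id
        return self.issue(grant_id)
```

The asymmetry is the point: the legitimate client always holds the newest token, so any use of an older one is evidence that the token existed in two places.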
Overly broad scopes. Applications that request read, write, and admin scopes when they only need read:profile create tokens that are more powerful than necessary. If the token is stolen, the attacker's capabilities are bounded by the token's scope. Requesting minimal scopes is the OAuth equivalent of least-privilege IAM: it limits the blast radius of token theft.
Token storage in vulnerable locations. Storing tokens in localStorage exposes them to XSS. Storing them in cookies without HttpOnly and Secure flags exposes them to script access and network interception. Storing them in URL fragments or query parameters exposes them to referrer leakage and logging. The secure storage options (HttpOnly cookies for web applications, secure storage APIs for mobile applications, in-memory storage for SPAs with backend token management) are well-known but not universally adopted.
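For the web-application case, the cookie flags can be set with the standard library alone. A sketch (the cookie name and SameSite choice are assumptions; the right SameSite value depends on the flow):

```python
from http.cookies import SimpleCookie

def session_cookie(token: str) -> str:
    """Build a Set-Cookie header value for a server-managed session
    (hypothetical cookie name). The token never reaches page scripts."""
    cookie = SimpleCookie()
    cookie["session"] = token
    morsel = cookie["session"]
    morsel["httponly"] = True   # invisible to document.cookie, so XSS cannot read it
    morsel["secure"] = True     # never sent over plain HTTP
    morsel["samesite"] = "Lax"  # limits cross-site sending; "Strict" where flows allow
    morsel["path"] = "/"
    return morsel.OutputString()
```

Note that the pattern stores a session identifier, not the access token itself; the OAuth tokens stay server-side, keyed by the session.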
The Client Trust Problem
OAuth's security model fundamentally depends on the authorization server trusting the client to behave correctly: to validate tokens, to enforce scopes, to store credentials securely, to handle redirects properly. For first-party clients (the same organization controls both the authorization server and the client), this trust is manageable. For third-party clients (external applications accessing your users' data via OAuth), this trust is a policy decision with security consequences.
The authorization server cannot verify that a third-party client stores refresh tokens securely, or that it does not log access tokens, or that it requests only the scopes it actually needs, or that its callback endpoint is free of vulnerabilities. The authorization server can enforce some constraints (short token lifetimes, mandatory PKCE, restricted scopes), but it cannot verify the client's internal security practices.
This is why client registration and scope governance are critical controls. Every third-party client that can initiate OAuth flows against your authorization server is a potential token theft vector. The client registration process should verify the client's identity, restrict the scopes the client can request, validate the exact redirect URIs the client will use, and establish a review process for changes. Organizations that allow self-service client registration without review (the default in many identity platforms) are implicitly trusting every developer who registers a client to implement OAuth correctly. The history of OAuth vulnerabilities suggests this trust is rarely warranted.
What Effective Hardening Looks Like
The mitigations for OAuth misconfigurations are individually well-understood. The challenge is implementing all of them consistently across every client, every flow, and every edge case:
Exact redirect URI matching, enforced at the authorization server. No wildcards, no prefix matching, no scheme flexibility. Every redirect URI registered in the client configuration should be a complete, exact URL. Changes to registered redirect URIs should require security review.
PKCE for all clients. Not just public clients. Confidential clients benefit from PKCE as a defense-in-depth measure against code interception, even when they also use client secrets.
Refresh token rotation with replay detection. Every refresh token use should issue a new refresh token and invalidate the old one. If the old refresh token is used again (indicating it was stolen before the legitimate client rotated it), all tokens for the grant should be revoked.
Scope minimization and periodic review. Clients should request the minimum scopes needed. Existing client grants should be periodically reviewed to ensure they still require the scopes they were originally granted. Scopes should map to meaningful business permissions, not technical API groups.
Token binding where feasible. Sender-constrained access tokens (DPoP, Demonstrating Proof of Possession; RFC 9449) bind tokens to the client's cryptographic key pair. A stolen DPoP-bound token cannot be used without the corresponding private key. DPoP adoption is growing but still limited, partly because it requires client-side changes and partly because it adds complexity to API request handling.
The recurring theme across all of these is that OAuth security is not a protocol property. It is an implementation and operational discipline. The protocol provides the mechanisms. The security comes from using all of them, consistently, across every client and every flow, without the "small, reasonable exceptions" that consistently produce exploitable deployments.
OAuth incidents are rarely caused by cryptographic failures or protocol-level bugs. They are caused by an engineer who set redirect_uri_validation: prefix because exact matching was inconvenient, or a product manager who approved broad scopes because restricting them would delay a partner integration, or a platform team that left refresh token rotation disabled because it caused problems with a legacy client. The cumulative effect of these individually small decisions is a token theft surface that is practically exploitable by anyone who takes the time to probe it.

