In 2022, a security researcher used automated scanning to discover that thousands of mobile applications and websites had shipped production Firebase API keys, AWS access keys, Stripe secret keys, and other credentials in their client-side JavaScript bundles. The keys were not obfuscated. They were not encrypted. They were sitting in plain text, embedded in the JavaScript that every user's browser downloaded, parsed, and cached.
Some of these keys were harmless: Firebase API keys used for client-side configuration, which are intended to be public and are restricted by server-side security rules. But others were catastrophic: AWS AKIA access keys with broad S3 permissions, Stripe secret keys that could process charges, and OAuth client secrets that should never leave the server. The common thread was that the developers who embedded these keys in frontend code did not understand, or did not think about, the difference between a public and a private credential, or between a client-side and a server-side trust boundary.
Why Secrets End Up in Frontend Code
The proximate causes are well-known, but the structural cause is less often discussed. Frontend build tools (webpack, Vite, esbuild, Next.js) are designed to bundle application code and configuration into a single deployable artifact. These tools read environment variables and embed them in the output bundle. The developer sets NEXT_PUBLIC_API_KEY=xyz in their .env file, the build tool replaces references to process.env.NEXT_PUBLIC_API_KEY with the literal string xyz, and the string appears in the minified JavaScript that ships to every user.
The structural problem is that the build tool does not distinguish between credentials that are safe to expose publicly and credentials that must remain server-side. The NEXT_PUBLIC_ prefix in Next.js is a naming convention that signals intent, but it is not enforced: a developer who accidentally puts a secret in a NEXT_PUBLIC_ variable, or who uses a build tool without such conventions, ships the secret to production.
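The replacement step can be illustrated with a minimal sketch. Here inlineEnv is a hypothetical helper standing in for a bundler's define/replace pass, not any real tool's implementation:

```javascript
// Minimal sketch of build-time env inlining: every reference to
// process.env.NAME in the source is replaced with a literal string.
// inlineEnv is illustrative, not webpack/Vite/esbuild code.
function inlineEnv(source, env) {
  return source.replace(
    /process\.env\.([A-Z0-9_]+)/g,
    (match, name) => (name in env ? JSON.stringify(env[name]) : match)
  );
}

const source = 'fetch(`/data?key=${process.env.NEXT_PUBLIC_API_KEY}`);';
const env = { NEXT_PUBLIC_API_KEY: 'xyz', STRIPE_SECRET: 'sk_live_...' };

// The literal "xyz" is now baked into the shipped bundle.
console.log(inlineEnv(source, env));
// → fetch(`/data?key=${"xyz"}`);
```

In this sketch only the variables the source actually references are inlined; a build configured to embed every matching environment variable would bake in STRIPE_SECRET the moment any code touched it.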
The specific scenarios that produce secret leaks include:
Environment variable misconfiguration. A .env file contains both public and private credentials. The build tool is configured to embed all environment variables (or all variables matching a pattern), and the private credentials are included alongside the public ones. A common example is a Firebase project where the client-side API key (public, restricted by security rules) and the Firebase Admin SDK service account key (private, grants full database access) are in the same .env file. The developer embeds both, intending to use the admin key only server-side but failing to exclude it from the client bundle.
Development credentials left in code. During development, a developer hardcodes a test API key to avoid the friction of environment variable setup. The test key works against the development server. The code is committed. The code is merged. The test key is now in the production bundle. If the test key happens to work against the production API (because development and production share a key, or because the "test" key was actually a production key), the leak is immediately exploitable.
Build pipeline misconfiguration. CI/CD systems inject environment variables for deployment (cloud credentials, signing keys, registry tokens). If the build step and the deploy step share the same environment, and the build tool is configured to embed environment variables, the deployment credentials may be embedded in the built artifact. This is particularly common in monorepo setups where a single CI job handles both frontend builds and backend deployments.
How Scanning Tools Detect Secrets
Automated secret scanners use a multi-layered detection approach that combines pattern matching, entropy analysis, and contextual heuristics:
Pattern matching identifies known credential formats using regular expressions. Each credential type has a distinctive format:
| Credential Type | Pattern |
|---|---|
| AWS Access Key | AKIA[0-9A-Z]{16} |
| GitHub Token | ghp_[0-9a-zA-Z]{36} |
| Stripe Secret Key | sk_live_[0-9a-zA-Z]{24} |
| Slack Webhook | https://hooks.slack.com/services/T[A-Z0-9]+/B[A-Z0-9]+/[a-zA-Z0-9]+ |
| Google API Key | AIza[0-9A-Za-z\-_]{35} |
| JWT | eyJ[A-Za-z0-9_-]+\.eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+ |
Scanners maintain databases of hundreds of these patterns. The matching is fast and produces high-confidence results for known formats.
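The table above translates directly into code. A toy scanner over a few of these formats might look like this (the pattern set is a small illustrative subset of the hundreds a real scanner ships):

```javascript
// A handful of the credential patterns from the table above; real
// scanners such as gitleaks maintain hundreds of these.
const PATTERNS = {
  awsAccessKey: /AKIA[0-9A-Z]{16}/g,
  githubToken: /ghp_[0-9a-zA-Z]{36}/g,
  stripeSecretKey: /sk_live_[0-9a-zA-Z]{24}/g,
  googleApiKey: /AIza[0-9A-Za-z\-_]{35}/g,
};

// Scan a source string and report which pattern matched where.
function scanForSecrets(source) {
  const findings = [];
  for (const [type, pattern] of Object.entries(PATTERNS)) {
    for (const match of source.matchAll(pattern)) {
      findings.push({ type, value: match[0], index: match.index });
    }
  }
  return findings;
}

// AKIAIOSFODNN7EXAMPLE is AWS's documented example access key ID.
const bundle = 'const cfg = { key: "AKIAIOSFODNN7EXAMPLE" };';
console.log(scanForSecrets(bundle));
// → [{ type: 'awsAccessKey', value: 'AKIAIOSFODNN7EXAMPLE', index: 20 }]
```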
Entropy analysis detects secrets that do not match known patterns. Cryptographic keys, randomly-generated tokens, and API secrets tend to have high Shannon entropy: they contain a near-uniform distribution of characters. A string like a3f8b2c1d9e0f4a5b6c7d8e9f0a1b2c3 has much higher entropy than hello_world_config. Scanners flag high-entropy strings that appear in contexts where secrets are typically found (variable assignments, configuration objects, HTTP headers).
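Shannon entropy over a string's character distribution takes only a few lines to compute; this sketch reproduces the comparison from the text:

```javascript
// Shannon entropy in bits per character: -sum(p * log2(p)) over the
// frequency distribution of characters in the string.
function shannonEntropy(s) {
  const counts = {};
  for (const ch of s) counts[ch] = (counts[ch] || 0) + 1;
  let entropy = 0;
  for (const count of Object.values(counts)) {
    const p = count / s.length;
    entropy -= p * Math.log2(p);
  }
  return entropy;
}

// The hex token from the text scores higher than the config string;
// real scanners apply a per-charset threshold rather than a raw compare.
console.log(shannonEntropy('a3f8b2c1d9e0f4a5b6c7d8e9f0a1b2c3').toFixed(2));
console.log(shannonEntropy('hello_world_config').toFixed(2));
```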
Contextual heuristics consider the semantic context of a detected string. A high-entropy string assigned to a variable named apiKey, secret, token, or password is more likely to be a credential than the same string assigned to a variable named hash or checksum. A string used in an Authorization header or a fetch() call to an external API is more likely to be a live credential than a string in a comment or a test assertion.
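A simple contextual scorer might combine these signals as follows. The name lists, weights, and the contextScore helper are all illustrative assumptions, not any particular scanner's rules:

```javascript
// Hypothetical scoring heuristic: combine the variable name a string is
// assigned to with a crude length signal to rank candidate findings.
const SECRET_NAMES = /api[_-]?key|secret|token|password|credential/i;
const BENIGN_NAMES = /hash|checksum|digest|etag|uuid/i;

function contextScore(variableName, value) {
  let score = 0;
  if (SECRET_NAMES.test(variableName)) score += 2; // suspicious name
  if (BENIGN_NAMES.test(variableName)) score -= 2; // likely not a credential
  if (value.length >= 20) score += 1;              // long opaque string
  return score;
}

// Same high-entropy string, very different scores depending on context.
console.log(contextScore('apiKey', 'a3f8b2c1d9e0f4a5b6c7d8e9f0a1b2c3'));       // → 3
console.log(contextScore('fileChecksum', 'a3f8b2c1d9e0f4a5b6c7d8e9f0a1b2c3')); // → -1
```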
The Blast Radius Varies Dramatically
Not all secrets in frontend code are equally dangerous. Understanding the blast radius of different credential types is important for triage:
Public API keys (low risk). Firebase client-side API keys, Google Maps API keys with domain restrictions, and Stripe publishable keys are designed to be used in client-side code. They are scoped by server-side rules (Firebase Security Rules, API key restrictions, publishable-vs-secret key separation). Finding these in a JavaScript bundle is not a security incident, though it is still worth verifying that the server-side restrictions are correctly configured.
Unrestricted API keys (medium to high risk). Google API keys without domain or IP restrictions, SendGrid API keys with full send permissions, and Twilio credentials can be abused by anyone who possesses them. The blast radius depends on what the API allows: sending emails, making phone calls, accessing data, or incurring charges.
Cloud provider credentials (critical risk). AWS access keys (AKIA...), GCP service account keys, and Azure service principal credentials in frontend code are a critical incident. These credentials may provide access to databases, storage, compute resources, and other cloud infrastructure. Even "read-only" cloud credentials can be used to enumerate resources, discover further attack paths, and exfiltrate data.
OAuth client secrets and signing keys (critical risk). These credentials enable impersonation of the application to identity providers and manipulation of authentication tokens. A leaked OAuth client secret allows the attacker to exchange authorization codes for access tokens, impersonate the application, and potentially access user data at scale.
Architectural Prevention
The only reliable prevention is ensuring that secrets never enter the frontend build pipeline:
Server-side proxy for API calls. Instead of calling external APIs directly from the browser with an API key, route all API calls through a backend proxy that adds the credential server-side. The frontend calls /api/maps-proxy?query=... instead of https://maps.googleapis.com/maps/api?key=SECRET&query=.... The API key never leaves the server.
Build-time environment isolation. The frontend build environment should contain only public configuration values. Private credentials should be available only to the backend deployment step. In CI/CD, this means separate environment variable sets for the frontend build job and the backend deploy job, with no overlap of secret credentials.
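One way to enforce this separation inside the build job itself is an explicit allowlist: pass the frontend build only the variables carrying a public prefix. A sketch, where the PUBLIC_ prefix and the publicEnvOnly helper are illustrative conventions:

```javascript
// Sketch of build-time allowlisting: only variables with an explicitly
// public prefix are handed to the frontend build; everything else is
// withheld from the bundler's environment.
function publicEnvOnly(env, prefix = 'PUBLIC_') {
  return Object.fromEntries(
    Object.entries(env).filter(([name]) => name.startsWith(prefix))
  );
}

const ciEnv = {
  PUBLIC_API_URL: 'https://api.example.com',
  AWS_SECRET_ACCESS_KEY: 'deploy-only-secret', // must not reach the bundle
  STRIPE_SECRET_KEY: 'sk_live_...',            // must not reach the bundle
};

console.log(publicEnvOnly(ciEnv));
// → { PUBLIC_API_URL: 'https://api.example.com' }
```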
Pre-commit and pre-deploy scanning. Tools like gitleaks, TruffleHog, and detect-secrets can be integrated into git pre-commit hooks and CI pipelines to catch secrets before they are committed to version control or deployed to production. A pre-commit hook that fails on detected secrets prevents the credential from entering the repository at all: the cheapest and most effective point of intervention.
Runtime secret detection as a safety net. Scanning deployed JavaScript bundles from the outside (using the same tools an attacker would use) provides a continuous verification that no secrets have leaked through the preventive controls. This is the "trust but verify" layer that catches secrets that slipped through pre-commit hooks, CI scans, and code review.
The goal is not to catch secrets after they leak. It is to make leaking secrets structurally impossible by ensuring that the frontend build process never has access to credentials that should not be in the browser. Every secret in a JavaScript bundle is a secret that the entire internet can read. The architecture should enforce that distinction, not rely on developer diligence to maintain it.

