Thousands of Public Google Cloud API Keys Exposed with Gemini Access After API Enablement

Oh Look, Another API Key Leak. Déjà Vu.
After twenty years in DevSecOps, you’d expect we’d finally stop hardcoding credentials. But here we are—Google Cloud API keys exposed in public code and Gemini endpoints giving up data for free. The headlines repeat every year (Truffle Security’s April 2024 report), and yet developers and ops teams keep making the same mistakes.
Rapid Response: What to Do Right Now
If you’ve just discovered an exposed Google API key—stop reading and do this:
- Revoke the exposed key in the Google Cloud Console ASAP (docs).
- Create and rotate new credentials—prefer service accounts over API keys (docs).
- Identify scope of compromise: comb Cloud Audit Logs (docs), check billing for spikes, and review Vertex AI and Gemini usage.
- Scan all repos/history for other secrets: use TruffleHog, Gitleaks, GitGuardian.
- Purge exposed secrets from git history: git-filter-repo or BFG Repo-Cleaner.
- Notify stakeholders and, if in regulated industries, escalate per incident response playbook.
For a downloadable checklist (junior-friendly, <1 hour triage), grab the PDF here.
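The revoke-and-rotate steps above can be sketched with the gcloud CLI. A hedged sketch, not a full runbook; the project name and KEY_ID placeholders are illustrative, and the commands require an authenticated gcloud session:

```shell
# List API keys in the affected project to find the exposed one.
gcloud services api-keys list --project=my-project

# Revoke the compromised key by its key ID (illustrative placeholder).
gcloud services api-keys delete KEY_ID --project=my-project

# Prefer a service account over a raw API key for the replacement.
gcloud iam service-accounts create gemini-backend \
  --display-name="Gemini backend (post-incident)" --project=my-project

# Pull a week of audit log entries to scope the compromise window.
gcloud logging read \
  'protoPayload.serviceName="aiplatform.googleapis.com"' \
  --project=my-project --limit=50 --freshness=7d
```

Run the logging query before deleting anything else, so you capture the abuse window while the evidence is fresh.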
Why We Keep Falling for This
The "But It’s Just a Test Project" Pitfall
Anonymized real incident (2022): Inherited a GCP deployment where an API key—a simple “AIzaSy…” credential—was left in a public GitHub repo. The developer believed default restrictions would prevent abuse. Within days, attackers used the key to run Gemini and Vertex AI inference jobs, triggering five figures in unexpected billing. The fix involved: disabling the affected key, rotating all relevant service accounts, and purging secrets from all repositories and CI/CD artifacts. Audit logs revealed dozens of unauthorized calls—Google’s official logging guidance proved crucial.
An unrestricted API key can call any API enabled on its project (docs); unless you explicitly restrict it by IP, referrer, and API, it remains open to abuse. API keys identify a billing project, they don't authenticate a user: don't treat them as passwords, and never treat them as magic amulets that ward off attackers.
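Restricting an existing key is a one-liner. A hedged sketch using gcloud's api-keys update command; the key ID, service name, and IP range are illustrative, and a key can carry only one client restriction type (IPs or referrers, not both):

```shell
# Lock an existing key to a single API and a known caller IP range.
gcloud services api-keys update KEY_ID \
  --api-target=service=generativelanguage.googleapis.com \
  --allowed-ips=203.0.113.0/24
```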
Client-side Credentials: A Disaster in Waiting
Embedding API keys or service account credentials in frontend code is almost always a design blunder. Gemini endpoints and all generative AI APIs should be gated behind server-side authentication. Use OAuth2 or token exchange patterns (docs). Never expose credentials—long-lived or otherwise—in client bundles where CSP, CORS, or referrer restrictions are cosmetic at best.
- API keys: grant access to specific Google APIs, but are easily abused unless tightly constrained.
- Service account keys: provide broad access (typically via IAM), and leaking a JSON key is a direct path to privilege escalation.
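When a service account JSON key leaks, you can disable it without deleting the account itself, which keeps the workload alive while you rotate. A sketch with placeholder account email and key ID:

```shell
# Enumerate keys on the suspect service account.
gcloud iam service-accounts keys list \
  --iam-account=app-sa@my-project.iam.gserviceaccount.com

# Disable the leaked key immediately; delete once traffic is confirmed clean.
gcloud iam service-accounts keys disable KEY_ID \
  --iam-account=app-sa@my-project.iam.gserviceaccount.com
gcloud iam service-accounts keys delete KEY_ID \
  --iam-account=app-sa@my-project.iam.gserviceaccount.com
```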
Detection and Monitoring That Actually Works
Where to Begin:
- Cloud Audit Logs: Use filters to spot suspicious Gemini/Vertex AI calls, especially from unknown IPs. Example filter:
resource.type="project" protoPayload.methodName="google.cloud.genai.v1.Predict"
- Billing anomaly alerts: Set alerts for usage spikes (docs) and configure quota/budget alarms.
- Enable Data Access logs: These capture who accessed which resources at what time, essential for forensics (docs).
- Log-based alerts: Route logs to a SIEM or use Google’s log-based metrics to trigger alerts for abnormal API key usage (docs).
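The log-based alerting bullet can be wired up from the CLI. A hedged sketch: the metric name is mine, and the filter reuses the example query from the Cloud Audit Logs bullet; hook the resulting metric to a Cloud Monitoring alerting policy:

```shell
# Create a log-based metric counting Gemini/Vertex AI Predict calls;
# alert on it from Cloud Monitoring once it exists.
gcloud logging metrics create genai-predict-calls \
  --description="Counts Gemini/Vertex AI Predict calls" \
  --log-filter='resource.type="project" AND protoPayload.methodName="google.cloud.genai.v1.Predict"'
```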

The Architecture Nightmare We're Still Ignoring
You don’t get extra points for speed if you leave keys in the open.
- Treat LLM endpoints (Gemini, Vertex AI) as server-side only. Enforce OAuth2 or Workload Identity Federation, and never embed credentials in client code.
- Use VPC Service Controls to fence off sensitive APIs, even if devs mess up elsewhere.
- Restrict API keys: by referrer, IP, and API (docs).
- Least-privilege roles: assign only the required permissions (principle of least privilege). Create via:
gcloud iam service-accounts create my-sa --display-name="Least Privilege Service Account"
Then attach only essential roles (docs).
- Workload Identity Federation: drop the old approach of exporting JSON keys; use federation so apps get short-lived tokens.
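Standing up federation takes a pool, a provider, and an impersonation binding. A minimal sketch for a GitHub Actions-style OIDC issuer; the pool, provider, service account, and project number here are all illustrative placeholders:

```shell
# One-time: create a workload identity pool and an OIDC provider in it.
gcloud iam workload-identity-pools create ci-pool \
  --location=global --display-name="CI pool"

gcloud iam workload-identity-pools providers create-oidc github \
  --location=global --workload-identity-pool=ci-pool \
  --issuer-uri="https://token.actions.githubusercontent.com" \
  --attribute-mapping="google.subject=assertion.sub"

# Let federated identities impersonate a service account;
# no JSON key is ever exported.
gcloud iam service-accounts add-iam-policy-binding \
  ci-sa@my-project.iam.gserviceaccount.com \
  --role=roles/iam.workloadIdentityUser \
  --member="principalSet://iam.googleapis.com/projects/123456/locations/global/workloadIdentityPools/ci-pool/*"
```

In practice you'd scope the member to a specific repo attribute rather than the whole pool.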
Prevention Playbook: Stop Leaks Before They Happen
- Secret Manager: Store credentials out of code (docs).
- Workload Identity Federation: Avoid JSON keys—use federated access everywhere, especially for CI/CD pipelines.
- API key restriction:
- In Console: “Restrict by referrer/IP/API” (docs).
- By CLI:
gcloud services api-keys create --api-target=service=... --allowed-ips=...
- VPC Service Controls: Isolate sensitive projects (docs).
- IAM least-privilege: Regularly audit with
gcloud projects get-iam-policy PROJECT_ID
Rotate keys and review permissions monthly.
- CI/CD secret scanning:
- Pre-commit: Use Gitleaks.
- Full repo/history: TruffleHog, GitGuardian, git-filter-repo, BFG Repo-Cleaner.
- Reference: OWASP Secret Detection Cheat Sheet.
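A pre-commit hook doesn't need a full scanner to catch the obvious case. A minimal sketch that greps staged content for the AIza... key format; the function name and pattern-only scope are my own, and real tools like Gitleaks cover far more secret types:

```shell
#!/bin/sh
# scan_for_keys: print any Google API key shaped strings (AIza followed by
# 35 chars of [0-9A-Za-z_-]) found on stdin; exits non-zero if none match.
scan_for_keys() {
  grep -Eo 'AIza[0-9A-Za-z_-]{35}'
}

# Hypothetical pre-commit hook body: block the commit if a key appears
# in the staged diff.
# git diff --cached -U0 | scan_for_keys >/dev/null && {
#   echo "refusing commit: possible Google API key staged" >&2; exit 1; }
```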
The Harsh Reality and What Needs to Change
Exposed keys are a symptom, not the disease.
The root cause: dev teams sacrificing basic security for speed, misconfigured IAM policies, and treating cloud primitives as “black boxes.” Security culture, incentives, and mandatory training still lag (NIST guidance).
You can mitigate risk, but it takes discipline—technical and organizational.
Author & How to Get Peer-Reviewed Advice
Written by:
Alex Sharpe
Principal Security Engineer (DevSecOps, Cloud), 20 years in incident response
Clients: Fortune 100, fintech, and SaaS. Prior posts: Black Hat 2022 talk, incident report on GCP credential abuse.
Published: 2024-06-13
Closing Thought
You’ll never outpace attackers by hoping the defaults will save you. Treat credentials like radioactive material: lock them down, rotate relentlessly, and assume breach is inevitable. Next time you see a leaked key, ask yourself: do you want to be the subject of the post-mortem, or the engineer who stopped it cold?