Non-Human Identities (NHI): The Hidden Security Crisis Powering AI Agent Attacks in 2026
Machine identities now outnumber humans 40–100:1 in enterprise environments. With AI agents minting thousands of new credentials daily, NHIs have become the fastest-growing and least-governed attack surface in cybersecurity. Here is what every security team needs to know.
The Identity Crisis Nobody Saw Coming
In 2024, a security team at a Fortune 500 bank audited their identity infrastructure expecting to find around 50,000 human user accounts. What they found was staggering: over 4.2 million non-human identities, spanning service accounts, API keys, OAuth tokens, certificates, and bot credentials. Most were unmanaged, many carried excessive privileges, and thousands were completely abandoned.
This is not an anomaly. It is the new normal.
Non-human identities (NHIs) now outnumber human identities in enterprise environments by ratios ranging from 40:1 to over 100:1, with some hyper-automated organizations hitting 500:1. With the explosion of AI agents in 2026, that ratio is accelerating at a pace that traditional identity and access management (IAM) tools were never designed to handle.
Gartner named "Identity and Access Management Adapts to AI Agents" as one of its Top 6 Cybersecurity Trends for 2026, published in February. The World Economic Forum called NHIs "agentic AI's new frontier of cybersecurity risk." Yet most organizations still manage non-human identities the same way they did in 2019: spreadsheets, manual rotation schedules, and shared API keys that never expire.
The result: 68% of IT security incidents now involve machine identities. 50% of enterprises have already suffered a breach due to unmanaged NHIs. And the window to get ahead of this is closing fast.
What Exactly Is a Non-Human Identity?
A non-human identity is any credential, token, or secret that allows a software system, automated process, or AI agent to authenticate and access resources, with no human directly involved in that specific transaction.
| NHI Type | Examples | Risk Level |
|---|---|---|
| Service Accounts | Database service accounts, Windows service accounts | High: often over-privileged, rarely rotated |
| API Keys | AWS access keys, Stripe keys, OpenAI API keys | Critical: frequently hardcoded, long-lived |
| OAuth Tokens | App-to-app authorization, third-party integrations | High: broad scope, hard to track |
| Certificates & Secrets | TLS certs, SSH keys, JWT signing keys | Medium-High: expiry gaps cause outages and breaches |
| CI/CD Credentials | GitHub Actions secrets, pipeline tokens | Critical: direct access to code and infrastructure |
| AI Agent Identities | Autonomous agent API keys, MCP server tokens, tool access tokens | Emerging: rapidly growing, almost no governance |
According to the 2026 NHI Reality Report, the average enterprise now has over 250,000 NHIs across cloud environments. 71% have not been rotated within recommended timeframes, 97% carry excessive privileges beyond what their function requires, and only 15% of organizations feel highly confident in their ability to prevent NHI-based attacks.
Why NHIs Are the Attacker's Favorite Target
Non-human identities offer attackers advantages that compromised human credentials simply do not.

- **They are persistent.** A human employee who notices suspicious activity might flag it. An API key never notices anything. Compromised NHIs can remain active for months or years. The average dwell time after an NHI breach is over 200 days, more than three times the average for compromised human accounts.
- **They are over-privileged by default.** Developers creating service accounts or API keys tend to grant broad permissions to avoid friction. The OWASP NHI Top 10 lists excessive permissions as the single most prevalent NHI risk. An API key with broad read/write access to cloud storage is a skeleton key, not a service credential.
- **They multiply the blast radius.** The 2025 Salesloft-Drift incident made this concrete: attackers who compromised OAuth tokens connecting multiple SaaS platforms gained access to hundreds of downstream customer environments through a single credential. The blast radius was 10x greater than a typical human credential breach, because that one NHI was trusted by many interconnected systems.
- **They are invisible without active effort.** Only 21% of executives report complete visibility into agent permissions, tool usage, or data access patterns. Most NHI breaches are discovered through external reports or accidental discovery rather than through internal monitoring.
- **They never expire unless you make them.** API keys and service account passwords are often set once and forgotten, surviving employee departures, product pivots, and infrastructure migrations for years.
The Five OWASP NHI Risks Security Teams Must Address
OWASP published a dedicated Non-Human Identities Top 10 framework in 2025, recognizing that NHI security cannot be handled by existing application security tools or human IAM processes. Here are the five most critical risks and what to do about each.
1. Improper Offboarding
When a service is deprecated, an employee leaves, or a vendor relationship ends, the NHIs associated with those entities are rarely cleaned up. These "zombie credentials" retain full access long after their purpose is gone.
Attackers actively hunt for zombie credentials by enumerating endpoints, scanning certificate transparency logs, and searching code repositories for abandoned keys. Systems with no active monitoring are ideal staging grounds.

What security teams should do:
- Tie NHI deprovisioning directly to your HR offboarding and vendor management workflows as an automated gate, not a manual follow-up step
- Set hard expiry dates on all NHIs at creation time; no NHI should exist without an expiry
- Run a monthly report of NHIs with no activity in 30 days and escalate immediately
- Assign every NHI a named human owner; if the owner leaves, the NHI is automatically flagged for review
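The offboarding rules above can be expressed as a simple, automatable inventory filter. This is an illustrative sketch, not any vendor's API: the `NHIRecord` fields and the 30-day idle threshold are assumptions standing in for whatever schema your NHI discovery platform exports.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

# Hypothetical inventory record; real field names depend on your NHI platform.
@dataclass
class NHIRecord:
    name: str
    owner: Optional[str]            # named human owner, or None
    expires_at: Optional[datetime]  # None means "never expires"
    last_used: Optional[datetime]   # None means "no recorded activity"

def flag_for_review(nhis, now=None, idle_days=30):
    """Return NHIs violating the offboarding rules in the text:
    no named owner, no expiry date, or no activity within `idle_days`."""
    now = now or datetime.now(timezone.utc)
    idle_cutoff = now - timedelta(days=idle_days)
    flagged = []
    for nhi in nhis:
        reasons = []
        if nhi.owner is None:
            reasons.append("no named owner")
        if nhi.expires_at is None:
            reasons.append("no expiry date")
        if nhi.last_used is None or nhi.last_used < idle_cutoff:
            reasons.append(f"idle > {idle_days} days")
        if reasons:
            flagged.append((nhi.name, reasons))
    return flagged

now = datetime(2026, 3, 1, tzinfo=timezone.utc)
nhis = [
    NHIRecord("ci-deploy-key", "alice", now + timedelta(days=60), now - timedelta(days=2)),
    NHIRecord("legacy-etl-svc", None, None, now - timedelta(days=400)),
]
print(flag_for_review(nhis, now=now))
```

The value of encoding the rules this way is that "flagged for review" becomes a deterministic nightly job rather than a judgment call buried in a spreadsheet.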
2. Secret Leakage
Credentials hardcoded in source code, committed to version control, baked into container images, or stored in CI/CD environment variables represent one of the most consistently exploited NHI risks. This problem is growing because AI coding assistants frequently generate code with placeholder credentials that developers replace with real values without a secret-scanning gate in place.

What security teams should do:
- Deploy pre-commit secret scanning on every developer workstation and in every CI/CD pipeline; tools like detect-secrets, Trufflehog, or GitGuardian can block credential commits before they reach the repository
- Rotate any credential that may have touched a repository, even a private one, as a precaution
- Never allow secrets in environment variables for production systems; use a secrets vault instead
- Make secret scanning part of your definition of "done" for any code review
The following .pre-commit-config.yaml gets you scanning in under five minutes:
```yaml
# .pre-commit-config.yaml - add to repo root, then run: pre-commit install
repos:
  - repo: https://github.com/trufflesecurity/trufflehog
    rev: v3.88.0
    hooks:
      - id: trufflehog
        name: TruffleHog secret scan
        entry: trufflehog git file://. --since-commit HEAD --fail
        language: system
        pass_filenames: false
  - repo: https://github.com/Yelp/detect-secrets
    rev: v1.5.0
    hooks:
      - id: detect-secrets
        args: ['--baseline', '.secrets.baseline']
```
3. Excessive Permissions
97% of NHIs carry more access than they need. This is not an edge case; it is the default outcome when permissions are set once and never revisited. Unlike human identity access reviews, NHI permissions are almost never included in quarterly review cycles.
The fix is adopting least privilege by design: every NHI should have access to the minimum set of resources required for its specific function, verified against actual usage data, not developer assumptions.

What security teams should do:
- Pull 90-day usage logs for all NHIs and compare actual permissions used against permissions granted; revoke everything unused
- Establish a permission approval process for new NHIs; no NHI should be created with admin or wildcard scopes without explicit sign-off
- Add NHI permission reviews to your quarterly access review cycle alongside human accounts
- Use cloud-provider IAM tools (AWS Access Analyzer, Azure Access Reviews) to flag NHIs with unused permissions automatically
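The core of that review, diffing granted permissions against 90-day usage and flagging wildcard scopes for sign-off, fits in a few lines. A minimal sketch, with illustrative permission strings rather than any real account's policy:

```python
def review_nhi_permissions(granted: set[str], used_in_90_days: set[str]) -> dict:
    """Two checks from the list above: revoke grants with no recorded use
    in the window, and surface wildcard scopes needing explicit sign-off."""
    return {
        "revoke_candidates": sorted(granted - used_in_90_days),
        "needs_signoff": sorted(p for p in granted if "*" in p),
    }

# Illustrative permissions for a single service account
granted = {"s3:GetObject", "s3:PutObject", "s3:DeleteObject", "dynamodb:*"}
used = {"s3:GetObject", "s3:PutObject"}
print(review_nhi_permissions(granted, used))
```

Feeding this from real access logs rather than policy documents is the key point: usage data, not developer assumptions, decides what stays.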
AWS Access Analyzer makes it straightforward to generate a least-privilege policy from actual usage:
```bash
# Generate a report of services accessed by a role in the last 90 days
aws iam generate-service-last-accessed-details \
  --arn arn:aws:iam::123456789012:role/MyAgentRole

# List services that were NEVER used - safe to remove from the policy
aws iam get-service-last-accessed-details \
  --job-id <job-id> \
  --query 'ServicesLastAccessed[?TotalAuthenticatedEntities==`0`].ServiceName'
```
Any service with zero authenticated entities in 90 days can be removed from the role's policy with no operational impact.
4. Third-Party NHI Risks
Every SaaS integration, vendor connection, and OAuth application you authorize creates NHIs you do not directly control. These third-party machine identities operate under your trust, with your data, governed by someone else's security practices.
The February 2026 Moltbook breach illustrated this clearly: attackers compromised a third-party integration on an AI agent platform, then pivoted to client environments across the entire platform through the trusted NHIs that integration held.

What security teams should do:
- Maintain a complete, searchable inventory of all third-party NHIs and OAuth authorizations; this means auditing your SaaS app catalog, not just your own infrastructure
- Review and narrow the scope of all third-party OAuth grants; most grant far more than the vendor actually requires
- Set a 90-day review cycle for all third-party NHI authorizations; revoke any you cannot justify
- Monitor third-party NHI activity via your SIEM; unusual access patterns from vendor integrations are a common early indicator of supply chain compromise
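Narrowing third-party OAuth grants starts with a comparison between what each integration was granted and what its documented function requires. A sketch of that check, where the app names and scope strings are hypothetical examples, not a real vendor catalog:

```python
# Hypothetical "required scopes" catalog: what each vendor integration's
# documented function actually needs, maintained by the security team.
REQUIRED_SCOPES = {
    "crm-sync-app": {"contacts.read"},
    "calendar-bot": {"calendar.read", "calendar.write"},
}

def overbroad_grants(granted_by_app: dict) -> dict:
    """Return, per app, the granted scopes that exceed documented need.
    Apps absent from the catalog are treated as requiring nothing,
    so every scope they hold is flagged."""
    findings = {}
    for app, granted in granted_by_app.items():
        excess = sorted(granted - REQUIRED_SCOPES.get(app, set()))
        if excess:
            findings[app] = excess
    return findings

granted = {
    "crm-sync-app": {"contacts.read", "contacts.write", "mail.read"},
    "calendar-bot": {"calendar.read", "calendar.write"},
}
print(overbroad_grants(granted))
```

Running this against your SaaS app catalog on the 90-day cycle mentioned above turns "review all third-party grants" from an open-ended audit into a short revocation list.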
5. Insecure Authentication Methods
Most NHIs still authenticate with static, long-lived credentials: API keys that never rotate, passwords stored in config files, certificates with multi-year lifetimes. Modern workload identity standards such as OIDC, short-lived tokens, and certificate-bound access dramatically shrink the attack surface but require deliberate investment to implement.

What security teams should do:
- Prioritize eliminating static API keys in cloud environments first; AWS, Azure, and GCP all offer workload identity federation that removes the need for long-lived keys entirely
- Set a maximum credential lifetime policy: API keys expire in 90 days, service account passwords rotate every 30 days, certificates are renewed automatically before expiry
- Implement Just-in-Time (JIT) access for AI agents: credentials are issued for the duration of a specific task and revoked immediately after, rather than granted as standing access
- Build break-glass procedures so that if an agent is compromised, all its credentials can be revoked within minutes rather than hours
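The JIT and break-glass patterns above can be sketched as a minimal credential broker. This is an illustrative toy, not a specific product's API: the token format, 15-minute TTL, and `JITBroker` interface are all assumptions.

```python
import secrets
from datetime import datetime, timedelta, timezone

class JITBroker:
    """Toy just-in-time credential broker: tokens are scoped to one task,
    expire after a short TTL, and can all be revoked at once (break-glass)."""

    def __init__(self, ttl_minutes: int = 15):
        self.ttl = timedelta(minutes=ttl_minutes)
        self._active = {}  # token -> (task_scope, expiry)

    def issue(self, task_scope: str, now: datetime) -> str:
        # Credential exists only for the duration of one task.
        token = secrets.token_urlsafe(16)
        self._active[token] = (task_scope, now + self.ttl)
        return token

    def is_valid(self, token: str, now: datetime) -> bool:
        entry = self._active.get(token)
        return entry is not None and now < entry[1]

    def revoke_all(self) -> int:
        """Break-glass path: revoke every outstanding credential immediately."""
        count = len(self._active)
        self._active.clear()
        return count

now = datetime(2026, 3, 1, tzinfo=timezone.utc)
broker = JITBroker(ttl_minutes=15)
token = broker.issue("read:invoices", now)
print(broker.is_valid(token, now))                          # inside the window
print(broker.is_valid(token, now + timedelta(minutes=16)))  # past the TTL
```

In production this role is played by a secrets vault or workload identity provider; the design point is the same, though: the exposure window shrinks from "always" to "minutes per task", and compromise response becomes one revocation call.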
Just-in-Time Access vs. Standing Access
One of the highest-impact changes security teams can make for AI agent NHIs is moving from standing access to just-in-time access. The difference in risk is significant:
| Factor | Standing Access | Just-in-Time Access |
|---|---|---|
| Exposure window | Continuous | Minutes per task |
| Blast radius if compromised | All granted permissions, always | Only the current task's scope |
| Audit granularity | Low (access is always authorized) | High (every task logged individually) |
| Operational overhead | Low | Medium (requires vault integration) |
| Overall risk | High | Low |
The NHI Governance Maturity Model
Most organizations are at Level 1. Level 3 is where risk becomes manageable for enterprises deploying AI at scale.
| Level | What it looks like | Approx. share of enterprises in 2026 |
|---|---|---|
| Level 0: Unaware | No NHI inventory. Credentials managed ad hoc. No rotation. | ~20% |
| Level 1: Reactive | Partial inventory. Rotation happens after incidents. Basic vault usage. | ~40% |
| Level 2: Managed | Full inventory. Automated rotation. Least-privilege enforcement underway. Regular NHI reviews. | ~25% |
| Level 3: Optimized | JIT access. Continuous behavioral monitoring. AI agent identities fully governed and audited. | ~15% |
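One way to make the model operational is a capability checklist that computes your current level. The capability names below are shorthand for the table rows above, not a formal standard, and the "each level requires the ones below it" rule is an assumption of this sketch:

```python
# Shorthand capability sets for each maturity level in the table above.
CAPABILITIES_BY_LEVEL = {
    1: {"partial_inventory", "basic_vault"},
    2: {"full_inventory", "automated_rotation", "least_privilege_reviews"},
    3: {"jit_access", "behavioral_monitoring", "agent_identity_governance"},
}

def maturity_level(capabilities: set) -> int:
    """Highest level whose capabilities (and all lower levels') are present.
    Missing any lower-level capability caps you at the level below it."""
    level = 0
    for lvl in (1, 2, 3):
        if CAPABILITIES_BY_LEVEL[lvl] <= capabilities:
            level = lvl
        else:
            break
    return level

print(maturity_level({"partial_inventory", "basic_vault"}))
```

A self-assessment like this is crude, but it gives leadership a single number to track quarter over quarter as the governance program matures.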
Security Team Action Plan
If your organization is deploying AI agents today without a formal NHI governance program, here is a concrete starting point.

This week: establish visibility
- Run a discovery scan across all cloud environments and CI/CD pipelines to inventory existing NHIs; use tools like Entro, Clutch, or your cloud provider's IAM analysis tools
- Identify all credentials with no expiry date; these are your highest priority for immediate remediation
- List every AI agent deployment and document what credentials each agent is using, who created them, and whether they have a named owner
- Deploy a secrets management platform if you do not have one (HashiCorp Vault, AWS Secrets Manager, or Azure Key Vault are all viable starting points)
- Implement pre-commit secret scanning across all developer repositories
- Establish an NHI ownership policy: every NHI must have a named human owner on record
- Review and reduce permissions on your 50 highest-privilege NHIs using actual 90-day usage data
- Implement automated credential rotation for all NHIs, starting with cloud service accounts and CI/CD credentials, which have the highest blast radius
- Build behavioral baselines for AI agent identities and configure alerts for anomalous access patterns
- Integrate NHI offboarding into HR and vendor management workflows so credentials are revoked automatically when employees leave or vendor contracts end
- Design a JIT access model for new AI agent deployments going forward
The Bigger Picture
The NHI crisis is fundamentally a governance gap accelerated by AI. The same patterns that created shadow IT in the 2010s (teams moving fast, credentials created for immediate needs, no lifecycle management) are producing shadow NHI at a far larger scale in the 2020s.
Each autonomous AI workflow, each new integration, each MCP server connection adds to the NHI footprint. Unlike human identities, NHIs do not have offboarding interviews or IT-managed device wipes. They persist silently until someone notices them, ideally your security team before an attacker does.
Gartner projects that by 2027, organizations without formal NHI governance will experience three times the identity-related breach rate of those that have it. The gap between Level 0 and Level 2 is not minor technical debt. For organizations deploying AI agents at scale, it is an existential risk.
The tools, frameworks, and practices exist today. The OWASP NHI Top 10, Gartner's IAM guidance, and a growing market of NHI management platforms give security teams a clear path. The only remaining variable is whether NHI governance gets treated as a first-class security priority rather than something addressed only after the next breach.
Start with visibility. You cannot protect what you cannot see.
Further Reading
The following reports and frameworks were used as the basis for this article:
- [Gartner: Top Cybersecurity Trends for 2026](https://www.gartner.com/en/newsroom/press-releases/2026-02-05-gartner-identifies-the-top-cybersecurity-trends-for-2026) - Gartner's official February 2026 press release naming IAM for AI agents as a top trend
- [OWASP Non-Human Identities Top 10](https://owasp.org/www-project-non-human-identities-top-10/) - The OWASP framework for NHI security risks
- [AI Agents Are Creating an Identity Security Crisis in 2026](https://www.iansresearch.com/resources/all-blogs/post/security-blog/2026/02/24/ai-agents-are-creating-an-identity-security-crisis-in-2026) - IANS Research analysis
- [Non-Human Identities: Agentic AI's New Frontier of Cybersecurity Risk](https://www.weforum.org/stories/2025/10/non-human-identities-ai-cybersecurity/) - World Economic Forum
- [AI Agents: The Next Wave Identity Dark Matter](https://thehackernews.com/2026/03/ai-agents-next-wave-identity-dark.html) - The Hacker News, March 2026
- [2026 NHI Reality Report](https://cyberstrategyinstitute.com/2026-nhi-reality-report/) - Cyber Strategy Institute
- [Why Non-Human Identities Are Your Biggest Security Blind Spot in 2026](https://www.csoonline.com/article/4125156/why-non-human-identities-are-your-biggest-security-blind-spot-in-2026.html) - CSO Online
- [The State of Non-Human Identity and AI Security](https://cloudsecurityalliance.org/artifacts/state-of-nhi-and-ai-security-survey-report) - Cloud Security Alliance