
The LiteLLM Supply Chain Attack — When Your AI Gateway Becomes the Threat

TeamPCP compromised LiteLLM through Trivy, weaponizing the skeleton key package that holds every AI app's API keys. The trust chain broke where it mattered most.

6 min · Mar 24, 2026
trigger: BerriAI/litellm

#supply-chain-attack #ai-infrastructure #pypi-security #credential-theft

LiteLLM — the Python package that routes API calls to 100+ LLM providers and serves as the gateway for thousands of AI applications — was compromised on March 24, 2026. Versions 1.82.7 and 1.82.8 on PyPI contain a credential-stealing payload that runs on install, no import needed. The cruelest irony: TeamPCP didn’t hack a random library. They hacked the skeleton key. And they got in through Trivy — a security scanner.

  • What: LiteLLM 1.82.7 and 1.82.8 published to PyPI contain credential-stealing malware
  • How: TeamPCP compromised Trivy security scanner → stole LiteLLM’s PyPI credentials → bypassed GitHub releases
  • Impact: 95 million monthly downloads, every AI app’s API keys at risk
  • Action: Run pip show litellm now. If it reports 1.82.7 or 1.82.8, rotate ALL credentials immediately
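The version check in the action item above can also be done programmatically, which is handy when you need to triage many machines or virtualenvs. This is a minimal sketch: the function name `check_litellm` is my own, and the known-bad version set is taken directly from this advisory.

```python
from importlib import metadata

# Versions named in the advisory as carrying the payload.
COMPROMISED = {"1.82.7", "1.82.8"}

def check_litellm() -> str:
    """Return a triage verdict for the locally installed litellm, if any."""
    try:
        version = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return "litellm is not installed in this environment"
    if version in COMPROMISED:
        return f"COMPROMISED ({version}): rotate ALL credentials now"
    return f"installed version {version} is not one of the known-bad releases"

print(check_litellm())
```

Run it with the same interpreter your application uses — a clean system Python says nothing about the virtualenv your agent framework actually runs in.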

LiteLLM Compromise — What Happened

TeamPCP published backdoored LiteLLM versions directly to PyPI on March 24, bypassing the project’s GitHub release process entirely. Version 1.82.7 embedded the payload in proxy_server.py, requiring an import to trigger. Version 1.82.8 escalated with a .pth file — Python’s site module processes these at interpreter startup, meaning every Python process on the machine triggers the payload automatically.
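The .pth trick deserves a closer look, because it is legitimate (if obscure) Python behavior rather than an exploit: any line in a .pth file that begins with `import` is exec'd by the site module when it processes that directory. The harmless demo below illustrates the mechanism — not the actual payload — using `site.addsitedir` to force processing of a temporary directory; the function name `demo_pth_hook` is my own.

```python
import os
import subprocess
import sys
import tempfile

def demo_pth_hook() -> str:
    """Show that the site module exec()s 'import ...' lines in .pth files."""
    with tempfile.TemporaryDirectory() as d:
        with open(os.path.join(d, "demo.pth"), "w") as f:
            # Lines starting with "import" are executed, not treated as paths.
            f.write("import sys; sys.stderr.write('pth hook executed')\n")
        # site.addsitedir processes .pth files the same way interpreter
        # startup processes site-packages -- run it in a child interpreter.
        proc = subprocess.run(
            [sys.executable, "-c", f"import site; site.addsitedir({d!r})"],
            capture_output=True,
            text=True,
        )
        return proc.stderr

print(demo_pth_hook())  # the child interpreter's stderr shows the hook fired
```

In the real attack the .pth file sat in site-packages, so the equivalent hook fired in every Python process on the machine — no `import litellm` required.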

The malware is a three-stage attack: credential harvester, Kubernetes lateral-movement toolkit, and persistent backdoor. It sweeps SSH keys, cloud credentials, Kubernetes secrets, cryptocurrency wallets, and .env files, encrypts the haul, and exfiltrates it to models.litellm.cloud. If Kubernetes service account tokens are present, it deploys privileged alpine pods to every cluster node, mounting the host filesystem and installing a systemd backdoor at ~/.config/sysmon/.

Discovery was pure luck. FutureSearch found it when an MCP plugin in Cursor pulled litellm as a transitive dependency and crashed the machine with an exponential fork bomb — the .pth child process re-triggered itself until RAM was exhausted. Without that crash, it could have stayed hidden much longer.
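Because the 1.82.8 vector rides on .pth processing, one useful triage step is to enumerate every .pth line on the current interpreter that executes code at startup. The sketch below (function name `startup_pth_hooks` is my own) checks the standard site directories; it surfaces candidates for review, not verdicts.

```python
import site
import sysconfig
from pathlib import Path

def startup_pth_hooks():
    """List .pth lines the site module will exec at interpreter startup."""
    candidates = set(site.getsitepackages())
    candidates.add(site.getusersitepackages())
    candidates.add(sysconfig.get_paths()["purelib"])
    hits = []
    for d in candidates:
        path = Path(d)
        if not path.is_dir():
            continue
        for pth in sorted(path.glob("*.pth")):
            for line in pth.read_text(errors="ignore").splitlines():
                # Only lines beginning with "import" are exec'd by site.
                if line.startswith(("import ", "import\t")):
                    hits.append((str(pth), line))
    return hits

for pth_file, line in startup_pth_hooks():
    print(f"{pth_file}: {line[:80]}")
```

Note that legitimate tooling (virtualenv, setuptools editable installs, coverage hooks) also ships import-bearing .pth files, so every hit needs a human look rather than automatic deletion.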

Why This Matters

LiteLLM isn’t just another Python package. It’s the API key management gateway for the AI ecosystem — a unified interface to OpenAI, Anthropic, Google, Azure, and 100+ other model providers. With 95 million monthly downloads, it’s embedded as a transitive dependency in AI agent frameworks, MCP servers, and orchestration tools. Many victims never ran pip install litellm themselves.
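Since many victims never installed litellm directly, it helps to ask the inverse question: which installed packages in this environment declare litellm as a dependency? A rough sketch using importlib.metadata (the function name `dependents_of` and the naive requirement-string parsing are my own — tools like pipdeptree do this more robustly):

```python
from importlib import metadata

def dependents_of(target: str = "litellm"):
    """Find installed distributions that declare `target` as a dependency."""
    target = target.lower().replace("_", "-")
    found = []
    for dist in metadata.distributions():
        for req in dist.requires or []:
            # Requirement strings look like "litellm[proxy]>=1.0; extra == 'x'"
            name = req.split(";")[0].strip()
            for sep in " ><=!~([":  # crude strip of extras/specifiers
                name = name.split(sep)[0]
            if name.lower().replace("_", "-") == target:
                found.append(dist.metadata["Name"])
    return sorted(set(found))

print(dependents_of("litellm"))
```

An empty list only clears the environment the script ran in — repeat per virtualenv, container image, and CI runner.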

The attack chain reveals the AI ecosystem’s hidden vulnerability: we trust PyPI, which trusts maintainer credentials, which trust CI/CD tools, which trust… other packages. TeamPCP found that the weakest link in the AI supply chain is the supply chain of the supply chain tools.

Here’s the cascade: On March 19, TeamPCP compromised Trivy — Aqua Security’s vulnerability scanner used by thousands of projects in their CI/CD pipelines. LiteLLM’s build process used Trivy, giving attackers access to the project’s PyPI publishing credentials. Rather than compromise the GitHub repository (which would be noticed), they used stolen credentials to publish malicious versions directly to PyPI.

This isn’t TeamPCP’s first rodeo. Between March 19 and 24, they executed a coordinated campaign across five ecosystems: GitHub Actions (Trivy, KICS), Docker Hub, npm, OpenVSX, and PyPI. On March 23, they hijacked 35 tags of the Checkmarx KICS GitHub Action. Each compromise feeds the next — credentials stolen from one victim enable attacks on the infrastructure of everyone who depends on them.

The timing matters. As MCP servers proliferate and AI agent frameworks mature, LiteLLM has become critical infrastructure for multi-model orchestration. Compromising it means compromising every LLM API key, cloud credential, and secret in environments that depend on it. For AI applications, this is a skeleton key attack — one compromised package unlocks everything.

Compare this to the npm left-pad incident of 2016. That broke builds. This steals the keys to your entire AI infrastructure. The AI ecosystem built itself on implicit trust chains, and TeamPCP just demonstrated how to weaponize them.

The Take

The AI agent ecosystem’s trust model just collapsed. We’ve been building on a foundation of transitive dependencies where the most critical packages — the ones that hold all your API keys — are also the most vulnerable to supply chain attacks.

Pinning to LiteLLM 1.82.6 isn’t a fix. It’s damage control. The real problem is architectural: when your API gateway becomes a single point of failure for credential security, and that gateway depends on CI/CD tools that can be compromised by attackers, you’re one stolen credential away from total exposure.
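If you do pin as a stopgap, pin with hash verification so that a re-published artifact under the same version number would fail the install. A requirements-file sketch — the hash below is a placeholder, not the real digest; generate real values with pip-compile --generate-hashes or pip hash against an artifact you trust:

```
# requirements.txt -- pin the last clean release and verify artifact hashes.
# PLACEHOLDER hash: replace with the real sha256 of the wheel/sdist you vetted.
litellm==1.82.6 \
    --hash=sha256:REPLACE_WITH_REAL_HASH
```

With any hash present, pip switches to --require-hashes mode for that requirements file, so every listed package must carry a matching digest.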

TeamPCP posted on Telegram that they’re “gonna be around for a long time stealing terrabytes [sic] of trade secrets with our new partners” and plan to target more security tools and open-source projects. This isn’t a one-off. It’s the new normal.

If you’re building AI agents, MCP servers, or any system that touches LiteLLM as a dependency, your threat model needs an update. Supply chain security for AI infrastructure can’t be an afterthought when the supply chain itself is the attack vector.

The trust chain didn’t break. It was weaponized. And until we acknowledge that security scanners can be attack vectors and API gateways are credential honeypots, this will happen again.