CVE-2025-59425
Severity CVSS v4.0:
Pending analysis
Type:
Unavailable / Other
Publication date:
07/10/2025
Last modified:
16/10/2025
Description
vLLM is an inference and serving engine for large language models (LLMs). Before version 0.11.0rc2, vLLM's API key support validated keys with a non-constant-time string comparison: the comparison takes longer the more leading characters of the provided key are correct. By statistically analyzing response times across many attempts, an attacker could determine each successive correct character of the key. Deployments relying on vLLM's built-in API key validation are therefore vulnerable to authentication bypass using this technique. Version 0.11.0rc2 fixes the issue.
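The standard remedy for this class of flaw is to replace the early-exit equality check with a constant-time comparison. A minimal sketch in Python (the function name is illustrative, not vLLM's actual code):

```python
import hmac

def check_api_key(provided: str, expected: str) -> bool:
    """Compare keys in constant time.

    hmac.compare_digest takes time independent of where the inputs
    first differ, so response timing leaks nothing about how many
    leading characters of the guess are correct.
    """
    return hmac.compare_digest(provided.encode(), expected.encode())

# By contrast, a naive `provided == expected` short-circuits at the
# first mismatched byte, which is exactly what enables the
# character-by-character timing attack described above.
```

`secrets.compare_digest` is an equivalent alias; either defeats the timing side channel, though keys of different lengths may still be distinguishable by length alone.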
Impact
Base Score 3.x
7.50
Severity 3.x
HIGH
Vulnerable products and versions
| CPE | From | Up to |
|---|---|---|
| cpe:2.3:a:vllm:vllm:*:*:*:*:*:*:*:* | | 0.11.0 (excluding) |
| cpe:2.3:a:vllm:vllm:0.11.0:rc1:*:*:*:*:*:* | | |
References to Advisories, Solutions, and Tools
- https://github.com/vllm-project/vllm/blob/4b946d693e0af15740e9ca9c0e059d5f333b1083/vllm/entrypoints/openai/api_server.py#L1270-L1274
- https://github.com/vllm-project/vllm/commit/ee10d7e6ff5875386c7f136ce8b5f525c8fcef48
- https://github.com/vllm-project/vllm/releases/tag/v0.11.0
- https://github.com/vllm-project/vllm/security/advisories/GHSA-wr9h-g72x-mwhm