Description

vLLM is an inference and serving engine for large language models (LLMs). Before version 0.11.0rc2, vLLM's built-in API key support validated keys using a string comparison that was vulnerable to a timing attack: the comparison takes longer to reject a key the more leading characters of it are correct. By statistically analyzing response times across many attempts, an attacker could determine when the next character in the key sequence had been guessed correctly, recovering the key one character at a time. Deployments relying on vLLM's built-in API key validation are therefore vulnerable to authentication bypass using this technique. Version 0.11.0rc2 fixes the issue.
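The class of bug described above, and the standard mitigation, can be sketched as follows. This is an illustrative example, not vLLM's actual code: the function names are hypothetical, and the fixed variant uses Python's `hmac.compare_digest`, whose running time does not depend on where the inputs first differ.

```python
import hmac

def check_api_key_vulnerable(provided: str, expected: str) -> bool:
    # Naive equality: the comparison returns as soon as a character
    # differs, so rejection time grows with the length of the correct
    # prefix -- the timing side channel described in this advisory.
    return provided == expected

def check_api_key_fixed(provided: str, expected: str) -> bool:
    # hmac.compare_digest examines the inputs in a way designed to take
    # time independent of where (or whether) they differ, closing the
    # per-character timing leak. (Hypothetical stand-in for the fix.)
    return hmac.compare_digest(provided.encode("utf-8"),
                               expected.encode("utf-8"))
```

The key design point is that authentication checks should never short-circuit on the first mismatched byte of a secret; a constant-time comparison makes every rejection take (approximately) the same time regardless of how close the guess was.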

Status: PUBLISHED | Reserved: 2025-09-15 | Published: 2025-10-07 | Updated: 2025-10-07 | Assigner: GitHub_M

Severity

HIGH: 7.5 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N

Problem types

CWE-385: Covert Timing Channel

Product status

Versions < 0.11.0rc2: affected

References

github.com/...t/vllm/security/advisories/GHSA-wr9h-g72x-mwhm

github.com/...ommit/ee10d7e6ff5875386c7f136ce8b5f525c8fcef48

github.com/...333b1083/vllm/entrypoints/openai/api_server.py

github.com/vllm-project/vllm/releases/tag/v0.11.0

cve.org (CVE-2025-59425)

nvd.nist.gov (CVE-2025-59425)
