
Description

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.10.2 up to but not including 0.11.1, a memory corruption vulnerability that could lead to a crash (denial of service) and potentially remote code execution (RCE) exists in the Completions API endpoint. When processing user-supplied prompt embeddings, the endpoint loads serialized tensors using torch.load() without sufficient validation. Due to a change introduced in PyTorch 2.8.0, sparse tensor integrity checks are disabled by default. As a result, a maliciously crafted sparse tensor can bypass internal bounds checks and trigger an out-of-bounds memory write during the call to to_dense(). This memory corruption can crash vLLM and potentially lead to code execution on the server hosting it. This issue has been patched in version 0.11.1.
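As a minimal sketch of the class of fix the advisory describes (not necessarily the exact vLLM patch), a server can re-enable PyTorch's sparse tensor invariant checks around deserialization, so that crafted out-of-range indices raise an error at load time instead of corrupting memory in to_dense(). The torch.load(weights_only=True) flag and the torch.sparse.check_sparse_tensor_invariants() context manager are real PyTorch APIs; the load_prompt_embeds helper and the base64 transport are illustrative assumptions.

```python
# Hedged sketch of defensive deserialization for user-supplied embeddings.
# load_prompt_embeds is a hypothetical helper, not vLLM's actual code.
import base64
import io

import torch


def load_prompt_embeds(b64_payload: str) -> torch.Tensor:
    buf = io.BytesIO(base64.b64decode(b64_payload))
    # weights_only=True restricts unpickling to tensor data, blocking
    # arbitrary-object deserialization (CWE-502).
    # The context manager re-enables the sparse invariant checks that
    # PyTorch 2.8.0 leaves off by default, so out-of-range indices fail
    # validation instead of writing out of bounds (CWE-787).
    with torch.sparse.check_sparse_tensor_invariants():
        tensor = torch.load(buf, weights_only=True)
        if not isinstance(tensor, torch.Tensor):
            raise ValueError("payload did not deserialize to a tensor")
        if tensor.is_sparse:
            tensor = tensor.to_dense()
    return tensor
```

A caller would translate any exception from this helper into a 4xx response for that request, so a malicious payload is rejected rather than crashing the serving process.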

Status: PUBLISHED | Reserved: 2025-10-07 | Published: 2025-11-21 | Updated: 2025-11-24 | Assigner: GitHub_M




HIGH: 8.8 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H

Problem types

CWE-20: Improper Input Validation

CWE-123: Write-what-where Condition

CWE-502: Deserialization of Untrusted Data

CWE-787: Out-of-bounds Write

Product status

>= 0.10.2, < 0.11.1: affected

References

github.com/...t/vllm/security/advisories/GHSA-mrw7-hf4f-83pf

github.com/vllm-project/vllm/pull/27204

github.com/...ommit/58fab50d82838d5014f4a14d991fdb9352c9c84b

cve.org (CVE-2025-62164)

nvd.nist.gov (CVE-2025-62164)
