Description

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.5.5 up to but not including 0.11.1, users can crash a vLLM engine serving a multimodal model by passing multimodal embedding inputs whose tensors have the correct number of dimensions (ndim) but an incorrect shape (e.g. a wrong hidden dimension), regardless of whether the model is intended to support such inputs (as defined in the Supported Models page). This issue has been patched in version 0.11.1.
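To illustrate the class of check involved, the sketch below validates a user-supplied multimodal embedding tensor before it would be merged into the model's input embeddings. It is a minimal illustration, not the actual vLLM patch; names such as validate_mm_embedding and expected_hidden_size are hypothetical. The point is that checking ndim alone does not catch a tensor whose last dimension disagrees with the model's hidden size.

    import torch

    def validate_mm_embedding(embedding: torch.Tensor, expected_hidden_size: int) -> None:
        # Rank check alone is insufficient: a tensor can have the right ndim
        # but a wrong hidden dimension, which only fails later in the engine.
        if embedding.ndim != 2:
            raise ValueError(
                f"expected a 2-D (num_tokens, hidden_size) tensor, got ndim={embedding.ndim}"
            )
        if embedding.shape[-1] != expected_hidden_size:
            raise ValueError(
                f"embedding hidden size {embedding.shape[-1]} does not match "
                f"the model's hidden size {expected_hidden_size}"
            )

    # Example: a request with a mismatched hidden dimension is rejected up front
    # instead of reaching the forward pass and crashing the engine.
    bad = torch.zeros(16, 1024)
    try:
        validate_mm_embedding(bad, expected_hidden_size=4096)
    except ValueError as err:
        print(err)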

PUBLISHED | Reserved: 2025-10-10 | Published: 2025-11-21 | Updated: 2025-11-21 | Assigner: GitHub_M

HIGH: 8.3 | CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:H

Problem types

CWE-129: Improper Validation of Array Index

Product status

vllm-project/vllm: >= 0.5.5, < 0.11.1 (affected)

References

github.com/...t/vllm/security/advisories/GHSA-pmqf-x6x8-p7qw

github.com/vllm-project/vllm/pull/27204

github.com/vllm-project/vllm/pull/6613

github.com/...ommit/58fab50d82838d5014f4a14d991fdb9352c9c84b

cve.org (CVE-2025-62372)

nvd.nist.gov (CVE-2025-62372)
