Description

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.8.3 up to but not including 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL raises an error. vLLM returns this error to the client, leaking a heap address. With this leak, the ASLR search space is reduced from roughly 4 billion guesses to about 8. This vulnerability can be chained with a heap overflow in the JPEG2000 decoder in OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in version 0.14.1.
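The leak pattern described above can be illustrated with a minimal sketch. This is not vLLM's actual code; the `Decoder` class and `handle_upload` function are hypothetical. The key point is that Python's default object repr embeds the object's heap address (`at 0x...`), so echoing a raw exception string back to a client can disclose memory-layout information:

```python
import re

# Hypothetical stand-in for an internal decoder object; it defines no
# custom __repr__, so repr() falls back to "<... object at 0x...>",
# which includes the object's heap address.
class Decoder:
    pass

def handle_upload(data: bytes) -> str:
    dec = Decoder()
    try:
        # An invalid image triggers an error whose message happens to
        # interpolate the object's repr.
        raise ValueError(f"cannot decode image with {dec!r}")
    except ValueError as exc:
        # Anti-pattern: returning the raw exception text to the client
        # leaks the embedded address instead of a sanitized message.
        return str(exc)

msg = handle_upload(b"\x00")
leaked = re.search(r"0x[0-9a-f]+", msg)
print(leaked is not None)  # an address-like token is present in the response
```

The fix pattern is the inverse: log the detailed exception server-side and return only a generic error (e.g. "invalid image") to the client.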

PUBLISHED Reserved 2026-01-09 | Published 2026-02-02 | Updated 2026-02-03 | Assigner GitHub_M

CRITICAL: 9.8 | CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H

Problem types

CWE-532: Insertion of Sensitive Information into Log File

Product status

>= 0.8.3, < 0.14.1 — affected

References

github.com/...t/vllm/security/advisories/GHSA-4r2x-xpjr-7cvv

github.com/vllm-project/vllm/pull/31987

github.com/vllm-project/vllm/pull/32319

github.com/vllm-project/vllm/releases/tag/v0.14.1

cve.org (CVE-2026-22778)

nvd.nist.gov (CVE-2026-22778)
