
Description

vLLM is an inference and serving engine for large language models (LLMs). The SSRF protection fix for CVE-2026-24779, added in 0.15.1, can be bypassed in the load_from_url_async method due to inconsistent URL parsing between the validation layer and the actual HTTP client. The SSRF fix uses urllib3.util.parse_url() to validate and extract the hostname from user-provided URLs. However, load_from_url_async uses aiohttp to make the actual HTTP requests, and aiohttp internally uses the yarl library for URL parsing. This vulnerability is fixed in 0.17.0.
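The pattern above can be sketched in a few lines. This is an illustrative assumption, not vLLM's actual code: the helper names (host_seen_by_validator, is_allowed, ALLOWED_HOSTS) are hypothetical, and the stdlib urllib.parse stands in for urllib3.util.parse_url on the validation side.

```python
# Minimal sketch of the parser-differential SSRF pattern: the validation
# layer extracts a hostname with one parser, while the HTTP client may
# parse the same string with a different library (aiohttp uses yarl) and
# connect to a different host. Names and allowlist are hypothetical.
from urllib.parse import urlsplit
from typing import Optional

ALLOWED_HOSTS = {"models.example.com"}  # hypothetical allowlist


def host_seen_by_validator(url: str) -> Optional[str]:
    # Validation layer: hostname as extracted by one URL parser
    # (stands in for urllib3.util.parse_url in the advisory).
    return urlsplit(url).hostname


def is_allowed(url: str) -> bool:
    # Allowlist check performed on the validator's view of the URL.
    return host_seen_by_validator(url) in ALLOWED_HOSTS


# The bypass described above arises when the client-side parser (yarl,
# inside aiohttp) derives a DIFFERENT host from the same string than the
# validator saw. A robust fix is to validate the exact URL object the
# client will request, e.g. (sketch):
#
#   target = yarl.URL(url)                 # same parser as aiohttp
#   if target.host not in ALLOWED_HOSTS:   # check the client's view
#       raise ValueError("host not allowed")
#   await session.get(target)              # request the validated object
```

The key design point is that validation and fetching must share a single URL parser; checking one library's interpretation and requesting with another reintroduces the class of bypass this CVE describes.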

Status: PUBLISHED | Reserved: 2026-02-09 | Published: 2026-03-09 | Updated: 2026-03-10 | Assigner: GitHub_M

HIGH: 7.1 | CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:L

Problem types

CWE-918: Server-Side Request Forgery (SSRF)

Product status

Affected: >= 0.15.1, < 0.17.0

References

github.com/...t/vllm/security/advisories/GHSA-v359-jj2v-j536

github.com/...t/vllm/security/advisories/GHSA-qh4c-xf7m-gxfc

github.com/vllm-project/vllm/pull/34743

github.com/...ommit/6f3b2047abd4a748e3db4a68543f8221358002c0

cve.org (CVE-2026-25960)

nvd.nist.gov (CVE-2026-25960)
