vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, the vLLM backend used with the /v1/chat/completions OpenAPI endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. Because these inputs are compiled or parsed without prior validation, a single request can crash the inference worker, which remains down until it is restarted. Version 0.9.0 fixes the issue.
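A minimal sketch of the failure mode, not taken from the advisory: the tool definition and field values below are hypothetical, and the snippet only reproduces what an unguarded backend effectively does when it compiles an attacker-supplied "pattern" field as a regular expression.

import json
import re

# Hypothetical request payload: a tool whose JSON-schema "pattern" field
# holds an invalid regular expression. Before vLLM 0.9.0, such fields were
# compiled without validation, so a single request like this could take
# down the inference worker.
payload = {
    "model": "example-model",  # placeholder model name
    "messages": [{"role": "user", "content": "What's the weather?"}],
    "tools": [{
        "type": "function",
        "function": {
            "name": "get_weather",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        # Invalid regex: unterminated character set
                        "pattern": "(unclosed[",
                    }
                },
            },
        },
    }],
}

pattern = (payload["tools"][0]["function"]["parameters"]
           ["properties"]["city"]["pattern"])

# An unguarded server calls re.compile() directly; re.error propagates and,
# if unhandled, kills the worker process. Validating first avoids the crash.
try:
    re.compile(pattern)
except re.error as exc:
    print(f"Rejected malformed pattern: {exc}")

The fix in 0.9.0 takes the same general approach as the guard above: reject malformed "pattern" and "type" values at request time instead of letting the compilation error propagate into the worker.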
Reserved 2025-05-28 | Published 2025-05-30 | Updated 2025-05-30 | Assigner GitHub_M | CWE-20: Improper Input Validation
github.com/vllm-project/vllm/security/advisories/GHSA-vrq3-r879-7m65
github.com/vllm-project/vllm/pull/17623