vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions from 0.6.5 up to (but not including) 0.8.5 that use the mooncake integration are vulnerable to remote code execution, because pickle-based serialization is used over unsecured ZeroMQ sockets. The vulnerable sockets were set to listen on all network interfaces, increasing the likelihood that an attacker could reach them and carry out an attack. vLLM instances that do not use the mooncake integration are not affected. This issue has been patched in version 0.8.5.
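To illustrate the underlying weakness (CWE-502), the sketch below shows why unpickling attacker-controlled bytes is equivalent to code execution, and why a data-only format such as JSON does not have this property. This is a minimal, self-contained illustration with a deliberately harmless payload; it is not vLLM's or mooncake's actual code.

```python
import json
import pickle


class Malicious:
    # __reduce__ lets an object dictate which callable pickle invokes
    # on deserialization. Here we use the harmless builtin `list`, but
    # an attacker sending bytes over an exposed ZeroMQ socket could
    # name os.system or any other importable callable instead.
    def __reduce__(self):
        return (list, ("pwn",))


# What a vulnerable receiver effectively does with bytes read off the wire:
payload = pickle.dumps(Malicious())
result = pickle.loads(payload)  # the attacker-chosen callable runs here
print(result)  # ['p', 'w', 'n'] -- list("pwn") executed; no Malicious restored

# A safer pattern: a data-only format like JSON can carry values but
# cannot smuggle callables, so deserialization has no code-execution path.
safe = json.loads(json.dumps({"tensor_id": 42, "shape": [2, 3]}))
print(safe["shape"])  # [2, 3]
```

Binding such sockets to localhost rather than all interfaces reduces exposure, but the durable fix is to avoid pickle for untrusted input entirely.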
Reserved 2025-04-08 | Published 2025-04-30 | Updated 2025-04-30 | Assigner GitHub_M
CWE-502: Deserialization of Untrusted Data
github.com/...t/vllm/security/advisories/GHSA-hj4w-hm2g-p6w5
github.com/...t/vllm/security/advisories/GHSA-x3m8-f7g5-qhm7
github.com/...ommit/a5450f11c95847cf51a17207af9a3ca5ab569b2c
github.com/...stributed/kv_transfer/kv_pipe/mooncake_pipe.py