Description

LiteLLM is a proxy server (AI Gateway) for calling LLM APIs in the OpenAI format (or a provider's native format). In versions from 1.80.5 up to, but not including, 1.83.7, the POST /prompts/test endpoint accepted user-supplied prompt templates and rendered them without sandboxing, so a crafted template could execute arbitrary code inside the LiteLLM Proxy process. The endpoint only checks that the caller presents a valid proxy API key, so any authenticated user could reach it. Depending on how the proxy is deployed, this could expose secrets in the process environment (such as provider API keys or database credentials) and allow commands to be run on the host. This issue has been patched in version 1.83.7.
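To illustrate the class of bug, the sketch below uses Python's str.format as a stand-in for a template engine (all names here are illustrative; this is not LiteLLM's actual code). When the template string comes straight from the request body, attribute-walking placeholders can reach module globals and leak data the author never intended to expose:

```python
# Minimal sketch of the vulnerable pattern (illustrative names only;
# this is NOT LiteLLM's actual implementation). str.format stands in
# for an unsandboxed template engine.

DATABASE_URL = "postgres://user:hunter2@db/prod"  # stand-in process secret

class PromptConfig:
    def __init__(self, name: str):
        self.name = name

def render_prompt(template: str, cfg: PromptConfig) -> str:
    # Vulnerable: the template string is taken directly from user input.
    return template.format(cfg=cfg)

# A benign template behaves as expected...
print(render_prompt("Hello, {cfg.name}!", PromptConfig("alice")))

# ...but a crafted one walks from a bound method to its module globals
# and reads the secret out of the rendering process.
evil = "{cfg.__init__.__globals__[DATABASE_URL]}"
print(render_prompt(evil, PromptConfig("alice")))
```

The second render prints the database URL: `{cfg.__init__.__globals__[...]}` traverses from the instance's `__init__` method to the dictionary of module-level names, which is exactly the kind of escape an unsandboxed render of user templates permits.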

PUBLISHED Reserved 2026-04-25 | Published 2026-05-08 | Updated 2026-05-08 | Assigner GitHub_M

HIGH: 8.6 | CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:H/VA:N/SC:N/SI:N/SA:N

Problem types

CWE-1336: Improper Neutralization of Special Elements Used in a Template Engine
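The general mitigation for CWE-1336 is to render user-supplied templates with a substitution-only mechanism that cannot traverse attributes or evaluate expressions. A hedged sketch using Python's stdlib `string.Template` (this is not the actual 1.83.7 patch, just an example of the mitigation class):

```python
# Sketch of the mitigation class (not LiteLLM's actual fix): render user
# templates with a substitution-only engine. string.Template replaces
# $name / ${name} placeholders and nothing else -- there is no attribute
# access, indexing, or expression evaluation for a payload to abuse.
from string import Template

def render_safely(template: str, variables: dict) -> str:
    # safe_substitute leaves unknown or malformed placeholders untouched
    # instead of raising, which is convenient for user-facing previews.
    return Template(template).safe_substitute(variables)

print(render_safely("Hello, $user!", {"user": "alice"}))

# Attribute-walking payloads are inert: "user.__class__" is not a valid
# placeholder name, so the text passes through literally.
print(render_safely("${user.__class__}", {"user": "alice"}))
```

Because the placeholder grammar only admits simple identifiers, SSTI-style payloads degrade to literal text rather than code execution.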

Product status

>= 1.80.5, < 1.83.7: affected

References

github.com/...itellm/security/advisories/GHSA-xqmj-j6mv-4862

github.com/BerriAI/litellm/releases/tag/v1.83.7-stable

cve.org (CVE-2026-42203)

nvd.nist.gov (CVE-2026-42203)
