A vulnerability exists in the `manim` plugin of binary-husky/gpt_academic, in versions prior to the fix, due to improper handling of user-provided prompts. The root cause is that code generated by the LLM is executed without a proper sandbox. This allows an attacker to achieve remote code execution (RCE) on the app's backend server by injecting malicious code through the prompt.
Reserved 2024-11-06 | Published 2025-03-20 | Updated 2025-03-20 | Assigner @huntr_ai
CWE-77: Improper Neutralization of Special Elements used in a Command ('Command Injection')
huntr.com/bounties/72d034e3-6ca2-495d-98a7-ac9565588c09