llama.cpp provides inference for several LLM models in C/C++. An integer overflow in the gguf_init_from_file_impl function in ggml/src/gguf.cpp can lead to a heap out-of-bounds read/write. This vulnerability is fixed in commit 26a48ad699d50b6268900062661bd22f3e792579.
Reserved 2025-07-07 | Published 2025-07-10 | Updated 2025-07-10 | Assigner GitHub_M
CWE-122: Heap-based Buffer Overflow
CWE-680: Integer Overflow to Buffer Overflow
github.com/...ma.cpp/security/advisories/GHSA-vgg9-87g3-85w8
github.com/...ommit/26a48ad699d50b6268900062661bd22f3e792579
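The following is a minimal, hypothetical C++ sketch of the CWE-680 pattern described above (an attacker-controlled count whose byte-size computation overflows, producing an undersized heap allocation that is then overrun). It does not reproduce the actual gguf_init_from_file_impl code or the upstream fix; the function names and the checked variant are assumptions for illustration only.

// Hypothetical sketch of CWE-680: integer overflow -> heap buffer overflow
// in a GGUF-style parser. Illustrative only; not the actual llama.cpp code.
#include <cstdint>
#include <cstdlib>
#include <cstring>
#include <limits>
#include <vector>

// Pretend this reads an attacker-controlled 64-bit count from the file header.
static uint64_t read_u64_from_file(const unsigned char *p) {
    uint64_t v;
    std::memcpy(&v, p, sizeof(v));
    return v;
}

// VULNERABLE pattern: count * elem_size can wrap around, so the allocation
// is far smaller than the number of bytes the copy loop later writes into it.
static void parse_vulnerable(const unsigned char *header, const unsigned char *payload) {
    uint64_t count     = read_u64_from_file(header);   // e.g. 0x2000000000000001
    size_t   elem_size = 8;
    size_t   n_bytes   = (size_t)(count * elem_size);  // wraps to a tiny value
    unsigned char *buf = (unsigned char *)std::malloc(n_bytes);
    if (!buf) return;
    for (uint64_t i = 0; i < count; ++i) {
        // Writes far past the undersized buffer: heap out-of-bounds read/write.
        std::memcpy(buf + i * elem_size, payload + i * elem_size, elem_size);
    }
    std::free(buf);
}

// FIXED pattern: reject counts whose byte size cannot be represented.
static bool parse_checked(const unsigned char *header, const unsigned char *payload) {
    uint64_t count     = read_u64_from_file(header);
    size_t   elem_size = 8;
    if (count > std::numeric_limits<size_t>::max() / elem_size) {
        return false;                                   // would overflow: refuse to parse
    }
    std::vector<unsigned char> buf((size_t)count * elem_size);
    std::memcpy(buf.data(), payload, buf.size());
    return true;
}

The essential difference is the division-based bound check before multiplying, which prevents the wrapped size from ever reaching the allocator.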