llama.cpp is a C/C++ inference engine for several LLM models. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize) (src/llama-vocab.cpp:3036) causes an incorrect size comparison when copying tokens, allowing a heap overflow of the llama.cpp inference engine via carefully crafted text input during tokenization. This issue has been patched in version b5721.
History
Tue, 24 Jun 2025 22:15:00 +0000

Type | Values Removed | Values Added |
---|---|---|
Metrics | | ssvc |
Tue, 24 Jun 2025 03:45:00 +0000

Type | Values Removed | Values Added |
---|---|---|
Description | | llama.cpp is a C/C++ inference engine for several LLM models. Prior to version b5721, a signed vs. unsigned integer overflow in llama.cpp's tokenizer implementation (llama_vocab::tokenize) (src/llama-vocab.cpp:3036) causes an incorrect size comparison when copying tokens, allowing a heap overflow of the llama.cpp inference engine via carefully crafted text input during tokenization. This issue has been patched in version b5721. |
Title | | llama.cpp tokenizer signed vs. unsigned heap overflow |
Weaknesses | | CWE-119, CWE-195 |
References | | |
Metrics | | cvssV3_1 |

Status: PUBLISHED
Assigner: GitHub_M
Published: 2025-06-24T03:21:19.009Z
Updated: 2025-06-24T21:49:53.200Z
Reserved: 2025-06-18T03:55:52.036Z
Link: CVE-2025-52566

Updated: 2025-06-24T21:49:47.523Z

Status: Awaiting Analysis
Published: 2025-06-24T04:15:46.967
Modified: 2025-06-26T18:58:14.280
Link: CVE-2025-52566
