vLLM is an inference and serving engine for large language models (LLMs). From version 0.8.3 to before 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL raises an error, and vLLM returns the error message to the client, leaking a heap address. This leak reduces the ASLR search space from roughly 4 billion guesses to about 8. The vulnerability can be chained with a heap overflow in the JPEG2000 decoder in OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in version 0.14.1.
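The leak works because Pillow's `UnidentifiedImageError` message embeds the `repr()` of the file object it failed to parse, and the default `repr()` of a CPython object includes that object's heap address. A minimal stdlib-only sketch of the mechanism (the message format mimics Pillow's error text; no actual vLLM or PIL code is involved, and the payload bytes are arbitrary):

```python
import io
import re

# A malformed image payload, as an attacker might send to the endpoint.
buf = io.BytesIO(b"\xde\xad\xbe\xef not a real image")

# Pillow's UnidentifiedImageError message embeds repr(fp), e.g.
# "cannot identify image file <_io.BytesIO object at 0x7f...>".
# The default repr() of a CPython object contains its heap address:
msg = f"cannot identify image file {buf!r}"

# A client receiving this message in the API error response can parse
# out the address, collapsing the heap ASLR search space.
addr = re.search(r"0x[0-9a-f]+", msg).group(0)
print(addr)
```

The fix in 0.14.1 amounts to not echoing the raw exception text back to the client.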
History
Wed, 04 Feb 2026 00:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Weaknesses | | CWE-209 |
| References | | |
| Metrics | threat_severity | threat_severity |
Tue, 03 Feb 2026 16:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Metrics | | ssvc |
Mon, 02 Feb 2026 23:15:00 +0000
| Type | Values Removed | Values Added |
|---|---|---|
| Description | | vLLM is an inference and serving engine for large language models (LLMs). From version 0.8.3 to before 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL raises an error, and vLLM returns the error message to the client, leaking a heap address. This leak reduces the ASLR search space from roughly 4 billion guesses to about 8. The vulnerability can be chained with a heap overflow in the JPEG2000 decoder in OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in version 0.14.1. |
| Title | | vLLM leaks a heap address when PIL throws an error |
| Weaknesses | | CWE-532 |
| References | | |
| Metrics | | cvssV3_1 |
Status: PUBLISHED
Assigner: GitHub_M
Published: 2026-02-02T21:09:53.265Z
Updated: 2026-02-03T15:42:57.155Z
Reserved: 2026-01-09T18:27:19.388Z
Link: CVE-2026-22778
Updated: 2026-02-03T15:42:51.507Z
Status: Awaiting Analysis
Published: 2026-02-02T23:16:06.700
Modified: 2026-02-03T16:44:03.343
Link: CVE-2026-22778