GHSA-9pcc-gvx5-r5wm
Vulnerability from GitHub
### Affected Environments
Note that this issue only affects the V0 engine, which has been off by default since v0.8.0. Further, the issue only applies to a deployment using tensor parallelism across multiple hosts, which we do not expect to be a common deployment pattern.
Since V0 has been off by default since v0.8.0 and the fix is fairly invasive, we have decided not to fix this issue. Instead, we recommend that users ensure their environment is on a secure network if this deployment pattern is in use.
The V1 engine is not affected by this issue.
### Impact
In a multi-node vLLM deployment using the V0 engine, vLLM uses ZeroMQ for some multi-node communication purposes. The secondary vLLM hosts open a `SUB` ZeroMQ socket and connect to an `XPUB` socket on the primary vLLM host.
https://github.com/vllm-project/vllm/blob/c21b99b91241409c2fdf9f3f8c542e8748b317be/vllm/distributed/device_communicators/shm_broadcast.py#L295-L301
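For readers unfamiliar with this ZeroMQ pattern, here is a minimal, self-contained sketch of an `XPUB`/`SUB` pair using pyzmq. It is illustrative only and does not reproduce vLLM's implementation; the hostname and port are placeholders.

```python
# Illustrative XPUB/SUB sketch (not vLLM's code); hostname and port are placeholders.
import zmq

ctx = zmq.Context()

# Primary host: an XPUB socket that publishes messages to all subscribers.
xpub = ctx.socket(zmq.XPUB)
xpub.bind("tcp://*:5555")

# Secondary host: a SUB socket that connects back to the primary and
# subscribes to every topic.
sub = ctx.socket(zmq.SUB)
sub.connect("tcp://primary-host:5555")
sub.setsockopt_string(zmq.SUBSCRIBE, "")
```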
When data is received on this `SUB` socket, it is deserialized with `pickle`. This is unsafe, as it can be abused to execute code on a remote machine.
https://github.com/vllm-project/vllm/blob/c21b99b91241409c2fdf9f3f8c542e8748b317be/vllm/distributed/device_communicators/shm_broadcast.py#L468-L470
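To see why this is unsafe, consider a short, self-contained sketch (unrelated to vLLM's own classes) of how a crafted pickle payload executes a command in whatever process calls `pickle.loads()` on it:

```python
# Demonstration of pickle's deserialization risk; the class and command are
# purely illustrative and have nothing to do with vLLM.
import os
import pickle


class Malicious:
    def __reduce__(self):
        # On unpickling, the receiver will call os.system("echo pwned").
        return (os.system, ("echo pwned",))


payload = pickle.dumps(Malicious())

# A subscriber that deserializes attacker-controlled bytes runs the command
# as a side effect of deserialization.
pickle.loads(payload)
```

Any process that calls `pickle.loads()` on bytes received from the network is therefore trusting the sender not to execute arbitrary code.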
Since the vulnerability exists in a client that connects to the primary vLLM host, it serves as an escalation point: if the primary vLLM host is compromised, this vulnerability could be used to compromise the rest of the hosts in the vLLM deployment.

Attackers could also exploit the vulnerability without access to the primary vLLM host. One example would be using ARP cache poisoning to redirect traffic to a malicious endpoint that delivers a payload with arbitrary code to execute on the target machine.
{ "affected": [ { "package": { "ecosystem": "PyPI", "name": "vllm" }, "ranges": [ { "events": [ { "introduced": "0.5.2" }, { "last_affected": "0.8.5.post1" } ], "type": "ECOSYSTEM" } ] } ], "aliases": [ "CVE-2025-30165" ], "database_specific": { "cwe_ids": [ "CWE-502" ], "github_reviewed": true, "github_reviewed_at": "2025-05-06T16:38:35Z", "nvd_published_at": "2025-05-06T17:16:11Z", "severity": "HIGH" }, "details": "### Affected Environments\n\nNote that this issue only affects the V0 engine, which has been off by default since v0.8.0. Further, the issue only applies to a deployment using tensor parallelism across multiple hosts, which we do not expect to be a common deployment pattern.\n\nSince V0 is has been off by default since v0.8.0 and the fix is fairly invasive, we have decided not to fix this issue. Instead we recommend that users ensure their environment is on a secure network in case this pattern is in use.\n\nThe V1 engine is not affected by this issue.\n\n### Impact\n\nIn a multi-node vLLM deployment using the V0 engine, vLLM uses ZeroMQ for some multi-node communication purposes. The secondary vLLM hosts open a `SUB` ZeroMQ socket and connect to an `XPUB` socket on the primary vLLM host.\n\nhttps://github.com/vllm-project/vllm/blob/c21b99b91241409c2fdf9f3f8c542e8748b317be/vllm/distributed/device_communicators/shm_broadcast.py#L295-L301\n\nWhen data is received on this `SUB` socket, it is deserialized with `pickle`. This is unsafe, as it can be abused to execute code on a remote machine.\n\nhttps://github.com/vllm-project/vllm/blob/c21b99b91241409c2fdf9f3f8c542e8748b317be/vllm/distributed/device_communicators/shm_broadcast.py#L468-L470\n\nSince the vulnerability exists in a client that connects to the primary vLLM host, this vulnerability serves as an escalation point. If the primary vLLM host is compromised, this vulnerability could be used to compromise the rest of the hosts in the vLLM deployment.\n\nAttackers could also use other means to exploit the vulnerability without requiring access to the primary vLLM host. One example would be the use of ARP cache poisoning to redirect traffic to a malicious endpoint used to deliver a payload with arbitrary code to execute on the target machine.", "id": "GHSA-9pcc-gvx5-r5wm", "modified": "2025-05-06T19:56:47Z", "published": "2025-05-06T16:38:35Z", "references": [ { "type": "WEB", "url": "https://github.com/vllm-project/vllm/security/advisories/GHSA-9pcc-gvx5-r5wm" }, { "type": "ADVISORY", "url": "https://nvd.nist.gov/vuln/detail/CVE-2025-30165" }, { "type": "PACKAGE", "url": "https://github.com/vllm-project/vllm" }, { "type": "WEB", "url": "https://github.com/vllm-project/vllm/blob/c21b99b91241409c2fdf9f3f8c542e8748b317be/vllm/distributed/device_communicators/shm_broadcast.py#L295-L301" }, { "type": "WEB", "url": "https://github.com/vllm-project/vllm/blob/c21b99b91241409c2fdf9f3f8c542e8748b317be/vllm/distributed/device_communicators/shm_broadcast.py#L468-L470" } ], "schema_version": "1.4.0", "severity": [ { "score": "CVSS:3.1/AV:A/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H", "type": "CVSS_V3" } ], "summary": "Remote Code Execution Vulnerability in vLLM Multi-Node Cluster Configuration" }