
Harden OneHot operator input validation and output size computation#28014

Open
GopalakrishnanN wants to merge 1 commit into microsoft:main from GopalakrishnanN:FixDOSAttack

Conversation


@GopalakrishnanN commented Apr 8, 2026

Description

Hardens input validation and output-size computation in the OneHot operator (CPU and CUDA execution providers). Previously, the output size calculation in PrepareOutputShape() performed indices_shape.Size() * depth_val using plain int64_t, so a valid-looking input (large indices shape combined with a large depth value) could silently overflow and produce a nonsensical output shape before allocation. This change tightens validation, uses checked arithmetic, and adds related hardening.

Motivation and Context

  • The overflowed shape could propagate into allocation/compute paths with unpredictable behavior.
  • On CUDA, suffix_dim_size and depth_val * suffix_dim_size are passed to fast_divmod (which requires int32). The existing gsl::narrow_cast<int> silently truncates, so very large but non-overflowing values would still reach the kernel with wrong divisors.
  • prefix_dim_size was computed via an unchecked loop of int64_t multiplications.
  • Output() return value was not null-checked before use.
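For context, a tiny standalone sketch of the truncation pitfall from the second bullet (plain C++ with illustrative values; gsl::narrow_cast behaves like the static_cast shown here, only with documented intent):

#include <cstdint>
#include <iostream>

int main() {
  const int64_t big = (int64_t{1} << 33) + 7;  // well above INT32_MAX
  // gsl::narrow_cast<int>(big) is effectively this static_cast: no range check,
  // so the value silently wraps instead of raising an error.
  const int truncated = static_cast<int>(big);
  std::cout << big << " -> " << truncated << '\n';  // prints a different, wrong value
  return 0;
}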

Changes

onnxruntime/core/providers/cpu/tensor/onehot.cc

  • Use SafeInt<int64_t> for both the output-size computation and the prefix_dim_size multiplication loop; return a clear error on overflow.
  • Guard against division by zero when prefix_dim_size is zero.
  • Null-check context->Output(...) in Compute().
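For illustration, a minimal, self-contained sketch of the checked output-size computation described above (plain standard C++ with illustrative names; the actual change uses SafeInt<int64_t> and the ORT status macros rather than std::optional):

#include <cstdint>
#include <limits>
#include <optional>

// Returns indices_size * depth, or std::nullopt when depth is invalid or the
// product would overflow int64_t, i.e. the condition the PR turns into an
// INVALID_ARGUMENT status instead of letting a bogus shape reach allocation.
std::optional<int64_t> CheckedOutputSize(int64_t indices_size, int64_t depth) {
  if (indices_size < 0 || depth <= 0) return std::nullopt;  // non-positive depth rejected
  if (indices_size != 0 &&
      depth > std::numeric_limits<int64_t>::max() / indices_size) {
    return std::nullopt;  // indices_size * depth would overflow
  }
  return indices_size * depth;
}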

onnxruntime/core/providers/cuda/tensor/onehot.cc

  • Before the fast_divmod narrowings, validate that suffix_dim_size and depth_val * suffix_dim_size fit in int32_t; error out cleanly otherwise instead of silently truncating.
  • Null-check context->Output(...) in ComputeInternal().

onnxruntime/test/providers/cpu/tensor/onehot_op_test.cc

  • DepthTooLarge_OutputSizeOverflow — depth = INT64_MAX, indices = [2,3].
  • DepthTooLarge_OutputSizeOverflow_LargeIndices — depth = INT64_MAX / 500, indices = [1000].
  • NegativeDepth — negative depth is rejected.
  • DepthOne — minimum valid depth = 1 edge case.
  • ScalarIndicesRejected — rank-0 indices are rejected (ONNX spec requires indices rank ≥ 1).
  • DefaultAxis_Opset9 — opset 9 coverage for the default-axis path.

Testing

Built onnxruntime_provider_test on Windows (Release) and ran the OneHotOpTest.* suite:

[==========] 34 tests from 1 test suite ran.
[  PASSED  ] 34 tests.

Overflow tests produce the expected error, e.g.:
OneHot: output tensor size would overflow for the given indices shape and depth value (9223372036854775807).
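For reference, what one of the overflow tests looks like end to end. This is a sketch assembled from the excerpts quoted in the reviews below and the existing OneHotOpTest style; the placeholder output exists only so OpTester can build the node, since the run is expected to fail validation before any comparison:

#include <limits>
#include <vector>

TEST(OneHotOpTest, DepthTooLarge_OutputSizeOverflow) {
  OpTester test("OneHot", 11);
  test.AddInput<int64_t>("indices", {2, 3}, {1, 2, 3, 4, 5, 6});
  test.AddInput<int64_t>("depth", {1}, {std::numeric_limits<int64_t>::max()});
  test.AddInput<int64_t>("values", {2}, {0, 1});
  // Never compared: the run is expected to stop in shape validation.
  test.AddOutput<int64_t>("output", {2, 3, 2}, std::vector<int64_t>(12, 0));
  test.Run(OpTester::ExpectResult::kExpectFailure,
           "OneHot: output tensor size would overflow");
}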


Copilot AI left a comment


Pull request overview

Addresses a reported DoS risk in the OneHot operator by adding shape-size validation and extra allocation guarding to reduce the chance of oversized/overflowing output allocations during execution (CPU/CUDA).

Changes:

  • Add an int64 element-count overflow check in PrepareOutputShape() for OneHot.
  • Add output allocation null checks in CPU and CUDA OneHot compute paths.
  • Add new unit tests intended to verify overflow rejection behavior.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 4 comments.

Reviewed files:

  • onnxruntime/core/providers/cpu/tensor/onehot.cc: adds output element-count overflow validation and an output allocation null guard.
  • onnxruntime/core/providers/cuda/tensor/onehot.cc: adds an output allocation null guard in the CUDA compute path.
  • onnxruntime/test/providers/cpu/tensor/onehot_op_test.cc: adds two new failure-mode tests for extremely large depth values.


Comment on lines +506 to +509
// which would overflow when computing total element count.
test.AddInput<int64_t>("indices", {2, 3}, {1, 2, 3, 4, 5, 6});
test.AddInput<int64_t>("depth", {1}, {std::numeric_limits<int64_t>::max()});
test.AddInput<int64_t>("values", {2}, {0, 1});

Copilot AI Apr 16, 2026


This test uses std::numeric_limits but the file does not include <limits>. Please add the missing standard header include so this compiles reliably (don't rely on indirect includes).

Comment on lines +106 to +113
// This prevents unbounded memory allocation from an attacker-controlled depth value.
{
int64_t total_elements = 1;
for (auto dim : output_shape) {
if (dim > 0 && total_elements > std::numeric_limits<int64_t>::max() / dim) {
return ORT_MAKE_STATUS(ONNXRUNTIME, INVALID_ARGUMENT,
"OneHot: output tensor size would overflow. depth value (", depth_val,
") is too large for the given indices shape.");

Copilot AI Apr 16, 2026


The new overflow error message always blames the depth value ("depth value (...) is too large"), but this check can also trip due to very large indices dimensions even when depth is small. Consider rephrasing the message to describe the output shape/element-count overflow more generally (and optionally include both indices shape and depth).

Suggested change (original lines, followed by the proposed replacement):
// This prevents unbounded memory allocation from an attacker-controlled depth value.
{
int64_t total_elements = 1;
for (auto dim : output_shape) {
if (dim > 0 && total_elements > std::numeric_limits<int64_t>::max() / dim) {
return ORT_MAKE_STATUS(ONNXRUNTIME, INVALID_ARGUMENT,
"OneHot: output tensor size would overflow. depth value (", depth_val,
") is too large for the given indices shape.");
// This prevents unbounded memory allocation from an attacker-controlled output shape.
{
int64_t total_elements = 1;
for (auto dim : output_shape) {
if (dim > 0 && total_elements > std::numeric_limits<int64_t>::max() / dim) {
return ORT_MAKE_STATUS(ONNXRUNTIME, INVALID_ARGUMENT,
"OneHot: output tensor size would overflow for the given indices shape "
"and depth value (", depth_val, ").");

Comment on lines +105 to +117
// Validate that the total output tensor element count does not overflow int64.
// This prevents unbounded memory allocation from an attacker-controlled depth value.
{
int64_t total_elements = 1;
for (auto dim : output_shape) {
if (dim > 0 && total_elements > std::numeric_limits<int64_t>::max() / dim) {
return ORT_MAKE_STATUS(ONNXRUNTIME, INVALID_ARGUMENT,
"OneHot: output tensor size would overflow. depth value (", depth_val,
") is too large for the given indices shape.");
}
total_elements *= dim;
}
}

Copilot AI Apr 16, 2026


The added guard only checks for int64 element-count overflow; it does not prevent attacker-controlled depth values that produce extremely large but non-overflowing outputs (e.g., depth=2^32 with small indices) from triggering multi-GB allocations. If the goal is to mitigate allocation-based DoS as described in the PR, add an explicit upper bound (ideally on output bytes/elements) before calling Output().
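One possible shape for such a guard, sketched here with heavy hedging: kMaxOneHotOutputElements is a hypothetical constant invented for illustration, not an existing onnxruntime setting, and any real cap would probably need to be configurable rather than hard-coded:

#include <cstdint>

// Hypothetical hard cap on OneHot output elements (illustrative only).
constexpr int64_t kMaxOneHotOutputElements = int64_t{1} << 31;

// Rejects element counts that do not overflow int64 but are still large
// enough to trigger the multi-GB allocations described above.
inline bool OutputSizeWithinLimit(int64_t total_elements) {
  return total_elements >= 0 && total_elements <= kMaxOneHotOutputElements;
}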

Comment on lines 57 to 61
Tensor* output = ctx->Output(0, TensorShape(output_shape));
ORT_RETURN_IF_NOT(output, "OneHot: failed to allocate output tensor. Output shape may be too large.");

// edge case where we have a dim with a value of 0
if (output->Shape().Size() == 0)

Copilot AI Apr 16, 2026


After allocation, the CUDA path still narrows shape-derived values to 32-bit types (fast_divmod takes int, CUDA_LONG is int32_t). For large but non-overflowing outputs (element count > INT32_MAX or suffix_dim_size > INT_MAX), these casts can truncate and lead to incorrect indexing or out-of-bounds writes in the CUDA kernels. Please add explicit validation (e.g., output element count, suffix_dim_size, and depth*suffix all within the required 32-bit limits) or update the kernel implementation to support 64-bit indexing.
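A sketch of the extra range validation this comment asks for (illustrative names only; the PR's later revision adds the suffix_dim_size and depth * suffix checks shown in a diff excerpt further down, but not the element-count check):

#include <cstdint>
#include <limits>

// Anything that later feeds fast_divmod or CUDA_LONG must fit in int32_t,
// otherwise the gsl::narrow_cast<int> narrowing silently truncates it.
inline bool FitsInInt32(int64_t v) {
  return v >= 0 && v <= static_cast<int64_t>(std::numeric_limits<int32_t>::max());
}

// Usage sketch: validate before narrowing, and guard the depth * suffix
// product with a division so the check itself cannot overflow int64:
//   if (!FitsInInt32(output_element_count) || !FitsInInt32(suffix_dim_size) ||
//       depth_val > std::numeric_limits<int32_t>::max() /
//                       std::max(suffix_dim_size, int64_t{1})) {
//     // return an INVALID_ARGUMENT status instead of narrowing
//   }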

@GopalakrishnanN changed the title from "Fix OneHot depth amplification DoS vulnerability" to "Harden OneHot operator input validation and output size computation" on Apr 16, 2026
@microsoft-github-policy-service

@GopalakrishnanN please read the following Contributor License Agreement (CLA). If you agree with the CLA, please reply with the following information.

@microsoft-github-policy-service agree [company="{your company}"]

Options:

  • (default - no company specified) I have sole ownership of intellectual property rights to my Submissions and I am not making Submissions in the course of work for my employer.
@microsoft-github-policy-service agree
  • (when company given) I am making Submissions in the course of work for my employer (or my employer has intellectual property rights in my Submissions by contract or applicable law). I have permission from my employer to make Submissions and enter into this Agreement on behalf of my employer. By signing below, the defined term “You” includes me and my employer.
@microsoft-github-policy-service agree company="Microsoft"
Contributor License Agreement

Contribution License Agreement

This Contribution License Agreement (“Agreement”) is agreed to by the party signing below (“You”),
and conveys certain license rights to Microsoft Corporation and its affiliates (“Microsoft”) for Your
contributions to Microsoft open source projects. This Agreement is effective as of the latest signature
date below.

  1. Definitions.
    “Code” means the computer software code, whether in human-readable or machine-executable form,
    that is delivered by You to Microsoft under this Agreement.
    “Project” means any of the projects owned or managed by Microsoft and offered under a license
    approved by the Open Source Initiative (www.opensource.org).
    “Submit” is the act of uploading, submitting, transmitting, or distributing code or other content to any
    Project, including but not limited to communication on electronic mailing lists, source code control
    systems, and issue tracking systems that are managed by, or on behalf of, the Project for the purpose of
    discussing and improving that Project, but excluding communication that is conspicuously marked or
    otherwise designated in writing by You as “Not a Submission.”
    “Submission” means the Code and any other copyrightable material Submitted by You, including any
    associated comments and documentation.
  2. Your Submission. You must agree to the terms of this Agreement before making a Submission to any
    Project. This Agreement covers any and all Submissions that You, now or in the future (except as
    described in Section 4 below), Submit to any Project.
  3. Originality of Work. You represent that each of Your Submissions is entirely Your original work.
    Should You wish to Submit materials that are not Your original work, You may Submit them separately
    to the Project if You (a) retain all copyright and license information that was in the materials as You
    received them, (b) in the description accompanying Your Submission, include the phrase “Submission
    containing materials of a third party:” followed by the names of the third party and any licenses or other
    restrictions of which You are aware, and (c) follow any other instructions in the Project’s written
    guidelines concerning Submissions.
  4. Your Employer. References to “employer” in this Agreement include Your employer or anyone else
    for whom You are acting in making Your Submission, e.g. as a contractor, vendor, or agent. If Your
    Submission is made in the course of Your work for an employer or Your employer has intellectual
    property rights in Your Submission by contract or applicable law, You must secure permission from Your
    employer to make the Submission before signing this Agreement. In that case, the term “You” in this
    Agreement will refer to You and the employer collectively. If You change employers in the future and
    desire to Submit additional Submissions for the new employer, then You agree to sign a new Agreement
    and secure permission from the new employer before Submitting those Submissions.
  5. Licenses.
  • Copyright License. You grant Microsoft, and those who receive the Submission directly or
    indirectly from Microsoft, a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license in the
    Submission to reproduce, prepare derivative works of, publicly display, publicly perform, and distribute
    the Submission and such derivative works, and to sublicense any or all of the foregoing rights to third
    parties.
  • Patent License. You grant Microsoft, and those who receive the Submission directly or
    indirectly from Microsoft, a perpetual, worldwide, non-exclusive, royalty-free, irrevocable license under
    Your patent claims that are necessarily infringed by the Submission or the combination of the
    Submission with the Project to which it was Submitted to make, have made, use, offer to sell, sell and
    import or otherwise dispose of the Submission alone or with the Project.
  • Other Rights Reserved. Each party reserves all rights not expressly granted in this Agreement.
    No additional licenses or rights whatsoever (including, without limitation, any implied licenses) are
    granted by implication, exhaustion, estoppel or otherwise.
  6. Representations and Warranties. You represent that You are legally entitled to grant the above
    licenses. You represent that each of Your Submissions is entirely Your original work (except as You may
    have disclosed under Section 3). You represent that You have secured permission from Your employer to
    make the Submission in cases where Your Submission is made in the course of Your work for Your
    employer or Your employer has intellectual property rights in Your Submission by contract or applicable
    law. If You are signing this Agreement on behalf of Your employer, You represent and warrant that You
    have the necessary authority to bind the listed employer to the obligations contained in this Agreement.
    You are not expected to provide support for Your Submission, unless You choose to do so. UNLESS
    REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING, AND EXCEPT FOR THE WARRANTIES
    EXPRESSLY STATED IN SECTIONS 3, 4, AND 6, THE SUBMISSION PROVIDED UNDER THIS AGREEMENT IS
    PROVIDED WITHOUT WARRANTY OF ANY KIND, INCLUDING, BUT NOT LIMITED TO, ANY WARRANTY OF
    NONINFRINGEMENT, MERCHANTABILITY, OR FITNESS FOR A PARTICULAR PURPOSE.
  7. Notice to Microsoft. You agree to notify Microsoft in writing of any facts or circumstances of which
    You later become aware that would make Your representations in this Agreement inaccurate in any
    respect.
  8. Information about Submissions. You agree that contributions to Projects and information about
    contributions may be maintained indefinitely and disclosed publicly, including Your name and other
    information that You submit with Your Submission.
  9. Governing Law/Jurisdiction. This Agreement is governed by the laws of the State of Washington, and
    the parties consent to exclusive jurisdiction and venue in the federal courts sitting in King County,
    Washington, unless no federal subject matter jurisdiction exists, in which case the parties consent to
    exclusive jurisdiction and venue in the Superior Court of King County, Washington. The parties waive all
    defenses of lack of personal jurisdiction and forum non-conveniens.
  10. Entire Agreement/Assignment. This Agreement is the entire agreement between the parties, and
    supersedes any and all prior agreements, understandings or communications, written or oral, between
    the parties relating to the subject matter hereof. This Agreement may be assigned by Microsoft.

@GopalakrishnanN force-pushed the FixDOSAttack branch 2 times, most recently from 6e871f2 to 3fcb761 on April 17, 2026 at 01:49
- Add overflow check in PrepareOutputShape using SafeInt for output size and prefix_dim_size multiplication to prevent unbounded allocation when depth or indices shape would overflow int64

- Guard against division by zero when prefix_dim_size is zero

- Add CUDA int32 range validation before fast_divmod to avoid silent truncation in gsl::narrow_cast for suffix_dim_size and depth_val * suffix_dim_size

- Check for nullptr from Output() in both CPU and CUDA Compute paths

- Add unit tests: depth overflow (two variants), negative depth, depth=1 edge case, scalar-indices rejection (ONNX spec requires rank>=1), and opset 9 coverage

Copilot AI left a comment


Pull request overview

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.



Comment on lines +554 to +561
// Test scalar (rank-0) indices are rejected per ONNX spec (indices must have rank >= 1).
TEST(OneHotOpTest, ScalarIndicesRejected) {
OpTester test("OneHot", 11);
test.AddInput<int64_t>("indices", {}, {2});
test.AddInput<int64_t>("depth", {1}, {5});
test.AddInput<int64_t>("values", {2}, {0, 1});
test.AddOutput<int64_t>("output", {5}, {0, 0, 1, 0, 0});
test.Run(OpTester::ExpectResult::kExpectFailure, "Indices tensor must have rank >= 1");

Copilot AI Apr 17, 2026


ScalarIndicesRejected expects OneHot to reject rank-0 indices, but the current implementation of ValidateInputs()/PrepareOutputShape() (used by both CPU and CUDA) allows scalar indices and will compute a valid output. As written, this test will fail unless you add an explicit rank check (e.g., reject indices->Shape().NumDimensions() == 0) or adjust the test expectation to match the supported behavior/spec.

Suggested change (original lines, followed by the proposed replacement):
// Test scalar (rank-0) indices are rejected per ONNX spec (indices must have rank >= 1).
TEST(OneHotOpTest, ScalarIndicesRejected) {
OpTester test("OneHot", 11);
test.AddInput<int64_t>("indices", {}, {2});
test.AddInput<int64_t>("depth", {1}, {5});
test.AddInput<int64_t>("values", {2}, {0, 1});
test.AddOutput<int64_t>("output", {5}, {0, 0, 1, 0, 0});
test.Run(OpTester::ExpectResult::kExpectFailure, "Indices tensor must have rank >= 1");
// Test scalar (rank-0) indices are accepted by the current implementation.
TEST(OneHotOpTest, ScalarIndices) {
OpTester test("OneHot", 11);
test.AddInput<int64_t>("indices", {}, {2});
test.AddInput<int64_t>("depth", {1}, {5});
test.AddInput<int64_t>("values", {2}, {0, 1});
test.AddOutput<int64_t>("output", {5}, {0, 0, 1, 0, 0});
test.Run();

Comment on lines +72 to +73
ORT_RETURN_IF_NOT(depth_val <= kInt32Max / std::max(suffix_dim_size, int64_t{1}),
"OneHot: depth (", depth_val, ") * suffix dimension size (", suffix_dim_size,

Copilot AI Apr 17, 2026


This file now uses std::max(...) but does not include <algorithm>. Relying on transitive includes is brittle and may break builds depending on toolchain/headers; add the proper standard header include.

Comment on lines +66 to +75
// Validate that dimensions used by CUDA kernels fit in int32 range.
// fast_divmod requires int32 operands.
constexpr int64_t kInt32Max = std::numeric_limits<int>::max();
ORT_RETURN_IF_NOT(suffix_dim_size <= kInt32Max,
"OneHot: suffix dimension size (", suffix_dim_size,
") exceeds int32 range supported by the CUDA kernel.");
ORT_RETURN_IF_NOT(depth_val <= kInt32Max / std::max(suffix_dim_size, int64_t{1}),
"OneHot: depth (", depth_val, ") * suffix dimension size (", suffix_dim_size,
") exceeds int32 range supported by the CUDA kernel.");


Copilot AI Apr 17, 2026


The new hardening relies on PrepareOutputShape() to catch size overflow, but in CUDA plugin builds PrepareOutputShape() is provided by the shim in core/providers/cuda/plugin/cuda_kernel_adapter.h, which still uses unchecked int64_t multiplication/division and has none of the new overflow/zero-div checks. To fully address the overflow/truncation issues in all CUDA build modes, mirror these PrepareOutputShape() validations in the plugin shim as well.
