
Safetensor metadata mismatch fix in Mcore export#1422

Open
jinhangchoi wants to merge 1 commit into NVIDIA:main from jinhangchoi:jinhangc/safetensor-metadata-fix

Conversation


@jinhangchoi jinhangchoi commented May 9, 2026

What does this PR do?

Type of change: Bug fix

During Mcore export, a test revealed the following scenario:

  • The per-layer JSON is written, then the safetensors file, 28 seconds apart for the last shard.
    • Between those two writes, _get_mtp_state_dict() ran and added MTP tensors to layer_state_dicts.
    • The JSON captured the state BEFORE MTP was added; the safetensors captured it AFTER.
    • Consequently, the MTP metadata is missing from the final index.json.

To avoid this mismatch, the safetensors file must be written first, followed by the per-layer JSON metadata.

Usage

N/A: this is an internal fix to the export write order; there are no user-facing API changes.

Testing

Tested with MCore PTQ on a Nemotron model.

Before your PR is "Ready for review"

Make sure you read and follow Contributor guidelines and your commits are signed (git commit -s -S).

Make sure you read and follow the Security Best Practices (e.g. avoiding hardcoded trust_remote_code=True, torch.load(..., weights_only=False), pickle, etc.).

  • Is this change backward compatible?: ✅
  • If you copied code from any other sources or added a new PIP dependency, did you follow guidance in CONTRIBUTING.md: N/A
  • Did you write any new necessary tests?: ❌
  • Did you update Changelog?: ❌
  • Did you get Claude approval on this PR?: ✅ / ❌ / N/A

Additional Information

Summary by CodeRabbit

  • Bug Fixes
    • Fixed metadata consistency during model layer export to ensure exported data and metadata remain synchronized.

Signed-off-by: Jinhang Choi <jinhangc@nvidia.com>
@jinhangchoi jinhangchoi requested a review from a team as a code owner May 9, 2026 00:03
@jinhangchoi jinhangchoi requested a review from meenchen May 9, 2026 00:03

copy-pr-bot Bot commented May 9, 2026

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.


coderabbitai Bot commented May 9, 2026

📝 Walkthrough

The PR reorders operations within the per-layer safetensors export function so that the file write happens before the metadata JSON is generated. This ensures the metadata is computed from the same layer_state_dict contents that were just persisted to disk.

Changes

  • Safetensors Write Order (modelopt/torch/export/plugins/mcore_custom.py): the per-layer .safetensors file write is moved to immediately after filename setup and before the per-layer metadata JSON (weight_map/total_size) is built, for consistency.

Estimated code review effort

🎯 2 (Simple) | ⏱️ ~8 minutes

🚥 Pre-merge checks: ✅ 6 passed

  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the title 'Safetensor metadata mismatch fix in Mcore export' clearly and specifically summarizes the main change: fixing a metadata mismatch bug in Mcore export by reordering the safetensor and JSON writes.
  • Docstring Coverage ✅ Passed: docstring coverage is 100.00%, above the required threshold of 80.00%.
  • Linked Issues Check ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Out of Scope Changes Check ✅ Passed: check skipped because no linked issues were found for this pull request.
  • Security Anti-Patterns ✅ Passed: no security anti-patterns found. Code uses safe file I/O and JSON serialization; no unsafe torch.load, numpy.load, trust_remote_code, eval/exec, nosec, or new non-permissive dependencies detected.



@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
modelopt/torch/export/plugins/mcore_custom.py (1)

308-320: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Freeze layer_state_dict once to fully eliminate metadata/file drift.

This reorder helps, but you still read a live mutable dict twice. If layer_state_dict changes after save_file(...) and before the metadata loop, .json can diverge from the written .safetensors.

Proposed hardening
     for layer_index, layer_state_dict in layer_state_dicts.items():
         filename = name_template.format(layer_index, total_layers)
         meta_filename = filename + ".json"
         ckpt_filename = filename + ".safetensors"

+        # Freeze key->tensor mapping used by both outputs.
+        frozen_layer_state_dict = dict(layer_state_dict)
+
         # Write safetensors first, then build the per-layer meta JSON from the same dict.
         # Order matters: any late mutations to layer_state_dict (e.g. MTP tensors added after
         # the dict was first constructed) must be captured by both files.  Writing safetensors
         # first ensures the JSON is always consistent with what is physically on disk.
-        save_file(layer_state_dict, save_directory + "/" + ckpt_filename, metadata={"format": "pt"})
+        save_file(
+            frozen_layer_state_dict,
+            save_directory + "/" + ckpt_filename,
+            metadata={"format": "pt"},
+        )

         weight_map = {}
         layer_total_size = 0
-        for key, val in layer_state_dict.items():
+        for key, val in frozen_layer_state_dict.items():
             tensor_size = val.numel() * val.element_size()
             layer_total_size += tensor_size
             weight_map[key] = ckpt_filename
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@modelopt/torch/export/plugins/mcore_custom.py` around lines 308 - 320,
layer_state_dict is mutated after being written which can cause metadata/file
drift; snapshot it and use that immutable copy for both the safetensors write
and the metadata loop. Specifically, create a frozen copy of layer_state_dict
(e.g., snapshot = dict(layer_state_dict)) and pass snapshot to save_file(...)
and iterate snapshot.items() when building weight_map/layer_total_size so
save_file, weight_map, and layer_total_size are computed from the exact same
data; reference save_file, layer_state_dict, weight_map, ckpt_filename in your
change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Enterprise

Run ID: 8c7dee09-3fac-42b5-92ed-6c95d8e50462

📥 Commits

Reviewing files that changed from the base of the PR and between 1d796f9 and a08d952.

📒 Files selected for processing (1)
  • modelopt/torch/export/plugins/mcore_custom.py

