Update device selection to use torch.accelerator for device-agnostic code #3883
scinqyz wants to merge 4 commits into pytorch:main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/tutorials/3883

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV
There is 1 currently active SEV. If your PR is affected, please view it below.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Hi @scinqyz! Thank you for your pull request and welcome to our community.

Action Required
In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process
In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g. your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA Signed. If you have received this in error or have any questions, please contact us at [email protected]. Thanks!
Thank you for signing our Contributor License Agreement. We can now accept your code for this (and any) Meta Open Source project. Thanks!
Claude finished @sekyondaMeta's task in 1m 15s

PR Review: Update device selection to use torch.accelerator
Summary

The intent of this PR is good: updating from the CUDA-only `torch.cuda.is_available()` check to the device-agnostic `torch.accelerator` API.

Issues

1. Missing tensor move and print statement (Bug)

The original code:

```python
if torch.cuda.is_available():
    tensor = tensor.to('cuda')
print(f"Device tensor is stored on: {tensor.device}")
```

The new code:

```python
device = torch.accelerator.current_accelerator().type if torch.accelerator.is_available() else 'cpu'
```

The PR sets a `device` variable but never moves the tensor to it, and it drops the print statement that shows where the tensor is stored.

2. Suggested fix

The replacement should be:

```python
device = torch.accelerator.current_accelerator().type if torch.accelerator.is_available() else "cpu"
tensor = tensor.to(device)
print(f"Device tensor is stored on: {tensor.device}")
```

This keeps the tutorial's teaching purpose intact (demonstrating device transfer) while adopting the device-agnostic API.
@scinqyz See the review above for a bug to fix.
Fixes #3876
Description
Replaces the suboptimal `torch.cuda.is_available()` check, which detects only CUDA and ignores other accelerators, with the device-agnostic `torch.accelerator.current_accelerator().type` (a version-tolerant variant is sketched below).
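Because `torch.accelerator` only exists in recent PyTorch releases, code that must also run on older versions may want a guarded fallback. Below is a minimal sketch under that assumption; the helper name `pick_device` is hypothetical, and the fallback reuses the older CUDA-only check.

```python
import torch

def pick_device() -> str:
    """Return the best available device type in a version-tolerant way.

    Uses torch.accelerator when present (newer PyTorch); otherwise falls
    back to the CUDA-only check that older releases support.
    """
    # hasattr guards against older PyTorch builds without torch.accelerator.
    if hasattr(torch, "accelerator") and torch.accelerator.is_available():
        return torch.accelerator.current_accelerator().type
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"

tensor = torch.ones(2, 3).to(pick_device())
print(f"Device tensor is stored on: {tensor.device}")
```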