This repository contains a Bash script to renumber Proxmox VE QEMU VMs (classic qm virtual machines) from an old VMID scheme (e.g. 100–199) to a new one (e.g. 400–499).
Depending on your storage backend, this is more than just renaming a config file: the script also renames the underlying disks/volumes so that Proxmox remains consistent after the VMID change.
## ⚠️ Warning / Risk Notice

Changing VMIDs is an invasive operation that touches both configuration files and storage objects. If the config and the volumes get out of sync, VMs may end up in a "missing disk" / "undefined" state. Use at your own risk and create backups before applying changes.
## Table of Contents

- Features
- Prerequisites
- Quickstart
- Options
- How It Works
- Supported Storage Backends
- What Is NOT Covered Automatically
- Post-Migration Checks
- Troubleshooting
- Recovery / Manual Fix
- Repository Layout
- Contributing
- License
- Disclaimer
## Features

- ✅ Dry-run by default (prints what would happen)
- ✅ Collision avoidance: picks the next free VMID in the target range instead of blindly adding +300
- ✅ Renumbers QEMU VMs (`qm`) including:
  - Proxmox VM config (`/etc/pve/qemu-server/<VMID>.conf`)
  - Optional firewall config (`/etc/pve/firewall/<VMID>.fw`)
  - Referenced disks/volumes (`vm-<id>-disk-*`, etc.), depending on storage backend
- ✅ Creates a backup directory with a `mapping.txt`
- ✅ Supports common storage types:
  - ZFS (zvol) via `zfs rename`
  - LVM/LVM-thin via `lvrename`
  - Directory storages (qcow2/raw files) via `mv` / directory rename
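The collision-avoidance idea can be sketched in pure Bash. The function name and argument layout below are illustrative, not taken from the script: prefer `old + offset`, and if that VMID is already taken, walk forward to the next free ID in the target range.

```bash
#!/usr/bin/env bash
# Illustrative sketch of collision-avoidant VMID mapping (not the script's exact code).
# Arguments: old VMID, offset, end of target range, space-separated list of used VMIDs.

next_free_vmid() {
  local old=$1 offset=$2 range_end=$3 used=" $4 "
  local candidate=$(( old + offset ))
  while (( candidate <= range_end )); do
    # Accept the candidate only if it is not already in use on the node.
    if [[ $used != *" $candidate "* ]]; then
      echo "$candidate"
      return 0
    fi
    (( candidate++ ))
  done
  return 1  # no free VMID left in the target range
}

next_free_vmid 100 300 499 "401 402"   # 400 is free -> prints 400
next_free_vmid 101 300 499 "401 402"   # 401 and 402 taken -> prints 403
```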
## Prerequisites

- Run on a Proxmox VE node with:
  - `qm`, `pvesm`
  - Root privileges (`root@host` or `sudo -i`)
- Depending on storage:
  - ZFS: `zfs` CLI available, datasets local
  - LVM: `lvs`, `lvrename` (package `lvm2`)

Recommended: run directly on the PVE node holding the VM configs and storage.
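A quick pre-flight check can confirm the required tools are present before you attempt a run. The helper below is an illustrative sketch, not part of the script:

```bash
#!/usr/bin/env bash
# Illustrative pre-flight check: report which of the expected tools are missing.

check_tools() {
  local t missing=()
  for t in "$@"; do
    command -v "$t" >/dev/null 2>&1 || missing+=("$t")
  done
  if (( ${#missing[@]} > 0 )); then
    echo "missing: ${missing[*]}"
  else
    echo "all tools found"
  fi
}

# On a PVE node you would check e.g.: check_tools qm pvesm zfs lvrename
check_tools sh    # sh is always present -> prints "all tools found"
```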
## Quickstart

1. Copy the script to your PVE node, e.g.:

   ```bash
   cp pve-renumber.sh /root/pve-renumber.sh
   chmod +x /root/pve-renumber.sh
   ```

2. Dry-run first (recommended):

   ```bash
   /root/pve-renumber.sh
   ```

   This will:

   - create a backup directory under `/root/pve-vmid-renumber-backup-YYYYMMDD-HHMMSS`
   - write the computed mapping to `mapping.txt`
   - print the planned renumbering (e.g. `100 -> 400`)
   - print all operations it would execute

3. If the plan looks correct, run with `--apply`:

   ```bash
   /root/pve-renumber.sh --apply
   ```

4. If some VMs might be running, you can add:

   ```bash
   /root/pve-renumber.sh --apply --shutdown --force-stop
   ```
## Options

- `--apply`: Actually perform changes. Without this flag, the script runs in dry-run mode.
- `--shutdown`: If a VM is running, try a graceful shutdown and wait up to 300 seconds.
- `--force-stop`: If `--shutdown` was used and the VM is still running after waiting, force-stop it.
- `--help` / `-h`: Print usage information.
## How It Works

At a high level, Proxmox VMIDs are tied to:

- Configuration files:
  - QEMU VMs: `/etc/pve/qemu-server/<VMID>.conf`
  - Optional firewall rules: `/etc/pve/firewall/<VMID>.fw`
- Storage objects referenced by the config:
  - ZFS zvols like `rpool/data/vm-100-disk-0`
  - LVM logical volumes like `/dev/pve/vm-100-disk-0`
  - Directory-based images like `/var/lib/vz/images/100/vm-100-disk-0.qcow2`
The script performs these steps:

1. Detect all QEMU VMIDs in the source range (default: `100–199`).
2. Build a target mapping into the target range (default: `400–499`), skipping already-used VMIDs.
3. Create a backup directory and write `mapping.txt`.
4. For each VM:
   1. Ensure it is stopped (or shut it down / force-stop it, depending on the options given)
   2. Parse `qm config <VMID>` for referenced disks/volids
   3. Resolve each volid to an actual path via `pvesm path`
   4. Rename volumes depending on the storage type:
      - ZFS: `zfs rename`
      - LVM: `lvrename`
      - Directory: `mv` folder/file and `vm-<old>-` ➜ `vm-<new>-`
   5. Back up the config file
   6. Rewrite references inside the config (`vm-<old>-` ➜ `vm-<new>-`, plus common `/images/<id>/` paths)
   7. Move `/etc/pve/qemu-server/<old>.conf` ➜ `<new>.conf`
   8. Optionally move the firewall config
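The disk-parsing step boils down to picking out `storage:volume` references from `qm config` output. A minimal sketch against sample output follows; the key list and regex are assumptions for illustration, and the real script may be stricter:

```bash
#!/usr/bin/env bash
# Sketch: extract disk volids from `qm config`-style output.
# The sample data and parsing regex are illustrative only.

sample_config='boot: order=scsi0
cores: 2
scsi0: local-zfs:vm-100-disk-0,size=32G
efidisk0: local-zfs:vm-100-disk-1,efitype=4m
net0: virtio=AA:BB:CC:DD:EE:FF,bridge=vmbr0'

# Keep only disk-like keys, then take the volid (first comma-separated field).
extract_volids() {
  grep -E '^(scsi|sata|virtio|ide|efidisk|tpmstate)[0-9]+:' \
    | sed -E 's/^[^:]+: *([^,]+).*/\1/'
}

printf '%s\n' "$sample_config" | extract_volids
# local-zfs:vm-100-disk-0
# local-zfs:vm-100-disk-1
```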
## Supported Storage Backends

### ZFS (zvols)

Typical naming:

```
rpool/data/vm-100-disk-0
```

The script renames datasets via:

```bash
zfs rename rpool/data/vm-100-disk-0 rpool/data/vm-400-disk-0
```

### LVM / LVM-thin

Typical paths:

```
/dev/pve/vm-100-disk-0
```

The script uses `lvrename` to rename the LV accordingly.

### Directory storages (qcow2/raw)

Typical paths:

```
/var/lib/vz/images/100/vm-100-disk-0.qcow2
```

The script renames:

- the directory `/images/<old>/` ➜ `/images/<new>/` (if applicable)
- files `vm-<old>-*` ➜ `vm-<new>-*`
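The per-backend dispatch can be pictured as a `case` on the storage type. The sketch below only echoes the command it would run, in the spirit of the script's dry-run mode; the type strings follow common `/etc/pve/storage.cfg` values (`zfspool`, `lvm`, `lvmthin`, `dir`), and the exact command layout is an assumption:

```bash
#!/usr/bin/env bash
# Dry-run sketch of the per-backend rename dispatch (illustrative only).

plan_rename() {
  local stype=$1 old_ref=$2 new_ref=$3
  case $stype in
    zfspool)      echo "zfs rename $old_ref $new_ref" ;;
    lvm|lvmthin)  echo "lvrename pve $old_ref $new_ref" ;;
    dir)          echo "mv $old_ref $new_ref" ;;
    *)            echo "SKIP (unsupported storage type: $stype)" ;;
  esac
}

plan_rename zfspool rpool/data/vm-100-disk-0 rpool/data/vm-400-disk-0
# zfs rename rpool/data/vm-100-disk-0 rpool/data/vm-400-disk-0
plan_rename rbd vm-100-disk-0 vm-400-disk-0
# SKIP (unsupported storage type: rbd)
```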
## What Is NOT Covered Automatically

Depending on your environment, you may need additional manual work:

- **Ceph / RBD (or other remote storages)**: `pvesm path` may not produce a local filesystem path, and renaming objects can require backend-specific commands. The script may skip these volumes with a warning.
- **Replication / HA / Backup jobs**: Proxmox replication, HA resources, scheduled backup jobs, monitoring systems, and external automation often reference VMIDs. After renumbering, update those references accordingly.
- **Snapshots / complex storage states**: Snapshots can introduce additional volume naming/relationships. While renaming can still work, it increases risk. Consider consolidating/handling snapshots carefully before renumbering.
## Post-Migration Checks

After an `--apply` run:

1. Verify the VM list:

   ```bash
   qm list
   ```

2. Verify that a VM config resolves all disks:

   ```bash
   qm config 400
   ```

3. Verify storage paths (example):

   ```bash
   pvesm path local-zfs:vm-400-disk-0
   ```

4. Search for leftover references to old VMIDs:

   ```bash
   grep -R "vm-1[0-9][0-9]-" /etc/pve/qemu-server/ || true
   ```

5. Start a few VMs and confirm they boot as expected:

   ```bash
   qm start 400
   qm status 400
   ```
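The leftover-reference check can be wrapped into a small reusable scan. The sketch below demonstrates it on a throwaway directory with fabricated sample configs; on a real node you would point it at `/etc/pve/qemu-server/` instead:

```bash
#!/usr/bin/env bash
# Sketch: scan a config directory for references to the old VMID range (100-199).

scan_old_refs() {
  # grep returns non-zero when nothing matches, so fall back to a clean message.
  grep -R "vm-1[0-9][0-9]-" "$1" || echo "no leftover references"
}

demo=$(mktemp -d)
echo 'scsi0: local-zfs:vm-400-disk-0,size=32G' > "$demo/400.conf"
clean_result=$(scan_old_refs "$demo")
echo 'scsi0: local-zfs:vm-100-disk-0,size=32G' > "$demo/401.conf"
dirty_result=$(scan_old_refs "$demo")
echo "$clean_result"    # no leftover references
echo "$dirty_result"    # reports the stale vm-100- reference in 401.conf
rm -rf "$demo"
```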
## Troubleshooting

### `zfs rename` reports the dataset as missing

This often happens if:

- the dataset was already renamed during a previous run
- the config still references the old name

Check the existing datasets:

```bash
zfs list -H -o name | grep 'vm-'
```

Then ensure the config (`/etc/pve/qemu-server/<id>.conf`) references the correct `vm-<new>-disk-*` names.

### A VM shows a "missing disk" / "undefined" state

This usually means volumes were renamed but the config file was not updated/moved (or vice versa). See the Recovery / Manual Fix section below.

### Warnings about ACL / user token entries

This warning is commonly related to an ACL/user token entry in Proxmox and is usually not caused by this script. It can often be fixed by cleaning up invalid tokens/ACL entries, and it is not typically blocking.
## Recovery / Manual Fix

If a VM ends up in an inconsistent state (for example: volumes renamed but config not updated), you can recover manually:

1. Find the VM's disks (ZFS example):

   ```bash
   zfs list -H -o name | grep 'vm-100\|vm-400'
   ```

2. Decide on the target VMID (e.g. `400`) and update the config references:

   1. Back up the config:

      ```bash
      cp -a /etc/pve/qemu-server/100.conf /root/100.conf.bak
      ```

   2. Replace references inside the config:

      ```bash
      sed -i -E \
        -e 's/(^|[^0-9])vm-100-/\1vm-400-/g' \
        -e 's#/images/100/#/images/400/#g' \
        /etc/pve/qemu-server/100.conf
      ```

   3. Move the config to the new VMID:

      ```bash
      mv /etc/pve/qemu-server/100.conf /etc/pve/qemu-server/400.conf
      ```

3. Verify:

   ```bash
   qm config 400
   ```
## Repository Layout

Typical layout:

```
.
├── pve-renumber.sh
└── README.md
```
## Contributing

Contributions are welcome. Ideas:

- Improve support for additional storage backends (Ceph/RBD, iSCSI/LUN naming patterns, etc.)
- Add a "verify-only" mode that checks whether all `pvesm path` references resolve
- Add smarter snapshot handling (where possible)
- Extend the script to also handle containers (`pct`) if desired

Please open an issue/PR with:

- your Proxmox version (`pveversion -v`)
- your storage configuration (`/etc/pve/storage.cfg` with sensitive info removed)
- a sample `qm config <VMID>` (with secrets/tokens removed)
## Disclaimer

This script is provided "as is", without warranty of any kind. Use at your own risk. Always ensure you have backups and a recovery plan before applying changes.