How to migrate Windows Cluster VMs (incl. RDMs) from old to new storage with minimal downtime?
February 14, 2022
Recently, we did a storage upgrade for one of our customers. The VM migration was straightforward for roughly 90% of the workloads. The remaining 10% of the VMs, however, had RDM disks. For the standalone RDM disks (non-clustered workloads) we were able to use Storage vMotion to convert the RDMs to VMDKs and migrate them.
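As an aside, if you need to inventory which VMs carry RDM disks (and record the SCSI unit and backing device for the clustered ones, as required in the first step further below), PowerCLI does the job. This is only a sketch and assumes an existing Connect-VIServer session:

Get-VM | Get-HardDisk -DiskType RawPhysical, RawVirtual |
    Select-Object @{N='VM';E={$_.Parent.Name}}, Name, DiskType, ScsiCanonicalName,
                  @{N='ScsiUnit';E={$_.ExtensionData.UnitNumber}}, CapacityGB |
    Format-Table -AutoSize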
For the clustered VMs, on the other hand, we initially planned to use RP4VM (RecoverPoint for VMs) to migrate them (failover to the replica). That approach did not pan out: we hit a compatibility issue with no workaround, so we dropped the idea.
A colleague from the storage team then suggested the alternative plan below; we tested it and it worked. The steps are as follows.
# Make a note of the RDM-to-VM mapping (note the SCSI ID assigned to each RDM in the VM configuration).
# Shut down the VMs of that virtual cluster.
# Unmap the RDMs from the VMs.
# Create the LUNs on the destination array (they must be at least as large as the source!).
# Present the new LUNs to the ESXi hosts (rescan, …).
# Use Storage vMotion to move the VMs to the new datastores (this moves the OS disk and any other non-RDM disks).
# Use vmkfstools on the ESXi CLI to copy the RDM content to the new LUN; this automatically creates the new RDM pointer file (destination.vmdk):
vmkfstools -i <srcdisk.vmdk> -d rdmp:<device> <destination.vmdk>
Example:
vmkfstools -i TestVM_RDM1.vmdk -d rdmp:/vmfs/devices/disks/vml.02000100006006016044440000f8b164674b51e111565241494420 TestVM_NewRDM.vmdk
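Before remapping, it is worth a quick sanity check that the new pointer file really maps to the new device. vmkfstools can query an RDM descriptor, for example (file name as in the example above):

vmkfstools -q TestVM_NewRDM.vmdk

The output should report a passthrough raw device mapping and the vml ID of the new LUN.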
==== Remap the new LUN as a physical-mode RDM (RDMP) with the same SCSI ID ====
==== Don't forget to set the bus sharing again if it disappeared (see the PowerCLI sketch below) ====
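Those last two steps can also be scripted. Below is only a rough PowerCLI sketch: the VM name 'TestVM-Node1', datastore 'Datastore01' and controller name 'SCSI controller 1' are placeholders, and it assumes an existing Connect-VIServer session. Keep in mind that New-HardDisk attaches the disk to the next free unit on the controller, so if the original SCSI ID must be matched exactly, verify and adjust it afterwards in the VM's settings.

$vm = Get-VM -Name 'TestVM-Node1'
$ctrl = Get-ScsiController -VM $vm | Where-Object { $_.Name -eq 'SCSI controller 1' }
New-HardDisk -VM $vm -DiskPath '[Datastore01] TestVM-Node1/TestVM_NewRDM.vmdk' -Controller $ctrl
$ctrl | Set-ScsiController -BusSharingMode Physical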
Source:
Kudos to EricDeWitte1 (Contributor)