
[VMware] Disk controller mappings#10454

Open
winterhazel wants to merge 22 commits into apache:main from scclouds:disk-controller-mappings

Conversation

@winterhazel
Member

Description

This is a refactor of the disk controller-related logic for VMware that also adds support for SATA and NVMe controllers.

A detailed description of these changes is available at https://cwiki.apache.org/confluence/display/CLOUDSTACK/Disk+Controller+Mappings.

Types of changes

  • Breaking change (fix or feature that would cause existing functionality to change)
  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)
  • Enhancement (improves an existing feature and functionality)
  • Cleanup (Code refactoring and cleanup, that may add test cases)
  • build/CI
  • test (unit or integration test code)

Feature/Enhancement Scale or Bug Severity

Feature/Enhancement Scale

  • Major
  • Minor

How Has This Been Tested?

The tests below were performed for VMs with the following rootDiskController and dataDiskController configurations:

  • osdefault/osdefault (converted to lsilogic/lsilogic)
  • ide/ide
  • pvscsi/pvscsi
  • sata/sata
  • nvme/nvme
  • sata/lsilogic
  • ide/osdefault
  • osdefault/ide

  1. VM deployment: I deployed one VM with each of the configurations. I verified in vCenter that they had the correct number of disk controllers, and that each volume was associated with the expected controller. The sata/lsilogic VM was the only one that had a data disk; the others only had a root disk.

  2. VM start: I stopped the VMs deployed in (1) and started them again. I verified in vCenter that they had the correct number of disk controllers, and that each volume was associated with the expected controller.

  3. Disk attachment: while the VMs were running, I tried to attach a data disk. All the data disks were attached successfully (except for the VMs using IDE as the data disk controller, which does not allow hot plugging disks; for these, I attached the disks after stopping the VM). I verified that all the disks were using the expected controller. Then, I stopped and started the VMs, and verified that they were still using the expected controllers. Finally, I stopped the VMs and detached the volumes. I verified that they were detached successfully.

  4. VM import: I unmanaged the VMs and imported them back. I verified that their settings were inferred correctly from the existing disk controllers. Then, I started the VMs, and verified that the controllers and the volumes were configured correctly.
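The osdefault conversion exercised above (osdefault/osdefault resolving to lsilogic/lsilogic) can be sketched roughly as follows. This is a minimal illustration with hypothetical names (`DiskControllerResolver`, `resolve`), not CloudStack's actual API:

```java
/**
 * Hypothetical sketch of the "osdefault" resolution described above: when a
 * VM's rootDiskController or dataDiskController setting is "osdefault" (or
 * unset), the value is replaced by the controller recommended for the guest
 * OS (LSI Logic in the tests above).
 */
public class DiskControllerResolver {

    static final String OS_DEFAULT = "osdefault";

    /** Returns the effective controller, falling back to the guest OS default. */
    static String resolve(String configured, String guestOsRecommended) {
        if (configured == null || OS_DEFAULT.equalsIgnoreCase(configured)) {
            return guestOsRecommended;
        }
        return configured.toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(resolve("osdefault", "lsilogic")); // lsilogic
        System.out.println(resolve("sata", "lsilogic"));      // sata
    }
}
```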

The next tests were performed using the following imported VMs:

  • osdefault/osdefault
  • ide/ide
  • nvme/nvme
  • sata/lsilogic

  1. Volume migration: I migrated the volumes from NFS to local storage, and verified that the migration finished successfully. Then, I started the VMs and verified that both the controllers and the disks were configured correctly.

  2. Volume resize: I expanded all of the disks, and verified in vCenter that their size was changed. Then, I started the VMs and verified that both the controllers and the disks were configured correctly.

  3. VM snapshot: I took some VM snapshots, started the VMs and verified that everything was OK. I changed the configuration of the VM using osdefault/osdefault to sata/sata and started the VM to begin the reconfiguration process. I verified that the disk controllers in use were not removed, and that the disks were still associated with the previous controllers; however, the SATA controllers were also created. The VM was working as expected. Finally, I deleted the VM snapshots.

  4. Template creation from volume: I created templates from the root disks. Then, I deployed VMs from the templates. I verified that all the VMs had the same disk controllers as the original VM, and that the only existing disk was correctly associated with the configured root disk controller.

  5. Template creation from volume snapshot: I took snapshots from the root disks, and created templates from the snapshots. Then, I deployed VMs from the templates. I verified that all the VMs had the same disk controllers as the original VM, and that the only existing disk was correctly associated with the configured root disk controller.

  6. VM scale: with the VMs stopped, I scaled them from Small Instance to Medium Instance. I verified that the offering was changed. Then I started the VMs, and verified that they were correctly reconfigured in vCenter.

Other tests:

  • System VM creation: after applying the patches, I recreated the SSVM and the CPVM. I verified that they were using a single LSI Logic controller. I also verified the controllers of a new VR and of an existing VR.

  • I attached 3 disks to the ide/ide VM. When trying to attach a 4th disk, I got an expected exception, as the IDE bus had reached the maximum number of devices (the 4th one was the CD/DVD drive).

  • I removed all the disks from the sata/lsilogic VM. I tried to attach the root disk again, and verified that it was attached successfully. I started the VM, and verified that it was configured correctly.

  • I attached 8 disks to the pvscsi/pvscsi VM, and verified that the 8th disk was successfully attached to device number 8 (device number 7 is reserved for the controller).
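
The two capacity rules exercised by the last two tests (at most 4 devices per IDE bus, and SCSI unit number 7 reserved for the controller, so the 8th disk lands on device number 8) can be sketched as below. This is an illustrative helper under those assumptions, not CloudStack's implementation; the class and method names are hypothetical:

```java
/**
 * Sketch of the two capacity rules observed above: an IDE bus holds at most
 * 4 devices, one of which is the CD/DVD drive, and on a (PV)SCSI controller
 * unit number 7 is reserved for the controller itself.
 */
public class DeviceSlotPicker {

    static final int SCSI_RESERVED_UNIT = 7;
    static final int IDE_MAX_DEVICES = 4; // 2 channels x 2 devices, incl. CD/DVD

    /** Next free unit number on a SCSI controller, skipping the reserved slot. */
    static int nextScsiUnit(int disksAlreadyAttached) {
        // disks occupy units 0..n-1; anything at or past the reserved slot
        // is shifted up by one
        return disksAlreadyAttached >= SCSI_RESERVED_UNIT
                ? disksAlreadyAttached + 1
                : disksAlreadyAttached;
    }

    /** True if another disk fits on an IDE bus that already hosts the CD/DVD drive. */
    static boolean ideHasFreeSlot(int disksAlreadyAttached) {
        return disksAlreadyAttached + 1 < IDE_MAX_DEVICES; // +1 for the CD/DVD drive
    }

    public static void main(String[] args) {
        System.out.println(nextScsiUnit(7));   // 8: unit 7 is skipped
        System.out.println(ideHasFreeSlot(3)); // false: the 4th disk does not fit
    }
}
```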

@winterhazel
Member Author

@blueorangutan package

@winterhazel changed the title from "Disk controller mappings" to "[VMware] Disk controller mappings" on Feb 24, 2025
@blueorangutan

@winterhazel a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@codecov

codecov bot commented Feb 24, 2025

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 3.52%. Comparing base (5d61ba3) to head (adce7af).
⚠️ Report is 1 commit behind head on main.

❗ There is a different number of reports uploaded between BASE (5d61ba3) and HEAD (adce7af). Click for more details.

HEAD has 1 upload less than BASE
Flag BASE (5d61ba3) HEAD (adce7af)
unittests 1 0
Additional details and impacted files
@@              Coverage Diff              @@
##               main   #10454       +/-   ##
=============================================
- Coverage     18.02%    3.52%   -14.50%     
=============================================
  Files          5973      464     -5509     
  Lines        537466    40063   -497403     
  Branches      65991     7534    -58457     
=============================================
- Hits          96855     1414    -95441     
+ Misses       429689    38461   -391228     
+ Partials      10922      188    -10734     
Flag Coverage Δ
uitests 3.52% <ø> (ø)
unittests ?

Flags with carried forward coverage won't be shown. Click here to find out more.

☔ View full report in Codecov by Sentry.

@blueorangutan

Packaging result [SF]: ✖️ el8 ✖️ el9 ✖️ debian ✖️ suse15. SL-JID 12549

@winterhazel
Member Author

@DaanHoogland it seems there were some merge issues in main. org.apache.cloudstack.backup.VeeamBackupProvider is missing some methods and imports.

@DaanHoogland
Contributor

> @DaanHoogland it seems there were some merge issues in main. org.apache.cloudstack.backup.VeeamBackupProvider is missing some methods and imports.

I'll check and update

@DaanHoogland
Contributor

@winterhazel , please see #10457 . I have had no time (or infra) to test yet.

@winterhazel
Member Author

@blueorangutan package

@blueorangutan

@winterhazel a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 12586

@github-actions

This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.

@JoaoJandre
Contributor

@winterhazel could you fix the conflicts?

@winterhazel
Member Author

@blueorangutan package

@blueorangutan

@winterhazel a [SL] Jenkins job has been kicked to build packages. It will be bundled with KVM, XenServer and VMware SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✔️ el8 ✔️ el9 ✔️ debian ✔️ suse15. SL-JID 13603

@sureshanaparti sureshanaparti requested a review from nvazquez June 5, 2025 09:30
Contributor

Copilot AI left a comment

Pull request overview

Copilot reviewed 29 out of 29 changed files in this pull request and generated 7 comments.



@blueorangutan

[SF] Trillian Build Failed (tid-15764)

@blueorangutan

[SF] Trillian Build Failed (tid-15762)

@blueorangutan

[SF] Trillian Build Failed (tid-15763)

@blueorangutan

[SF] Trillian test result (tid-15761)
Environment: kvm-ol8 (x2), zone: Advanced Networking with Mgmt server ol8
Total time taken: 57305 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr10454-t15761-kvm-ol8.zip
Smoke tests completed. 151 look OK, 0 have errors, 0 did not run
Only failed and skipped tests results shown below:

Test Result Time (s) Test File

@DaanHoogland
Contributor

@blueorangutan test ol9 vmware-80u3

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (ol9 mgmt + vmware-80u3) has been kicked to run smoke tests

@blueorangutan

[SF] Trillian Build Failed (tid-15778)

@blueorangutan

[SF] Trillian Build Failed (tid-15783)

@github-actions

This pull request has merge conflicts. Dear author, please fix the conflicts and sync your branch with the base branch.

@DaanHoogland
Contributor

conflicts @winterhazel

don’t worry about the failed env builds, those are due to an upstream epel repo not being ready. It is running now in the backend.

@winterhazel
Member Author

@blueorangutan package

@blueorangutan

@winterhazel a [SL] Jenkins job has been kicked to build packages. It will be bundled with no SystemVM templates. I'll keep you posted as I make progress.

@blueorangutan

Packaging result [SF]: ✖️ el8 ✖️ el9 ✖️ debian ✖️ suse15. SL-JID 17316

@blueorangutan

[SF] Trillian test result (tid-15788)
Environment: vmware-70u3 (x2), zone: Advanced Networking with Mgmt server ol9
Total time taken: 46772 seconds
Marvin logs: https://github.com/blueorangutan/acs-prs/releases/download/trillian/pr10454-t15788-vmware-70u3.zip
Smoke tests completed. 133 look OK, 7 have errors, 11 did not run
Only failed and skipped tests results shown below:

Test Result Time (s) Test File
test_CRUD_operations_guest_OS_mapping Error 2.40 test_guest_os.py
test_guest_OS_mapping_check_with_hypervisor Error 2.47 test_guest_os.py
ContextSuite context=TestListIdsParams>:setup Error 0.00 test_list_ids_parameter.py
test_01_snapshot_root_disk Error 3.20 test_snapshots.py
test_02_list_snapshots_with_removed_data_store Error 67.90 test_snapshots.py
ContextSuite context=TestSnapshotStandaloneBackup>:setup Error 186.15 test_snapshots.py
test_11_destroy_ssvm Error 7.27 test_ssvm.py
test_01_create_template Error 1.18 test_templates.py
test_CreateTemplateWithDuplicateName Error 1.21 test_templates.py
ContextSuite context=TestTemplates>:setup Error 104.96 test_templates.py
test_01_snapshot_usage Error 3.21 test_usage.py
test_01_template_usage Error 1.52 test_usage.py
test_03_live_migrate_VM_with_two_data_disks Error 41.02 test_vm_life_cycle.py
test_08_migrate_vm Error 24.12 test_vm_life_cycle.py
all_test_vnf_templates Skipped --- test_vnf_templates.py
all_test_volumes Skipped --- test_volumes.py
all_test_vpc_conserve_mode Skipped --- test_vpc_conserve_mode.py
all_test_vpc_ipv6 Skipped --- test_vpc_ipv6.py
all_test_vpc_redundant Skipped --- test_vpc_redundant.py
all_test_vpc_router_nics Skipped --- test_vpc_router_nics.py
all_test_vpc_vpn Skipped --- test_vpc_vpn.py
all_test_webhook_delivery Skipped --- test_webhook_delivery.py
all_test_webhook_lifecycle Skipped --- test_webhook_lifecycle.py
all_test_host_maintenance Skipped --- test_host_maintenance.py
all_test_hostha_kvm Skipped --- test_hostha_kvm.py

@DaanHoogland
Contributor

@blueorangutan test ol9 vmware-80u3

@blueorangutan

@DaanHoogland a [SL] Trillian-Jenkins test job (ol9 mgmt + vmware-80u3) has been kicked to run smoke tests



8 participants