After migrating some VMs from OpenStack to VMware, we ran into an issue where vCenter refused to boot a VM, failing with the error: Object type requires hosted I/O.
To fix this, SSH into the ESXi host where the VM is running (or any host that can reach the disk files, if they are on a shared datastore). Once on the host, change into the VM folder containing the disk and check/repair it:

cd /vmfs/volumes/DS1/VM1/
vmkfstools -x check disk.vmdk
Disk needs repaired
vmkfstools -x repair "disk.vmdk"
Disk was successfully repaired.

Then start the VM from vCenter. Keep in mind the vmdk name may be different if you have snapshotted the VM; take a look at the VM's hardware settings and disk location if you need help finding the right disk.
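If you have a batch of migrated VMs to fix, the steps above can be wrapped in a small function. This is a sketch, not a VMware tool: check_and_repair is a hypothetical name, and the datastore path is the example from this post.

```shell
#!/bin/sh
# Sketch of the check/repair flow as a reusable function
# (check_and_repair is a hypothetical name, not a VMware command).
check_and_repair() {
    disk="$1"
    if ! command -v vmkfstools >/dev/null 2>&1; then
        echo "vmkfstools not found - run this from the ESXi shell" >&2
        return 1
    fi
    # -x check reports whether the disk is consistent; -x repair
    # rewrites it in place, so snapshot or back up first. The exact
    # wording of the check output may vary by ESXi version.
    if vmkfstools -x check "$disk" | grep -qi "needs repair"; then
        vmkfstools -x repair "$disk"
    else
        echo "$disk looks consistent"
    fi
}
# check_and_repair /vmfs/volumes/DS1/VM1/disk.vmdk
```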
Recently, there were some issues within OpenStack that we wanted to investigate further, but to keep our users happy, we migrated their VMs from OpenStack to VMware. Our setup is as follows:
First, snapshot your VM in OpenStack, then save the image locally:
openstack image save myVM --file=myVM.img
Ideally the OpenStack command will work, but with large images it may time out. If that happens, you can export the image directly from Ceph; here is the command you'll need:
rbd export images/<VM UUID> myVM.img
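The two export paths can be combined into one step that tries the OpenStack client first and falls back to Ceph. This is a sketch: export_image is a hypothetical helper, and the "images" pool name depends on your Glance/Ceph configuration.

```shell
#!/bin/sh
# Sketch: try the OpenStack client first, then fall back to exporting
# the image straight from Ceph (export_image is a hypothetical name).
export_image() {
    image="$1"   # image name for glance, image UUID for rbd
    out="$2"
    if command -v openstack >/dev/null 2>&1 \
        && openstack image save "$image" --file="$out"; then
        echo "saved via the OpenStack client"
    elif command -v rbd >/dev/null 2>&1; then
        # Avoids the client-side timeout on large images.
        rbd export "images/$image" "$out"
    else
        echo "neither openstack nor rbd client is available" >&2
        return 1
    fi
}
# export_image <VM UUID> myVM.img
```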
Now convert the image to vmdk:
qemu-img convert -f raw -O vmdk myVM.img myVM.vmdk
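It is worth sanity-checking the result before copying it to the datastore. This sketch wraps the conversion together with qemu-img info; convert_and_check is a hypothetical name, and qemu-img ships in Ubuntu's qemu-utils package.

```shell
#!/bin/sh
# Convert, then confirm the result really is a vmdk before moving it
# to ESXi (convert_and_check is a hypothetical helper name).
convert_and_check() {
    src="${1:-myVM.img}"
    dst="${2:-myVM.vmdk}"
    qemu-img convert -f raw -O vmdk "$src" "$dst"
    # "file format: vmdk" in the output confirms the conversion.
    qemu-img info "$dst"
}
# convert_and_check myVM.img myVM.vmdk
```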
Next, create your VMX file. It is best to keep the VMX as minimal as possible; this is what worked for me for an Ubuntu server VM:
.encoding = "UTF-8"
config.version = "8"
virtualHW.version = "14"
memsize = "2048"
displayName = "VM1"
scsi0.present = "true"
scsi0.sharedBus = "none"
scsi0.virtualDev = "lsilogic"
virtualHW.productCompatibility = "hosted"
guestOS = "linux"
ethernet0.present = "true"
ethernet0.startConnected = "false"
ethernet0.virtualDev = "e1000"
ethernet0.connectionType = "hostonly"
ethernet1.present = "true"
ethernet1.startConnected = "true"
ethernet1.virtualDev = "e1000"
ethernet1.connectionType = "nat"
ethernet1.networkName = "Public VM Network"
ide0:0.present = "TRUE"
ide0:0.fileName = "myVM.vmdk"
ide0:0.redo = ""
sched.ide0:0.shares = "normal"
sched.ide0:0.throughputCap = "off"

Here are some further details/references about some of those lines. There are only three required lines in a VMX file:

* config.version = "8": This should almost always be "8"; "7" is legacy, and "6" is for even older hardware.
* virtualHW.version = "14": This depends on the product version and compatibility; 14 requires a minimum of ESXi 6.5.
* guestOS = "linux": For Windows VMs this list may not be complete, and you might want to dig into this file to find more. For example, Windows 10 64-bit should be "windows9-64".

A few optional but useful lines:

* ethernet0.virtualDev = "e1000": Pick the NIC model based on the guestOS; this is one place the guestOS choice is tied to the ethernet settings.
* numvcpus = "2": Changes the number of CPUs from the default of 1 to 2.
* memsize = "2048": RAM in MB; there are some limits based on the virtualHW version and the ESXi version.

I hope this helps anyone who is trying to migrate OpenStack VMs to VMware.
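Once the vmdk and vmx are in the datastore folder, the remaining ESXi-side steps can be sketched as below. This is an assumption-laden sketch, not the post's exact procedure: vmkfstools -i clones the qemu-produced "hosted" disk into native VMFS format (which also sidesteps the hosted I/O error from the first section; point ide0:0.fileName at the new disk), and vim-cmd registers the VM.

```shell
#!/bin/sh
# Sketch of the last ESXi-side steps; run in the ESXi shell.
# Paths follow the DS1/VM1 example above (register_vm is a
# hypothetical helper name).
register_vm() {
    dir="/vmfs/volumes/DS1/VM1"
    # Clone the qemu-produced "hosted" vmdk into native VMFS format;
    # remember to update ide0:0.fileName to the new disk name.
    vmkfstools -i "$dir/myVM.vmdk" "$dir/myVM-vmfs.vmdk" -d thin
    # Register the VM so it appears in vCenter / the host client.
    vim-cmd solo/registervm "$dir/VM1.vmx"
}
# register_vm   # only meaningful on an ESXi host
```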
Current Setup:
macOS 10.15 (Catalina)
Java SE Development Kit 13.0.1
A few Ansible settings that I like to have to help speed up the fun....
[defaults]
host_key_checking = False
pipelining = True
forks = 100
gathering = smart
* host_key_checking: disables SSH host key checking. This stops the annoying "WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!" prompt. (On by default since v1.3.)
* pipelining: speeds Ansible up significantly by reducing the number of SSH connections.
* forks: increases the number of parallel SSH connections you can have. (The default of 5 is described in the docs as a very, very conservative number.)
* gathering: speeds up fact gathering by caching the results. (New since v1.6.)

These settings can be set in any of the following (v1.5+):
* ANSIBLE_CONFIG (an environment variable)
* ansible.cfg (in the current directory)
* .ansible.cfg (in the home directory)
* /etc/ansible/ansible.cfg
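Whichever location you pick, you can verify which settings Ansible actually loaded. The ansible-config command exists from Ansible 2.4 on; show_effective_config is just a hypothetical wrapper name.

```shell
#!/bin/sh
# Confirm which settings Ansible picked up and from which file
# (show_effective_config is a hypothetical helper name).
show_effective_config() {
    ansible --version | head -n 3   # includes a "config file = ..." line
    ansible-config dump --only-changed
}
# show_effective_config
```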
But I usually just set it in my home directory:
~/.ansible.cfg

Resources: https://docs.ansible.com/ansible/intro_configuration.html

If you have Ubuntu MAAS (Metal As A Service) and, like most of us, a hard drive configuration that is over 2 TB, you will run into trouble deploying. At least that was the case for me. In our server room we have Dell R710s with 12 TB of storage (six 2 TB drives). I have these configured in RAID 6, giving us 8 TB of usable space. However, every deployment to them failed, while our R410s, with only 900 GB of storage, never had a problem. I also knew an 8 TB deployment was possible, since we had been doing it with Fuel since Ubuntu 12. So what gives? Was there some sort of limitation we overlooked? Not according to the troubleshooting guide [5].
What was/is the problem?

Our Environment: We have:
Error message:

Error: attempt to read or write outside of disk `hd0'.
Entering rescue mode...
grub rescue>

Attempts:

Fails:
Success!
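For background: the grub error above is the classic symptom of BIOS firmware trying to read past the 2 TB limit of MBR addressing [1]; a small /boot partition near the start of the disk keeps everything grub reads below that boundary. A sketch of such a layout follows. The 1 GB /boot size comes from the conclusion below; the device name, the bios_grub partition, and the remaining sizes are illustrative assumptions, and MAAS/curtin normally drives the partitioning for you.

```shell
#!/bin/sh
# Illustrative GPT layout for a large RAID 6 virtual disk. GPT, unlike
# MBR, can address disks over 2 TB; the 1 GB ext4 /boot near the start
# keeps grub's reads well inside the addressable range.
make_layout() {
    disk="/dev/sda"   # assumption: your RAID virtual disk
    parted -s "$disk" mklabel gpt
    parted -s "$disk" mkpart grub 1MiB 2MiB
    parted -s "$disk" set 1 bios_grub on
    parted -s "$disk" mkpart boot ext4 2MiB 1026MiB
    parted -s "$disk" mkpart root ext4 1026MiB 100%
}
# make_layout   # destructive; never run against a disk with data
```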
Conclusion: I do not know why the larger (1 GB) /boot partition was critical, but it has worked across all of our R710s while the other layouts failed. I hope this helps someone. I got the idea of the separate partitions from resource [3] below. Hopefully this bug will work itself out as MAAS matures; however, it has been noted since 2014, which is a bit concerning [4].

Resources:
[1] http://en.community.dell.com/techcenter/extras/w/wiki/2837.hdd-support-for-2-5tb-3tb-drives-and-beyond
[2] https://askubuntu.com/questions/495994/what-filesystem-should-boot-be
[3] https://askubuntu.com/questions/470823/ubuntu-14-04-lts-maas-boot-fails-on-fresh-install-on-a-dell-2950
[4] https://bugs.launchpad.net/ubuntu/+source/ubiquity/+bug/1284196
[5] https://docs.ubuntu.com/maas/2.1/en/troubleshoot-faq
Quick way to get the inventory of your hosts and write it to a file:
root@r6-410-1:/home/ubuntu# ansible all -i INVENTORY_FILE -m setup --tree /tmp/facts
This will print the facts to the terminal and also write them to files, one per host, with each file named after the machine as it appears in your inventory file. That could be either a FQDN or an IP address. The files are in JSON format for easy viewing. Enjoy!
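Since each file under /tmp/facts is plain JSON, you can pull individual facts back out with standard tools. A sketch, using only the Python standard library; list_fact is a hypothetical helper name.

```shell
#!/bin/sh
# Read one fact out of a --tree output file (list_fact is a
# hypothetical helper; python3's json module does the parsing).
list_fact() {
    host_file="$1"; fact="$2"
    python3 -c "import json,sys; d=json.load(open(sys.argv[1])); print(d['ansible_facts'][sys.argv[2]])" "$host_file" "$fact"
}
# list_fact /tmp/facts/192.168.6.29 ansible_distribution
```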
Receiving an error like this:
root@r6-410-1:/home/ubuntu# ansible all -i tmp -m setup --tree /tmp/facts
192.168.6.29 | FAILED! => {
    "changed": false,
    "failed": true,
    "module_stderr": "Shared connection to 192.168.6.29 closed.\r\n",
    "module_stdout": "/bin/sh: 1: /usr/bin/python: not found\r\n",
    "msg": "MODULE FAILURE",
    "rc": 0
}
Go to your inventory file and add this:
ansible_python_interpreter=/usr/bin/python3

So your inventory file might look something like this now:

192.168.6.29 ansible_python_interpreter=/usr/bin/python3
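If many hosts need the same interpreter, an inventory group variable keeps things tidier than repeating it on every host line. The group name openstack_vms and the second address below are made up for illustration.

```shell
#!/bin/sh
# Hypothetical inventory with a group-wide interpreter setting
# instead of a per-host variable.
cat > /tmp/inventory <<'EOF'
[openstack_vms]
192.168.6.29
192.168.6.30

[openstack_vms:vars]
ansible_python_interpreter=/usr/bin/python3
EOF
```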
Author: James Benson is an IT professional.

Archives: August 2022