Monday, March 22, 2021
I just tried to create an OpenShift 4.7 cluster using a vCenter appliance that was configured with the "tiny" size from the installer, which gives it 2 vCPUs and 10GB of RAM. I was using Installer-Provisioned Infrastructure (IPI) on vSphere 6.7. The cluster creation failed with a timeout. When I looked at the vCenter server's performance stats, I saw that it was using all of its CPU and memory. So I destroyed the cluster, doubled the resources on the vCenter VM, and ran the cluster creation again. This time everything completed as expected.
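If the appliance is running on an ESXi host and you happen to have govc handy (not something this setup requires, and the VM name below is just a placeholder), resizing it from the command line is a quick sketch like this:
# assumes govc is configured to talk to the ESXi host that runs the appliance,
# and that the appliance VM is named "vcsa" (a placeholder)
govc vm.power -off vcsa
govc vm.change -vm vcsa -c 4 -m 20480   # 4 vCPUs, 20GB of RAM (in MB)
govc vm.power -on vcsa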
Wednesday, March 17, 2021
Overprovisioning vCPUs in ESXi as a VMWare guest
Background
I have a large server (96 vCPUs and 1TB RAM) for working on cloud projects. A limitation, however, is that VMWare Workstation Pro 16 can allocate at most 32 vCPUs and 128GB RAM to any one guest VM. Normally this isn't a problem, but when you're dealing with OpenShift and a Watson AIOps on-prem install, that's not enough. Specifically, Watson AIOps states that it requires a minimum of 36 cores for all of the VMs required.
Solution
It turns out that the 32 vCPU limit isn't really a problem in this case. VMWare products allow you to overprovision the host's resources to guests, and ESXi doesn't limit the total number of vCPUs or the amount of memory you can assign to its guests to what it has itself. This means that I can run ESXi as a guest under VMWare Workstation and allocate more than 32 vCPUs in total to its guest VMs. Here's a picture that I think clears up the situation:
As you can see, my ESXi guest (32 vCPUs) has three guest VMs that are using a total of 42 vCPUs, and they're all running fine. If all of the vCPUs get busy, performance will degrade, but I don't expect that to ever happen in my lab.
I've seen discussions where people overprovision vCPUs in a ratio of up to 4:1, meaning that I could possibly allocate 128 vCPUs in my ESXi guest as long as the guest VMs aren't too busy.
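For reference, the overprovisioning itself doesn't need anything special in the ESXi guest's .vmx file; the entries that matter here are the resource sizes (the Workstation maximums from above) and, as far as I know, the flag that passes hardware virtualization through to the guest so ESXi can run its own 64-bit VMs:
numvcpus = "32"
memsize = "131072"
vhv.enable = "TRUE"
ESXi then schedules its guests' vCPUs onto the 32 it sees, which is where the overprovisioning happens.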
Thursday, March 11, 2021
VMWare Workstation 16 Pro on Windows 10 Shared Folders Not Available In Linux Guest
This is a small problem with an easy fix, but I wanted to document it for others.
Running VMWare Workstation 16 Pro on Windows 10 Pro, when you create a Linux guest VM with Shared Folders enabled, the shared folders don't always show up in the guest VM. In the guest VM, you should see any shared folders under /mnt/hgfs. The problem is that they aren't seen there, even if you reboot the guest VM. The silly fix is to disable shared folders for the VM from VM->Settings->Options and SAVE that. Then go back into VM->Settings->Options and enable shared folders and SAVE that. Your shared folders should now be available under /mnt/hgfs.
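If toggling the setting still doesn't bring the shares back, a manual mount from inside the guest is another option. This assumes open-vm-tools is installed in the guest, which provides both commands:
vmware-hgfsclient                                    # lists the shares the host is exposing
sudo vmhgfs-fuse .host:/ /mnt/hgfs -o allow_other    # mounts all shares under /mnt/hgfs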
Friday, April 22, 2016
The easiest way to work around the problem of having one VM on a NAT network and one on a host-only network in VMWare Workstation
The Situation
I have a BigFix environment with a Windows BigFix server on the host-only network, and I've got IBM Control Desk installed on a Red Hat VM on one of the NAT networks. I want to integrate the two for asset management, which requires the Integration Composer, which has to communicate with both servers simultaneously. (We've done an air-gapped integration for a couple of customers, but I wanted to use the out-of-the-box mechanism).
Bad Solutions
Some of the solutions that I considered, but threw out because of the work involved:
Change an IP address: simply move one server to the other network
Add routes to communicate between the different subnets' IP addresses
Easy Solution
I added a network card on the NAT network to the BigFix Windows server. VMWare Workstation quickly suspended and resumed the VM, the adapter was seen, and it got a DHCP address on the NAT network. And then I could communicate between the two machines!
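A quick way to confirm it worked from the Red Hat side is just to ping the new address (shown as a placeholder here, since it's whatever DHCP handed out):
ping -c 3 <BigFix-server-NAT-address>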
Tuesday, October 6, 2015
How to reset your OpenStack services to "Up" in your IBM Cloud Manager dashboard for a vSphere cloud
While trying to test and manage components in your vSphere cloud, you may see "Services Down" in some parts of the dashboard. For example, under "Host Aggregates":
I basically found that restarting all of the "nova*" services on my controller was the answer to this problem.
The one-line answer is to log into your controller node as root and run the following:
for i in $(systemctl -a | grep '\.service' | grep ' active' | grep nova | awk '{print $1}'); do systemctl restart $i; echo $i; done
So it's iterating through the results of the 'systemctl -a' command that contain active services containing the word "nova", and restarting each of those services. After you run the above, you should see that it shows "Services Up" for all availability zones on all hosts (since in a vSphere cloud, these services are all running on the controller node, rather than on the VMware nodes themselves).
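To double-check that the services really came back, you can also ask nova directly from the controller. This assumes your OpenStack admin credentials are sourced into the shell; the openrc path below is just a guess for your environment:
source /root/openrc   # assumption: wherever your admin credentials file actually lives
nova service-list     # every nova service should now show state "up"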
Thursday, October 1, 2015
There's a new cloud in town, Part 4: How to reset ICMWO 4.3 to reinstall a cloud
After making sure ICMWO (IBM Cloud Manager With OpenStack) had the correct fixpack installed, the installation and configuration of ICO 2.5 succeeded and is working fine. I'm able to deploy individual VMs and Heat stacks to an OpenStack cloud. I've only created a small vSphere cloud, and I believe that's the reason I haven't had much success getting ICMWO to work with that cloud (I can launch instances, but those instances can't see their operating system). But I've learned several useful pieces of information through the process. I'll list the most important one here, and I'll write a new post for each of the others.
How to "reset" ICMWO to reinstall a cloud
While you're kicking the tires (or even installing into a production environment), you will certainly encounter the need to try to re-deploy a cloud. This "reset" functionality isn't made available from the GUI, and really involves doing some things outside of ICMWO. Happily, ICMWO doesn't install anything on your vCenter or ESXi servers (the controller uses the appropriate vSphere APIs through the vCenter server to do all the dirty work). So, to reset things so you can re-deploy a vSphere cloud, you need to:
1. Delete and re-create the "controller" node that you previously specified. This is the server that ICMWO deployed OpenStack to. You created this server specifically for this purpose based on the topology requirements of ICMWO. My entire environment is running under VMWare Workstation, so I simply took a snapshot of this VM once I had the OS installed and configured, so I could revert to that snapshot before each successive attempt.
2. Next, you need to delete the TWO Chef resources associated with the controller. There is a NODE and a CLIENT that have been created for the controller. To delete those, you need to run the following two commands (where "vmc.mynet.foo" is the FQDN of the controller for your VMWare cloud):
knife client delete vmc.mynet.foo
knife node delete vmc.mynet.foo
3. Finally, to delete the cloud from the ICMWO Deployer GUI (https://icmwos.mynet.foo:8443), you need to log into the ICMWO server (via the console, ssh, VNC, etc.) as the same user you use to log into the Deployer GUI and delete a directory. The name of the directory contains the name of the cloud that you specified in ICMWO when you deployed it, plus the datetimestamp of when it was created. The directory is under:
$HOME/icm/clouds
And the name will be "cloudName_datetimestamp". So to delete the cloud named "vc55" from my GUI, I needed to run this command:
rm -rf ~/icm/clouds/vc55_2015-09-29_173201
And now ICMWO is ready to allow you to try to deploy a cloud to that node. I don't know if this name is stored anywhere else, so the safest route in my opinion is to use a different name for the new cloud.
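Putting steps 2 and 3 together, the command-line part of the reset amounts to something like this (same example FQDN and cloud directory as above; the -y flags just skip knife's confirmation prompts):
knife client delete vmc.mynet.foo -y
knife node delete vmc.mynet.foo -y
rm -rf ~/icm/clouds/vc55_2015-09-29_173201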
Thursday, September 24, 2015
There's an updated cloud in town Part 2: Still Installing ICO 2.5
A few more hurdles overcome as I get closer to getting ICO 2.5 installed.
Some RHEL 7 notes
The firewall in RHEL 7 (and 7.1) is not iptables. Instead, it's the firewalld service that's controlled by systemd. I'm not sure which install option causes it to be configured, because it wasn't running on all of my RHEL 7.1 systems. Anyway, to turn it off, you can run:
systemctl stop firewalld
systemctl disable firewalld
In my case, it was blocking port 53 (dns), which I needed open to configure the vCenter server (next section). I first just used Applications->Sundry->Firewall to open port 53, then realized that I could just turn it completely off in my test environment so I don't hit any more problems with it.
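If you'd rather leave firewalld running, the command-line equivalent of opening just DNS (what I first did through the GUI) would be something along these lines:
firewall-cmd --permanent --add-service=dns   # opens 53/tcp and 53/udp
firewall-cmd --reload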
Installing vSphere 6.0 Without a Windows Machine
I decided to also install vSphere 6.0 to use that as a testbed, and that has a few challenges. Specifically, the vCenter Server Appliance (VCSA) no longer ships directly as a .OVA file. It is now an ISO file that you're supposed to mount and run on a Windows machine to remotely install the vCenter Server Appliance on a remote ESXi server. I didn't want to get a Windows machine involved if at all possible, and it turns out to be fairly straightforward to do this. You will find 99% of the instructions in this great article:
http://www.unixarena.com/2015/05/how-to-deploy-vcsa-6-0-on-vmware-workstation.html
Specifically, the .OVA file can be found in the .ISO file that you download from VMWare. It just doesn't have a .ova extension. So you need to extract the file, change the name to include the .ova extension, and then you're mainly off to the races. HOWEVER, you have to do ONE MORE THING to actually get it working. Specifically, you need to add this additional line to the end of the .vmx file after you import the .ova file:
guestinfo.cis.appliance.net.dns.servers="172.16.30.8"
Set the value appropriately for your network. If you don't add this, the VM will start up, but will have the error:
Failed to configure network
And I couldn't find a way to fix that in the VM as it stood. I updated the DNS settings, rebooted the server, did lots of other things, etc., and it still just showed that error. So I knew I would have to recreate the VM from the OVA file, but needed to figure out how to set the DNS server of the VM from the VMX file.
So I mounted the VCSA ISO file on Linux and ran the following command at the root of it:
grep -r guestinfo.cis *
Somewhat amazingly, that came back within seconds and I found all of the settings from the linked article, and then I searched for "dns" and found the above REQUIRED setting.
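For reference, the whole extraction can be done from a Linux shell. This is just a sketch; the ISO filename and the path of the embedded appliance file may differ for your download:
mkdir -p /mnt/vcsa
mount -o loop VMware-VCSA-all-6.0.0-*.iso /mnt/vcsa   # example filename
grep -r guestinfo.cis /mnt/vcsa                        # find the supported guestinfo.* settings
cp /mnt/vcsa/vcsa/vmware-vcsa ~/vcsa-6.0.ova           # path inside the ISO may vary; give it a .ova extension
umount /mnt/vcsa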
I didn't have a "good" DNS server on my network, so I quickly created a DNS server on one of my RHEL 7.1 systems. It's REALLY easy to do this if you have all of your hosts in the /etc/hosts file. You just need to run the command:
systemctl start dnsmasq
(Edit 9/26: I changed the above command from "service dnsmasq start" to use the systemd mechanism for starting the service.)
And that's it. You now have a DNS server.
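If dnsmasq isn't already installed on the box, the full set of commands is only slightly longer (a sketch; by default dnsmasq answers from /etc/hosts, which is what makes this so easy):
yum -y install dnsmasq    # if it isn't there already
systemctl enable dnsmasq  # start it at boot, too
systemctl start dnsmasq
Then point the other machines at this host by listing its IP address as the nameserver in their /etc/resolv.conf.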
You still NEED a Windows machine for vSphere
I thought that the vSphere Web Client would allow me to just use a browser, but that's not quite right. The web interface requires Flash, and it really only supports Windows or MacOS clients. So I've had to bring a Windows machine into the mix anyway.