Friday, November 6, 2015

A feature that I would love to see in OpenStack

A graphical user interface for the commands.

Yes, Horizon is pretty nice, but there are lots of things it doesn't let you do. Additionally, the CLI commands are pretty intimidating. Here's the documentation for the current release: http://docs.openstack.org/cli-reference/content/openstackclient_commands.html . Notice how each command has at least 20 options, each with its own flags, etc. And Horizon doesn't actually call these commands; it uses the REST APIs, which are completely separate.

So what I'd like to see is something graphical that lets you first pick a command ('nova', 'glance', etc.), then gives you a drop-down choice list for the next parameter, then the next, and so on. Many of these commands require output from other commands as their input, so the interface would run each of those required commands behind the scenes to build the appropriate list of options. And after building the command, the tool could output it for you to reuse in scripts.
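To make that concrete, here's a hypothetical sketch (using the 2015-era clients and placeholder names) of the kind of command chaining such a tool would have to do behind the scenes:

# Hypothetical sketch: 'nova boot' needs an image ID that only 'glance image-list' knows.
# 'linux_img', 'linux_key', 'm1.small' and 'my-test-vm' are placeholder names.
IMAGE_ID=$(glance image-list | awk '/ linux_img / { print $2 }')
nova boot --image "$IMAGE_ID" --flavor m1.small --key-name linux_key my-test-vm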

If this were an easy task, it would already exist, and it doesn't. There's also quite a lot of movement in the code, so keeping such a tool up to date would be quite a challenge. But I personally think it would be worth it.


ICO 2.5: Where does the heat template for the "Deploy LAMP stack" come from?

ICO 2.5 comes with a few samples in the Self-Service Catalog, including one named "Deploy LAMP stack" (under "Deploy customized cloud services"). Once you have your linux_img image configured correctly and your linux_key key pair created, you can request this offering to deploy a LAMP stack among three machines: Apache, MySQL and PHP. I found the heat template command logs on each machine in /var/log/cloud-init-output.log and /tmp/install.log, and wanted to find where exactly that template was coming from. It took me far too long to find it because it's an inline string in the JavaScript code that can be seen with Business Process Designer. Specifically, you can find it in the Embeddable Deploy Lamp Template service in the SCOrchestrator OpenStack Services toolkit, in the Build Parameters element on the diagram, here:



You can see it in the green text as part of the "Implementation" in the bottom panel.

ICO 2.5: Creating a Red Hat 6.5 image with Gnome for use with a VMWare vSphere cloud

Introduction

When creating your private OpenStack-managed vSphere cloud, you're going to need some "images" ("VM Templates" in VMWare terminology) so you can launch/deploy instances. The really sticky part about this configuration is that OpenStack has traditionally only supported the KVM hypervisor, which uses a different disk format than VMWare (KVM uses QCOW2 and VMWare uses VMDK). I found some great QCOW2 images here: http://docs.openstack.org/image-guide/content/ch_obtaining_images.html and some great CentOS VMDK images here: http://osboxes.org. I hit different hurdles with each of those and finally decided to install RHEL 6.5 from scratch, modify that VM to work with OpenStack, and then create a VM Template that OpenStack would automatically discover as an image. In this post I'll cover the highlights of this technique.
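As an aside, qemu-img can convert between the two formats if you want to try translating an existing image instead of building one from scratch (I didn't end up going this route, and whether the result boots cleanly under ESXi will depend on the source image):

# Hypothetical filenames; streamOptimized VMDKs tend to be friendlier to ESXi datastores
qemu-img convert -f qcow2 -O vmdk -o subformat=streamOptimized rhel65.qcow2 rhel65.vmdk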

Install Red Hat

Install the "Desktop" option. This will mean that you have to do a little cleanup later, but I'll cover that.

Install VMWare Tools

Go ahead and install VMWare tools just to make your life a little easier.

Configure Yum Repositories and Install Packages

If you're using Red Hat Subscription Manager, you don't have to go through these steps. If you're not using RHSM, you'll need to manually configure some repositories. I found for this exercise that the CentOS repositories, along with the EPEL repository, worked great. To configure the CentOS repositories, create a file named /etc/yum.repos.d/YOU_PICK_A_NAME.repo with the following contents:

[base]
name=CentOS-6 - Base
mirrorlist=http://mirrorlist.centos.org/?release=6&arch=x86_64&repo=os
#baseurl=http://mirror.centos.org/centos/6/os/x86_64/
gpgcheck=0

#released updates
[updates]
name=CentOS-6 - Updates
mirrorlist=http://mirrorlist.centos.org/?release=6&arch=x86_64&repo=updates
#baseurl=http://mirror.centos.org/centos/6/updates/x86_64/
gpgcheck=0

#additional packages that may be useful
[extras]
name=CentOS-6 - Extras
mirrorlist=http://mirrorlist.centos.org/?release=6&arch=x86_64&repo=extras
#baseurl=http://mirror.centos.org/centos/6/extras/x86_64/
gpgcheck=0

#additional packages that extend functionality of existing packages
[centosplus]
name=CentOS-6 - Plus
mirrorlist=http://mirrorlist.centos.org/?release=6&arch=x86_64&repo=centosplus
#baseurl=http://mirror.centos.org/centos/6/centosplus/x86_64/
gpgcheck=0
enabled=0

#contrib - packages by Centos Users
[contrib]
name=CentOS-6 - Contrib
mirrorlist=http://mirrorlist.centos.org/?release=6&arch=x86_64&repo=contrib
#baseurl=http://mirror.centos.org/centos/6/contrib/x86_64/
gpgcheck=0
enabled=0

Then run 'yum repolist' to verify that the repositories were created. Now you can run the following commands to install the packages you'll need:

yum -y install epel-release
yum -y install cloud-init
yum -y install cloud-utils
yum -y install heat-cfntools
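Depending on the package, you may also want to make sure the cloud-init services start at boot. The service names below are the usual EL6 ones; confirm them on your system first:

# Confirm the names with 'chkconfig --list | grep cloud' before relying on this
for svc in cloud-init-local cloud-init cloud-config cloud-final; do
  chkconfig $svc on
done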

Configure Networking

Since the "Desktop" option was chosen, the NetworkManager service was installed and enabled. You need to change that with these commands:

service NetworkManager stop
chkconfig NetworkManager off
service network start
chkconfig network on

Now run the following commands so that eth0 will be configured properly when the template is cloned. The first removes the hard-coded MAC address from the interface configuration, and the other two blank out the udev rules that would otherwise pin the NIC name to the template's MAC address:

sed -i '/^HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0
echo -n > /etc/udev/rules.d/70-persistent-net.rules
echo -n > /lib/udev/rules.d/75-persistent-net-generator.rules

Configure cloud.cfg

The default configuration will try to contact multiple non-existent IP addresses for metadata; you can avoid that by changing /etc/cloud/cloud.cfg. Add the following line to that file:

datasource_list: [ NoCloud, ConfigDrive, None ]

You *may* also want to set:

disable_root: 0
ssh_pwauth: 1

This will allow you to access the VM as root, and via ssh with a password.

Make a Template

Via the vSphere Web Client or vSphere Windows Client, clone the VM to a template. After you do this, wait a few minutes for OpenStack to "see" this new template as an Image.
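If you'd rather confirm the discovery from the command line (assuming the OpenStack clients and an admin credentials file are set up on your controller), something like this should eventually list the new image:

# '/root/openrc' and 'rhel65' are placeholders for your credentials file and template name
source /root/openrc
glance image-list | grep -i rhel65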

Conclusion

Following the above steps should get you a working template from which you can explore the other capabilities of ICO and ICMWO.

Tuesday, October 6, 2015

How to reset your OpenStack services to "Up" in your IBM Cloud Manager dashboard for a vSphere cloud

While trying to test and manage components in your vSphere cloud, you may see "Services Down" in some parts of the dashboard. For example, under "Host Aggregates":


I basically found that restarting all of the "nova*" services on my controller was the answer to this problem.

The one-line answer is to log into your controller node as root and run the following:

for i in `systemctl -a | grep \.service | grep " active"| grep nova | awk '{ print $1 }'`; do systemctl restart $i;echo $i; done

So it's iterating through the results of the 'systemctl -a' command that contain active services containing the word "nova", and restarting each of those services. After you run the above, you should see that it shows "Services Up" for all availability zones on all hosts (since in a vSphere cloud, these services are all running on the controller node, rather than on the VMware nodes themselves).
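A quick way to double-check from the command line (again assuming the nova client and an admin credentials file on the controller) is to list the services and confirm the State column reads "up" for each one:

source /root/openrc   # placeholder path for your admin credentials file
nova service-list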

Thursday, October 1, 2015

There's a new cloud in town, Part 4: How to reset ICMWO 4.3 to reinstall a cloud

Once I made sure ICMWO (IBM Cloud Manager With OpenStack) had the correct fixpack installed, the installation and configuration of ICO 2.5 succeeded and is working fine. I'm able to deploy individual VMs and Heat stacks to an OpenStack cloud. I've only created a small vSphere cloud, and I believe that's the reason I haven't had much success getting ICMWO to work with that cloud (I can launch instances, but those instances can't see their operating system). But I've learned several useful pieces of information through the process. I'll list the most important one here, and I'll write a new post for each of the others.

How to "reset" ICMWO to reinstall a cloud

While you're kicking the tires (or even installing into a production environment), you will certainly encounter the need to try to re-deploy a cloud. This "reset" functionality isn't made available from the GUI, and really involves doing some things outside of ICMWO. Happily, ICMWO doesn't install anything on your vCenter or ESXi servers (the controller uses the appropriate vSphere APIs through the vCenter server to do all the dirty work). So, to reset things so you can re-deploy a vSphere cloud, you need to:

1. Delete and re-create the "controller" node that you previously specified. This is the server that ICMWO deployed OpenStack to. You created this server specifically for this purpose based on the topology requirements of ICMWO. My entire environment is running under VMWare Workstation, so I simply took a snapshot of this VM once I had the OS installed and configured, so I could revert to that snapshot before each successive attempt.

2. Next, you need to delete the TWO Chef resources associated with the controller. There is a NODE and a CLIENT that have been created for the controller. To delete those, you need to run the following two commands (where "vmc.mynet.foo" is the FQDN of the controller for your VMWare cloud):

knife client delete vmc.mynet.foo
knife node delete vmc.mynet.foo

3. Finally, to delete the cloud from the ICMWO Deployer GUI (https://icmwos.mynet.foo:8443), you need to log into the ICMWO server (via the console, ssh, VNC, etc.) as the same user you use to log into the Deployer GUI and delete a directory. The name of the directory contains the name of the cloud you specified in ICMWO when you deployed it, plus the datetimestamp of when it was created. The directory is under:

$HOME/icm/clouds

And the name will be "cloudName_datetimestamp". So to delete the cloud named "vc55" from my GUI, I needed to run this command:

rm -rf ~/icm/clouds/vc55_2015-09-29_173201

And now ICMWO is ready to let you try to deploy a cloud to that node again. I don't know whether the old cloud name is stored anywhere else, so the safest route in my opinion is to use a different name for the new cloud. The command-line portion of steps 2 and 3 is sketched below.
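Here's a rough sketch of that command-line portion, using the placeholder FQDN, cloud name and timestamp from the examples above (step 1, rebuilding the controller VM, still happens outside of this):

# Placeholder values from the examples above
CONTROLLER=vmc.mynet.foo
CLOUD_DIR=~/icm/clouds/vc55_2015-09-29_173201

knife client delete $CONTROLLER -y
knife node delete $CONTROLLER -y
rm -rf "$CLOUD_DIR"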

Monday, September 28, 2015

There's an updated cloud in town Part 3: Reinstalling everything

The prerequisite checker doesn't check the ICMWO fixpack

ICO 2.5 has a great prerequisite check script, but I found out the hard way that it doesn't check to make sure you have the correct ICMWO fixpack installed (fp 2 or later is required). Specifically, what I got was a crash of the script, rather than anything useful. Even after I made it so the script didn't crash, the error still didn't directly point to the fact that I had forgotten to install the fixpack. Here's the failure message I received at the end of the prereq_checker.sh output:

Traceback (most recent call last):
  File "/mnt/hgfs/ico25/ico_install/installer/read-params.py", line 347, in <module>
    if not validator.validate(param_dict, prereq, log_file) and not os.path.exists("/tmp/skip-checking"):
  File "/mnt/hgfs/ico25/ico_install/installer/validator/__init__.py", line 74, in validate
    openstackServices.validate(params, response1, messageValidator)
  File "/mnt/hgfs/ico25/ico_install/installer/validator/openstackServices.py", line 70, in validate
    if not user_response:
UnboundLocalError: local variable 'user_response' referenced before assignment

I edited line 70 of openstackServices.py to allow it to at least complete correctly, and this was the error message at that point:

 -  Checking that the following services are available: ['cinder', 'glance', 'nova', 'neutron|nova network']
   -  Status: Failed
   -  The following services must be available and running : ['cinder', 'glance', 'nova', 'neutron|nova network']
ERROR: The cinder service API is not enabled
   -  User response: Ensure that the openstack services are available

All of those services were running, but it jarred me into remembering that I hadn't installed FP3. Once I installed the fixpack, the prereq checker script and install were able to complete successfully.
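If you want to sanity-check those four services yourself before re-running the prereq checker (assuming the OpenStack clients and an admin credentials file on the machine you run them from), something like this will do it:

source /root/openrc    # placeholder for your admin credentials file
cinder service-list    # cinder
glance image-list      # glance
nova service-list      # nova
neutron agent-list     # neutron (or check nova-network in a nova network setup)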

Verifying you ran ICM_config_ico.sh correctly

To verify that you ran the ICM_config_ico.sh script correctly on your servers, open the OpenStack Dashboard (https://openstack_controller) and navigate to IDENTITY->Domains, then click the down arrow far to the right of a domain and select Edit Domain to get this dialog:


Specifically, notice that there are six tabs: Domain Information, Domain Members, Domain Groups, Domain Administrators, Quota, and Availability Zones. This is how it should look. If it doesn't, you did something wrong in running the ICM_config_ico.sh script. I think my problem was that I ran it twice on my controller - once with the "master_controller" option and once with the "controller" option. I should have only run it ONCE, with the "master_controller" option.

You will need to go to the Availability Zones tab and add the "nova" availability zone to the Default domain.

Also navigate to IDENTITY->Projects and click the down arrow to the far right of the "admin" project, then select Edit project and you should see this dialog:


Notice that it has an Availability Zones tab. Again, that is not there if the scripts didn't run correctly.

And now for the project (similar to the domain above), you will need to go to the Availability Zones tab and add the "nova" availability zone to the projects you'll be working with through ICO.

Thursday, September 24, 2015

There's an updated cloud in town Part 2: Still Installing ICO 2.5

A few more hurdles overcome as I get closer to getting ICO 2.5 installed.

Some RHEL 7 notes

The firewall in RHEL 7 (and 7.1) is no longer the iptables service. Instead, it's the firewalld service, which is controlled by systemd. I'm not sure which install option causes it to be enabled, because it wasn't running on all of my RHEL 7.1 systems. Anyway, to turn it off, you can run:

systemctl stop firewalld
systemctl disable firewalld

In my case, it was blocking port 53 (DNS), which I needed open to configure the vCenter server (next section). I first used Applications->Sundry->Firewall to open port 53, then realized that I could just turn it off completely in my test environment so I wouldn't hit any more problems with it.
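If you'd rather leave firewalld running and just open the port, the command-line equivalent of what I did in the GUI should be something like:

firewall-cmd --permanent --add-service=dns
firewall-cmd --reload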

Installing vSphere 6.0 Without a Windows Machine

I decided to also install vSphere 6.0 to use that as a testbed, and that has a few challenges. Specifically, the vCenter Server Appliance (VCSA) no longer ships directly as a .OVA file. It is now an ISO file that you're supposed to mount and run on a Windows machine to remotely install the vCenter Server Appliance on a remote ESXi server.

I didn't want to get a Windows machine involved if at all possible, and it turns out to be fairly straightforward to do this. You will find 99% of the instructions in this great article:

http://www.unixarena.com/2015/05/how-to-deploy-vcsa-6-0-on-vmware-workstation.html

Specifically, the .OVA file can be found inside the .ISO file that you download from VMWare; it just doesn't have a .ova extension. So you need to extract the file, rename it to include the .ova extension, and then you're mainly off to the races. HOWEVER, you have to do ONE MORE THING to actually get it working: add this additional line to the end of the .vmx file after you import the .ova file:

guestinfo.cis.appliance.net.dns.servers="172.16.30.8"

Set the value appropriately for your network. If you don't add this, the VM will start up, but will have the error:

Failed to configure network

And I couldn't find a way to fix that in the VM as it stood. I updated the DNS settings, rebooted the server, did lots of other things, etc., and it still just showed that error. So I knew I would have to recreate the VM from the OVA file, but needed to figure out how to set the DNS server of the VM from the VMX file.

So I mounted the VCSA ISO file on Linux and ran the following command at the root of it:

grep -r guestinfo.cis *

Somewhat amazingly, that came back within seconds and I found all of the settings from the linked article, and then I searched for "dns" and found the above REQUIRED setting.
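For completeness, mounting the ISO and running the search looked roughly like this (the ISO filename here is a placeholder for whatever you downloaded from VMWare):

mount -o loop VMware-VCSA-all-6.0.0.iso /mnt
cd /mnt
grep -r guestinfo.cis *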

I didn't have a "good" DNS server on my network, so I quickly created a DNS server on one of my RHEL7.1 systems. It's REALLY easy to do this if you have all of your hosts in the /etc/hosts file. You just need to run the command:

service dnsmasq start

systemctl start dnsmasq

(Edit 9/26: I changed the above command to use the systemd mechanism for starting the service)

And that's it. You now have a DNS server.
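To make it survive a reboot, and to confirm it's actually answering, you can also run something like the following (the package may need installing first, and the hostname is just an example from my /etc/hosts):

yum -y install dnsmasq      # if it isn't already installed
systemctl enable dnsmasq
# Quick test; replace the hostname with one from your /etc/hosts
nslookup vmc.mynet.foo 127.0.0.1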

You still NEED a Windows machine for vSphere


I thought that the vSphere Web Client would allow me to just use a browser, but that's not quite right. The web interface requires Flash, and really only supports Windows or MacOS clients. So I've had to bring a Windows machine into the mix anyway.