I successfully installed ICO 2.4.0.1 using 2 machines:
1: Deployment Server (where I could then run the 'ds wizard' command)
2: My all-in-one deployment to a KVM hypervisor.
Both machines are running RHEL 6.5 and have lots of CPU, RAM and disk.
My HUGE problem after the install was that the Deployment Server automatically started an unnecessary dnsmasq process, and because of it I could not create any new instances. Every time I tried, I got errors similar to the following in /var/log/nova/compute.log (and similar entries in network.log):
2015-01-27 13:29:14.987 19521 ERROR oslo.messaging.rpc.dispatcher [-] Exception during message handling: Remote error: ProcessExecutionError Unexpected error while running command.
Command: sudo nova-rootwrap /etc/nova/rootwrap.conf env CONFIG_FILE=["/etc/nova/nova.conf"] NETWORK_ID=3 dnsmasq --strict-order --bind-interfaces --conf-file=/etc/dnsmasq.conf --pid-file=/var/lib/nova/networks/nova-br1002.pid --listen-address=10.10.1.3 --except-interface=lo --dhcp-range=set:franknet,10.10.1.3,static,255.255.255.0,120s --dhcp-lease-max=256 --dhcp-hostsfile=/var/lib/nova/networks/nova-br1002.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro --domain=novalocal --no-hosts --addn-hosts=/var/lib/nova/networks/nova-br1002.hosts
Exit code: 2
Stdout: u''
Stderr: u"2015-01-27 13:29:13.983 10307 INFO nova.openstack.common.periodic_task [req-0d06a702-1f2a-4b95-a8f3-0f8665d4c83b None None] Skipping periodic task _periodic_update_dns because its interval is negative\n2015-01-27 13:29:13.985 10307 INFO nova.network.driver [req-0d06a702-1f2a-4b95-a8f3-0f8665d4c83b None None] Loading network driver 'nova.network.linux_net'\n2015-01-27 13:29:13.986 10307 DEBUG nova.servicegroup.api [req-0d06a702-1f2a-4b95-a8f3-0f8665d4c83b None None] ServiceGroup driver defined as an instance of db __new__ /usr/lib/python2.6/site-packages/nova/servicegroup/api.py:65\n2015-01-27 13:29:14.107 10307 DEBUG stevedore.extension [-] found extension EntryPoint.parse('file = nova.image.download.file') _load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156\n2015-01-27 13:29:14.121 10307 DEBUG stevedore.extension [-] found extension EntryPoint.parse('file = nova.image.download.file') _load_plugins /usr/lib/python2.6/site-packages/stevedore/extension.py:156\n\ndnsmasq: failed to create listening socket for 10.10.1.3: Address already in use\n"
[u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 134, in _dispatch_and_reply\n incoming.message))\n', u' File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 177, in _dispatch\n return self._do_dispatch(endpoint, method, ctxt, args)\n', u' File "/usr/lib/python2.6/site-packages/oslo/messaging/rpc/dispatcher.py", line 123, in _do_dispatch\n result = getattr(endpoint, method)(ctxt, **new_args)\n', u' File "/usr/lib/python2.6/site-packages/nova/network/floating_ips.py", line 193, in deallocate_for_instance\n super(FloatingIP, self).deallocate_for_instance(context, **kwargs)\n', u' File "/usr/lib/python2.6/site-packages/nova/network/manager.py", line 563, in deallocate_for_instance\n instance=instance)\n', u' File "/usr/lib/python2.6/site-packages/nova/network/manager.py", line 246, in deallocate_fixed_ip\n address, instance=instance)\n', u' File "/usr/lib/python2.6/site-packages/nova/network/manager.py", line 980, in deallocate_fixed_ip\n self._teardown_network_on_host(context, network)\n', u' File "/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py", line 249, in inner\n return f(*args, **kwargs)\n', u' File "/usr/lib/python2.6/site-packages/nova/network/manager.py", line 1906, in _teardown_network_on_host\n self.driver.update_dhcp(elevated, dev, network)\n', u' File "/usr/lib/python2.6/site-packages/nova/network/linux_net.py", line 1004, in update_dhcp\n restart_dhcp(context, dev, network_ref)\n', u' File "/usr/lib/python2.6/site-packages/nova/openstack/common/lockutils.py", line 249, in inner\n return f(*args, **kwargs)\n', u' File "/usr/lib/python2.6/site-packages/nova/network/linux_net.py", line 1118, in restart_dhcp\n _execute(*cmd, run_as_root=True)\n', u' File "/usr/lib/python2.6/site-packages/nova/network/linux_net.py", line 1211, in _execute\n return utils.execute(*cmd, **kwargs)\n', u' File "/usr/lib/python2.6/site-packages/nova/utils.py", 
line 165, in execute\n return processutils.execute(*cmd, **kwargs)\n', u' File "/usr/lib/python2.6/site-packages/nova/openstack/common/processutils.py", line 196, in execute\n cmd=sanitized_cmd)\n', u'ProcessExecutionError: Unexpected error while running command.\nCommand: sudo nova-rootwrap /etc/nova/rootwrap.conf env CONFIG_FILE=["/etc/nova/nova.conf"] NETWORK_ID=3 dnsmasq --strict-order --bind-interfaces --conf-file=/etc/dnsmasq.conf --pid-file=/var/lib/nova/networks/nova-br1002.pid --listen-address=10.10.1.3 --except-interface=lo --dhcp-range=set:franknet,10.10.1.3,static,255.255.255.0,120s --dhcp-lease-max=256 --dhcp-hostsfile=/var/lib/nova/networks/nova-br1002.conf --dhcp-script=/usr/bin/nova-dhcpbridge --leasefile-ro --domain=novalocal --no-hosts --addn-hosts=/var/lib/nova/networks/nova-br1002.hosts\nExit code: 2\nStdout: u\'\'\nStderr: u"2015-01-27 13:29:13.983 10307 INFO nova.openstack.common.periodic_task [req-0d06a702-1f2a-4b95-a8f3-0f8665d4c83b None None] Skipping periodic task _periodic_update_dns because its interval is negative\\n2015-01-27 13:29:13.985 10307 INFO nova.network.driver [req-0d06a702-1f2a-4b95-a8f3-0f8665d4c83b None None] Loading network driver \'nova.network.linux_net\'\\n2
Once I killed the offending dnsmasq process, I could successfully create instances.
SOURCE: http://fedoraproject.org/wiki/QA:Testcase_launch_an_instance_on_OpenStack
Tuesday, January 27, 2015
Saturday, January 24, 2015
How to remove a bridged network interface in Red Hat Linux
So ICO 2.4 requires the Deployment Server machine to have interface br0 defined as a bridge connected to eth0. ICO 2.4.0.1, however, does NOT want br0; it wants plain eth0. I had already created br0 using the virsh command, so I had to look up how to remove it, and it's very simple once you find it:
virsh iface-unbridge br0
and that's it. It puts all of the configuration back on eth0 and you're in business.
Friday, January 23, 2015
I'm giving up on ICO 2.4 on RHEL 6.6
After installing the Deployment Service (which gives you the ds command), I tried to deploy an all-in-one ICO environment, and it failed very badly. So badly that multiple components stopped running and there was no recovering from it. So I'm now starting over with RHEL 6.5.
Thursday, January 22, 2015
Getting IBM Cloud Orchestrator to install on Red Hat Enterprise Linux 6.6
This is probably fixed in ICO 2.4.1, but in 2.4, you need to modify a file to get the installation to work on RHEL 6.6 because it believes that only RHEL 6.4 and 6.5 are supported. You'll encounter an error when you run through the ds wizard command.
The file you need to update (after installing the Deployment Service) is:
/opt/ibm/cloud-deployer/precheck.json
In that file, you'll see multiple stanzas similar to this:
"os": {
"release": "Red Hat Enterprise Linux Server release",
"arch": "64",
"version": ["6.5","6.4"]
},
You need to update the line that contains "version" to be:
"version": ["6.6","6.5","6.4"]
And then the wizard can complete.
Some background:
I found this because I saw the error:
"DeployTaskFailed: Failed to execute deployment task: deploy-precheck, error message: precheck failed on ico24demo.mynet.foo: ico24demo.mynet.fooincorrect os version,expected:['6.5', '6.4']actually:6.6\n\n"
And I ran the following command in multiple directories:
grep -r "6\.5" *
until I found the file I needed to change.
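If you'd rather script the edit than open the file by hand, a sed one-liner like the following works. This is a sketch demonstrated on a sample file; on a real system you'd back up and edit /opt/ibm/cloud-deployer/precheck.json instead.

```shell
# Demonstrate the edit on a sample stanza like the one in precheck.json
cat > precheck-sample.json <<'EOF'
"os": {
"release": "Red Hat Enterprise Linux Server release",
"arch": "64",
"version": ["6.5","6.4"]
},
EOF
# Insert 6.6 at the front of the supported-version list
sed -i 's/"version": \["6.5","6.4"\]/"version": ["6.6","6.5","6.4"]/' precheck-sample.json
cat precheck-sample.json
```

Remember there are multiple stanzas in the real file, so check them all with grep afterward.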
Configuring RHEL prerequisites for IBM Cloud Orchestrator 2.4
Introduction
The first basic requirement for your systems when deploying ICO 2.4 is to have DNS name resolution working correctly. For one reason or another, this is the area with the most numerous and most frequent problems in every single enterprise. So this short post is meant to help you get something going quickly, normally in a test environment, until you can work with your networking team to get the problems solved permanently.
My Simple Environment
My test environment is the demo configuration: one RHEL 6.6 server with 8 cores and 32GB of RAM, using KVM for virtualization.
Setting Your Hostname
You need to set your hostname to a Fully Qualified Domain Name (FQDN). In a test environment, only the format needs to be correct; the actual name doesn't have to be registered anywhere. For example, I named my host ico24demo.mynet.foo. I don't care if .foo is a valid root or not, because I get to make the rules in my own environment.
To permanently set your hostname to an FQDN if you didn't do it at install time, you need to edit the file:
/etc/sysconfig/network
and set:
HOSTNAME=your.full.fqdn
If you don't want to reboot to have it set, also run the following as root:
hostname your.full.fqdn
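The persistent edit can be scripted as well. Here's a sketch demonstrated on a local copy of /etc/sysconfig/network, since editing the real file (and running hostname) requires root:

```shell
# Work on a sample copy of /etc/sysconfig/network
cat > network-sample <<'EOF'
NETWORKING=yes
HOSTNAME=localhost.localdomain
EOF
# Replace the HOSTNAME line with the FQDN from this post
sed -i 's/^HOSTNAME=.*/HOSTNAME=ico24demo.mynet.foo/' network-sample
cat network-sample
```

Point the sed at /etc/sysconfig/network on the real machine, then run hostname ico24demo.mynet.foo to apply it without a reboot.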
Update Your Hosts File
You also need to update your /etc/hosts file with your hostname and IP, so edit /etc/hosts and add that information for your host. In my environment, the hostname is ico24demo.mynet.foo and the static IP address is 192.168.1.250, so here's my /etc/hosts entry:
192.168.1.250 ico24demo.mynet.foo ico24demo
Start dnsmasq
dnsmasq is a simple DNS server (and DHCP server, among other things) that we'll configure locally for name resolution. First, check whether it's installed:
rpm -q -a | grep dnsmasq
If it's not installed, install it (on RHEL, yum install dnsmasq).
Assuming it is installed, make sure it's not already running with:
ps -ef | grep dnsmasq
If it is, kill it with:
service dnsmasq stop
The above may or may not work. If the process is still running, kill it with the kill command.
And now that your /etc/hosts file has your host information, start dnsmasq with:
service dnsmasq start
Test Your New DNS Server
Probably the easiest way to test your server is to first edit your /etc/resolv.conf file to set the nameserver to your local machine. My local IP is 192.168.1.250, and the IP address of my subnet's existing nameserver is 192.168.1.1, so my /etc/resolv.conf file contains the following:
# Generated by NetworkManager
nameserver 192.168.1.250
nameserver 192.168.1.1
Then test a reverse lookup of your own address:
host 192.168.1.250
The correct output for me is:
250.1.168.192.in-addr.arpa domain name pointer ico24demo.mynet.foo.
If you get something different, go back over the steps above. But if it's correct, keep going.
Make the Nameserver Change Permanent
Go to System->Preferences->Network Connections to set your nameserver to be your local IP address.
A non-GUI way to do this is to update your /etc/sysconfig/network-scripts/ifcfg-eth0 file to set the DNS server.
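For the non-GUI route, DNS1 and DNS2 entries in the ifcfg file are what get written into resolv.conf on interface startup. Here's a sketch on a sample copy; the real file is /etc/sysconfig/network-scripts/ifcfg-eth0 and needs root to edit:

```shell
# Sample copy of an ifcfg-eth0 file
cat > ifcfg-eth0-sample <<'EOF'
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.250
ONBOOT=yes
EOF
# DNS1 is consulted first (our local dnsmasq); DNS2 is the upstream fallback
printf 'DNS1=192.168.1.250\nDNS2=192.168.1.1\n' >> ifcfg-eth0-sample
cat ifcfg-eth0-sample
```

After editing the real file, restart the interface (or networking service) for the change to take effect.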
Good luck!
Monday, July 7, 2014
APM UI 7.7 - Creating Windows Services
If you've installed APM UI v7.7 on a Windows server, you probably have noticed that IBM doesn't create Windows services. As a result, none of the services start after a reboot, etc.
Below are the steps necessary to create Windows services for APM UI and SCR (Service Component Repository). In this scenario, we have the SCR database using DB2 vs. Derby.
We've also included the commands necessary to create the service for SCR on Derby (however, this hasn't been tested).
Assumptions
Our base install directory for the APM UI is C:\IBM\APMUI - your path may differ, adjust the commands below as appropriate.
Procedure
- Download the Apache Commons Daemon (link).
- Extract commons-daemon-1.0.15-bin-windows.zip and copy prunsrv.exe to C:\IBM\APMUI\bin\
- Open a Command Prompt and change into your C:\IBM\APMUI\bin directory
- Create the Windows service for SCR and have it depend on Tcpip and DB2 being up first.
prunsrv //IS//SCR --Startup=auto --DisplayName="IBM APMUI - SCR" --Description="IBM WebSphere Liberty Profile SCR" ++DependsOn=Tcpip ++DependsOn=DB2-0 --LogPath=C:\IBM\APMUI\usr\servers\apmui\logs --StdOutput=auto --StdError=auto --StartMode=exe --StartPath=C:\IBM\APMUI --StartImage=C:\IBM\APMUI\bin\server.bat --StartParams=start#scr --StopMode=exe --StopPath=C:\IBM\APMUI --StopImage=C:\IBM\APMUI\bin\server.bat --StopParams=stop#scr
- Create the Windows service for APMUI and have it depend on SCR being up first.
prunsrv //IS//APMUI --Startup=auto --DisplayName="IBM APMUI - APMUI" --Description="IBM WebSphere Liberty Profile APMUI" ++DependsOn=SCR --LogPath=C:\IBM\APMUI\usr\servers\apmui\logs --StdOutput=auto --StdError=auto --StartMode=exe --StartPath=C:\IBM\APMUI --StartImage=C:\IBM\APMUI\bin\server.bat --StartParams=start#apmui --StopMode=exe --StopPath=C:\IBM\APMUI --StopImage=C:\IBM\APMUI\bin\server.bat --StopParams=stop#apmui
Derby vs. DB2
If you have SCR running under Derby instead of DB2, you can create a third service for SCRDERBY. The start-up of SCR would then depend on SCRDERBY instead of DB2-0.
- Create the Windows service for SCRDERBY and have it depend on Tcpip being up first.
- In this case, have the SCR service depend on SCRDERBY instead of DB2.
The name specified after //IS// is effectively the "short name" for the service. So, you can issue commands such as "net start apmui", "net stop scrderby", etc. based on those names. As with all services, you can use the full Display Name if you enjoy typing (net start "IBM APMUI - APMUI", etc.)
The typical startup sequence would be:
- net start scrderby (if running Derby vs. DB2)
- net start scr
- net start apmui
The typical shutdown sequence would be:
- net stop apmui
- net stop scr
- net stop scrderby (if running Derby vs. DB2)
prunsrv Documentation/Usage:
Here's a link to the prunsrv documentation/usage.
Sample - APMUI Service
Sample - APMUI Service Dependencies
Wednesday, January 29, 2014
MongoDB Setup and Monitoring with Application Insight
Introduction
MongoDB might be new to you. It certainly was new to us, so when we took some time to experiment with monitoring MongoDB using IBM SmartCloud Monitoring - Application Insight, our first hurdle was setting up MongoDB itself! It should be noted that stand-alone DB instances are not currently supported by the monitoring agent, so we needed to build a clustered/replicated MongoDB configuration. This is common in a production environment, but a bit more than we expected to need in the lab.
MongoDB Setup Overview
Our MongoDB setup was accomplished using a single virtual machine. While this isn't a recommended setup for a production environment, it is perfectly fine for this testing. Our setup included the following overall steps:
- Create 3 databases, each configured to use unique paths and ports (ports: 37017, 37018, 37019).
- Connect to the "soon-to-be" primary database (port 37017) and configure the replica set.
- Add the second (port 37018) and third (port 37019) databases to the replica set.
- Install a demo database using a JSON import.
- Start a Configuration Server (port 37020).
- Start mongos, pointing at the Configuration Server and listening on port 37021.
- Connect to mongos (port 37021).
- Add the replica set as a shard.
- Enable sharding on our demodb.
- Enable sharding on the collection (aka table).
A special thanks to the following website for providing some demo database data and a nice presentation on MongoDB and Ruby apps. Also thanks to the MongoDB docs for creating a cluster.
You may ask: why 3 databases, aren't 2 enough? Because the recommended MINIMUM number of databases for a Replica Set is 3. If your setup only has 2 databases, Application Insight will flag the Global MongoDB Status as a Warning.
MongoDB Setup
VM hostname is "openpulse".
Replica Set is named "rs1".
Download MongoDB (we used 64-bit Linux) from here.
Untar file. Location of files will be referred to as $MEDIA
The remaining commands assume the $MEDIA/bin directory is in your PATH.
mkdir -p /srv/mongodb/rs1-0 /srv/mongodb/rs1-1 /srv/mongodb/rs1-2
# Start up 3 databases
mongod --port 37017 --dbpath /srv/mongodb/rs1-0 --replSet rs1 --smallfiles --oplogSize 128
mongod --port 37018 --dbpath /srv/mongodb/rs1-1 --replSet rs1 --smallfiles --oplogSize 128
mongod --port 37019 --dbpath /srv/mongodb/rs1-2 --replSet rs1 --smallfiles --oplogSize 128
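Since the three launches differ only by port and data path, they can be generated from a loop. This sketch just echoes the commands rather than running them (pipe the output to sh, with mongod on your PATH, to actually start the databases):

```shell
# Generate the three mongod launch commands from the pattern above
for i in 0 1 2; do
  port=$((37017 + i))
  echo "mongod --port $port --dbpath /srv/mongodb/rs1-$i --replSet rs1 --smallfiles --oplogSize 128"
done
```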
# Connect to to-be-primary
mongo --port 37017
#Give Primary a configuration for a replication set
rsconf = {
_id: "rs1",
members: [
{
_id: 0,
host: "openpulse:37017"
}
]
}
#Initiates the replica set
rs.initiate( rsconf )
#Displays Replication set
rs.conf()
#Add 2nd db - NOTE: you might need to wait for prompt to change to PRIMARY before continuing.
rs.add("openpulse:37018")
#Add 3rd db
rs.add("openpulse:37019")
#Display Replication set, confirm three are listed
rs.conf()
#Install DemoDB
curl -L http://c.spf13.com/OSCON/venuesImport.json | mongoimport --port 37017 -d demodb -c venues
#Start a config server
mkdir /data/configdb2
mongod --configsvr --dbpath /data/configdb2 --port 37020
#Start mongos pointing at config server
mongos --port 37021 --configdb openpulse:37020
#Connect to mongos
mongo --host openpulse --port 37021
#Add replica set to shard
sh.addShard( "rs1/openpulse:37017" )
#Shows replica set and other stats
db._adminCommand("connPoolStats");
#Connect to mongos
mongo --host openpulse --port 37021
#Enable sharding for the DB
sh.enableSharding("demodb")
#switch DB
use demodb
#Set index for sharding
db.venues.ensureIndex( { _id : "hashed" } )
db.venues.getIndexes()
#Setup sharding of the collection (table)
sh.shardCollection("demodb.venues", { "_id": "hashed" })
MongoDB Agent Setup
Reference IBM documentation here.
Our monitoring instance is named "Pulse21"
mongodb-agent.sh install
mongodb-agent.sh config pulse21
Agent configuration started...
Edit "Monitoring Agent for MongoDB" settings? [ 1=Yes, 2=No ] (default is: 1):
Agent Configuration :
Configuration variables for agent start up
The directory where the Java runtime is located. This value is optional. The agent will use the built in Java by default. Use this to over ride the Java to be used.
Java (default is: ):
Allows the user to say whether these system is a cluster or single replication set. This value is optional. By default the agent will monitor a cluster.
Type [ 1=Cluster, 2=Single Set ] (default is: 1):
Port Number for the router of a MongoDB cluster or a mongod instance of a MongoDB replication set being monitored. This value is optional. The agent will automatically discover the cluster location if only one is present on the system. This is used to over ride discovery of a cluster or explicitly monitor a replication set.
Port (default is: ): 37021 (Note: this is the port of the mongos process, not the individual databases)
The ip address for the host system of the router or the mongod instance. This value is optional. The agent will automatically discover the default ip address. This is used to select a particular interface on a system where MongoDB is bound to only one of several addresses.
Host (default is: ):
Agent configuration completed...
As a reminder, you should restart appropriate instance(s) for new configuration settings to take effect.
./mongodb-agent.sh start pulse21
Processing. Please wait...
Starting Monitoring Agent for MongoDB ...
Monitoring Agent for MongoDB started
Application Insight Screenshots
Thanks,
Anthony Segelhorst and Jamie Carl