Tuesday, December 30, 2008

TPM 5.1.1.2 Custom Inventory Scan

Well, it is finally available. The custom inventory scan is one of the features that existing Tivoli Configuration Manager (TCM) customers required in TPM before they could move to TPM completely (yes, there are still more, but this one was pretty important).

A while back, I created a blog entry on creating custom inventory scans with TCM (see http://blog.gulfsoft.com/2008/03/custom-inv-scan.html) that I thought I would look at moving to TPM. It is a basic VBS script that reads a registry key and records the values to a MIF file. This MIF file is then imported into the DCM and can be used in a query from the Reports section of the TPM UI.

The steps involved are:
1. Define the requirements for the data to determine table columns
2. Create the table
3. Create the pre and post scripts to create the output file (can be MIF or XML)
4. Create the inventory extension properties file and import
5. Deploy the scripts to the target
6. Run the custom discovery created by the import in step 4
7. Run the custom report created by the import in step 4

Define Requirements
For this sample, the requirements for the data to be inserted are the Registry Hive, Key(s) and Value.

Create the Table
The table will be made up of columns for the requirements defined above, plus columns to uniquely identify the computer and the discovery. The fix pack documentation also recommends adding foreign keys so that deletes cascade. For this example, I set the various fields to VARCHAR and defined a possible length for them. I also assigned it to the tablespace IBM8KSPACE, well, just because I felt like it ;)

Table Creation Syntax for DB2
create table db2admin.registrydata(REG_PATH VARCHAR(256), REG_VALUE VARCHAR(32), REG_DATA VARCHAR(128)) in IBM8KSPACE;
alter table db2admin.registrydata add column SERVER_ID BIGINT;
alter table db2admin.registrydata add column DISCOVERY_ID BIGINT;
alter table db2admin.registrydata add foreign key (SERVER_ID) references db2admin.SERVER(SERVER_ID) on DELETE CASCADE;
alter table db2admin.registrydata add foreign key (DISCOVERY_ID) references db2admin.DISCOVERY(DISCOVERY_ID) on DELETE CASCADE;

Create the pre and post scripts
One possible issue is passing arguments to the scripts, but then again, this was not that easy in TCM either. The examples in the fix pack docs do not mention passing arguments in the extension properties file (discussed in the next step), and from the limited testing I did, it does not seem possible.

The pre script is used to generate the MIF file to be retrieved. One issue I have with the pre script is that it does not accept arguments, so to execute the VBS script I want to use, I have to create a wrapper BAT file that passes them.

The post script does not really need to do anything. It could be as simple as just an exit 0.

prescript.windows.bat
==============================================
cscript //nologo c:\test\registrydata.vbs "Software\Martin" c:\test\regdata.windows.mif
==============================================

postscript.windows.bat
==============================================
echo Running post script
==============================================

regdata.windows.mif (sample output)
==============================================
START COMPONENT
NAME = "REGISTRY VALUE DATA"
DESCRIPTION = "List registry value and data entries"
START GROUP
NAME = "REGISTRYDATA"
ID = 1
CLASS = "DMTF|REGISTRYDATA|1.0"
START ATTRIBUTE
NAME = "REG_PATH"
ID = 1
ACCESS = READ-ONLY
TYPE = STRING(256)
VALUE = ""
END ATTRIBUTE
START ATTRIBUTE
NAME = "REG_VALUE"
ID = 2
ACCESS = READ-ONLY
TYPE = STRING(32)
VALUE = ""
END ATTRIBUTE
START ATTRIBUTE
NAME = "REG_DATA"
ID = 3
ACCESS = READ-ONLY
TYPE = STRING(128)
VALUE = ""
END ATTRIBUTE
KEY = 1,2,3
END GROUP
START TABLE
NAME = "REGISTRYDATA"
ID = 1
CLASS = "DMTF|REGISTRYDATA|1.0"
{"Software\\Martin","test","test"}
{"Software\\Martin","test1","test1"}
END TABLE
END COMPONENT
==============================================


Create the inventory extension properties file
The extension properties file defines everything required to populate TPM with the new extension: the tables, the scripts and the output file to be used in the discovery process. The properties file for this example is as follows:

==============================================
#name the extension
extName=REGISTRYDATA

#Description of the extension
extDescription=GBS Collect Registry data

#Custom Table Names
TABLE_1.NAME=REGISTRYDATA

#file for Windows platform
WINDOWS=yes
pre_windows=C:\\test\\prescript.windows.bat
out_windows=c:\\test\\regdata.windows.mif
post_windows=c:\\test\\postscript.windows.bat
==============================================

extName defines the DCM object name for the new extension.
extDescription is used in the description field for the discovery configuration and the report (if created).

Multiple tables can be defined by using sequential numbers after the TABLE_ prefix. For the purposes of this demo, only one table is used.

Operating system flags – This can be defined for WINDOWS, AIX, HPUX, SOLARIS and LINUX

Pre/Post scripts and output files – these are defined for each operating system. Use the prefixes pre_, out_ and post_ with the OS names windows, aix, hpux, solaris and/or linux.

Save the file using whatever naming convention is desired. For the purposes of this example, the file was created as C:\IBM\tivoli\custom\inventory\registrydata\registrydata.properties

Import the properties file
To import the properties, use the command %TIO_HOME%\tools\inventoryExtension.cmd. This command can be used to create, delete and list inventory extensions. The syntax used for this example was:

inventoryExtension.cmd create -p C:\IBM\tivoli\custom\inventory\registrydata\registrydata.properties -r yes

The “-p” parameter specifies the properties file to be used, and “-r yes” tells the import to also create the custom report for the inventory extension.

Command Output:
=========================================
C:\IBM\tivoli\tpm\tools>inventoryExtension.cmd create -p C:\IBM\tivoli\custom\inventory\registrydata\registrydata.properties -r yes
2008-12-30 09:59:03,890 INFO log4j configureAndWatch is started with configuration file: C:\ibm\tivoli\tpm/config/log4j-util.prop
2008-12-30 09:59:04,062 INFO COPINV006I Parsing the command line arguments ...
2008-12-30 09:59:04,796 INFO COPINV007I ... command line arguments parsed.
2008-12-30 09:59:04,796 INFO COPINV008I Start processing ...
2008-12-30 09:59:04,812 INFO Start parsing property file
2008-12-30 09:59:08,780 INFO Finished parsing property file
2008-12-30 09:59:08,874 INFO COPINV021I The specified extension: REGISTRYDATA has been successfully registered.
2008-12-30 09:59:08,874 INFO Creating discovery configuration...
2008-12-30 09:59:14,984 INFO Discovery REGISTRYDATA_Discovery successfully created
2008-12-30 09:59:15,390 INFO Report REGISTRYDATA_Report succesfully created
2008-12-30 09:59:15,484 INFO COPINV009I ... end processing.
2008-12-30 09:59:15,484 INFO COPINV005I The command has been executed with return code: 0 .
=========================================

Note the names of the Discovery Configuration and the Report created. TPM does not need to be restarted for this to take effect.

Deploy the scripts to the target
The next step is to deploy the script(s) to the target(s). In TCM, this was done by creating a dependency on the inventory object. In TPM this is not currently documented, but it could be done with a software package, a workflow or possibly by modifying the CIT package (not sure that would be advisable). I am going to leave that out for now, as I think it needs further investigation, so for this example the files were simply copied manually to the remote target. Copy the files to C:\test as defined in the properties file.


Run the custom discovery
Once the import is completed, a new discovery configuration is created. This discovery configuration will be labeled according to the extName field in the properties file and suffixed with “_Discovery”.




One of the main differences between a normal inventory discovery and the custom one is on the Parameters tab of the discovery: there is a new field called Inventory Extension.




Run the custom report (if created)
Once the scan is completed, execute the custom report. This will display the results for the data collected. The report is created under the Discovery section.




The results display the scanned registry data for each computer.
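
If you want to verify the collected data outside of the TPM UI, you can also query the custom table directly in the DCM database. The following is only a sketch: the JDBC URL, credentials and the SERVER.NAME column I join on are assumptions for illustration, so adjust them to match your DCM database; the REGISTRYDATA columns are the ones created earlier in this example.

==============================================
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class RegistryDataCheck {
    public static void main(String[] args) throws Exception {
        // DB2 type 4 JDBC driver; the host, port, database name and
        // credentials below are placeholders for your TPM DCM database.
        Class.forName("com.ibm.db2.jcc.DB2Driver");
        Connection conn = DriverManager.getConnection(
                "jdbc:db2://tpmserver:50000/TC", "db2admin", "password");
        try {
            Statement stmt = conn.createStatement();
            // Join the custom table to SERVER so each row shows which
            // computer it came from. The NAME column is an assumption;
            // use whatever column identifies the computer in your DCM.
            ResultSet rs = stmt.executeQuery(
                "SELECT s.NAME, r.REG_PATH, r.REG_VALUE, r.REG_DATA "
                + "FROM DB2ADMIN.REGISTRYDATA r "
                + "JOIN DB2ADMIN.SERVER s ON s.SERVER_ID = r.SERVER_ID");
            while (rs.next()) {
                System.out.println(rs.getString(1) + ": "
                        + rs.getString(2) + "\\" + rs.getString(3)
                        + " = " + rs.getString(4));
            }
        } finally {
            conn.close();
        }
    }
}
==============================================

To run it you would need the DB2 JDBC driver (db2jcc.jar and its license jar) on the classpath.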



Other Random Notes

There are also three related tables (that I have found so far): INVENTORY_EXTENSION, INVENTORY_EXTENSION_FILE and INVENTORY_EXTENSION_TABLE. INVENTORY_EXTENSION_FILE contains the pre, post and output files; I was able to modify the entries to use a different file on the target without having to re-import the properties file.

Wednesday, December 24, 2008

ITM 6.2.1 Agent Availability Monitoring - What's new?

If you have been using MS Offline situations in ITM 6.x, you know the drawbacks of that kind of monitoring. For example, you will get hundreds of these events when an RTEMS goes offline, even if the agents themselves are running. You can use process monitoring situations (for the process kntcma.exe), but that approach has its own drawbacks. ITM 6.2.1 introduces a new approach to this problem, and this article explains the new solution.

In ITM 6.2.1, we can use the Tivoli Proxy Agent Services to monitor the availability of agents. For example, if a Windows OS agent goes down, the Tivoli Proxy Agent Services can restart it. If the OS agent goes down too many times, you can easily monitor that condition with a situation: just use the Alert Message attribute in the Alerts Table attribute group and check whether the agent has exceeded its restart count. Here are a few other conditions that you can monitor with the Alert Message attribute.
  • Agent Over utilizing CPU
  • Agent Over Utilizing Memory
  • Agent Start Failed
  • Agent Restart Failed
  • Agent Crashed (abnormal stop)
  • Agent Status Check Script Failed
  • Managed/Unmanaged agent removed from the system
With the combination of the Tivoli Proxy Agent Services auto-restart feature and a set of situations to monitor the above exceptions, you can devise an effective agent availability monitoring solution with fewer, more relevant alerts reaching the console. Do you have questions about this solution? Please feel free to write back.

Merry Christmas!

Tuesday, December 23, 2008

Creating a Web Services/SOAP client in eclipse

The Eclipse development environment is very powerful, so I figured I would create a simple SOAP client in it from a given WSDL file. As it turns out, it's a little painful, so I wanted to write up the steps I had to go through.

I used Eclipse 3.3.2 (Europa) JEE on Windows XP in a VM.

I started with an empty Java project named "FirstJava".

First import your WSDL file into your project.
- Select the "src" folder of your proejct, right-click and select "Import"
- Choose General/File System
- Select the folder containing your file and click OK
- now place a check mark next to your WSDL file and click OK

Now download Tomcat. There is a built-in "J2EE Preview" that works for some things, but not for this.

Now create a new Target Runtime and server associated with Tomcat.
- Select the project and select Project->Properties, then select "Targeted Runtimes" on the left of the dialog.



- Click the "New" button
- Select Apache->Tomcat 5.5 AND ALSO CHECK "Also create new local server"



- Input the appropriate values as needed.


- Select the newly-created runtime


- Click OK to save the properties.

Create a new Web Services Client
- Right-click your WSDL file name and select New->Other..., then in the dialog displayed, select Web Services->Web Service Client.



- Your WSDL file name should be filled in at the top of the dialog


- In this dialog, use the slider underneath "Client Type" to specify "Test client" (move it all the way to the top).
- Click Finish.
- This will create a bunch of new code in your current project, plus it will create a new project (named "FirstJavaSample" in my case) with the JSPs you'll be able to (hopefully) run to test your client.


- This will give you an error about the JSP not supporting org.apache.axis.message.MessageElement[]. Just click OK several times until the error box goes away. We'll fix that later.

If all went well, you should see something like the following:


Now we have to fix the errors.

Create a JAR file named FirstJava.jar containing the contents of the bin directory of your FirstJava project.

Copy that file to the Tomcat "Common/lib" folder (C:/Apache/Tomcat5.5/Common/lib on my system).

You will additionally need to find these files under the eclipse/plugins directory and copy them to the Tomcat Common/Lib folder:

axis.jar
saaj.jar
jaxrpc.jar
javax.wsdl15_1.5.1.v200705290614.jar
wsdl4j-1.5.1.jar

(If you can't find them on your system, use Google to find and download them. One of them - I don't recall which - was tricky to find for me because it was actually in another JAR file.)

Now stop the Tomcat server by selecting it in the "Servers" view and clicking the stop (red square) button.


Now re-run your application by opening the FirstJavaSample project and finding the file named "TestClient.jsp". Right-click that file and select Run As->Run On Server, select your Tomcat server and click Finish.

You should now see that things are working correctly.


You may need to edit the generated JSP files to add input fields and such, but that's specific to your particular file.
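
If the generated JSP test client turns out to be more trouble than it is worth, you can also call the generated client classes directly from a plain Java class. The sketch below shows what that might look like; the class and method names (FirstServiceLocator, FirstServicePortType, getStatus) are hypothetical, since the wizard generates names from your WSDL, so substitute whatever actually ended up in your project.

==============================================
// Sketch only: every class, port and operation name below is
// hypothetical and should be replaced with the names Eclipse/Axis
// generated from your WSDL.
public class StandaloneClient {
    public static void main(String[] args) throws Exception {
        // Axis generates a *ServiceLocator class that implements
        // javax.xml.rpc.Service for your WSDL.
        FirstServiceLocator locator = new FirstServiceLocator();

        // The get*Port() method returns a stub already bound to the
        // endpoint address taken from the WSDL.
        FirstServicePortType port = locator.getFirstServicePort();

        // Call an operation defined in the WSDL.
        String result = port.getStatus("someInput");
        System.out.println("Service returned: " + result);
    }
}
==============================================

Since the stubs were generated into the FirstJava project, this class can live right alongside them; the same axis.jar, jaxrpc.jar and friends listed above need to be on the classpath when you run it.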

Good luck, and happy coding.

Thursday, December 11, 2008

Enabling JMX in TBSM

Since TBSM runs on top of Tomcat, you can enable JMX access to TBSM's Tomcat server to give you some insight into how the JVM is doing. To do this, you'll need to edit the $NCHOME/bin/rad_server file to add a line (in the appropriate place, which should be easy to spot once you're in the file):

JAVA_OPTS="${JAVA_OPTS} -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8999 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"

This specifies that no authentication is needed, which is fine in a test environment, but for a production environment, you would want to enable SSL and/or authentication (Google will find lots of links for you).

You do need to restart TBSM for the changes to take effect. Once it starts back up, you can view the JMX data using the jconsole command that is included with the Java 1.5 (and above) JDK. When you start up jconsole, specify the hostname of your TBSM server and port 8999 (specified above), with no user or password. That will give you a nifty GUI that looks like this:



One of the available tabs is "Memory", which will give you some (possibly) useful information about memory utilization in the JVM.


As you can see, there are other tabs, which you should investigate to see what additional information is available.
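
If you would rather pull the same numbers without a GUI (say, from a script or a scheduled job), a small JMX client can read them over the same port. The following is just a sketch under the settings shown above (no SSL, no authentication); the hostname tbsmserver is a placeholder for your TBSM server.

==============================================
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class TbsmHeapCheck {
    public static void main(String[] args) throws Exception {
        // Standard RMI connector URL for the port set via
        // com.sun.management.jmxremote.port; "tbsmserver" is a placeholder.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://tbsmserver:8999/jmxrmi");

        // No credentials, matching jmxremote.authenticate=false above.
        JMXConnector connector = JMXConnectorFactory.connect(url, null);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();

            // Proxy the platform MemoryMXBean (java.lang:type=Memory).
            MemoryMXBean memory = ManagementFactory.newPlatformMXBeanProxy(
                    mbsc, ManagementFactory.MEMORY_MXBEAN_NAME,
                    MemoryMXBean.class);

            MemoryUsage heap = memory.getHeapMemoryUsage();
            System.out.println("Heap used: " + heap.getUsed()
                    + " of " + heap.getMax() + " bytes");
        } finally {
            connector.close();
        }
    }
}
==============================================

Compile and run it with the same JDK that provides jconsole, and adjust the port if you chose something other than 8999.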

Wednesday, November 19, 2008

TADDM TUG Presentation

I'm giving this presentation on TADDM for the NYC TUG today and the Philadelphia TUG tomorrow, and wanted to make it available online.

Full size: https://drive.google.com/open?id=0B2lRAtNC_A9BYkZ1THlEb3lMeEU

Sunday, November 16, 2008

Accessing your Windows files from a Linux VM

At least in VMWare Workstation 6.5 (and probably earlier versions, tho I'm not sure) running on Windows, you can easily access any of your host OS files from any Linux VM. You just need to enable Shared Folders (from VM->Settings, in the Options tab) and specify the folders you want to have accessible from Linux. Once you do this, you should see those folders under /mnt/hgfs in Linux. So it looks just like a regular filesystem from the Linux perspective.

Note: I verified this with CentOS 5.

Adding disk space to a Linux VM in VMWare

I had a CentOS 5 VM that just didn't have enough disk space, so I wanted to give it some more. I didn't think it would be too hard, and in the end it wasn't, but it sure took me a while to find all the steps to accomplish it. So here are the ones I found useful. YMMV :)


Host OS: Windows Vista x64 SP1

VMWare Software: VMWare Workstation 6.5

Guest OS: CentOS 5 (code equivalent to RHEL 5)


1. Power off the VM (have to do this to add a new disk)

2. Create a new virtual disk (this is the easy part)
a. Go into VM->Settings and in the Hardware tab, click the Add... button.
b. Follow the instructions. This is very straightforward. I created a new 8GB disk.

3. Power on the VM and log in as root.

4. I decided to use the LVM subsystem, and that's what these steps address:

a. Create a Physical Volume representing the new disk: pvcreate /dev/sdb
b. Extend the default Volume Group to contain the new PV:
vgextend VolGroup00 /dev/sdb

c. Extend the default Logical Volume to include the newly-acquired space in the VG:
lvextend --size +7.88G /dev/VolGroup00/LogVol00
(The disk is 8GB according to VMWare, but it looks like around 7.88GB to Linux)

d. Grow the / (root) filesystem to stretch across the entire LV:
resize2fs -p /dev/mapper/VolGroup00-LogVol00

And that's it. I took the defaults on installing CentOS, so my / (root) filesystem is of type ext3, which supports this dynamic resizing.

So in this case, this disk is basically tied to this VM. If you wanted to create a disk that could be used by different VMs, you would certainly go about it differently, but that's a different topic.