Tuesday, December 30, 2008

TPM 5.1.1.2 Custom Inventory Scan

Well, it is finally available. The custom inventory scan is one of the features that existing Tivoli Configuration Manager (TCM) customers required in TPM before they could move to using TPM completely (yes, there are still more, but this one was pretty important).

A while back, I created a blog entry on creating custom inventory scans with TCM (see http://blog.gulfsoft.com/2008/03/custom-inv-scan.html) that I thought I would look at moving to TPM. It is a basic VBS script that reads a registry key and records the values to a MIF file. This MIF file is then imported into the DCM and can be used in a query from the Reports section of the TPM UI.

The steps involved are:
1. Define the requirements for the data to determine table columns
2. Create the table
3. Create the pre and post scripts to create the output file (can be MIF or XML)
4. Create the inventory extension properties file and import
5. Deploy the scripts to the target
6. Run the custom discovery created by the import in step 4
7. Run the custom report created by the import in step 4

Define Requirements
For this sample, the requirements for the data to be inserted are the Registry Hive, Key(s) and Value.

Create the Table
The table will be made up of columns for the requirements defined above, plus columns to uniquely identify the computer and the discovery. Also, the fix pack documentation recommends adding foreign keys so that deletes cascade. For this example, I set the various fields to VARCHAR and defined a plausible length for each. I also assigned the table to the tablespace IBM8KSPACE, well, just because I felt like it ;)

Table Creation Syntax for DB2
create table db2admin.registrydata(REG_PATH VARCHAR(256), REG_VALUE VARCHAR(32), REG_DATA VARCHAR(128)) in IBM8KSPACE;
alter table db2admin.registrydata add column SERVER_ID BIGINT;
alter table db2admin.registrydata add column DISCOVERY_ID BIGINT;
alter table db2admin.registrydata add foreign key (SERVER_ID) references db2admin.SERVER(SERVER_ID) on DELETE CASCADE;
alter table db2admin.registrydata add foreign key (DISCOVERY_ID) references db2admin.DISCOVERY(DISCOVERY_ID) on DELETE CASCADE;

Create the pre and post scripts
One possible issue is passing arguments to the scripts, but then again, this was not that easy in TCM either. The examples in the fix pack docs do not mention passing arguments in the extension properties file (discussed in the next step), and from the limited testing I did, it does not seem possible.

The pre script will be used to generate the MIF file to be retrieved. One issue I have with the pre script is that it does not accept arguments. So this means that to execute the VBS script that I want to use, I have to create a wrapper BAT file to pass the arguments.

The post script does not really need to do anything. It could be as simple as just an exit 0.

prescript.windows.bat
==============================================
cscript //nologo c:\test\registrydata.vbs "Software\Martin" c:\test\regdata.windows.mif
==============================================

postscript.windows.bat
==============================================
echo Running post script
==============================================
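For reference, registrydata.vbs is the script from the earlier TCM post. As an illustration of what it does, here is a rough Python equivalent (a sketch only: it assumes the same HKLM key path handling as this example, and it writes just the TABLE section of the MIF; the header sections appear in the sample below):

registrydata.py (hypothetical equivalent of the VBS script)
==============================================
import sys
import winreg

def dump_key_to_mif(subkey, mif_path):
    # Collect (path, value name, data) rows from HKLM\<subkey>
    rows = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, subkey) as key:
        i = 0
        while True:
            try:
                name, data, _type = winreg.EnumValue(key, i)
            except OSError:  # no more values under this key
                break
            rows.append((subkey.replace("\\", "\\\\"), name, str(data)))
            i += 1
    with open(mif_path, "w") as mif:
        # Only the TABLE section is written here; the COMPONENT/GROUP
        # header from the sample MIF below would be written the same way.
        mif.write('START TABLE\n')
        mif.write('NAME = "REGISTRYDATA"\n')
        mif.write('ID = 1\n')
        mif.write('CLASS = "DMTF|REGISTRYDATA|1.0"\n')
        for path, value, data in rows:
            mif.write('{"%s","%s","%s"}\n' % (path, value, data))
        mif.write('END TABLE\n')

if __name__ == "__main__":
    dump_key_to_mif(sys.argv[1], sys.argv[2])
==============================================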

regdata.windows.mif (sample output)
==============================================
START COMPONENT
NAME = "REGISTRY VALUE DATA"
DESCRIPTION = "List registry value and data entries"
START GROUP
NAME = "REGISTRYDATA"
ID = 1
CLASS = "DMTF|REGISTRYDATA|1.0"
START ATTRIBUTE
NAME = "REG_PATH"
ID = 1
ACCESS = READ-ONLY
TYPE = STRING(256)
VALUE = ""
END ATTRIBUTE
START ATTRIBUTE
NAME = "REG_VALUE"
ID = 2
ACCESS = READ-ONLY
TYPE = STRING(32)
VALUE = ""
END ATTRIBUTE
START ATTRIBUTE
NAME = "REG_DATA"
ID = 3
ACCESS = READ-ONLY
TYPE = STRING(128)
VALUE = ""
END ATTRIBUTE
KEY = 1,2,3
END GROUP
START TABLE
NAME = "REGISTRYDATA"
ID = 1
CLASS = "DMTF|REGISTRYDATA|1.0"
{"Software\\Martin","test","test"}
{"Software\\Martin","test1","test1"}
END TABLE
END COMPONENT
==============================================


Create the inventory extension properties file
The extension properties file defines the various properties required to populate TPM with the definition, tables, scripts and output file to be used in the discovery process. The properties file for this example is as follows:

==============================================
#name the extension
extName=REGISTRYDATA

#Description of the extension
extDescription=GBS Collect Registry data

#Custom Table Names
TABLE_1.NAME=REGISTRYDATA

#file for Windows platform
WINDOWS=yes
pre_windows=C:\\test\\prescript.windows.bat
out_windows=c:\\test\\regdata.windows.mif
post_windows=c:\\test\\postscript.windows.bat
==============================================

extName is used to define the DCM object name for the new extension.
extDescription is used in the description field for the discovery configuration and report (if created).

Multiple table names can be defined by adding sequential numbers after the TABLE_ prefix (TABLE_1, TABLE_2, and so on). For the purposes of this demo, only one table is used.

Operating system flags – This can be defined for WINDOWS, AIX, HPUX, SOLARIS and LINUX

Pre/post scripts and output files are defined per operating system. Use the prefixes pre_, out_ and post_ with the OS names windows, aix, hpux, solaris and/or linux.

Save the file using whatever naming convention you like. For the purposes of this example, the file was created as C:\IBM\tivoli\custom\inventory\registrydata\registrydata.properties.

Import the properties file
To import the properties, use the command %TIO_HOME%\tools\inventoryExtension.cmd. This command can be used to create, delete and list inventory extensions. The syntax used for this example was:

inventoryExtension.cmd create -p C:\IBM\tivoli\custom\inventory\registrydata\registrydata.properties -r yes

The "-p" parameter defines the file to be used, and "-r yes" tells the import to also create the custom report for the inventory extension.

Command Output:
=========================================
C:\IBM\tivoli\tpm\tools>inventoryExtension.cmd create -p C:\IBM\tivoli\custom\inventory\registrydata\registrydata.properties -r yes
2008-12-30 09:59:03,890 INFO log4j configureAndWatch is started with configuration file: C:\ibm\tivoli\tpm/config/log4j-util.prop
2008-12-30 09:59:04,062 INFO COPINV006I Parsing the command line arguments ...
2008-12-30 09:59:04,796 INFO COPINV007I ... command line arguments parsed.
2008-12-30 09:59:04,796 INFO COPINV008I Start processing ...
2008-12-30 09:59:04,812 INFO Start parsing property file
2008-12-30 09:59:08,780 INFO Finished parsing property file
2008-12-30 09:59:08,874 INFO COPINV021I The specified extension: REGISTRYDATA has been successfully registered.
2008-12-30 09:59:08,874 INFO Creating discovery configuration...
2008-12-30 09:59:14,984 INFO Discovery REGISTRYDATA_Discovery successfully created
2008-12-30 09:59:15,390 INFO Report REGISTRYDATA_Report succesfully created
2008-12-30 09:59:15,484 INFO COPINV009I ... end processing.
2008-12-30 09:59:15,484 INFO COPINV005I The command has been executed with return code: 0 .
=========================================

Note the names of the Discovery Configuration and the Report created. TPM does not need to be restarted for this to take effect.

Deploy the scripts to the target
The next step is to deploy the script(s) to the target(s). In TCM, this was done by creating a dependency on the inventory object. In TPM this is not currently documented, but it could be done with a software package, a workflow, or possibly by modifying the CIT package (I am not sure that last one would be advisable). I am going to leave this out for now, as I think it needs further investigation; for this example I just manually copied the files to the remote target, into C:\test as defined in the properties file.


Run the custom discovery
Once the import is completed, a new discovery configuration is created. This discovery configuration will be labeled according to the extName field in the properties file and suffixed with “_Discovery”.




One of the main differences between a normal inventory discovery and the custom one is on the Parameters tab of the discovery configuration: there is a new field called Inventory Extension.




Run the custom report (if created)
Once the scan is completed, execute the custom report. This will display the results for the data collected. The report is created under the Discovery section.




The results display the scanned registry data for each computer.



Other Random Notes

There are also three new tables (that I have found so far): INVENTORY_EXTENSION, INVENTORY_EXTENSION_FILE and INVENTORY_EXTENSION_TABLE. The table INVENTORY_EXTENSION_FILE contains the pre, post and output file entries; I was able to modify these entries to use a different file on the target without having to re-import the properties file.

Wednesday, December 24, 2008

ITM 6.2.1 Agent Availability Monitoring - What's new?

If you have been using MS Offline situations in ITM 6.x so far, you know the drawbacks of such monitoring. For example, you will get hundreds of these situation events when an RTEMS goes offline, even if the agents themselves are running. You can use process monitoring situations (for the process kntcma.exe), but those have their own drawbacks. ITM 6.2.1 introduces a new approach to this problem, and this article explains the new solution.

In ITM 6.2.1, we can use the Tivoli Proxy Agent Services to monitor the availability of the agents. For example, if a Windows OS agent goes down, the Tivoli Proxy Agent Services can restart it. If the OS agent goes down too many times, you can easily monitor the condition using a situation: just use the Alert Message attribute in the Alerts Table attribute group and check whether the agent exceeded its restart count. Here are a few other conditions that you can monitor with the Alert Message attribute:
  • Agent Over utilizing CPU
  • Agent Over Utilizing Memory
  • Agent Start Failed
  • Agent Restart Failed
  • Agent Crashed (abnormal stop)
  • Agent Status Check Script Failed
  • Managed/Unmanaged agent removed from the system
With the combination of the Tivoli Proxy Agent Services auto-restart feature and a set of situations to monitor the exceptions above, you can devise an effective agent availability monitoring solution with fewer, and only relevant, alerts reaching the console (a sample situation predicate is sketched below). Do you have questions about the above solution? Please feel free to write back.
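For example (a sketch only; verify the exact attribute group and message strings against the Alerts table in your environment), a situation predicate for repeated restart failures might look like:

*IF *VALUE Alerts_Table.Alert_Message *EQ 'Agent Restart Failed'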

Merry Christmas!

Tuesday, December 23, 2008

Creating a Web Services/SOAP client in eclipse

The Eclipse development environment is very powerful, so I figured I would create a simple SOAP client in it from a given WSDL file. As it turns out, it's a little painful, so I wanted to write up the steps I had to go through.

I used Eclipse 3.3.2 (Europa) JEE on Windows XP in a VM.

I started with an empty Java project named "FirstJava".

First, import your WSDL file into your project.
- Select the "src" folder of your project, right-click and select "Import"
- Choose General/File System
- Select the folder containing your file and click OK
- Now place a check mark next to your WSDL file and click OK

Now download Tomcat. There is a built-in "J2EE Preview" that works for some things, but not for this.

Now create a new Target Runtime and server associated with Tomcat.
- Select the project and select Project->Properties, then select "Targeted Runtimes" on the left of the dialog.



- Click the "New" button
- Select Apache->Tomcat 5.5 AND ALSO CHECK "Also create new local server"



- Input the appropriate values as needed.


- Select the newly-created runtime


- Click OK to save the properties.

Create a new Web Services Client
- Right-click your WSDL file name and select New->Other..., then in the dialog displayed, select Web Services->Web Service Client.



- Your WSDL file name should be filled in at the top of the dialog


- In this dialog, use the slider underneath "Client Type" to specify "Test client" (move it all the way to the top).
- Click Finish.
- This will create a bunch of new code in your current project, plus it will create a new project (named "FirstJavaSample" in my case) with the JSPs you'll be able to (hopefully) run to test your client.


- This will give you an error about the JSP not supporting org.apache.axis.message.MessageElement[]. Just click OK several times until the error box goes away. We'll fix that later.

If all went well, you should see something like the following:


Now we have to fix the errors.

Create a JAR file named FirstJava.jar containing the contents of the bin directory of your FirstJava project.

Copy that file to the Tomcat "Common/lib" folder (C:/Apache/Tomcat5.5/Common/lib on my system).

You will additionally need to find these files under the eclipse/plugins directory and copy them to the Tomcat Common/Lib folder:

axis.jar
saaj.jar
jaxrpc.jar
javax.wsdl15_1.5.1.v200705290614.jar
wsdl4j-1.5.1.jar

(If you can't find them on your system, use Google to find and download them. One of them - I don't recall which - was tricky to find for me because it was actually in another JAR file.)

Now stop the Tomcat server by selecting it in the "Servers" view and clicking the stop (red square) button.


Now re-run your application by opening the FirstJavaSample project and finding the file named "TestClient.jsp". Right-click that file and select Run As->Run On Server, select your Tomcat server and click Finish.

You should now see that things are working correctly.


You may need to edit the generated JSP files to add input fields and such, but that's specific to your particular file.

Good luck, and happy coding.

Thursday, December 11, 2008

Enabling JMX in TBSM

Since TBSM runs on top of Tomcat, you can enable JMX access to TBSM's Tomcat server to give you some insight into how the JVM is doing. To do this, you'll need to edit the $NCHOME/bin/rad_server file to add a line (in the appropriate place, which should be easy to spot once you're in the file):

JAVA_OPTS="${JAVA_OPTS} -Dcom.sun.management.jmxremote -Dcom.sun.management.jmxremote.port=8999 -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"

This specifies that no authentication is needed, which is fine in a test environment, but for a production environment, you would want to enable SSL and/or authentication (Google will find lots of links for you).

You do need to restart TBSM for the changes to take effect. Once it starts back up, you can view the JMX data using the jconsole command that is included with the Java 1.5 (and above) JDK. When you start jconsole, specify the hostname of your TBSM server and port 8999 (specified above), with no user or password.
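For example, assuming a TBSM server named tbsmhost:

jconsole tbsmhost:8999

That will give you a nifty GUI that looks like this: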



One of the available tabs is "Memory", which will give you some (possibly) useful information about memory utilization in the JVM.


As you can see, there are other tabs, which you should investigate to see what additional information is available.








Wednesday, November 19, 2008

TADDM TUG Presentation

I'm giving this presentation on TADDM for the NYC TUG today and the Philadelphia TUG tomorrow, and wanted to make it available online.

Full size: https://drive.google.com/open?id=0B2lRAtNC_A9BYkZ1THlEb3lMeEU

Sunday, November 16, 2008

Accessing your Windows files from a Linux VM

At least in VMWare Workstation 6.5 (and probably earlier versions, though I'm not sure) running on Windows, you can easily access any of your host OS files from any Linux VM. You just need to enable Shared Folders (from VM->Settings, in the Options tab) and specify the folders you want to have accessible from Linux. Once you do this, you should see those folders under /mnt/hgfs in Linux, so it looks just like a regular filesystem from the Linux perspective.

Note: I verified this with CentOS 5.

Adding disk space to a Linux VM in VMWare

I had a CentOS 5 VM that just didn't have enough disk space, so I wanted to give it some more. I didn't think it would be too hard, and in the end it wasn't, but it sure took me a while to find all the steps to accomplish it. So here are the ones I found useful. YMMV :)


Host OS: Windows Vista x64 SP1

VMWare Software: VMWare Workstation 6.5

Guest OS: Centos 5 (code equivalent to RHEL5)


1. Power off the VM (have to do this to add a new disk)

2. Create a new virtual disk (this is the easy part)
a. Go into VM->Settings and in the Hardware tab, click the Add... button.
b. Follow the instructions. This is very straightforward. I created a new 8GB disk.

3. Power on the VM and log in as root.

4. I decided to use the LVM subsystem, and that's what these steps address:

a. Create a Physical Volume representing the new disk: pvcreate /dev/sdb
b. Extend the default Volume Group to contain the new PV:
vgextend VolGroup00 /dev/sdb

c. Extend the default Logical Volume to include the newly-acquired space in the VG:
lvextend --size +7.88G /dev/VolGroup00/LogVol00
(The disk is 8GB according to VMWare, but it looks like around 7.88GB to Linux; see the tip after step d.)

d. Extend the device containing the / (root) filesystem to stretch across the entire LV:
resize2fs -p /dev/mapper/VolGroup00-LogVol00
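(Tip: if your version of LVM supports it, you can skip the size arithmetic in step c and give the logical volume all remaining free space in the volume group instead: lvextend -l +100%FREE /dev/VolGroup00/LogVol00)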

And that's it. I took the defaults on installing CentOS, so my / (root) filesystem is of type ext3, which supports this dynamic resizing.

So in this case, this disk is basically tied to this VM. If you wanted to create a disk that could be used by different VMs, you would certainly go about it differently, but that's a different topic.

Monday, October 20, 2008

Why don't databases have foreign keys any more?

If you've looked at a database from a vendor lately (Tivoli included; some example products that include databases are CCMDB, TADDM, ITCAM for WAS, ITCAM for RTT to name a few), you'll notice that there are very few (or NO) relationships between the tables. What this means is that you can't get a nice Entity Relationship Diagram from them, and that makes it quite a bit harder to write reports using the data in them. 

So why is it this way?

More and more, software developers are using Object-relational mapping (ORM) components to abstract the software from the database. Here is a pretty comprehensive list of object-relational mapping software. What this software does is abstract the data relationships from BOTH the software AND from the database, putting that relationship information into various files that are used by the selected ORM implementation. The impact this has is that it is difficult as a consumer/user of the software to write reports directly against the database - because the first step is having to reverse engineer the usage of the tables in the database.

What can be done about it?

I can think of a couple of approaches:

1. The best solution I can think of is that you need to ask the vendor for ERDs for all databases that will be used to store collected metrics. 

2. Since there may be some valid intellectual property-related arguments against no. 1 above, the next best approach would be to ask the vendor for the SQL needed to produce specific reports.

3. If neither of the above works, then reverse engineering is the only approach left. I've had success in this area by turning up the debugging on the software and looking for SQL "SELECT" statements in the log files (a quick script helps here; see the sketch below).
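As an example of that last approach, here is a minimal sketch of the log mining in Python (the log path and the pattern are assumptions; adjust the regular expression to the product's trace format):

# select_miner.py: pull distinct SQL SELECT statements out of a debug log,
# to see which tables the ORM layer actually touches.
import re
import sys

select_re = re.compile(r"SELECT\s.+?\sFROM\s+\S+", re.IGNORECASE)

def extract_selects(log_path):
    seen = set()
    with open(log_path) as log:
        for line in log:
            for stmt in select_re.findall(line):
                if stmt not in seen:  # report each distinct statement once
                    seen.add(stmt)
                    print(stmt)

if __name__ == "__main__":
    extract_selects(sys.argv[1])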

Wednesday, October 15, 2008

Data visualization using Google Spreadsheets, Yahoo Pipes and Google Maps

OK, so this isn't a Tivoli-related post, but since a big part of what we all want to do is visualize data, I thought that this post that I found through reddit.com was an excellent description of how to use several free web-based utilities to gather and display data. Here's the link:


Here is the author's summary:

So to recap, we have scraped some data from a wikipedia page into a Google spreadsheet using the =importHTML formula, published a handful of rows from the table as CSV, consumed the CSV in a Yahoo pipe and created a geocoded KML feed from it, and then displayed it in a Yahoo map.

I haven't come up with an implementation that gathers Tivoli data, but I can think of one:

The main thing you would need is to get HTML that can be consumed by Google Spreadsheets. One way to do this would be with a JSP (on WAS), PHP (on IBM HTTP Server) or another server-side language that makes SOAP or direct database calls to retrieve data from the TDW and display it as HTML. Once you have that, you could then follow the steps in the article to visualize the data.
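As a minimal sketch of that server-side piece (Python here rather than JSP or PHP, with sqlite3 and made-up data standing in for the real TDW connection and query, which you would replace with a DB2 driver and real warehouse SQL):

#!/usr/bin/env python
# CGI-style sketch: run a query and emit a plain HTML table that Google
# Spreadsheets' =importHTML formula can scrape.
import sqlite3

def render_html(headers, rows):
    parts = ["Content-Type: text/html\n", "<html><body><table border='1'>"]
    parts.append("<tr>" + "".join("<th>%s</th>" % h for h in headers) + "</tr>")
    for row in rows:
        parts.append("<tr>" + "".join("<td>%s</td>" % c for c in row) + "</tr>")
    parts.append("</table></body></html>")
    return "\n".join(parts)

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")  # stand-in for the real TDW database
    conn.execute("CREATE TABLE cpu_summary (hostname TEXT, avg_cpu REAL)")
    conn.execute("INSERT INTO cpu_summary VALUES ('host1', 12.5)")
    rows = conn.execute("SELECT hostname, avg_cpu FROM cpu_summary").fetchall()
    print(render_html(["Hostname", "Avg CPU"], rows))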

Thursday, September 25, 2008

Creating TPM Signatures for Multi_sz registry entries

When creating software signatures for Windows, you can use registry entries to identify an application or operating system on a system. If you check out many of the existing Windows signatures, you will see that they use registry entries extensively.

One issue I came across was around the use of multi_sz type registry entries. It came up when scanning a Windows server whose operating system was not being identified. In order to define the operating system, the following keys were needed:

HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProductName=Microsoft Windows Server 2003
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\CSDVersion=Service Pack 1
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\ProductOptions\ProductType=ServerNT

And the multi_sz
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\ProductOptions\ProductSuite=Blade Terminal Server

For this last key, Blade and Terminal Server were on separate lines. After looking at other signatures that also used multi_sz, I could not see what was required to separate the lines, so I tried a space, \n and a couple of others that never worked.

I then tried the dcmexport command and looked at other multi_sz definitions. They were useless as the character was not a normal ASCII character.

Then I found Windows signatures located in $TIO_HOME/eclipse/plugins/windows-operating-system/xml. In there I found the code & # 1 4 5 ; (remove the spaces) used in existing definitions. I then created the software signature, associated it with the Software Definition Windows Server 2003 Standard Edition SP1 (for lack of a better one for now), ran a scan and the operating system was recognized.
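If you ever need to build such a value outside the GUI, the separator is simply character 145; a quick Python illustration:

# Join the multi_sz lines with character 145, the "&#145;" entity found in
# the shipped signature XML.
SEPARATOR = chr(145)
product_suite = SEPARATOR.join(["Blade", "Terminal Server"])
print(repr(product_suite))  # -> 'Blade\x91Terminal Server'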

Hope this helps.

Monday, September 8, 2008

TCR Report Packages

Tivoli Common Reporting provides two ways to package your BIRT report design files: the Report Design Format and the Report Definition Format. Both have the same acronym, so we can't abbreviate them. This article explains the two formats in detail.


Report Design Format

This is the simpler of the two formats. How do you create one? Simply create a zip file of your BIRT project directory (note! you need to create the zip file of the PROJECT, not the ReportDesign file). That's it. In Windows, you can right-click on the project directory and click Send To->Compressed (Zipped) Folder, and you're done! Import the resulting zip file into TCR and it will import the files with the .rptdesign extension as TCR reports.
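If you would rather script the packaging than right-click, a couple of lines of Python (2.7 or later) do the same thing; the project directory name here is just this example's:

# Zip the whole BIRT *project* directory (not just the .rptdesign file).
import shutil
shutil.make_archive("GBSReports", "zip", root_dir=".", base_dir="GBSReports")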

The path to the report design files will be reflected in the TCR navigation tree. For example, if your report design files are located in the /GBSReports/Netcool directory, then the report designs will appear under GBSReports->Netcool in the TCR navigation tree.

Report Definition Format

The TCR documentation does not give a good idea of the Report Definition Format. Creating one involves writing an XML file that describes the report set structure. The advantage of this format is that you can create a well-documented report set, and it also provides the ability to share the same report design across multiple levels in the navigation tree. Even though this format helps to create a well-documented report set, ironically, TCR itself does not have detailed documentation for it, such as details about the XML schema. The OPAL ITM62 report set uses the Report Definition Format and should give you some idea, but that's all I have come across so far.

Overall, the Report Design Format should be good enough for most of our needs, and it is much easier to change the report set structure with it as well. However, the Report Definition Format gives more control over report set organization, and one hopes that more information about the format will be available in future releases.


Wednesday, August 27, 2008

And here's how to use extended attributes in TADDM

UPDATE: Since TADDM 7.1.2 is MUCH friendlier about extended attributes, I changed the title of this post. Details on reporting on extended attributes to come soon.

... you'll want to be able to populate them during discovery. And this requires the use of the SCRIPT: directive in your custom server template. The documentation on this is pretty sketchy, so that's why I'm including a full script here that will set the value of the "FrankCustom" custom attribute for a LinuxComputerSystem.

Prerequisites
1. You must create the extended attribute named FrankCustom for the LinuxUnitaryComputerSystem class (or any class in its inheritance chain).

2. You must enable the LinuxComputerSystemTemplate custom Computer System sensor. You can do this from the TADDM GUI, Discovery drawer, click on Computer Systems, and edit the LinuxComputerSystemTemplate to set it to "enabled".

3. You have to create a file named $COLLATION_HOME/etc/templates/commands/LinuxComputerSystemTemplate with the following line:

SCRIPT:/myscripts/myscript.py

If it's a Jython script you're calling (which it is in this case), the script filename MUST end with a ".py" extension. If you try to use ".jy", it will absolutely fail.

Now, you need to create the file /myscripts/myscript.py with the following contents:



import sys
import java
from java.lang import System

coll_home = System.getProperty("com.collation.home")
System.setProperty("jython.home",coll_home + "/external/jython-2.1")
System.setProperty("python.home",coll_home + "/external/jython-2.1")

jython_home = System.getProperty("jython.home")
sys.path.append(jython_home + "/Lib")
#sys.path.append(coll_home + "/lib/sensor-extensions")
sys.prefix = jython_home + "/Lib"

import traceback
import string
import re
import StringIO
import jarray

from java.lang import Class
from com.collation.platform.util import ModelFactory
from java.util import Properties
from java.util import HashMap
from java.io import FileInputStream
from java.io import ByteArrayOutputStream
from java.io import ObjectOutputStream


#Get the osobject and the result object
os_handle = targets.get("osobject")
result = targets.get("result")
system = targets.get("system")
# two more pieces of information available
#env = targets.get("environment")
#seed = targets.get("seed")

# Execute a command (it's best to give the full path to the command) and
# store its output in the variable named output
output = os_handle.executeCommand("/bin/ls -l /tmp")
s_output = StringIO.StringIO(output)
output_list = s_output.readlines()

# This loop just stores the last line of output from the above command in
# the variable named "output_line"
for line in output_list:
    output_line = line
s_output.close()

#Get the AppServer ModelObject for the MsSql server
#app_server = result.getServer()

#Set up the HashMap for setting the extended attributes
jmap=HashMap()
jmap.put("FrankCustom",output_line)
bos=ByteArrayOutputStream()
oos=ObjectOutputStream(bos)
oos.writeObject(jmap)
oos.flush()
data=bos.toByteArray()
oos.close()
bos.close()

#Call setExtendedAttributes on the AppServer
system.setExtendedAttributes(data)


And there you have it. The examples you find in the documentation and on the TADDM wiki just won't lead you to any kind of success because they're missing some very key information.

Tuesday, August 26, 2008

Feel free to use extended attributes in TADDM

Update: TADDM 7.1.2 has massive improvements in data accessibility, so this post is no longer valid.


Don't let the documentation lead you astray - extended attributes in TADDM should be your LAST CHOICE for customization!

I realize that's a bold statement (especially since there's so much information about creating extended attributes in the documentation), but I'll explain why it's a valid statement:

1. You can't run any reports based on values stored in extended attributes! None. You can't say, for example, "give me all ComputerSystems where myCustomAttribute contains the string 'foo'". You just can't do it. The only call to get extended attribute values requires the GUID (or GUIDs) of the object(s) you're interested in. This is enough reason to stay away from extended attributes.

2. Extended attributes don't get imported into the CCMDB unless you do a LOT of customization to the TADDM integration adapter.

If you have custom data that you need to store in TADDM, what you need to do is become familiar with the CDM (Common Data Model) and the APIs available in TADDM. They are documented in three different archives that you have to extract (these all exist on your TADDM server):

CDMWebsite.zip - contains information about the CDM, including entity relationship diagrams.
model-javadoc.tar.gz - contains javadoc information for interacting with model objects.
oalapi-javadoc.tar.gz - contains javadoc information for accessing TADDM, from which you may just output XML, or you might obtain references to model objects, whose methods are defined in the above file.

If you're reading these files on your own workstation, make sure to grab a fresh copy from the TADDM server after each fixpack, as Tivoli does update this documentation.

By reading and understanding the documentation above, you can determine the appropriate object types to use to effectively store all of your data.

Getting the TADDM Discovery Client (aka TADDM GUI) to run when you have Java 1.5 and 1.6 installed

The TADDM GUI uses the Java Web Start technology (i.e. a JNLP file) to run. This allows the application to be launched as a full-screen application (rather than just an applet) over the web. If you ONLY have Java 1.5 installed, there shouldn't be a problem. But if you have 1.5 AND 1.6 installed, you'll probably get an error stating something similar to:


[SunJDK14ConditionalEventPump] Exception occurred during event dispatching:
java.lang.ClassCastException: java.lang.StringBuffer cannot be cast to java.lang.String
        at com.collation.gui.client.GuiMain.setApplicationFont(GuiMain.java:511)
        at com.collation.gui.client.GuiMain.doLogin(GuiMain.java:442)
        at com.collation.gui.client.GuiMain.access$100(GuiMain.java:83)

The reason the problem occurs is that the .JNLP file states that it wants Java 1.5+, which means 1.5 or higher, so javaws will pick the highest one available, which is 1.6. But 1.6 isn't supported by the TADDM GUI. So here's how to fix it.
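In short, the fix is to stop the JNLP from matching 1.6, for example by changing the j2se line from version="1.5+" to version="1.5*" (the same technique described in the "Customizing ITM 6.1 tep.jnlp" post further down; I haven't verified that this is the exact edit in the PDF below, so treat it as a sketch).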


This file describes what has to be done to fix the problem. I apologize for the PDF link, but trying to get all of the screenshots uploaded was driving me crazy.


Monday, August 25, 2008

Java Info

Java versions are even more confusing than you think they are, so I wanted to clear up at least a little of it here.

Java 1.4.2 is also called Java 2 Platform Version 1.4.

Java 1.5 is also called Java 5.

Java 1.6 is also called Java 6.

Java 1.5 (aka Java 5) introduced a LARGE number of features that aren't available in version 1.4.2. And from what I've found, it's pretty difficult to have both Java 1.5 (or 1.6) and 1.4.2 installed on the same system. If you NEED to run some apps that require Java 1.4.2 (like the TEP desktop client, for example) and some apps that require Java 1.5, I would just create a separate Windows virtual machine (using the virtual machine software of your choosing, though I would recommend VMWare) to run version 1.4.2.

After 1.4.2, however, it really appears (so far) that you can install the different versions on the same machine AND have them play well together. My next post will have some information about this.

If some piece of software states that it requires Java 1.5, that *should* mean that the JDK or JRE with a version number of 1.5.0_xx, where xx is the Update Number, will work; it's not guaranteed, but in my experience, it does NORMALLY work. Similar is true for Java 1.6 - version 1.6.0_xx *should* work.

What's the difference between the JDK and the JRE?

The JRE (Java Runtime Environment) is what most people need. It contains the java executable and all of the other executables that are needed to run Java clients, but it does NOT contain any of the executables needed to WRITE and package Java applications (i.e. javac and jar, among others).

The JDK (Java Developer Kit) contains all of the JRE, plus all of the developer tools needed to write and package Java applications.

What about SE and EE variants?

SE (Standard Edition) and EE (Enterprise Edition) are used to distinguish between types of developer environments.

SE is for creating client applications that will run inside a standalone JVM (Java Virtual Machine) on a user's local machine. Applets and Java Web Start applications, though they are accessed over the web, are examples of applications created using the SE developer kit.

EE contains all of the SE, PLUS it allows a developer to create applications that will run inside an application server, such as WebSphere, WebLogic, JBoss, Oracle App Server, and many others. These applications actually run on a JVM on a server. A user connects to that server to access the application, but the Java application itself is using CPU cycles on the server.

What's Java Web Start?

Java Web Start (JavaWS) is a technology that allows application developers to create a web-launchable Java application directly from the browser. A Java Web Start application is defined in a .JNLP file (which is just a text file, so you can open one up to look at it), and is launched using the javaws executable. This is different than an applet in a few ways:

1. An applet generally runs inside the browser itself, or it can open a new window in which it runs *seemingly* outside of the browser. I say "seemingly" because if you close the browser, the applet will still die. Any windows opened by the applet are child windows of the browser and will automatically be closed when the browser is closed. A JavaWS application's control file (the JNLP file) is downloaded by the browser, but is then launched by the javaws executable. So if you close the browser, the application won't close.

2. A JavaWS application is a completely standalone application, which has full control over menus, windowing elements, look-and-feel, etc. of itself. An applet is constrained by the browser in several ways. (This difference is mainly interesting to developers, but I think it's useful for users to be aware of).

3. A JavaWS application can be launched outside of a browser; you can just double-click the javaws executable (or a .JNLP file, for that matter) to launch a JavaWS application. An applet MUST be launched in a browser.

Where to get Java

Sun Java:

ANY VERSION older than the current version: http://java.sun.com/products/archive/index.html

Current Update of 1.5: http://java.sun.com/javase/downloads/index_jdk5.jsp

Current Update of 1.6: http://java.sun.com/javase/downloads/index.jsp

IBM Java:
Links to different platforms and versions: http://www.ibm.com/developerworks/java/jdk/

Tuesday, August 19, 2008

Converting TDW Timestamps to DateTime in BIRT

We have previously published articles on how to convert TDW timestamps to "regular" timestamps in DB2. In BIRT, you may need to tackle the same problem again: TDW timestamps cannot be used for DateTime arithmetic and charting functions in BIRT, and have to be converted to regular DateTimes. (A TDW timestamp is a string of digits where the first three are the years since 1900, followed by MMDDHHMMSS and milliseconds.) Though you can use SQL to convert these timestamps, in some cases you may want to use the JavaScript integration of BIRT to convert them instead. Here is how to do it.

1. In your BIRT data set, ensure that you're selecting the TDW timestamp that needs to be converted to a regular timestamp.
2. In the "Edit Data Set" window, click on Computed Columns and click the New button.
3. Give the new computed column a name and select "Date Time" as the column data type.
4. In the expression field, click the fX button and enter the following JavaScript code, replacing row["Timestamp"] with the appropriate column name for the TDW timestamp.

if (row["Timestamp"] == null)
{
    null
}
else
{
    new Date(
        (candleTime = row["Timestamp"],
         (parseInt(1900) + parseInt(candleTime.substr(0, 3))) + "/" +
         candleTime.substr(3, 2) + "/" +
         candleTime.substr(5, 2) + " " +
         candleTime.substr(7, 2) + ":" +
         candleTime.substr(9, 2) + ":" +
         candleTime.substr(11, 2)))
}

5. Go to Preview Results and ensure that the newly computed column appears in the output.

This is the kind of method that the ITM 6.2 built-in reports use; hope you find it useful.
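If you want to sanity-check the arithmetic outside BIRT, the same conversion takes a few lines of Python (the sample timestamp is made up):

# TDW "candle" timestamps: the first 3 digits are years since 1900,
# followed by MMDDHHMMSS (plus milliseconds).
ts = "1081230095903000"  # i.e. 2008-12-30 09:59:03
year = 1900 + int(ts[0:3])
print("%d/%s/%s %s:%s:%s" % (year, ts[3:5], ts[5:7], ts[7:9], ts[9:11], ts[11:13]))
# -> 2008/12/30 09:59:03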

Friday, August 8, 2008

CCMDB 7.1.1 Install Error: CTGIN2381E

When running the CCMDB 7.1.1 installer from a Windows system to install the code on a UNIX/Linux box (since the installer only runs on Windows, this is what you have to do if your server is UNIX/Linux), you may encounter the error:

CTGIN2381E: Maximo Database upgrade command failed

It turns out that this error message can be caused by lots of different issues, so the text of the error itself isn't necessarily very helpful. In my case, I found that the error was occurring because of an exception being thrown in the middle of the nodeSync() function of the DeploymentManager.py script (that is installed on your Windows deployment machine when you run the CCMDB installer). And what I found is that it just needed a sleep statement to allow the node manager to come up successfully. So the line I added (immediately after the "try:" statement at line 115) was:

lang.Thread.currentThread().sleep(30000)

This causes it to sleep for 30 seconds. You can probably set this timeout to a smaller value (esp. since the nodeSync() function is actually called quite a few times during the install), but 30 seconds worked perfectly for me.

Tuesday, July 29, 2008

Tivoli Common Reporting (TCR) Space on IBM DeveloperWorks

It's available here:

http://www.ibm.com/developerworks/spaces/tcr

It's got lots of good information and links on TCR. Also, you probably want to poke around a little to see if there's anything else on devworks that might interest you.

You can configure a laptop with 8GB of RAM

I recently got an 8GB kit for my laptop from:
http://www.compsource.com/ttechnote.asp?part_no=KVR667D2S5K28G&vid=229&src=F

and it works GREAT. I've got a Thinkpad t61p (Santa Rosa) running Vista Ultimate x64, and it sees all 8GB. Additionally (and most importantly), VMWare Workstation also sees all of the memory and allows me to allocate it to a virtual machine.

Friday, June 27, 2008

JDBC configuration in TCR v1.1.1

One of the important things in configuring the Tivoli Common Reporting server is the JDBC configuration. This step doesn't seem to be documented properly in the installation guide. The installation guide talks about modifying a Python script, but that is not needed; there is a simpler way to configure the JDBC driver.

The simplest way to get it working is to copy the JDBC drivers to a particular directory deep down in the TCR installation tree. The location is %TCR_HOME%\lib\birt-runtime-2_2_1\ReportEngine\plugins\org.eclipse.birt.report.data.oda.jdbc_2.2.1.r22x_v20070919\drivers
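For example, for a DB2 data source (the source paths are from my system; adjust for yours, and remember that DB2's JDBC driver also needs its license jar):

copy "C:\Program Files\IBM\SQLLIB\java\db2jcc.jar" "%TCR_HOME%\lib\birt-runtime-2_2_1\ReportEngine\plugins\org.eclipse.birt.report.data.oda.jdbc_2.2.1.r22x_v20070919\drivers"
copy "C:\Program Files\IBM\SQLLIB\java\db2jcc_license_cu.jar" "%TCR_HOME%\lib\birt-runtime-2_2_1\ReportEngine\plugins\org.eclipse.birt.report.data.oda.jdbc_2.2.1.r22x_v20070919\drivers"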

Once you have copied the JDBC drivers to that location and recycled TCR, the driver classes will be available to TCR and you should be able to configure JDBC connections using the TCR web interface (right-click on a report -> Edit Data Sources).

Hope this helps.

Wednesday, June 4, 2008

How to get GBSCMD?

Of late, there has been some confusion about how to get a copy of GBSCMD. GBSCMD is free and getting it is very easy. Just write an email from your company email address to Tony (Tony dot Delgross at gulfsoft.com) and ask for a copy, and he will send it to you. That's it. If you want some additional features to be included in it, please feel free to write to me (venkat at gulfsoft.com).


Friday, May 30, 2008

Parsing the Download Manager file

When using the Download Director from IBM, a file called dlmgr.pro is created in the directory that you download your files to. Given the naming convention of the downloaded files, I have often found it annoying to determine what a file actually is after I have downloaded it and left it for a while. The dlmgr.pro file contains pretty much all the info needed to identify each file, so I thought I would create a quick Perl script to parse it. It is not the prettiest script, but it does what I need.

The script requires the path and file name to the dlmgr.pro file as an argument, and then creates a file called dlmgr.csv in the current directory.
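The Perl version is linked below. As a rough illustration of the idea, here is the same thing sketched in Python (this assumes dlmgr.pro consists of key=value lines; check that against your copy of the file):

# dlmgr_to_csv.py: flatten the key=value lines of dlmgr.pro into a CSV so
# the downloaded file names can be matched up with their descriptions.
import csv
import sys

def pro_to_csv(pro_path, csv_path="dlmgr.csv"):
    with open(pro_path) as pro, open(csv_path, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["key", "value"])
        for line in pro:
            line = line.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            key, sep, value = line.partition("=")
            if sep:
                writer.writerow([key.strip(), value.strip()])

if __name__ == "__main__":
    pro_to_csv(sys.argv[1])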

Click here to download

Thursday, May 29, 2008

TPAP/ISMP/CCMDB/Maximo/TAMIT/etc. info

Tivoli is now starting to roll out solutions based on Maximo, so I figured an overview, explanation of the names of things and list of resources would be helpful.

Overview

IBM Tivoli CCMDB 7.1 was the first product introduced by Tivoli based on the newly-revised Maximo platform. It is now up to version 7.1.1, and there are now additional Maximo 7.1-based applications available, such as IBM Tivoli Asset Management for IT 7.1, IBM Maximo Asset Management 7.1, Tivoli Business Continuity Process Manager 7.1.0, IBM Tivoli Release Process Manager 7.1.1 and Tivoli Service Request Manager 7.1. (Notice how the naming isn't quite standardized; I imagine they will correct this in the future.) All of these products are fundamentally based on the same core software and database, which was previously known as the "Maximo base software". This base software is now known by a couple of names, depending on which documentation you're viewing: Tivoli Process Automation Platform (TPAP) or IBM Service Management Platform (ISMP or SMP)

(I'll use TPAP as my acronym of choice from here on, because it seems to be used in the latest redbooks). TPAP generally consists of the supporting middleware (WAS 6.2, Tivoli Directory Server 6.1, ITUP Composer, DB2 9.2, Rational Agent Controller 7.0, IBM HTTP Server and others), the database schema, and some WebSphere-based applications. I say generally because you do have options to use non-IBM middleware, and other software may end up being used in your implementation.

TPAP is NOT available as a standalone product, similar to the way that Tivoli Framework was/is not available as a standalone product. By itself, TPAP does "provide you with the capability" to do several things, but its usefulness really comes through when there are additional components bundled with it. This is the case with all of the products listed above - they are shipped as TPAP plus something. Exactly what that "something" is depends on the product. If you purchase more than one of the above products, you will only install TPAP once; you will then use each product's installer to add additional components on top of what you already have installed.

Example: CCMDB

As one example, when you install CCMDB, you get TPAP plus:

Change Process Manager - This consists of additional applications, roles, workflows, job plans, tools, reports and management controls to support the ITIL-defined Change process.

Configuration Process Manager - This consists of additional applications, roles, workflows, job plans, tools, reports and management controls to support the ITIL-defined Configuration Management process.

TADDM - A standalone application that is used to discover configuration items and their relationships, and this data is uploaded to the TPAP database through the IBM Tivoli Integration Composer.

Example: TAMIT

As another, slightly different example, when you install Tivoli Asset Manager for IT, you get TPAP plus:

Customized TPAP-based applications, roles, workflows, job plans, tools, reports and management controls to support all of the functions surrounding the management of IT Assets (such as Contract Management, Procurement and Financial Management).


Tivoli License Compliance Manager - A standalone application that is used to gather inventory/asset information, which is then fed into the TPAP database using the IBM Tivoli Integration Composer.

Tivoli License Compliance Manager for z/OS - similar to above, for z/OS.

Installing Multiple Products

As you install additional products, you will see additional options in the Go To menu of the TPAP interface. This interface is accessible by pointing your browser to http://your_server/maximo. A large issue currently is finding documentation on the integration of the functions provided when you have multiple products installed. I imagine IBM will publish some documentation on this as the product suite matures, and I plan to add blog entries as I develop more generic solutions/opinions for the integration issues from the implementations we're currently performing. A good place to check for information from IBM is at http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=maximo.

Etc.

There are some other process managers listed in Tivoli's documentation, but note that ONLY those process managers at version 7.1 or above are actually part of the current TPAP. The others are generally version 1.1, which are now completely outdated.

You will see the ISMP Toolbox referenced in several places, and that can be found on OPAL here. It's really a great collection of documentation related to IBM's service management strategy.

Monday, May 26, 2008

Thanks to everyone who came by our booth at Pulse


The conference was much larger than it has been in years past, and we got a chance to visit with old friends and to make some new ones. And for everyone who didn't get to go, here's a picture of us with our booth.

From left to right: Tony, Venkat, IV, Martin, Frank, Jason

Tuesday, April 29, 2008

Sample Emailing Script

Here is a simple custom emailing script using Perl. You need to install the Mail::Sendmail module from CPAN to get it to work. You also need to change the values of the $mailhost and $from fields.

Here is a typical usage of this program.

./itm_email.pl somebody@gulfsoft.com "Sample Subject" "Sample message body"


#!/usr/bin/perl

# Send emails via this platform-independent module
use Mail::Sendmail;

main();

sub main {
    my ($to, $subject, $message) = @ARGV;
    send_email($to, $subject, $message);
}

# Sends email using the Mail::Sendmail module
sub send_email {
    my ($to, $subject, $message) = @_;
    my ($mailhost, $from, %mail);

    $mailhost = "smtp.mydomain.com";
    $from     = "itm_alerts\@mydomain.com";

    %mail = (
        Smtp    => $mailhost,
        From    => $from,
        To      => $to,
        Subject => $subject,
        Message => $message,
    );
    sendmail(%mail);
}


Hope you find it useful.

Sunday, April 20, 2008

TCM 4.2.3 FP06

Not sure how I missed this but FP06 is now out (since March 28).

Some interesting features:

1. Windows 2008 is supported as an Endpoint
2. AIX 6.1 is supported as an Endpoint
3. WSUS 3.0 support

There are also some other changes that you can read further about at ftp://ftp.software.ibm.com/software/tivoli_support/patches/patches_4.2.3/4.2.3-TIV-TCM-FP0006/4.2.3-TIV-TCM-FP0006.README

Thursday, April 17, 2008

Tivoli System Automation for Multiplatforms Overview

TSAM is not a new product, but it's definitely one that is becoming more popular these days, especially with the introduction of the Balanced Data Warehouse, which relies on its capabilities. So I thought a general overview might be useful. This high-level overview doesn't go into the gory details, as those are very well documented in the product documentation and in several redbooks and redpieces (links at the end of this post). Rather, this overview is meant to be a little more explanatory than the existing marketing material.

TSAM competes directly with Veritas Cluster Services (VCS). The purpose of both products is the same: to provide high availability to resources. A large portion of the functionality of TSAM is provided by the Reliable Scalable Cluster Technology (RSCT) component, which is included as part of AIX, and is available for Windows, Red Hat and SuSE Linux. RSCT, along with its prerequisites (such as the General Parallel File System), provides the mechanisms that allow you to define resources and have them monitored. There is no graphical interface for the configuration of RSCT or TSAM, so all of the configuration must be done from the command line.

RSCT provides the ability to "harvest" resources in your environment, so it will find and identify all of your network adapters, disks, filesystems, etc. You don't have to define *those* components. RSCT also provides the *ability* to define arbitrary logical resources and the *ability* to define automations to react to changes in the environment. It also allows you to group resources into Resource Groups, which let you manage multiple resources as a single logical entity, and to define Relationships between resources, allowing you to specify, for example, that two filesystems must ALWAYS be available on the same node. (Update: these features are actually provided by TSAM.) RSCT also allows you to define actions that should be taken when different resources change states. However, if you wanted to leverage RSCT directly, you would have to do a LOT of detailed customization for each component that you actually want to automate and make highly available.

This is where TSAM fits in to make life easier for you. You've still got a good bit of customization to do, but it's a LOT less than you would otherwise have to do with the base RSCT. TSAM sits on top of RSCT and provides the mechanism that actually allows you to define arbitrary resources, resource groups and relationships, and based on your configuration (or "policy"), TSAM configures all of the automation necessary to make your resources highly available among the nodes in a cluster.

Specifically, TSAM allows you to define resources of type "IBM.Application" (these are your applications), for which you configure Start, Stop and Monitor commands. TSAM also provides the feature that automates the reflex actions required when your resources change state (for example, when an application goes Offline). With these features (on top of RSCT as the base technology), TSAM fully automates the high availability of your applications based upon your configuration.
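As a sketch of what that looks like from the command line (the paths and node names here are made up; see the TSAM and RSCT command references for the full attribute list), defining an application resource is a single mkrsrc call:

mkrsrc IBM.Application Name="myapp" StartCommand="/opt/myapp/bin/start" StopCommand="/opt/myapp/bin/stop" MonitorCommand="/opt/myapp/bin/status" UserName="root" NodeNameList="{'node1','node2'}"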

Much (certainly not all) of TSAM is scripts that are wrappers for the underlying RSCT commands. So when debugging TSAM, you'll primarily use RSCT commands. This is important to note, since it means that you must be intimately familiar with RSCT to be able to successfully deploy and manage TSAM. And TSAM makes a point of not replicating the RSCT documentation, so many of the commands you'll use to configure TSAM are documented in the RSCT manuals.

TSAM also provides an Operations Console, which is implemented as a WebSphere application that plugs into the WebSphere-based Integrated Solutions Console. This is really JUST an Operations Console, in that it ONLY allows operator-level management of your environment (e.g. bring an application online or offline, view information). This console provides you with NO configuration capabilities. You need to create all of your Resources, Resource Groups, Relationships, etc. from the command line. But it's not all painful. TSAM also provides the capability to export your "Automation Policy" (essentially, your entire configuration) to an XML file, and also to import from the same file. So this allows IBM and others to provide sample automation XML files that you can download, modify for your environment, and add to your existing policy.

All of the above is provided by the TSAM Base Automation Component. TSAM also provides an End to End (E2E) Automation Adapter, which allows you to manage and automate actions to be performed on resources across multiple clusters. This E2E component supports not only TSAM clusters, but also HACMP, Linux clusters, and even VCS clusters on Solaris.

TSAM also provides a TEC EIF publisher so you can send events to TEC (or the Omnibus EIF Probe) to allow you to monitor the status of your TSAM environment.

Links:

Lots of Documentation and Downloads:
http://www.ibm.com/software/tivoli/products/sys-auto-linux/downloads.html

Version 2.1 Redbook:
http://www.redbooks.ibm.com/redbooks/pdfs/sg247117.pdf

Main Product Page:
http://www-306.ibm.com/software/tivoli/products/sys-auto-multi/index.html

Wednesday, April 16, 2008

Installing BIRT on Windows Vista?

Well, installing BIRT is a no-brainer. Just download the BIRT all-in-one from eclipse.org and unzip it. Very simple, right? Prepare for a bit of a surprise when you do it on Windows Vista. While trying to unzip on my Windows Vista computer, Vista displayed the following error:


"The source file name(s) are larger than is supported by the file system. Try moving to a location which has a shorter path name or try renaming to shorter name(s) before attempting this operation."


This error occurs due to the 255-character limit on path names, and C:\users\administrator\downloads\birt-report-designer-all-in-one-2_2_2.zip alone takes about 75 characters! This does not leave much room for the files within the zip, causing the above error.


The solution is very simple. Rename the zip file to something like b.zip and move it up to the root directory (C:\). Now try extracting it; it works like a charm.


And don't ask me how the heck this worked in Windows XP and NOT in Vista? :)

Thursday, April 3, 2008

Some other useful VMWare tips

Since Venkat posted some nice tips for VMWare, I thought I would add a couple of my own.

Pre-allocate Disk
There are significant performance increases from pre-allocating the disk. Obviously this takes more space, but it can really save you from a fragmented image, and it can also save having to defragment all the time.

Split Disk
When building the image, make sure to split the disk into 2 GB files.

Run VM image from separate physical drive than the host
The host drive is going to be busy enough; if you have a second drive, use it! Also, if you are using a laptop, there is a very good chance that the internal drive is a 4200 or 5400 RPM drive. These are really slow, too slow to run anything other than one VM that you are only going to use Notepad on.

External Drive Solutions
I have tried a couple of different drives running through USB, but this really is not much better than running on the host drive; USB is too slow for this type of use. You could use FireWire, but not many x86 machines have it. One of the newer things to come out is eSATA (http://en.wikipedia.org/wiki/Serial_ATA#eSATA_in_comparison_to_other_external_buses). I purchased a 7200 RPM 2.5" drive with an eSATA enclosure and an eSATA PCMCIA card for about $250.00. This works without an external power supply, which is great.

Here are the parts
- SEAGATE 160GB Momentus 7200.2 SATA Notebook HDD w/ G-Force Protection, NCQ, 8MB Cache
- Vantec eSATA 2-Port SATA PCMCIA Adapter (there is also an express card version)
- Vantec Nexstar 3 eSATA Aluminum 3.5" Enclosure, eSATA & USB2.0

This drive gets about 55MB/s transfer versus my internal 5400 RPM drive at about 25MB/s.

Image Snapshots
Snapshots are supposed to cause only a very minor amount of performance impact on the VM image, but if you don't need a snapshot, get rid of it.

Disk Space
Make sure there is more than 30% disk space available on the physical disk. There are many write-ups on this topic stating that when you drop below 30% free, there is a significant slowdown in performance.

Defrag
Make sure to defragment the physical drive regularly, along with defragmenting the VM drive.

A couple links for your viewing pleasure
http://www.virtualization.info/2005/11/how-to-improve-disk-io-performances.html

http://www.scribd.com/doc/281249/Vmware-server-tips-tricks-vmworld-2006

Hope this helps.

Martin

Wednesday, April 2, 2008

Some useful VMware parameters

If you are running VMware Server or Workstation, you will find the following VMware parameters useful.

mainMem.useNamedFile = "False"

This parameter is helpful if you have a slower disk, like a flash drive, or drivers like ntfs-3g.

tools.syncTime = "TRUE"

Syncs the VM's clock with the host. Helpful to avoid time slips while running VMs.

MemTrimRate = "0"

VMware, by default, trims unused VM memory and gives it back to the host OS. This parameter disables that behavior. While this results in increased memory usage per VM, it also improves VM performance.

Sched.mem.pshare.enable = "False"

Disables page sharing across VMs. This is again useful on systems with a large amount of memory.

prefvmx.minVmMemPct = "100"

If you have a huge amount of memory, you might want to pre-allocate all of each VM's configured memory. This gives a good performance boost, but obviously limits the number of VMs you can run on the host.
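For convenience, here is the whole set in one place. As I understand it, the first four are per-VM settings that go in the .vmx file, while the prefvmx setting is a host-level preference (for example config.ini on Windows hosts or /etc/vmware/config on Linux); double-check the file locations for your product and version.

==============================================
# Per-VM settings - add to the virtual machine's .vmx file
mainMem.useNamedFile = "False"
tools.syncTime = "TRUE"
MemTrimRate = "0"
Sched.mem.pshare.enable = "False"

# Host-level preference - goes in the host config file, not the .vmx
prefvmx.minVmMemPct = "100"
==============================================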

Hope you find these parameters useful.

Tuesday, April 1, 2008

Strange errors with the Warehouse Proxy Agent if you have a bad initial configuration

I was just installing all of the pieces of ITM 6.2, ITCAM for Web Resources 6.2, and ITCAM for Response Time 6.2 on Windows with DB2 9.1, and I ran into a painful problem: the WPA trace log had these errors:

(Tuesday, April 1, 2008, 3:35:20 PM-{9AC}khdxdbb.cpp,2360,"checkUTF8ClientEncoding") Database client encoding is not UTF8.
+47F29C88.0003 You need to set the OS environment variable DB2CODEPAGE=1208 for DB2 or
+47F29C88.0003 NLS_LANG=_.AL32UTF8 for ORACLE
(Tuesday, April 1, 2008, 3:35:20 PM-{9AC}khdxdbb.cpp,1728,"createStatusLogTable") Table WAREHOUSELOG exists already
(Tuesday, April 1, 2008, 3:35:20 PM-{9AC}khdxdbb.cpp,1735,"createStatusLogTable") Index WHLOG_IDX1 on WAREHOUSELOG exists already
(Tuesday, April 1, 2008, 3:35:20 PM-{9AC}khdxdbb.cpp,1746,"createStatusLogTable") Index WHLOG_IDX2 on WAREHOUSELOG exists already
(Tuesday, April 1, 2008, 3:35:20 PM-{9AC}khdxdbb.cpp,1757,"createStatusLogTable") Index WHLOG_IDX3 on WAREHOUSELOG exists already
(Tuesday, April 1, 2008, 3:35:20 PM-{9AC}khdxbase.cpp,250,"setError") Error 219/3/-443(FFFFFE45)/0 executing SQLColumns
(Tuesday, April 1, 2008, 3:35:20 PM-{9AC}khdxbase.cpp,266,"setError") Error "[IBM][CLI Driver][DB2/NT] SQL0443N Routine "SYSIBM.SQLCOLUMNS" (specific name "COLUMNS") has returned an error SQLSTATE with diagnostic text "SYSIBM:CLI:-727". SQLSTATE=38553
+47F29C88.0009 "
(Tuesday, April 1, 2008, 3:35:20 PM-{9AC}khdxdbb.cpp,792,"initializeDatabase") Initialization with Datasource "ITM Warehouse" failed
(Tuesday, April 1, 2008, 3:35:20 PM-{9AC}khdxsrvc.cpp,722,"testDatabaseConnection") testDatabaseConnection failed
(Tuesday, April 1, 2008, 3:35:20 PM-{9AC}khdxsrvc.cpp,653,"setupExportServer") A retry will be attempted in 10 minute(s).
(Tuesday, April 1, 2008, 3:35:20 PM-{9AC}khdxsrvc.cpp,654,"setupExportServer") Please check the troubleshooting guide for any error not described above.

And that meant that no data was getting into the warehouse.

The root of the problem was that I had NOT set the environment variable DB2CODEPAGE=1208 before I started installing, so at that point some tables were created incorrectly. To fix the problem, I had to:

1. Stop the WPA

2. Drop the UTF8TEST and WAREHOUSELOG tables from the WAREHOUS database (don't worry - the WPA recreates them when it starts back up)

3. Restart the WPA

And at this point, all was well.
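For reference, here is a minimal sketch of step 2 from a DB2 command window (db2cmd). The ITMUSER schema is an assumption; use whichever user owns your warehouse tables:

==============================================
REM Assumes DB2CODEPAGE=1208 is now set as a system environment
REM variable, per the fix above, so the recreated tables are correct.
db2 connect to WAREHOUS
db2 drop table ITMUSER.UTF8TEST
db2 drop table ITMUSER.WAREHOUSELOG
db2 connect reset
==============================================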

Tuesday, March 25, 2008

Customizing ITM 6.1 tep.jnlp to work with Java 1.5

The default tep.jnlp file works great in certain environments, and not at all in others. Basically, it works if you only have Java 1.4.2 installed, or if you have both Java 1.4.2 and Java 1.5 installed. But things don't work so well if you have Java 1.6 installed. This change fixes that problem.

What I did was change the line that reads:

<j2se version="1.4+" size="64m" args="-showversion -noverify">

to the following:

<j2se version="1.5*" size="64m" args="-showversion -noverify">

If you save this file with a new name, you must also update the line in the file that references itself (i.e. change 'href="tep.jnlp"' to 'href="newfilename.jnlp"').
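Putting the two changes together, the top of a renamed copy might look something like this. This is only a sketch: the codebase value is a placeholder for whatever your original tep.jnlp contains, and everything else stays as shipped.

newfilename.jnlp (fragment)
==============================================
<jnlp spec="1.0+" codebase="http://yourtepshost:1920///cnp/kdh/lib" href="newfilename.jnlp">
  ...
  <j2se version="1.5*" size="64m" args="-showversion -noverify"/>
  ...
==============================================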

And I found that ONE of these two files (the original or the changed one) will work for almost all users. They just need to try one, and if that doesn't work, try the other. The problem is that the Java 1.4.2 browser plugin doesn't support the changed syntax, so you can just tell your users to try the first, then the second.

Edit: If a user only has Java 1.6 installed, the changed file will work and will automatically download and install Java 1.5.

Friday, March 21, 2008

GBSCMD V3.5 new features

In the past, we have posted a few articles about gbscmd. We have updated the tool with a few additional features, such as clearing offline entries, executing commands on agents, and refreshing the TEC EIF integration component. This article explains how to use these features.

Clearing Offline Entries


This is one of the long-requested features. Whenever servers are decommissioned from the monitoring environment, one has to remove them by selecting "Clear Offline" from the TEP. However, there is no CLI equivalent for this function in tacmd, which is a handicap if you want to automate the decommissioning process. You can use the gbscmd clearoffline feature to perform this function from the command line.

Executing commands on agents


I wrote an earlier blog article about this feature. It is very handy for running non-interactive commands on a remote managed system; however, it does not capture the output of the command.

Refreshing TEC EIF Adapter component


If your site relies on tecserver.txt for setting situation severities or updates the MAP files frequently, any change to these files requires a hub TEMS restart. Instead, you can use the refresheif feature of gbscmd to cycle the EIF component alone.

Examples


The following example restarts a Windows OS agent.
gbscmd command --auth ~/itm61.auth --command "net stop kntcma_primary && net start kntcma_primary" --managedsystem Primary:ITM61:NT

The following example restarts a Unix OS agent.
gbscmd command --auth ~/itm61.auth --command "/opt/IBM/ITM/bin/itmcmd agent stop ux;/opt/IBM/ITM/bin/itmcmd agent start ux" --managedsystem uxitm61:KUX

The following example clears an offline entry, which can be handy when servers are decommissioned from the monitoring environment.

gbscmd clearoffline --auth ~/itm61.auth --managedsystem uxitm61:KUX
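The following example refreshes the TEC EIF Adapter component. (This is my assumption of the syntax, based on the same --auth pattern as the other subcommands; check the gbscmd manual for the exact usage.)

gbscmd refresheif --auth ~/itm61.auth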

More Info


You can download the manual from here. Please contact Tony (Tony.Delgross at gulfsoft.com) if you have any questions or need a copy of gbscmd.

Saturday, March 15, 2008

TPM SPB Import Automation Package

Posted by: martinc on Mar 06, 2008 - 04:07 PM

With TPM it is possible to import SPBs with the TCM integration, the Software Package Editor, or manually from the Web UI. I wanted to add another option.

What happens if you have a bunch of SPBs that you want to import and you do not want to use the integration? Use the SPE or Web UI? That would take forever!

So I thought I would whip up a quick automation package to do just that. Click here to download the automation package.

The package is made up of three workflows:

GBS_FindAllSpb.wkf
Function: Retrieves a list of all SPBs based on the FileRepository rootPath and the RelSearchPath.
Arguments:
  • FileRepositoryID - The ID of the file repository to use for the search. (Yes, this could be changed to the name if you want.)
  • RelSearchPath - The path to search, relative to the FileRepository. For example:
    if you enter "/" (no quotes), this will search the rootPath of the file repository and all sub-directories for SPB files;
    if you enter "/ActivePerl" (no quotes), this will search the rootPath of the file repository and all sub-directories under ActivePerl for SPB files.
Calls: GBS_GetSpbVariables.wkf

GBS_GetSpbVariables.wkf
Function: Retrieves the various package attributes used to define the Software Module to TPM.
Arguments:
  • SpbRelPath - The relative path to the SPB found in GBS_FindAllSpb.
  • SpbFileName - The file name to retrieve the package attributes from.
  • FileRepositoryID - The ID of the file repository to use for the root path.
Calls: GBS_ImportSPB.wkf

GBS_ImportSPB.wkf
Function: Using the variables passed in from GBS_GetSpbVariables.wkf, creates an XML file in the same directory as the SPB and then uses the built-in workflow DcmXmlImport to import the XML file.
Arguments:
  • SpName - Value retrieved from the Name field in the SPB.
  • SpVersion - Value retrieved from the Version field in the SPB.
  • SpVariables - Value(s) retrieved from the default_variables field in the SPB.
  • SpbRelPath - The relative path to the SPB.
  • SpbFileName - The file name of the SPB.
  • FileRepositoryID - The ID of the file repository to use for the root path.

Extra Notes

It is also possible to run just GBS_GetSpbVariables.wkf if there is only one SPB to define. Pass the required fields into the workflow execution and only that SPB will be defined.

With this template, it is very easy to make changes to suit your environment. Any comments or feedback are appreciated.

Martin Carnegie
TPM 5.1.0.2 FP0003 and 5.1.1 FP0001 are available!

Posted by: martinc on Mar 05, 2008 - 04:54 PM

Fix Pack 3 for TPM 5.1.0 and Fix Pack 1 for TPM 5.1.1 are now available! So what do they do?

The most important thing is that Fix Pack 3 allows current 5.1.0 users to upgrade to 5.1.1. In order to do this, you must first be at 5.1.0 FP0002; when the install is done, you will be at 5.1.1.1. Fix Pack 1 likewise brings 5.1.1 up to 5.1.1.1.

5.1.0 - FP0003 readme
5.1.0.2-TIV-TPM-WIN-FP0003.README.HTM

5.1.1 - FP0001 readme
5.1.1-TIV-TPM-WIN-FP0001.README.HTM

IBM Automation Package for IBM Tivoli Monitoring
There is supposed to be an updated automation package on OPAL, but I do not see it yet. I am very curious to see what this one offers, as the old one was, um, lacking.

VMware Virtual Infrastructure 3 support
I know that there have been a few people asking about this one.

Automation package for management of large numbers of zones, regions, and depot servers
This is great! Nothing is more tedious than manually creating these.


Here is the list of defects for each version:

5.1.0
5.1.0-TIV-TPM-FP0002.DEFECTS.HTM

5.1.1
5.1.1-TIV-TPM-FP0001.DEFECTS.HTM

Unfortunately, the one piece still missing is the Vista agent.

I will get these downloaded and see how the install goes. I am keeping my fingers crossed :)

Edit:
To download, go to:
Tivoli Provisioning Manager Fix Pack 5.1.1-TIV-TPM-FP0001
Tivoli Provisioning Manager Fix Pack 5.1.0.2-TIV-TPM-FP0003

TPM 5.1.1 Classroom Comments

Posted by: martinc on Mar 05, 2008 - 07:42 PM

I recently taught a custom class we developed for TPM 5.1.1 and thought I would provide some feedback.

Performance
First of all, I just want to say a word about performance. I taught almost the same class on 5.1.0 FP01 and on 5.1.1 using all the same hardware, and I was impressed with the performance. Some students in this class had also attended the class I taught a year ago on 5.1.0, and they stated that everything was running faster. In fact, I had more students this time, and there was almost no lag when running various tasks.

Comments on use/functionality
Web Replay - although this was not really part of the class, we had some extra time to look at it. We felt that the tool is interesting, probably good as a training aid and possibly for some tasks. Generally speaking, I (or anyone who has been using the interface for a while) can get things done faster than Web Replay can. It is still a good addition, though.

Computer information
The layout of the Computer information is very clean. The General tab displays a very good summary of important information about the computer.

Compliance and Remediation (also applies to patching)
The 3-step process layout makes creating a compliance check very easy.

Here is a snapshot of the Compliance tab for a group.

Notice the easy Steps 1, 2 and 3. Also, under Step 1 there is a quick checkbox to add the OS Patches and Updates check. Simple!

Discovery
The integration of SMB and SSH discovery in one discovery configuration is also a biggy. It seems minor, but it is a pain to set up two separate discovery scans that hit the same IP addresses.

Another nice feature of discovery is that it will populate the operating system name. To do this in previous versions, you either had to run an inventory scan or add it manually. This saves some time.

Depot (CDS) Installation
In 5.1.0, in order to install a depot on a system that was already a TCA, you had to first uninstall the TCA and then run the depot install. In 5.1.1, this has been fixed: the install will recognize that the TCA is already installed and only install the depot subagent.

SOA Rename
One fun thing while developing the course: I noticed that everything was changed from SOA to SDI, which stands for Scalable Distribution Infrastructure. That name makes a lot more sense for the function than SOA did.

Other
Just some acronyms to make your day:

CAS – Common Agent Services
CDS – Content Delivery Service
CIT – Common Inventory Technology
DCM – Data Centre Model
DMS – Device Management Service
JES – Job Execution Service (also referred to as DMS)
SAP – Service Access Point
SDI – Scalable Distribution Infrastructure
SIE – Software Installation Engine
SOA – Service Oriented Architecture
SPE – Software Package Editor
TPM – Tivoli Provisioning Manager
TPMfOSD – TPM for Operating System Deployment
TPMfOSDEE – TPMfOSD Embedded Edition
TPMfSW – TPM for Software
WSDL – Web Services Description Language

ITM 6.2 Workspace Parameter - more portal tuning

Posted by: jlsham on Mar 01, 2008 - 05:00 AM

If you have more than 200 of any one type of agent, you start to see messages in your views saying that the number of systems being queried is too high.
Well, in your cq.ini, look for:

#KFW_REPORT_NODE_LIMIT=xxx

Uncomment this line and set it to a more meaningful value for the number of agents you need to query.
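For example, with around 500 agents of a given type, the uncommented line might look like this (500 is purely an illustrative value; pick something above your largest agent count):

cq.ini (fragment)
==============================================
KFW_REPORT_NODE_LIMIT=500
==============================================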

Restart the portal and you're done.

Modified applet.html file to resolve one TEP IE browser connection error

Posted by: napomokoetle on Feb 18, 2008 - 05:11 PM

This short blog is related to one I wrote a while ago titled "Upgrade from ITM6.1 FP6 to ITM6.2 may break logon through IE TEP".
I had not posted the modified applet.html file with that original blog because IBM/Tivoli had insisted the problem was unique to the environment I was working on at the time.




Well, I guess the error has proven not to be unique to that environment, since other folks keep seeing the same problem and requesting the modified applet.html file. Posting the file here will save me from searching all over for it every time I am asked.

TO RECAP:

Environment:
Upgraded from ITM6.1 FP6 to ITM6.2 RTM:

TEPS & TEMS installed on the same W2K3 host (dual 3.06 GHz Intel Xeon, hyper-threaded) with 3808 MB RAM.

PROBLEM: From a remote XP host, TEP in an IE 6.0.2800 browser reports a “cannot connect” error.
When I clicked on the Java WebStart-created icon, I got the same error as in the IE browser.

Plugin150.trace reports:

(475405aa.0c474f80-(null)AWT-EventQueue-2:Bundle,0,"Bundle.Bundle(String,String,Locale,String)") Resource bundle: id = kjri, baseName = candle.kjr.resources.KjrImageBundle, actual locale used:

(475405b4.10ed7f00-(null)Thread-10:DataBus,0,"DataBus.make()") EXCEPTION: Unable to create DataBus object: org.omg.CORBA.INITIALIZE: Could not initialize (com/borland/sanctuary/c4/EventHandler) unexpected EOF at offset=0 vmcid: 0x0 minor code: 0 completed: No

(475405b4.10ed7f00-(null)Thread-10:QueryModelMgr,0,"QueryModelMgr.QueryModelMgr()") EXCEPTION: InstantiationException --> Unable to instantiate DataBus object

(475405b4.10ed7f00-(null)Thread-10:QueryModelMgr,0,"QueryModelMgr.make()") EXCEPTION: InstantiationException --> Unable to instantiate QueryModelMgr object

(475405b4.10ed7f00-(null)Thread-10:LogonDialog,0,"LogonDialog.processOK()") EXCEPTION: Unable to connect to CandleNet Portal server: java.net.ConnectException

(475405b6.33611380-(null)AWT-EventQueue-2:CNPClientMgr,0,"CNPClientMgr.terminate(CNPAppContext,boolean)") Performing normal exit of TEP Workplace

(475405b6.33611380-(null)AWT-EventQueue-2:UserManager,0,"UserManager.loadPermissionXRef()") Error loading User Permission Cross-Reference tables: KFWITM219E Request failed during creation.





Solution:
After multiple cycles of troubleshooting the error with IBM, they eventually made a couple of changes to applet.html for me to try out. Please make a backup copy of your current applet.html, then replace the version found in your ..\cnb directory with the attachment on the link below. If this makes no difference in your testing, you will need to open an ITM 6.2 PMR with IBM/Tivoli so that support can work the issue to a successful conclusion with you.

Click here to download the modified applet.html

Adios,
J. Napo Mokoetle