Tuesday, December 21, 2010

Adding a custom Java portlet to the TIP for use with WebTop, WebGUI, TBSM, etc.

REMOVED


This article has been removed because it worked for very specific versions of products, but not for the latest versions, and customers were calling Tivoli support to open PMRs when it didn't work. If you're trying to add a portlet, this is CUSTOM work, for which you should not open a PMR.

So I'll revisit this issue when I have time to test with the latest and greatest versions. Until then, if you would like the article, you can send me an email at frank dawt tate aat gulfsoft dot com.

Thursday, October 7, 2010

Using Tivoli Software Package Blocks in BigFix Enterprise Server v8 – Part 2

Now that the Disconnected Command Line (DCLI) is in place, it is time to start defining the SPBs.

At a high level the steps are:

  1. Create a site for Tivoli Software Packages and set the relevance
  2. Use the Software Distribution Wizard to import the SPB
  3. Modify the new task to use the correct syntax (wdinstsp)

I will not cover the creation of SPBs in this post; if you are interested in using BigFix for SPBs, you are probably already familiar with creating them.


Create Tivoli Software Packages Site

For this example, I found it best to create a dedicated site for the SPBs so that we can also set the subscriptions using relevance that checks for the existence of the DCLI.

  1. Navigate to the Systems Lifecycle domain
  2. Navigate to All Systems Lifecycle > Sites
  3. Right click in the List Panel and select “Create Custom Site…”
  4. Set the name to “Tivoli Software Packages”. Press the OK button
  5. Click on the “Computer Subscriptions” tab
  6. Select the “Computers which match the condition below”
    1. Set the property to “Relevance Expression”
    2. Set the operator to “is true”
    3. Press the “Edit Relevance…” button and enter the text exists file "C:\Program Files\Tivoli\disconn\w32-ix86\classes\swdis_env.bat". Press OK
  7. Press the Create button and enter your password

Use the Software Distribution Wizard to import the SPB

For this example, I am using a simple software package that deploys the Orca.msi. This was created using the Software Package Editor with the MSI Wizard.

I have also been doing some work with Sha1.exe (http://support.bigfix.com/fixlet) and BfArchive.8.0.0.exe (http://support.bigfix.com/cgi-bin/kbdirect.pl?id=452), which allow for the use of sha1 values.
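Once you have a sha1 value for an SPB, it can be used in a prefetch action line so that the download is integrity-checked. A minimal sketch, with hypothetical hash, size and URL values:

prefetch orca.spb sha1:0123456789abcdef0123456789abcdef01234567 size:1234567 http://bigfix.example.com/uploads/orca.spb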


Using the Wizard to create the task

  1. Navigate to Systems Lifecycle > All Systems Lifecycle Wizards
  2. Click on Windows Software Distribution Wizard
  3. Set the application name to Orca and press the Next button
  4. Select the File option and browse to the SPB file. Press Next
  5. Set the desired platforms. Press Next
  6. Set the target relevance to use the Registry Key: "HKLM\Software\Microsoft\Windows\CurrentVersion\Uninstall\{63A68338-16A3-4763-8478-A45F91A61E7A}". Press Next
  7. Leave the command line alone for now as this will be modified later. Press Next
  8. Review the summary and press Finish


Manually modify the task to use the wdinstsp command

  1. Set the “Create in site” to “Tivoli Software Packages”
  2. Click on the Actions tab
    1. Replace the “wait __Download\orca.spb” with the following lines (the batch file these lines generate is shown after this list)

appendfile call "c:\program files\tivoli\disconn\w32-ix86\classes\swdis_env.bat"

appendfile wdinstsp.exe -f __Download\orca.spb

copy __appendfile __Download\orca_install.bat

wait __Download\orca_install.bat

  3. Click the Relevance tab and verify the value. For this example, the value was (name of it = "WinXP" OR (name of it = "Win2003" AND NOT x64 of it)) of operating system AND (not exists key "HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\{63A68338-16A3-4763-8478-A45F91A61E7A}" of native registry)
  4. Click on the Properties tab and set the source to “Software Distribution SPB”. This is done just to create a separate folder for viewing “By Source”
  5. Press the OK button and enter the password
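For reference, the two appendfile lines above build a batch file that is copied to __Download\orca_install.bat and then executed. Its generated contents will be:

call "c:\program files\tivoli\disconn\w32-ix86\classes\swdis_env.bat"
wdinstsp.exe -f __Download\orca.spb

The call line sets up the DCLI environment (PATH and the speditor variables), and wdinstsp then installs the software package locally.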


Now that the task has been created, it is just a matter of taking action and deploying like any other task.


This takes care of the setup for deploying software package blocks using BigFix. There are a few other items that would need to be added to this to make it really production ready, but I cannot give away everything ;)


If you have any questions/comments, feel free to comment on this blog or email me at martin dot carnegie at gulfsoft dot com.

Monday, October 4, 2010

Using Tivoli Software Package Blocks in BigFix Enterprise Server v8 - Part 1

After doing some work with BigFix, I started investigating methods of implementing the use of SPBs in BigFix. After a little bit of trial and error, I developed a fairly simple way to achieve this.

At a high level, the steps are
1. Create a standalone copy of the disconnected command line (DCLI from now on)
2. Create a task to deploy the DCLI
3. Create baseline to deploy DCLI to desired targets
4. Create tasks to deploy SPBs and execute them with the DCLI


Create a standalone copy of the DCLI
The DCLI is a facility provided with TCM to give package builders the ability to test SPBs locally on a test system. It is used to make sure that a package behaves as desired without having to import it into TCM and use the framework to install. By using the DCLI, a package builder can make changes to a package and “redeploy” relatively quickly. Once a package installs with the desired effect via the DCLI, it is imported into TCM for further testing. Products such as TPM, TPMfSW and the now defunct TPMDSD/TEM used this same standalone method.

In order to create a standalone version, you will first need the Software Package Editor, as this contains the binaries required for the DCLI. You will also need the Tivoli Endpoint installed (a requirement for the SPE anyway), as this provides a couple of DLLs that are also required. Once you have the SPE installed, follow the instructions below to create the image (a scripted version of these steps appears after the list).

1. Create a directory called C:\Program Files\Tivoli\disconn
2. Copy the directory C:\Program Files\Tivoli\swdis\speditor\w32-ix86 to the directory created in step 1. Note that the swdis directory may be installed in a different directory.
3. Edit the file C:\Program Files\Tivoli\disconn\w32-ix86\classes\swdis_env.bat and set it to the following:
set INTERP=w32-ix86
set speditor_dir=C:\Program Files\Tivoli\disconn\w32-ix86\classes
set speditor_lib_path=C:\Program Files\Tivoli\disconn\w32-ix86\classes\..\lib
set speditor_bin_path=C:\Program Files\Tivoli\disconn\w32-ix86\classes\..\bin
set Path=%speditor_dir%;%speditor_lib_path%;%speditor_bin_path%;

4. Copy the following files from C:\Program Files\Tivoli\lcf\bin\w32-ix86\mrt to C:\Program Files\Tivoli\disconn\w32-ix86\lib. Note: this list was created based on testing of simple packages; there may be more DLLs required.
a. Libcpl60.dll
b. Libdes60.dll
c. Libguid60.dll
d. Libmem60.dll
e. Libmrt60.dll
5. Optional: cleanup extra binaries that are not required for the DCLI
a. In the C:\Program Files\Tivoli\disconn\w32-ix86\classes directory, remove all files except swdis_env.bat
b. Remove the C:\Program Files\Tivoli\disconn\w32-ix86\msg_cat directory
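If you build this image often, the copy steps above can be scripted. A minimal sketch in batch, assuming the default swdis and lcf locations used in the steps:

mkdir "C:\Program Files\Tivoli\disconn"
xcopy /E /I "C:\Program Files\Tivoli\swdis\speditor\w32-ix86" "C:\Program Files\Tivoli\disconn\w32-ix86"
rem Grabs all lib*60.dll runtime libraries; only the five listed above are strictly required
copy "C:\Program Files\Tivoli\lcf\bin\w32-ix86\mrt\lib*60.dll" "C:\Program Files\Tivoli\disconn\w32-ix86\lib"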

This will be the working copy of the DCLI that will be imported into BigFix. I have found other methods that can be used to import, such as using Winzip, but for now let’s stay with the file and folder import built into BigFix.

Create a task to deploy the DCLI


In the previous section the files that are required for the DCLI were identified and made ready for importing into BigFix. Now these tools need to be imported into BigFix and made ready for deployment.

In this section we will take the DCLI image we created and build a task out of it. The task will live under the Systems Lifecycle domain; we will then create a baseline to apply the task to the targeted computers.

1. Copy the DCLI files to the BES Server. For this example, they have been copied to C:\CID\disconn
2. Navigate to Systems Lifecycle > Wizards > All Wizards > Windows Software Distribution Wizard
3. Set the application name to Tivoli Disconnected Command Line. Press Next
4. Choose the “Folder” option and browse to (or type) C:\CID\disconn. Check the “Include Subfolders”. Press Next
5. Choose the operating systems you want to support. This should work for any Windows platform that TCM supports, but it was only tested on Windows XP and 2003.
6. Set the target relevance to use the File option and set to C:\Program Files\Tivoli\disconn\w32-ix86\classes\swdis_env.bat.
7. In the “Full command line”, leave the setup.exe value for now; this will be modified later.
8. Review the summary and press “Finish”

The wizard is now complete and the task will be displayed. From here, we need to make some custom modifications to extract the files/dirs and put them in C:\Program Files\Tivoli

1. In the Create Task dialog, select the Actions tab
2. Make the following changes:
a. Remove the line: wait __Download\setup.exe
b. Add the line: dos mkdir "C:\Program Files\Tivoli\disconn"
c. Add the line: dos move /y __Download\w32-ix86 "C:\Program Files\Tivoli\disconn"
3. Press the Edit button beside “Include custom success criteria”
4. Select “…the following relevance clause evaluates to false” and enter the string not exists file "C:\Program Files\Tivoli\disconn\w32-ix86\classes\swdis_env.bat". Press OK
5. Set the “Create in site:” to Master Action Site, and set the “Create in domain” to Systems Lifecycle. Note: the site could be made to something else, but for this example, we will just use the default. Press the OK button and enter the key password

The task is now created for deploying the Tivoli Disconnected Command Line to targets. Now that this is created, the next step is to deploy this task to the desired computers.


Deploying DCLI to targets

The task has now been created; what next? Well, we need to get it out to the targets so that we can then deploy SPBs. For this example, I will not be using any really complex targeting; I just want to get it out. Targeting is another discussion altogether (which we touch on when we deploy the SPBs). In my lab, my target computers all start with the name “win2kcli”, so this example will create a site to capture exactly those, then create a baseline to target all the computers subscribed to the site and apply the DCLI as a policy.

Create custom site
1. Navigate to Systems Lifecycle > All Systems Lifecycle > Sites > Custom. Right click in the List Panel and select Create Custom Site
2. In the Create Custom Site dialog, enter “All WIN2KCLI Computers”. Press OK
3. This will create the new site and display it in the List Panel. From here the subscription needs to be set. Select the “Computers which match the condition below”
a. Set the property to “Relevance Expression”
b. Set the operator to “is true”
c. Press the “Edit Relevance…” button and enter the text computer name as lowercase contains "win2kcli". Press OK
4. Press the “Save changes” button and enter password.

Now that the custom site is created, target computers will start appearing under the site’s “Subscribed Computers”.

Create Custom Group
In order to do the targeting for the baseline, a computer group needs to be created. This group will be assigned the same relevance as the site.

1. Navigate to Systems Lifecycle > All Systems Lifecycle > Sites > Custom > All WIN2KCLI Computers > Computer Groups
2. Right click in the List Panel and select “Create Automatic Computer Group”
3. Set the Name: All WIN2KCLI Computers CG
4. Create in site: All WIN2KCLI Computers
5. Create in domain: Systems Lifecycle
6. Set the relevance field to “Relevance Expression”
7. Set the condition to “is true”
8. Press the “Edit Relevance…” button
a. Enter: computer name as lowercase contains "win2kcli"
9. Press the Create button and enter your password

Create Baseline
The site has been created and the subscriptions set, now the baseline policy needs to be set to deploy the DCLI.

1. Navigate to Systems Lifecycle > All Systems Lifecycle > Sites > Custom > All WIN2KCLI Computers
2. Right click in the List Panel and select “Create New Baseline…”
3. Set the Name to “Deploy DCLI”
4. Set the Description to “Deploy Tivoli Disconnected Command Line”
5. Click on the Components tab
a. Set the Group Name to DCLI and press Save Group Name
b. Click on the “add components to group” link and press the Tasks tab
c. Navigate to All Tasks > By Source > Software Distribution Wizard and select Software Distribution – Deploy: Tivoli Disconnected Command Line and press OK
6. Click on the Relevance tab and select “Computers which match all of the relevance clauses below”. Set the clause to: not exists file "C:\Program Files\Tivoli\disconn\w32-ix86\classes\swdis_env.bat".
7. Set the Create in site to All WIN2KCLI Computers
8. Press the OK button and enter the password

Activate Baseline
With the new baseline created, it now needs to be activated. Since we need the DCLI on all of the computers, we set this action as a policy.

1. Navigate to Systems Lifecycle > All Systems Lifecycle > Sites > Custom > All WIN2KCLI Computers > Baselines
2. Select the “Deploy DCLI” baseline
3. Press the “Take Action” button
4. In the “Preset” field, set it to Policy
5. On the Target tab, select the option “All computers with the property values selected in the tree below”
6. Expand to All Computers > By Group and select All WIN2KCLI Computers
7. Press the OK button and enter the password

This takes care of the DCLI. This is currently a proof of concept, and I need to do some more testing to verify that I have set up the various properties/groups/etc. correctly. If you have any comments/suggestions, please post a comment on this blog.

For the next blog entry, we will take an SPB and import it into BigFix. Stay tuned; it will be posted in a couple of days.

Thursday, September 23, 2010

ITNM 3.8 NMAPScan Agent

Recent updates to IBM Tivoli Network Manager 3.8 introduced a new discovery agent that utilizes Nmap (Network Mapper) to provide some extra details about devices without SNMP access or certain types of end nodes. The extra information includes operating system type based on nmap’s OS fingerprinting capability along with port and service information.

This sounds great, but there are some serious drawbacks…

A. It’s slow. It’s a typical ITNM Perl-based agent that handles parallelism by spawning more instances of nmap to scan individual hosts, rather than utilizing the large-volume scanning capabilities inherent to nmap.

B. The required version of nmap is 4.85, and most enterprise *nix platforms still ship 4.0-4.11, so chances are you will need to acquire a recent version from the Nmap project page (http://nmap.org).

C. You can’t just turn the agent on. After you get an appropriate version of nmap installed, you have to edit $NCHOME/precision/disco/agents/perlAgents/NMAPScan.pl to uncomment and set the path to nmap:

my $nmapBinary = '/usr/bin/nmap';

D. The default nmap arguments do not work without root privileges (a sketch of the setuid workaround follows this list). If you do not want your ITNM processes running as root, you will need to adjust the scan settings in NMAPScan.pl, or make ncp_disco_perl_agent setuid root (chown root ncp_disco_perl_agent, then chmod u+s ncp_disco_perl_agent) and then modify root’s environment so that the ITNM Perl is used rather than the system Perl. Or you could just run as root.

E. The OS type value is really just a guess, and sometimes it is a little off. For example, CentOS 5 was identified as Gentoo.

F. Did I mention that it is slow?
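The setuid route from item D looks something like the sketch below. I'm assuming the binary sits under $NCHOME/precision/bin, so locate it first if your layout differs:

find $NCHOME -name ncp_disco_perl_agent      # locate the binary (path varies by install)
chown root $NCHOME/precision/bin/ncp_disco_perl_agent
chmod u+s $NCHOME/precision/bin/ncp_disco_perl_agent

Remember item D's caveat: with the setuid bit on, root's environment must point at the ITNM Perl rather than the system Perl.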

Here are some screen shots of examples of the information collected.

So what would be a good use of the NMAPScan agent? For starters, it can help classify NoSNMPAccess devices.

Consider this AOC file that defines the class Linux_NoSNMPAccess:

//*************************************************************
//
// File : Linux_NoSNMPAccess.aoc
//
//*************************************************************

active object 'Linux_NoSNMPAccess'
{
    super_class = 'NoSNMPAccess';
    instantiate_rule = "ExtraInfo->m_OSType LIKE '.*Linux.*'";
    visual_icon = 'NoSNMPAccess';
};

With this solution, it is possible to create buckets to drop your devices into, providing the ability to (at a minimum) do ping polling via a class filter without pinging devices you couldn't care less about.

Saturday, September 18, 2010

Adding web pages to WebTop/TBSM/ITNM

You can add any web page you want to be protected by WebTop/TBSM by putting that file under the directory:


INSTALL_DIR/tip/profiles/TIPProfile/systemApps/isclite.ear/Webtop.war/

accessed via:

https://hostname:16316/ibm/console/webtop/filename

Now, where it gets pretty exciting is that WebTop 2.2 and above are hosted on WebSphere, a full-fledged app server, which supports JSP pages (these basically let you write server-side Java code to do anything you want, PLUS output HTML). An example JSP file can be found here:

http://www.java2s.com/Code/Java/JSP/Printtherequestheadersandthesessionattributes.htm

Just put that file into the above directory and you'll see all of the session and request information available to your JSP.

Have fun.

Friday, September 3, 2010

Tivoli Common Reporting 1.3 - Framework Manager Installation

The latest and greatest TCR product, as previously noted, contains both the BIRT and Cognos reporting engines, and you are free to develop reports in either format. To develop custom reports in BIRT, you use the Eclipse-based report designer. How about Cognos? For Cognos, you develop reports using the browser-based Report Studio tool. End users with the right permissions can then modify the reports according to their needs. However, there is one limitation: Report Studio can operate only on already-published Cognos packages. So, the question is, how do you publish new packages containing the data model for your reports? That is where Framework Manager comes into the picture. Framework Manager lets you define your own data sources, Query Subjects, etc., and package these definitions into a Cognos package, which is then published to the TCR server.
 

Prerequisites

 

Before installing Framework Manager, you will need the following.

 

1. A Windows box to install Framework Manager on. It is NOT supported on other operating systems.

2. Necessary ODBC data sources or database client software installed on the Windows system. You can't use JDBC type-4 drivers.

 

Installation and Configuration

 

Installing Framework Manager is pretty straightforward. The installation media contains a separate folder named "CognosModeling"; you basically run issetup.exe from the win32 subdirectory to install it. However, you need to perform a couple of easy post-install configuration steps to get the product to work. The steps are given below.

 

1. Bring up Cognos Configuration under Programs -> IBM Cognos 8 -> IBM Cognos Configuration. Note: there is another "IBM Cognos Configuration" under Tivoli Common Reporting. Do not make changes to that one, as it will break the TCR product.

2. Select the Environment Group under Local Configuration.

3. Change the Gateway URI property to https://<tcrserver>:16316/tarf/servlet/component

4. Change the "Dispatcher URI for external applications" to http://<tcrserver>:16315/tarf/servlet/dispatch

5. Again, the above values are for a default TCR installation (assuming 16316 is HTTPS and 16315 is HTTP). You can double-check by bringing up the "IBM Cognos Configuration" under Tivoli Common Reporting and comparing the property values for "Gateway URI" and "External dispatcher URI" with the above values.

 

Once you have configured these values, you should be able to bring up Framework Manager, create a new project, and sign in with your TIP id (e.g. tipadmin) to create your custom data model.

 

Hope this helps.

Friday, August 27, 2010

Getfile, Putfile, Executecommand - added in ITM 6.2.2 fp02

To enable this functionality, check out Venkat’s earlier post.

GETFILE:
{-m|--system} SYSTEM
{-s|--source} REMOTE_FILE
{-d|--destination} LOCAL_FILE
[{-t|--type} TRANSFER_MODE] - Either text or bin
[{-f|--force}] - Forces an overwrite of the file if it exists

Example:
tacmd getfile -m itmtest:LZ -s /tmp/itmtest.log -d /base_logs/itmtest_08272010.out -t text -f
--------------------------------------------------------------


PUTFILE:

{-m|--system} SYSTEM
{-s|--source} LOCAL_FILE
{-d|--destination} REMOTE_FILE
[{-t|--type} MODE] - Either text or bin
[{-f|--force}] - Forces an overwrite of the file if it exists

Example:

tacmd putfile -m itmtest:LZ -s /automationscripts/testing.sh -d /tmp/testing.sh -t text -f

NOTE: When transferring the file, it sets the permissions of the file to “666”.

--------------------------------------------------------------


EXECUTECOMMAND:

tacmd executecommand
{-m|--system} SYSTEM
{-c|--commandstring} COMMAND_STRING
[{-w|--workingdir} REMOTE_WORKING_DIRECTORY}]
[{-o|--stdout}]
[{-e|--stderr}]
[{-r|--returncode}]
[{-l|--layout}]
[{-t|--timeout} TIMEOUT]
[{-d|--destination} LOCAL_STD_OUTPUT_ERROR_FILENAME]
[{-s|--remotedestination} REMOTE_STD_OUTPUT_ERROR_FILENAME]
[{-f|--force} FORCE_MODE]
[{-v|--view}]

Example:

tacmd executecommand -m itmtest:LZ -c "/tmp/testing.sh" -o -e -r -l -v


NOTE: You will need to parse the STDOUT from this command to get the remote command's STDERR, STDOUT, return code, etc.

Monday, August 23, 2010

Tivoli Common Reporting 1.3 Overview

The Tivoli Common Reporting product, introduced about three years ago, fills in the reporting requirements for various Tivoli products. It is tightly integrated with products such as Tivoli Monitoring, Maximo, Tivoli Provisioning Manager and Access Manager. A new version of this product is now available, called Tivoli Common Reporting for Asset and Performance Management 1.3.
 
I personally think the version number increment is misleading. This product introduces a number of changes and features that should warrant a 2.x instead of a 1.x. Anyway, here are a few notes about this TCR product.
 
Report Engine
 
The TCR 1.3 product uses two report engines for report generation. The BIRT engine (2.2.1) used by the older TCR product continues to be available in the newer version, so reports developed using TCR 1.1.x or 1.2 will continue to work correctly. However, this version of TCR also gives you the choice of developing reports using Cognos. There are some very cool features available in Cognos reporting, such as on-the-fly report creation by end users and a browser-based Report Studio for designing reports. While Cognos reporting is powerful, it also introduces its own complexities, such as a separate data modeling phase and additional components for report design and generation.
 
Distributed Installation
 
The new TCR product supports distributed installation, meaning TCR components (especially the Cognos-related components) can be installed across multiple systems for better performance. If you have an existing Cognos setup, TCR can be integrated with your Cognos BI environment to leverage the expertise you already have.
 
Cognos Components
 
In addition, the Cognos piece introduces quite a few additional components, such as Framework Manager for data modeling, Report Studio and Query Studio for report and query design, and the Cognos Administration page for administering Cognos report packages. Please stay tuned for an upcoming article on Cognos Reporting for more details on Cognos BI Reporting terminology.
 
Report Emailing and Scheduling
 
While report scheduling was already available in TCR 1.2, the new version provides a report emailing feature as well. However, the feature is limited to Cognos reports ONLY. With Cognos, you can even schedule AND email reports periodically, a very powerful and useful feature. This is not available for BIRT reports, but it can be custom developed for BIRT using scripting.
 
Please stay tuned for more articles related to TCR 1.3 and Cognos.

Wednesday, July 28, 2010

BIRT Tip: Dependent Parameters in your reports

In BIRT, report parameters are a highly powerful (and easy!) way to customize a report to end user needs. Report parameters provide runtime values, such as start and end dates or the specific item for which the report needs to be run, based on user input. While this is very useful, some reports need multiple parameters that have a dependent relationship among them. For example, if your report needs to list customers in a specific state in a specific country, a "regular" report parameter will not meet your requirements, because you will need to dynamically adjust the list of states available for selection based on the country the user chooses. How can you do this in BIRT? Read on.


In BIRT, you will need to use the "Cascading Parameters" to define the relationship between the parameters. Continuing the above example, let us take the CUSTOMERS table in BIRT CLASSICMODELS sample database.






  1. Create a new dataset called COUNTRIES to select all unique Country information. e.g. A sample SQL is "SELECT distinct COUNTRY from CUSTOMERS".

  2. Create a new dataset called StatesInCountry to select all unique State information for a given country. e.g. A Sample SQL is "SELECT distinct STATE from CUSTOMERS WHERE COUNTRY = ?". Leave the parameter binding for the first parameter (param_1) to NONE for now. We will change it later.

  3. Create a new Cascading Parameter Group called "paramStates". Choose "Multiple Dataset" option as we will have to select information from two datasets created above.

  4. Click on the Add button in the "New Cascading Parameter" dialog to add a new cascading parameter. Enter the name for the cascading parameter as "paramCountries", choose the COUNTRIES dataset created in Step 1 as the dataset, and select the "COUNTRY" field for value and display text.

  5. Click on Add button again to add the state cascading parameter. Enter the name for the cascading parameter as "paramStatesInCountry", choose the StatesInCountry dataset created in Step 2 as the dataset and select the "STATE" field for value and display text.

  6. Click Ok to close "New Cascading Parameter" dialog. Please see the attached screenshot containing the Cascading Parameter Group information.


  7. Now edit the StatesInCountry dataset created in step 2, go to the Parameters section, edit the param_1 parameter binding, and set the Linked To Report Parameter value to the "paramCountries" cascading parameter created in step 4. Click OK to save the changes.

Now your reports are ready to use the cascading parameters. If you run the report now, whenever you change the COUNTRY in the parameter selection, the available STATE selection changes accordingly. The attached screenshots show the state information for USA and Australia.




PS: It is easier to do these things in BIRT than to describe how to do them. I have tried my best to explain the steps involved. If you have questions or need clarification on the above steps, please feel free to write to me.

Wednesday, July 14, 2010

5 Things You Didn't Know About Java series from Developerworks

This is part 2, on Performance Monitoring. It deals with newer releases, so it won't be directly applicable to all environments, but I think it's very useful information:



Tuesday, June 29, 2010

Create a virtual data center with POWER7 and IBM Tivoli Provisioning Manager

Have you ever wondered how to bundle together data center resources? Do you ever have to manually deploy and configure your servers, operating systems, middleware, applications, storage and networking devices? They can be managed as a single entity using physical and virtual IBM servers. In this article, you will learn what a virtual data center is, how to create one using POWER7 VMControl and IBM Tivoli Provisioning Manager, and how to use a virtual data center to manage your IT systems and virtualization technologies from a single point of control. In the process, we'll show you an example of how you can use the Tivoli product for patch management, which is one of the most difficult tasks to manage in a large server farm.

5 things you didn't know about ... Java performance monitoring, Part 1

Blaming bad code (or bad code monkeys) won't help you find performance bottlenecks and improve the speed of your Java applications, and neither will guessing. Ted Neward directs your attention to tools for Java performance monitoring, starting with five tips for using Java 5's built-in profiler, JConsole, to collect and analyze performance data.


Friday, June 18, 2010

HTML 5 TEP Interface

I think this is where I will focus any free time. This should allow mobile browsers to render the interface, and it should be a lot quicker.

Thursday, June 17, 2010

Never watch 8 episodes of the Sopranos

This is not technical, just a word of advice. I watched 8 episodes of the Sopranos on DVD back to back. I started swearing at truly inappropriate moments. Probably good to throw a Brady Bunch episode into the mix.

Wednesday, June 16, 2010

How to enable file transfer feature in ITM 6.2.2 FP 2

One of the coolest and most long-awaited features introduced in ITM 6.2.2 FP2 is file transfer. Using it, you can transfer monitoring scripts, dependency files and config files down to the monitored agent. ITM provides the necessary CLI to transfer files to/from the agent using "tacmd getfile" and "tacmd putfile".


However, this feature is not enabled in the TEMS by default. To enable it, follow the steps below (a scripted version appears after the list).


1. Edit $CANDLEHOME/config/*_ms_*.config

2. Add the following environment variable at the end of the config file.

KT1_TEMS_SECURE='Y'

3. Restart the TEMS.

4. Now run "tacmd login" followed by "tacmd getfile" or "tacmd putfile".
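For a Unix TEMS, the whole procedure in shell form looks something like this. It is only a sketch: the config file name and TEMS name here are hypothetical, so substitute your own.

cd $CANDLEHOME/config
echo "KT1_TEMS_SECURE='Y'" >> itmhost_ms_HUB_ITM62.config   # hypothetical <host>_ms_<temsname>.config
$CANDLEHOME/bin/itmcmd server stop HUB_ITM62                # restart the TEMS
$CANDLEHOME/bin/itmcmd server start HUB_ITM62
tacmd login -s itmhost -u sysadmin
tacmd getfile -m itmtest:LZ -s /tmp/itmtest.log -d ./itmtest.log -t text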


Hope this helps.

Who needs an agent?

What agent should I build and offer for free next? Let me know and I will see what I can do.

Silent Install for Agent Builder Files

To install an agent built with the Agent Builder, you use the option to Create a Solution Installer Image.

There is also a silent installation option. A silent.txt file is included with the final package; configure this silent file with all the connection details required. If you require additional template options, I will cover that later.

Run the appropriate installer for the operating system with the -silent option. 

For example:
setupwin32.exe -silent -options response_file


A better alternative to free virtualization.

I recently had to build a new server, a really big server... 48 cores, 128GB RAM and 16TB of local storage. I wanted to use VMware ESXi server - the free bare-metal hypervisor - but it is limited to 32 CPUs. I looked at Microsoft's Hyper-V, and I have never been less impressed by a product. So I looked at the free version of Citrix XenServer 5.6. It supports up to 64 CPUs and is a true bare-metal hypervisor. There are desktop clients to manage it from a workstation, it manages disk stores very nicely, and it has tools for all the guest OSes. It's not quite as friendly as VMware, but IMHO it is much more usable and makes it easier to set up clusters.

Hushing those Navigator Updates in TEP

Add these to the cq.ini


KFW_CMW_DETECT_AGENT_ADDR_CHANGE=N
The Navigator function detects when the IP address for an agent is discovered or changed. If the agent environment is constantly changing, or improper configurations generate excessive Navigator tree rebuilding, consider adding this environment variable to have any discovered changes or additions of IP addresses ignored.

KFW_CMW_DETECT_AGENT_HOSTNAME_CHANGE=N
This variable is like the one for detecting Agent address change except that it prevents the Navigator rebuilding if an agent hostname is changed.

KFW_CMW_DETECT_AGENT_PROPERTY_CHANGE=N
Similar to the above except that it prevents the Navigator rebuilding if an agent affinity or affinity version changes.
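Taken together, the additions to cq.ini are the three lines below. (Restarting the portal server afterward so they take effect is my assumption; the variables themselves are as documented above.)

KFW_CMW_DETECT_AGENT_ADDR_CHANGE=N
KFW_CMW_DETECT_AGENT_HOSTNAME_CHANGE=N
KFW_CMW_DETECT_AGENT_PROPERTY_CHANGE=N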

Wednesday, May 26, 2010

Configuring the TBSM 4.2.1 Discovery Library Toolkit to work with TADDM 7.1.1 and later

EDIT: Corrected the information after re-testing on a clean machine.

On a Windows TBSM server (I haven't tested this on other platforms), you need to copy the taddm-api-client.jar AND platform-model.jar files from the TADDM SDK into TWO different directories on the TBSM data server to get the Discovery Library Toolkit to work. Specifically, you need to copy taddm-api-client.jar to:

D:\ibm\tivoli\tbsm\XMLToolkit\sdk\clientlib

and copy platform-model.jar to:

D:\ibm\tivoli\tbsm\XMLToolkit\sdk\lib

I found that if you only copy the first file, the toolkit won't work at all, and won't generate any errors or messages of any use.

Scheduled Wakeup in Ubuntu

Well, this is hardly related to Tivoli, but I just thought of sharing this cool stuff. I have a 5-year-old desktop running Ubuntu Lucid, used mainly for running scheduled jobs off of cron. Basically, it plays some music every day for a couple of hours and sits idle for the rest of the day. Ideally, I wanted to put the system on standby all the time, waking it up only when a scheduled job needs to run. It is relatively easy to do this in Ubuntu (especially if your BIOS supports it).

Here are the exact steps needed on my Lucid Lynx. Please note that you need to have Kernel 2.6.22 or later for this to work.

1) Install the Power management interface tools.
sudo apt-get install powermanagement-interface
2) Copy the following code somewhere in your filesystem and save it as "suspend_x_hours.sh".
#!/bin/bash
# This script puts the system in standby mode for x hours
usage() {
    echo "usage: $0 <n-hours>"
    echo "where <n-hours> is the number of hours to be on standby"
    exit 0
}

if [ $# -ne 1 ]
then
    usage
fi

PATH=$PATH:/usr/sbin
hours=$1
# Clear any pending alarm, then set a new alarm x hours from now
echo 0 > /sys/class/rtc/rtc0/wakealarm
echo `date '+%s' -d "+ $hours hours"` > /sys/class/rtc/rtc0/wakealarm
# Suspend via the power management interface tools installed in step 1
pmi action suspend

3) Schedule the script in root's crontab, e.g. the following crontab entry runs at 8 PM and puts the system to sleep for 10 hours, waking it up at 6:00 AM.
00 20 * * * /home/venkat/bin/suspend_x_hours.sh 10 2>/dev/null
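To verify that the wake alarm actually got set before trusting it overnight, you can inspect the RTC from a root shell (standard sysfs/procfs locations on 2.6.22+ kernels):

cat /sys/class/rtc/rtc0/wakealarm    # pending alarm, in epoch seconds
grep alrm /proc/driver/rtc           # human-readable alarm time and date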

That's it. It takes only about 10 seconds to resume from sleep and it even restores your SSH sessions when it comes back from sleep!
Hope you find it useful.

Sunday, May 16, 2010

ITM Tip: Disabling default situations during install time

Many ITM sites want to disable the default ITM situations so that unnecessary alerts are not sent to operations. In the old days, we usually disabled the situations by running a simple script; one such example is posted below.

http://blog.gulfsoft.com/2008/03/disable-all-default-situations.html

This task is much easier with ITM 6.2.2. While seeding application support, ITM asks whether you want to add the default managed system groups to the situation distribution. If you answer no, the default situations will not be distributed to any managed system unless you explicitly assign them! However, this feature applies only to fresh installations of ITM 6.2.2 and may not apply to those upgrading from an older version of ITM.




Tuesday, March 9, 2010

GbsNcoSql V2.0

I hope you read the earlier article about the GbsNcoSql tool at the link below.
 
 
In the first version, the tool could perform only SELECT statements, but we have now updated the tool to execute non-SELECT statements such as INSERT, DELETE and UPDATE as well. For those who subscribed to this tool, I will send out an updated version this week.
 
Again, this tool is 100% free. Please send an email request from your work email to Tony Delgross (tony dot delgross at gulfsoft.com).
 
One disclaimer: this tool uses the FreeTDS libraries to access the Omnibus ObjectServer database. While this works, please understand that IBM does not support this method.

Tuesday, March 2, 2010

IBM - Recording RPT 8 HTTP scripts

Tivoli is doing a great job of updating their support information, including this very detailed article on using RPT with ITCAMfT:

IBM - Recording RPT 8 HTTP scripts

Tuesday, February 9, 2010

Some possible responses to TADDM Error CTJTD3602E

IBM's documentation contains the following description of this error:


CTJTD3602E: The Change Manager is still processing. Wait and retry the discovery at a later time.

Explanation: The Change Manager is still running to process recent changes discovered. The discovery cannot be started until the Change Manager completes.

Operator Response: None.

Administrator Response: Allow time for the Change Manager to complete its processing before starting a discovery.

Programmer Response: None.


What we have found is that this condition can possibly occur for at least a couple of different reasons. The two situations we've found have been:

1. There was a deadlock on the database that we had to clear. A "little while" after we cleared the lock, we were able to successfully run a discovery again without receiving the error message.

2. The change manager partially ran, but didn't update the CHANGE_SERVER_STATUS table. Specifically, it left the value of the STATUS column set to 16 for the last discovery. To fix this, we had to run the following SQL:

update DB2TADDM.CHANGE_SERVER_STATUS set status = 17

This updates the STATUS column for all rows in the table. You could limit it with a WHERE clause, but in our case, it was valid as is.
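If you would rather use a WHERE clause, a minimal sketch that touches only the rows stuck at status 16:

update DB2TADDM.CHANGE_SERVER_STATUS set status = 17 where status = 16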

We then needed to stop and restart TADDM, and the error message went away.

Monday, February 1, 2010

Identifying newly added systems in Tivoli Monitoring using GBSCMD

Hope you have been using the GBSCMD utility developed by Gulf Breeze for the Tivoli Monitoring product.
 
In the latest version of GBSCMD (V3.7.1), there is a cool new feature added to the program. With "gbscmd listnewsystems", it can list the recently deployed systems in the environment, along with the deployment time and the type of each agent. With this feature, you can set up "after-install" policy scripts in ITM, just like Framework, with a little scripting.
 
Note: IV Blankenship wrote a blog article (http://www.gulfsoft.com/blog_new/print.php?sid=322) about this idea sometime back. 
 
Please note that the list of new systems is lost when the TEMS is recycled, so you need to make sure this information is saved in a persistent store such as a database or flat file.
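An invocation looks something like the sketch below, following the auth-file convention of the other gbscmd examples on this blog; appending the output to a file is one simple way to keep the persistent copy mentioned above:

./gbscmd listnewsystems --auth itm62.auth >> new_systems.log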
 
You can download the latest documentation for GBSCMD from the following link.
 
 
Not yet subscribed to GBSCMD? Please contact us from your company email and we will send you a copy. Don't worry, we will use the email only to send you future updates to GBSCMD. For those who have already subscribed to GBSCMD, we will send an update this week.
 
Are you going to Pulse 2010?
Come see Gulf Breeze at Pulse so we can show why we are the best.

Friday, January 22, 2010

Interesting Visualization

Anyone in IT cares about data at some point, so I try to bookmark visualizations that I think are noteworthy. The one I saw today is:

I'm not interested in the data that's behind it; I just think it's a neat way to present it.

This is IBM's community Many Eyes project, which can be used by anyone.

This is Google's Visualization API, also open to anyone.

Graph/node/relationship visualization.


Thursday, January 21, 2010

GbsNcoSql User Guide

You can download the complete documentation for GbsNcoSql from the following link.
 
 
Hope you find it useful.

Wednesday, January 20, 2010

Now you have an alternative to FreeTDS - GbsNcoSql


Thanks to Venkat Saranathan, who developed this tool and wrote the description. He also wrote 'gbscmd' for use with ITM 6.x.

Many times, you need to analyze Omnibus data using scripts. For this purpose, Netcool Omnibus provides the nco_sql utility, which lets you run Omnibus queries from the command line. However, one of the main limitations of the nco_sql utility is that it is nearly impossible to parse the output. The difficulties include a single record spanning multiple lines and no delimiter option, to name a few.
We at Gulf Breeze worked on this requirement some time back and developed a utility to address some of these shortcomings. The utility is written in Java and can be run with the Java Runtime that comes with Omnibus. Some of the benefits/features of this utility include the following.
  • Currently, it can run only Omnibus SELECT queries. Though the utility could potentially be modified to run other queries, the idea is to use the nco_sql utility for all other types of queries.
  • Consistent one-record per line output
  • Ability to specify your own delimiter with "-d" switch.
  • Ability to suppress header output with "-n" switch.
  • Ability to specify queries in a file or read from command line.
  • No need to compile any FreeTDS drivers. It comes with the necessary FreeTDS drivers.
  • Authorization information kept in a separate file and can be specified with -a switch. You don't need to specify the password in your scripts.
  • Platform independent and works with the IBM and Sun JREs.
Here is a sample usage of this utility.
usage: java -jar GbsNcoSql.jar -a <authfile> [-n] [-d <delimiter>] [-f <queryfile> | -q <querytext>]
Where,

-a File name containing authorization information
-d Delimiter
-f File name containing SQL query text
-h Displays the help information
-n No header output
-q Query text
Here is a sample run.
$ java -jar GbsNcoSql.jar -a my.auth -q "SELECT Node,Tally from alerts.status"
Node,Tally
sys1,1
sys21,1
sys3,1
192.168.1.50,190
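The -n and -d switches change that output format. For example, suppressing the header and using a pipe delimiter on the same query should produce output like this:

$ java -jar GbsNcoSql.jar -a my.auth -n -d "|" -q "SELECT Node,Tally from alerts.status"
sys1|1
sys21|1
sys3|1
192.168.1.50|190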
You can create the authorization file manually in a text editor. Here is how my authorization file looks. Please note that the value for server is the hostname, NOT the ObjectServer name.

$ cat my.auth
server=somehost
port=4100
user=root
password=mypass

Tuesday, January 19, 2010

Perl one-liner to convert ITM timestamps to normal timestamps

One of the annoying aspects of ITM troubleshooting is that the timestamps in the logs are written as hexadecimal epoch time instead of in a human-readable format. I don't know the exact reason behind it, but the whole purpose of logs is to aid in troubleshooting, and timestamps are a critical piece of information when troubleshooting.

Anyway, to convert the log timestamps to normal timestamps, I have been using the following one-liner.

perl -lane 'if (/^(.)([\dA-F]+)(\..*)/) { printf "%s%s%s\n", $1, scalar(localtime(oct("0x$2"))),$3; }' <log-file>

The one-liner can also read from a pipe, as below.

tail -100 <log-file> | perl -lane 'if (/^(.)([\dA-F]+)(\..*)/) { printf "%s%s%s\n", $1, scalar(localtime(oct("0x$2"))),$3; }'
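For example, a log line starting with a hex timestamp (a hypothetical line; the rendered time depends on your timezone):

(4B5C2A10.0000-F:kbbssge.c,72,"BSS1_GetEnv") ...

comes out with the hex converted to a readable timestamp:

(Sun Jan 24 11:08:00 2010.0000-F:kbbssge.c,72,"BSS1_GetEnv") ...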

Hope this makes troubleshooting a little easier.

Monday, January 18, 2010

GBSCMD V3.6.4 - Overview

GBSCMD is a free Gulf Breeze offering for performing ITM operations from the command line. It is complementary to the tacmd tool and performs many operations that tacmd does not provide. GBSCMD uses the ITM Web Services feature extensively; some of its benefits are listed below.

  • Feature support for backlevel ITM versions. Most of the features work from ITM 6.1 onwards.
  • Provides a way to execute remote commands on agents.
  • Provides a way to get agent data in CSV format.
  • Clear offline managed systems from Command line.
  • Provides a postemsg like feature to send events to situation event console.


If you would like to learn about the GBSCMD tool, here are some links to past articles about GBSCMD. You can also Google "gbscmd site:blog.gulfsoft.com".

http://blog.gulfsoft.com/2008/03/gbscmd-v35-new-features.html

http://blog.gulfsoft.com/2008/03/simple-postemsg-like-solution-for-itm_15.html


We have been tweaking the tool from time to time to introduce new features and address bug fixes, and this post discusses some of the changes introduced in recent versions. The latest and greatest version of the tool as of this writing is 3.6.4. If you would like to get the latest version, please feel free to contact me at venkat at gulfsoft.com or Tony Delgross at gulfsoft.com. If you have suggestions for features that you would like to see in GBSCMD, please feel free to write to me as well.

Here are the changes introduced to GBSCMD in versions 3.6.1 through 3.6.4.

Version 3.6.1

This version introduced support for row filtering of SOAP call results with the --afilter option. The following example uses --afilter to get the disk data from an agent and filter the results to include only C: drive information.

./gbscmd ct_get --auth itm62.auth --Object NT_Logical_Disk --target Primary:ITM62:NT --afilter Disk_Name;EQ;C:

You can also get the last 24 hours of history data from the agent using the --history switch to CT_Get. Some of the other changes include the ability to see the results in XML format using the --xml option (thanks IV for making that change!).
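Following the same pattern as the example above, a history request would look like this (a sketch; --history is documented above, and the remaining options mirror the earlier example):

./gbscmd ct_get --auth itm62.auth --Object NT_Logical_Disk --target Primary:ITM62:NT --history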

Version 3.6.2

This version introduced new SOAP calls for starting and stopping situations at the RTEMS level using the --starttemssit and --stoptemssit options. If you are running a backlevel ITM and would like the ability to start and stop situations, you can use this feature. It also addressed some timeout issues when retrieving large SOAP results (e.g. listing all managed systems in the entire enterprise for a large ITM setup).

Version 3.6.3

This version uses a new SOAP call for listing the situations running on individual agents.

Version 3.6.4

This version introduces column filtering of SOAP results with the --attribute option. For example, if you want to get the disk data and are only interested in the Disk Name and Free Megabytes attributes, the following command line will get you the information.

./gbscmd ct_get --auth itm62.auth --Object NT_Logical_Disk --target Primary:ITM62:NT --attribute Disk_Name --attribute Free_Megabytes

This version also adds a --version switch to identify the current GBSCMD version.

The complete documentation is available for download from the following link.

http://www.gulfsoft.com/downloads/gbscmdReferenceManual.pdf

If you have any questions or suggestions, please feel free to mention them in the comments section.

How to get ITM agent data in CSV format using GBSCMD

Here is a commonly asked question from our customers: how do I get real-time agent data exported to Excel-friendly CSV format from the command line for further analysis? Even though this can be done from the Portal using a combination of logical workspaces and a table view, it is really simple with the gbscmd tool.
 
With GBSCMD, you just invoke the ct_get subcommand, providing the agent name and the attribute group you're interested in. For example, if you want to get the current disk usage information from a Windows OS agent, you can use the following command.
 
./gbscmd ct_get --auth itm62.auth --Object NT_Logical_Disk --target Primary:ITM62:NT  >> disk_usage.csv
 
If you would like to get this information for a set of Windows agents, you can easily loop through them one after another, as in the example below.
 
for agent in `cat myagents.list`
do
 ./gbscmd ct_get --auth itm62.auth --Object NT_Logical_Disk --target $agent  >> disk_usage.csv
done
   
Hope you find this tip useful.

Thursday, January 14, 2010

Multiple Logfile Monitoring Agent - KG2

As promised, here is a link to download a logfile monitoring agent. It will work on Unix, Linux or Windows and can monitor up to 10 logfiles per instance of the agent. This is a very generic agent with no filtering: the agent takes up to the first 1024 characters of each log line and puts them into one field. As always, I've run this in the lab and it works great; your setup may vary. The watchdog is set up, and the memory threshold is set at 100MB for all OS platforms.

Step 1 - Download and unzip. The entire package is 26MB in size; it contains all of the supported platforms.

ZIP version

http://www.gulfsoft.com/downloads/blog_downloads/KG2.zip

or the tar/gzip version

http://www.gulfsoft.com/downloads/blog_downloads/KG2.tgz


Step 2 - Run the installIraAgentTEMS.bat or .sh on your HUB TEMS and Remote TEMS.
Step 3 - Run the installIraAgentTEPS.bat or .sh on your TEP server (recycle the TEPS afterwards)
Step 4 - Populate your depot using tacmd addbundles -i /path... or just copy the zip file to the destination and run the installIraAgent.bat or .sh (see the example after these steps).
Step 5 - Configure it. Create an instance and add at least one file name to the agent.
Step 6 - Once the agents appear in your portal server, you can create a situation that uses the "Scan for String within String" method to search for specific keywords.

Step 7 - Have fun.