Tuesday, March 25, 2008

Customizing ITM 6.1 tep.jnlp to work with Java 1.5

The default tep.jnlp file works great in certain environments, and not at all in others. Basically, it works if you only have Java 1.4.2 installed, or if you have Java 1.4.2 and Java 1.5 installed. But things don't work so well if you have Java 1.6 installed. This change fixes that problem.

What I did was change the line that reads:

<j2se version="1.4+" size="64m" args="-showversion -noverify"/>

to the following:

<j2se version="1.5*" size="64m" args="-showversion -noverify"/>

If you save this file with a new name, you must also update the line in the file that references itself (i.e. change 'href="tep.jnlp"' to 'href="newfilename.jnlp"').
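For reference, here is roughly how the relevant pieces sit in the file once both edits are made. This is a trimmed sketch, and the codebase URL is only an illustration of the form ITM generates for your TEPS host:

<jnlp spec="1.0+" codebase="http://yourtepshost:1920///cnp/kdh/lib" href="tep.jnlp">
  ...
  <resources>
    <j2se version="1.5*" size="64m" args="-showversion -noverify"/>
    ...
  </resources>
</jnlp>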

And I found that ONE of these two files (the original or the changed one) will work for almost all users. The problem is that the Java 1.4.2 browser plugin doesn't support the changed syntax. So just tell your users to try the first file, and if that doesn't work, to try the second.

Edit: If a user only has Java 1.6 installed, the changed file will work and will automatically download and install Java 1.5.

Friday, March 21, 2008

GBSCMD V3.5 new features

In the past, we have posted a few articles about gbscmd. We have updated the tool with a few additional cool features, such as clearing offline entries, executing commands on agents, and refreshing the TEC EIF integration component. This article explains how to use these features.

Clearing Offline Entries


This is one of the longest-requested features. Whenever servers are decommissioned from the monitoring environment, one has to remove them by selecting "Clear Offline" from the TEP. However, there is no CLI equivalent for this function in tacmd, which is a handicap if you want to automate the decommissioning process. You can use the gbscmd clearoffline feature to perform this function from the command line.

Executing commands on agents


I wrote an earlier blog article about this feature. It is very handy for running non-interactive commands on a remote monitoring system. However, it does not capture the output of the command.

Refreshing TEC EIF Adapter component


If your site relies on tecserver.txt for setting situation severities, or updates the MAP files frequently, any change to these files requires a hub TEMS restart. Instead, you can use the refresheif feature of gbscmd to cycle the EIF component alone.
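There is no refresheif example in the Examples section below, so here is a sketch, assuming it takes the same --auth argument as the other subcommands (check the gbscmd manual for the exact syntax):

gbscmd refresheif --auth ~/itm61.auth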

Examples


The following example restarts a Windows OS agent.
gbscmd command --auth ~/itm61.auth --command "net stop kntcma_primary && net start kntcma_primary" --managedsystem Primary:ITM61:NT

The following example restarts a Unix OS agent.
gbscmd command --auth ~/itm61.auth --command "/opt/IBM/ITM/bin/itmcmd agent stop ux;/opt/IBM/ITM/bin/itmcmd agent start ux" --managedsystem uxitm61:KUX

The following example clears an offline entry, which can be handy when servers are decommissioned from the monitoring environment.

gbscmd clearoffline --auth ~/itm61.auth --managedsystem uxitm61:KUX

More Info


You can download the manual from here. Please contact Tony (Tony.Delgross at gulfsoft.com) if you have any questions or need a copy of gbscmd.

Saturday, March 15, 2008

TPM SPB Import Automation Package

Posted by: martinc on Mar 06, 2008 - 04:07 PM

With TPM it is possible to import SPBs with the TCM integration, Software Package Editor or manually from the Web UI. I wanted to add another.

What happens if you have a bunch of SPBs that you want to import and you do not want to use the integration? Use the SPE or Web UI? That would take forever!

So I thought I would whip up a quick automation package to do just that. Click here to download the automation package.

The package is made up of 3 workflows

GBS_FindAllSpb.wkf
Function: Retrieves a list of all SPBs based on the FileRepository rootPath and the RelSearchPath

Arguments:

  • FileRepositoryID - This is the ID of the file repository to use for the search. Yes this could be changed to the name if you want.
  • RelSearchPath - This would be the path to search relative to the FileRepository.
    For example:
    if you enter "/" (no quotes) this will search the rootPath of the file repository and all sub-directories for spb files
    if you enter "/ActivePerl" (no quotes) this will search the rootPath of the file repository and all sub-directories under ActivePerl for spb files

    Calls: GBS_GetSpbVariables.wkf

    GBS_GetSpbVariables.wkf
    Function: Retrieves various package attributes used to define the Software Module to TPM.

    Arguments:
  • SpbRelPath - This is the relative path to the SPB found in GBS_FindAllSpb.
  • SpbFileName - The file name to retrieve the package attributes from.
  • FileRepositoryID - This is the ID of the file repository to use for root path

    Calls: GBS_ImportSPB.wkf

    GBS_ImportSPB.wkf
Function: Using the variables passed in from GBS_GetSpbVariables.wkf, creates an XML file in the same directory as the SPB and then uses the built-in workflow DcmXmlImport to import the XML file.

    Arguments:
  • SpName - value retrieved from the field Name in the SPB
  • SpVersion - value retrieved from the field Version in the SPB
  • SpVariables - value(s) retrieved from the default_variables field in the SPB
  • SpbRelPath - This is the relative path to the SPB
  • SpbFileName - File name of the SPB
  • FileRepositoryID - This is the ID of the file repository to use for root path

    Extra Notes

It is also possible to run just GBS_GetSpbVariables.wkf if there is only one SPB to define. Just pass the required fields into the workflow execution and only that SPB will be defined.

With this template, it is very easy to make changes to suit your environment. Any comments or feedback are appreciated.

    Martin Carnegie
TPM 5.1.0.2 FP0003 and 5.1.1 FP0001 are available!

    Posted by: martinc on Mar 05, 2008 - 04:54 PM

Fix Pack 3 for TPM 5.1 and Fix Pack 1 for 5.1.1 are now available! So what do they do?

The most important thing is that this allows current 5.1.0 users to upgrade to 5.1.1. In order to do this, you must first be at 5.1.0 FP0002. When the install is done, you will be at 5.1.1.1. Applying the fix pack to 5.1.1 also brings it to 5.1.1.1.

    5.1.0 - FP0002 readme
    5.1.0.2-TIV-TPM-WIN-FP0003.README.HTM

    5.1.1 - FP0001 readme
    5.1.1-TIV-TPM-WIN-FP0001.README.HTM

    IBM Automation Package for IBM Tivoli Monitoring
    There is supposed to be an updated automation package on OPAL, but I do not see it yet. Very curious to see what this one offers as the old one was, um, lacking.

    VMware Virtual Infrastructure 3 support
    I know that there have been a few people asking about this one

    Automation package for management of large numbers of zones, regions, and depot servers
    This is great! Nothing more tedious than manually creating these.


Here is the list of defects for each version:
    5.1.0
    5.1.0-TIV-TPM-FP0002.DEFECTS.HTM

    5.1.1
    5.1.1-TIV-TPM-FP0001.DEFECTS.HTM

Unfortunately, the one thing still missing is the Vista agent.

    I will get these downloaded and see how the install goes. I am keeping my fingers crossed :)

    Edit:
    To download go to
    Tivoli Provisioning Manager Fix Pack 5.1.1-TIV-TPM-FP0001


    Tivoli Provisioning Manager Fix Pack 5.1.0.2-TIV-TPM-FP0003

    TPM 5.1.1 Classroom Comments

    Posted by: martinc on Mar 05, 2008 - 07:42 PM

    I recently taught a custom class we developed for TPM 5.1.1 and thought I would provide some feedback on TPM 5.1.1

    Performance
    Well first of all, I just want to say a word on performance. I taught almost the same class on 5.1.0 FP01 and 5.1.1 using all the same hardware and I was impressed with the performance. I had some students in this class from the class I taught a year ago on 5.1.0 and they stated that everything was running faster. In fact I had more students this time and there was almost no lag when running various tasks.

    Comments on the use/functionality
    Web Replay - although this was not really part of the class, we had some extra time to look at it. We felt that the tool was interesting and probably good as a training tool and possibly for some tasks. Generally speaking, I (or someone that has been using the interface for a while) can manage to get things done faster than Web Replay. This is still a good addition though.

    Computer information
    The layout of the Computer information is very clean. The General tab displays a very good summary of important information about the computer.

    Compliance and Remediation (also applies to patching)
    The 3 step process layout makes the creation of a compliance check very easy.

Here is a snapshot of the Compliance tab for a group.



Notice the easy Steps 1, 2 and 3. Also, under Step 1 there is a quick checkbox to add the OS Patches and Updates Check. Simple!

    Discovery
The integration of SMB and SSH discovery in one discovery configuration is also a biggy. Seems minor, but it is a pain to set up two separate discovery scans that hit the same IP addresses.

    Another nice feature in the discovery is that it will populate the operating system name. To do this in previous versions, you either had to run an inventory scan or manually add it. This saves some time.

    Depot (CDS) Installation
In 5.1.0, in order to install a depot on a system that was already a TCA, you would have to first uninstall the TCA and then run the depot install. In 5.1.1, this has been fixed. The install will recognize that the TCA is already installed and only install the depot subagent.

    SOA Rename
    One fun thing while developing the course was that I noticed that everything was changed from SOA to SDI. SDI is Scalable Distribution Infrastructure. This name sure makes a lot more sense for the function than SOA.

    Other
    Just some Acronyms for you to make your day.

    CAS – Common Agent Services
    CDS - Content Delivery Service
    CIT – Common Inventory Technology
    DCM – Data Centre Model
    DMS – Device Management Service
    JES – Job Execute Service (also referred to as DMS)
    SAP – Service Access Point
    SDI – Scalable Distribution Infrastructure
    SIE – Software Installation Engine
    SOA – Service Oriented Architecture
    SPE – Software Package Editor
    TPM – Tivoli Provisioning Manager
    TPMfOSD – TPM for Operating System Deployment
    TPMfOSDEE – TPMfOSD Embedded Edition
    TPMfSW – TPM for Software
    WSDL – Web Services Definition Language

    ITM 6.2 Workspace Parameter - more portal tuning

    Posted by: jlsham on Mar 01, 2008 - 05:00 AM

If you have more than 200 of any type of agent, you start to see messages in your views saying that the number of systems being queried is too high.
    Well, in your cq.ini - look for

    #KFW_REPORT_NODE_LIMIT=xxx

    Uncomment this line and set it to a more meaningful value for the number of agents you need to query.

    Restart the portal and you're done.
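For example, to let views query up to 500 agents (the value is illustrative; size it to your environment), the uncommented line would read:

KFW_REPORT_NODE_LIMIT=500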

    Modified applet.html file to resolve one TEP IE browser connection error.

    Posted by: napomokoetle on Feb 18, 2008 - 05:11 PM

This short blog is related to one I wrote a while ago titled "Upgrade from ITM6.1 FP6 to ITM6.2 may break logon through IE TEP".
I had not posted the applet.html file with the original blog because IBM/Tivoli had insisted the problem was unique to the environment I was working on at the time.




    Well I guess the error has proven not to be unique to the environment I was working on since other folks keep seeing the same problem and requesting I pass them the modified applet.html file. Posting the file here will save me from searching all over for the file every time I get asked for it by those who would like to try it out.

    TO RECAP:


    Environment:
    Upgraded from ITM6.1 FP6 to ITM6.2 RTM:

TEPS & TEMS installed on the same W2K3 host (dual 3.06 GHz Intel Xeon, Hyper-Threaded) with 3808MB RAM.

    PROBLEM: From an XP remote host, TEP in IE 6.0.2800 browser reports “cannot connect” error.
    When I clicked on the Java WebStart created icon I got the same error as in IE browser.

    Plugin150.trace reports:

    (475405aa.0c474f80-(null)AWT-EventQueue-2:Bundle,0,"Bundle.Bundle(String,String,Locale,String)") Resource bundle: id = kjri, baseName = candle.kjr.resources.KjrImageBundle, actual locale used:


    (475405b4.10ed7f00-(null)Thread-10:DataBus,0,"DataBus.make()") EXCEPTION: Unable to create DataBus object: org.omg.CORBA.INITIALIZE: Could not initialize (com/borland/sanctuary/c4/EventHandler) unexpected EOF at offset=0 vmcid: 0x0 minor code: 0 completed: No


    (475405b4.10ed7f00-(null)Thread-10:QueryModelMgr,0,"QueryModelMgr.QueryModelMgr()") EXCEPTION: InstantiationException --> Unable to instantiate DataBus object


    (475405b4.10ed7f00-(null)Thread-10:QueryModelMgr,0,"QueryModelMgr.make()") EXCEPTION: InstantiationException --> Unable to instantiate QueryModelMgr object


    (475405b4.10ed7f00-(null)Thread-10:LogonDialog,0,"LogonDialog.processOK()") EXCEPTION: Unable to connect to CandleNet Portal server: java.net.ConnectException


    (475405b6.33611380-(null)AWT-EventQueue-2:CNPClientMgr,0,"CNPClientMgr.terminate(CNPAppContext,boolean)") Performing normal exit of TEP Workplace


    (475405b6.33611380-(null)AWT-EventQueue-2:UserManager,0,"UserManager.loadPermissionXRef()") Error loading User Permission Cross-Reference tables: KFWITM219E Request failed during creation.





    Solution:
After multiple cycles of troubleshooting the error with IBM, they eventually made a couple of changes to applet.html for me to try out. Please make a backup copy of your current applet.html, and then replace the version found in your ..\cnb directory with the attachment on the link below. If this makes no difference in your testing, then you will need to open an ITM 6.2 PMR with IBM/Tivoli so that support can work this issue to a successful conclusion with you.

    Click here to download the modified applet.html

    Adios,
    J. Napo Mokoetle

    Fine Tuning the Portal Server

    Posted by: jlsham on Feb 12, 2008 - 11:34 PM

The following are a few of the parameters used to fine-tune the Tivoli Enterprise Portal; the most popular one is the expansion to show more than 25 systems in the navigator.
Tuning parameters can be set in cnp.bat or cnp*.sh, which are located in:

Windows:
<ITM install dir>\CNP

Linux/Unix:
<ITM install dir>/<interp>/cj/bin/cnp*.sh

For example: /opt/IBM/ITM/*interp*/cj/bin/cnp.sh

    cnp.databus.pageSize = #of rows to get for any workspace table. Default is 100 rows.

    cnp.navigator.branch.pagesize = Navigator Expansion - this is the popular one, default value is 25.

    cnp.navigator.branch.threshold = Warning threshold for Navigator branch requests. Default value is 100.
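These are Java system properties, so as a sketch (values illustrative), the additions to the java invocation inside cnp.bat or cnp.sh would look like:

-Dcnp.databus.pageSize=200 -Dcnp.navigator.branch.pagesize=50 -Dcnp.navigator.branch.threshold=200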

    Simple Postemsg-like solution for ITM 6.x

    Posted by: venkat on Jan 23, 2008 - 09:44 PM

One of the most convenient features of the classic Tivoli event adapters is the postemsg command. Using it, you can send custom events to TEC from your scripts or the CLI. ITM 6.1 has no equivalent command, though you can build such a solution in just three steps. This article discusses a way to set up a postemsg-like solution in ITM 6.x.

    Overview


The idea is to use the gbscmd sendmessage command to write a custom message to the ITM Universal Message Console, and to develop a situation that forwards any messages written there.

    Step 1: Create a situation


Now develop a simple situation to forward Universal Message Log entries with the following formula. Hint: create the situation under "Tivoli Enterprise Monitoring Server".

    (Originnode == AND Category == GBSMSG)

    It is important to include "Originnode" in your condition. Otherwise, the situation will not fire. Distribute the situation to *HUB. Associate and start the situation.

    Step 2: Write to Universal Message Console using Gbscmd


    You can send a message to Universal Message Console using the following command.

gbscmd sendmessage --auth <authfile> --message "some test message" --category "GBSMSG" --severity 2
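A concrete invocation, reusing the itm61.auth file from the earlier gbscmd examples (the message text is arbitrary):

gbscmd sendmessage --auth ~/itm61.auth --message "Backup failed on dbserver01" --category "GBSMSG" --severity 2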

    Step 3: Verify the results


Ensure that the message appears in the Situation Event Console; if you have Netcool/TEC integration enabled, the message should also appear in those consoles.

    Posted by Venkat Saranathan

    Testing database connectivity using JDBC

    Posted by: venkat on Dec 28, 2007 - 09:20 PM

Many a time, you might want to test whether you can connect to a database. The simplest way, of course, is to have a database client installed and use it. However, this is not possible in all cases. For example, on an ITM warehouse proxy, you'll have only the DB2 JDBC type 4 drivers (which are nothing but two files, db2jcc.jar and db2jcc_license_cu.jar). How do you test connectivity on such systems? Here is a Jython script.

    Why Jython?


In addition to the self-serving opportunity of me learning Jython, Jython can reuse the same JDBC drivers that TDW uses, so there is no need to set up Perl DBI or a database client. We could write the code in Java, but that's overkill because of the compilation requirements. Setting up Jython is very easy. Please refer to this page for detailed installation steps.

    Code


    Here is the sample code. Just be sure to change the connection settings.

Note: Do NOT cut and paste the code from the listing below. Jython, like Python, is whitespace-sensitive. So, download the code by clicking here.

from com.ziclix.python.sql import zxJDBC
import java.sql

conn = "jdbc:db2://server:50000/warehous"
user = "user"
passwd = "passwd"
driver = "com.ibm.db2.jcc.DB2Driver"

try:
    db = zxJDBC.connect(conn, user, passwd, driver)
    print "Connection successful"
except zxJDBC.DatabaseError, e:
    print "Connection failed:", e
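Assuming you saved the script as testdb.py and the two DB2 jars are in the current directory (Jython picks them up from CLASSPATH), a test run looks like this:

CLASSPATH=db2jcc.jar:db2jcc_license_cu.jar jython testdb.py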

    TPM 5.1.1 Documentation

    Posted by: martinc on Dec 21, 2007 - 04:55 PM

IBM has provided a document on the components to download for the install, and has updated the Info Center.
    Download information http://www-1.ibm.com/support/docview.wss?rs=1015&uid=swg24017302

There is information for all the OS platforms; you just have to scroll down. For some reason it is not quite set up like previous pages. Oh well.

    Info Center: http://publib.boulder.ibm.com/infocenter/tivihelp/v20r1/index.jsp

    TPM 5.1.1 is now available

    Posted by: martinc on Dec 21, 2007 - 02:46 PM

    The Christmas present everyone was waiting for :)
    So what is new and improved?

The biggest focus in the new version was the installer. Anyone who has tried to install TPM 5.1 knows that it just did not work. I know that installs for me were < 50% successful on the first try, even with the exact same VM images from attempt to attempt. With the new version, I did not have a failure in the 4 installs that I did. Sweet!

    There is also more support for different topologies. In the previous version, getting TPM to use a remote database was a post install step. Now it is part of the installer.

    Some of the new features are:
Agentless inventory scanning - there is no longer a requirement to have the agent installed on a system to perform an inventory scan. The scan will copy the required files to the target, initiate the scan and return the results to the DCM.

    Web Replay - This was available in a "beta" in 5.1 but is now part of the install. Web replay allows for some automation of tasks. Some tasks within TPM take many steps to complete and for the most part do not require any input. A recording of the steps can be done to "automate" some steps and stop at prompts for others.

    Unknown device management - When discovering devices and an error is encountered, the device is dropped and not recorded in the DCM. Now the device will be recorded but will require some manual steps to complete the discovery. This is useful when a device is discovered but the user name and password are not correct. A new discovery can be done with the correct information on the unknown device.

    Enhanced Discovery - When discovering SMB or SSH devices, the discovery can be done in one discovery configuration rather than two separate discovery configurations. This is mainly a time saver so that the same subnet does not need to be scanned twice.

    There have also been many changes in the user interface to improve performance and readability.

    My Comments
So far I have been pretty impressed by the installer and some of the changes made to the interface. I know that agentless inventory scanning has been something people were asking for, and it does work! As I said before, the installer worked very nicely. One of the big changes in the install is that by default it only uses local OS authentication (much like the fast start install). There is documentation, along with scripts, to allow for easy configuration of TDS or MSAD (read only).

    I will get some screen shots and other comments in the new year.

    So Merry Christmas and Happy New Year!

    Martin

    Friday, March 14, 2008

    Tivoli Common Reporting (TCR) uses the ZeroG InstallAnywhere installer

While many Tivoli products use the InstallShield Multi-Platform (ISMP) installer, TCR uses the Zero G InstallAnywhere installer.

    This little difference is important if you want to reinstall the product. Specifically, ISMP uses the vpd.properties file as its installation registry. Zero G, on the other hand, uses a file named .com.zerog.registry.xml (notice that it begins with a dot, so it's a hidden file). So if you need to delete the product and start over, you need to edit or delete this file, also. If you only remove the install directory, when you run the installer, it will just tell you that Tivoli Common Reporting is already installed, and won't reinstall it.
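In my experience the registry file lives in /var on UNIX/Linux and under C:\Program Files\Zero G Registry on Windows, though paths can vary by product. For example, to locate and inspect it on UNIX:

ls -la /var/.com.zerog.registry.xml
grep -i "Tivoli Common Reporting" /var/.com.zerog.registry.xml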

    Using BIRT for TDW Reporting

There has been a lot of interest in BIRT lately with the release of Tivoli Common Reporting 1.1.1 to GA, so I'm posting a link to part of my TUG presentation on using BIRT.

    TUG BIRT Presentation

    Actually searching all files when searching for text

For quite a while I have been annoyed by the search in Windows. I thought it was great that you could use the search to look for text inside a file, just like grep. Of course, in usual Microsoft fashion, "search all files" does not mean it actually searches all files.

    So finally I became so annoyed with this, I went to see if there was a solution.

    I did find the solution and I thought I would share the link
    Using the "A word or phrase in the file" search criterion may not work

    Hope this helps someone.

    Where Wizards Fear To T"h"read

    Since PERL is a tool of choice for many of us Tivolians when it comes to automations and integrating systems with Tivoli products, I thought it could be of help to others to throw out a few gotchas I've encountered programming Perl threads. Please note that I'm not referring to "forking".

Though Perl threads can help speed things up in a Tivoli automation or integration script, they can also add headaches and frustration to one's life. The following three pointers should help reduce some of the strife...

    1. Using nested data structures with PERL THREADS.

When using threads, it is usual to share data. Sharing complex data structures works very well in a single-threaded (single-process) PERL script. But once the main process starts producing child threads, all hell breaks loose with complex data structures, because PERL can, as of this writing, deal with ONLY one-level-deep references to a SHARED data structure like an array or hash.
If you encounter an error like:

    Invalid value for shared scalar at ...

then you've nested too much. Keep your references to shared arrays and hashes one level down and you'll be OK.
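A minimal illustration of the rule (the variable names are mine, not from any Tivoli script): sharing a flat hash works, but storing a reference to an ordinary nested hash inside a shared hash triggers the error above; the nested structure must itself be shared first:

use threads;
use threads::shared;

my %flat : shared;
$flat{status} = "ok";            # fine: plain scalar, one level deep

my %outer : shared;
# $outer{inner} = { a => 1 };    # dies: "Invalid value for shared scalar"
my %inner : shared;              # share the nested hash itself first...
$inner{a} = 1;
$outer{inner} = \%inner;         # ...then a reference to it is allowed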


    2. Creating an ALARM signal handler within a child thread.
Often in Tivoli a person will call "w" commands within a PERL program to perform some operation on a Tivoli resource. But it often happens that a "w" command hangs or takes too long to complete. What many do is create an ALARM to time out the command so the process continues execution after some specified time has elapsed. This usually works OK in a single-threaded program, but things start getting out of hand when there's more than one thread executing in a PERL program. What happens with threads is that instead of timing out the particular call within a CHILD thread, the whole PERL program quits executing! That is, the main thread and its child threads die!!! The one work-around I've always found effective is to include code like the following in the MAIN program flow. Make sure this code appears in the main thread and NOT in a child thread.


    $SIG{ALRM} = sub {}; #Just here to workaround the threads "Alarm clock" bug


    That amazingly does the trick!


    3. Handling the CTRL C interrupt within a threaded Perl program.
To avoid resource contention, often a person would prevent more than one instance of a PERL program from running by creating a lock-file at the beginning of the program and then removing the file just before the end of the program. But a problem comes when, for some known or unknown reason, the PERL program receives an INT signal and terminates after creating the lock-file but before executing to the point where the lock-file gets removed, thus preventing any subsequent run of the program. It's easy to circumvent this situation and handle the signal in a single-threaded PERL program, but it can be a pain doing likewise in a multi-threaded PERL program, where a simple $SIG{INT} may do anything but catch the interrupt. For instance, the norm would be to do something like:

    $SIG{INT} = sub { "code that removes the lock-file"; die};

    Trying to handle the interrupt this way in a multi-threaded PERL program may actually result with the program dumping core. Including the following chunk of code/subroutine most often does magic and beautifully handles the interrupt signal without fail:



use sigtrap qw[handler sig_handler INT TERM QUIT PIPE];
use Switch;

sub sig_handler {
    my ($sig) = shift;
    print "Caught signal $sig";

    switch ($sig) {
        # Clean up the lock-file on terminating signals
        case ["INT", "TERM", "QUIT"] {
            unlink "/tmp/dashlock";
            print "Exiting";
            exit(0);
        }
        # A broken pipe is not fatal here; keep running
        case "PIPE" {
            print "Continuing";
        }
    }
}



I'm sure the above is not all there is on PERL threading gotchas. I do, however, hope the pointers save you time and heartache should you have to deal with Perl threading in the near future and encounter similar issues.

    That's all folks!

    Adios,
    J. Napo Mokoetle
    "Even the wisest man is yet to learn something."

    Eclipse plugin to access the TADDM API

    I realize that this won't be interesting to many people, but I wrote an Eclipse plugin to access the TADDM API.

    You can download the plugin from http://www.gulfsoft.com/downloads/TADDM_API_plugin.zip . The plugin requires Eclipse 3.3 and the Web Tools Platform.

    To install the plugin:

    1. Close Eclipse if it is running
    2. Extract the TADDM_API_plugin.zip file into your eclipse directory. It will place the file com.gulfsoft.wst.xml.ui_1.0.0.200711131411.jar in your plugins directory.
3. All of the other help for the plugin is available from Help -> Help Contents in Eclipse. The help is found under the topic TADDM API Plugin.

    ITM AIX Premium agents - an Overview

I recently got a chance to implement the AIX Premium agents for one of our customers in a production environment. This article briefly discusses our experience with these agents, along with their pros and cons.

    Installation

Installation of these agents is similar to the Unix OS agent. There is a minor gotcha, though. IBM currently offers the "System P" SE edition as a free download with 1 year of support; don't confuse it with the "AIX Premium" agents, which are available to current ITM customers. The "System P SE Edition" consists of agents for AIX Base, VIO and CEC, whereas the "Premium" offering consists of the "AIX Premium", CEC and HMC agents. (Check with your IBM sales rep about your entitlement.)

Use the "AIX Premium agents" C101HIE.tar from Passport Advantage; installation is very similar to the usual agent installation. Make sure that you install the agent support files on your TEMS/TEPS servers.

    What is the difference?

So what is new with the "AIX Premium" agents? Of course the workspaces are different, and the attributes provide AIX-specific information (such as LPAR, entitlement, volume group information and paging space) that is NOT available with the generic Unix OS agents. You should be able to define situations to monitor these resources just like you would for the Unix OS agents. This information can be very useful for AIX administrators.

    You can find more information about this agent at the IBM Information Center for AIX Premium agent.

    Rollout Considerations

The advantage of getting AIX-specific information is really nice, and most admins will like these agents better than the Unix OS agents. However, there are a couple of factors that you might want to look into before deciding whether to go forward with System P agents. One is the level of support and fixes available right now. Currently the UNIX OS agent is part of the "core" ITM release strategy and gets updated with every fixpack, whereas the AIX Premium agent is handled much like the special agent types such as the DB2 agent, SAP agent, etc. Since this is a fairly new type of agent, we don't know whether it will be integrated into IBM's fixpack release strategy.

The other issue is the added complexity of managing another type of agent. If you are happy with the current UNIX OS agent, you could experiment with the AIX Premium agents in your test environment and see if you need their features.

    An Introduction to Netcool Omnibus Components

We have already discussed a lot of things about Omnibus in our previous articles, but here we are going to explain the basic components of Netcool Omnibus and their functions.

    What is an ObjectServer?

The ObjectServer is the central piece of Omnibus and can be considered analogous to the TEC server. It receives events from various probes and processes them according to automation rules. The ObjectServer, however, is a light-weight process compared with TEC. Moreover, Omnibus provides easy-to-use high availability features out of the box, so it is normal to have two or more ObjectServers running in an environment.

    What is a Probe?

Probes can be thought of as the equivalent of Tivoli event adapters. They receive events from various sources, such as SNMP agents, Unix log files, etc. Omnibus probes are much more intelligent than TEC event adapters, in the sense that they can detect/filter duplicate information at the source and they support advanced programming constructs such as looping, regular expressions, etc.

    Gateways

Gateways in Omnibus are different from the gateways in Framework. In Omnibus, they act as a bridge between one ObjectServer and other ObjectServers, third-party applications or databases. Their function is to replicate ObjectServer data to other components, such as another ObjectServer (for redundancy), a third-party application such as Remedy (for trouble-ticket management), or a database such as Oracle (for event storage).

    License Server

The License Server is a central piece of the Omnibus environment that dispenses the necessary server and client licenses as and when requested. Without the license server, the Netcool components will not start. If the license server goes down AFTER a component is started, it must be back up within 24 hours or the Netcool components will shut down. High availability configurations are supported for the License Server, and most of the time it is pretty stable. However, IBM understands the pain of managing one more component, and ever since the acquisition IBM has been focusing on moving away from the license server, so expect it to disappear in future releases.

    Process Agent

The Process Agent plays a very important role in the overall architecture. It is responsible for starting and stopping Netcool components, and it also restarts these components in case they die abnormally. Process agents are also responsible for executing automation requests coming in from remote systems. Ironically, though, the Process Agent does not manage processes on Windows; there it is used only for executing requests received from remote systems. The process agent works very well, and it is very nice to see all the components started and managed by a single program. Maybe ITM could take a leaf out of its book and use a similar solution!

    Proxy Server

The Proxy Server is an optional component that can be used for better scalability and for firewalls. It acts as a proxy for the ObjectServer, receiving events from various probes and sending them over a single connection to the real ObjectServer. This reduces the number of connections that the real ObjectServer has to handle. As an equivalent in the Tivoli world, the Framework Gateway comes to mind!


    These are the basic components you should know about. Stay tuned for more articles on Netcool Omnibus in the coming days.

    Situation Status Field in ITM6.1

The situation_status field in TEC events uses a single-letter status to denote the current status of the situation. Understanding the different values of this field is important if you need to write TEC rules for incoming events, so that you don't end up taking multiple actions for the same event. This article lists the different values of the Situation_Status field. Credit for this article should go to IV Blankenship.

According to IV in one of our earlier blog articles, the following are the valid values for the situation_status field.

    N = Reset
    Y = Raised
    P = Stopped
    S = Started
    X = Error
    D = Deleted
    A = Ack
    E = Resurfaced
    F = Expired

Here is the link to the earlier blog article/comments.

    ITM FP05 Fixes & Omnibus Fixpacks

A few interesting patches have been released at the Tivoli Patches site, including an interim fix for ITM 6.1 FP05 and a fixpack for Netcool Omnibus/Webtop. Here are the links to the readme files.

    ITM Interim Fix to Fixpack 05

    Netcool Omnibus Fixpack 03

    Thanks to Martin Carnegie for bringing this to our attention.

    Using the APDE with JDBC Type 4 drivers

    The APDE (Application Package Development Environment) can either be used on the TPM server or can be installed on a remote computer for the development of workflows. In Fix Pack 2, documentation was provided on how to configure the APDE to use JDBC Type 4 drivers instead of having to install the DB2 client on the remote computers. The problem was that this documentation jumps all over the place. So I thought I would document the steps I used to install the APDE and now I will share them with you.

    Background
    The APDE is the development environment in TPM for building custom workflows (and other things). Even though you could develop these workflows in the Workflow Editor or even notepad, the APDE provides an excellent IDE that allows for easier creation of custom workflows.

    Steps required for DB2 connectivity on Windows
    These instructions are for installing the APDE on a Windows system. The same principles should apply for other OSs.

1. Install Java 1.4.2+ and confirm that the environment is set up by opening a command prompt and typing java -version (download from www.java.com).
2. Download Eclipse from www.eclipse.org. This has to be 3.1.2; version 3.2 does not work.
3. Extract the Eclipse zip file to C:\Program Files (actually, the directory could be anywhere).
    4. Copy the apde.zip and apde.upgrade.zip to remote computer
    5. Extract apde.zip and apde.upgrade.zip
    6. Copy/move the contents of apde.zip to the eclipse directory and overwrite existing files
    7. Copy/move the contents of apde.upgrade.zip to the eclipse directory and overwrite existing files
    8. Create a directory under the C:\Program Files\eclipse directory called TPMCFG
    9. Copy the files db2jcc.jar, db2jcc_license_cisuz.jar, db2jcc_license_cu.jar from DB2_HOME\java on the TPM server to the TPMCFG directory
    10. Copy the crypto.xml and dcm.xml files from TIO_HOME\config to the TPMCFG directory
11. Edit the new dcm.xml file and modify the JDBC URL in it to point at your TPM database (the surrounding XML, elided here, stays as copied):
...
jdbc:db2://<servername>:<db2port>/tc

The server name will be the name of the server where your TPM TC database resides (this should be the TPM server). The DB2 port is the port DB2 listens on; most likely this is 50000, or 60000 (the latter seems to happen more on Linux).
    12. Run the eclipse.exe from C:\Program Files\eclipse
    13. Go to Window -> Preferences -> Automation Package -> Database
    14. Press the Import button and select the dcm.xml file from the TPMCFG directory. Confirm all the settings are correct. I had to modify the password as this did not seem to import correctly.
15. In the Import driver section, I had to select the db2jcc.jar file and press the Add button, or I received DB connection errors.
    16. Press OK and restart the APDE

If you want to set up the APDE to allow for dynamic installs of the automation packages, the Deployment Engine information needs to be configured. Go to the Deployment Engine section and change the Host Name to the TPM server name. The remote and local directories will also need to be set.

    Now it should be all good to go!

    Martin Carnegie

    Tivoli Provisioning Manager - Fix Pack 2

    I am currently going through the Fix Pack 2 install for TPM (should be the same for TPM for Software) and thought I would make some notes.

The fix became available on June 3rd, but I was too fast on the draw and found out that the files were not uploaded completely. Oops. Then on the 5th, I saw that the files were now quite a bit bigger. Each of the files for the various OSs is about 1.8GB. OUCH!

For the patch information, go to http://www-1.ibm.com/support/docview.wss?uid=swg24016022&loc=en_US&cs=UTF-8&lang=all

The install was quite easy and I did not see any errors. One thing to note: make sure you are logged on as the tioadmin account. This is a lesson I learned when installing FP01. I know the docs say this, but I thought that as Administrator (yes, only Windows so far) I should be able to do it. Boy, was I wrong. At some point the install of TPM changes the security on various files in a way that does not even allow Administrator access.

    So what is new?

    Endpoint Tasks
    This seemed to be something missing from GA and FP01 that was available in TCM. This allows for the creation of a Task much like we saw in TCM. In fact I think that it is much easier. The basic steps are:
    1. Create a script and put it in the LocalFileRepository (and a sub-directory if you want)
    2. In the TPM web interface, go to Task Management and there is a new section called Endpoint Tasks.
    3. Select Edit -> Create Endpoint Task
    4. Enter the task name and description
    5. Select the files to send from the repository. You can define different files for different operating systems in the same task.
6. Define any parameters for the task
    7. View the summary and then press Finish and the task is ready to go!

One thing to note: this only works if SOA is enabled. In my quick search of the docs, I did not see this listed as a requirement, but the message I saw when I tried to send without SOA enabled sort of pointed to the problem. The message was "COPDEX123E The workflow threw a NotSupportedInDE exception. The message is This workflow is a template workflow for TMF and TCA EndPointTask only."

    "Automatic" agent upgrade
In previous versions, upgrading the TCA was a bit of a pain. Usually it was better just to remove the existing agent and reinstall the new version. This would obviously be a nightmare in any real environment. So now there is a workflow called "TCA_Agent_Upgradeable.wkf" that can be wrapped as a Favorite Task and allows for the upgrading of agents. For the couple of workstations I tried this on, the upgrade worked. For more information on the process, check out "Upgrading the common agent" on Info Center.

    Patching for HP-UX and AIX
    I have not been able to try either of these, but I thought I would list them anyway. I know that AIX patching was there before, but according to people I talked to at the TTUG, it did not work. I was told that FP02 is where you have to be to make this work. HP-UX patching is new though and I am sure that people are looking for this.

    Java Web Start for the Software Package Editor (SPE)
    This is a nice improvement. Now you do not need to have Eclipse installed on your computer to use the SPE.

You do require Java to be installed, at a minimum level of 1.4.2. Once Java is installed, open your web browser and go to "https://<tpmserver>:9045/SPEWebStart/spe.jnlp". This will start Java Web Start, download the required files and then start the SPE. Once the SPE is started, you will have to go to Settings -> Preferences to configure the web server name, port and path. You will also have to configure whether you are using SSL, and the default paths to use. Check out "Launching and configuring the Software Package Editor" on Info Center for more information.

    New SOAP commands for Packaging
    Some new SOAP commands were created to allow for the creation, distribution, install and uninstall of a software package. There does not seem to be much information in the docs about this yet. The only one I have found is around creating the SPB. I will keep looking and post more later.

    Unified Patch Management
You can now manage patches in TCM from TPM and be rid of that "Automation Server" that TCM used. This is a scenario where I can see TPM being used as a first entry point. I personally think that the patch management facility works fairly well in TPM. The one that came with TCM was a real pain to install and use. So by implementing TPM and importing your TCM environment into it, you can manage all the patches from TPM and still do all the other stuff from TCM.

    Note: This is only for Windows right now.

    Defects Fixed
    There are around 720 defects that have been fixed! For a complete listing, go to http://www3.software.ibm.com/ibmdl/pub/software/tivoli_support/patches/patches_5.1.0/5.1.0-TIV-TPM-FP0002/5.1.0-TIV-TPM-FP0002.DEFECTS.HTM

    Extra Notes
    Now that I have FP02 installed, it is time to start working through some of the features in more detail. Hopefully this blog has given you a little help in seeing what is new.

    I see that IBM is putting a lot of work in this product and I think they are making progress in many areas.

    The one area that still needs work is the installer. Unfortunately, this cannot be addressed in a fix pack. I have heard rumors that there will be a new installer in the next version (5.2??), but these are only rumors.

    Some things that I see help improve the success of an install are pre-installing the DB2 and WebSphere products. These seem to be the two trouble areas, especially if you are using machines that are not up to the minimum requirements (like a test lab).

    I also think that a fully manual process should be available to install TPM. I have done this and pretty much got everything to work, but it was a fight as I was working in areas that are not even close to documented. All I can say is thank you VMWare for snapshots :)

So now you are wondering if TPM is ready for your environment. Well, here is the straight answer: yes, and well, no.

    If you are new to the Tivoli environment and you want TPM or TCM, go to TPM now. There is no point in using TCM. There are some issues with TPM, but learning TCM now and then learning TPM later is going to really hurt. Learn TPM now, get the users on it and don't look back.

If you are a current TCM user, then I would definitely have TPM in the test lab right now at least. The biggest problem for current TCM users that I have seen is that there have been so many processes built around TCM (web pages, reports, data sharing/use) that it is going to take a long time to move everything over. Also, the whole push vs. pull thing is a big difference for TCM people. We have spent years managing client expectations that when we submit something, it will start almost right away. With TPM, this is no longer true. There are some configurations that can be done to decrease the waiting (polling), but they can severely impact the performance of the product. So there is going to be quite a bit of work to change these expectations, unless something is done to allow for a "push now" option.

    Ok this is enough for now, my fingers are bleeding from all the typing. I will keep you posted on anything I find new and interesting. If you have any questions, feel free to comment or send me an email (martin dot carnegie at gulfsoft dot com)

    PERL Postemsg Script and Module

    This has been around a long time, enjoy.

    PERL Postemsg and Module...

    Netcool overview for Tivoli folks

    The Netcool products are definitely upon us, and I just wanted to write a short description of some of the different products and how they fit in, from a traditional Tivoli perspective.

    Tivoli has stated that the future of event management will be Netcool/Omnibus, and TBSM 4.1 *IS* Netcool/RAD, along with some additional cool integration pieces, so I'm going to focus on those products and their prerequisites.

    Netcool/Omnibus is used for event management. It consists primarily of an ObjectServer and a database (Postgres by default). The ObjectServer receives events and performs functions similar to those provided by tec_server, tec_reception, tec_task, tec_rule and tec_dispatch. Specifically, it receives events, processes them according to configurable/definable rules, and writes them to the database. The rules you define can also perform automation - sending email, instant messages, running programs, etc.

The Omnibus database itself is quite a bit nicer (in my opinion) than the TEC database. There is essentially ONE table that contains your event information: alerts.status (there are a couple of others, alerts.details and alerts.journal, that may contain information about events, but alerts.status is the primary one). All of an event's slots map to columns in this table, and if you define an event that needs more slots/attributes, you need to modify this table. This makes it a little less flexible than TEC's TEC_T_SLOTS table, but that's a good tradeoff in my mind (to this day I haven't been able to find a single SQL query that will show me all of the slots for all of the events with a particular value in the 'hostname' slot, for example).

    The user interface for Omnibus itself is about as basic as the TEC event viewer. But because you normally will use other products along with Omnibus, most users won't actually see the Omnibus interface - they will use something nicer (like TBSM or Impact) as a front-end.

    Defining the automation for Omnibus (using the Netcool/Omnibus Administrator tool, 'nco_config'), should be familiar to all TEC administrators out there. You have to write code to perform any automated actions you want. The product doesn't have any wizard interfaces, but Netcool/Impact DOES (more on this in a bit).

    TBSM (Tivoli Business Service Manager) 4.1 sits primarily on top of the Omnibus database, though it can take feeds from a large number of different sources. Whereas previous versions of TBSM required that you send specially-formatted events to TBSM, this version performs the same function in a much more straightforward manner: by reading the database. As an administrator, you need to define your different business service hierarchy, and in doing so, you need to define which events affect the status of each service. You define these filters through the web-based interface, which is based on Netcool/Webtop, which is based on the Netcool/GUI Foundation.

    TBSM 4.1 also includes some functionality that was not in RAD 3.x. Specifically, there is a new TEC EIF Probe, which allows the included Omnibus ObjectServer to look EXACTLY like a TEC server. This means that you can point existing TEC adapters to your Omnibus host as the EventServer. This piece also allows you to perform attribute mapping so that your events come in correctly.

    Another new feature in TBSM 4.1 is that it can import Discovery Library Adapter (DLA) books that are created by other products. Most notably, it accepts the output from the ITM 6.x DLA, and even has rules built-in to handle events from ITM. Here's what makes this so cool:

    - You can generate the book in your ITM environment. This book (a file containing IDML [an XML language] data) contains information about all of the agents you've deployed in your environment.
    - You then have all of your agents visible within TBSM, and they can be included in any services you define.
    - If you point your TEMS to send events to the Omnibus ObjectServer being monitored by TBSM, your systems that were imported from ITM 6.x will turn yellow or red.

    TBSM 4.1 ALSO has tight integration with the CDT portion of CCMDB (aka TADDM). You can pull information from TADDM using DLA books OR using direct API access. This type of integration allows you to view CI status changes directly in TBSM. Additionally, you can launch the TEP or the TADDM GUI in-context directly from TBSM.

    This level of out-of-the-box integration is what a lot of us have been hoping for for a long time. Additionally, TEC event synchronization capabilities are easily configured.

    If you can't tell, I REALLY like this newest version of TBSM 4.1. It doesn't have nearly the complexity of earlier versions, AND leverages your event repository directly (the Omnibus database). Additionally, it ships with robust integration with ITM and TEC, which will make the transition off of TEC very easy for the vast majority of customers. For most customers who are using TEC for complex processing, it won't take too much effort to integrate Omnibus into your event management structure.

    TBSM 4.1 also has an SLA (Service Level Agreement) component that can be used to track and manage your SLAs. Tivoli is still selling the TSLA product separately, and I believe they will keep offering that product, so hopefully they will soon come out with a statement of direction in this area.

    With TBSM, you also get Netcool/Impact for no additional fee. *This* is the product that many of you have seen demonstrated with the ability to define your event management rules and automation just by clicking and dragging. This is accomplished through the Wizards that are included. Those wizards will guide you through many common automation tasks (running an external command, sending email, etc.), though like any wizards, to perform complex operations, you'll need to write code directly.

    The main interface for Netcool/Impact is web based, and therefore, like TBSM 4.1, requires Webtop and the GUI Foundation.

    Netcool also has a very granular security structure, where you can define exactly which users can access which resources depending on which tool they use for that access.

Notice in all of the above that Netcool has no interface that competes with the ITM 6.1 TEP interface - all of the Netcool interfaces above are based on events being generated. That's a good thing, as this (to me) clearly indicates that the TEP is *the* operational view for real-time metrics moving forward.

    That's all for now. There are definitely other Netcool products that I didn't touch on (Precision IP, Reporter, Proviso and Visionary, among a few others), but we will hopefully address those in a similar article soon. I know in particular that there is lots of interest in Tivoli reporting capabilities, and that Netcool/Reporter sounds like a good product to address that. However, Tivoli has announced that their future reporting solution will be based on the open-source BIRT (Business Intelligence Reporting Tool) application, so I don't really want to touch on that until Tivoli announces a more concrete direction.

    Stopping situations on Remote TEMS

While it is advisable to manage your situations from the hub, sometimes you might want to disable a situation on just one remote TEMS (RTEMS). To do that, you need to use an undocumented/unsupported SOAP call. Please read on to learn more.

    The following SOAP call seems to work for me.


<CT_Deactivate>
    <userid>sysadmin</userid>
    <password>something</password>
    <TEMSNAME>REMOTE_TEMS1</TEMSNAME>
    <object>NT_Service_Error</object>
    <type>situation</type>
</CT_Deactivate>


The key is the TEMSNAME tag. That tag is undocumented for obvious reasons.
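To actually send the call, save it to a file and post it to the hub's SOAP endpoint. One way is the kshsoap command-line SOAP client that ships with ITM (file names here are illustrative):

kshsoap stopsit.xml urls.txt

Here stopsit.xml contains the SOAP call above and urls.txt contains a single line with the hub endpoint, e.g. http://hubhost:1920///cms/soap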

    ITM Events with wrong severity - A fix

If you are running ITM with fixpack 02 or later, you might have noticed that ITM events arrive at TEC with a severity (usually UNKNOWN) different from the one specified in the situation. This article explains a fix for this problem.

There are two issues here. First, there is a new parameter that should be added to KFWENV as of FP02; without it, TEPS will incorrectly store the severity field (SITINFO) inside the situation definition. Second, after setting this parameter, we still need to manually modify the existing situations to have the correct severity.

    Adding new parameter to KFWENV/cq.ini

Edit your KFWENV/cq.ini file and add KFW_CMW_SET_TEC_SEV=Y to it. This parameter will take care of setting the correct severity within the situation definitions.

    Modifying severity in existing situations

Using a simple tacmd viewsit, you can export the situation definitions, modify the SITINFO field to the right severity, and import them again using the tacmd createsit command. But this is rather destructive, as it involves deleting the existing situation and creating a new one.

One relatively easy method is to use gbscmd. If you would like to learn more about gbscmd, please read this article. For example, the following gbscmd command changes the situation severity on the fly.

gbscmd executesql --auth <authfile> --sql "UPDATE O4SRV.TSITDESC SET SITINFO='<new sitinfo>' WHERE SITNAME='<sitname>'" --table O4SRV.UTCTIME

The new SITINFO should be exactly like the previous SITINFO field, but with the severity changed to the correct one.
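For illustration only (the situation name here is made up, and a real SITINFO string typically carries more than the severity token, so copy yours from TSITDESC and change only the severity):

gbscmd executesql --auth ~/itm61.auth --sql "UPDATE O4SRV.TSITDESC SET SITINFO='SEV=Critical' WHERE SITNAME='NT_Service_Error'" --table O4SRV.UTCTIME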

    New SOAP Methods for ITM 6.1 - It's a summer blockbuster

One of the big requests in ITM 6.1 is "task"-like functionality; so far we have been limited to situation actions - effective, but not something we can program around.

Enter Remote_System_Command and Local_System_Command. Just like they sound, they allow system commands to be run on remote (or local) systems.

    Read the attached for usage samples and details.

    Get Your KICK butt SOAP Methods here...

ITM 6.1 - Netcool Omnibus Integration Steps

    As IBM moves to Netcool Omnibus as its primary event handling mechanism, it is imperative to understand the integration mechanism of Omnibus with its other famous cousin, ITM 6.1. This article gives you a high-level overview of how the integration works and gives you necessary instructions to get the integration working.

    Terminology
ObjectServer - The Omnibus ObjectServer is the in-memory database server at the core of Netcool Omnibus. The ObjectServer receives events from various sources and processes/displays them according to its configuration. This is analogous to the TEC event server.
Probe - A probe is an Omnibus component that connects to an event source, receives events from that source and forwards them to the ObjectServer. For ITM integration, we need to use the Tivoli Event Adapter probe, a.k.a. the TME10tecad probe.
EventList - A GUI application that shows the events received by Netcool Omnibus. You can bring up the EventList by typing nco_event in your Unix terminal. Make sure you have an X-Windows server such as Exceed running.
    How does the integration work?
ITM uses the OTEA (Omegamon TEC Event Adapter) to send events to TEC. The OTEA is very similar to an NT event adapter/TEC event adapter. To send events to Omnibus, you just need to install a TEC Event Adapter probe on a system and modify the ITM om_tec.config file to point to the server running the probe. To ITM, the probe will appear to be a TEC server. The probe, on receiving events from ITM, will forward them to the real Omnibus ObjectServer specified in its rules file. The following diagram shows the integration of these components.
    Downloads
    Setting up the software from scratch requires quite a few downloads from the Netcool/IBM download site. The list of products needed is given below.
  • Netcool/OMNIbus, v7.1, AIX 5L 5X, HP-UX 11, HP-UX 10.x, Solaris (Sun Microsystems), Windows 2000/XP/Version 7.1 (Download Omnibus V7.1 for your platform, License Server and License file)
  • NetCool/Omnibus User, V7.1, AIX 5L 5X, HP-UX 11, HP-UX 10.x, Solaris (Sun Microsystems), Windows 2000 Version 7.1 (just the .lic file only)
  • Netcool/Omnibus Probes for Nonnative-base - eAssembly
  • Netcool/Omnibus Probes for Tivoli EIF eAssembly.
  • Download the Tivoli & Netcool Event Flow Integration solution from OPAL (TEC_Omnibus_IntegrationFlows.tar). The latest version as of this writing is V3.0.
    Integration Steps
    Integrating ITM 6.1 with Omnibus involves the following major activities:
    1. Install Omnibus and create an ObjectServer (if needed).
    2. Install the Tivoli Event Adapter probe and point it to the ObjectServer created above.
    3. Modify om_tec.config in the ITM environment to point to the Event Adapter probe.
    4. Reconfigure the hub TEMS and specify the probe server as your TEC server.
    Installation Steps - A Quick Overview
    The following steps are needed to get the ITM-Omnibus integration working from scratch.
    1. Install Netcool Omnibus V7.1.
    2. Install the Netcool license server.
    3. Install your licenses in the $NCHOME/license/etc folder.
    4. Create a Netcool ObjectServer to which events will be sent.
    5. Start the license server and the ObjectServer.
    6. Install Omnibus probe support on your probe server.
    7. Install the Netcool/Omnibus probes library for Non-native base on the probe server.
    8. Install the Netcool/Omnibus probe binary for Tivoli EIF on the probe server.
    9. On the probe server, extract the ITM Omnibus integration solution that you downloaded from OPAL and copy the tme10tecad.rules file to the %OMNIHOME%\probes\ directory.
    10. On the probe server, create a file called %OMNIHOME%\probes\win32\tivoli_eif.props and add the following lines to it:
    PortNumber : 5529
    Inactivity : 3600
    11. Bring up the Server Editor and add the entries (ObjectServer name, hostname and port number) for your Netcool server.
    12. Start the probe by starting the "NCO Nonnative Probe" service.
    13. Change om_tec.config on the ITM hub server to reflect the connectivity information of the probe server that you configured in step 10 (a sample snippet follows these steps).
    14. Fire a test situation and ensure that the event is received by Netcool. You can verify this by bringing up the Omnibus EventList (nco_event).
    [Screenshot: events appearing in the Omnibus EventList]
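    For reference, here is a minimal sketch of what the relevant om_tec.config entries from step 13 might look like. The hostname is hypothetical; port 5529 matches the PortNumber configured in the probe's props file above.

    ServerLocation=probehost.example.com
    ServerPort=5529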

    Running the State Based Correlation Engine from ITM

    The ITM TEC integration provides the ability for situation events and ITM status events to be forwarded to TEC or Omnibus. Enabling the integration is fairly straightforward, but what is lacking is the ability to manipulate events as they are emitted from ITM. Some control over events can be achieved using the XML map files located in the TECLIB directory, but this level of control does not allow events to be manipulated programmatically. Any enrichment or correlation of events that could not be accomplished in a map file had to be done in TEC.
    Until now.
    The State Based Correlation Engine (SCE) can be run from any of the recent TEC EIF adapters, and in reality the ITM TEC integration is simply a C-based event adapter. Using the SCE allows ITM events to be manipulated and correlated before they are sent to TEC.
    Running the SCE from ITM requires a little work. In this example I will use a Linux TEMS and implement the Gulfsoft SCEJavascript custom action to manipulate ITM events using Javascript programs.

    First, acquire the JAR files required to run the State based Correlation Engine from a TEC installation. The files needed are:
    zce.jar
    log.jar
    xerces-3.2.1.jar
    evd.jar

    Also required is the DTD file for your XML rules file. In this case I will use and modify the default XML rules file.
    tecroot.xml
    tecsce.dtd

    Create a directory such as /opt/IBM/ITM/sce and copy the files listed above to this directory.

    Since we will be implementing the SCEJavascript custom action, we will also need scejavascript.jar and js.jar (included in the Gulfsoft package); both files should also be copied to this directory.

    Next we will have to modify the TEMS configuration file to successfully run the SCE. The file is named (on Linux) $CANDLEHOME/config/${HOSTNAME}_ms_${HUB_NAME}.config and contains environment variable settings for the TEMS.

    Find the entry for LD_LIBRARY_PATH and add
    /opt/IBM/ITM/JRE/li6243/bin:/opt/IBM/ITM/JRE/li6243/bin/classic
    to the existing entry. Depending on where ITM is installed and the version of Linux, the path may be different. As you can guess, I will be using the ITM-provided Java for this example, so there will be no need to download and install another JRE unless you really want to. Also in this file we will set up the initial CLASSPATH environment variable and point it to the minimum required JAR files:
    CLASSPATH='/opt/IBM/ITM/sce/zce.jar:/opt/IBM/ITM/sce/log.jar:/opt/IBM/ITM/sce/xerces-3.2.1.jar:/opt/IBM/ITM/sce/evd.jar'

    Be sure to add CLASSPATH to the export list.
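    Putting those edits together, and assuming the config file uses shell-style assignments (which the export list suggests), the relevant portion of the TEMS config file might look like this sketch (paths assume the default /opt/IBM/ITM install and the li6243 JRE directory used above):

    LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/opt/IBM/ITM/JRE/li6243/bin:/opt/IBM/ITM/JRE/li6243/bin/classic"
    CLASSPATH='/opt/IBM/ITM/sce/zce.jar:/opt/IBM/ITM/sce/log.jar:/opt/IBM/ITM/sce/xerces-3.2.1.jar:/opt/IBM/ITM/sce/evd.jar'
    export LD_LIBRARY_PATH CLASSPATH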

    The next step is to modify the $CANDLEHOME/tables/$HTEMS_NAME/TECLIB/om_tec.config file to enable the SCE:
    UseStateCorrelation=YES
    StateCorrelationConfigURL=file:///opt/IBM/ITM/sce/tecroot.xml
    PREPEND_JVMPATH=/opt/IBM/ITM/JRE/li6243/bin
    APPEND_CLASSPATH=/opt/IBM/ITM/sce/js.jar:/opt/IBM/ITM/sce/scejavascript.jar
    #TraceFileName=/tmp/eif_sc.trace
    #TraceLevel=ALL

    Note that we are indicating the location of the ITM provided Java and we are adding the JARs needed to run our custom action.

    The next steps are to configure our Javascript code and modify the tecroot.xml file to run the custom action. The Javascript we will use will be a simple change to the msg attribute:
    function processEvents(events)
    {
        for (var i = 0; i < events.length; i++)
        {
            // Prefix the msg slot of every event with "FOO:"
            var foo = "FOO:";
            events[i].putItem("msg", foo.concat(events[i].getString("msg")));
        }
    }

    We will call this file test.js and save it in /opt/IBM/ITM/sce.

    Finally we will modify the tecroot.xml file to run the custom action:
    <?xml version="1.0"?>
    <!DOCTYPE rules SYSTEM "tecsce.dtd">

    <rules predicateLib="ZCE">

        <predicateLib name="ZCE"
            class="com.tivoli.zce.predicates.zce.parser.ZCEPredicateBuilder">
            <parameter>
                <field>defaultType</field>
                <value>String</value>
            </parameter>
        </predicateLib>

        <rule id="itm61.test">
            <match>
                <predicate>true</predicate>
            </match>
            <action function="SCEJavascript" singleInstance="false">
                <parameters><![CDATA[/opt/IBM/ITM/sce/test.js]]></parameters>
            </action>
        </rule>
    </rules>

    Once all of the changes have been implemented, stop and start the TEMS.

    All of the events that come out of ITM will now have messages starting with "FOO:". Check back for more useful examples...

    Using TPM for patching Windows

    TPM (and TPMfSW) provides the ability to patch Windows computers through a couple different methods. In this blog, I will summarize the various methods.

    There are 2 ways of doing Windows patching in TPM
    1. Using the Deployment Engine
    2. Using Scalable Distribution (SOA)

    So the first thing is to determine the method you are using.

    The Deployment Engine is better suited to a data center environment where the network is not a concern, because the DE does not provide any bandwidth control or checkpoint restart. It does not use the depots for fan-out distributions; it is a straight file copy. With the DE there are actually two methods that can be used. The first (and best) is to have the Windows Update Agents (WUA) talk to an internal WSUS server. The second (which I would not recommend) is to have the WUA talk directly to Microsoft.

    SOA is used for the distributed environment. If you have many computers to distribute to and there are targets on the other end of a slow link you will want to use this method. This requires that the TCA (Tivoli Common Agent) is installed on all target computers and that the SOA-SAP has been enabled. You will also require at least one depot server (CDS).

    If you are using SOA, the TPM server will have to discover and download the patches directly from Microsoft (there is a proxy config you can set too).

    Ok so now you have the method you want to use. How to implement it?

    DE
    In order to use the DE method, the following tasks need to be completed (I am going to assume that you are using the WSUS server method):
    1. Install and configure the WSUS server (approve and download the desired patches)
    2. Set the WSUS server global variable
    After this, the steps for DE and SOA are the same, so I will list them after the SOA tasks.

    SOA
    1. Configure the Windows Updates Discovery configuration.
    2. Execute the Windows Updates Discovery. This will populate the DCM with all patches available according to the filters you set (much like WSUS). Remember, this is only the definitions for the patches, not the binaries required to install them.
    3. Approve the patches
    4. Execute the MS_SOA_DownloadWindowsUpdates task to download the files from Microsoft.

    Common Steps
    Now that the desired repository is set up, you need to complete the following.
    1. Install the WUA on all targets
    2. Create a patching group. Under the Compliance tab, add a security compliance check called Operating System Patches and Updates.
    3. Execute the Microsoft WUA Scan discovery configuration
    4. In the Compliance tab, select Run -> Run Compliance Check. Once the task is complete, the Compliance tab will show if there are computers out of compliance.
    5. Click on the number under the Compliant header (something like 0/1)
    6. Select the desired patches and computers and press the Approve button.
    7. Select the desired patches and computers and press the Run/Schedule button (Note: the Run button does not work for SOA distributions)
    8. Once the distributions are complete, run the Microsoft WUA Scan again and then the Run Compliance Check.

    Done!

    Let me know if you have any comments/questions. Complaints > /dev/null ;)

    Thanks to Venkat for all his help!

    Martin Carnegie
    martin dot carnegie at gulfsoft dot com

    ITM 6.1 FP05 - NEW - What's a widget?

    If you have started looking at Fix Pack 5 for ITM 6.1, you may have noticed (on the Windows version fix pack) that there is a directory called Widget.

    This directory contains the Tivoli Widget Engine. These "widgets" are very similar to the widgets you would use with Google Desktop or Mac OS X: little graphical Java programs that execute a SOAP call to your hub TEMS via the Tivoli Widget Engine. You can set transparency and opacity so you can see through the widgets. Each widget must be configured, and you need to understand the formatting of the SOAP requests to configure them properly. The widget engine must be installed locally on each workstation, and configuration must be performed locally too.

    So, my final comment is this - I would rather have seen improvements in security and a lightweight web interface to my agents than this workstation-based solution. Maybe this is the direction of the "web mash" - but until the basics are solid (INCLUDING SCALE), I think more effort should be put into the core product.

    TTUC Presentation

    Many of you did not get the handouts at my presentation during last week's Tivoli Technical Users Conference.

    Here is the URL to download it in PDF format:



    TTUC Presentation

    ITM Fixpack 05 is available now!

    As expected, IBM released Fixpack 05 for ITM V6.1 today. The fixpack readme is available at http://www3.software.ibm.com/ibmdl/pub/software/tivoli_support/patches/patches_6.1.0/6.1.0-TIV-ITM-FP0005/itmfp5_add.pdf

    Stay tuned for more updates about Fixpack 05 in our future articles.

    Introduction to ITM 6.1 Policies

    Policies are the "grey area" of ITM. Everyone knows what they are, but only a few really implement them. While there are reasons for not making them your primary event handling mechanism, there are enough reasons to rely on ITM policy automation for some of your simpler needs. This article lists some example scenarios for implementing ITM policies.

    Policies - A quick look

    ITM 6.1 policies provide simple if-then-else automation, and they can be used to take actions based on situations. For example, if most of your event handling involves running a script to take some action, such as sending an email or running a recovery script, you could easily implement it in ITM policies.

    When to use them?

    Here are some scenarios where policies may be the right fit.

    1) If you don't have Framework, or are planning to move away from it, then ITM policies might be the way to go.
    2) For small environments where the volume of events is very low.
    3) Policies don't provide an explicit event-tracking mechanism, so your scripts must provide any necessary logging.
    4) You have only a small number of situations to manage.
    5) All your response actions are very simple and don't involve complex event correlation.

    Example 1: Sending emails for alerts

    To send an email alert for a situation, use the "Wait until a situation becomes true" activity and the "Take action" activity, and connect them using the "Situation is true" connector. In the take action, choose "Run a system command" and specify the script that will send the email alert. By clicking the "More options" button, make sure that you execute this action at the TEMS or at a specific node where the script is available. A minimal sketch of such a script follows.
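    Here is a minimal sketch of an alert script, assuming a Unix TEMS with a working mail command. The script name, recipient address and argument layout are all hypothetical - pass whatever situation attributes you need via attribute substitution.

    #!/bin/sh
    # send_alert.sh - minimal ITM policy alert mailer (hypothetical example)
    # Arguments: $1 = situation name, $2 = managed system (via attribute substitution)
    SUBJECT="ITM alert: situation $1 fired on $2"
    echo "$SUBJECT at `date`" | mail -s "$SUBJECT" oncall@example.com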


    Example 2: Restarting Windows Services when they go down.

    To restart a Windows service when it goes down, set up a situation to monitor the service and use a mechanism similar to the one above, except that in the "Take Action" field, use "net start &NT_Services.Service_Name". You can enter the service name by using the attribute substitution button.

    Policy Distribution

    Once the policy has been created, it needs to be distributed to the managed systems or managed system lists whose situations it should act on. Click the distribute check box next to the policy name to bring up the Policy Distribution window. This process is similar to situation distribution.

    Start Policy

    The policy will not be effective until you start it. On the other hand, if you would like to disable a policy for a while, you can stop it. Make sure the AutoStart attribute is set appropriately so that your policy takes effect across server restarts.

    There are a few more interesting combinations possible with policies; start playing with them and you never know when they will come in handy. Good luck.

    Windows XP and Vmware Tips

    I have had a few issues related to VMware, a slow hard disk in Windows XP and long boot times in Windows XP. I thought I would share the solutions for these issues with all of you.

    Slow Hard disk in Windows XP

    You wouldn't think your brand new computer could be running less efficiently than a 20-year-old PC, would you? Mine did for some time, and I didn't even realize it. My computer was using a lot of CPU even for mundane tasks such as copying files, and the performance was getting worse. Process Explorer showed hardware interrupts taking 70-80% of the CPU.

    The reason? The hard disk was running in Programmed I/O (PIO) mode, in which the CPU is responsible for data transfer instead of the DMA (Direct Memory Access) controller. Right-click My Computer -> Properties -> Hardware -> Device Manager. Expand IDE ATA/ATAPI controllers, right-click Primary IDE Channel and choose Properties. Go to the Advanced Settings tab and check the Current Transfer Mode. It should be Ultra DMA, NOT PIO.

    If it is PIO, just go to the Driver tab, click "Uninstall Driver" and reboot twice. If you would like to learn about this in depth, here are two good sources.

    http://support.microsoft.com/kb/817472
    http://winhlp.com/WxDMA.htm

    Slow Boot time in Windows XP

    Does your system stay a long time at the Windows XP logo screen? It could be due to a corrupt program in the Windows Prefetch directory, where Windows stores frequently used programs for faster fetching. Delete the C:\windows\prefetch\*.pf files and reboot your computer.

    Virtual machine fails to boot

    I have some of my VMs running on an NTFS filesystem mounted on a Linux box using the ntfs-3g driver. The net effect is that disk performance is relatively slow. The same could be said of USB 1.1 hard drives and network-mounted drives. If you are running your VMs from any of these and a virtual machine fails to boot, try adding the following line to your *.vmx configuration file.

    mainMem.useNamedFile = "False"

    Hope you find these tips useful.

    Perl Module for Tivoli Access Manager (TAM) is available

    I was just searching around and found this nifty Perl module for TAM administration functions available on CPAN.

    http://search.cpan.org/~chlige/TAM-Admin-0.35/Admin.pm

    Thursday, March 13, 2008

    Script to read any Windows Event/Application Log

    A while back there was a discussion on the TME10 list about reading a custom application event log that a developer was using. The out-of-the-box problem is that the ITM agents don't allow you to specify a log to look at; they only look at the basic Windows logs. The ultimate solution is to write a script to grab the data.

    So here is a simple Perl script that will read all messages from all Windows event/application logs. With a little tweaking, it can be used to feed a Universal Agent that will report to ITM based on event number or log file name.

    use strict;
    use Win32::OLE('in');

    use constant wbemFlagReturnImmediately => 0x10;
    use constant wbemFlagForwardOnly => 0x20;

    my @computers = ("localhost");
    foreach my $computer (@computers) {
        print "\n";
        print "==========================================\n";
        print "Computer: $computer\n";
        print "==========================================\n";

        my $objWMIService = Win32::OLE->GetObject("winmgmts:\\\\$computer\\root\\CIMV2") or die "WMI connection failed.\n";
        # Win32_NTLogEventLog associates each event record with the log file that contains it
        my $colItems = $objWMIService->ExecQuery("SELECT * FROM Win32_NTLogEventLog", "WQL",
            wbemFlagReturnImmediately | wbemFlagForwardOnly);

        foreach my $objItem (in $colItems) {
            print "Log: $objItem->{Log}\n";
            print "Record: $objItem->{Record}\n";
            print "\n";
        }
    }

    Determining which executable has a port open on Windows

    A while back I wrote an article about using 'netstat -o' to find out which PID had a particular port open on Windows (you can use 'lsof' on Linux/Unix). Well, it turns out that on Windows an additional flag will give you even more information.

    Specifically, the addition of the '-b' flag will tell you which executable has which port open. Here's an example of the command and a snippet of its output:

    C:\> netstat -bona

    Active Connections

    Proto Local Address Foreign Address State PID
    TCP 0.0.0.0:135 0.0.0.0:0 LISTENING 932
    RpcSs
    [svchost.exe]
    TCP 0.0.0.0:554 0.0.0.0:0 LISTENING 5652
    [wmpnetwk.exe]
    TCP 0.0.0.0:912 0.0.0.0:0 LISTENING 3204
    [vmware-authd.exe]
    TCP 0.0.0.0:990 0.0.0.0:0 LISTENING 1616
    WcesComm
    [svchost.exe]
    TCP 0.0.0.0:3389 0.0.0.0:0 LISTENING 1628
    Dnscache

    NOTE: If you try to run this on Vista as anyone other than Administrator, you'll get an error stating "The requested operation requires elevation." To get around this:

    RIGHT-Click on Start->All Programs->Accessories->Command Prompt, and select "Run As Administrator"

    Then you can run the command from that new command prompt.

    SCE JavaScript Action

    The State Based Correlation Engine (SCE) is a powerful tool for filtering and applying simple correlation rules to TEC events before TEC rules are applied, but a major deficiency is the lack of a general-purpose action for manipulating events. To manipulate events in ways beyond the simple methods provided by the supplied actions, one had to develop a new Java class.

    Presented here is a general-purpose SCE action that embeds the Rhino JavaScript engine, enabling the use of JavaScript programs to manipulate events.

    The SCE JavaScript custom action can be downloaded here.

    Key features include:


    • Add, delete, and modify event attributes (slots)
    • Change event class (type)
    • Generate new events
    • Discard events
    • JavaScript regular expressions
    • Automatic conversion of Prolog list types to JavaScript arrays and back
    • Forward events to other SCE rules
    • Access SCE variables
    • Get and set SCE rule properties
    • Rhino Live Connect to access native Java objects such as JDBC
    • Optional mapping of event attributes to JavaScript properties of an Event object


      The README file contains examples of event flood detection and handling, and of JDBC (MySQL) event enrichment using Rhino's Live Connect feature.
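      As a small taste, here is a sketch combining two of the features above - JavaScript regular expressions and slot modification - using only the processEvents/getString/putItem interface shown in the SCE article earlier on this page. The slot values and pattern are illustrative, not taken from the package.

      function processEvents(events)
      {
          for (var i = 0; i < events.length; i++)
          {
              // Escalate any event whose msg slot mentions a timeout (pattern is illustrative)
              var msg = events[i].getString("msg");
              if (/time *out/i.test(msg))
              {
                  events[i].putItem("severity", "CRITICAL");
              }
          }
      }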

    Three simple rules for Universal agent naming

    Most of you know that Universal Agent application naming is very restrictive. Sometimes we spend hours troubleshooting Universal Agent issues only to figure out later that the issue was caused by bad application naming. This article lists three simple rules for naming your Universal Agent applications.

    Name your application using 3 letters

    You can use more than three letters for the application name, but only the first three letters matter and they must be unique.

    Avoid names starting with 'K'

    In ITM 6.1, application names starting with 'K' are reserved. Avoid them.

    Avoid special characters

    The Universal Agent application naming convention allows a few special characters, such as the underscore, but not all of them. To keep your naming convention simple, avoid special characters altogether and use only letters and numbers.

    Hope this saves some time during your next UA development.

    Tivoli Provisioning Manager Basic Terminology

    As TPM gains a foothold in the Tivoli product space as the successor to ITCM 4.2, it is important to learn some basic terminology to understand the upcoming lingo. This article gives you a basic idea of some TPM terms. Credit for this article is shared with Martin Carnegie.

    TPM Product Names

    Since there are quite a few products under the TPM product family, you might want to read some of our earlier blogs about the TPM products. Here are a couple of links.

    1. http://www.gulfsoft.com/blog_new/index.php?name=News&file=article&sid=351
    2. http://www.gulfsoft.com/blog_new/index.php?name=News&file=article&sid=382

    TPM Server

    TPM Server is roughly the equivalent of TMR Server in the ITCM 4.2 world. The server initiates all the TPM operations such as scans, software distribution, etc.

    Tivoli Common Agent (TCA)

    The TCA is similar to the endpoint in ITCM. However, in TPM you can perform operations such as basic scans and software installs even without the TCA code present.

    Content Delivery Service (CDS)

    The Content Delivery Service can be roughly considered an "MDist2"-like service in TPM that facilitates data transfer across the TPM environment.

    CDS Management Server

    The Content Delivery Service Management server is the server component of the Content Delivery Service. It can be thought of as a manager of depot servers (see below).

    Depot Servers

    These are equivalent to the gateways in ITCM, from which endpoints pull data. Please note, however, that in TPM the data transfer can skip depot servers depending on the SAP mechanism used.

    Service Access Point (SAP)

    A Service Access Point specifies the underlying protocol and authentication information to use while performing TPM operations. For example, TPM can communicate with an endpoint using an SSH service, a Windows SMB service or a TCA SOA-SAP.

    Workflows

    A workflow is a set of instructions that performs a specific task. For example, a workflow can execute a task, copy a file to a remote machine, etc. In TPM, most operations such as software distribution, TCA installation and inventory scans are implemented as sets of workflows. Custom workflows can be developed using the Jython programming language.

    These are some of the basic TPM terms that come to mind. Hope you find them useful.

    Converting TDW timestamps to DB2 Timestamps

    One of the key aspects of the history tables is timestamps. But as you might have noticed, performing date operations with TDW timestamps is not straightforward. Part of the reason is that TDW uses "Candle timestamps" rather than RDBMS timestamps. This article discusses a way to convert "Candle timestamps" into DB2 timestamps.

    The problem

    First, in TDW the "Candle timestamps" are stored in a CHAR format rather than a TIMESTAMP or DATETIME format. If you would like to use the powerful date functions of the RDBMS, you will not be able to do so unless you convert the string into the RDBMS timestamp format.

    So, are you ready to convert the string into a timestamp using the DB2 TIMESTAMP() function? Well, it is not so simple, my friend! The "Candle timestamps" are stored in a "CYYMMDDhhmmssSSS" format, where C is the century (0 for the 20th, 1 for the 21st, etc). This non-standard format makes it difficult to convert the string into a timestamp directly.

    The solution

    Okay, enough whining. Let us see how it can be converted to a DB2 timestamp. You can use the DB2 SUBSTR() function to extract the desired fields, but that leads to very complicated queries, especially if you need to convert timestamps in multiple places in the same query.

    For a related blog article, see here

    The idea here is to do something similar but define it in a DB2 User Defined Function (UDF). After defining it, we can use the UDF just like any other function.

    Creating a UDF.

    I am going to rush through the UDF creation steps like the guy in the marriage ceremony in the Bud Light Super Bowl commercial. To create a UDF, go to "Control Center" -> Tools -> "Development Center" -> Project -> New Project -> Add Connection -> User Defined Functions -> New SQL User Defined Function.

    Now enter the following UDF definition in the Editor view and click Build. Make sure to replace ITMUSER with your TDW user id. (Note that the UDF hardcodes the '20' century prefix, so it assumes 21st-century timestamps.)


    CREATE FUNCTION ITMUSER.STR2TIMESTAMP (
    STRDATE CHAR(16) )
    RETURNS TIMESTAMP
    LANGUAGE SQL
    DETERMINISTIC
    CONTAINS SQL
    NO EXTERNAL ACTION
    BEGIN
    ATOMIC
    DECLARE RESULT TIMESTAMP ;
    SET RESULT = TIMESTAMP_FORMAT(
    '20' || SUBSTR(STRDATE,2,2) || '-' ||
    SUBSTR(STRDATE,4,2) || '-' ||
    SUBSTR(STRDATE,6,2) || ' ' ||
    SUBSTR(STRDATE,8,2) || ':' ||
    SUBSTR(STRDATE,10,2) || ':' ||
    SUBSTR(STRDATE,12,2), 'YYYY-MM-DD HH24:MI:SS');
    RETURN RESULT;
    END



    Using the UDF in SQL queries

    Once the UDF is defined, converting the "Candle Timestamps" to DB2 is very easy. The following shows an example.


    SELECT "Server_Name", ITMUSER.STR2TIMESTAMP(A."Timestamp"), "%_Disk_Used" From ""NT_Logical_Disk" A


    Hope you find this useful.

    ITM UA Socket data provider tips

    Tips/observations from using the UA socket data provider with Perl while doing work at a customer site.

    ITM 6.1 FP4, SuSE Linux 10

    1. Turn OFF "warn" in Perl (#!/usr/bin/perl -w). I use it all the time, but I was only getting data from the first connection until I turned it off in my script.
    2. TCP works great for scheduled (> 30 sec interval) sending of batch data, but use UDP for sporadic alert-type data that *might* arrive at < 30 sec intervals. Using TCP in the latter scenario causes the UA to silently "miss" the < 30 sec interval data.

    Also, the POST data provider with UDP works great when you can't control the sending program, like with certain network devices.
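    For reference, here is a minimal Perl sketch of tip 2, sending one record to the socket data provider over UDP. The host, port and record layout are hypothetical - they must match the socket data provider and attribute layout defined in your UA metafile.

    #!/usr/bin/perl
    # Note: no -w, per tip 1 above
    use strict;
    use IO::Socket::INET;

    # Hypothetical UA host and socket data provider port
    my $sock = IO::Socket::INET->new(
        PeerAddr => 'uahost.example.com',
        PeerPort => 7777,
        Proto    => 'udp',
    ) or die "socket: $!";

    # One semicolon-delimited record matching the metafile's attribute layout
    $sock->send("MYAPP;ALERT;disk full on /var");
    close($sock);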

    ITM 6.1 Web Page to view Situations on Agents

    Many times people ask to know what situations are running on a given agent in ITM 6.1. I have written an HTML form and a CGI/Perl script to query this data from the HUB and report it back in HTML. It's more convenient than the CLI for casual users, but still requires a login.

    The following PDF describes deployment and usage.

    http://www.gulfsoft.com/downloads/blog_downloads/Viewing_Situations.pdf

    UPDATE - ITM 6.1 Unix/Linux/Windows Agent Upgrades from a Windows HUB

    Below is a script/batch file that will run on a Windows HUB or remote TEMS and take two arguments, upgrading OS agents using a depot on the server where it is run.

    This script has been validated in 2 separate Windows 2003 production environments as well as numerous test systems.

    This may seem trivial, and it is - but it's a huge time saver, plus it logs so you can track your agent upgrades. I just ran it at two customer sites and it was "set it and forget it". I came back a few hours later and my agents were upgraded.

    I capture the resulting error code for troubleshooting later, and I have also set the timeout to 10 minutes instead of the default 5 minutes. This is a fully supportable solution because we are using the standard depot and simply putting a wrapper around the commands, without changing the files in the depot structure.

    This is a batch (.bat) file. Copy the following text, all the way down to and including the word "exit", and paste it into a new file. Save it on your Windows HUB and remote TEMS. Before running this file, log in using the tacmd login command.

    Run the program with 2 arguments. The first is the agent/product code you want to upgrade - acceptable values are NT, UX and LZ for the Windows, Unix and Linux OS agents. The second is the version of those agents that you want to upgrade - acceptable values are 06.10.00.00, 06.10.01.00, 06.10.02.00, 06.10.03.00 and 06.10.04.00. Logging occurs in the directory the program is run from.


    @echo off
    REM This script takes 2 input options. The first is an agent product code. Valid options
    REM are NT, UX and LZ.
    REM The second option is the version of agents that you want to upgrade. Valid options are
    REM 06.10.00.00, 06.10.01.00, 06.10.02.00, 06.10.03.00, 06.10.04.00


    set TACMD_TIMEOUT=10
    tacmd listsystems -t %1 | findstr %2 >upglist.txt

    REM findstr returns ERRORLEVEL 1 when no agents matched the requested version
    if ERRORLEVEL 1 goto NOMATCH

    :UPGRADE
    echo do the upgrade
    for /F %%f IN (upglist.txt) do ( FOR /F "TOKENS=*" %%A IN ('TIME /T') DO SET TIME=%%A
    FOR /F "TOKENS=*" %%B IN ('DATE /T') DO SET DATE=%%B
    echo %DATE%,%TIME%-Currently Upgrading-%%f
    echo %DATE%,%TIME%,Currently Upgrading-%%f,was %2 >>agent_upgrade.txt
    tacmd updateagent -t %1 -n %%f -f
    echo Result Code for %%f is %ERRORLEVEL% >>agent_upgrade.txt)
    goto END

    :NOMATCH
    echo No %1 agents found at version %2 - not upgrading

    :END
    exit

    UPDATE - Here is a shell script version that works on Unix and Linux

    #!/bin/sh
    # Usage: ./upgrade.sh <TYPE> <VERSION>, e.g. ./upgrade.sh LZ 06.10.03.00
    LOG=agent_upgrade.log
    for x in `./tacmd listsystems -t $1 | grep $2 | cut -f1 -d" "`
    do
    DATE=`date`
    if ./tacmd updateagent -t $1 -n $x -f
    then
    echo "$DATE - Upgrade on $x successful" >>$LOG
    else
    echo "$DATE - Upgrade on $x failed" >>$LOG
    fi
    done



    Send events using postemsg to a Netcool Probe Server or Omnibus Server


    Has anyone been able to send events using Tivoli's postemsg to a Netcool ObjectServer or probe server?

    We are a Tivoli shop that has been using TEC, and we are now migrating to Netcool Omnibus.

    We were able to send events from different sources, such as HP OpenView, to TEC using the postemsg command.

    I would like to know: has anyone ever used postemsg to send events directly to a Netcool probe or ObjectServer?

    If yes, how do I configure that?
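    No definitive answer here, but based on the ITM/Omnibus integration article earlier on this page, the EIF probe presents itself to event sources as a TEC server, so pointing postemsg at the probe should work in principle. A sketch, assuming the probe listens on the usual TEC port (5529) and the event class is one your probe rules file handles - the hostnames, config file name and class below are hypothetical:

    # eif.conf (hypothetical) contains:
    #   ServerLocation=probehost.example.com
    #   ServerPort=5529
    postemsg -f eif.conf -r WARNING -m "test event via probe" hostname=myhost Test_Class Test_Source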

    TPM and DST

    According to IBM, there is a fix that needs to be applied to WebSphere to address the DST change. This affects the 5.1 versions of TIO/TPM and TPM for Software.

    For more information check out http://www-1.ibm.com/support/docview.wss?uid=swg21255724

    TPM 5.1 and MS Active Directory issue

    After doing a clean install of TPM 5.1 (no fixpack yet, coming soon) and using MSAD, I was unable to log on to the TPM web page. The message displayed was "You are not authorized to access the system. Contact your system administrator. Click here to try again". This is not the message for a bad password; for that you would see "bad user name or password".

    So after digging around, I found that only the tioappadmin user id and a few of the roles were created. None of the permission groups were created, and the tioappadmin id was not added to the SuperUser group. This happened due to an issue on the Topology Installer's first attempt to perform this step: I had Active Directory set to mixed mode, which does not allow some of the features required by tiodata.ldif.

    To solve this problem (after changing the MSAD mode), I re-imported the tiodata.ldif file (most likely in C:\Documents and Settings\Administrator\Local Settings\temp) using LDIFDE.EXE. Since some of the entries were already created, the -k switch is required to ignore existing entries. The syntax is as follows:
    ldifde -i -k -f tiodata.ldif

    Now it all works :)

    FileServer software distribution in ITCM 4.2.x

    One of the disadvantages of the ITCM software distribution architecture is that depots can be hosted only on managed nodes, so you would have to install a managed node at each slow link to eliminate redundant data transfer across the WAN. But this may not be possible in an enterprise with a large number of slow networks, especially when each network hosts only a handful of systems. ITCM provides a fileserver-based software distribution feature that permits any Windows system to act as a fileserver for software packages, and this article describes my experiences with this feature.

    Methodology

    In this method, any Windows system can act as a fileserver as long as you are able to map a network drive to it. You create an image of the software package SPBs using the "wdepot image" command and transfer the image files (*.toc and *.dat) to the fileserver's share directory using any standard file copy operation, including software distribution data movement operations.

    On each endpoint that receives the fileserver distribution, create a file (if it doesn't already exist) named remote.dir in the LCF_DATDIR directory that lists the fileserver shares. You can designate multiple fileserver shares by listing one share per line; each will be tried until the image is found. One important thing: each endpoint must have the TRAA account enabled.
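    For example, a remote.dir listing two hypothetical fileserver shares (tried in order until the image is found) might look like this:

    \\fileserver1\spbimages
    \\fileserver2\spbimages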

    The distribution method is the same except that you have to check the "From Fileserver" check box in the GUI or set the from_fileserver=y option in the winstsp command. Activity Planner also has an XML tag to enable this feature. When distributed this way, ITCM transfers only the information about the package being distributed to the endpoint and NOT the actual content of the SPB. The endpoint consults the remote.dir file for shares and uses the TRAA account to access the remote shares and install the package. The final status is relayed back over the usual MDist2 communication path.

    Maintenance

    By default, the "wdepot image" command creates two files named nnnnnnnn.toc and nnnnnnnn.dat, where nnnnnnnn is a serial number. If you have a large number of packages, it is difficult to keep track of this number and its associated package, so it is wise to set up a lookup table, preferably using an automation script.
    Keeping track of which fileserver has which package is a complex process as well, and scripts could be developed to maintain this information. Also, you could use software distribution/activity plans to deploy the packages to the remote fileservers.

    FAQs

    Would checkpoint restarts work?

    No. Please understand that with a fileserver distribution, the packages are transferred from the file share using the Windows SMB protocol, and MDist2 is in no way involved in the transfer. So checkpoint restarts will NOT work.

    Will I see a progress of data transfer?

    For normal distributions, you will see the progress of the data transfer using the wmdist command. But again, this is an MDist2 feature, and you will NOT see data transfer progress in the wmdist -I output while using fileserver distribution.

    Can I change the contents of remote.dir dynamically?

    Yes. It is just a text file that will be consulted during fileserver distribution.

    What are the package formats supported?

    SPB only.

    Can I distribute to non-Windows endpoints?

    No. Again, this feature relies on the TRAA account and the SMB protocol.

    Can I host the fileserver on a Unix box running Samba?

    Good question. Worth testing :)

    Hope you find it useful.