Wednesday, November 23, 2011

GBSCMD Performance Improvement Tip

GBSCMD is a Gulf Breeze-developed utility for performing ITM operations from the command line using SOAP.  If you run GBSCMD, you will probably notice performance degradation when the SOAP response is very large.   For example, if you are fetching the list of all managed systems or all situations in an environment with thousands of systems/situations, gbscmd might take quite some time to retrieve this information.  
This degradation is due to the built-in XML parser used by the XML::Simple module. To overcome the issue, you can simply set an environment variable to use a different, more efficient parser, and that will do the trick.
For example, run  "export XML_SIMPLE_PREFERRED_PARSER=XML::Parser" in your environment before running gbscmd and you will notice much better response times even for large datasets. 
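As a tiny sketch, a wrapper along these lines keeps the setting in one place (the gbscmd invocation itself is commented out because its arguments depend on your environment):

```shell
# Tell XML::Simple to prefer the faster XML::Parser backend,
# then run gbscmd as usual in the same shell.
export XML_SIMPLE_PREFERRED_PARSER=XML::Parser
echo "preferred parser: $XML_SIMPLE_PREFERRED_PARSER"
# ./gbscmd <your usual arguments here>
```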
Hope this helps.

Thursday, November 3, 2011

Working with the Deployment Engine

So the IBM Deployment Engine (aka ACSI; I believe the acronym stands for Autonomic Computing Solution Installer) is a really nice product and works very well if you read the documentation and warnings. As luck would have it, I forgot to read some of the documentation and warnings, so I was able to learn some things.

The OMNIbus documentation states that if you use the DE as a regular user, then as root, that all installs from that point forward will only use the DE instance that was installed by root (the global instance). It also states that if you uninstall the global/common instance, then the DE is uninstalled everywhere.

The situation that brought me to this point is that I installed TBSM 6.1 on Linux as the user 'netcool' (so it uses a user-specific DE). I then tried to install CCMDB 7.2.1 on the same machine as the user 'root'. This failed early in the process, but not before a new global/common DE was installed. I gave up my CCMDB install dreams and proceeded to install an OMNIbus probe as the user 'netcool'. This gave me an error that I was currently using the global DE on an installation that had been performed using my user-specific DE and I should abort the installation. After reading the above OMNIbus documentation, I didn't want to uninstall the global DE (for fear that it would wipe out everything and I wouldn't be able to upgrade any products). However, since I had a copy of my VM, I gave it a shot. What I did was:

as root:

cd /usr/ibm/common/acsi/bin
export SI_JAVA_HOME=/usr/ibm/common/acsi/jre
./si_inst -r -f

This scared me a bit because it did COMPLETELY remove the /usr/ibm/common/acsi directory and killed all of the acsi processes ('ps -ef | grep acsi' showed nothing at this point). But my ~netcool/.acsi_* directories were still there (I don't know why, but I have two of these directories - ~/.acsi_netcool and ~/.acsi_myserverhostname). At this point, I re-ran the probe installation as user netcool (nco_install_integration), and I got no error messages, and the install information was added correctly to my local DE instance.
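For anyone in a similar spot, a small check like this (my own sketch, not an IBM-provided script) shows which DE instances exist on a box before you install anything DE-based:

```shell
# Report which Deployment Engine instances are present on this machine.
if [ -d /usr/ibm/common/acsi ]; then
    echo "global (root) DE instance present"
else
    echo "no global DE instance found"
fi
ls -d "$HOME"/.acsi_* 2>/dev/null || echo "no user-specific DE instances found"
```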

And the lesson I learned is that once you install any DE-based product on a machine as a non-root user, all of your subsequent DE-based installs need to be done as non-root users (it doesn't need to be the same user for different products, but you don't want to install anything DE-based as root).

Monday, October 10, 2011

Tips on the ITM Agent for Maximo (product code MI)

I recently found the Tivoli Monitoring Agent for Maximo 7.1.1 Feature Pack and wanted to share some tips on using it. You can find the Feature Pack (with downloads) here:

The instructions for installing the agent assume that you have Maximo, your ITM TEMS, and your ITM TEPS all on the same Windows machine, which I imagine would not be the case for most customers. You *can* install the TEMS and TEPS support on a non-Windows machine using the following commands: /opt/IBM/ITM /opt/IBM/ITM

Where the first parameter is your ITM install directory. You can then install just the agent on a Windows machine with:

installIraAgent.bat C:\IBM\ITM

The agent itself is ONLY installable on Windows. However, this can be ANY Windows machine you want - it only needs to be able to access your Maximo server via URL. NOTE: The agent does get some information from the Log file dir that you specify; if you install the agent on a machine that is not your Maximo server, this data will not be available. (I'm not certain exactly what information it gets from the logs.)

A BIG caveat of the agent is that you CAN NOT use it if you have configured Application Server Security for authentication and authorization with Maximo. (I didn't test out the scenario of configuring Application Server Security only for authentication due to time constraints). So you can only use the agent to monitor a Maximo installation that is configured to use Maximo security.

The next tip has to do with configuration. When you configure the agent, you're required to provide a few pieces of information:

Instance Name: Do NOT use "maximo" as the value! I found this out the hard way - it simply doesn't work if you do this. I used "MXServer", but it looks like you can use anything OTHER than "maximo".

Log file dir: This is the location of your application server log files. For example:


Port: This is the port you use to access Maximo. The default is 7001, which is the default http port for WebLogic. If you're using WebSphere, you should change this to 9080 for http access (or 9443 for https).

Java Home Directory: This can be set to any Java 1.5 (or above) install location on the system. I set mine to:


Another tip is that you do NOT need to configure Maximo Performance Monitor for the agent to work.

The last tip is on usage, once you get the agent up and running. Let it run for several minutes before assuming it's not working correctly; it just takes a few minutes to capture some of its data. Once it's up and running correctly, the table in the Performance Object Status workspace should look similar to this:

DataBaseInfo DataBaseInfo ACTIVE NO ERROR
InstalledApps InstalledApps ACTIVE NO ERROR
License License ACTIVE NO ERROR
InstalledProducts InstalledProducts ACTIVE NO ERROR
DBConnections DBConnections ACTIVE NO ERROR
MemGaugeForAllSrvrs MemGaugeForAllSrvrs ACTIVE NO ERROR
RuntimeMXBean RuntimeMXBean ACTIVE NO ERROR
MemoryGauge MemoryGauge ACTIVE NO ERROR
CronTasks CronTasks ACTIVE NO ERROR
EscalationErrorLog EscalationErrorLog ACTIVE NO INSTANCES RETURNED

Sunday, October 2, 2011

PLAYterm: a New Way To Improve Command Line Skills

PLAYterm: a New Way To Improve Command Line Skills: chrb writes "Linux Journal points out PLAYterm, an interesting project that offers up recordings of Linux command line sessions, with the aim of helping viewers to improve their skills by watching gurus at work." And there's no bad excuse to link to Neal Stephenson's excellent (and free-to-download in delicious zipped-text form) In the Beginning was the Command Line.


I think this is a great resource for Windows people learning UNIX/Linux, and also for Linux people who just want to learn about some new commands.

Thursday, September 29, 2011

How to create a lock on a DB2 table

I spent a while figuring this out (to set up a problem/resolution scenario for ITCAM) and figured I would share.

By default, DB2 has auto-commit turned ON, so any SQL statement you run is automatically followed by a COMMIT. The easy way to suppress this for a single statement is:

db2 +c "delete from your_table_name where your_where_clause"

Then any other application or process trying to read or write this table will have to wait until the lock is cleared. So if you open another window and run db2 "select * from your_table_name", it will just sit there.

To clear the lock, run:

db2 commit
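Putting the whole scenario together, a sketch of the two-session walkthrough looks like this (SAMPLE and employee are placeholder database/table names, and the script exits early if the db2 CLI is not on the PATH):

```shell
# Session 1: take a lock without committing, then release it.
command -v db2 >/dev/null 2>&1 || { echo "db2 CLI not available"; exit 0; }

db2 connect to SAMPLE
db2 +c "delete from employee where empno = '000010'"   # +c suppresses auto-commit; lock is held

# Session 2 (another window) would now hang on:
#   db2 "select * from employee"

db2 commit   # releases the lock; session 2 returns
```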

More info is here:

Monday, September 26, 2011

Tivoli Common Reporting Security - Removing users from administrator roles

In Tivoli Common Reporting, by default, all users have administrative privileges.  So every user you create in TCR will have access to the Launch->Administration option and can edit data sources, cancel scheduled jobs, and perform various other administrative tasks.   While this is fine for test environments, it is absolutely not desirable for production implementations.   So, how do we close this major security hole?   Fortunately, there is an easy but not well-documented way.
1. Log on as tipadmin/tcradmin in the Tivoli Common Reporting portal and select Reporting->Common Reporting.
2. Click Launch->Administration.
3. Go to the Security tab.
4. Select Cognos.
5. Make sure the "Users, Groups and Roles" option is selected in the left pane.
6. The roles will be listed. Go to the next page of the list.
7. Select the "System Administrators" role listed at the very bottom.
8. Click the "Properties" option to edit the role settings.
9. Click the "Members" tab.
10. Click the "Add" link to add specific users to the TCR administrator role.  Typically, the TCR users you created will be under VMMProvider.
11. Next, select the "Everyone" group by checking the checkbox next to it and click the "Remove" link.
12. Click OK to save the changes.
13. Log out and log back in as an ordinary user. The "Launch->Administration" option will no longer appear.
Hope this helps,

Tuesday, September 6, 2011

Fixing perl's CPAN on CentOS

If you are using CentOS 5.5 and trying to download perl modules with CPAN, you may come across this error:

Undefined subroutine &Compress::Zlib::gzopen called at /usr/lib/perl5/5.8.8/ line 5721

When working with Tivoli software, it is helpful to use virtual machines: you can save images for different app/version combinations and retrieve them when you need to develop or test something. CentOS is the open-source rebuild of Red Hat Enterprise Linux, so it works really well when you have to install multiple VMs but don't want the hassle of tracking RHEL licenses. Sometimes you may have to modify various file contents to look more like RHEL itself, but in general CentOS does the trick.

Recently, I was attempting to write a perl script to parse XML files.  I chose to download the XML::LibXML module because it is very flexible and pretty fast. I started CPAN with:

user@system> cpan

Then I attempted the install:

cpan> install XML::LibXML

but then I got the 'Undefined subroutine...' error above. I tried running CPAN with the alternate command:

user@system> perl -MCPAN -e shell

I also tried to install other packages (such as DBD::DB2), but they generated the same error. I have been using the same CentOS 5.5 image for a couple of years, so it made sense to update the perl packages. Same error.

After some Google research, it appears that another error may have a similar cause (note the different package and line number):

Undefined subroutine &Compress::Zlib::gzopen called at /usr/lib/perl5/5.8.8/CPAN/ line 122

It took a while to piece together this solution in steps, so hopefully this can save someone else a little time.

1. Install the yum utilities:

user@system> yum install yum-utils
(This contains the yum-complete-transaction executable, which does what its name says.  Description here.)

2. Get libxml2:

user@system> yum install libxml2-devel

user@system> yum-complete-transaction
(notice it's yum-complete-transaction with a hyphen, NOT yum followed by a space)

3. Update software packages:

But use yum from the command line to do it instead of the built-in CentOS  'Software Updater'.  Run this:

user@system> yum update

It will list all the available updates and ask if you want to apply them. Go ahead and say yes. There may be libraries in some of those packages that are required to build perl modules. (In a single run, I had 242 installs and 242 removes, and it completed all of them. In my previous attempts to do the same thing from the Software Updater, the Package Manager would hang every time.)

user@system> yum-complete-transaction
(this will just make sure they're all done)

4. Run CPAN:

user@system> perl -MCPAN -e shell

Within CPAN, run these commands in this order:

cpan> force install Scalar::Util

cpan> force install IO::Compress::Base

cpan> force install Compress::Raw::Zlib

cpan> force install IO::Compress::Gzip

cpan> force install Compress::Zlib

After running all of these, CPAN should run just fine. Go ahead and download any perl packages you want.

Saturday, August 27, 2011

Verify the CloudBurst 2.1 Tivoli software stack

Verify the CloudBurst 2.1 Tivoli software stack:

The advantages an appliance brings are often achieved by complex tasks; many times this complexity is hidden by the interface to the appliance, giving the user a limited view of the entire configuration and its integration points. But a user may need to verify or re-verify the software stack when the environment changes: restoring backup images in a disaster recovery scenario, making modifications to hardware configurations (like when you add new blades), or changing software configurations (like when you add new networks with VLAN tagging). In this article, the author provides a quick guide to verifying the IBM CloudBurst 2.1 Tivoli software stack.

Friday, July 15, 2011

Emailing Reports in TCR

The latest version of TCR supports report emailing and scheduling. However, this feature is hidden deep in the menu options, and this article shows how to email your reports.

  1. First make sure that you have configured TCR for emailing, for example by using the "Cognos Configuration" application on Windows.

  2. Now, to email a report, click the "Run with Options" icon next to the report; it is a green arrow icon on the same row as the report name.

  3. Now click on "To specify a time to run the report, or additional formats, languages or delivery options, use advanced options" link that appears to the right.

  4. In the advanced options page, click on "Run in the background" and "Now" under Time and Mode.

  5. Choose the appropriate format such as PDF.

  6. Under Delivery, uncheck "Save report".

  7. Under Delivery, check "Send the report by email" and click "Edit options" right next to it.

  8. In the "Set Email Options" page, set the email receipients, (separated by commas). Edit the subject and body if necessary.

  9. Ensure "Attach the report" is checked. Alternatively, you can send a TCR link to the recipients. Click OK.

  10. Ensure that "Prompt for values" is checked. Now click Run.

  11. You will now be prompted for any report parameter values; enter them and click Finish.

  12. Finally, click OK to confirm, and the report will be generated and emailed to the recipients.

Hope this helps.

Wednesday, June 29, 2011

Importing Custom Images in TCR Cognos Reports

If you are developing custom Cognos reports in Tivoli Common Reporting, one of the basic needs is to include your company logo or other custom images in your reports.  This article describes the steps necessary to include custom logos in your Cognos reports.
  1. First assemble the custom images that you need to include.  These images must be in JPG or GIF format.
  2. Copy these images to the following directory location in TCR.  Or, you can create a subdirectory under the directory below and put your images under the subdirectory.
    <installdir>/../tipv2/profiles/TIPProfile/installedApps/TIPCell/IBM Cognos 8.ear/p2pd.war/tivoli
  3. Important: Also copy the images to <installdir>/../tipv2Components/TCRComponent/cognos/webcontent/tivoli directory or any of its subdirectory.
  4. Now you can drag and drop image objects in Report Studio in your report designs.  After dropping an image object, right-click on it and select "Edit Image URL". 
  5. Specify the image URL as "../tivoli/mylogo.jpg" if your images are located in the tivoli folder as in the above example. Modify the URL to include subdirectory names in case your images are located in a subdirectory of the tivoli folder in steps 2 & 3.
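The copy steps above can be sketched as a short script (TCR_HOME and mylogo.jpg are assumptions for illustration; the script just reports any directory it cannot find rather than failing):

```shell
# Stage a custom logo into both directories that Cognos serves images from.
TCR_HOME="${TCR_HOME:-/opt/IBM/tivoli/tcr}"   # adjust for your install
IMG="mylogo.jpg"
for d in \
  "$TCR_HOME/../tipv2/profiles/TIPProfile/installedApps/TIPCell/IBM Cognos 8.ear/p2pd.war/tivoli" \
  "$TCR_HOME/../tipv2Components/TCRComponent/cognos/webcontent/tivoli"
do
    if [ -d "$d" ]; then
        cp "$IMG" "$d/"
    else
        echo "directory not found: $d"
    fi
done
```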

That's it, Happy Reporting! 

Tuesday, May 3, 2011

The 10 commandments of good source control management

In IT, we all have to write some amount of code for something (OMNIbus rules, custom scripts, custom web pages, etc.), and that code should almost always go into a revision control system. I say almost always because sometimes you just have to write a quick awk one-liner that you'll never use again. But for anything that's in production, you should have it versioned. This link describes the 10 necessary rules for using source control management:

(There are just a few R-rated words, but nothing egregious.)

Friday, April 22, 2011

Interesting information on Tivoli's Cloud initiatives

If you're new to Tivoli's Cloud movement, I think the best way to get use out of this paper is to just read about the products that are involved in the total solution. If your company is moving toward the cloud, knowledge of those components will definitely help you.

Tuesday, March 29, 2011

Passing TCR UserID in BIRT Reports

Many times, you might want to display or determine the TCR user name that is invoking a report. While there is no GUI way of doing this within BIRT, a little JavaScript is all you need. Here is how to do it in BIRT:

  1. Select a blank area in the report. This should display the report properties in the Property Editor.

  2. Now click on the "Script" tab for the report displayed at the bottom of the main work area. (where Preview/Layout tabs are).

  3. In the script drop-down, select "beforeFactory" and paste the JavaScript code below.

    // "TCR_IUSER" is the application-context key under which TCR exposes the logged-in user
    TCR_IUSER = "TCR_IUSER";
    userInfo = reportContext.getAppContext().get(TCR_IUSER);
    userName = "unknown";
    if (userInfo != null) {
        userName = userInfo.getUserPrincipal();
    }


  4. Now you can use the userName JavaScript variable in your reports to identify/display the TCR user.

  5. For example, to display the user name, insert a "Dynamic Text" item anywhere in your report and enter the following value: "User name = " + userName
Hope this helps.

Thursday, March 17, 2011

IBM Service Management YouTube channel

This is great - a channel containing lots of fairly technical videos of Tivoli products and integrations:

A great tutorial on ITCAMfT integration with TBSM

TBSM is able to read Discovery Library Adapter (DLA) books from a number of products, including ITM, TADDM, and ITCAM for Transactions (there are others, but I don't know of a comprehensive list). Sometimes the specifics about the integration are dependent upon what other products you have installed, but that is a larger discussion also. This piece from IBM contains extremely useful information on how you can filter the data in the ITCAM for Transactions DLA so that it can be processed more quickly by TBSM and increase the quality of the data in TBSM (by eliminating services that are not important):

The information is great, but you definitely have to do some work before you can just follow along. In the example, they exclude all of the .gif, .css and .jpg components. In many shops, this would work great. However, I've been in some companies that have had problems specifically with .css files being moved/renamed/locked/etc., and those companies would definitely not want to exclude those entities. So before you can just dive in, you need to analyze your business needs and the current state of your components. This could be done in a DEV/QA environment, or possibly in a temporary portion of your TBSM implementation.

Wednesday, March 9, 2011

BitLocker on Windows 7

What is BitLocker?

Windows Vista and Windows 7 include the BitLocker feature, which allows for encryption of the drive.

Deployment Problem:

According to the Info Center documentation, OSD is BitLocker-ready. Well, not really. The idea is that OSD can create a partition that allows BitLocker to be activated. The problem is that when OSD creates the partition, it assigns a drive letter to it, and BitLocker cannot function with a drive letter assigned.


As of Windows 7 (and Vista SP1(?), but who cares), Microsoft includes a tool called bdehdcfg.exe that can take any partition, shrink it by a certain amount, and prepare it for BitLocker. BitLocker requires a minimum of 100MB, or 300MB if you also want the recovery console (for Vista this is 1.5 GB). To do this during deployment, just use a software module deployed with the image to execute the bdehdcfg command.

One thing to note with this solution: when the image is deployed, you will end up with a larger partition than expected. The reason is that the partition bdehdcfg creates ends up at the end of the drive, and when OSD completes, it takes the cache partition (about 500MB) and adds it to the last partition on the drive. So if you define bdehdcfg to create a 300MB partition, you will end up with an 800MB partition (approximately). Currently the only way around this is to have bdehdcfg execute after the OSD deployment is completed.
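As a sketch, the software module essentially just runs bdehdcfg; the flags below (-target ... shrink, -size, -quiet) are taken from Microsoft's documentation for the tool, so verify them against your Windows 7 build. The POSIX-style guard is only there so the sketch is inert on systems without the tool.

```shell
# Windows command: shrink C: by 300 MB and prepare the space for BitLocker.
command -v bdehdcfg.exe >/dev/null 2>&1 || { echo "bdehdcfg.exe not available"; exit 0; }
bdehdcfg.exe -target c: shrink -size 300 -quiet
```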

BitLocker sounds simple enough to implement, but there are some things to think about that will impact the business:

  1. The PIN provides an additional level of security to the BitLocker process. The PIN is set for the computer, not for the user(s) of the computer, so if there are multiple users of the system, they all share the same PIN.
  2. The PIN can only be set by someone with administrative access. (I have not personally confirmed this, but I was informed of it by an engineering group, so if this is incorrect, please let me know and I will remove it.)
  3. There is no native method to enforce password expiry on the PIN.
  4. BitLocker can be disabled/paused by anyone with administrative access, thus leaving the system unprotected.
  5. You will need processes in place for when users forget their PIN (you know it will happen) to provide the recovery password. This is possibly the hardest part, depending on the users and the number of users.

On the plus side:

  1. It is free, so you are able to implement encryption without additional software expense
  2. When protected, the encryption seems to be as good as any
  3. Encrypting a drive is relatively quick compared to other vendors
  4. Recovering a drive is simple as you just need the recovery password from Active Directory
  5. Did I mention it was free?

Hope this helps you out :)

If you have any other topics you would like covered, send me a note at martin dot carnegie at gulfsoft dot com.

Deploying Windows 7 with TPMfOSD

Recently I have been involved in using TPMfOSD to capture and deploy Windows 7 images. There is quite a bit of information available on the web and in IBM's Info Center, but at times we found that certain areas are not covered completely.

I have been working through the Devworks site with various people and thought I would also give back some information. Since this was too big for Devworks, I thought a blog would be best.

At a high level, here is what I did:
1. Importing Windows 7 DVD for Unattended Install
2. Preparing the OS Configuration for Unattended Install
3. Deploying the Unattended Install
4. Customizing Master Image
5. Executing sysprep
6. Capture Clone Image
7. Modifying the OS Configuration for Clone Install
8. Deploying the Cloned OS

For my environment, I am using VMware Workstation to create my profile. There are many advantages of using VMware rather than physical hardware such as:

1. The image does not contain any drivers for the physical hardware. Windows 7 can be installed on VMware with almost no extra drivers (depending on the vm hardware defined)

2. Simple and quick to restore an image with the snapshots rather than using OSD to capture the “Golden Master”

3. Multiple snapshots can be created to backup and restore during various stages

4. The restore of an image can be done to any system that has VMware installed, as long as the hardware is setup the same. So the VM image can be built on Lenovo/HP/Dell/etc hardware

When using VMware, I also add the setting bios.bootdelay=15000 to the .VMX file to allow time to press the F12 key or ESC for the boot menu.
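In the .vmx file itself, that is one added line (the value is in milliseconds; the quoting follows the usual .vmx key = "value" convention):

```
bios.bootdelay = "15000"
```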

Before starting on this, one big note is around the built-in Administrator name that is used. When installing Windows 7, you are prompted to create an id that will be an administrator on the system. When this user is created, it is added to the Administrators group and the built-in administrator is disabled. In order to get the built-in administrator enabled, you need to set the Administrator name in the OS profile to "Administrator" (it has to be this, no matter what you want the id to actually be). For this example, I will be changing the built-in administrator to "myadmin" and show how to make this work.

1. Importing Windows 7 DVD for Unattended Install

This was fairly simple. Just use the New Profile > Unattended Setup and walk through the wizard.

Info Center documentation:

2. Preparing the OS Configuration for Unattended Install

Once the import is complete, open the OS configuration, go to the Windows tab and set the "Administrator Name:" field to Administrator. Also verify that the time zone is set. If you are using volume licensing, then select the “Volume licensing” option. If not, then set the serial number.

3. Deploying the Unattended Install

After the unattended install system profile is created, it can be deployed to a target system in order to create the clone profile. The methods to deploy an unattended or cloned profile are exactly the same. The big difference is the installation time: the unattended install takes significantly longer to complete than a cloned image.

Info Center documentation:

4. Customizing Master Image

There are many options to configure in the image such as included software, user ids, local policies, etc. Also remember that software modules can be used to customize an image after deployment, so make sure what is included will not require you to make more updates to the image than necessary.

Some of the deciding factors for what to do in the image vs. in a software module:

- Will the software take too long to deploy in a software module? For example:

- MS Office: this product takes far longer to install as a software module than it adds to the image deployment, so it is better included in the image.

- Adobe Flash: this product is quick to install but is updated quite regularly, so it is probably better left in a software module.

- Antivirus applications: since these are core to protecting the corporate environment, they should be in the image, because a failure installing the software module would leave a system unprotected.

The Windows 7 image is quite large even without any software installed, so whatever can be cleaned out to minimize the image is a good idea. Typically I would remove any patch backups, as this can shrink an image by 1GB or more.

As stated, I have changed my Administrator (SID 500) account to myadmin. This is a typical configuration that most sites will do. There are a couple “quirks” that happen when you do this:

  1. After the change, the user directory on the system will be C:\Users\Administrator. When you deploy the image, the directory will be changed to C:\Users\myadmin. You cannot change the directory name on the original image (you can Google it).
  2. As stated earlier, when setting the OS Configuration in step 7, you have to set the Administrator Name to “Administrator”. If you do not, the system will be deployed with the “myadmin” account, but it will not be the SID 500 account, it will just be an id in the Administrators group. The SID 500 will be called Administrator and it will be disabled. When set correctly, the “myadmin” will be the SID 500 account and another account called “Administrator” will be added to the Administrators and Users groups. For my deployment, I included a software module that would remove it from both groups and disable the account.

Another issue I ran into was that I had deleted the C:\install directory, which is created by the unattended install. When deploying an image to the target, the C:\install directory would be created, but software modules executed later in the build process would not run. This is being addressed in a future fix (not in FP04). To work around this issue, just leave the C:\install directory in the image.

5. Executing sysprep

Once the unattended install is complete, the system can then be configured with any corporate software and configurations. After all configurations are completed, the next step is to use the Microsoft tool called Sysprep. This tool is used to remove system specific configurations to allow for a cloning of an image to different systems.

Unlike Windows XP, sysprep is already on Windows 7 and is located in C:\Windows\System32\Sysprep. The options selected are OOBE, Generalize and Shutdown. I prefer using the shutdown as I do not want to miss the reboot and have the mini-setup run again.
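From a command prompt, the same OOBE/Generalize/Shutdown selection can be made with sysprep's command-line switches (the switch names are from Microsoft's sysprep documentation; the guard just keeps the sketch harmless on non-Windows systems):

```shell
# Windows command: generalize the image and shut down when finished.
SYSPREP="/c/Windows/System32/Sysprep/sysprep.exe"   # path as seen from a POSIX-style shell
[ -x "$SYSPREP" ] || { echo "sysprep not available on this system"; exit 0; }
"$SYSPREP" /oobe /generalize /shutdown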

Info Center documentation:


A system that is joined to a domain cannot be used for creating a cloned profile. If the system has been joined to a domain, it has to be moved to workgroup mode.

Some extra recommended tasks are:
- Empty the recycle bin
- Execute chkdsk to ensure there are no disk errors
- Clean out temporary files
- Remove any persistent drive mappings
- Clear the Application, Security and System event logs

Note that sysprep still has its limit of being executed 3 times on Windows 7.

6. Capture Clone Image

Capturing the Windows 7 OS is no different than the methods used for any other operating system. The process is quite a bit longer than Windows XP and requires more reboots, but overall the whole process is the same.

Info Center documentation:

7. Modifying the OS Configuration for Clone Install

Once an image is imported, the OS configuration will need to be set. The OS Configuration is where you use OSD to set the parameters that will be used in the unattend.xml file. The UI will allow for the configuration of many of the common settings, but if there are more that are required, use the “Edit custom unattend.xml” on the General tab. When setting the OS configuration, the most important item to set is the “Administrator Name” to “Administrator”. This is done by opening the properties for the OS configuration and going to the Windows tab. Also on this tab in the “System Customization”, check the setting “Always authorize installation of unsigned drivers”.

8. Deploying the Cloned OS

Deploying the Windows 7 OS is no different than the methods used for any other operating system. The process is quite a bit longer than Windows XP and requires more reboots, but overall the whole process is the same. One thing that did happen in Windows 7 and not XP is that OSD actually logs into the OS. This causes some issues with scripts that may be in the run/runonce/startup.

Info Center documentation:

Other Notes:

TPMfOSD started supporting Windows 7 in, but this version uses WinPE2. There are some pretty significant improvements in the later releases, as they utilize WinPE3 for the deployments. If you have not started, or are just starting, then move to one of these newer versions. There are other reasons for moving to them, but this is one of the most visible from a deployment perspective.


As noted, this is a fairly high level of using OSD for Windows 7 deployments, but should start you on the right path.

Remember, we at Gulf Breeze Software Partners are ready to help you with your implementations of TPMfOSD or any IBM Tivoli product.

If you have any other topics you would like covered, send me a note at martin dot carnegie at gulfsoft dot com.

Monday, January 31, 2011

GbsTask - A Task Management Utility for ITM

In the Tivoli Management Framework, there is a concept called a "Task". Tasks let users specify an executable for each platform at creation time. When a task is run against multiple targets ("endpoints"), the appropriate executable is transferred to and executed on each remote system, and the output is presented on standard output. When executed on multiple targets, the execution is done in a multi-threaded manner.

We, at Gulf Breeze, developed a Java-based solution that implements the task feature for the IBM Tivoli Monitoring product, and this article discusses the solution in detail. If you are interested, please email me and I will send you a free copy. Here are the highlights:

  • A Simple database driven tool to create/update/delete/execute tasks.
  • Tasks can be executed on individual OS agents or on ITM MSLs.
  • Tasks can be executed in a multi-threaded manner across agents of different platforms.
  • Supports SQL Server or DB2 databases to store task information.
  • Authorization information kept in a separate file and can be specified with -a switch. You don't need to specify the password in your scripts.
  • The maximum number of threads is limited by the maximum number of "tacmd" processes that can be run in parallel; running more than this limit could cause stability issues. As of ITM 6.2.2 FP2, the maximum number of threads is 10.
  • Currently the tasks can be executed only against Windows, Linux and Unix OS agents.
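The multi-threaded fan-out described above can be sketched as follows. This is an illustrative Python sketch, not the tool's actual Java implementation: the target names are taken from the examples below, while the `echo` command is a stand-in for the real per-agent `tacmd` invocation.

```python
# Sketch of GbsTask-style parallel task execution: run one tacmd-style
# command per managed system from a bounded thread pool.
import subprocess
from concurrent.futures import ThreadPoolExecutor

MAX_THREADS = 10  # per the note above: at most 10 parallel tacmd invocations

def run_task(target):
    # The real tool would shell out to tacmd for each target; here we use
    # a harmless echo so the sketch is self-contained.
    cmd = ["echo", "pingtask output for", target]
    out = subprocess.run(cmd, capture_output=True, text=True).stdout.strip()
    return ("---Begin Task Output for ManagedSystem %s\n%s\n"
            "---End Task Output for ManagedSystem %s" % (target, out, target))

targets = ["Primary:VMTBSM421:NT", "vmitm622:LZ"]
with ThreadPoolExecutor(max_workers=MAX_THREADS) as pool:
    results = list(pool.map(run_task, targets))  # preserves target order
for block in results:
    print(block)
```

The bounded pool is the important design point: it caps the number of concurrent `tacmd` processes at the documented limit while still overlapping the per-agent transfers and executions.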

To run the "gbstask" solution, you will need the following. The solution was tested with the Sun JRE 1.5 and "should" work with other implementations of the Java Runtime.
  • JRE 1.5 or later. (The code will NOT work with JRE 1.4).
  • JDBC driver for your database.
  • Tacmd CLI. (The CLI is installed with an OS agent installation or ITM TEMS installation).
  • A SQL Server or DB2 database where you can create a table to contain task information.

The following commands show how to create, execute, and delete a task called pingtask for Linux and Windows.
# Creates tasks for Linux and Windows
C:\temp>java -jar GbsTask.jar -a db2.auth -c -l mylib -t pingtask -o Linux -f C:\temp\
C:\temp>java -jar GbsTask.jar -a db2.auth -c -l mylib -t pingtask -o Windows -f C:\temp\test.bat
# Executes a task on specific managed systems.
$ java -jar GbsTask.jar -a db2.auth -x -l mylib -t pingtask -h Primary:VMTBSM421:NT,vmitm622:LZ,Primary:VMTBSM42X:NT
# Executes a task on specific MSLs.
$ java -jar GbsTask.jar -a db2.auth -x -l mylib -t pingtask -m "*NT_SYSTEM,*LINUX_SYSTEM"
# Deletes a task
$ java -jar GbsTask.jar -a db2.auth -d -l mylib -t pingtask -o Windows
$ java -jar GbsTask.jar -a db2.auth -d -l mylib -t pingtask -o Linux

Sample Output

$ java -jar GbsTask.jar -a db2.auth -x -l mylib -t pingtask -h Primary:VMTBSM421:NT,vmitm622:LZ,Primary:VMTBSM42X:NT
---Begin Task Output for ManagedSystem vmitm622:LZ
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=128 time=0.276 ms
64 bytes from icmp_seq=2 ttl=128 time=0.255 ms
64 bytes from icmp_seq=3 ttl=128 time=0.168 ms
64 bytes from icmp_seq=4 ttl=128 time=0.221 ms
--- ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3000ms
rtt min/avg/max/mdev = 0.168/0.230/0.276/0.040 ms
---End Task Output for ManagedSystem vmitm622:LZ
---Begin Task Output for ManagedSystem Primary:VMTBSM421:NT
Pinging with 32 bytes of data:
Reply from bytes=32 time=1ms TTL=128
Reply from bytes=32 time=1ms TTL=128
Reply from bytes=32 time=1ms TTL=128
Reply from bytes=32 time=1ms TTL=128
Ping statistics for
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
---End Task Output for ManagedSystem Primary:VMTBSM421:NT
---Begin Task Output for ManagedSystem Primary:VMTBSM42X:NT
Pinging with 32 bytes of data:
Reply from bytes=32 time=1ms TTL=128
Reply from bytes=32 time=1ms TTL=128
Reply from bytes=32 time=1ms TTL=128
Reply from bytes=32 time=1ms TTL=128
Ping statistics for
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
---End Task Output for ManagedSystem Primary:VMTBSM42X:NT

Interested? Please email me at venkat at and I will send you a free copy of this tool. You can download the documentation for this tool from the link below.