If you have been using CSV or flat files as persistent storage for your scripts, you should really check out SQLite. It gives you the power of an RDBMS without the complexity that comes with it. Any SQLite database you create is nothing but a single file, yet it provides locking, transaction support, joins, etc. With the ".dump" command, it can generate the SQL statements needed to reproduce the whole schema.
Did I mention that this database format is supported by ActivePerl by default? You can use the standard Perl DBI module to manage these databases.
Also, there is a CLI tool called sqlite3 (~500K) that lets you run all database manipulation and SQL commands. And if it is good enough for Google Android and Apple Safari, chances are it is robust enough for my needs.
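As a quick illustration of ".dump" (a sketch; sample.db is the database file created by the script below), a single command prints the SQL needed to rebuild the database from scratch:
sqlite3 sample.db ".dump"
The output is ordinary SQL: the CREATE TABLE statement followed by one INSERT per row, wrapped in a transaction.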
Here is a simple Perl script that creates and populates such a database.
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Connect to (or create) the database file; user and password are empty for SQLite
my $dbh = DBI->connect('dbi:SQLite:dbname=sample.db', '', '', { RaiseError => 1 });

my $sql = qq{ CREATE TABLE MYCERT ( num int not null, name varchar(20) ) };
$dbh->do($sql);

$sql = qq{ INSERT INTO MYCERT VALUES(1, 'ITM') };
$dbh->do($sql);

$sql = qq{ INSERT INTO MYCERT VALUES(2, 'Omnibus') };
$dbh->do($sql);

$dbh->disconnect();
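And here is a short companion sketch for reading the rows back (assuming the sample.db file created above; selectall_arrayref is standard DBI):
#!/usr/bin/perl
use strict;
use warnings;
use DBI;

# Reconnect to the same database file; RaiseError makes DBI die on any failure
my $dbh = DBI->connect('dbi:SQLite:dbname=sample.db', '', '', { RaiseError => 1 });

# Fetch every row from MYCERT as a reference to an array of array references
my $rows = $dbh->selectall_arrayref('SELECT num, name FROM MYCERT');
foreach my $row (@$rows) {
    my ($num, $name) = @$row;
    print "$num: $name\n";
}
$dbh->disconnect();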
Tuesday, November 17, 2009
Thursday, November 5, 2009
To AB, or not to AB
If you are looking to develop a custom monitoring solution in ITM, you have two options: the Eclipse GUI based Agent Builder tool or the Universal Agent (UA). Which one should you choose? While Agent Builder is shiny and easy to use, a UA based solution has its own advantages. Read on for some of the pros and cons of each approach.
An Agent Builder based solution makes sense in the following scenarios.
1) If you want to deploy something quickly and easily, Agent Builder is a good candidate for your needs. Once you are familiar with the Agent Builder interface, you can create a custom monitoring agent literally in minutes. Moreover, there are not many typos or mistakes you can make with Agent Builder's GUI based approach.
2) If you generally prefer GUI methods over CLI methods, you will like Agent Builder more.
3) If you want to pull from data sources such as JDBC, WMI, the NT Event Log, the Service Control Manager, etc., you should be able to build an Agent Builder agent with a few clicks. The UA would require a lot more work, as you may have to write your own code to pull data from these sources.
4) If you want to integrate the custom monitoring deployment with your current agent deployment methods, then Agent Builder is obviously the way to go. Deploying an Agent Builder agent is very much the same as deploying any other agent.
A Universal Agent based solution makes sense in the following scenarios.
1) If you want to minimize the number of agents you manage, you are better off with the UA. For example, if your requirement is to deploy 'n' custom monitoring solutions, Agent Builder would typically require 'n' agents, whereas a single UA should be able to perform all 'n' monitoring activities.
2) Let me prefix this statement with a caveat: check with your IBM representative for all licensing related information. Since one UA can handle multiple monitoring tasks, the licensing cost of a UA based solution is typically lower than that of an Agent Builder solution.
3) If you have been using the UA for a long time, you can deploy a UA solution as quickly as an Agent Builder solution. Moreover, the UA works pretty reliably.
4) If your monitoring requirements need advanced summarization capabilities, the UA provides more advanced features than Agent Builder. Some of these tasks can be done by modifying the itm_agent_toolkit.xml file, but Agent Builder's capabilities in this regard are not fully known yet.
Hope this information is helpful in your next custom monitor deployment.
Wednesday, October 14, 2009
Including JavaScript functions in your BIRT reports
BIRT provides very tight integration with Java/JavaScript for customizing your reports. Most of the time, you embed the JavaScript within your reports, and you have to modify each of those reports if something needs to be changed.
However, there is a better way, especially for frequently used functions: you can put them in a .js file and re-use them across your reports. Here is how to do it.
1. Create a set of JavaScript functions (such as for logging, modifying your queries, etc.) and put them in a file (e.g. GbsFunctions.js).
2. Save the file somewhere under your resource directory, which can be set within Eclipse using Window->Preferences->Report Design->Resource->Resource folder (e.g. resourcedir/GBS/scripts/GbsFunctions.js).
3. Now add the following XML to the XML source of your reports. Make sure the XML you add doesn't result in a malformed document (e.g. add it just before the <data-sources> tag).
<list-property name="includeScripts">
<property>GBS/scripts/GbsFunctions.js</property>
</list-property>
4. Now, you can access the functions listed in GbsFunctions.js within BIRT.
Hope you find this useful.
Thursday, October 8, 2009
A great new draft Redbook is available
Integrating Tivoli Products
It's got lots of good information on integrating ITM, ITNM, TADDM, CCMDB, TBSM, etc., and is well worth the read.
Saturday, August 8, 2009
ITNM 3.8 Running as SUID root on AIX 6.1 Requires GSKit 7.0.4.14
If you plan to install ITNM 3.8 on AIX 6.1 as a non-root user and have it run as SUID root (as opposed to having the processes actually run as root, which is your other option after you go through the install), you will need to install at least GSKit 7.0.4.14.
The reason I'm posting this is that you may unwittingly encounter these issues:
1. If you've already installed an ITM 6.2.1 agent on your AIX system, you've got GSKit installed, but it's the wrong version. The version included with ITM 6.2.1 is 7.0.3.18, and it will cause several of the ITNM processes to fail.
2. ITNM actually ships with the correct GSKit libraries, but it simply copies those libraries to your AIX machine underneath your ITNM install location. So you might think that you can just set your LIBPATH environment variable to use these GSKit libraries. HOWEVER, you would be wrong - when a process is running as SUID root on AIX, the ONLY directories it searches for necessary libraries are those that are HARD-CODED into the binary! You can see this library path for any binary with the command 'dump -Hv executable_file_name'.
3. If you choose to run ITNM as root (rather than SUID root), you won't have this problem because you can just set the LIBPATH environment variable appropriately.
Tuesday, August 4, 2009
Converting TDW Timestamps in MySQL
Hope you read my previous articles on converting TDW timestamps into "normal" timestamps in DB2 and in JavaScript (BIRT). Recently, I had to re-write this function in MySQL. In case you wonder, ITNM uses MySQL as the poll data collection database, and the same problem manifests there.
The solution in MySQL is similar to the DB2 based solution: create a function in the MySQL database and call that function in your SQL. Here is how to do it.
- Bring up the MySQL Administrator client and connect to the database in question.
- Go to Catalogs in the left pane and select the appropriate database schema.
- Go to the Stored Procedures tab and click "Create Stored Proc".
- In the name field, give the function a name (e.g. TDW_TO_NORMAL_TS) and click the "Create FUNCTION" button.
- MySQL will create a skeleton function like below.
CREATE FUNCTION `ncpolldata`.`TDW_TO_NORMAL_TS` () RETURNS INT
BEGIN
END
- Change the "CREATE FUNCTION" line to look like below.
CREATE FUNCTION `ncpolldata`.`TDW_TO_NORMAL_TS` (tdw_time bigint) RETURNS DATETIME DETERMINISTIC
- Between the "BEGIN" and "END" keywords, paste the following code.
BEGIN
  Declare normal_time datetime;
  Declare tdw_trunc bigint;
  -- Skip the leading century digit and keep the 12 YYMMDDHHMMSS digits
  Set tdw_trunc = substr(tdw_time,2,12);
  Set normal_time = DATE_FORMAT(tdw_trunc, '%y%m%d%H%i%s');
  return(normal_time);
END
- That's it. Click the "Execute SQL" button to save the newly created function.
- Call the function in your SQL statements like below. For example, a TDW timestamp of 1091117143000000 comes back as 2009-11-17 14:30:00.
SELECT TDW_TO_NORMAL_TS(poll_time) FROM KNP_POLL_DATA_COLLECTION LIMIT 100
Friday, July 31, 2009
How to resolve odaconsumer.CannotPrepareStatement error in BIRT
I was just trying to create a simple report in BIRT using a flat file and got this error when trying to preview the data:
A BIRT exception occurred.
Plug-in Provider:Eclipse.org
Plug-in Name:BIRT Data Engine
Plug-in ID:org.eclipse.birt.data
Version:2.2.2.r22x_v20071212
Error Code:odaconsumer.CannotPrepareStatement
Error Message:Failed to prepare the following query for the data set type org.eclipse.datatools.connectivity.oda.flatfile.dataSet.
[select "COLUMN_1", "COLUMN_2", "COLUMN_3" from mydata.txt : {}]
Invalid table name:mydata.txt
The problem turns out to be the location of my stupid file. I had placed it in C:\ and BIRT apparently doesn't like that at all. So I moved the file to a folder named C:\deleteme, then updated my Data Source and Data Set, and then all was well.