I'm giving this presentation on TADDM for the NYC TUG today and the Philadelphia TUG tomorrow, and wanted to make it available online.
Full size: https://drive.google.com/open?id=0B2lRAtNC_A9BYkZ1THlEb3lMeEU
Wednesday, November 19, 2008
Sunday, November 16, 2008
Accessing your Windows files from a Linux VM
At least in VMWare Workstation 6.5 (and probably earlier versions, though I'm not sure) running on Windows, you can easily access any of your host OS files from any Linux VM. You just need to enable Shared Folders (from VM->Settings, in the Options tab) and specify the folders you want to be accessible from Linux. Once you do this, you should see those folders under /mnt/hgfs in Linux, so they look just like a regular filesystem from the Linux perspective.
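As a quick sanity check from inside the guest (a sketch from my CentOS 5 VM; the share name "winshare" below is just an example, use whatever you named yours):
# List the shares VMWare has exposed to the guest
ls /mnt/hgfs
# Confirm the hgfs filesystem is actually mounted
mount | grep hgfs
# Read and write through the share like any other directory
ls -l /mnt/hgfs/winshare
echo "hello from Linux" > /mnt/hgfs/winshare/test.txt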
Note: I verified this with CentOS 5.
Adding disk space to a Linux VM in VMWare
I had a CentOS 5 VM that just didn't have enough disk space, so I wanted to give it some more. I didn't think it would be too hard, and in the end it wasn't, but it sure took me a while to find all the steps to accomplish it. So here are the ones I found useful. YMMV :)
Host OS: Windows Vista x64 SP1
VMWare Software: VMWare Workstation 6.5
Guest OS: CentOS 5 (code equivalent to RHEL5)
1. Power off the VM (have to do this to add a new disk)
2. Create a new virtual disk (this is the easy part)
a. Go into VM->Settings and in the Hardware tab, click the Add... button.
b. Follow the instructions. This is very straightforward. I created a new 8GB disk.
3. Power on the VM and log in as root.
4. I decided to use the LVM subsystem, and that's what these steps address:
a. Create a Physical Volume representing the new disk: pvcreate /dev/sdb
b. Extend the default Volume Group to contain the new PV:
vgextend VolGroup00 /dev/sdb
c. Extend the default Logical Volume to include the newly-acquired space in the VG:
lvextend --size +7.88G /dev/VolGroup00/LogVol00
(The disk is 8GB according to VMWare, but it looks like around 7.88GB to Linux)
d. Grow the / (root) filesystem so it stretches across the entire LV:
resize2fs -p /dev/mapper/VolGroup00-LogVol00
And that's it. I took the defaults on installing CentOS, so my / (root) filesystem is of type ext3, which supports this dynamic resizing.
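For reference, here is the whole sequence as a single shell session, with a couple of verification commands added. The volume group, logical volume, and device names are the CentOS 5 defaults from my install; substitute your own if they differ.
# Confirm the new disk shows up (it was /dev/sdb for me)
fdisk -l
# Create a Physical Volume on the new disk
pvcreate /dev/sdb
# Add the new PV to the existing Volume Group
vgextend VolGroup00 /dev/sdb
# Check how much free space the VG now has
vgdisplay VolGroup00
# Grow the Logical Volume into the new space
lvextend --size +7.88G /dev/VolGroup00/LogVol00
# Grow the ext3 filesystem to fill the LV (works while / is mounted)
resize2fs -p /dev/mapper/VolGroup00-LogVol00
# Verify the extra space is visible
df -h /
(If you'd rather not do the 8GB-vs-7.88GB math, newer LVM2 versions also accept lvextend -l +100%FREE to grab whatever free space is left in the VG.)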
So in this case, this disk is basically tied to this VM. If you wanted to create a disk that could be used by different VMs, you would certainly go about it differently, but that's a different topic.
Monday, October 20, 2008
Why don't databases have foreign keys any more?
If you've looked at a database from a vendor lately (Tivoli included; CCMDB, TADDM, ITCAM for WAS, and ITCAM for RTT are a few example products that include databases), you'll notice that there are very few (or NO) foreign key relationships between the tables. What this means is that you can't generate a nice Entity Relationship Diagram from them, and that makes it quite a bit harder to write reports using the data in them.
So why is it this way?
More and more, software developers are using Object-Relational Mapping (ORM) components to abstract the software from the database. Here is a pretty comprehensive list of object-relational mapping software. What this software does is pull the data relationships out of BOTH the application code AND the database, putting that relationship information into mapping files used by the selected ORM implementation. The impact this has is that it is difficult, as a consumer/user of the software, to write reports directly against the database, because the first step is having to reverse engineer how the tables are actually used.
What can be done about it?
I can think of a couple of approaches:
1. The best solution I can think of is to ask the vendor for ERDs for all databases that will be used to store collected metrics.
2. Since there may be some valid intellectual property-related arguments against no. 1 above, the next best approach would be to ask the vendor for the SQL needed to produce specific reports.
3. If neither of the above works, then reverse engineering is the only approach left. I've had success in this area by turning up the debugging on the software and looking for SQL "SELECT" statements in the log files (see the sketch below).
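For example, something like this will show which SQL statements the product issues most often once tracing is turned up. The log directory and file names here are made up; point it at wherever your product actually writes its trace files.
# Pull SELECT statements out of the trace logs and count how often each one recurs
grep -ih "select " /opt/IBM/tivoli/logs/trace*.log | sort | uniq -c | sort -rn | head -20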
Wednesday, October 15, 2008
Data visualization using Google Spreadsheets, Yahoo Pipes and Google Maps
OK, so this isn't a Tivoli-related post, but since a big part of what we all want to do is visualize data, I thought that this post that I found through reddit.com was an excellent description of how to use several free web-based utilities to gather and display data. Here's the link:
Here is the author's summary:
So to recap, we have scraped some data from a wikipedia page into a Google spreadsheet using the =importHTML formula, published a handful of rows from the table as CSV, consumed the CSV in a Yahoo pipe and created a geocoded KML feed from it, and then displayed it in a Yahoo map.
I haven't come up with an implementation that gathers Tivoli data, but I can think of one:
The main thing you would need is to get HTML that can be consumed by Google Spreadsheets. One way to do this would be with a JSP (on WAS), PHP (on IBM HTTP Server), or another server-side language that makes SOAP or direct database calls to retrieve data from TDW and display it in HTML. Once you have that, you could then follow the steps in the article to visualize the data. A rough sketch of that first piece is below.
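I haven't actually built this, but as a quick-and-dirty sketch of the HTML-producing piece using plain shell and the DB2 command line instead of JSP/PHP (WAREHOUS is the default TDW database name; the user, table, and column names below are placeholders, so substitute whatever summarization table you actually want):
# Connect to the Tivoli Data Warehouse
db2 connect to WAREHOUS user itmuser
# Build a minimal HTML page containing one table that =importHTML can read
OUT=/var/www/html/tdw_summary.html
echo '<html><body><table border="1">' > $OUT
echo '<tr><th>Server</th><th>Avg CPU %</th></tr>' >> $OUT
# Placeholder query - substitute your real summarization table and columns
db2 -x "SELECT SERVER_NAME, AVG_CPU_PCT FROM ITMUSER.MY_SUMMARY_TABLE" | \
  awk '{print "<tr><td>" $1 "</td><td>" $2 "</td></tr>"}' >> $OUT
echo '</table></body></html>' >> $OUT
Run that from cron on the web server, then point the =importHTML formula at the resulting URL.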
Thursday, September 25, 2008
Creating TPM Signatures for Multi_sz registry entries
When creating software signatures for Windows in TPM, you can use registry entries to identify an application or operating system on a system. If you check out many of the existing Windows signatures, you will see that they use registry entries extensively.
One issue I came across was around the use of multi_sz type registry entries. This came up when scanning a Windows server: the operating system was not being identified. In order to define the operating system, the following keys were needed:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\ProductName=Microsoft Windows Server 2003
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\CSDVersion=Service Pack 1
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\ProductOptions\ProductType=ServerNT
And this multi_sz entry:
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\ProductOptions\ProductSuite=Blade Terminal Server
For this last key, the Blade and Terminal Server values were on separate lines. After looking at other signatures that also used multi_sz, I could not see what was required to separate the lines, so I tried a space, \n, and a couple of others, but none of them worked.
I then ran the dcmexport command and looked at other multi_sz definitions. They didn't help either, as the separator was not a normal ASCII character.
Then I found the Windows signatures located in $TIO_HOME/eclipse/plugins/windows-operating-system/xml. In there I found the character reference &#145; used as the separator in the existing definitions. I then created the software signature, associated it with the Software Definition Windows Server 2003 Standard Edition SP1 (for lack of a better one for now), ran a scan, and the operating system was recognized.
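As a side note, that character reference is decimal 145 (hex 0x91), which is why it didn't look like normal ASCII in the export. A quick way to see or reproduce the byte outside of TPM (just a sanity check, not part of the signature itself):
# Show the 0x91 byte (decimal 145) sitting between the two multi_sz values
printf 'Blade\x91Terminal Server' | od -c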
Hope this helps.
Monday, September 8, 2008
TCR Report Packages
Tivoli Common Reporting provides two ways to package your BIRT report design files, namely the Report Design Format and the Report Definition Format. Both would share the same acronym (RDF), so we can't usefully abbreviate them. This article explains the two formats in detail.
Report Design Format
This is the simpler of the two formats. How do you create one? Simply create a zip file of your BIRT project directory (note: you need to zip the PROJECT, not the ReportDesign file). That's it. In Windows, you can right-click on the project directory and click Send To->Compressed (Zipped) Folder, and you're done! Import the resulting zip file into TCR and it will import the files with the .rptdesign extension as TCR reports.
The Path to the report design files will be reflected in the TCR navigation tree. For example, if your report design files are located in /GBSReports/Netcool directory, then the report designs in TCR will appear under GBSReports->Netcool in the TCR Navigation tree.
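If you'd rather build the zip from a command line, here's the equivalent (a sketch; the workspace path and the GBSReports project name are just the example from above):
# Zip the whole BIRT project directory, not just the .rptdesign file
cd ~/workspace
zip -r GBSReports.zip GBSReports
# Double-check that the project folder (with the .rptdesign files inside) is in the zip
unzip -l GBSReports.zip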
Report Definition Format
The TCR documentation does not give a good idea of what the Report Definition Format is. Creating a Report Definition Format package involves creating an XML file that describes the report set structure. The advantage of this format is that you can create a well-documented report set, and it also provides the ability to share the same report design across multiple levels in the navigation tree. Even though this format helps you create a well-documented report set, ironically, TCR does not provide detailed documentation for it, such as details about the XML schema. The OPAL ITM62 report set uses the Report Definition Format and should give you some idea, but that's all I have come across so far.
Overall, the Report Design Format should be good enough for most of our needs, and it is much easier to change the report set structure with it as well. However, the Report Definition Format gives more control over report set organization, and one hopes that more information about the format will be available in future releases.