Feb.26-27
Mar.19-20
Apr.16-17
May 14-18
For more information, check IBM's education website.
Copy the CAT and ATR files
The main problem with the UA in a multi-TEMS environment is that the application support files need to be copied from the RTEMS to the hub TEMS manually. You can do this by copying the xxxCAT00 and xxxATR00 files from your RTEMS to the hub TEMS, where xxx is the three-character application name from your MDL.
This step might require a restart of the hub TEMS.
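As a rough sketch, on a Unix install the catalog and attribute files typically sit under the TEMS tables directory. The host name, TEMS names, and the "ODB" application name below are made-up placeholders; adjust them for your site:

    # Hypothetical example: copy UA support files for an application "ODB"
    # from a remote TEMS to the hub. Paths assume a default Unix install;
    # adjust CANDLEHOME and the TEMS names for your environment.
    scp $CANDLEHOME/tables/RTEMS1/RKDSCATL/ODBCAT00 hubhost:$CANDLEHOME/tables/HUB_TEMS/RKDSCATL/
    scp $CANDLEHOME/tables/RTEMS1/ATTRLIB/ODBATR00  hubhost:$CANDLEHOME/tables/HUB_TEMS/ATTRLIB/
    # Recycle the hub TEMS so it picks up the new files
    ssh hubhost "$CANDLEHOME/bin/itmcmd server stop HUB_TEMS && $CANDLEHOME/bin/itmcmd server start HUB_TEMS"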
Re-connect your UA to Hub TEMS
This is a much easier method than copying files manually. Run "itmcmd config -A um" (or use the MTEMS GUI) to reconnect the Universal Agent to the hub TEMS, then recycle the UA. When the UA connects, it automatically populates the application support files on the TEMS it connects to, in this case the hub. Don't forget to reconnect the UA back to the RTEMS once the application support files are installed.
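A minimal sketch of that sequence from the command line, assuming a Unix agent install (the MTEMS GUI achieves the same thing on Windows):

    # Point the UA at the hub TEMS (prompts for the hub hostname and protocol)
    ./itmcmd config -A um
    # Recycle the UA so it reconnects and seeds the support files on the hub
    ./itmcmd agent stop um
    ./itmcmd agent start um
    # Once the support files are in place, run "itmcmd config -A um" again
    # to point the UA back at its original RTEMS, and recycle it once more.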
Hope you find it useful.
Use sudo
Sudo is the simplest choice for auditing the commands invoked as the super user. It ships as a standard package on most Unix/Linux systems and provides standard logging to syslog. You could use an ITM Universal Agent file data provider or the ITM Unix Log agent to monitor the log messages written by sudo. One drawback of sudo, however, is that it is difficult to set up an audit trail for users other than root.
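If you want sudo's messages in a dedicated file that a file data provider can tail, the sudoers options below are one way to do it; the log path is an assumption, and you should always edit /etc/sudoers with visudo:

    # /etc/sudoers -- edit with visudo
    Defaults logfile=/var/log/sudo.log   # write a dedicated log in addition to syslog
    Defaults log_year                    # include the year in log timestamps
    Defaults log_host                    # include the hostname in each entry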
Use command history files
This is a simplistic approach and not a feasible solution at all. We could use the command history stored by each shell, but users can easily disable history logging, for example by switching to the Bourne shell. Moreover, the history does not record when each command was invoked, so a crucial piece of audit information is missing.
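To illustrate how easily a user can defeat this, here are a couple of common tricks (the exact variables differ per shell):

    unset HISTFILE      # bash: stop writing history to disk for this session
    export HISTSIZE=0   # or keep no history in memory at all
    /bin/sh             # or simply switch to a shell that keeps no history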
Use audit trail utilities
If you are looking for a basic security audit trail utility, you could use psacct (process accounting) to log the commands invoked by all users. It is very easy to set up, and it provides commands such as "lastcomm" to display the list of commands invoked by a particular user along with their timestamps. You could run lastcomm in a Universal Agent Script Data Provider to monitor the commands invoked by a particular user. There are commercial tools available as well, and you may want to consider them if your budget allows.
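A rough sketch of enabling process accounting and querying it; the package and accounting file names vary by distribution (psacct on RHEL, acct on Debian), and the user name below is a placeholder:

    # Turn on process accounting (the accounting file path varies by distro)
    /usr/sbin/accton /var/account/pacct
    # Show the commands run by user "dbadmin", with timestamps
    lastcomm --user dbadmin

A UA Script Data Provider could then run that lastcomm command on an interval and surface the results as monitoring data.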
Hope this gives you an idea.
ESMers (Enterprise Systems Management experts) are often charged with providing monitoring and availability information for services. Nowadays, under the influence of ITIL guidance and other best practices, it has become apparent to many that reporting the availability or status of a single CI (Configuration Item) is probably insufficient to describe the availability of a service provided to a customer. In other words, when tasked with implementing a monitoring solution that provides availability and status information for a service, you MUST consider monitoring ALL CIs critical to the usability of that service as delivered to the end recipient.
An "IT service" can be defined as a specific output that provides customer value. It is a measurable product which is the basis for doing business with customer, and is deliverable through a series of interrelated processes, or activities, or both. It comprises of a group of related, CI delivered functionality required by a customer for business use. It is NOT available to a customer if the functions that customer requires at that particular location CANNOT be used. ESMers provide status and availability information to assist other IT personnel ensure the underpinning CIs for a given service are kept sufficient, reliable and properly maintained. They monitor a service, by monitoring in a correlated manner CIs that comprise an IT service. Internet, E-mail and Telephone are just a few examples of well known IT services that come to mind.
Since an IT service generally comprises only a limited number of CIs, monitoring and availability information should focus on the CIs responsible for the service delivered to the customer, instead of on everything in the environment, as has traditionally been the case.
If you have enabled historical data collection, allow about 25 hours or more before checking for historical data in the warehouse database. No matter how much you pound your tables and bang your keyboard, ITM takes a little more than a day for data to appear.
Why does it take so long? There is a reason. In historical data collection, the last 24 hours of data are stored on the TEMA (if you configured collection to store data at the agent). When historical collection first starts, the data accumulates at the TEMA for the first 24 hours, and only in the 25th hour does it begin flowing to the warehouse. For example, if you start collection at 10:00 on Monday, expect the first rows to reach the warehouse around 11:00 on Tuesday.
This assumes that your warehouse data collection interval is set to 1 hour. If your WPA data collection interval is set to 1 day, you might wait even longer before you start seeing data.
TSOM provides a way for network operations folks to gather security threats from sources in the network, called "sensors" (network devices such as firewalls, intrusion detection systems, and web servers), and present those threats on a console ranked by threat level. The product allows the use of watchlists to group events together, and it has a number of handy console types, including an event console and a "Powergrid" for visually manipulating events for quick analysis.
Events from the sensors can be acted on by stateful rules, many of which are provided with the product, that watch for a threat "signature" through correlation.
TSOM uses MySQL for its event database and offers Oracle as an alternative for the persistent database. TSOM supports a number of different firewall formats and collects information from them using a number of different protocols, or "conduits", such as syslog, SNMP, SMTP, XML (custom events), eStreamer, and Check Point FW-1.
Some features in TSOM 3.1:
Integration with TIM and TAM
Cisco SDEE support
Event import/export via SNMP
Ability to import vulnerability scans from a number of different scan products from an XML file
Ability to forward events to Netcool and Tivoli
More to come so stay tuned.