
Saturday, September 30, 2023

Launch a page in JazzSM DASH without login page

The following is from an IBM Communities post on this topic by Detlef Kleinfelder. The link is also included, but those seem to go dead on IBM's site, which is why I'm including the text:

https://community.ibm.com/community/user/aiops/discussion/is-there-a-way-to-run-public-event-viewer-of-omnibus-webgui-for-big-wall-dashboards#bmd85f7aed-8757-47a5-99c0-ceae09aec075

You can launch a page directly with a URL by providing the page ID and a user-based token.

The user-based token contains the encoded username and password used to authenticate and launch the page.

 

The URL convention is https://<FQDN>:16311/ibm/action/launch/<PageID>/<xlaunchCred>

where:

FQDN = Fully Qualified Domain Name or IP address
PageID = The Page ID of the DASH page
xlaunchCred = The user-based token generated by xlaunchapi.jar

The following steps walk you through obtaining the required parameters:

 

  1. Get the Page ID of the DASH page. For example, to get the Page ID of the DASH About page, open the About page, click on the "Page" icon at the top right corner, and click "About". The Page ID is under "General".

  2. Generate the user-based token using xlaunchapi.jar, ensuring the entire command is on a single line, as shown:
     
    ./java -cp /<JazzSM_HOME>/profile/installedApps/JazzSMNode01Cell/isc.ear/xlaunchapi.jar com.ibm.isc.api.xlaunch.LaunchPropertiesHelper\$Encode com.ibm.isc.xlaunch.username <username> com.ibm.isc.xlaunch.password <password>

  • Replace <username> and <password> with the credentials of the user you want to single sign on as.

 

  • The command will return a string like:

    L2NvbS5pYm0uaXNjLnhsYXVuY2gudXNlcm5hbWUvc21hZG1pbi9jb20uaWJtLmlzYy54bGF1bmNoLnBhc3N3b3JkL29iamVjdDAw

  3. Following the URL convention, https://<FQDN>:16311/ibm/action/launch/<PageID>/<xlaunchCred>, launch the DASH About page. The page will launch without manually entering a username and password.
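To put the pieces together, here's a minimal shell sketch of the launch URL. The host name, page ID, and token below are hypothetical placeholders; substitute the values you gathered in the steps above:

    #!/bin/bash
    # Hypothetical values -- replace with your own FQDN, page ID, and the
    # token string returned by the xlaunchapi.jar Encode command above.
    FQDN="dash.example.com"
    PAGE_ID="com.ibm.isc.mypage"
    TOKEN="L2NvbS5pYm0uaXNjLnhsYXVuY2gu..."

    # -k skips certificate verification, handy with self-signed DASH certs.
    curl -k "https://${FQDN}:16311/ibm/action/launch/${PAGE_ID}/${TOKEN}"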

 

Thursday, September 14, 2023

Configuring DASH/JazzSM and WebGUI HA using an ObjectServer database

There is a great support article on this exact topic here:

https://www.ibm.com/support/pages/omnibus-webgui-configuring-high-availabilityload-balancing-using-omnibus-81-objectserver-ha-database

There are just a few caveats that I need to point out:

1. Use the webha.sql file from the above link to create the ObjectServer database. There are two similar files on your DASH/JazzSM server after the install, but both are incorrect.

2. The instructions in the link for configuring WebSphere are all manual, through the WebSphere Admin Console. To make things more easily repeatable, I created a file with all of the Jython WebSphere administrative commands required. Here are the contents of that file:




Friday, August 25, 2023

Modifying the Netcool Remedy 8 gateway for use with Java JDK 8

Background

The Netcool Gateway for Remedy 8 version 1.3 was originally written for Java 6, which used the Rhino JavaScript engine. The Nashorn JavaScript engine in Java 8 (which is automatically installed with an OMNIbus fix pack) is different enough that some modifications are required to get the gateway to work correctly.

The error you'll see in the gateway log file if you try to run the gateway without these changes is:

Error: [Main Gateway] java.lang.RuntimeException: javax.script.ScriptException: ReferenceError: "importPackage" is not defined in <eval> at line number 1

The issue is basically that Nashorn drops Rhino compatibility functions like importPackage() and gives you a new way to create Java objects within JavaScript: Java.type(), which you'll see in the changes below.

Solution
You will need to modify three files to get the Remedy 8 gateway working correctly. Those files are listed here:

$OMNIHOME/javascript/modules/sog.js
$OMNIHOME/javascript/modules/dateformat.js

In the above two files, you simply need to add this one line at the beginning of each file:

load("nashorn:mozilla_compat.js");

$OMNIHOME/gates/bmc_remedy/bmc_remedy.notification.js
This file requires a bit more customization. Specifically, you replace this line in the update() function definition:

    values = sog.newrow();

With these 3 lines:

    //values = sog.newrow(); // just commenting it out.
    var SOGRow = Java.type("com.ibm.tivoli.netcool.integrations.sog.SOGRow");
    var values = new SOGRow();

Once you make all of the above changes, you need to restart the gateway for the changes to take effect, and it then has a better chance of working. I say "better chance" because you still have to provide all of the correct values for the various properties. But if you're just upgrading your working gateway to OMNIbus 8.1 Fix Pack 30, then the above changes are what you need.

Monday, February 13, 2023

Recent versions of the Netcool Message Bus Probe support Kafka

We are working with a client who needed to send events from their cloud-native application to their legacy on-prem Netcool Operations Insight implementation. After a bit of research, we found that their application was already writing the events of interest to a Kafka topic. The only issue was that they had an old version of the Message Bus Probe. So we installed version 21 of the probe and used the included Nokia NFMP files as a starting point to configure the probe to pull the events from this topic so that they could be processed by Netcool.

Reach out to us if you're using Netcool/Watson AIOps and need some help working through some obstacles.

Wednesday, February 8, 2023

Configuring Fluent Bit to send messages to the Netcool Message Bus probe

Background

Fluent Bit is an open source, multi-platform log processor that aims to be a generic Swiss Army knife for log processing and distribution.

It is included with several distributions of Kubernetes, and is used to pull log messages from multiple sources, modify them as needed, and send the records to one or more output destinations. It is amazingly customizable, so you can do just about any processing you want, with a couple of idiosyncrasies, one of which I'll describe here.

The Challenge

What if you have a log message that you want to handle in two different ways:

1. Normalize the fields in the log message for storage in Elasticsearch (or Splunk, etc.).

2. Modify the log message so it has all of the appropriate fields needed for processing by your Netcool environment (fields that you don't necessarily want in your log storage system).

The Solution

Given all of the unique restrictions in Fluent Bit, what you need to do is create a new copy of the log message, preserving the original so that it can go through your "standard" processing while the new message is processed according to your needs in Netcool.

The specifics of this solution are to use a rewrite_tag FILTER to create a new, distinct copy of the message with a custom tag within the Fluent Bit pipeline, and then configure the appropriate additional FILTERs and OUTPUTs that only Match this new, custom tag. You also need to modify any existing OUTPUTs to exclude this new tag.

Here's a high-level view of what we're going to do:

[Diagram: the rewrite_tag FILTER creates a second copy of each matching message, tagged "INC"; the original message continues to the Elasticsearch OUTPUT while the INC copy flows through its own FILTERs to the http OUTPUT for the Message Bus probe.]

Our rewrite_tag FILTER is going to match all tags beginning with "kub". This will exclude our new tag, which will be "INC". So after the rewrite_tag filter, there will be two messages in the pipeline: the original, plus our new one with our custom "INC" tag. We can then specify the appropriate Match statements in later FILTERs to match only the appropriate tag. So in the Elasticsearch OUTPUT above, the Match_Regex statement is:

Match_Regex  ^(?!INC).*

The construct above is called a "negative lookahead". Go ahead and try it out at regex101.com if you want. It will match any tag that does NOT begin with "INC", which is the custom tag for the new messages that we want to send to our HTTP Message Bus probe.
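You can also sanity-check the pattern from a shell, assuming GNU grep with PCRE support (-P):

    # Prints only the first line; "INC" is excluded by the negative lookahead.
    printf 'kube.var.log.containers\nINC\n' | grep -P '^(?!INC).*'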

The rewrite_tag FILTER will be custom for your environment, but the following may be close in many cases. For my case, I want to match any message that has a log field containing the string "Error writing to". You'll have to analyze your current messages to find the appropriate field and string that you're interested in. But here's my rewrite_tag FILTER stanza:

[FILTER]
    Name rewrite_tag
    Match_Regex ^(?!INC).*
    Rule    $log  ^.*Error\swriting\sto.* INC true

The "Rule" statement is the tricky part here. This statement consists of 4 parts, separated by whitespace:

Rule - the literal string "Rule"
$log - the name of the field you want to search to create a new message, preceded by "$". In this case, we want to search the field named log.
^.*Error\swriting\sto.* - the regular expression we want to match in the specified field. This regular expression CANNOT CONTAIN SPACES, which is why I'm using "\s".
INC - the tag to set on the new message. This tag is used ONLY within the Fluent Bit pipeline, so it can be literally anything you want. I chose "INC" because these messages will be sent to the Message Bus probe to eventually create incidents in ServiceNow.
true - this specifies that we want to KEEP the original message, which allows it to continue to be processed as needed.

After you have the rewrite_tag FILTER in place, you will have at least one additional FILTER of type "modify" in your pipeline to allow you to add fields, rename fields, etc. You'll then have an OUTPUT stanza of type "http" to specify the location of the Message Bus probe, something like the following:

[OUTPUT]
    Name http
    port 80
    Match INC
    host probehost
    uri /probe/webhook/fluentbit
    format json
    json_date_format epoch

The above specifies that the URL that these messages will be sent to is 

http://probehost:80/probe/webhook/fluentbit

In the JSON that's sent in the body of the POST request, there will be a field named date, and it will be in Unix "epoch" format: an integer representing the number of seconds since the beginning of the current epoch (a "normal" Unix/Linux timestamp).
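If you want to test the probe side before wiring up Fluent Bit, you can simulate the POST by hand. This is a hypothetical sketch: with "format json", Fluent Bit sends a JSON array of records, and the record fields below are illustrative:

    # Simulates one Fluent Bit record arriving at the Message Bus probe.
    curl -X POST 'http://probehost:80/probe/webhook/fluentbit' \
        -H 'Content-Type: application/json' \
        -d '[{"date":1675900000,"log":"Error writing to /data: disk full"}]'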

That's it. That's all of the basic configuration needed on the Fluent Bit side.

Extra Credit/TLS Config

If your Message Bus probe is using TLS, you just need to add the following two lines to the above OUTPUT stanza:

    tls On
    tls.verify Off

The first line enables TLS encryption, and the second is a shortcut that allows the connection to succeed without having to add the appropriate certificates to Fluent Bit: it will accept any certificate presented to it by the Message Bus probe, even a self-signed certificate.

Tuesday, January 25, 2022

Configuring certificates for the Netcool email probe when using Office365

Background

If your company uses Office365 for email and you need to use the Netcool Email Probe, you will have to configure a KeyStore database to store the valid/trusted certificates presented by Office365. What I found at one customer was that after we imported one certificate into the KeyStore, we still frequently received certificate chaining errors, which eventually would cause the probe to stop working. The problems I saw were caused by what looks like a configuration difference on the load-balanced Office365 servers, where multiple different certificates (and certificate chains) were being presented to the Email Probe.

Solution

After several attempts at resolving the problem, I took the nuclear approach: download every possible certificate from Office365 and import them all into the KeyStore database. I'm certain it's overkill, but I scripted the solution below, and it doesn't affect the performance of the probe. Here's the script, with comments:

cd /tmp

for i in file{1..100}
do
    # Each connection can return a different certificate chain from the
    # load-balanced Office365 servers, so we connect repeatedly.
    openssl s_client -showcerts -verify 5 -connect outlook.office365.com:995 < /dev/null > $i

    # Each file contains at least two certificates, and each certificate needs
    # to be in its own file to import it into the keystore. That's what the
    # following command does. It will create files named file*-00, file*-01,
    # file*-02 if there are two certificates returned by the above command.
    csplit -f $i- $i '/-----BEGIN CERTIFICATE-----/' '{*}'

    # file*-00 doesn't contain anything useful (certs are in *-01 and *-02),
    # so we will delete it.
    rm file*-00
done

# Now import all of the above certs into the keystore.
for i in file*-*
do
    keytool -keystore "/opt/IBM/tivoli/netcool/core/certs/key_netcool.jks" -import \
        -trustcacerts -alias $i -file $i -noprompt -storepass THE_KEYSTORE_PASS
done
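As an optional sanity check, you can list the aliases now in the keystore (same path and password as above) to confirm the imports:

    # Each imported certificate shows up as one alias named after its file.
    keytool -list -keystore "/opt/IBM/tivoli/netcool/core/certs/key_netcool.jks" \
        -storepass THE_KEYSTORE_PASS | grep file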





Tuesday, October 26, 2021

Converting timestamp in milliseconds to seconds in Netcool probe rules

Background

The Netcool OMNIbus ObjectServer expects timestamp values to be the number of seconds since epoch (00:00:00 UTC on 1 January 1970). This is a 10-digit number. However, some systems generate a timestamp that is the number of milliseconds since epoch (a 13-digit number), which causes the timestamp to be interpreted wildly incorrectly in the ObjectServer. For example, 1695081600000 milliseconds needs to become 1695081600 seconds. One such event source is Nokia NSP, which integrates with Netcool via the Probe for Message Bus.

Conversion Process

My process of converting the 13-digit timestamp to the correct 10-digit one is straightforward:

# convert timestamp to string by concatenating it with a string
$millString = $timestampInMilliseconds + ""

# take the first 10 characters
$secondsString = substr($millString,1,10)

# convert back to an integer
$validTimestamp = int($secondsString)

# store in FirstOccurrence of Event
@FirstOccurrence = $validTimestamp
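If you want to sanity-check a converted value from a shell, the equivalent arithmetic is simple (1695081600000 is the hypothetical millisecond timestamp from above):

    # Millisecond epoch -> second epoch, then render it as a date.
    ms=1695081600000
    echo $(( ms / 1000 ))           # prints 1695081600
    date -u -d @$(( ms / 1000 ))    # GNU date; prints the UTC time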

That's it.

Thursday, February 27, 2020

JRExecAction() function behavior in Netcool Impact

Just a short post today to set the record straight on the use of the JRExecAction() function in Netcool Impact policies.

The product documentation states that the JRExecAction() function returns nothing, but sets the variables ExecError and ExecOutput corresponding to stderr and stdout from the script/command that you run. However, the function itself also returns the data written to stdout, in addition to setting ExecOutput and/or ExecError. So if you try the following, you'll see that the same output is logged twice:


results = JRExecAction("/bin/ls", "/", false, 1000);
log(0,"Frank results = " + results);
log(0,"ExecOutput results = " + ExecOutput); 


Incidentally, the first use above (capturing the return value) is really the more common use in practice. You will find very few implementations where the ExecOutput variable is referenced.

Thursday, February 13, 2020

Customizing the ServiceNow Netcool Connector

UPDATE 5/1/2020

You can update this script through the ServiceNow GUI. The navigator item to select is "Script Files" under "MID Server". And the name of the script is NetcoolConnector. Edit in the GUI and it gets updated on all of your MID servers.

Background

The ServiceNow Netcool Connector (introduced at some point before the New York release) allows you to pull events from a Netcool ObjectServer into ServiceNow. The connector is a process that runs on a MID Server. Within the ServiceNow interface, there are only a few configuration options (userid/password, JDBC URL, how often to run, etc.). However, there are no filters to configure. That's because the connector is a straightforward Groovy script that you can edit as needed on the MID Server.

Details

The Netcool Connector script is found on the MID Server in the file .../agent/scripts/Groovy/NetcoolConnector.groovy. One of the interesting parts of the script is the actual query that's run:

query = "select top 3000 Identifier,Node,NodeAlias,AlertKey, Manager,Agent,AlertGroup,Severity,Type,Summary,Acknowledged,LastOccurrence,StateChange,SuppressEscl from alerts.status where StateChange > " + lastTimeSignature + " and Manager not like '^.*Watch\$' order by StateChange asc";


That is exactly the query that's run, and you can edit it to include custom fields, for example. To complete the customization, you also need to update the createEvent() function to actually include those custom fields in the event that's created in ServiceNow. That's also where you can apply any hard-coded transforms or anything else that's required.
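For example, to bring over an additional ObjectServer column, you would just add it to the select list and then reference it in createEvent(). Here, TicketGroup is a hypothetical custom field; use whatever custom columns your alerts.status table actually defines:

query = "select top 3000 Identifier,Node,NodeAlias,AlertKey, Manager,Agent,AlertGroup,Severity,Type,Summary,Acknowledged,LastOccurrence,StateChange,SuppressEscl,TicketGroup from alerts.status where StateChange > " + lastTimeSignature + " and Manager not like '^.*Watch\$' order by StateChange asc";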

Wednesday, February 12, 2020

Short script to generate continuous random events to Netcool OMNIbus

So you need to generate some events to OMNIbus? It's really easy with the nco_postmsg shell script that's installed on your ObjectServer host:

while true
do
    nco_postmsg -user root -password n3tc00l \
        "Identifier='my_identifier_${RANDOM}'" \
        "Node='mynode${RANDOM}'" \
        "Severity=5" \
        "Manager='nco_postmsg'" \
        "Summary='An event occurred again'" \
        "ImpactFlag=1" \
        "AlertGroup='FRANK_${RANDOM}'"
    sleep 10
    echo sending another event
done

Update the user and password parameters, and you can add any fields you need. It simply generates an event every 10 seconds.

Thursday, September 6, 2018

Some of our current projects

We work with a pretty wide array of products, so I wanted to highlight some of the projects we're working on right now.

ServiceNow Architecture and Implementation

We're working with a communications company to implement their procurement, installation and change processes within ServiceNow, with asset feeds from multiple external systems.

ServiceNow Incident Response integration with QRadar

We're helping our client customize both products and the integration between them to best leverage their existing investment and people.

IBM Control Desk for Field Service Management

We're helping a different communications customer with their field service management through workflows and custom user interfaces defined in IBM Control Desk.

Netcool Operations Insight Implementation

We're actually working on several of these at the moment. Most of the work on these goes into identifying the different event sources, determining what (if any) automated actions need to be performed, and deciding who needs to be notified.

BigFix Steady State

A medical client of ours has been leveraging our BigFix Managed Services for several years to ensure that all IT equipment is both known and is running software at the appropriate patch level.

ICD and BigFix Implementation with Airgap

We're working with a defense contractor to ensure that their Asset Management and Change Management processes continue to work smoothly leveraging ICD and BigFix.

Thursday, June 29, 2017

More IBM Netcool Agile Service Manager Videos

I think some wires got crossed at YouTube recently as IBM Service Management moved over to the IBM Cloud channel, and it appears that their most recent videos are hidden from any searches. However, thanks to Matt Duggan from IBM, who shared the direct links on LinkedIn, I've added them all to my own IBM Agile Service Manager playlist, which can be found here:

https://www.youtube.com/playlist?list=PLxv2WlaeOSG9z_L4LCjHzz-qnZ-vDqnjn

Have fun

Tuesday, June 27, 2017

IBM Netcool Agile Service Manager - What is swagger?

Introduction

The ASM documentation references "swagger" and "swagger URLs" for several different services. The purpose of this post is to describe what this actually means.

What is swagger?

Here's a statement from swagger.io:

The goal of Swagger™ is to define a standard, language-agnostic interface to REST APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection.

So the goal of this article is to show what that statement actually means to you in the context of Agile Service Manager.

Swagger URLs for ASM

There are 7 different services that are accessible via a browser. My ASM host is named "asm", and here are the URLs I have for the services:

File Observer Swagger UI
http://asm:9098/1.0/topology/observer/swagger/#/
        
topology-service Swagger UI
http://asm:8080/1.0/topology/swagger#/ 
        
search service Swagger UI
http://asm:7080/1.0/search/swagger  
        
ITNM observer Swagger UI
http://asm:9080/1.0/topology/observer/swagger   
        
OpenStack observer Swagger UI
http://asm:9082/1.0/topology/observer/swagger   
        
Event observer Swagger UI
http://asm:9084/1.0/topology/observer/swagger 
        
Docker observer Swagger UI
http://asm:9086/1.0/topology/observer/swagger 

Topology Service

The Topology Service is the one you'll normally want to visit to view (and even change) data about the resources in the ASM database. Here's what you'll see when you access the URL:

[Screenshot: the topology-service Swagger UI, with each group of operations shown as a collapsible section]
You can click on each section to see the operations associated with each. The section I like is Resources. Here are the operations found there:

[Screenshot: the list of operations in the Resources section, starting with GET /resources]
From here, you can click on one of the operations, such as the first one: GET /resources. Here's just the first part of what's displayed there:

[Screenshot: the documentation, parameters, and response model for GET /resources]
Notice that it gives you documentation about the operation and lots of other information. Specifically, it provides you with the ability to fill in values for all of the parameters that the operation accepts AND allows you to execute the operation! It also provides you with the 'curl' command that you can run from the command line to execute the exact same operation, with the exact same parameters.

The way to execute the operation is to click the "Try it out!" button at the bottom of the operation documentation.

[Screenshot: the response body displayed after clicking "Try it out!"]
And there you go! Some data. In this case, what's returned is the ID of the node in the topology that matches the criteria I specified. I can then take this ID and use it as input to other operations in this same group or in other groups.
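Equivalently, you can run the same request from the command line. This sketch assumes the topology-service URL shown earlier; the Swagger UI displays the exact curl command for whatever parameters you fill in:

    # Fetch resources from the topology service (no query parameters).
    curl -X GET --header 'Accept: application/json' 'http://asm:8080/1.0/topology/resources'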

Try it out and have fun

The above is just a short introduction to ASM's swagger UIs. Play around with them and you'll see that you can do some interesting stuff.