Tuesday, June 27, 2017

IBM Netcool Agile Service Manager - What is swagger?

Introduction

The ASM documentation references "swagger" and "swagger URLs" for several different services. The purpose of this post is to describe what this actually means.

What is swagger?

Here's a statement from swagger.io:

The goal of Swagger™ is to define a standard, language-agnostic interface to REST APIs which allows both humans and computers to discover and understand the capabilities of the service without access to source code, documentation, or through network traffic inspection.

So the goal of this article is to show what that statement actually means to you in the context of Agile Service Manager.

Swagger URLs for ASM

There are 7 different services that are accessible via a browser. My ASM host is named "asm", and here are the URLs I have for the services:

File Observer Swagger UI
http://asm:9098/1.0/topology/observer/swagger/#/
        
topology-service Swagger UI
http://asm:8080/1.0/topology/swagger#/ 
        
search service Swagger UI
http://asm:7080/1.0/search/swagger  
        
ITNM observer Swagger UI
http://asm:9080/1.0/topology/observer/swagger   
        
OpenStack observer Swagger UI
http://asm:9082/1.0/topology/observer/swagger   
        
Event observer Swagger UI
http://asm:9084/1.0/topology/observer/swagger 
        
Docker observer Swagger UI
http://asm:9086/1.0/topology/observer/swagger 

Topology Service

The Topology Service is the one you'll normally want to visit to view (and even change) data about the resources in the ASM database. Here's what you'll see when you access the URL:

You can click on each section to see the operations associated with each. The section I like is Resources. Here are the operations found there:


From here, you can click on one of the operations, such as the first one: GET /resources. Here's just the first part of what's displayed there:



Notice that it gives you documentation about the operation and lots of other information. Specifically, it provides you with the ability to fill in values for all of the parameters that the operation accepts AND allows you to execute the operation! It also provides you with the 'curl' command that you can run from the command line to execute the exact same operation, with the exact same parameters.
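
For illustration, here's roughly what such a generated curl command looks like for a simple GET /resources call. Everything in this sketch - credentials, filter and limit parameters - is made up for the example; copy the exact command from the Swagger UI for your environment:

curl -u asmuser:asmpassword -X GET --header 'Accept: application/json' 'http://asm:8080/1.0/topology/resources?_filter=name%3Dmyhost&_limit=10'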

The way to execute the operation is to click the "Try it out!" button at the bottom of the operation documentation.


And there you go! Some data. In this case, what's returned is the ID of the node in the topology that matches the criteria I specified. I can then take this ID and use it as input to other operations in this same group or in other groups.

Try it out and have fun

The above is just a short entry point into ASM's swagger UIs. Play around with them and you'll see that you can do some interesting stuff.

Monday, June 26, 2017

Agile Service Manager UI Introduction

Here's a short video introduction covering the basic features of IBM's Netcool Agile Service Manager.





IBM Netcool Agile Service Manager Thoughts

I recently installed IBM's Netcool Agile Service Manager and wanted to give my initial thoughts on it.

What is Agile Service Manager?


Basically, it's a real-time topology viewer for multiple technologies. Specifically, it can currently render topology data for ITNM, OpenStack and Docker, all in one place. Additionally, it maps events to the topology so you can see any events that are affecting a resource in the context of its topology. So, for example, if you receive a CRITICAL event for a particular Docker container, you will see the node representing that container turn red. Pretty neat. Here's an example of a 1-hop topology of my ASM server's Docker infrastructure (you always have to start at some resource to view a topology):



What's so great about it?

Combined Topology View

First, this topology view is wonderful for Operations and Development because it shows your combined Network, Docker and OpenStack environments in a single view, so everyone can see where applications are running and the dependencies among the pieces.

ElasticSearch

Second, it's got ElasticSearch under the covers, so updates and searches are amazingly fast, and the topology view is built extremely quickly.

Custom Topology Information

Third, you can add your own topology information to make it even more useful!

Here's a screenshot where I've manually modified the topology using a combination of the File Observer and direct access to the Topology Service REST API (from the Swagger URL):



Notice also that Time Entry is in a Critical state. That's due to an event that I generated.

History

Fourth, it maintains history about the topology. That means that you can view the difference in topology between two hours (or two days) ago and right now.

Is ASM a complete replacement for TBSM and/or TADDM?


No, ASM is not a complete replacement for TBSM or TADDM, but you can definitely think of it as "TBSM Lite". TBSM still has some unique features, such as status propagation, service rules, and custom KPIs that can be defined on a per-business-service basis.

And TADDM's unique capability is the hard work of actually discovering very detailed data and relationships in your environment.

However, because the search and visualization pieces of ASM are so fast and efficient, I can definitely see ASM being used as at least part of the visualization portion of TADDM. All that would be required is for a TADDM Observer to be written.

Additionally, I think the ASM database and topology will in the future be leveraged by TBSM, though this will take a little work.

Parting thoughts

ASM is a truly useful product, with some great capabilities. It's also incredibly easy to install if you've already got Netcool Operations Insight (or at least DASH) installed - I was able to get it installed in just a few hours. I'm certain IBM will be adding features and add-ons to provide even more functionality in the coming months.

Thursday, May 25, 2017

New Linux Samba vulnerability and fix

A new vulnerability was found in Samba versions 3.5.0 and above. Details here:

https://www.samba.org/samba/security/CVE-2017-7494.html

The workaround is easy and is contained in the link above:

In your /etc/samba/smb.conf file, add the following in the [global] section:

nt pipe support = no


Then restart smbd with 'service smb restart'
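
To confirm the change took effect after the restart, testparm (which ships with Samba) can echo back the non-default [global] settings. This is just a quick sanity check I'd suggest, not an official verification step:

testparm -s | grep -i 'nt pipe support'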

Monday, April 24, 2017

BMXAA7025E and BMXAA8313E Errors running MAXINST on ICD 7.6

I wanted to install the demo data that's provided with ICD 7.6 by basically following the instructions found here:

https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Anything%20about%20Tivoli/page/To%20load%20the%20sample%20DB2%20database%20after%20Control%20Desk%207.6%20installed

But I didn't find those steps before I started, so I took my own path.

Specifically, I didn't drop the database, and that meant that I encountered errors BMXAA7025E and BMXAA8313E when running the 'maxinst.sh' script. What I found is that the cleandb operation doesn't really delete all of the tables and views in the MAXIMO schema (I'm on DB2/WebSphere/RHEL 6.5), so when maxinst gets to running the files under:

/opt/IBM/SMP/maximo/tools/maximo/en/dis_cms

It fails because a few of these SQL files try to create tables and views that still exist. I found this link about the problem:

https://www-01.ibm.com/support/docview.wss?uid=swg21647350

But I didn't like it because it tells you to re-create the database. So with a little digging, I found that after I hit the error, I could run the following db2 commands to delete all of the tables and views that were not automatically deleted:


db2 connect to maxdb76 user maximo using passw0rd
db2 DROP TABLE ALIASES
db2 DROP TABLE ATTRIBUTE_TYPES
db2 DROP TABLE BNDLVALS
db2 DROP TABLE BUNDLENM
db2 DROP TABLE CDM_VERSION
db2 DROP TABLE CHANGE_EVENTS
db2 DROP TABLE CLASS_TYPES
db2 DROP TABLE CMSTREE
db2 DROP TABLE CMSTREES
db2 DROP TABLE DESIRED_SUPPORTED_ATTRS
db2 DROP TABLE DESIRED_SUPPORTED_MAP
db2 DROP TABLE ENUMERATIONS
db2 DROP TABLE FTEXPRSN
db2 DROP TABLE FTVALUES
db2 DROP TABLE INTERFACE_TYPES
db2 DROP TABLE LAPARAMS
db2 DROP TABLE LCHENTR
db2 DROP TABLE LCHENTRY
db2 DROP TABLE ME_ATTRIBUTES
db2 DROP TABLE METADATA_ASSN
db2 DROP TABLE MSS
db2 DROP TABLE MSS_ME
db2 DROP TABLE MSS_RELATIONSHIPS
db2 DROP TABLE NAMING_IDENTIFIERS
db2 DROP TABLE NAMING_POLICIES
db2 DROP TABLE NAMING_RULES
db2 DROP TABLE RELATIONSHIPS
db2 DROP TABLE RELATIONSHIP_TYPES
db2 DROP TABLE SBSTVALS
db2 DROP TABLE SUPERIORS
db2 DROP TABLE VALID_REL_TYPES
db2 DROP VIEW ATTR_PRIORITIES
db2 commit

And then I could re-run the maxinst.sh script and it worked like a champ. Please feel free to use my super secure password for yourself.

Monday, April 3, 2017

DevOps: Operations Can't Fail

Agile and DevOps are all about "Fail Fast", which is fine for developers, but absolutely unacceptable for Operations.

For an example, just look at the recent AWS outage:

http://www.recode.net/2017/3/2/14792636/amazon-aws-internet-outage-cause-human-error-incorrect-command

That was caused by someone debugging an application. None of us want our Operations department to be in that position, but it can obviously happen. I think there are several reasons why it happened, and I've got some opinions on how we need to work to ensure it doesn't happen to us:

Problem: Developers think Operations is easy

Absolutely everything labeled "DevOps" is aimed at allowing Development to do just enough "operations" to get by. But we in Operations know that it takes a lot more: Change and Configuration Management, Event Management, Business Service Modeling, and the list goes on and on. Individual Development teams don't necessarily understand these practices outside of their own application.

One Solution: We need to learn about "the new stuff"

The only way we'll be invited to the table to talk to development teams is to learn about the tools they're using (Jira, Puppet/Chef, Kubernetes, Docker, etc.). This will allow us to use a similar vocabulary when meeting with them. Without this basic knowledge, they simply won't invite us to any of their discussions.

Problem: Developers think Operations is unnecessary

Individual Development teams often don't see why the Operations department even exists. They have their tools that allow them to consistently deploy their application, so why does Operations need to be involved? They don't understand that any one of their 20-or-so "incidental" microservices may actually be absolutely critical to some other application in the environment.

One Solution: After learning the new stuff, ask to be involved

The Operations Manager needs to get involved with the Development teams. She needs to give Development teams some type of framework or process or SOMETHING that makes their application's metrics and availability visible to the Enterprise. This will allow ALL involved parties to understand the situation when there is an outage.

A great graphic from Ingo Averdunk at IBM


The parts in light blue (Logging, Monitoring, Event Mgmt, Notification, Runbook Automation, ChatOps and Root Cause Analysis) are those components that need to be standardized across all applications. If your Operations team isn't meeting with Development, you won't get to explain the need for the standard suite of tools.

There are other problems and other solutions

This post is meant to help Operations in a sea of DevOps information that is aimed only at Development, in the hope that we can rein things in and continue to ensure that the entire enterprise is healthy and available.

Friday, March 31, 2017

DevOps: The functions that must be standardized among different applications

DevOps appears to be here to stay, so from an Operations perspective, we need to ensure that all of the Development teams are playing together nicely and following some common rules.

Why?

I just realized that many Dev teams don't fully understand the need for Ops when they're implementing DevOps. Here are the foundational reasons, IMO, behind the need for Operations:

Business Continuity

In many enterprises, applications never die, and customers continue to need support long after the original application development team has moved on. If applications don't follow some basic standard practices, they can easily be forgotten by the people who need to support them - Operations. Developers want to move on to the next new thing, which is great for Dev, but horrible for Ops. There are numerous classifications of applications that can't simply change on a whim due to factors such as regulatory control. Regulations affect a truly stunning number of companies, from utilities to taxis to manufacturing. Unless Dev is going to take responsibility for the support of their application over its entire lifespan (which can be 5 to even 20 years), Operations needs to be involved.

Integration With Other Applications

Applications need to talk to one another at some point. And when those connections fail, all involved application teams usually point fingers at one another. To minimize this finger-pointing, all applications should adhere to some common standards, several categories of which are found below. Even if all Development teams coordinate tightly in your company, there are still MANY external applications being used that need to be supported (e.g. WebSphere, Oracle, etc.). And the management of these applications needs to be coordinated with the in-house applications being built. Operations provides this management and coordination.

Logging

Application logging should be somewhat standardized to allow the log data to be collected and parsed for important information. This doesn't mean they all need to log in exactly the same format, but they should all adhere to some best practices, such as:

- Every log entry should have a timestamp and a unique identifier (such as transaction ID)
- Logs should be human readable
- Identify the source of the message
- Avoid multi-line messages if possible
- Use name-value pairs (possibly log in JSON format)
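
As a quick illustration, a single log entry that follows these practices might look like the JSON line below. The field names are invented for the example, not any particular standard:

{"timestamp":"2017-03-31T14:22:05.123Z","txnId":"8f3a-41bd","source":"orders-service","severity":"ERROR","message":"JDBC call to db01 timed out after 5000 ms"}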

Monitoring

Applications NEED to be monitored at the very least for performance (response time) and availability (up/down). Ideally you want to have data collectors at each tier of a multi-tiered application to give you transaction topology and detailed monitoring data, but this can come later. At a bare minimum, all applications need to be monitored using some type of synthetic transactions, which periodically run scripted, non-customer transactions through the system to gather constant performance and availability metrics.
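
To make "synthetic transaction" concrete, a scheduled shell script as simple as this one is already a crude synthetic probe (the URL and timeout are invented for the example):

#!/bin/sh
# Crude synthetic probe: record availability and response time of one URL
URL="http://app.example.com/health"
START=$(date +%s%N)
CODE=$(curl -s -o /dev/null -w '%{http_code}' --max-time 10 "$URL")
ELAPSED_MS=$(( ( $(date +%s%N) - START ) / 1000000 ))
echo "$(date) $URL code=$CODE elapsedMs=$ELAPSED_MS"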

Event Management

While many applications log information, there are parts of the infrastructure that can only send "events" to some remote destination. The most common types of events are "SNMP traps" (SNMP=Simple Network Management Protocol), which are generated by network equipment such as routers and switches. A cohesive management strategy by operations needs to manage information in log files and events to allow for correlation between and among different systems. For example, a JDBC call from an application may fail, but the application itself doesn't know if this is a failure of the database itself, the network infrastructure or possibly even DNS misconfiguration. The event management function of the Operations group works on identifying these relationships in order to help perform Root Cause Analysis of incidents. This decreases the amount of resources required to resolve an issue.

Notification

Who needs to be notified when "something" goes wrong? Do you want every application team to receive an emergency text in the middle of the night for every problem? Probably not. The Operations team is usually responsible for sending (and, more importantly, suppressing) the appropriate notifications. This is tightly related to Event Management and Root Cause Analysis.

Runbook Automation

Anyone who is responsible for handling a ticket needs to have some idea of what to do. Runbooks are sequences of steps an operator can run to gather more information and/or resolve an issue. Runbooks need to be maintained to ensure that they're valid and up-to-date. Application teams often don't have all of the experience needed to create comprehensive runbooks. They are created over time by the Operations staff, who are constantly handling issues.

Authentication

In an enterprise, the ideal situation is that each user has ONE userid and password (or certificate, etc.) that they use to authenticate to all applications. This authentication storage mechanism needs to be maintained. This is another function provided by Operations.

Conclusion

DevOps is currently a very popular methodology, and it serves its purpose very well. It allows Development teams to continuously deploy applications to provide better business value. Operations is still required to perform quite a few functions that simply aren't in the purview of Development.

Monday, March 20, 2017

Come by booth 568 at #IBMInterConnect to demystify DevOps from an Operations perspective

There is a LOT of chatter about DevOps, but all of it seems to leave Operations almost completely out of the picture. Come to our booth to get our take on DevOps including:

- DevOps tries to encourage Development to do *some* amount of automation and monitoring.

- Your Operations department needs to provide Dev teams with policies for integrating their apps into your monitoring and event management system.

- Your Operations department needs to learn a little about software development so you can help educate your Enterprise on exactly how DevOps can fit into your environment.

- Your Operations department needs to learn enough about Agile (specifically Scrum and Kanban) to participate in relevant conversations when the topics arise.

- and more.

Thursday, February 23, 2017

Visit us at booth 568 at IBM InterConnect March 19-23 in Las Vegas

Get out to IBM InterConnect 2017!

Stop by booth S568 in the Hybrid Cloud area to talk to us about:

- Our recent and historical successes helping customers like you deploy IBM products.

- IBM's comprehensive suite of ITSM tools, including Netcool, IBM Control Desk, IBM Performance Management, and TADDM.

- How you can effectively use an Agile methodology in your journey to realizing DevOps.

- Different strategies for effective deployments.

- Effectively consolidating and integrating your existing toolsets to your best advantage.

and many more topics!

Thursday, February 9, 2017

How to start a Netcool OMNIbus implementation

Someone posed this question on IBM Developerworks today, and I wanted to share the answer I provided, since it contains quite a few useful links:

https://www.ibm.com/mysupport/s/question/0D50z00006LMPab/how-to-start-implementation-of-tivoli-omnibus?language=en_US

And here's my reply in case the above link goes away:

With such an open-ended question, I'm going to provide links that start at the very beginning - Event Management. IBM has a great Redbook on this topic. It's from 2004, but the foundational information is still completely valid:

http://www.redbooks.ibm.com/redbooks/pdfs/sg246094.pdf

It's a REALLY good reference, particularly chapters 1 and 2. Once you understand Event Management concepts, reasons, challenges, needs and personas, I think you then need to move on to information about the OMNIbus components, architecture and capabilities, which you can find in the product documentation here:

https://www.ibm.com/support/knowledgecenter/en/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/user/concept/omn_ovr_introtonetcoolomnibus.html

Then keep on reading through the rest of the product documentation so you understand how OMNIbus is basically configured.

The next topic you'll want to look at is probes, which process data and send events to OMNIbus; this information is also in the product documentation:

https://www.ibm.com/support/knowledgecenter/en/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/probegtwy/concept/omn_prb_settingupprobes.html

Next you'll probably want to dive into ObjectServer SQL to find out how to manage the events that probes generate:

https://www.ibm.com/support/knowledgecenter/en/SSSHTQ_8.1.0/com.ibm.netcool_OMNIbus.doc_8.1.0/omnibus/wip/admin/concept/omn_adm_sql_objservsql.html

You should probably also look at the links listed here:

https://www.ibm.com/developerworks/community/wikis/home?lang=en

Somewhere in here, you'll also need to determine whether you're going to use Netcool Impact (most new customers purchase both products in some combination). And if so, you can start poking around the Impact wiki:

https://www.ibm.com/developerworks/community/wikis/home?lang=en

Automated testing for IBM Control Desk

Last year IBM made available the Selenium Automation Toolkit for Maximo, which includes IBM Control Desk. More information can be found here:

https://www.ibm.com/developerworks/community/forums/html/topic?id=4d90a532-31a3-41bd-a128-2186fdae50b8

More information about Selenium itself can be found here:

http://www.seleniumhq.org/

IBM uses Selenium in several tools, including IBM Performance Manager and IBM Application Performance Manager. Essentially, it's used for recording and playing back web browser interactions.

Thursday, January 5, 2017

Maximo: How to view data from an arbitrary table

I recently had a need to view data in a Maximo table, but didn't have direct access to the database. So I wanted to find a way to use the Maximo Application Designer to get me this data. As I thought, it's very straightforward. Basically, you just need to create, configure, authorize and launch a dialog that specifies the table (MBO) as its source.

Mainly, follow the thorough instructions found here:

http://maximobase.blogspot.com/2013/05/how-to-create-custom-dialog-box-in.html

The parts of interest are:

In the dialog element, specify the appropriate mboname:

<dialog id="Testing" mboname="WARRANTYVIEW" label="Contract financial info" >

In this example, the MBO is "WARRANTYVIEW".

Also, you need to specify your MBO's attributes with the "dataattribute" attribute of each appropriate control:

<textbox id="finaninfo_grid_s1_1" dataattribute="totalcost" />

In this case, "totalcost" is the name of the attribute that will be displayed. Yours will be different.

And that's it for my use case. The MBO used by the dialog doesn't have to have any relationship to the main MBO attached to the application.
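
For reference, here's a minimal sketch of a complete dialog built from the pieces above. The section and button ids are arbitrary, and you still need to register the dialog and grant access to it as described in the linked article:

<dialog id="Testing" mboname="WARRANTYVIEW" label="Contract financial info">
  <section id="testing_s1">
    <textbox id="testing_s1_1" dataattribute="totalcost" />
  </section>
  <buttongroup id="testing_bg">
    <pushbutton id="testing_bg_ok" mxevent="dialogok" default="true" label="OK" />
  </buttongroup>
</dialog>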

Friday, December 16, 2016

An awesome video on IBM Network Performance Insights 1.2

If you're not a member of the IBM Middleware User Community, you'll need to sign up for free to access this video, but I think it's well worth it:

https://www.imwuc.org/p/do/sd/sid=3340&source=6

Here are a couple of screenshots to show you the kind of detail covered:



The speakers in the video are:

Krishna Kodali, Sr. Software Engineer, IBM - Krishna Kodali is a Senior Software Engineer at IBM, where he provides support and consultation for the Netcool product suite. He has been working with the Netcool product suite since 2005 and is a worldwide Subject Matter Expert (SME) for IBM Tivoli Network Manager (ITNM). He offers guidance in design and implementation for deployments of any size. Krishna has a Bachelor's degree in Engineering and is a Cisco Certified Network Professional (CCNP). He specializes in Network Technologies, System Management, IT Service Management, Virtualization, SNMP and Netcool.

John Parish, Technical Enablement Specialist, IBM - John Parish has been teaching IBM courses for the past 10 years.

Wednesday, November 16, 2016

PINK17 Feb 19-22, Las Vegas

I'm attending the Pink Elephant 2017 IT Service Management Conference and Exhibition Feb 19-22 in Las Vegas. To all of my associates and friends, please let me know if you're going so we can try to connect.


Friday, November 4, 2016

IBM's Cloud Business Partner Advisory Council in New Orleans was amazing

This was the first BPAC I've attended, and I hope to be invited to many more. We got to know our VP enablement team and have some great fellowship all around. I got to make new friends at Lighthouse, gen-E, Flagship, Perficient, ne Digital, Sirius, Avnet, Arrow and others. Thank you Dave Hock, Bob Miller, Joh Donaldson, Don Stough, Rene Ferguson, Melissa Hadley and all other IBMers involved for pulling off a great event.
If you're an IBM Business Partner, you need to make sure to keep in touch with the IBM BP team. There was great information presented and just an outstanding sharing of ideas to help everyone be successful.
We look forward to seeing everyone in March in Vegas at InterConnect if not before!

Tuesday, September 20, 2016

All of the applications included in IBM Control Desk for Service Providers 7.6 and 7.6.0.1

Just thought this would be useful information to publish: screenshots of all of the Applications you can select from the "Go To" menu and submenus in ICD 7.6 and 7.6.0.1.

Friday, September 9, 2016

Adding an additional hostname to Maximo on WebSphere and IBM HTTP Server

When you install Maximo (IBM Control Desk in this case) on IBM HTTP Server and WebSphere, the installation creates all of the virtual hosts you need based on the hostname of the IBM HTTP Server(s) that you include in your environment. However, you may need users to access the application using a different hostname (maybe one that's accessible from the Internet, for example). If you simply add a DNS CNAME record for your web server, you'll get an error when you try to access the application with that hostname.

To fix this problem, you need to add a Host Alias to each of the appropriate Virtual Hosts that you have defined for Maximo, then restart the application server(s). Here's how:

1. In my environment, I want to be able to access Maximo using the URL:

http://icdcommon/maximo

The two servers participating in my cluster are named icd1 and icd2. I'm adding an entry in my /etc/hosts (or \windows\system32\drivers\etc\hosts) file for icdcommon to be an alias for icd1.

(In a real environment, you would be modifying DNS appropriately).

2. Log into the WebSphere admin console at http://dmgr_host:9060/admin

3. Navigate to Environment->Virtual Hosts, where you will see multiple virtual hosts. I have the application configured in a cluster named MXCLUSTER, so the virtual hosts of interest to me are:

MXCLUSTER_host
default_host
webserver1_host

4. For each of the above Virtual Hosts, click on that host then click the Host aliases link.

5. Click the New button to add a new entry and in that entry, specify:

Host Name: icdcommon
Port: 80

6. Click OK, then click the save link at the top of the page (or you can wait until you're done to click save).

7. Once you've done the above for each of the three Virtual Hosts, you need to restart all IBM HTTP Servers AND all application servers.

8. Now you should be able to access the application with http://icdcommon/maximo (you may need to restart your browser).
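
If you have several virtual hosts or environments to update, steps 3 through 6 can also be scripted with wsadmin. Here's a rough Jython sketch using the virtual host names from my environment; treat it as a starting point rather than a tested procedure:

# run with: wsadmin.sh -lang jython -f add_host_alias.py
for vhName in ['MXCLUSTER_host', 'default_host', 'webserver1_host']:
    vh = AdminConfig.getid('/VirtualHost:%s/' % vhName)
    # add icdcommon:80 as a new host alias on this virtual host
    AdminConfig.create('HostAlias', vh, [['hostname', 'icdcommon'], ['port', '80']])
AdminConfig.save()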

Thursday, September 8, 2016

Installing IBM Control Desk 7.6 on RHEL 6.5 in a test environment

The biggest hitch you'll encounter when installing ICD 7.6 on Redhat Linux in a dev/test environment is the error

CTGIN8264E : Hostname failure : System hostname is not fully qualified

And the reason it's a big hitch is that the error is misleading. You do need to have your hostname set to your FQDN, but you also need to have an actual DNS (not just /etc/hosts, but true DNS) A record for your hostname. If you don't already have one that you can update, you can install the package named:

The Berkeley Internet Name Domain (BIND) DNS (Domain Name System) server

It is available on the base Redhat install DVD.
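
A quick way to check both prerequisites before launching the installer (the hostname below is a placeholder for your own FQDN):

# the system hostname must be the fully qualified name
hostname -f
# and the FQDN must resolve through real DNS; nslookup bypasses /etc/hosts
nslookup icd.example.com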

Friday, July 1, 2016

Accessing the CTGINST1 DB2 Instance From the Command Line Processor

When you install IBM Control Desk 7.6 on Windows, you actually have two DB2 instances created - DB2 and CTGINST1. The one with all of the data is CTGINST1, but the one that the system is configured to access is DB2. Luckily, this is easy to fix by changing the environment variable named DB2INSTANCE. After install, it is set to "DB2", and you simply need to change its value to "CTGINST1". You can do this temporarily from the command line or permanently by modifying the environment variables for the user.
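
For example, from a DB2 command window (db2cmd), something like this works; the database name below matches the ICD default from my environment, and the password is obviously a placeholder:

rem Temporary, for this command window only:
set DB2INSTANCE=CTGINST1
db2 connect to maxdb76 user maximo using yourpassword

rem Permanent, for the current user:
setx DB2INSTANCE CTGINST1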

Monday, June 27, 2016

Installing the ICD Demo Content along with the ICD Process Content Packs

Do NOT try to install the 7.5.1 demo data into 7.6. It really doesn't work well. I'm leaving this post intact because I believe the steps are useful in general.


If you try to install the IBM Control Desk Content Packs along with the 7.5.1 Demo Content, you're going to have problems. I already addressed a standalone problem with the Demo Content in an earlier post, and now I've gotten further, so I wanted to share the wisdom I gained.

No matter which order you install - Demo Content then Process Packs (specifically the Change Management Content Pack) or the other way around - you're going to encounter the following error:

One or more values in the INSERT statement, UPDATE statement, or foreign key update caused by a DELETE statement are not valid because the primary key, unique constraint or unique index identified by "1" constrains table "MAXIMO.PLUSPSERVAGREE" from having duplicate values for the index key.. SQLCODE=-803, SQLSTATE=23505, DRIVER=4.11.69


The cause for this is that the Change Management Content Pack and also the Service Desk Content Pack specify hard-coded values for PLUSPSERVAGREEID in the DATA\PLUSRESPPLAN.xml file, when the inserts should be creating and using the next value of the PLUSPSERVAGREESEQ sequence.

Given the above root cause, there are two possible solutions to the problem, depending on the order in which you install things.

If you install the Content Packs before the Demo Content


So in my first run, I installed the Content Packs first, and then the Demo Content (after modifying it as explained in an earlier post). And the exact SQL statement causing this problem was:

SQL = [insert into pluspservagree ( active,calendar,changeby,changedate,createby,createdate,description,hasld,intpriorityeval,intpriorityvalue,langcode,objectname,orgid,ranking,sanum,pluspservagreeid,servicetype,shift,slanum,calendarorgid,slatype,status,statusdate,slaid,slahold,stoprpifjportt,billapprovedwork) values (?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,nextval for PLUSPSERVAGREESEQ,?,?,?,?,?,?,?,?,?,?,0)]
 parameter[1]=1
 parameter[2]=BUS01
 parameter[3]=MAXADMIN
 parameter[4]=2007-10-12 12:33:55.0
 parameter[5]=MAXADMIN
 parameter[6]=2007-10-12 12:33:55.0
 parameter[7]=P1 Incident - Respond in 30 mins. Resolve in 2 hrs.
 parameter[8]=0
 parameter[9]=EQUALS
 parameter[10]=1
 parameter[11]=EN
 parameter[12]=INCIDENT
 parameter[13]=PMSCIBM
 parameter[14]=100
 parameter[15]=SRM1002
 parameter[16]=SLA
 parameter[17]=BUSDAY
 parameter[18]=SRM1002
 parameter[19]=PMSCIBM
 parameter[20]=CUSTOMER
 parameter[21]=ACTIVE
 parameter[22]=2011-09-14 13:26:13.247
 parameter[23]=1
 parameter[24]=0
 parameter[25]=0


To identify the constraint causing the problem, I found this page:

https://bytes.com/topic/db2/answers/810243-error-messages-key-constraint-violations

Which showed that I could find the particular constraint with the following SQL:

SELECT INDNAME, COLNAMES
FROM SYSCAT.INDEXES
WHERE IID = 1
AND TABNAME = 'PLUSPSERVAGREE'

That basically showed an index named SQL160607091434350 consisting of just the column named PLUSPSERVAGREEID.

So each row in the PLUSPSERVAGREE table should have a unique value in the PLUSPSERVAGREEID column.


Then to find the existing values in the PLUSPSERVAGREEID column of the PLUSPSERVAGREE table, run:

SELECT PLUSPSERVAGREEID from PLUSPSERVAGREE

For me, this showed values 1 through 16.

Now, looking at the sequence itself, I found that the last value assigned was 5 with this query:

SELECT LASTASSIGNEDVAL from sysibm.syssequences where seqname = 'PLUSPSERVAGREESEQ'

So to fix the problem, I altered the PLUSPSERVAGREESEQ sequence to start at 17:

ALTER SEQUENCE PLUSPSERVAGREESEQ RESTART WITH 17

After I did that, I tried again to install the Demo Content and it worked!

If you installed the Demo Content first

I take lots of snapshots of my VMs, so I could easily go back to a snapshot where I had already installed the Demo Content, to then try to install the Content Packs. That led me to see that the Change Management Content Pack has hardcoded values in the DATA\PLUSRESPPLAN.xml file (which I saw by downloading the ChangeMgtPack7.6.zip file and opening it up). On the positive side, it appears that nothing else in the Content Pack actually references these hardcoded values, so we have the option of changing them as needed.

In my particular case, I found the following values in the PLUSPSERVAGREE table for the PLUSPSERVAGREEID column:

9
10
11
12
13
14
24
25

I also found that the LASTASSIGNEDVAL for the PLUSPSERVAGREESEQ sequence was 25, so that matches up with the data.

The very lucky part for me is that there are exactly 8 rows that get inserted by the PLUSRESPPLAN.xml file, and the PLUSPSERVAGREE table doesn't have any rows with values 1 through 8!

So the solution I applied was I manually edited the PLUSRESPPLAN.xml file to set the PLUSPSERVAGREEID values to 1 through 8. Then I saved the edited file back into the zip file, created a valid ContentSource.xml file to point to it (so I could install from my local copy of the Content Pack), added my new Content Source to the Content Installer, and I was able to successfully install the Change Process Content Pack!

However, I then found that there's also a similar problem with the Service Desk Content Pack, but the same solution can't be applied. Specifically, in the Service Desk Content Pack, the DATA\SLA.xml file uses hardcoded values for the same column, but those values are 1, 3, 4 and 5, which I just used in my workaround for the Change Management Content Pack. So to fix this correctly, I looked in the Demo Content Content Pack to find out how to reference the PLUSPSERVAGREESEQ sequence, and it's actually not too bad.

So the fix I went through was to manually modify the DATA\SLA.xml file to change every element that looked like this:

<column dataType="java.lang.Long" name="PLUSPSERVAGREEID">
  <value>3</value>
</column>

to this:

<column dataType="java.lang.Long" name="PLUSPSERVAGREEID">
  <columnOverride>
    <sequence mode="nextval" name="PLUSPSERVAGREESEQ"/>
  </columnOverride>
</column>

Then like above, I saved the edited file back into the zip file, created a valid ContentSource.xml file to point to it (so I could install from my local copy of the Content Pack), added my new Content Source to the Content Installer, and I was able to successfully install the Service Desk Content Pack!

After installing, I checked the PLUSPSERVAGREE table again, and I saw that the values 26 through 29 were there, so I know my change worked.

So in my case I didn't have to change the start value for the PLUSPSERVAGREESEQ sequence, which is nice.

It was a painful afternoon, but well worth it in the end.

Wednesday, June 8, 2016

Installing ITIC and TDI on Windows Server 2012

Both of these tools use the ZeroG InstallAnywhere installer, which doesn't completely get along with Windows Server 2012. Luckily, there's an easy fix within Windows: set the "Compatibility mode" to run with compatibility for "Windows 7". You need to perform this procedure on the setup.exe file for ITIC (under Install\ITIC wherever you've extracted the install images) and the install_tdiv71_win_x86_64.exe file in the TDI installer directory.

On each file, right click and select Properties.

Then on the Compatibility tab, click the "Change settings for all users" button at the bottom.

In the "Compatibility mode" section, select "Run this program in compatibility mode for:" checkbox.

Select "Windows 7" from the drop down list.

Click OK, then OK again.

And now you're ready to install.

UPDATE: You do also need to ensure that the java executable is in your path. If not, it will fail when trying to create the Java Virtual Machine.

UPDATE 2: And it MUST be the Java 1.7 executable in your path. 1.8 will fail.
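
One way to satisfy both updates is to fix the PATH in the same Command Prompt you launch the installers from. The JRE path below is illustrative; point it at wherever your Java 1.7 actually lives:

rem Make a Java 1.7 executable the first one found (1.8 will fail)
set PATH=C:\Program Files\Java\jre7\bin;%PATH%
java -version
setup.exe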

Tuesday, June 7, 2016

Installing SmartCloud Control Desk 7.5.1 Demo Content on ICD 7.6

You can't do it

Use the maxdemo data that comes with the product. Install it initially using these steps:

https://www.ibm.com/developerworks/community/wikis/home?lang=en#!/wiki/Anything%20about%20Tivoli/page/How%20to%20install%20the%20sample%20data%20during%20IBM%20Control%20Desk%207.6%20installation


These steps are supposed to get it installed after the initial install, but I tried twice and failed both times:

http://www.ibm.com/support/knowledgecenter/SSLKT6_7.6.0/com.ibm.mam.inswas.doc/mam_install/t_mam_create_maxdemo_postinstall.html

So if you want demo data, which you do in some number of test/dev environments, simply install it at initial install time. It goes very smoothly.

But you can, mainly, with a little work

UPDATE 8/17/2016 
NOTE: THIS WILL ACTUALLY MAKE YOUR SELF SERVICE CENTER UNUSABLE!!!! I don't know why, but it does. Something in the content makes it so that you cannot do anything from the Self Service Center. So ONLY install this data if you have a complete backup of your system (VM snapshots are a wonderful thing).

Additionally, you will have other problems, such as the following error when you try to create a new WORKORDER:

BMXAA4169E - No record found in maxvars table for maxvar WOENABLEREPFAC. Make sure to insert the MAXVAR in the MAXVARS table.

And there's no easy fix. So the demo data will let you play around with a lot of functionality, but the system is pretty unusable for anything else after you install it.

Add an attribute to the TICKET table

You need to add an attribute named RBA_RC to the TICKET table. Its type needs to be ALN and its length set to 50. This attribute no longer exists, but the demo content requires it. You may want to take a different route to solve this problem, but this was the easiest one I could think of.

Remember, after adding this attribute, you need to set Admin Mode to ON, Apply Configuration Changes, then set Admin Mode to OFF.

Download the package

First, you need to download the content pack ZIP file itself from here:

https://www-304.ibm.com/software/brandcatalog/ismlibrary/details?catalog.label=1TW10CO0A

Edit the package

You've got the entire package downloaded, but if you just try to install it, you will fail. So you need to edit the file named Package/ImportPackage.xml within the zip file. You can edit it in vi, Notepad, gedit, etc. - any text editor you want. What you need to do is delete lines 408 through 717 (leaving the last line that reads "</package>"). The reason for this is that the first error when importing was on line 408. I tried de-selecting different options through the GUI, but was not successful. My choice to simply delete these lines was made only after 40-50 other attempts.

After editing the file, add it back to the zip file.
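
On Linux, one way to do that is with the zip command, run from the directory where you extracted the archive so the internal path is preserved (the zip file name here just matches the example in the next section):

zip TestPackage.zip Package/ImportPackage.xml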

Define your local content and Install

  1. Create an XML file called ContentSource.xml in the C:\temp directory on your Smartcloud Control Desk server system that contains the following text:
    <?xml version="1.0" encoding="UTF-8"?>
    <catalog infourl="" lastModified="" owner=""
      xmlns:tns="http://www.ibm.com/tivoli/tpae/ContentCatalog"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="ContentCatalog.xsd">
      <catalogItem>
        <version>7.5.1</version>
        <type>mriu</type>
        <name>Enter the name of package</name>
        <description>Enter a description of the package here</description>
        <homepage/>
        <licenseurl/>
        <category>Describe the category of the content</category>
        <url>file:////C:\temp\TestPackage.zip</url>
      </catalogItem>
    </catalog>
  2. Edit the name and description and the category according to the content that you are installing. Change the file name in the URL to the name of the content pack zip file.
  3. Save the file.
  4. Copy the content pack zip file to the C:\temp directory on the server.
  5. Go to the ISM Content Installer application: System Configuration>IBM Content Installer.
  6. Click the New icon.
  7. Enter the location of the ContentSource.xml that you created in step 1 and a description. The file name in our example is:  file:////c:\temp\ContentSource.xml
  8. Click Save.
  9. Click the newly created content source.
  10. Click the download link to install the content.

You've now got a good amount of demo content

You don't have everything from the original content pack, but you've got a lot more than you started with. Good luck.

Monday, June 6, 2016

IBM Control Desk for Service Providers

One of the many features of IBM Control Desk that separates it from the competition is its ability to support Service Providers. It does this by allowing you to secure information on a per-customer basis. For this blog post, I wanted to show a couple of multi-customer scenarios in the product. Specifically, I wanted to show a customer-specific user logging in and only seeing that customer's assets. Additionally, I wanted to show a software license being assigned to that customer-owned asset and how it appears. The screenshots associated with those are shown here:

Create a customer named ACME CORP






Now create a Person who is associated with ACME CORP.





The Cust/Vendor field is farther down on the page.


Now create a user that is associated with that Person.


Now create a Security Group (SP) with any permissions you want, but specify "Authorize Group for Customer on User's Person record". I only granted Read access to the Assets application. And add your user to this group.


Here I'm logged in as the user, and can only see the one asset associated with ACME CORP.


Here I'm viewing the Licenses (SP) application for Adobe Acrobat and see that a license has been allocated to ITAM1010, which is the asset associated with ACME CORP.


Friday, April 22, 2016

Configuring ITIC for use with IBM Control Desk 7.6

As shipped with ICD 7.6 (at least on Linux x86 64-bit), ITIC isn't quite configured correctly. When you try to run startFusion.sh, it will complain that it cannot find the main class. The problem lies in the init.sh script. Specifically, you need to change the following line:

FSNBUILD=7510

to

FSNBUILD=7600

Without this change, it's trying to find a file named IntegrationComposer7510.jar, which doesn't exist. In 7.6, the correct file is IntegrationComposer7600.jar.
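
Here's a one-liner for the change; the path is an assumption based on a typical install location, so adjust it to wherever your init.sh actually lives:

sed -i 's/^FSNBUILD=7510/FSNBUILD=7600/' /opt/IBM/tivoli/integrationcomposer/bin/init.sh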

Another thing to note: the URL of the BigFix server for use with the ITIC mapping is:

https://hostname-or-ipaddress:52311