Monday, January 27, 2020

Automating processes in a small company using free utilities

Introduction

I want to share some of the automation that I've written for my company to help with some of the back office processing. The reason that I wrote this automation myself is that I looked at numerous offerings on the market, and none fit exactly what we needed. Once I made that realization, I came up with automation using tools we already own or that are free to use, specifically:

Google Sheets, including Google Script
Windows Batch scripting
Perl scripting
Bookmarklets in JavaScript
QuickBooks IIF files

It looks like a lot (and it may very well be a lot), but given my background, this path required no more work than learning an off-the-shelf product would have.

The Payroll Process

Not everything is automated. We do have automated time sheets, but several pieces of data still need to be manually combined and calculated. We started the automation with the spreadsheet in which this data was being captured. The data was already in a straightforward format, so it just needed a little cleanup: some of the entries needed specific keywords added, and we moved the spreadsheet into Google Sheets. We made the decision to move from Excel so that we could always access the spreadsheet, and also because I personally enjoy programming in JavaScript (Google Script) more than in Visual Basic (MS Office scripting).

The process starts with my CFO filling out the Google Sheet using data from the few different sources that we have. She then presses the "Create IIF" menu item (created with a Google Script implementation of the onOpen() function). That creates both an IIF file for QuickBooks (which actually needs a little more processing - more on that later) and a file containing the JavaScript for a bookmarklet that is used later in the process. It also sends an email out to each employee.
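
The menu wiring itself is tiny. Here's a minimal sketch of the onOpen() side; the menu and function names are illustrative, not the exact ones in our sheet:

// Runs automatically when the spreadsheet opens and adds a custom menu.
function onOpen() {
  SpreadsheetApp.getUi()
    .createMenu('Payroll')
    .addItem('Create IIF', 'createIIF')
    .addToUi();
}

// createIIF() (not shown) walks the sheet rows, builds the IIF text and the
// bookmarklet JavaScript, and sends the notification emails.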

The IIF file then has to be downloaded into a specific directory, which contains a Windows .BAT file, a shell script, and a Perl script. The .BAT file is simply used to call the shell script, which calls the Perl script to actually parse the file. That Perl script mainly just adds TAB characters in the correct places to match the correct IIF format. I wasn't able to make the TAB characters survive the download via Google Script, so this is the workaround I came up with.
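
The transformation itself is trivial. As an illustration only - the real script is Perl, and the actual placement rules are specific to our IIF layout - here is the same idea sketched in JavaScript (Node), assuming the generated file marks each TAB position with a '|' placeholder (a convention invented for this sketch):

// Swap placeholder characters for the real TABs that QuickBooks expects.
const fs = require('fs');

const raw = fs.readFileSync('payroll.iif', 'utf8');  // the downloaded file
const fixed = raw.replace(/\|/g, '\t');              // '|' marks each TAB position
fs.writeFileSync('payroll.iif', fixed);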

The contents of the JavaScript file are the implementation of the "SetACH" bookmarklet that has already been defined in her browser. That bookmarklet is used to fill in the appropriate fields on our bank's ACH web page. To do this, my CFO logs into our bank and navigates to the appropriate page, then clicks the SetACH bookmarklet, which fills in all of the appropriate values.
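
For anyone unfamiliar with bookmarklets, here's a minimal sketch of the pattern; the field ids and values are invented, since the real ones depend on the bank's page (the generated file just bakes in that pay period's values):

javascript:(function () {
  // set a form field by id, if it exists on the page (ids here are made up)
  function set(id, value) {
    var el = document.getElementById(id);
    if (el) { el.value = value; }
  }
  set('ach-account-1', '123456789');
  set('ach-amount-1', '1234.56');
  // ...one set() call per field/value pair generated by the Google Script
})();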

Better Than a Packaged Product?

In a word, yes. Every COTS (commercial off-the-shelf) product requires some amount of configuration, which, in my experience, would have taken more time than creating the custom solution I built. Additionally, all of the COTS products I investigated included tons of capabilities that we simply don't need or that don't work for us. As you can see, we only needed a tiny bit of functionality, and I didn't want to pay for a bevy of additional features that we wouldn't use. I used the same evaluation process that I help customers with every day: I identified the specific functions I needed to perform and the reports I needed to generate, and evaluated the choices against those. I also weighed the existing tools and my development expertise with each to confirm that I was capable of producing a working solution. We've now had this automation in place for three years, and it has required minimal upkeep.

Monday, January 13, 2020

Using jQuery in Cognos 10.x and DASH

For most of what you need, IBM has a great article on how to incorporate jQuery into your Cognos reports here:

https://www.ibm.com/developerworks/library/ba-pp-reporting-advanced-report-design-page647/index.html

However, there are some painful issues which vary based on whether you access reports through DASH/JazzSM or go directly to Cognos Reporting. Specifically, it appears that DASH adds some additional capabilities to the environment, possibly through the use of Dojo. Or it's possible that these capabilities are stripped off when logging directly into Cognos. Whatever the case, here are the caveats:

DASH/JazzSM adds some additional container elements to the HTML report such that you cannot use the browser "Print" function to print a multi-page report. All you get is the first page of the report, and this happens on all browsers. To get around this, I opened a new window and set the contents to be only the container enclosing the report. Then THAT window can be printed via the browser.
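
Here's a minimal sketch of that workaround; 'reportContainer' is a placeholder id, and you'd need to inspect the DASH-generated DOM to find the element that actually encloses the report:

function printReport() {
  var container = document.getElementById('reportContainer');
  // copy just the report markup into a fresh window, then print that window
  var win = window.open('', '_blank');
  win.document.write('<html><head><title>Report</title></head><body>' +
                     container.innerHTML + '</body></html>');
  win.document.close();
  win.print();
}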

Logging directly into Cognos "removes" several methods that should exist in IE 11. For example, it undefines the Object.keys() method. So if any JavaScript you're using references that method, you'll need to use a workaround like this one:

https://stackoverflow.com/questions/18912932/object-keys-not-working-in-internet-explorer
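
The usual fix is a guarded polyfill along these lines (a simplified version of the one in that answer, ignoring the old IE enumeration quirks):

if (typeof Object.keys !== 'function') {
  Object.keys = function (obj) {
    var keys = [];
    for (var key in obj) {
      // only include the object's own properties, like the native method does
      if (Object.prototype.hasOwnProperty.call(obj, key)) {
        keys.push(key);
      }
    }
    return keys;
  };
}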

Additionally, when logging directly into Cognos, the HTMLCanvasElement.getContext() method is not defined. Personally, I got around this by commenting out any references to that method, and that worked in my case. Unfortunately, I couldn't find any "good" solution to this problem, so this is all I've got.
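
If commenting the references out by hand is too fragile, a feature check like this at least keeps the rest of the script running (both function names here are hypothetical):

var canvasWorks = typeof HTMLCanvasElement !== 'undefined' &&
                  typeof HTMLCanvasElement.prototype.getContext === 'function';
if (canvasWorks) {
  drawCharts();        // hypothetical canvas-based rendering
} else {
  renderPlainTables(); // hypothetical non-canvas fallback
}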

Happy coding!

Tuesday, December 3, 2019

Adding an Unauthenticated JSP to IBM Control Desk

Background

We have a customer that needed to allow unauthenticated users to open tickets within IBM Control Desk, and we only had access to IBM HTTP Server and our Maximo/ICD WebSphere server to make that happen.

Security Risks

Essentially, anyone with the link can get to the pages described in this article. They could even write a script to create a huge number of tickets, taking down your ICD installation. So in any pages you create using this article, you need to do *something* to guard against that behavior. Exactly what you do depends on your specific situation. 

Where to Create the Unprotected JSP

You just need to create the .jsp file under:
.../installedApps/<cell_name>/MAXIMO.ear/maximouiweb.war/webclient/login

For example, put the following in the file HelloWorld.jsp in the above directory:

<HTML>
 <HEAD>
  <TITLE>Hello World</TITLE>
 </HEAD>
 <BODY>
  <H1>Hello World</H1>
  Today is: <%= new java.util.Date().toString() %>
 </BODY>
</HTML>

Then access it with a URL like this (substituting your own host name):

https://<your_maximo_host>/maximo/webclient/login/HelloWorld.jsp

You'll see the rendered page - the Hello World heading and the current date - with no login required.

That's it. Now you just need to write the JSP code to do what you need.

Caveats

A JSP file added this way needs to be copied back into place each time you redeploy the application; these instructions do NOT cover adding the JSP file to the EAR build process.
You need to manually copy the JSP file(s) to the appropriate location for each Maximo UI JVM.

Monday, December 2, 2019

Creating an incident ticket in IBM Control Desk using the new REST API

Background

Maximo 7.6.0.2 introduced a new REST API that can be accessed via .../maximo/oslc. Here's a link to the documentation on it:

https://developer.ibm.com/static/site-id/155/maximodev/restguide/Maximo_Nextgen_REST_API.html

This is an all-JSON API that makes things a ton easier than they were with the older (and deprecated) XML-based REST API.

The Problem

However, that documentation is aimed at Maximo Enterprise Asset Management users rather than IBM Control Desk users, so there aren't any examples of creating incidents or service requests.

Why You're Here

You want an example of creating an INCIDENT in ICD, and that's what I'll provide. I'm using ICD 7.6.1.1 on WebSphere, DB2 and IBM HTTP Server, along with the sample data. That's how I have a classification ID and hierarchy structure in the example below.

Basically, the best way I've found is to use the MXOSINCIDENT object structure, because it already has a bunch of relationships (including one to TICKETSPEC, so you can add specifications when creating an incident). Here are the details:

POST URL:
https://<your_maximo_host>/maximo/oslc/os/mxosincident

Additional header (tells Maximo to return the full created object):
properties: *

BODY:
{
    "reportedby": "MXINTADM",
    "description": "second MXINCIDENT OS API",
    "externalsystem": "EVENTMANAGEMENT",
    "classstructureid": "21010405",
    "ticketspec": [{"assetattrid": "computersystem_serialnumber","alnvalue": "99999"}]
}

RESPONSE:

{
    "affecteddate": "2019-11-29T15:02:00-05:00",
    "template": false,
    "creationdate": "2019-11-29T15:02:00-05:00",
    "hierarchypath": "21 \\ 2101 \\ 210104 \\ 21010405",
    "historyflag": false,
    "actlabcost": 0.0,
    "createwomulti_description": "Create Multi Records",
    "selfservsolaccess": false,
    "outageduration": 0.0,
    "ticketuid": 46,
    "inheritstatus": false,
    "reportdate": "2019-11-29T15:02:00-05:00",
    "class_description": "Incident",
    "description": "second MXINCIDENT OS API",
    "reportedby": "MXINTADM",
    "classificationid": "21010405",
    "sitevisit": false,
    "_rowstamp": "10009026",
    "accumulatedholdtime": 0.0,
    "createdby": "MXINTADM",
    "isknownerror": false,
    "affectedperson": "MXINTADM",
    "class": "INCIDENT",
    "ticketid": "1040",
    "ticketspec": [
        {
            "classstructureid": "21010405",
            "changeby": "MXINTADM",
            "changedate": "2019-11-29T15:02:00-05:00",
            "alnvalue": "99999",
            "mandatory": false,
            "refobjectname": "INCIDENT",
            "ticketspecid": 7,
            "assetattrid": "COMPUTERSYSTEM_SERIALNUMBER",
            "_rowstamp": "10009029",
            "refobjectid": 46,
            "displaysequence": 1,
        }
    ],
    "status_description": "New",
    "externalsystem_description": "EVENT MANAGEMENT",
    "classstructureid": "21010405",
    "changeby": "MXINTADM",
    "changedate": "2019-11-29T15:02:00-05:00",
    "externalsystem": "EVENTMANAGEMENT",
    "actlabhrs": 0.0,
    "relatedtoglobal": false,
    "hasactivity": false,
    "statusdate": "2019-11-29T15:02:00-05:00",
    "createwomulti": "MULTI",
    "hassolution": false,
    "virtualenv": false,
    "pluspporeq": false,
    "isglobal": false,
    "oncallautoassign": false,
    "pmscinvalid": false,
    "status": "NEW"
}

Prerequisites

To successfully do the above, you need to configure object structure security (see https://www.ibm.com/support/pages/using-object-structure-security-limit-access-security-groups), and the user MUST have a Default Insert Site. My MXINTADM user apparently does; MAXADMIN in my system DOES NOT, so the call fails with that user.

I'm using Postman for testing, which I highly recommend: https://www.getpostman.com/downloads/

Of course you'll use some specific language (or curl) when you're doing this in production, but for testing, you want to use Postman.
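
For example, in production the Postman request above might become something like this Node.js sketch. The host, user, and password are placeholders; 'maxauth' is Maximo's native basic-auth header (an API key is another option):

// Run with Node 18+ as an ES module (for global fetch and top-level await).
const body = {
  reportedby: 'MXINTADM',
  description: 'second MXINCIDENT OS API',
  externalsystem: 'EVENTMANAGEMENT',
  classstructureid: '21010405',
  ticketspec: [{ assetattrid: 'computersystem_serialnumber', alnvalue: '99999' }]
};

const resp = await fetch('https://your-icd-host/maximo/oslc/os/mxosincident', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'properties': '*',  // return the full created object in the response
    'maxauth': Buffer.from('MXINTADM:yourpassword').toString('base64')
  },
  body: JSON.stringify(body)
});
console.log(resp.status, await resp.json());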

A helpful link

In addition to the API documentation, this link was very helpful to me:


Tuesday, November 12, 2019

Special characters in passwords when initially configuring IBM Control Desk 7.6.1.1

Several of the screens displayed by ConfigUI tell you that some passwords don't allow special characters. The rule of thumb I found the REALLY hard way is:

ONLY use underscore as a special character in any password initially.

We ran into problems mainly with the dollar sign, but others will certainly bite you, too. We even ran into the problem with the root user's password. Any password can be changed after the initial deployment, so save yourself some headaches and make them all very simple initially. The painful part is that some of the failures give you almost nothing to go on - just "Unable to access host with these credentials" with no further detail. So take the advice above and you'll be much happier.

Monday, November 4, 2019

Moving to the cloud. Pick any two: Cheap, Fast, or Easy

The Cloud offers literally all of your current IT services, plus tons more, some of which most of your IT department has never heard of before. You can quickly and easily move all your existing workloads there, but you'll pay dearly, as many companies are finding out. You can take the time to train your IT staff and meticulously plan for the most efficient way to the cloud, but that's not quick. 

Moving to the cloud correctly truly requires rethinking how you do everything in IT. In all cases, the best route is to move only a subset of workloads or capabilities to the cloud, and different clouds may be better for different workloads. Some things are easier and cheaper to run on-prem. For example, in many cases it can be cost effective to arm your IT and development staff with laptops with 64GB of RAM. Doing so allows each one to run their own private multi-cloud in which they can test away. A brand new laptop with warranty, 64GB of RAM, and 6 cores (12 threads) can be found for under $1,700 on eBay and has a useful life of 4 years. A comparable VM in the cloud (AWS EC2 r5.2xlarge) costs $0.20 per hour, which is about $5,300 for a three-year term, and doesn't allow the flexibility of a local system running VMware Workstation. That's a very specific example, but it illustrates why each and every workload needs to be analyzed or audited before simply moving it to the cloud.

Some workloads are better suited to specific public clouds. WebSphere applications are a big example: if you want to "lift and shift" those workloads to the cloud, the IBM Cloud should be your first choice. If you have apps that run under SharePoint, you should absolutely run those applications on Microsoft's Azure cloud. There are many other workloads that may run equally well on any cloud, and for those, the analysis needs to take other factors into account.

The point I want to get across is that moving to the cloud requires analysis by a qualified team of experts. I think the best approach is to hire one or more experts and simultaneously train your own team to help them get up to speed. The combination of those two is extremely important, because you don't want newly-trained people responsible for your entire cloud migration. You want an expert who can guide the team, allowing them to take over responsibilities over time.

Friday, September 13, 2019

If you're scripting on Windows, use PowerShell

My last post on PowerShell was in 2008, so I thought I would write an update. If you're writing scripts on Windows, you should probably be using PowerShell. It seems to have 99 to 100% of the tools (especially parsers) that I ever need. I just recently needed to scrape a web page for some data, so I thought I would spend some time messing with PowerShell to get it going. Well, it only took about 30 minutes to develop the entire script that I needed, with absolutely no external dependencies.

Here's the whole script:

# get the web page. Yes, PowerShell has a 'curl' command/alias
$resp = curl -UseDefaultCredentials http://myhostname/mypagename

# Get all of the rows of the table with an ID of "serverTable"
$rows = $resp.ParsedHtml.getElementById("serverTable").getElementsByTagName("TR")

# Loop through the rows, skipping the header row: 
for ($i = 1; $i -lt $rows.length; $i++) {

  # get the hostname cell of this row ('length' and 'item()' are used because
  # these are COM collections, not .NET arrays)
  $thehost = $rows.item($i).getElementsByTagName("TD").item(2)

  # get the date this host was last rebooted
  $rebootDateString = $rows.item($i).getElementsByTagName("TD").item(6)

  # if the host was rebooted more than 20 days ago, print that date
  if ([datetime]::Now.AddDays(-20) -gt [datetime]::ParseExact($rebootDateString.innerText, "G", $null)) {
    $rebootDateString.innerText
  }
}


That's it, with no external references and nothing extra to install. It's got date parsing, date arithmetic, HTML parsing, and HTTP request capabilities all built in. I realize that this then isn't portable... or is it? There's actually a PowerShell port for Linux available, with instructions from Microsoft.


I know that Python is a hot language these days, but I don't like it as much as PowerShell. I tried to do the above with Python, and it took quite a bit longer, even though I've used Python more than PowerShell. You have to import some classes and then use XPath to find elements in the HTML. PowerShell was just straightforward and easy, at least for me, with my background and expectations. YMMV, but I like PowerShell.