Tuesday, December 29, 2020

ServiceNow Event Rules: No support for If..Then..Else in regex

The Transform and Compose Alert Output section of an Event Rule in ServiceNow allows you to use regular expressions and grouping to parse data in incoming events. Almost full regular expression support is provided, but I wanted to point out one feature that's definitely not supported as of the Paris release: if..then..else conditionals. There's a great description of this feature at regular-expressions.info. Basically, the format of this conditional is:

(?(condition)(then)|(else))

It's somewhat esoteric, but it can be useful in cases where you have two different string formats that you want to parse in a single event rule. I just tested this thoroughly on my dev instance, and it absolutely is not supported. While you can save the event rule with this syntax, you'll eventually see an error stating:

Invalid Regular Expression Field

I believe this is because the regular expression engine doesn't support lookbehind (a documented limitation). In any case, it's not supported. I just wanted to document my findings here to save others a little headache.

Update 1/3/2021

The official name for this is Conditional Lookahead. You can play around with it at http://regex101.com if you're interested. 

And to make sure I was doing it correctly, I also tried this syntax in the Event Rule transform:

(?(?=my_pattern)(\d+)|(\w+))

This works in regex101, but gives the Invalid Regular Expression Field in the ServiceNow interface.
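Just to illustrate the then/else behavior outside of ServiceNow, here's a minimal Python sketch. Python's re module supports the group-reference flavor of the conditional (the condition is whether a numbered group matched, rather than a lookahead), but the then/else logic is the same idea; the pattern and sample strings below are just made-up illustrations:

import re

# (?(1)then|else): if group 1 (the optional "id=" prefix) matched, require digits;
# otherwise match a plain word.
pattern = re.compile(r'^(id=)?(?(1)\d+|\w+)$')

print(bool(pattern.match('id=12345')))   # True  - prefix present, digits required and found
print(bool(pattern.match('webserver')))  # True  - no prefix, so the else branch matches a word
print(bool(pattern.match('id=abc')))     # False - prefix present, but no digits follow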

Friday, August 14, 2020

ServiceNow ITOM Health - Using a complex algorithm to bind an Alert to a CI

 By default with ServiceNow ITOM Health Event Management, an Alert is bound to a CI following this flowchart:

In 95-99% of implementations, you should work to configure components appropriately to leverage the above flow. 

However, there are a small number of unique cases where you need to do something very specific that falls outside of the above decision-making process. For those cases, you will want to modify one or more of the Advanced Scripts under Event Management->Settings in the navigator. Specifically, to change the bound CI, you would modify the EvtMgmtCustom_PostBind_Create script to suit your needs. In this script, you can do anything you want to the alert GlideRecord before it's actually written to the database.

Read the warnings in the script definition: you do NOT want to modify this script unless you know what you're doing and your code is very efficient. But for those few cases where you need to set the bound CI according to some esoteric algorithm, this script is where it can be done.

Thursday, July 2, 2020

How to get live metrics for CPU, memory, disk, etc. by running DB2 queries against the TDW server

I just received this question today from an ITM user and thought it would be best to answer it here.

First, the "Getting Started with IBM Tivoli Monitoring on Distributed Environments" PDF contains great information on ITM 6.1, and it is applicable to all of the 6.x versions:

http://www.redbooks.ibm.com/redbooks/pdfs/sg247143.pdf

Chapter 4 specifically goes over the Tivoli Data Warehouse, including the DB2 table and column names. Specifically, look at section 4.2.4 for descriptions of the different table names.

The tricky part in DB2 is that the table and column names are case sensitive, which means you need to enclose them in double quotes in your SQL statement. For example, you would use this SQL statement to get percent processor time for all processes on all servers:

select "%_Processor_Time","Process_Name" from "NT_Process"
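If you'd rather pull this data from a script than from the DB2 command line, here's a minimal sketch using the ibm_db Python driver. The database name (WAREHOUS is the default TDW name), hostname, port, and credentials below are placeholders, so adjust them for your environment:

import ibm_db

# Connection details are placeholders for illustration
conn = ibm_db.connect(
    "DATABASE=WAREHOUS;HOSTNAME=tdwhost;PORT=50000;PROTOCOL=TCPIP;UID=itmuser;PWD=secret;", "", "")

# Table and column names are case sensitive, so they must be enclosed in double quotes
sql = 'select "Process_Name", "%_Processor_Time" from "NT_Process"'
stmt = ibm_db.exec_immediate(conn, sql)

row = ibm_db.fetch_assoc(stmt)
while row:
    print("%s %s" % (row["Process_Name"], row["%_Processor_Time"]))
    row = ibm_db.fetch_assoc(stmt)

ibm_db.close(conn)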

If you want to see data that hasn't yet made it to the TDW, then you need to use the krarloff command described here:


Friday, June 12, 2020

Exception handling in Maximo automation scripts with try except

For the longest time, I thought that try/except blocks didn't work in automation scripts. A colleague asked me about them yesterday, so I figured I would try it out again, and found that they work. It's really just like normal Python exception handling. Here's a script I used to test it, with an Object Launch Point on Save for the SR object:

from psdi.server import MXServer
import sys

try:
  # FOOBAR is not a valid attribute on the SR object, so this call throws an exception
  mbo.getMboValue("FOOBAR")
except:
  print "an error was encountered"
  # sys.exc_info()[0] is the type of the exception that was raised
  e = sys.exc_info()[0]
  print e

The expected behavior is that a message is written to the log file but the SR is still created, and that's exactly the behavior I observed. Here's what was in the log file:

[6/12/20 9:25:50:961 EDT] 000021c4 SystemOut     O 12 Jun 2020 09:25:50:961 [DEBUG] [MAXIMO] [] an error was encountered
<type 'psdi.util.MXApplicationException'>

Happy scripting!

Tuesday, June 9, 2020

Windows users, set up your Print Screen key to launch the Snip and Sketch tool

After years of using different mechanisms to take screenshots or partial screenshots, I've finally recognized that the absolute best way is to configure your Print Screen key to launch the Snip and Sketch tool. It's easy to do and will make your life much easier:

https://winaero.com/blog/print-screen-screen-snipping-windows-10/

Now that it's configured, just press your Print Screen key and you can then outline a rectangular area of your screen to copy. Once selected, the image is copied to your clipboard, but the Snip application isn't launched. If you need to launch the application, you can click on the notification that was generated when you selected the area (I normally just need to paste the image into a document, so I'm very happy that the application isn't launched). That's it. It's a very small change that has made my personal workflow much more efficient.

designer/missingid error when importing application definition in ICD 7.6.1 using port forwarding

Background

I encountered a strange error with my IBM Control Desk development environment and wanted to share because I couldn't find any hits on Google.

Problem

The problem I've encountered is a "designer/missingid" error when trying to import any application XML into ICD 7.6.1.1, after which the import fails. It happens in every Windows and Linux browser I tried (Chrome, IE, Firefox, Edge, new Edge, Chromium, etc.), so it doesn't appear to be a browser problem.

Workaround

I realized that my ICD VM is running on my VMWare host on a NAT network, and I've got port forwarding configured to allow me to access it from systems on my home network. So I figured I would try the same steps using a browser running either on the ICD VM itself or on another VM on the same NAT network. And that works! So as far as I can tell, the issue has something to do with my port-forwarding configuration or something else in my network. More importantly, it's NOT a problem with ICD.

I've used my browser's Developer Tools to analyze what's going on, and I don't see any difference between a failing browser and a working one, so I think there's something going on in ICD. However, I know that this particular configuration isn't one that I'm likely to see at any customer site, especially since I'm using VMWare Workstation to run my VMs. So I'm just documenting this here so there's at least one Google hit on the problem, along with at least one workaround.

Tuesday, April 7, 2020

How to delete virbr0 and virbr0-nic in RHEL7

Background

I have an RHEL 7.6 VM that I clone to have new servers. This VM was installed with the Desktop profile defaults. This installed libvirt, which I normally want in my VMs, but it sometimes gets in the way. Specifically, I needed to configure two DB2 VMs with HADR and TSAM in a test environment, and I found that virbr0 and virbr0-nic don't allow the 'db2haicu' command to successfully complete. So I needed to quickly delete those interfaces.

List of steps

As root:

service libvirtd stop
systemctl disable libvirtd
ip link set virbr0 down
brctl delbr virbr0
ip link delete virbr0-nic

Caveats

I don't know if the above will correctly survive a reboot.

Thursday, March 26, 2020

One solution for VMMSyncData has no values

Background

During an IBM Control Desk (also applies to Maximo) implementation, we found a problem with the VMMSYNC cron task. Specifically, it didn't appear to pull any data from Active Directory. Or if it did, the data wasn't processed.

Symptoms

We had ChangePolling set to 1 in our cron task instance. We enabled DEBUG level logging for the VMMSYNC cron task and saw the following in SystemOut.log:


[3/25/20 19:28:06:374 EDT] 000001ba SystemOut     O 25 Mar 2020 19:28:06:374 [DEBUG] [MAXIMO0c1] [CID-CRON-1618] Synchronizing VMM Users: vmmSubject Subject:

        Principal: defaultWIMFileBasedRealm/myproduser
        Public Credential: com.ibm.ws.security.auth.WSCredentialImpl@16bdee5d
        Private Credential: com.ibm.ws.security.token.SingleSignonTokenImpl@e022bcb
        Private Credential: com.ibm.ws.security.token.AuthenticationTokenImpl@17329f4d
        Private Credential: com.ibm.ws.security.token.AuthorizationTokenImpl@64c05421

[3/25/20 19:28:06:384 EDT] 000001ba SystemOut     O 25 Mar 2020 19:28:06:384 [DEBUG] [MAXIMO0c1] [CID-CRON-1618] Synchronizing VMM Users: VMM search results
<?xml version="1.0" encoding="UTF-8"?>
<sdo:datagraph xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:sdo="commonj.sdo" xmlns:wim="http://www.ibm.com/websphere/wim">
  <wim:Root>
    <wim:entities xsi:type="wim:PersonAccount">
      <wim:identifier externalName="CN=myproduser,OU=Admin,o=MYCORP.COM"
          repositoryId="MYAD" uniqueId="CN=myproduser,OU=ADMIN,O=MYCORP.COM"
          uniqueName="CN=myproduser,OU=Admin,o=MYCORP.COM"/>
    </wim:entities>
    <wim:controls xsi:type="wim:ChangeResponseControl">
      <wim:checkPoint>
        <wim:repositoryId>MYAD</wim:repositoryId>
        <wim:repositoryCheckPoint>10085177</wim:repositoryCheckPoint>
      </wim:checkPoint>
    </wim:controls>
  </wim:Root>
</sdo:datagraph>
 
[3/25/20 19:28:06:384 EDT] 000001ba SystemOut     O 25 Mar 2020 19:28:06:384 [DEBUG] [MAXIMO0c1] [CID-CRON-1618] syncUser, syncData = <VMMSyncData has no values>


Notice the last line of the output: that entry's data should have appeared in the VMMSyncData, but it says it has no values.

I then looked at the log of a working environment and saw that it was similar, but it actually worked:


[3/25/20 18:57:12:940 EDT] 000001b3 SystemOut     O 25 Mar 2020 18:57:12:940 [DEBUG] [MAXIMO0] [CID-CRON-102377] Synchronizing VMM Users: vmmSubject Subject:
        Principal: defaultWIMFileBasedRealm/myqauser
        Public Credential: com.ibm.ws.security.auth.WSCredentialImpl@ec82289
        Private Credential: com.ibm.ws.security.token.SingleSignonTokenImpl@5b17dd0b
        Private Credential: com.ibm.ws.security.token.AuthenticationTokenImpl@9348468a
        Private Credential: com.ibm.ws.security.token.AuthorizationTokenImpl@447810d4
 
[3/25/20 18:57:12:944 EDT] 000001b3 SystemOut     O 25 Mar 2020 18:57:12:944 [DEBUG] [MAXIMO0] [CID-CRON-102377] Synchronizing VMM Users: VMM search results
<?xml version="1.0" encoding="UTF-8"?>
<sdo:datagraph xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:sdo="commonj.sdo" xmlns:wim="http://www.ibm.com/websphere/wim">
  <wim:Root>
    <wim:entities xsi:type="wim:PersonAccount">
      <wim:identifier externalName="CN=myqauser,OU=Admin,o=MYCORP.COM" repositoryId="MYAD"
          uniqueId="CN=myqauser,OU=ADMIN,O=MYCORP.COM" uniqueName="CN=myqauser,OU=Admin,o=MYCORP.COM"/>
      <wim:changeType>modify</wim:changeType>
      <wim:uid>myqauser</wim:uid>
    </wim:entities>
    <wim:controls xsi:type="wim:ChangeResponseControl">
      <wim:checkPoint>
        <wim:repositoryId>MYAD</wim:repositoryId>
        <wim:repositoryCheckPoint>670244</wim:repositoryCheckPoint>
      </wim:checkPoint>
    </wim:controls>
  </wim:Root>
</sdo:datagraph>

[3/25/20 18:57:12:944 EDT] 000001b3 SystemOut     O 25 Mar 2020 18:57:12:944 [DEBUG] [MAXIMO0] [CID-CRON-102377] syncUser, syncData = uid=myqauser
[3/25/20 18:57:12:956 EDT] 000001b3 SystemOut     O 25 Mar 2020 18:57:12:956 [DEBUG] [MAXIMO0] [CID-CRON-102377] insertRecord, columnName = personid
[3/25/20 18:57:12:956 EDT] 000001b3 SystemOut     O 25 Mar 2020 18:57:12:956 [DEBUG] [MAXIMO0] [CID-CRON-102377] value, value = myqauser
[3/25/20 18:57:12:956 EDT] 000001b3 SystemOut     O 25 Mar 2020 18:57:12:956 [DEBUG] [MAXIMO0] [CID-CRON-102377] insertRecord, columnName = type
[3/25/20 18:57:12:957 EDT] 000001b3 SystemOut     O 25 Mar 2020 18:57:12:957 [DEBUG] [MAXIMO0] [CID-CRON-102377] value, value = WORK
[3/25/20 18:57:12:957 EDT] 000001b3 SystemOut     O 25 Mar 2020 18:57:12:957 [DEBUG] [MAXIMO0] [CID-CRON-102377] insertRecord, columnName = isprimary
[3/25/20 18:57:12:957 EDT] 000001b3 SystemOut     O 25 Mar 2020 18:57:12:957 [DEBUG] [MAXIMO0] [CID-CRON-102377] value, value = 1
[3/25/20 18:57:12:957 EDT] 000001b3 SystemOut     O 25 Mar 2020 18:57:12:957 [DEBUG] [MAXIMO0] [CID-CRON-102377] insertRecord, columnName = emailaddress
[3/25/20 18:57:12:958 EDT] 000001b3 SystemOut     O 25 Mar 2020 18:57:12:957 [DEBUG] [MAXIMO0] [CID-CRON-102377] value, value = null


Notice the difference in the search results this time: the changeType and uid elements are present, showing that the user was modified, and the cron task then pulls in all of the attributes and values for that entity and inserts the record.

I found this link online and tried what it suggested, but this is outdated (I'm on ICD 7.6.1.1):


That "changeType" attribute is absolutely NOT required any more - it's built into the VMMSYNC code.

The Solution

It turns out that the bind DN (user ID) we were using was apparently missing some permission or configuration on the Active Directory side. We tried three different users before we found one that had the appropriate access to actually pull the data.

What's really strange is that we could successfully use any of the three users with Apache Directory Studio to connect to Active Directory to see any and all data. But for some reason, only one of the users actually worked through VMMSYNC.

I will update if I find out from the AD team what the specific setting is, but I wanted to document this here for posterity.

Monday, March 16, 2020

Integrating IBM Control Desk or Maximo with QRadar for monitoring logins

Background

You may need to monitor ICD/Maximo logins with QRadar, and that's what this post will detail, as there are a few steps and a lack of documentation around it. In this case, I'm using ICD 7.6.1.1 and QRadar Community Edition 7.3.3 (downloaded on 3/13/2020).

All of the steps here apply to both ICD and Maximo, since there is nothing ICD-specific about them, so I'm going to refer to the product as just ICD in the remainder of this post.

Introduction

ICD doesn't log logins by default, and even once you configure it to track logins, it logs them to a database table rather than SystemOut.log. 

Tracking logins

To enable login tracking, you need to go into the Security->Security Groups application in ICD. From there, choose the Security Controls action. In the dialog shown, select "Enable Login Tracking?" and you're done. That's it. To test it, log out and back in again, then use your favorite database tool to look at the contents of the MAXIMO.LOGINTRACKING table. You should see your login.
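If you'd rather check the table from a script than from a GUI database tool, here's a quick sketch using the ibm_db Python driver (this assumes a DB2 back end; the hostname, port, and credentials are placeholders, and ATTEMPTDATE is the same column used as the compare field in the QRadar configuration below):

import ibm_db

# Connection details are placeholders; MAXDB76 is the usual DB2 database name for Maximo/ICD
conn = ibm_db.connect(
    "DATABASE=MAXDB76;HOSTNAME=maximohost;PORT=50000;PROTOCOL=TCPIP;UID=maximo;PWD=secret;", "", "")

# Show the ten most recent login tracking records
sql = "select * from MAXIMO.LOGINTRACKING order by ATTEMPTDATE desc fetch first 10 rows only"
stmt = ibm_db.exec_immediate(conn, sql)

row = ibm_db.fetch_assoc(stmt)
while row:
    print(row)
    row = ibm_db.fetch_assoc(stmt)

ibm_db.close(conn)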

Configuring QRadar for a JDBC Log Source

This part is a little tricky, but that's why I'm writing this article. You need to create a new Log Source with the following important characteristics:

Type: Universal DSM
Protocol: JDBC
Database name: MAXDB76
Table name: MAXIMO.LOGINTRACKING
Compare field: ATTEMPTDATE

Those are the tricky properties; the others, such as hostname, port, and credentials, should be fairly straightforward. After that, deploy your changes. Once that's done, log out and back into ICD, and you should see a new event arrive from this log source.


Now you can set up the normal parsing, extracting, etc. to have QRadar properly categorize your events.

Tuesday, March 3, 2020

Finding unique element types in XML with PowerShell

I recently needed to deal with a large (165MB) XML file, so the first question I had was: how many unique element types are actually in the file? I found a bunch of truly painful-sounding answers online, mainly involving the distinct-values() XPath 2.0 function. If you've ever dealt with XPath, you know that just setting up the toolchain to use it is pretty labor intensive. However, I then started looking for PowerShell cmdlets that work with XML, and I found Select-Xml:

https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.utility/select-xml?view=powershell-7

It looks nice enough from the documentation, but it turns out to have some features that are really handy. Specifically, once you read in an XML document, your variable will have a Node member, and that member contains one member for each type of element in your document! So it automatically shows you the list of distinct elements. You can then maneuver through the members to find the number of each type of element ($myvar.Node.myelementType.count) or to view one of the instances ($myvar.Node.myelementType[0]), and so on.

In my case, I found that my 165MB file (produced by the ITNM DLA) only contained about 17 unique element types, and that there were only about 2500 network devices, which I could then easily loop through to get the information I needed. Chained with ConvertTo-Csv, I was able to massage the data exactly as needed.

The moral of the story is: PowerShell is amazing right out of the box. I didn't have to add any new packages or anything, and it gave me precisely what I needed.

Thursday, February 27, 2020

JRExecAction() function behavior in Netcool Impact

Just a short post today to set the record straight on the use of the JRExecAction() function in Netcool Impact policies.

The product documentation states that the JRExecAction() function returns nothing, but sets the variables ExecError and ExecOutput corresponding to stderr and stdout from the script/command that you run. However, the function itself returns the data written to stdout in addition to setting ExecOutput and/or ExecError. So if you try the following, you'll see that the same output is logged twice:


results = JRExecAction("/bin/ls", "/", false, 1000);
log(0,"Frank results = " + results);
log(0,"ExecOutput results = " + ExecOutput); 


Incidentally, the first form above (using the return value) is really the more common usage in practice. You will find very few implementations where the ExecOutput variable is referenced.

Monday, February 17, 2020

How to reconfigure your ServiceNow MID server when your developer instance changes URLs

If your developer ServiceNow instance is hibernated, it may be assigned a new URL. This will break your MID server(s).

To fix this:

cd /servicenow/<midservername>/agent
vi config.xml

Search for the old URL and update it. If the password has changed, put the new password in the file unencrypted; on startup, it will be encoded and encrypted. Then restart the MID server:

./stop.sh
./start.sh

Go to your instance to check the status of your MID servers.

Thursday, February 13, 2020

Customizing the ServiceNow Netcool Connector

UPDATE 5/1/2020

You can update this script through the ServiceNow GUI. The navigator item to select is "Script Files" under "MID Server", and the name of the script is NetcoolConnector. Edit it in the GUI and it gets updated on all of your MID servers.

Background

The ServiceNow Netcool Connector (introduced at some point before the New York release) allows you to pull events from a Netcool ObjectServer into ServiceNow. The connector is a process that runs on a MID Server. Within the ServiceNow interface, there are only a few configuration options (userid/password, JDBC URL, how often to run, etc.). However, there are no filters to configure. That's because the connector is a straightforward Groovy script that you can edit as needed on the MID Server.

Details

The Netcool Connector script is found on the MID Server in the file .../agent/scripts/Groovy/NetcoolConnector.groovy. One of the interesting parts of the script is the actual query that's run:

query = "select top 3000 Identifier,Node,NodeAlias,AlertKey, Manager,Agent,AlertGroup,Severity,Type,Summary,Acknowledged,LastOccurrence,StateChange,SuppressEscl from alerts.status where StateChange > " + lastTimeSignature + " and Manager not like '^.*Watch\$' order by StateChange asc";


That is exactly the query that's run, and you can edit it to include custom fields, for example. To complete the customization, you also need to update the createEvent() function to actually include those custom fields in the event that's created in ServiceNow. That's also where you can apply any hard-coded transforms or other changes you need.

Wednesday, February 12, 2020

Short script to generate continuous random events to Netcool OMNIbus

So you need to generate some events to OMNIbus? It's really easy with the nco_postmsg command that's installed on your ObjectServer host:

while true; do
  nco_postmsg -user root -password n3tc00l \
    "Identifier='my_identifier_${RANDOM}'" \
    "Node='mynode${RANDOM}'" \
    "Severity=5" \
    "Manager='nco_postmsg'" \
    "Summary='An event occurred again'" \
    "ImpactFlag=1" \
    "AlertGroup='FRANK_${RANDOM}'"
  sleep 10
  echo "sending another event"
done

Update the user and password parameters, and add any fields you need. The loop simply generates an event every 10 seconds.

Tuesday, February 11, 2020

Why you should use MXOSINCIDENT instead of OSLCINCIDENT for creating and querying ICD incidents

In re-reading the documentation and looking through the system, I finally recognized why MXOSINCIDENT is a much better choice for integration than OSLCINCIDENT. The OSLCINCIDENT OSLC resource only includes 11 attributes (fields) for an incident. Those are:

modifiedBy
modified
classStructureId
title
internalPriority
created
creator
priority
status
shortId
identifier


and that's it. So when you GET an incident via the OSLCINCIDENT URL (.../maximo/oslc/oslcincident/...), those are, essentially, all of the fields that you'll see for that incident. For example, here is the data I retrieved for incident number 1003 in my system:


dcterms:creator
rdf:resource "http://was.gulfsoft.rocks:80/maximo/oslc/os/sccdoslcperson/_TUFYQURNSU4-"
oslc:shortId "1003"
dcterms:title "First incident"
dcterms:created "2019-11-27T10:43:18-05:00"
dcterms:modified "2020-02-11T11:10:06-05:00"
sccd:statusdate "2019-11-27T10:43:18-05:00"
rdf:about "http://was.gulfsoft.rocks:80/maximo/oslc/os/oslcincident/_SU5DSURFTlQvMTAwMw--"
dcterms:identifier "9"
prefixes
sccd "http://jazz.net/ns/ism/helpdesk/sccd#"
rdf "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
dcterms "http://purl.org/dc/terms/"
oslc "http://open-services.net/ns/core#"
_rowstamp "17235351"
oslc:modifiedBy
rdf:resource "http://was.gulfsoft.rocks:80/maximo/oslc/os/sccdoslcperson/_TUFYQURNSU4-"
sccd:status_description "New"
sccd:status
rdf:resource "http://was.gulfsoft.rocks:80/maximo/oslc/os/sccdoslcsyndomain/_SU5DSURFTlRTVEFUVVMvTkVXL35OVUxMfi9_TlVMTH4vfk5VTEx_L05FVw--"

However, here's the data retrieved for the SAME incident using the MXOSINCIDENT URL (.../maximo/oslc/os/mxosincident/...):


spi:ticketid "1003"
spi:status "NEW"
spi:pluspporeq false
spi:virtualenv false
spi:hasactivity false
spi:relatedtoglobal false
spi:changeby "MAXADMIN"
spi:accumulatedholdtime 0
spi:selfservsolaccess false
_rowstamp "17235351"
spi:class_description "Incident"
spi:inheritstatus false
relatedrecord_collectionref "http://was.gulfsoft.rocks:80/maximo/oslc/os/mxosincident/_SU5DSURFTlQvMTAwMw--/incidentrelrec"
multiassetlocci_collectionref "http://was.gulfsoft.rocks:80/maximo/oslc/os/mxosincident/_SU5DSURFTlQvMTAwMw--/incidentmultiassetloci"
spi:isknownerror false
spi:oncallautoassign false
spi:createdby "MAXADMIN"
spi:cinum "RBA_PAYROLL"
spi:statusdate "2019-11-27T10:43:18-05:00"
spi:affectedperson "MAXADMIN"
spi:class "INCIDENT"
spi:createwomulti "MULTI"
spi:description "First incident"
ticketspec_collectionref "http://was.gulfsoft.rocks:80/maximo/oslc/os/mxosincident/_SU5DSURFTlQvMTAwMw--/ticketspecclass"
spi:creationdate "2019-11-27T10:43:18-05:00"
spi:actlabhrs 0
spi:historyflag false
spi:sitevisit false
rdf:about "http://was.gulfsoft.rocks:80/maximo/oslc/os/mxosincident/_SU5DSURFTlQvMTAwMw--"
spi:createwomulti_description "Create Multi Records"
prefixes
rdf "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
spi "http://jazz.net/ns/ism/asset/smarter_physical_infrastructure#"
oslc "http://open-services.net/ns/core#"
spi:outageduration 0
spi:isglobal false
spi:multiassetlocci
0
localref "http://was.gulfsoft.rocks:80/maximo/oslc/os/mxosincident/_SU5DSURFTlQvMTAwMw--/incidentmultiassetloci/0-5"
spi:recordclass "INCIDENT"
spi:progress false
rdf:about "http://childkey#SU5DSURFTlQvTVVMVElBU1NFVExPQ0NJLzU-"
spi:langcode "EN"
spi:multiid 5
spi:isprimary true
spi:performmoveto false
prefixes
rdf "http://www.w3.org/1999/02/22-rdf-syntax-ns#"
spi "http://jazz.net/ns/ism/asset/smarter_physical_infrastructure#"
oslc "http://open-services.net/ns/core#"
_rowstamp "17235352"
spi:hasld false
spi:pmchgassesment false
spi:recordkey "1003"
spi:cinum "RBA_PAYROLL"
spi:externalsystem_description "EVENT MANAGEMENT"
spi:pmscinvalid false
spi:reportdate "2019-11-27T10:43:18-05:00"
spi:hassolution false
spi:actlabcost 0
spi:changedate "2020-02-11T11:10:06-05:00"
spi:status_description "New"
spi:externalsystem "EVENTMANAGEMENT"
spi:affecteddate "2019-11-27T10:43:18-05:00"
spi:template false
spi:ticketuid 9
spi:reportedby "MAXADMIN"

To me, MXOSINCIDENT provides much better data.
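For completeness, here's roughly how you might pull that same incident from a script. This is a rough sketch using the Python requests library; it assumes your instance allows native authentication via the maxauth header and supports the standard oslc.where/oslc.select query parameters, and the credentials are placeholders:

import base64
import requests

# Hostname comes from the URLs above; the credentials are placeholders
host = "http://was.gulfsoft.rocks:80"
creds = base64.b64encode(b"maxadmin:maxadmin").decode()

resp = requests.get(
    host + "/maximo/oslc/os/mxosincident",
    params={"oslc.where": 'spi:ticketid="1003"', "oslc.select": "*"},
    headers={"maxauth": creds, "Accept": "application/json"},
)
resp.raise_for_status()

# Print the raw JSON response for the matching incident
print(resp.json())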

Monday, February 3, 2020

When you Reset Windows, Microsoft saves a list of all of the applications removed

The Windows 10 1909 (November 2019) update didn't like one of my laptops at all. After trying multiple times to upgrade to it, I finally gave up and decided to take the route of choosing "Reset Windows" and keeping all of my data. That worked on the first try, so I was already happy about that. Then I noticed an HTML file on my desktop named "Removed Apps". I opened it, and much to my surprise, it listed ALL of the apps that were deleted by the reset. Happily, I still had all of them in my Downloads folder, so I can easily re-install them. This was a happy surprise, since I was certain that the reinstall part (and figuring out everything that needed to be installed) would be the worst part of the process.

Monday, January 27, 2020

Automating processes in a small company using free utilities

Introduction

I want to share some of the automation that I've written for my company to help with some of the back office processing. The reason that I wrote this automation myself is that I looked at numerous offerings on the market, and none fit exactly what we needed. Once I made that realization, I came up with automation using tools we already own or that are free to use, specifically:

Google Sheets, including Google Script
Windows Batch scripting
Perl scripting
Bookmarklets in JavaScript
Quickbooks IIF files

It looks like a lot (and it may very well be a lot), but for my background, this path required no more work than learning an off-the-shelf product.

The Payroll Process

Not everything is automated. We do have automated time sheets, but there are several pieces of data that need to be manually combined and calculated. Where we started with the automation was the spreadsheet in which this data was being captured. The data was already in a straightforward format, so it just needed to be cleaned up a little: some of the entries just needed specific keywords, and we moved the spreadsheet into Google Sheets. We made the decision to move from Excel so that we could always access the spreadsheet, and also because I personally enjoy programming in JavaScript (Google Script) more than in Visual Basic (MS Office scripting).

The process starts with my CFO filling out the Google Sheet using data from the few different sources that we have. She then presses the "Create IIF" menu item (created with a Google Script implementation of the onOpen() function). That creates both an IIF file for QuickBooks (which actually needs a little more processing - more on that later) and a file containing the JavaScript for a bookmarklet that is used later in the process. It also sends an email out to each employee.

The IIF file has to then be downloaded into a specific directory which contains a windows .BAT file, a shell script and a Perl script. The .BAT file is simply used to call the shell script, which calls the Perl script to actually parse the file. That Perl script mainly just adds TAB characters in the correct places to match the correct IIF format. I wasn't able to make the TAB characters actually stay in the downloaded file via Google Script, so this is the workaround I came up with.

The contents of the JavaScript file are the implementation of the "SetACH" bookmarklet that has already been defined in her browser. That bookmarklet is used to fill in the appropriate fields on our bank's ACH web page. To do this, my CFO logs into our bank and navigates to the appropriate page, then clicks the SetACH bookmarklet, which fills in all of the appropriate values.

Better Than a Packaged Product?

In a word, Yes. Every COTS (commercial off-the-shelf) product requires some amount of configuration, which, in my experience, would have taken more time than creating the custom solution I have. Additionally, all of the COTS products I investigated included tons of capabilities that we simply don't need or that don't specifically work for us. As you can see, we only needed a tiny bit of functionality, and I didn't want to pay for a bevy of additional features that we wouldn't use. Specifically, I used the same evaluation process that I help customers with every day: I identified the specific functions I needed to perform and the reports I needed to generate, and evaluated the choices based on those. I also evaluated the existing tools and my development expertise with each to determine that I was capable of producing a working solution. We've now had this automation in place for three years, and it has required minimal upkeep.

Monday, January 13, 2020

Using jQuery in Cognos 10.x and DASH

For most of what you need, IBM has a great article on how to incorporate jQuery into your Cognos reports here:

https://www.ibm.com/developerworks/library/ba-pp-reporting-advanced-report-design-page647/index.html

However, there are some painful issues which vary based on whether you access reports through DASH/JazzSM or go directly to Cognos Reporting. Specifically, it appears that DASH adds some additional capabilities to the environment, possibly through the use of Dojo. Or it's possible that these capabilities are stripped off when logging directly into Cognos. Whatever the case, here are the caveats:

DASH/JazzSM adds some additional container elements to the HTML report such that you cannot use the browser "Print" function to print a multi-page report. All you get is the first page of the report, and this happens on all browsers. To get around this, I opened a new window and set the contents to be only the container enclosing the report. Then THAT window can be printed via the browser.

Logging directly into Cognos "removes" several methods that should exist in IE 11. For example, it undefines the Object.keys() method. So if any JavaScript you're using references that method, you'll need to use a workaround like this one:

https://stackoverflow.com/questions/18912932/object-keys-not-working-in-internet-explorer

Additionally, when logging directly into Cognos, the HTMLCanvasElement.getContext() method is not defined. Personally, I got around this by commenting out any references to that method, and that worked in my case. Unfortunately, I couldn't find any "good" solution to this problem, so this is all I've got.

Happy coding!