Tuesday, December 3, 2019

Adding an Unauthenticated JSP to IBM Control Desk


We have a customer who needed to allow unauthenticated users to open tickets within IBM Control Desk, and we only had access to IBM HTTP Server and our Maximo/ICD WebSphere server to make that happen.

Security Risks

Essentially, anyone with the link can get to the pages described in this article. They could even write a script to create a huge number of tickets, taking down your ICD installation. So in any pages you create using this article, you need to do *something* to guard against that behavior. Exactly what you do depends on your specific situation. 

Where to Create the Unprotected JSP

You just need to create the .jsp file under:

For example, put the following in the file HelloWorld.jsp in the above directory:

  <HTML><HEAD><TITLE>Hello World</TITLE></HEAD>
  <BODY>
  <H1>Hello World</H1>
  Today is: <%= new java.util.Date().toString() %>
  </BODY></HTML>

Then access it with the url:

And this is what you'll see, with no login required:


That's it. Now you just need to write the JSP code to do what you need.


A JSP file added in this way has to be re-deployed every time you deploy the application. These instructions do NOT cover adding the JSP file to the EAR build process; you need to manually copy the JSP file(s) to the appropriate location on each Maximo UI JVM.

Monday, December 2, 2019

Creating incident ticket in IBM Control Desk using the new REST API


Maximo introduced a new REST API that can be accessed via .../maximo/oslc . Here's a link to the documentation on it:


This is an all-JSON API that makes things a ton easier than they were with the older (and deprecated) XML-based REST API.

The Problem

However, that documentation is aimed at Maximo Enterprise Asset Management users rather than IBM Control Desk users, which means there are no examples of creating incidents or service requests.

Why You're Here

You want an example of creating an INCIDENT in ICD, and that's what I'll provide. I'm using ICD on WebSphere, DB2 and IBM HTTP Server, along with the sample data, which is why I have a classification ID and hierarchy structure to use in the example below.

Basically, the best way I've found is to use the MXOSINCIDENT object structure because it already has a bunch of relationships (including one to TICKETSPEC, so you can add specifications when creating an incident). Here are the details:

Additional header:

properties: *

Request body:

{
    "reportedby": "MXINTADM",
    "description": "second MXINCIDENT OS API",
    "externalsystem": "EVENTMANAGEMENT",
    "classstructureid": "21010405",
    "ticketspec": [{"assetattrid": "computersystem_serialnumber","alnvalue": "99999"}]
}


Response body:

{
    "affecteddate": "2019-11-29T15:02:00-05:00",
    "template": false,
    "creationdate": "2019-11-29T15:02:00-05:00",
    "hierarchypath": "21 \\ 2101 \\ 210104 \\ 21010405",
    "historyflag": false,
    "actlabcost": 0.0,
    "createwomulti_description": "Create Multi Records",
    "selfservsolaccess": false,
    "outageduration": 0.0,
    "ticketuid": 46,
    "inheritstatus": false,
    "reportdate": "2019-11-29T15:02:00-05:00",
    "class_description": "Incident",
    "description": "second MXINCIDENT OS API",
    "reportedby": "MXINTADM",
    "classificationid": "21010405",
    "sitevisit": false,
    "_rowstamp": "10009026",
    "accumulatedholdtime": 0.0,
    "createdby": "MXINTADM",
    "isknownerror": false,
    "affectedperson": "MXINTADM",
    "class": "INCIDENT",
    "ticketid": "1040",
    "ticketspec": [{
        "classstructureid": "21010405",
        "changeby": "MXINTADM",
        "changedate": "2019-11-29T15:02:00-05:00",
        "alnvalue": "99999",
        "mandatory": false,
        "refobjectname": "INCIDENT",
        "ticketspecid": 7,
        "assetattrid": "COMPUTERSYSTEM_SERIALNUMBER",
        "_rowstamp": "10009029",
        "refobjectid": 46,
        "displaysequence": 1
    }],
    "status_description": "New",
    "externalsystem_description": "EVENT MANAGEMENT",
    "classstructureid": "21010405",
    "changeby": "MXINTADM",
    "changedate": "2019-11-29T15:02:00-05:00",
    "externalsystem": "EVENTMANAGEMENT",
    "actlabhrs": 0.0,
    "relatedtoglobal": false,
    "hasactivity": false,
    "statusdate": "2019-11-29T15:02:00-05:00",
    "createwomulti": "MULTI",
    "hassolution": false,
    "virtualenv": false,
    "pluspporeq": false,
    "isglobal": false,
    "oncallautoassign": false,
    "pmscinvalid": false,
    "status": "NEW"
}


To successfully do the above, you need to configure Object Structure security: https://www.ibm.com/support/pages/using-object-structure-security-limit-access-security-groups . The user also MUST have a Default Insert Site, which my MXINTADM user apparently does. MAXADMIN on my system DOES NOT, so the request fails if I use that user.

I'm using Postman for testing, which I highly recommend: https://www.getpostman.com/downloads/

Of course you'll use some specific language (or curl) when you're doing this in production, but for testing, you want to use Postman.
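When you do move to a language, the call itself is simple. Here's a minimal Python sketch of the request above; the hostname and password are placeholders, and I'm assuming Maximo's native "maxauth" basic-auth header, so adjust the authentication to whatever your environment uses:

```python
import base64
import json
import urllib.request

# Placeholder host and credentials -- substitute your own
BASE_URL = "https://icd.example.com/maximo/oslc"
USER, PASSWORD = "MXINTADM", "changeme"

# Maximo's native auth header is base64("user:password"); the
# "properties: *" header asks Maximo to return all properties
# of the created incident in the response
HEADERS = {
    "maxauth": base64.b64encode(f"{USER}:{PASSWORD}".encode()).decode(),
    "Content-Type": "application/json",
    "properties": "*",
}

# The same request body shown earlier
payload = {
    "reportedby": "MXINTADM",
    "description": "second MXINCIDENT OS API",
    "externalsystem": "EVENTMANAGEMENT",
    "classstructureid": "21010405",
    "ticketspec": [{"assetattrid": "computersystem_serialnumber", "alnvalue": "99999"}],
}

def create_incident():
    # POST to the MXOSINCIDENT object structure
    req = urllib.request.Request(
        BASE_URL + "/os/mxosincident",
        data=json.dumps(payload).encode(),
        headers=HEADERS,
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

The returned JSON should look like the response body shown earlier, including the generated ticketid.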

A helpful link

In addition to the API documentation, this link was very helpful to me:

Tuesday, November 12, 2019

Special characters in passwords initially configuring IBM Control Desk

Several of the screens displayed by ConfigUI tell you that some passwords don't allow special characters. The rule of thumb I found the REALLY hard way is:

ONLY use underscore as a special character in any password initially.

We ran into problems mainly with the dollar sign, but other characters will certainly bite you, too. We even ran into the problem with the root user's password. Any password can be changed after the initial deployment, so save yourself some headache and make them very simple initially. The painful part is that some of the errors are effectively silent, such as "Unable to access host with these credentials" with no further detail. So just take the advice above and you'll be much happier.

Monday, November 4, 2019

Moving to the cloud. Pick any two: Cheap, Fast, or Easy

The Cloud offers literally all of your current IT services, plus tons more, some of which most of your IT department has never heard of before. You can quickly and easily move all your existing workloads there, but you'll pay dearly, as many companies are finding out. You can take the time to train your IT staff and meticulously plan for the most efficient way to the cloud, but that's not quick. 

Moving to the cloud correctly truly requires rethinking how you do everything in IT. In all cases, the best route is to only move a subset of workloads or capabilities to the cloud, and different clouds may be better for different workloads. Some things are easier and cheaper to run on-prem. For example, in many cases it can be cost effective to arm your IT and development staff with laptops with 64GB of RAM. Doing so allows each one to run their own private multi-cloud in which they can test away. A brand new laptop with a warranty, 64GB of RAM, and 6 cores (12 threads) can be found for under $1700 on eBay and has a useful life of 4 years. A comparable VM in the cloud (AWS EC2 r5.2xlarge) costs $0.20 per hour, which is about $5,300 for a three-year term, and doesn't allow the flexibility of a local system running VMware Workstation. That's a very specific example, but it illustrates why each and every workload needs to be analyzed or audited before simply moving it to the cloud.

 Some workloads are more suited to specific public clouds. WebSphere applications are a big example. If you want to “lift and shift” these workloads to the cloud, the IBM Cloud should be your first choice. If you have apps that run under Sharepoint, you should absolutely run those applications on Microsoft’s Azure cloud. There are many other workloads that may run equally well on any cloud, and for those, the analysis needs to take other factors into account.

 The point I want to get across is that moving to the cloud requires analysis by a qualified team of experts. I think the best approach is to hire one or more experts and simultaneously train your own team to help them get up to speed. The combination of those two is extremely important, because you don’t want newly-trained people responsible for your entire cloud migration. You want an expert who can guide the team, allowing them to take over responsibilities over time.

Friday, September 13, 2019

If you're scripting on Windows, use PowerShell

My last post on PowerShell was in 2008, so I thought I would write an update. If you're writing scripts on Windows, you should probably be using PowerShell. It seems to have 99 to 100% of the tools (especially parsers) that I ever need. I just recently needed to scrape a web page for some data, so I thought I would spend some time messing with PowerShell to get it going. Well, it only took about 30 minutes to develop the entire script that I needed, with absolutely no external dependencies.

Here's the whole script:

# get the web page. Yes, PowerShell has a 'curl' command/alias (for Invoke-WebRequest)
$resp = curl -UseDefaultCredentials http://myhostname/mypagename

# Get all of the rows of the table with an ID of "serverTable"
$rows = $resp.ParsedHtml.getElementById("serverTable").getElementsByTagName("TR")

# Loop through the rows, skipping the header row:
for ($i=1; $i -lt $rows.length; $i++) {

  # get the hostname of this row
  $thehost = $rows[$i].getElementsByTagName("TD")[2]

  # get the date this host was last rebooted (column index 3 here; adjust for your table)
  $rebootDateString = $rows[$i].getElementsByTagName("TD")[3]

  # if the host was rebooted over 20 days ago, print that date
  if ([datetime]::Now.AddDays(-20) -gt [datetime]::ParseExact($rebootDateString.innerText, "G", $null)) {
    $rebootDateString.innerText
  }
}

That's it, with no external references and nothing extra to install. It's got date parsing, date arithmetic, HTML parsing and HTTP request capabilities all built in. I realize that this then isn't portable... or is it? There's actually a PowerShell port for Linux available, with instructions here from Microsoft:

I know that Python is a hot language these days, but I don't like it as much as PowerShell. I tried to do the above with Python, and it took quite a bit longer, even though I've used Python more than PowerShell. You have to import some classes and then use XPath to find elements in the HTML. PowerShell was just straightforward and easy, at least for me, with my background and expectations. YMMV, but I like PowerShell.

Thursday, September 12, 2019

How to view an LtpaToken2 token

Leave a Comment

If you find this article useful, please let us know in the comments.

The Article

If you use any WebSphere-based applications (DASH, Impact, BPM, ODM, etc.), you're using LTPA Tokens. An LTPA Token is a browser cookie named LtpaToken2. You can see it if you turn on developer tools in your browser (F12) after you log into one of these applications. You'll see it in the "COOKIES" request header. The value of that header will look something like this:

s_vi=[CS]v1|2E7C0CDA8507BB19-4000010FA0013DFE[CE]; s_cc=true; s_sq=%5B%5BB%5D%5D; JSESSIONID=00004FX-3-uu2ZoYHx1t9p8fJIb:52e767f1-e67e-4220-8435-fa54d8776107; CSRFCookie=C9495874E4D5BA23D8E1330E4F76EA5C9495874E4D5BA23D8E1330E4F76EA5; LtpaToken2=i/InlYuq2tm3rPdd/3BEzA8m9BCc8WGNR3q6eu7OfeQ7s1ICiMvPv0QCNQar5cCQlyVH5GE0N0VNbJj1Z6sUGe2S3nb1kwwbzdzPWzCbNPPtN3uiPWnfLyXzi5T4p2Pz/URwCfP6zWW2NOob/yQoG5vYg/JAgJag9CWP5tqd9+6FgInahSj3VaYYvu69O4hY+h6e6D+v7mpLTYBRM33TlVugTxOkx64JTMAdwFAfH553Ob2T+sW4aqyiGc7arLodIMlWjiVbkBBEgYZ0PXMyCPKb7JPa+5lFxfMRBK0P1kMsC34OXnQ1jUaedx44U4I5

Notice that each cookie=value pair is separated by a semicolon (;). The LtpaToken2 cookie is the last one in the string above. That cookie contains the "principal" (user) name, the "realm" (a named security policy domain), and the expiration date of the token. This token is used by a WebSphere server (whether it's WebSphere ND, WebSphere Liberty, or any other flavor of WebSphere) to make authorization decisions to determine which resources the user has access to. It's also used for authentication across WebSphere servers if the two servers share the same LTPA keys. This sharing of keys is how one server "trusts" an LTPA token created by another server.

For troubleshooting purposes, sometimes you want/need to see what's inside an LTPA token. I found the best article on the web that provides the code to allow you to do exactly that. The page is found here: http://tech.srij.it/2012/04/how-to-decrypt-ltpa-cookie-and-view.html , but it leaves out some basic steps that I'm including here. I'm also copying the Java code at the bottom of this blog post just in case the linked article disappears.

Once you've copied the code into a file named DecryptLTPA.java (the file MUST have this name to match the name of the class defined in the file), you then need to specify several values in the file. Specifically, you need to provide values for the following variables:

ltpaKey : the com.ibm.websphere.ltpa.3DESKey value from the ltpa.keys file. This file already exists if you're using WebSphere Liberty. If you're using "full" WebSphere, you need to create this file (with whatever name you want) by exporting the LTPA keys from the WebSphere Administration Console. In the ltpa.keys file, this value will end with the characters "\=" (backslash equals). When you paste the value into DecryptLTPA.java, remove that backslash.

ltpaPassword : WebAS  - This is the default password on WebSphere Liberty. If you're using "full" WebSphere, the password will be whatever you specify when you export the LTPA keys using the WebSphere Administration Console.

tokenCipher : the entire value of the ltpatoken2 header from the COOKIES header in the Request.

Once you've done the above, the beginning of the definition of the main() function in the DecryptLTPA.java file should look something like:

 public static void main(String[] args) {
  String ltpaKey = "ADbpPpqPf3bnkj0b34sNaNC2FYHYygub3/cGjIn+mR4=";
  String ltpaPassword = "WebAS";
  String tokenCipher = "i/InlYuq2tm3rPdd/3BEzA8m9BCc8WGNR3q6eu7OfeQ7s1ICiMvPv0QCNQar5cCQlyVH5GE0N0VNbJj1Z6sUGe2S3nb1kwwbzdzPWzCbNPPtN3uiPWnfLyXzi5T4p2Pz/URwCfP6zWW2NOob/yQoG5vYg/JAgJag9CWP5tqd9+6FgInahSj3VaYYvu69O4hY+h6e6D+v7mpLTYBRM33TlVugTxOkx64JTMAdwFAfH553Ob2T+sW4aqyiGc7arLodIMlWjiVbkBBEgYZ0PXMyCPKb7JPa+5lFxfMRBK0P1kMsC34OXnQ1jUaedx44U4I5";

You then need to compile the java file with a command similar to the following:

 javac -cp /opt/IBM/tivoli/impact/wlp/usr/servers/ImpactUI/apps/blaze.war/WEB-INF/lib/commons-codec-1.10.jar DecryptLTPA.java

That command points to the commons-codec-1.10.jar file, which contains the definition of the Base64 class referenced by the code. It was run on a Netcool Impact server, which uses WebSphere Liberty. If you were to compile the code on a DASH server, the command would look like this:

javac -cp "/opt/IBM/tivoli/JazzSM/profile/installedApps/JazzSMNode01Cell/IBM Cognos.ear/p2pd.war/WEB-INF/lib/commons-codec-1.3.jar" DecryptLTPA.java

In both cases, my path includes the Java 1.7 SDK binaries and my JAVA_HOME is set to the Java 1.7 SDK directory.

Once you have the file compiled, you can actually run it, which requires a similar command:

 java -cp /opt/IBM/tivoli/impact/wlp/usr/servers/ImpactUI/apps/blaze.war/WEB-INF/lib/commons-codec-1.10.jar:. DecryptLTPA

Notice that you must include "." (present working directory) in the classpath (-cp flag) in addition to the commons-codec jar file.

Once that runs, you should see output similar to:

Full token string:[expire:1568250479184$u:user\:customRealm/impactadmin%1568250479184%zoQ7cb1BSWekvxdd3slUpxmCRlBCBmMO1nu8iztv73PKQN3MIybuCx/C9EdKGwoeoguJHKrj0BOOAeXxVLDgIQL5Jz2Tg6LQcIyTpAtRAVMsqWTPFzDBrs85Zxs9kP0zFEiOvEsDUmRXXm92dN6zxWooEyGz453x1VPmqGoZ0ww=]
Token is for:[expire:1568250479184$u:user\:customRealm/impactadmin]
Token expires at:[2019-09-11-21:07:59 EDT]
Token signature:[zoQ7cb1BSWekvxdd3slUpxmCRlBCBmMO1nu8iztv73PKQN3MIybuCx/C9EdKGwoeoguJHKrj0BOOAeXxVLDgIQL5Jz2Tg6LQcIyTpAtRAVMsqWTPFzDBrs85Zxs9kP0zFEiOvEsDUmRXXm92dN6zxWooEyGz453x1VPmqGoZ0ww=]

As you can see, the user (principal) ID and expiration timestamps are easily visible, which was my goal from the beginning.

Full Java code:

import java.security.MessageDigest;
import java.text.SimpleDateFormat;
import java.util.Arrays;
import java.util.Date;
import java.util.StringTokenizer;

import javax.crypto.Cipher;
import javax.crypto.SecretKey;
import javax.crypto.SecretKeyFactory;
import javax.crypto.spec.DESedeKeySpec;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

import org.apache.commons.codec.binary.Base64;

public class DecryptLTPA {
 private static final String AES = "AES/CBC/PKCS5Padding";
 private static final String DES = "DESede/ECB/PKCS5Padding";

 public static void main(String[] args) {
  String ltpaKey = "<DES key from ltpa token>";
  String ltpaPassword = "<password used to export ltpa token>";
  String tokenCipher = "<the header text to decrypt>";

  try {
   Base64 b = new Base64();
   byte[] secretKey = null;
   MessageDigest md = MessageDigest.getInstance("SHA");
   // the 3DES key is derived from a SHA-1 hash of the export password, padded to 24 bytes
   md.update(ltpaPassword.getBytes());
   byte[] hash3DES = new byte[24];
   System.arraycopy(md.digest(), 0, hash3DES, 0, 20);
   Arrays.fill(hash3DES, 20, 24, (byte) 0);
   secretKey = decrypt(b.decode(ltpaKey), hash3DES, DES);
   byte[] ltpaByteArray = b.decode(tokenCipher);

   String userInfo, expires, signature, ltpaPlaintext;
   try {
    ltpaPlaintext = new String(decrypt(ltpaByteArray, secretKey, DES));
   } catch (Exception e) {
    // newer tokens are AES-encrypted, so fall back to AES
    ltpaPlaintext = new String(decrypt(ltpaByteArray, secretKey, AES));
   }

   StringTokenizer st = new StringTokenizer(ltpaPlaintext, "%");
   userInfo = st.nextToken();
   expires = st.nextToken();
   signature = st.nextToken();
   System.err.println("Full token string:[" + ltpaPlaintext + "]");
   Date d = new Date(Long.parseLong(expires));
   SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd-HH:mm:ss z");
   System.err.println("Token is for:[" + userInfo + "]");
   System.err.println("Token expires at:[" + sdf.format(d) + "]");
   System.err.println("Token signature:[" + signature + "]");

  } catch (Exception e) {
   e.printStackTrace();
  }
 }

 public static byte[] decrypt(byte[] ciphertext, byte[] key, String algorithm) throws Exception {
  SecretKey sKey = null;

  if (algorithm.indexOf("AES") != -1) {
   sKey = new SecretKeySpec(key, 0, 16, "AES");
  } else {
   DESedeKeySpec kSpec = new DESedeKeySpec(key);
   SecretKeyFactory kFact = SecretKeyFactory.getInstance("DESede");
   sKey = kFact.generateSecret(kSpec);
  }
  Cipher cipher = Cipher.getInstance(algorithm);

  if (algorithm.indexOf("ECB") == -1) {
   if (algorithm.indexOf("AES") != -1) {
    IvParameterSpec ivs16 = generateIvParameterSpec(key, 16);
    cipher.init(Cipher.DECRYPT_MODE, sKey, ivs16);
   } else {
    IvParameterSpec ivs8 = generateIvParameterSpec(key, 8);
    cipher.init(Cipher.DECRYPT_MODE, sKey, ivs8);
   }
  } else {
   cipher.init(Cipher.DECRYPT_MODE, sKey);
  }
  return cipher.doFinal(ciphertext);
 }

 private static IvParameterSpec generateIvParameterSpec(byte key[], int size) {
  byte[] row = new byte[size];

  for (int i = 0; i < size; i++) {
   row[i] = key[i];
  }

  return new IvParameterSpec(row);
 }
}
Tuesday, August 13, 2019

The best easy and free video editing software available is Shotcut

I'd never edited a video before, but I needed to yesterday. I found Shotcut, and I couldn't be happier about how easy it is to use. It's free, easy AND it's a portable app, so you don't have to install anything. They've got a very active forum and tons of how-to videos available. If you need to do some video editing (including multiple tracks, audio overlay, and all the high-end options), give it a try.

Monday, July 15, 2019

JavaScript regular expression trick

Working on a Netcool Impact implementation recently I ran across a feature of JavaScript regular expressions that really impressed me. I'll compare it to a somewhat similar feature/syntax in Perl.

If you need to test a string for a regular expression in Perl, you can do the following:

if ($mystring =~ /my_regular_expression/) ...

That will return true if $mystring contains the specified regular expression.

In JavaScript, you can invoke the test() method directly on the regular expression (including the leading and trailing "/") with one parameter, which is the string to test. Here's what the equivalent of the above looks like in JavaScript:

if (/my_regular_expression/.test(mystring)) ...

And to test if it doesn't match, the syntax is:

if (!/my_regular_expression/.test(mystring)) ...

That's it. I just thought it was pretty neat.

Wednesday, July 3, 2019

APM historical data without a TEMS

IBM APM only directly supports the storage of up to 32 days of data. You've always had the option to store older data in the ITM 6 data warehouse, but setting up an entire ITM 6 infrastructure has always seemed like a whole lot of work. As it turns out, you don't have to set up the entire infrastructure! You only have to set up a Warehouse Proxy Agent, a Summarization and Pruning Agent, and the warehouse database. No TEMS or TEPS is needed, as the WPA and SPA can work in autonomous mode.

Madhavan Vyk recently posted a great article on DeveloperWorks detailing exactly how to configure this:


Monday, May 6, 2019

IBM Netcool Agile Service Manager 1.1 is available with new observers

I've been busy, so I only just now saw that ASM 1.1 was released in April with a bunch of new observers. Observers are used to "observe" individual applications to provide additional data to the topology view(s) in ASM. Here's the download document:


Below is the list of observers from that link. Some that I believe are new are ITNM, BigFix, ServiceNow, New Relic and DynaTrace.

IBM Netcool Agile Service Manager v1.1 Docker Observer
IBM Netcool Agile Service Manager v1.1 Event Observer
IBM Netcool Agile Service Manager v1.1 ITNM Observer
IBM Netcool Agile Service Manager v1.1 OpenStack Observer
IBM Netcool Agile Service Manager v1.1 File Observer
IBM Netcool Agile Service Manager v1.1 ALM Observer
IBM Netcool Agile Service Manager v1.1 REST Observer
IBM Netcool Agile Service Manager v1.1 TADDM Observer
IBM Netcool Agile Service Manager v1.1 VMware NSX Observer
IBM Netcool Agile Service Manager v1.1 VMware vCenter Observer
IBM Netcool Agile Service Manager v1.1 DNS Observer
IBM Netcool Agile Service Manager v1.1 Cisco ACI Observer
IBM Netcool Agile Service Manager v1.1 Kubernetes Observer
IBM Netcool Agile Service Manager v1.1 IBM Cloud Observer
IBM Netcool Agile Service Manager v1.1 Juniper Contrail Observer
IBM Netcool Agile Service Manager v1.1 ServiceNow Observer
IBM Netcool Agile Service Manager v1.1 New Relic Observer
IBM Netcool Agile Service Manager v1.1 BigFix Inventory Observer
IBM Netcool Agile Service Manager v1.1 Dynatrace Observer

Wednesday, March 13, 2019

A Great Application Dependency Discovery and Mapping Tool

I recently ran across www.device42.com and am blown away by the price and capabilities of their offerings. One of the most difficult challenges in IT is to get application dependency maps in your infrastructure. The biggest hurdle is access to the different systems. Device42 helps ease this problem by providing standalone discovery executables that can be given to system administrators to run themselves. This means that credentials don't NEED to be stored centrally (this is the root of the access problem). Each administrator can simply upload the results of the discovery process.

The tool also lets you drag-and-drop servers into rack configurations, so you can get a real-world visualization of your datacenter.

All of the above is standard in the software from the leaders in this space (BMC ADDM, ServiceNow Discovery, IBM TADDM), but Device42's price just blows the others away. Their pricing page is here: https://www.device42.com/pricing/ . If you've done any research in the space, you know that the pricing gives you a terrific amount of value.

* Gulfsoft has no relationship with Device42. This opinion is being provided simply because we're so amazed by this application.

Tuesday, March 12, 2019

How does an SQL injection attack occur and how can developers guard against it?

I ran across this great YouTube video that shows exactly how to perform an SQL injection attack:


I like the way the author steps through all of the gory details of the attack, including his assumptions, thought processes, etc. It's simply a great tutorial on how a hacker would go about formulating this type of attack. What it doesn't explicitly cover is the list of specific mitigation techniques that can be employed to stop this kind of attack, but you can find that information easily with Google:


There you'll find tons of language-specific solutions to the problem.
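The canonical mitigation across all of those languages is parameterized queries. Here's a small self-contained Python sketch (using sqlite3 and a made-up users table) that shows both the attack and the fix:

```python
import sqlite3

# In-memory database with a made-up users table for demonstration
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login_unsafe(name, password):
    # VULNERABLE: attacker input is concatenated straight into the SQL text
    query = "SELECT * FROM users WHERE name = '%s' AND password = '%s'" % (name, password)
    return conn.execute(query).fetchall()

def login_safe(name, password):
    # SAFE: placeholders keep user input as data, never as SQL syntax
    query = "SELECT * FROM users WHERE name = ? AND password = ?"
    return conn.execute(query, (name, password)).fetchall()

# Classic injection payload: turns the WHERE clause into a tautology
payload = "' OR '1'='1"
print(len(login_unsafe("alice", payload)))  # 1 row -- authentication bypassed
print(len(login_safe("alice", payload)))    # 0 rows -- payload treated as a literal
```

The vulnerable version lets the payload rewrite the query's logic; the parameterized version hands the same string to the database as an opaque value, so the comparison simply fails.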

Thursday, March 7, 2019

Full service cloud providers deliver better, faster, cheaper services (and more of them) than your IT department

Isn't The Cloud just "running my stuff on someone else's hardware"?

In a word, NO! If you're in IT at any level, from individual contributor all the way up to CIO, you need to take the time to really look in-depth at the cloud offerings available today. If you aren't blown away by what's available, then you need to spend more time looking at the capabilities on offer. That's not meant as an insult - it's my honest opinion as a deeply technical consultant who has been in IT for  a very long time. Personally, I would recommend that you look at AWS because they're the leader and have been since Cloud became a thing. They simply have more resources available (documents, videos, tutorials, use cases, case studies, third party tools, etc.) to really show you what they offer, and it's absolutely incredible.

OK, I'm impressed, but my IT department can provide everything we need

You may be getting everything you currently think you need, but I promise you there are more capabilities out there that you just haven't thought of yet because of limitations that exist in your current environment. For example, what reply would you get from this request:

I would like to see a topology graph of all of the server, network, database and security resources associated with Application X.

I've worked with thousands of companies of all sizes, and I've only seen this question answered a couple of times, and only for very well-known, small applications. The tools to answer it are out there, and many companies own several of them, but there are technical and political obstacles that prohibit this information from being displayed on a single pane of glass. However, with AWS, there are several third party vendors that offer products that can provide this information within minutes. Specifically, all of the components are registered centrally within AWS, so their metadata can be retrieved with the AWS API. These third party tools pull the data and display it on a graph to make it easier to consume (with filtering so you can include/exclude the appropriate components based on name or tag). 

This central repository of configuration information is, basically, a built-in CMDB. There are companies that have been working for years and years to eventually have a partial CMDB, and the big cloud providers offer it from day one. And in AWS, this central repository is audited BY DEFAULT. That means you can see exactly who changed exactly what and when. That's incredible.

I still don't see what's so great about The Cloud

That means you still haven't spent enough time trying to understand what's available. What I would recommend is that you go through one of the AWS workshops available on Github. Specifically, go through the Website workshop available here:

It will step you through the creation of a serverless web application, including a user self-registration component. This is something that's normally a HUGE obstacle in enterprise application creation, and AWS offers it directly via their Cognito service. And, as I mentioned, everything is defined centrally, so you can see what your applications look like.

Now I'm impressed, but it looks pretty difficult to set up

It is definitely complicated, but it can be done. You do need to define policies around things like naming, tags and usage, and you need to restrict who can perform which actions, and there are a multitude of other policies that must be defined for your specific enterprise. The good news is that there are several education and certification tracks available to get people certified as AWS architects. Additionally, there are lots of AWS architects available for hire. It's like any new initiative - it just needs to be approached incrementally.

How to get started with serverless apps on AWS

Amazon has workshops on Github!

This is really neat stuff. Amazon has a collection of several serverless workshops available on Github here:

These workshops take you step by step through the entire process of creating several different types of serverless applications. Hopefully this will help demystify serverless for you.

Wednesday, March 6, 2019

NSA open sourced a powerful software reverse engineering tool, Ghidra

WIRED: The NSA Makes Ghidra, a Powerful Cybersecurity Tool, Open Source. https://www.wired.com/story/nsa-ghidra-open-source-tool

Tuesday, March 5, 2019

How do you start on the path of Digital Transformation?

What is Digital Transformation?

Most of the definitions I've found are grandiose, vague and elusive at best. From an implementation perspective, the definition is, IMO, very simple:

Find better ways to use available technology.

I realize that's still pretty vague, but I have some concrete details and examples to show you how you can start addressing this challenge.

How do I start?

The best way we've found to help companies start down the path of Digital Transformation is to create a list of questions that need to be answered. Specifically, we've expanded the definition above to:

Find better ways to use available technology to provide answers to our daily/weekly/monthly questions.

Once we have at least one question that we want to answer, we can identify information or technology gaps in our current environment. For example, an extremely common question among companies is:

How many servers do we own and what is the status of each?

We've found that this seemingly simple question can cause fistfights to break out in a meeting. That's because multiple different departments have different answers, and the true answer has been an elusive quest for a number of years. The goal is to identify the data required to answer the question and the location(s) of that data if available. For example, you may have the beginning of an answer that looks like this:

For the development servers, Jim R. has a manually updated spreadsheet at location XXXX on Sharepoint. 
For the engineering servers, Nancy P. has a homegrown database that only she has access to.
For the website, we think Ashok V. has a spreadsheet that may or may not be up-to-date.

Notice that we don't actually have an answer at all, but we're identifying areas of interest that may get us the information we need. This exercise shows areas where improvement is needed. In this case, it should be apparent that some type of asset discovery and management system is needed to enable us to get a valid answer to the question. What we normally find is that the customer actually owns one (or several) tools that can provide the required function, but no one in the meeting knows about these tools. This usually leads to another question similar to:

What software do we currently own and what are the capabilities of each title?

And you have to go through the same process as above with this question. I guarantee it will be frustrating for everyone involved, but it's absolutely required. 

Further along in the process, as new systems are introduced, the owners of those systems need to be aware of the questions that the system will need to provide answers for so that they can be architected appropriately. For example, any new online system needs to be able to provide data that can answer the following questions:

How many users are actively using the system?
How many failed transactions have occurred in the past (hour/day/week) and which users were affected?
Is the system working properly at this moment and is it accessible to all of my users?

There are literally thousands of other questions that you may have, and part of Digital Transformation is identifying those questions so that the answers can be obtained quickly. And this is where another version of my original definition comes in handy:

Find better ways to use available technology to save time.

You can come up with tons of different reasons to use technology more wisely, and they are all perfectly valid reasons for you to continuously work on your Digital Transformation.

Customizing bash command line completion

What am I talking about?

In the bash shell on Linux, you can type a character or two then hit the TAB key to get a list of the commands that start with those characters. You can do the same to complete the name of a file you're trying to edit or directory you're trying to change to. It turns out that you can customize this command line completion behavior by installing the "bash-completion" package. This package is often installed by default and has been available for several years.

What can you do with bash-completion?

You can have the TAB key complete command arguments for you. For example, the 'curl' command has tons of options, and you can customize bash to auto-complete them for you. You just need to create a completion script named 'curl' in the /etc/bash_completion.d directory. Here's a great tutorial on creating these command completion scripts:

Even more helpfully, here's a large collection of them that have already been created:

If you've got a command with tons of options, you can use this to make it easier for you or your users to successfully create a working command.
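As a minimal sketch, here's what one of these completion scripts looks like for a hypothetical command called 'mycmd' (the command name and option list are made up for illustration). Dropping a file like this into /etc/bash_completion.d (or sourcing it directly) makes TAB complete those options:

```shell
# Hypothetical completion script for a made-up command 'mycmd'.
# A real script for 'curl' would list curl's actual options instead.
_mycmd_completions() {
    # The word currently being typed on the command line
    local cur="${COMP_WORDS[COMP_CWORD]}"
    # Offer any option that matches what's been typed so far
    COMPREPLY=( $(compgen -W "--help --verbose --output --retry" -- "$cur") )
}

# Tell bash to call our function when completing arguments for 'mycmd'
complete -F _mycmd_completions mycmd
```

After sourcing this, typing `mycmd --v` and hitting TAB completes to `mycmd --verbose`.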

Tuesday, February 26, 2019

You Probably Don't Need Blockchain

Here's a great article detailing some popular blockchain use cases and how they can be subverted:


While reading it, note that there are often other (simpler, cheaper, more mature, more widely known) technologies out there that can solve the problems you're trying to solve.

One big example is the combination of digital signatures with an immutable data store. This captures the identities of the participants, the information they provided, and the timestamps of all entries.

Blockchain does have some valid use cases (e.g. cryptocurrency management), but it certainly shouldn't be seen as the best way to solve existing problems.

Tuesday, February 5, 2019

A great video on deploying and operating Kubernetes at scale

Here's a video from Chick-fil-A's IT team describing exactly how they use Kubernetes clusters at the edge (in each restaurant). The problems and their solutions are really intriguing.


Wednesday, January 16, 2019

Improving the QRadar to ServiceNow integration by adding QRadar event payloads to ServiceNow incident

Using the standard configuration for the QRadar/ServiceNow integration gives you some great capabilities, but some of our customers have asked for more information in the generated ServiceNow incidents. Specifically, they've asked to have the payloads from the events associated with the offense added to the Description of the incident in ServiceNow. This puts extensive detail about the events that triggered the offense in a single pane of glass, so the SOC engineer doesn't have to separately open QRadar to get this information.

This can be accomplished by making some configuration changes in both QRadar and ServiceNow. I'll provide the overview here. If you would like more details, please contact me.

1. Add the offense start time to the incident description in the mapping within QRadar.
2. Create a ServiceNow business rule to parse the offense id and start time from the description whenever a new incident is created from QRadar.
3. In that same business rule, use the offense id, start time and a stop time (equal to start time +1) to submit an Ariel query to QRadar via REST to have the query run.
4. In that same business rule, parse the results of the previous REST call to get the results id, then make a second REST call to obtain the actual results, which will be the payloads of the events that caused the offense (and resulting incident) to be created.
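To make steps 3 and 4 concrete, here's a rough sketch of the Ariel query and the two REST calls as you might first test them with curl from the command line. The host name, token, and offense values are all placeholders, and the exact AQL you need may differ; treat this as an outline rather than a drop-in script:

```shell
# Placeholders - substitute your own values
QRADAR_HOST="qradar.example.com"   # hypothetical host
SEC_TOKEN="your-authorized-service-token"
OFFENSE_ID=42
START=1546300800000                # offense start time (epoch ms) parsed from the description
STOP=$((START + 1))                # start time + 1, per step 3

# AQL to pull the payloads of the events attached to the offense
AQL="SELECT UTF8(payload) FROM events WHERE INOFFENSE(${OFFENSE_ID}) START ${START} STOP ${STOP}"
echo "Query: ${AQL}"

# Step 3: submit the search; the response includes a search id
# curl -s -k -X POST -H "SEC: ${SEC_TOKEN}" \
#   --data-urlencode "query_expression=${AQL}" \
#   "https://${QRADAR_HOST}/api/ariel/searches"

# Step 4: once the search status is COMPLETED, fetch the event payloads
# curl -s -k -H "SEC: ${SEC_TOKEN}" \
#   "https://${QRADAR_HOST}/api/ariel/searches/<search_id>/results"
```

Inside the business rule itself, the same two calls would be made with ServiceNow's outbound REST capability rather than curl.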

The solution places very little load on either system and makes life easier for the security engineer researching the issue.

Thursday, January 10, 2019

Install IBM's QRadar Community Edition 7.3.1 on CentOS 7.5 instead of RHEL 7.5

IBM offers QRadar Community Edition for free, available here:


The documentation states that it runs on "CentOS or Red Hat 7.5 with a Minimal install". If you're installing the OS from scratch, I recommend CentOS 7.5 (officially CentOS 7 1804) because the install goes much more smoothly than on Red Hat. Specifically, I downloaded CentOS 7.5 from here:


There are smaller downloads in that same directory, but I wanted to get everything I might need. I then installed it in a VM with 16GB RAM and 8 cores, and selected the "Minimal Install" option (the default). I did this install under VMware Workstation 14 Pro running on a Windows 10 laptop.

I could then directly follow the install instructions from IBM:


What doesn't work very well or at all:

(Guess how I know these)

The QRadar install will 100% fail if you try to install it on CentOS 7.6 (1810). The prerequisite checker will tell you that 7.5 is REQUIRED.

Trying to install on CentOS 7.5 using the "Server with GUI" option fails on glusterfs* package problems.

Installing on RHEL 7.5 requires that you register your RHEL instance with the Red Hat Subscription Manager.
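Given how strict the prerequisite checker is about 7.5, a quick sanity check of your OS release before launching the installer can save you a failed run. This is just a sketch; on a real system you'd read the string from /etc/centos-release (or /etc/redhat-release):

```shell
# Sample release string; on a real box use: release=$(cat /etc/centos-release)
release="CentOS Linux release 7.5.1804 (Core)"

# Pull out the major.minor version (e.g. 7.5)
ver=$(echo "$release" | grep -oE '7\.[0-9]+' | head -1)

if [ "$ver" = "7.5" ]; then
    echo "OK: release $ver - QRadar CE 7.3.1 should install"
else
    echo "WARNING: release $ver - the prerequisite checker will likely fail"
fi
```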

Wednesday, January 2, 2019

Integrating systems today is both easier and more complex than ever

Integrating IT systems used to require a LOT of sweat and tears just to get the plumbing configured (think of updating a SharePoint site when a new z/OS dataset is created). Today, thankfully, all of the plumbing is available and there are tons of different options for integrations. So the problem now is surveying your specific environment to identify all of the tools that people use and then architecting and implementing a solution that works well for everyone.

As an example, you may use Salesforce for CRM, ServiceNow for service desk, Maximo for asset management, Oracle Cloud for financials, AWS for some applications, Grafana for operations dashboards, and SharePoint for internal web sites (just to name a few). All of these solutions have workflow engines and connectors that can allow you to integrate them. But you first need to answer a couple of questions that are similar to those associated with custom application development:

Who are the people and personas that we're trying to help?

This is the most important question because the personas you identify will directly shape the solution you're implementing. And answering this question with specific personas, like "Nancy the regional sales manager" will allow you to refine additional data down the road.

What data am I interested in and which systems are the golden sources of record for that data?

We spend quite a bit of time with customers simply finding all of the systems that are being used. Normally we start small, maybe with a single department, and then we work on getting a larger and larger picture. All of our clients use numerous systems that usually have some number of overlapping functions. We try to find everything in use so we can intelligently identify the ones that may be best suited to different tasks, also taking into account the number of users who have familiarity with the different applications.

Now that you've got some questions answered, what are the options available?

This is where things get messy in a hurry, and why you want to enlist the help of an experienced enterprise architect. It used to be that you could only get a workflow engine from an expensive enterprise application. Now, most companies are already paying for multiple workflow engines and they aren't using them. For example, Microsoft offers several: Flow, Business Process Flows (in Dynamics365), and Azure Logic Apps. Those are all separate (though very similar and intertwined) workflow engines just from Microsoft. AWS has Simple Workflow Service and Step Functions. And IBM has Business Process Automation or the workflow engine in Maximo. ServiceNow has a workflow component. (As of this writing, Google Cloud doesn't offer a generic workflow engine; they have Cloud Composer, but that's a completely different animal.) And each of those has a large set of connectors, triggers and actions that allow you to automate anything you need.

So which components do you use?

This is where knowledge, experience and collaboration come together. There is no one answer that generically fits the requirements for all customers. The answer has to be developed and refined based on the needs of the customer and the project. We use an iterative approach to our implementations, where we develop/customize a little at a time, while gathering feedback from stakeholders. This is commonly referred to as the Agile Methodology, and we've found that it works very well, especially for complex integrations.

The eventual solution depends on a large set of factors, and the solution is often complex. That's why we always document our solutions in a format that's easily consumed. Sometimes that means it's a Word document with Visio diagrams, and other times it's a full SharePoint site with attached documents - it really depends on the client.

What's the point of this post?

While it's easier than ever to connect systems together, there's still a lot of hard work that has to go into implementing solutions. And this is exactly what we at Gulfsoft Consulting do: we help customers solve complex business problems by leveraging the appropriate knowledge, processes, people and tools. No matter what software you're working with, if you need help solving a complex problem, contact us. We've got decades of experience and we keep up to date on the latest technologies, patterns and strategies.