Sunday, December 9, 2018

JIRA can easily be used incorrectly

This is a great article about how JIRA can easily be weaponized for all the wrong purposes:

TechCrunch: JIRA is an antipattern. https://techcrunch.com/2018/12/09/jira-is-an-antipattern/

Like any tool associated with Agile, JIRA needs to be used at the appropriate stage(s) of a project; otherwise it does more harm than good.

Someone needs to have a view of the overarching goal, and that's where we fit in. Gulfsoft Consulting is a group of people who have decades of experience dealing with all of the details of data centers and application development, and we can help you make the right decisions. Contact us to start the conversation about your digital transformation.

Wednesday, December 5, 2018

With new avenues to make money come new ways for others to steal that money

I just read this article about Defy Media abruptly closing:

https://www.theverge.com/2018/12/5/18125657/defy-media-youtube-logan-paul-ryland-adams-anthony-padillo-smosh-network

I wanted to share this as a warning to all entrepreneurs out there to be diligent in vetting your partners and backers. Make sure you know what you're getting into before signing anything. And try to find a trusted adviser you can turn to with questions about business and finances.

Tuesday, December 4, 2018

If you run Kubernetes in the cloud, the first major vulnerability found isn't a huge issue

The first major Kubernetes (aka K8s) vulnerability was found yesterday:

https://www.zdnet.com/article/kubernetes-first-major-security-hole-discovered/

It's a pretty big deal and quite scary, but patches were available immediately upon disclosure. What's even better is that the managed Kubernetes services running on AWS, Azure and Google Cloud Platform have all been patched already. If you're managing your own K8s clusters, however, you need to patch them yourself, which takes time and know-how.
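
If you're not sure where your self-managed cluster stands, a quick version check is a reasonable first step. Here's a minimal Python sketch; it assumes kubectl is installed and pointed at your cluster, and that the patched releases are the ones announced with the disclosure (v1.10.11, v1.11.5 and v1.12.3):

import json
import subprocess

# Ask the cluster for its version info and pull out the API server release.
out = subprocess.check_output(["kubectl", "version", "-o", "json"])
server = json.loads(out)["serverVersion"]
print("API server is running " + server["gitVersion"])
# Compare against the patched releases: v1.10.11, v1.11.5, v1.12.3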

In my eyes, this is another data point that shows how proper use of cloud resources can be extremely beneficial to a company. Specifically, the big cloud players, especially AWS, are very similar to a highly competent and agile outsourced IT department. They have offerings that are years ahead of anything you could stand up onsite, and they've got testing methodologies in place to ensure that they're available 99.9% of the time.

It's true that there can be some issues in moving to the cloud, but many of the problems of the past now have very robust solutions that are included in the offerings. And those offerings are available on a pay-as-you-go basis in many cases. So you can easily keep tabs on exactly how much you're spending even on a per-application basis.

To ensure a successful digital transformation, contact us to get the experienced help that will put you on the right path.

Thursday, November 29, 2018

A really interesting AWS DevOps job opening

I just received this email, and the job looks incredibly interesting to me. If you've got AWS and DevOps experience, please contact Bhaskar directly (contact details below):

Direct Client: In-person interview is required. No Skype/WebEx/phone.

Location: Boston, MA
Duration: 12+ months (37.5 hrs/week)
Rate: Open

Responsibilities:
• Help with production issues and deployments and any analytics/data science products
• Move the team closer to continuous deployment, improving tooling (i.e. automation) and use of infrastructure (i.e. try-test-iterate faster)
• Set up deployment infrastructure for Mayflower, our style guide and visual component library
• Set up deployment infrastructure for analytics and data science products that are AWS Lambda and Docker based (Goal: reduced time to go from engagement to receiving data)
• Create and implement a strategy to monitor Digital Services supported applications and accurately notify engineers of problems
• Construct and maintain a Threat Model for Digital Services supported applications, and implement solutions for gaps in our security based on it
• Develop a reusable infrastructure playbook and tooling for rapidly deploying Data Team packages to internal customers
  o Quickly stand up a standard environment and/or distribute or hand off work deliverables
• Mentor the team on good DevOps practices
• Develop and deploy RESTful APIs, including documentation
• Set up and facilitate processes and environments for the creation of new services; this would include the creation of processes and deployments to dev, test and prod environments for the proof-of-concept services developed throughout the agency.

Skills Needed
• 5-8 years of experience in the below categories
• Experience with the AWS cloud platform, specifically with the following services (or equivalent services within alternative cloud-based platforms)
  o AWS CLI
  o CloudFormation
  o CloudFront
  o S3 management
  o RDS
  o DynamoDB
  o SNS & SQS
  o EC2 management
  o Elastic Beanstalk and other auto-scaling services
  o Lambda functions (Python & Node)
  o API Gateway
  o AWS Route 53 and AWS Certificate Manager
• Linux terminal
• Experience with an IaaS (preferably Amazon Web Services)
• Virtual machines
• Monitoring production web applications
• People and technical process improvement/re-engineering
• Communicating effectively
• Continuous integration/deployment
• Conducting technical and behavioral interviews
• Infrastructure security practices
• Documenting in plain language
• RESTful APIs
• Infrastructure automation tools
• Amazon Web Services
• “Serverless” architecture such as AWS Lambda
• Microservices architecture
• Bonus points for experience with:
  o Acquia Cloud
  o PHP
  o Drupal
  o Agile/iterative development
  o User Experience (UX) practice
  o Python
  o Ansible
  o Docker
  o Other coding experience


Thanks
Bhaskar

Bhaskar Nainwal
Software People Inc.
bhaskar.nainwal@softwarepeople.us
Ph: 631-739-8915 | Fax: 631-574-3122

Wednesday, November 28, 2018

QRadar has a low cost Data Store option that lets you store and search as much data as you want

It looks like this has been around since April, but I just ran across it today. The QRadar Data Store option allows you to store as much log data as you want, without having to pay the normal EPS price. Here's more information on it:


And a short video that talks about it:


It does have a cost, but it's MUCH cheaper than the normal QRadar cost, and it allows you to use the same QRadar interface to search all of your log data (rather than only your security related information).

Tuesday, November 27, 2018

Istio and transaction topology for serverless applications

I just watched this short video on Istio:


Basically it's microservice plumbing for Kubernetes that adds security and telemetry. So I wondered if that telemetry included topology information and found that it DOES (or it can with a plugin):


So in some far-off future, you'll be able to "automatically" obtain topology data without having to install and manage data collectors and agents (and without asking developers to instrument their code). Kinda neat.

Monday, November 26, 2018

Every enterprise is already using serverless applications in some form or another

If you have an application that makes a call to an external application, then you're on the calling side of a serverless application. Here's a high level graphic to illustrate my point:
You essentially have no insight into how the Results are generated by the "cloud" you're accessing via IP address or hostname. So you're accessing a service, but the actual server part of that interaction is abstracted from you.

Here's a great article on the concept of "Servicefull Serverless" to go into more detail about this:

https://www.infoq.com/articles/serverless-sea-change

Now, the current definition of "serverless" leverages all kinds of possible technologies like AWS Lambda, OpenWhisk or even Cloudflare Workers, on top of containers and Kubernetes running in VMs (or bare iron in the case of Cloudflare's V8 isolates). So it's extremely important for you to understand those components at some point, but from your view as a consumer, you're already using serverless technology.
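
To make that concrete, here's a minimal Python sketch of the calling side; the endpoint is hypothetical and stands in for any external service your application consumes:

import json
from urllib import request

# You know the hostname and the shape of the results; the servers, containers
# or functions behind the endpoint are completely abstracted away from you.
resp = request.urlopen("https://api.example.com/v1/results")
results = json.loads(resp.read())
print(results)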

Wednesday, November 7, 2018

Why employees hate their computers

I just read this article in slashdot about why doctors hate their computers:

https://science.slashdot.org/story/18/11/06/162201/why-doctors-hate-their-computers

The article really shows JUST how much it can cost to implement software incorrectly. In contrast, the process we follow includes the following questions/components to ensure that our customers have useful software once it's in production:

- Identification of ALL users of the system and their frequency of use. Once we know all of the users and how often they interact with the system, we can define priorities for each use case. For example, we would have identified doctors as high priority frequent users and ensured that their interactions with the system were the smoothest possible. There are several ways to ensure this, but one that we always require is an actual run-through of the screens with the user. This is normally difficult to schedule with the busiest users, but it MUST be done or you'll simply be burning money.

- Identification of all data to be migrated. In the case of moving to a new system (whether it's medical records, insurance claims, or anything else), ALL of the existing data must be found and must be made available in the new system in some way or another. This normally takes time, but that time is a lot less expensive BEFORE a new system goes live. Issues in a software implementation get more and more expensive to fix the farther along in the implementation, so they need to be caught early.

- For enterprise applications, "good enough" isn't. Some of the current thinking in application development and deployment says that you should get something in front of users and fix problems as they arise. This attitude is fine for a new game or small application, but it can cost money and lives in enterprise software. The people leading the implementation need to have experience in business critical applications to truly understand the cost of even a minor failure. When the cost of one minute of downtime can be measured in tens of thousands of dollars (or more!), every possible scenario has to be addressed before a production rollout.

At Gulfsoft, all of our consultants have over 15 years of experience in mission critical situations. We've worked with 911 emergency systems, satellite communications companies, large financial companies and everything in between. We know how to successfully implement large scale enterprise solutions to ensure that your employees and customers are delighted, and we can help you.

Tuesday, October 16, 2018

You can now use Vega to create custom graphs in Kibana

Prior to Kibana 6.2, you had to create a custom plugin to create custom visualization types. Now, however, support for Vega is included. Vega is a JSON (HJSON, actually) language that you can think of as a wrapper around the D3 visualization toolkit to allow it to display in Kibana. Here's a video with the highlights:

https://www.youtube.com/watch?v=lQGCipY3th8

Monday, October 15, 2018

IBM Announces Multicloud Manager

https://www.ibm.com/cloud/multicloud-manager

It allows you to manage containers across all the biggest cloud providers.

You can now see your LinkedIn saved articles on the desktop!

I just ran across this today, and am very happy that I can finally view the items that I save on my phone on the web version. The link to see your saved articles is:

https://www.linkedin.com/feed/saved-articles/

To save an article, look for the little bookmark icon under each article. Click it, and you'll get a popup that tells you the article is saved and gives you a link to view all of your saved articles.

Tuesday, October 9, 2018

ITMSuper is in a new location

IBM moved the ITMSuper tool to:

https://www-01.ibm.com/marketing/iwm/iwm/web/pickUrxNew.do?source=tivopal

If you're an ITM 6.x user and you haven't used this tool, you really should download it to help with the management of your environment.

Wednesday, October 3, 2018

Learning neat new things at the Splunk Conference


Splunk is introducing tons of great new features at their conference this year. Many customers complain about the cost of Splunk, but you can lower that cost by leveraging the data to cover more use cases. If you feel like you're not getting the most out of Splunk, give us a call to get some help.

Monday, September 17, 2018

Maximo 7.6.1 includes entitlement for Cognos Analytics 11

Cognos Analytics 11 is the all-new Cognos product built for self-service business analytics, competing with the likes of Tableau and Microsoft Power BI. Pam Denny at IBM created a 13-part video series about this new entitlement here:

https://www.youtube.com/playlist?list=PLOBy7UFdPupclhOayxt8jrebiSZqZiv3R

Thursday, September 6, 2018

Some of our current projects

We work with a pretty wide array of products, so I wanted to highlight some of the projects we're working on right now.

ServiceNow Architecture and Implementation

We're working with a communications company to implement their procurement, installation and change processes within ServiceNow, with asset feeds from multiple external systems.

ServiceNow Incident Response integration with QRadar

We're helping our client customize both products and the integration between them to best leverage their existing investment and people.

IBM Control Desk for Field Service Management

We're helping a different communications customer with their field service management through workflows and custom user interfaces defined in IBM Control Desk.

Netcool Operations Insight Implementation

We're actually working on several of these at the moment. Most of the work goes into identifying the different event sources, what (if any) automated actions need to be performed, and who needs to be notified.

BigFix Steady State

A medical client of ours has been leveraging our BigFix Managed Services for several years to ensure that all IT equipment is both known and is running software at the appropriate patch level.

ICD and BigFix Implementation with Airgap

We're working with a defense contractor to ensure that their Asset Management and Change Management processes continue to work smoothly leveraging ICD and BigFix.

Friday, August 31, 2018

The latest version of OpenStack (Rocky) can leverage bare metal servers directly

The source link from ZDNet:

https://www.zdnet.com/article/new-openstack-cloud-release-embraces-bare-metal/

Now you can provision bare metal servers through OpenStack. The linked article describes some of the use cases, and provides additional links to more information.

Wednesday, August 29, 2018

ServiceNow - requiring input from a user completing a task from workflow

Background

A normal part of workflow is requiring some additional information from someone involved in the workflow. A lot of information can be captured automatically, but some information must be entered manually, simply because not everything can be determined by an algorithm. This may be because the information is maintained in a separate, walled-off system, or because the sensors required to gather the information aren't yet deployed, etc.

In ServiceNow IT Operations Management, you have this ability out of the box when dealing with Service Catalog Items. A Service Catalog Item is also known as a Requested Item or an RITM. Specifically, you can define Variables that are associated with an RITM, and those Variables are then available for use within any Catalog Tasks that you create within the workflow for that RITM. Without a little customization, though, this feature is ONLY available within workflows that target the sc_req_item table. So if you require some generic user input as part of a change task, for example, you need to perform the customizations detailed here.

Here is a link to a great article on this very topic. That post is a little old (from 2010), so the information is a bit dated and very terse. I'm writing this article to update the information and to clarify a few pieces.

I suggest you read through these instructions once or twice before you start trying to follow them. Then re-read them as you're implementing them. The pieces will come together for you at some point, just probably not immediately unless you've worked on this area of ServiceNow a bit.

What's already in place?

Basically, almost all of the components needed to provide this capability already exist in the system, so there's only one place that you'll have to add some code. Everything else is just customization.

By default there are two tables that already exist in ServiceNow for this very purpose:

Question [question]
Question Answer [question_answer]

(The format of these names is a common one that you'll see in ServiceNow: "Label [actual_table_name]".)

The Question table holds all of the questions/variables defined in the system. What's specified here is the text of the question and the type of field required (simple text field, choice list, reference, etc.)

The Question Answer table exists to hold one entry for each question and its answer associated with an "entity". For the purpose of this article, the "entity" we'll be adding question/answer pairs to is a change_task item.

Note: Catalog Items already use both the question and question_answer tables to store the Variables (options) that can be specified for each item. 

Areas that need to be customized

You will need to make customizations in the following areas:

1. Workflow->Administration->Activity Definitions

In here, you need to edit the definition for the Create Task activity. The customizations we're making here will allow us to add Questions to a Create Task workflow activity when we add it to the workflow canvas.


Click on Create Task to edit its definition:


As with most forms in ServiceNow, there are numerous fields and sections within this form. 

Define new variable

The first thing we need to do is define a new variable in the bottom section named "Activity Variables":


Add a variable of type "List" with a label of "Questions" and a Column name of "task_questions" (the name will have "u_" prepended to it once you click Submit), and specify that it is a Reference to the Question table:

Edit the script

The next thing we need to do is add to the script on the Script tab near the top of the page:



You're going to add two pieces of code here. First, in the onExecute function, add this code, which references the variable you defined in the last step:

if(activity.vars.u_task_questions)
   this._setVars(taskID);

Add it before the call to this.autoClose(taskID), as shown here:


The purpose of this portion of code is to call our function (defined in the next paragraph) when the "Create Task" activity actually creates a task as part of a workflow.

The next piece of code you'll add is the definition of the _setVars() function that's called in the code you just inserted. This is the code you'll add:

_setVars: function(taskID){
   // u_task_questions holds a comma-separated list of question sys_ids
   var questions = activity.vars.u_task_questions.split(",");
   for(var i = 0; i < questions.length; i++){
      // Create one question_answer record per question, tied to the new task
      var qa = new GlideRecord("question_answer");
      qa.initialize();
      qa.question = questions[i];
      qa.table_name = activity.vars.task_table;
      qa.table_sys_id = taskID;
      qa.insert();
   }
},

Add it after the _generate function definition, as shown here:


The purpose of this code is to add one entry to the question_answer table for each of the questions that are defined for this particular workflow activity task. We'll see this in action later.

Edit the Create Task form

Now that you've got a variable to hold the questions for the task and you've got the code in place to add each of the questions to each new task that will be created by this workflow activity, you need to edit the form to actually let someone choose the questions that will be presented in the task. For this, you need to click on the Edit Variables Layout link in the Related Links section of the Create Task Workflow Activity Definition (hint: this section is directly under the "Update" button under the body of the script):

In here, drag and drop your new "Questions" field wherever you'd like to see it on the form. I've placed mine in the second section of the form:


That's the end of the customizations you need to make to the Workflow Activity Definition. If you want to see the fruits of your labor, you can open the Workflow Editor and create a new workflow, and drop the Create Task activity onto it. Here's what it looks like by default:


And here it is with the new "Questions" field:


We'll come back to the Workflow Editor later. 

2. System UI->Formatters

You now need to create a UI Formatter to display the list of questions and answers. This is covered in the ServiceNow documentation here:


Basically, you just need to create a new UI Formatter that specifies the name you want (I chose "FTQuestionFormatter"), a "Formatter" value of com_glideapp_questionset_default_question_editor, and the type of task that you're going to be creating. In my case, I'm working with a Change Task (the change_task table):


You don't need to change anything about the UI Macro for this formatter - it's written to do exactly what we need.

3. Change Task default view

This is described in the product documentation link above, but I'm including it here for completeness. Since I decided to make this feature available to Change Tasks, that's the form we're going to work with. The most straightforward way to edit this form is to go to System Definition->Tables and select the Change Task [change_task] table.


Then scroll down to the middle of the page to select the Design Form Related Link to open the Form Designer, and there you can drag your UI Formatter from the Formatters section (on the left under "Fields") onto the form where you want the questions displayed. I put mine at the bottom of the second section:


This formatter will ONLY show something if the change task you're viewing actually has questions defined. That means that ONLY change tasks created from the workflow that you create in the next step will have anything shown. So at this point, you won't see anything different on any existing change tasks.

4. Service Catalog->Catalog Variables->All Variables

Here is where you need to define any "Questions" (aka Variables) that you want to see on the tasks you'll create later. The main tip here is that you don't need to specify a value for the Catalog Item field, as what we're doing has nothing to do with catalog items. In fact, catalog items already have this capability built-in, with some additional capabilities. What we're configuring is a bit more generic, but we're using the built-in forms to accomplish our goal.

5. Workflow->Workflow Editor

At this point, you can edit a workflow that targets a table that extends the task table and drag the "Create Task" Core Activity onto the canvas. When you do, you'll see your Questions field:


Make sure to select the appropriate type of task - Change Task. This does NOT work for Task items directly in the Task table. This is important to remember! You will not see a successful result if you select "Task".

You can click the padlock icon to open it, then click the magnifying glass to search for your questions:


You'll see ALL of the variables/questions in the questions table. Pick the ones you want displayed to the person assigned the task.

The Result

Now when this task is assigned to a user, that user will see the task in "My Work" and will see that they have the option to provide values for all of the questions selected in the workflow design:


Conclusion

You now have the ability to prompt users for additional input when they're completing a task as part of a workflow. You still need to write scripts to access that data, perform validation, make decisions, etc., and I'll leave that for another day.

Tuesday, August 14, 2018

The Lenovo P1 is a thin-and-light laptop with a Xeon and 64GB RAM that will be available by September

https://www.theverge.com/circuitbreaker/2018/8/13/17682992/lenovo-thinkpad-thin-laptop-work-travel

Additionally, they've got a P72 that will support up to 128GB RAM coming out at the same time.

More memory, storage and power is a great thing.

Wednesday, July 11, 2018

Processing JSON in automation scripts in IBM Control Desk 7.6

Background

You may need to deal with JSON-formatted data in an automation script, and it can be a little tricky. I've written this post to provide a few pointers to make it easier for you.

JavaScript

You can write automation scripts in Rhino JavaScript or Jython. While Jython is the most common language used for automation scripts, it turns out that JSON processing is MUCH easier in JavaScript. Specifically, in a JavaScript automation script, you have access to the standard JSON object, which gives you everything you need. Here's an example:

var jsonObject = JSON.parse(jsonString);

And that's it. You can now work with jsonObject as a JSON object as described in this reference material from w3schools:


As far as I know, this will work in both WebSphere and WebLogic application servers. One possible caveat is that the JavaScript engine changed between JDK 7 (Rhino) and JDK 8 (Nashorn). Here's more information on that:

Jython in WebSphere

For Maximo/ICD automation scripts, Jython is by far the most popular language. It's also more thoroughly documented and, IMO, easier to work with in this context. However, JSON parsing has a couple of caveats. Specifically, the Jython interpreter in ICD 7.6 is version 2.5.2, which doesn't have a built-in JSON parser (the json module wasn't added until version 2.6). However, we're still in luck, because WebSphere includes a JAR file that provides JSON processing. The specific class that you need to import is com.ibm.json.java.JSONObject:

from com.ibm.json.java import JSONObject
...
my_json = JSONObject.parse(my_filebody)

And from there, you can deal with my_json appropriately according to the JavaDoc here:


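As I read that JavaDoc, JSONObject behaves like a java.util.Map, so the usual Map methods apply. Here's a small sketch of typical usage; the field names are made up for illustration:

from com.ibm.json.java import JSONObject

# parse() accepts a string (or an InputStream/Reader) and returns a JSONObject
my_json = JSONObject.parse('{"hostname": "server01", "cpus": 4}')
hostname = my_json.get("hostname")  # a java.util.Map-style lookup
print(hostname)
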
Jython in WebLogic

Admittedly, I haven't tested this one. I've tested the above two, and from my research, I believe this will work. Specifically, these two links give the necessary information:



If you find that it doesn't work, please ping me and I'll help you get it to work, then I'll update this entry as necessary.

With that in mind, you just need to import the appropriate classes in your automation script:

from javax.json import Json
from javax.json import JsonObject

And there you go.
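
In the same untested spirit as the rest of this section, here's a minimal sketch of how those classes should fit together, based on the standard JSR 353 API (the sample JSON is made up):

from java.io import StringReader
from javax.json import Json

# Build a JsonReader over the raw JSON text, then pull out a JsonObject
reader = Json.createReader(StringReader('{"hostname": "server01"}'))
my_json = reader.readObject()
reader.close()
print(my_json.getString("hostname"))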

Tuesday, July 10, 2018

It only takes an hour to get a test BigFix environment installed and working

The only caveat (which they've maybe fixed now) is that the SQL Server that's bundled with the BigFix Eval is borked, so you first need to install an eval version of MSSQL Server 2014, which is available from Microsoft.

But the whole process is really easy:

1. Create/clone a Windows 2012 or 2016 server (you can download an eval copy of Windows Server 2016 if needed)
2. Google MSSQL Server 2014 evaluation download and download it
3. Install MSSQL with all the defaults
4. Follow IBM's instructions for installation
5. Once it's up and running (takes about 10 minutes), continue through the install instructions to add all of the available Sites. The site named IBM BigFix Inventory v9 is actually the one that will get you the BigFix Inventory install files.
6. Optionally create/clone a Windows or Linux VM to be an additional client in your environment

That's it, and even if you need to install Windows Server from scratch, it only takes at most 1.5 hours.

There are other parts you can also install now, such as BigFix Inventory or the WebUI (both are available via fixlets in one of the available Sites).

Monday, July 9, 2018

How to change the BigFix WebUI database userid and password

I recently installed the BigFix WebUI with the wrong password and needed to fix it. I found the encrypted information in the db_config.json file in the folder:

C:\Program Files (x86)\BigFix Enterprise\BES WebUI\WebUI

However, this is what the contents of that file are:

{"user":"96\u002fzY1rPfE40v69uFttQAg==","password":"MwKBDmT00BEwEZm1ctZahg==","hostname":"WIN-5M6866TPST1.mynet.foo","port":1433}

And while those look like Base64 encoded values, there's also some encryption going on (try putting either of those strings through an online Base64 encoder/decoder and you'll see).
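
If you want to check that yourself, a couple of lines of Python will do the same thing (note that the "\u002f" in the user value is just JSON escaping for "/"):

import base64

# Both values decode cleanly, but to random-looking bytes rather than a
# readable userid or password, so they're encrypted rather than just encoded.
print(base64.b64decode("96/zY1rPfE40v69uFttQAg=="))
print(base64.b64decode("MwKBDmT00BEwEZm1ctZahg=="))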

So the first thing I tried was to just put the information in the file in cleartext and restart the WebUI service, so the file looked like:

{"user":"sa","password":"passw0rd","hostname":"WIN-5M6866TPST1.mynet.foo","port":1433}

Amazingly, that worked, and here's the logfile entry that shows it:

Wed, 04 Jul 2018 13:14:24 GMT bf:dbcredentials-error Failed to decrypt database credentials, attempting to use inputted credentials as plaintext

However, the file kept the cleartext data (I had hoped that it would re-encrypt the values on startup, but it did not).

Then I found the solution in the place I should have looked to begin with - in the BigFix console! There's a task defined in the BES Support site specifically for this purpose. The task is named "Deploy/Update WebUI Database Configuration". Run the action associated with that task and it will create a new db_config.json file with the properly encrypted data and you're good to go.

Friday, July 6, 2018

For business use, don't buy a laptop with higher than 1080p resolution

The high resolution screens available today are amazing for graphics and gaming, but absolutely horrible if you need to use any traditional/legacy applications. The main application that gives me trouble is QuickBooks Desktop Pro. We have version 2016, and I don't imagine they're going to fix it anytime soon, since they seem to (rightly) want everyone to move to their online version. We've been using QuickBooks for over 15 years, so we're using some features that simply aren't available in the online version. I'm sure we'll move to the online version at some point, but it won't be any time soon. I'm certain there are other desktop applications that similarly have a problem with high resolution monitors - specifically, the text and windows are too small to see, and scaling doesn't work correctly at all. It's just ugly.

The higher end business laptops (Lenovo Thinkpad T, P or X series; Dell XPS; etc.) generally offer a 1920x1080 pixel option as a base, then higher resolutions and touchscreens cost more. In my experience, you'll be the happiest with the lower cost 1920x1080 option. Whether you get a touch-enabled screen or not is up to you, but definitely skip the high resolution screen.

Wednesday, June 27, 2018

Just Announced: IBM Cloud App Management

Here's the announcement, with architecture details:

https://developer.ibm.com/apm/2018/06/26/introducing-ibms-new-service-management-cloud-native-offering-ibm-cloud-app-management/

Some of the highlights are that it runs on IBM Cloud Private (so it runs in containers orchestrated by Kubernetes) and supports both ITM v6 and APM v8 agents.

Monday, June 25, 2018

Reading and writing files in a Maximo automation script

Background

All of the product documentation tells you to use the product provided logging for debugging automation scripts (see here, for example: https://www.ibm.com/support/knowledgecenter/SSZRHJ/com.ibm.mbs.doc/autoscript/c_ctr_auto_script_debug.html ). For quick debugging, however, I thought that was cumbersome, so I decided to figure out how to access files directly from within an automation script. This post goes over exactly what's required to do that. Maximo supports Jython and Rhino-JavaScript for automation scripting, and I'll cover both of those here.

Jython

This one was straightforward, since the Python documentation can be followed exactly. Jython is simply an implementation of Python written completely in Java. All you need to open a file is:

my_file = open('c:/tmp/outfile.txt','a')

where 'a' specifies that we're appending to the file (and creating it if it doesn't exist). You then do need to flush and close the file, and this is my function to do that:

def logit(mytext):
  my_file = open('c:/tmp/jout.txt','a')
  my_file.write(mytext + '\n')
  my_file.flush()
  my_file.close()

So to log a string, just run:

logit("this is my string")

Reading from a file is just as easy:

my_read = open('c:/tmp/computers.json')
my_json = my_read.readline()
my_read.close()

In my case, the file contains one long line of JSON data, so readline() works great to store all of the text of the file into the string named my_json.
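
As a side note, if you then need to parse that JSON inside the same script, the approach from the JSON processing post above applies directly. A quick sketch, assuming you're running under WebSphere (where the com.ibm.json.java classes are available):

from com.ibm.json.java import JSONObject

# Read the single line of JSON and parse it in one pass
my_read = open('c:/tmp/computers.json')
my_json_obj = JSONObject.parse(my_read.readline())
my_read.close()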

Rhino-JavaScript

This one is quite a bit more painful than Jython, which is really just one more reason that all of your automation scripts should be written in Jython. Specifically, the Rhino implementation in Maximo doesn't seem to completely adhere to the documentation you'll find online. For example, there is no "ReadFile()" method available in Maximo. There are also other limitations, and the only way I found to get over them was to use Java classes. I thought that would make it easy, but then you have to deal with the fact that Java objects (specifically Array objects) are absolutely not the same as JavaScript objects.

So, writing a file isn't too difficult once you realize that you need to use Java. Here's how you open and write a file:

var outFile = new java.io.FileWriter("c:/tmp/autoscriptout.txt");
outFile.write("my string");
outFile.close();

The hard part is actually reading data from the file. Using the same JSON file as above with one long line of JSON, the following is required to read that data into a JavaScript string that can then be parsed:

var 
  thefile = new java.io.File("c:/tmp/computers.json"),
  filelength = thefile.length(),
  thefilereader = new java.io.FileReader(thefile),
  jsonData = java.lang.reflect.Array.newInstance(java.lang.Character.TYPE,filelength),
  res = thefilereader.read(jsonData,0,filelength),
  jsonString = new java.lang.String(jsonData);

And now all of the JSON data is in the string named jsonString.

Enjoy.

Friday, June 15, 2018

ICD 7.6 Fresh install and config with LDAP authentication configured will fail

We found a problem when installing IBM Control Desk 7.6 on WebSphere and MSSQL where it fails every time if you enable LDAP/AD authentication during the configuration phase of the install. Specifically, you'll see this error in the ConfigTool window:

Apply Deployment Operations-CTGIN5013E: The reconfiguration action deployDatabaseConfiguration failed. Refer to messages in the console for more information.

And if you look in the CTGConfigurationTrace<datetime>.log file, you'll see this error:

SEVERE: NOTE ^^T^Incorrect syntax near the keyword 'null'.
com.microsoft.sqlserver.jdbc.SQLServerException: Incorrect syntax near the keyword 'null'.
at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:216)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1515)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.doExecutePreparedStatement(SQLServerPreparedStatement.java:404)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement$PrepStmtExecCmd.doExecute(SQLServerPreparedStatement.java:350)
at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:5696)
at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:1715)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:180)
at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:155)
at com.microsoft.sqlserver.jdbc.SQLServerPreparedStatement.executeQuery(SQLServerPreparedStatement.java:285)
at com.ibm.tivoli.ccmdb.install.common.util.CmnEncryptPropertiesUtil.init(CmnEncryptPropertiesUtil.java:187)
at com.ibm.tivoli.ccmdb.install.common.util.CmnEncryptPropertiesUtil.<init>(CmnEncryptPropertiesUtil.java:101)
at com.ibm.tivoli.ccmdb.install.common.util.CmnEncryptPropertiesUtil.getInstance(CmnEncryptPropertiesUtil.java:141)
at com.ibm.tivoli.ccmdb.install.common.config.database.ACfgDatabase.createCronTask(ACfgDatabase.java:1391)
at com.ibm.tivoli.ccmdb.install.common.config.database.CfgEnableVMMSyncTaskAction.performAction(CfgEnableVMMSyncTaskAction.java:140)
at com.ibm.tivoli.ccmdb.install.common.config.database.ACfgDatabase.runConfigurationStep(ACfgDatabase.java:1108)
at com.ibm.tivoli.madt.reconfig.database.DeployDBConfiguration.performAction(DeployDBConfiguration.java:493)
at com.ibm.tivoli.madt.configui.config.ConfigureSQLServer.performConfiguration(ConfigureSQLServer.java:75)
at com.ibm.tivoli.madt.configui.common.config.ConfigurationUtilities.runDatabaseConfiguration(ConfigurationUtilities.java:540)
at com.ibm.tivoli.madt.configui.bsi.panels.deployment.TpaeDeploymentPanel$RunOperations.run(TpaeDeploymentPanel.java:1550)
at org.eclipse.jface.operation.ModalContext$ModalContextThread.run(ModalContext.java:121)


This happens before the EAR files are built.

The way around this is to select "Use Maximo internal authentication" on the "Configure Application Security" screen of the ConfigTool. Once everything is installed, configured and running, you can then go in and enable J2EE application security for authentication.


Wednesday, June 13, 2018

JD-Gui is an invaluable tool for troubleshooting Java applications

If you deal with Java applications, you should get familiar with JD-Gui if you aren't already. JD-Gui (Java Decompiler - Graphical User Interface) does exactly what its name states, and it seems downright magical because it shows you source code from compiled Java applications, which can give you amazingly useful insight into how an application is working. Here's a screenshot of it in action, where I'm using it to look at a JAR file that's included with the IBM Control Desk ConfigTool:


All of that information came from just dropping the JAR file onto JD-Gui.

The problem I'm encountering is a SQL error complaining about a syntax error near the keyword "null". By looking at the trace file produced and the source code, I've been able to reproduce the exact error message, and I'm 100% confident I know exactly where in the code the error is generated. So instead of just randomly trying different possible solutions, I can focus on the very small number of areas that could be causing this particular problem.

I've been using this tool for years, so I'm not sure what took me so long to write about it.

Monday, June 11, 2018

I definitely recommend installing Linux on Windows

If you've got Windows 10, hopefully you've been able to install a recent update that allows you to enable the Windows Subsystem for Linux feature and install a supported Linux distribution from the Microsoft Store. I finally bit the bullet and installed Ubuntu today, and it makes the life of a system administrator MUCH easier. I already had Cygwin installed, but this is just a slightly smoother integration, with many of the tools you need already installed (or available with the normal 'apt' or 'apt-get').

The Ubuntu distribution available in the MS Store even comes with vi with color highlighting for known file types (like html or js), and it's got telnet, ssh, sftp, etc. to make your life easy.

It's been available for a while, and I was hesitant to install it, but now I'm very happy I did.

Friday, June 1, 2018

Amazon Chime is a cheaper and more powerful alternative to WebEx

If you haven't looked at Amazon's different AWS offerings in a while, you definitely should take a look sometime. For example, I stumbled across their Chime web conference service:

https://aws.amazon.com/chime/

and I can report that it's just as reliable and easy as WebEx, but with more capabilities and at a fraction of the cost. Specifically, it's only a maximum of $15 per month per host, with 100 attendees allowed, plus you get a dial-in number (an 800 number is available, but there are additional per-minute charges associated with it).

We had an older WebEx account that was $50 per host per month, so I was very happy to run across this service and get a minimum of 70% savings. I say a minimum of 70% because some of our host accounts were used at most 2 days per month; since Chime bills per day of actual use (capped at the $15 monthly maximum), those accounts will now cost at most $6 per month.

High Availability for DB2 on AWS

If you're running an IBM product that requires a DB2 backend, you should really consider running DB2 on AWS. Here's a great article that provides you with the CloudFormation template to set it all up for you very quickly:

https://aws.amazon.com/blogs/database/creating-highly-available-ibm-db2-databases-in-aws/

If you're concerned about running your infrastructure in the cloud, please contact us so we can give you the information you need about the tight security and incredible flexibility that AWS provides.
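
To give a sense of how little effort is involved, here's roughly what launching that kind of stack looks like with boto3; the stack name, template URL and parameter names below are placeholders, so substitute the actual template and parameters from the linked article:

import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

# Kick off stack creation; CloudFormation provisions the resources asynchronously.
cfn.create_stack(
    StackName="db2-hadr",  # placeholder
    TemplateURL="https://s3.amazonaws.com/my-bucket/db2-hadr.template",  # placeholder
    Parameters=[
        {"ParameterKey": "KeyPairName", "ParameterValue": "my-keypair"},  # placeholder
    ],
    Capabilities=["CAPABILITY_IAM"],  # needed if the template creates IAM resources
)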

Update 8/8/2023: One important note: this still uses TSAM for failover. What this means is that if your primary DB2 instance has a problem, the entire VM hosting that instance is rebooted. So if you have multiple databases on that instance, they're all failing over to the backup. If you have any other processes running on that VM, they're all going away. You can definitely architect your application to work fine with this, but it is definitely something you have to keep in mind before simply implementing DB2 HADR.

Monday, February 26, 2018

Netcool and other IBM ITSM product upgrades due to Java 6 EOS

Problem

IBM has announced End of Support dates for quite a few products in 2018. In many cases, this stems from the impending end of support for Java 1.6. You can search for IBM products and their EOS dates here: https://www-01.ibm.com/software/support/lifecycle/

Solution

Gulfsoft Consulting can help you move to a supported release in a short time period, or we can get you upgraded to a product with more features (like moving from OMNIbus to NOI). We have helped hundreds of clients over the years upgrade and migrate in situations exactly like this. The typical time needed is a few weeks, not months. For more information contact:

frank.tate@gulfsoft.com 304 376 6183
mark.hudson@gulfsoft.com 816 517 7179

Details

Some of the products whose support ends in 2018 are:

Product                              Version              EOS Date
IBM Tivoli Monitoring                6.2.2                4/30/2018
IBM Control Desk                     7.5.x                9/30/2018
Tivoli Workload Scheduler            8.6.x                4/30/2018
Netcool Operations Insight           1.2, 1.3.x           12/31/2018
Network Mgmt                         9.2.x                12/31/2018
OMNIbus                              7.4.x                12/31/2018
Impact                               6.1.x                12/31/2018
IBM Tivoli Network Manager           3.9.x, 4.1.x         12/31/2018
Netcool Performance Manager          1.3.x                12/31/2018
Netcool Performance Flow Analyzer    4.1.x                12/31/2018
Network Configuration Manager        6.3.x, 6.4.0, 6.4.1  12/31/2018

IBM recommends upgrading to later versions of the products as soon as possible in order to maintain full support. After April 2018, support for Java™ 6 will be limited to usage and known problems with possible updates for critical security fixes through the end of 2018. After April 2018, WebSphere Application Server (WAS) 7 support will be limited to non-Java defects. Support for other components will continue as usual.

More information about Gulfsoft can be found here: https://www.gulfsoft.com/about

Friday, February 23, 2018

We've got a few open time slots for one-on-one meetings at #Think2018

The #Pink18 ITSM conference was a great success for us, and we're now looking forward to #Think2018. It's going to be a jam packed week, but we've still got a few time slots open to schedule sit down discussions with existing and potential clients and partners. Contact Frank (frank.tate@gulfsoft.com) or Mark (mark.hudson@gulfsoft.com) today to set up a 30-60 minute meeting!

Monday, February 19, 2018

#Pink18 is off to a Great Start

The #Pink18 ITSM conference kicked off last night with a reception, and it looks to be another great conference this year. Pink Elephant always has great thought leaders presenting at the sessions, and this year will continue that tradition.

If you're at the conference, please stop by our booth, #601, in the exhibitors showcase.

Friday, January 19, 2018

IBM Maximo named a Leader in Gartner Magic Quadrant for Enterprise Asset Management!

https://www.ibm.com/developerworks/community/blogs/a9ba1efe-b731-4317-9724-a181d6155e3a/entry/IBM_Maximo_named_a_Leader_in_Gartner_Magic_Quadrant_for_Enterprise_Asset_Management?lang=en

Something very important to note is that the IBM Control Desk product is built entirely on Maximo. So almost all of the features and capabilities that put Maximo into Gartner's Magic Quadrant for Enterprise Asset Management are also included in IBM Control Desk.

Tuesday, January 16, 2018

Gulf Breeze Software Partners is now Gulfsoft Consulting

Gulf Breeze Software Partners was started 15 years ago as a consulting firm that also had small aspirations to write software at some point. Along the way, we realized that we really prefer implementing and customizing software over writing new applications from scratch. And while we've advertised that we're specialists in the implementation and customization of the IBM suite of products up to this point, we're now marketing the fact that we have experience in and offer services on a much larger array of products from multiple vendors. To effectively market our capabilities to new customers, we decided to change our name from Gulf Breeze Software Partners to Gulfsoft Consulting. We've still got the same amazing people and the same drive to make customers successful, and now we've got a name that more accurately describes what we do. Here’s a link to some of the technologies we work with every day.


We look forward to continuing our relationships with existing customers and making new ones.