Monday, December 13, 2021

Quickest log4j2 vulnerability remediation I've found on Linux

Quickest Linux fix I've found for the #log4j2 vulnerability:


find / -name "log4j-core-*.jar" -exec zip -q -d {} org/apache/logging/log4j/core/lookup/JndiLookup.class \;
reboot


The above command finds every file named "log4j-core-*.jar" on the system and removes the "JndiLookup.class" file from each of them, which disables the JNDI lookup that the exploit relies on. The 'reboot' is a fairly large hammer, but it guarantees that every running process picks up the modified jar. Alternatively, you can stop and restart just the Java processes running on the server, as shown below.
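If you'd rather not reboot, here's a minimal sketch for finding the processes that still have a vulnerable jar open, assuming lsof is installed (the grep pattern may need tweaking for your jar names):

# List the command name and PID of every process holding a log4j-core jar open
lsof 2>/dev/null | grep 'log4j-core-.*\.jar' | awk '{print $1, $2}' | sort -u

Restart each process (or its service) listed so that it reloads the patched jar.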

Tuesday, October 26, 2021

Converting timestamp in milliseconds to seconds in Netcool probe rules

Background

The Netcool OMNIbus ObjectServer expects timestamp values to be the number of seconds since epoch (00:00:00 UTC on 1 January 1970), which is currently a 10-digit number. However, some systems generate a timestamp that is the number of milliseconds since epoch (a 13-digit number). Stored as-is, that 13-digit value is interpreted as seconds, putting the event date thousands of years in the future in the ObjectServer. One such event source is Nokia NSP, which integrates with Netcool via the Probe for Message Bus.

Conversion Process

My process of converting the 13-digit timestamp to the correct 10-digit one is straightforward:

# convert timestamp to string by concatenating it with a string
$millString = $timestampInMilliseconds + ""

# take the first 10 characters
$secondsString = substr($millString,1,10)

# convert back to an integer
$validTimestamp = int($secondsString)

# store in FirstOccurrence of Event
@FirstOccurrence = $validTimestamp
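To sanity-check a converted value, you can feed it to the date command on any Linux box. This assumes GNU date, and the epoch value below is just an illustration:

# Print the UTC date for an example 10-digit epoch value
date -u -d @1635264000
# Tue Oct 26 16:00:00 UTC 2021 (exact output format varies by locale)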

That's it.

Wednesday, September 22, 2021

Using VSCode to write Netcool Probe Rules and Impact Policies

VSCode is Microsoft's free, cross-platform IDE for software development. It has boomed in popularity recently because it is an amazing tool with a huge catalog of plugins providing all kinds of different functionality. The ones I want to introduce to you today provide syntax highlighting and syntax validation for Impact Policy Language (IPL) and Netcool Probe Rules Language.

Here's an example from the Probe Rules extension:


Compared to the vi editor or Notepad++, this is a HUGE improvement.
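Both extensions can be installed from the Extensions view inside VSCode, or from the command line with code --install-extension. The extension IDs below are placeholders; search the Marketplace for the exact publisher and extension names:

code --install-extension <publisher>.netcool-probe-rules
code --install-extension <publisher>.impact-policy-language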

Wednesday, May 5, 2021

ServiceNow Quebec Release Netcool Connector V2 Implemented in JavaScript

Background

Prior to the Quebec release, the Netcool Connector was only available as a Groovy script. In Quebec, ServiceNow offers BOTH the legacy Groovy connector and a new JavaScript-based connector, named IBM Netcool V2. The new connector leverages the OMNIbus REST API for retrieving and updating events, whereas the legacy Groovy script connects directly to the ObjectServer database to perform these operations.
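For context, this is the style of OMNIbus REST call the V2 connector makes. A hedged sketch: the hostname, port, credentials, and filter are placeholders, and the REST interface must be enabled in the ObjectServer's properties first:

# Retrieve rows from alerts.status via the OMNIbus REST API
curl -u netcool:password \
  'http://omnihost:8080/objectserver/restapi/alerts/status?filter=Severity%3E3'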

Monday, March 22, 2021

vCenter Appliance "tiny" Size Is Not Enough for Creating OpenShift Cluster

I just tried to create an OpenShift 4.7 cluster using a vCenter appliance that was deployed with the "tiny" size from the installer, which gives it 2 vCPUs and 10GB RAM. I was using Installer-Provisioned Infrastructure (IPI) on vSphere 6.7. The cluster creation failed with a timeout. I looked at the vCenter server performance stats and saw that it was using all of its CPU and memory, so I destroyed the cluster and doubled the resources on the vCenter VM. I then ran the cluster creation again, and everything completed as expected.
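If you want to check the appliance's allocation before kicking off an install, govc can report it. A sketch, assuming govc is configured via the GOVC_URL/GOVC_USERNAME/GOVC_PASSWORD environment variables; the VM name is a placeholder:

# Show CPU/memory allocation and guest state for the vCenter appliance VM
govc vm.info vcenter-appliance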

Wednesday, March 17, 2021

Overprovisioning vCPUs in ESXi as a VMWare guest

Background

I have a large server (96 vCPUs and 1TB RAM) for working on cloud projects. A limitation, however, is that VMWare Workstation Pro 16 can allocate a max of 32 vCPUs and 128GB RAM to any one guest VM. Normally this isn't a problem, but when you're dealing with an OpenShift and Watson AIOps on-prem install, that's not enough. Specifically, Watson AIOps states that it requires a minimum of 36 cores across all of the VMs required.

Solution

It turns out that the 32 vCPU limit isn't really a problem in this case. VMWare products allow you to overprovision the host resources to guests. And ESXi doesn't have any limitations on the number of vCPUs or amount of memory you can assign to one of its guests. This means that I can run ESXi as a guest under VMWare Workstation, and allocate more than the 32 vCPUs to its guest VMs. Here's a picture that I think clears up the situation:


As you can see, my ESXi guest (32 vCPUs) has three guest VMs that are using a total of 42 vCPUs, and they're all running fine. If all of the vCPUs get busy, performance will degrade, but I don't expect that to ever happen in my lab.
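You can verify what the nested ESXi host sees and runs from its shell. A sketch, assuming SSH access is enabled on the ESXi guest:

# Logical CPUs the ESXi guest believes it has
esxcli hardware cpu global get

# VMs registered on this ESXi host
vim-cmd vmsvc/getallvms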

I've seen discussions where people overprovision vCPUs in a ratio of up to 4:1, meaning I could possibly allocate a total of 128 vCPUs to the VMs in my 32-vCPU ESXi guest, as long as they aren't too busy.

Tuesday, March 16, 2021

Troubleshooting Red Hat CodeReady Containers

Background

I've been working with Red Hat CodeReady Containers (CRC) recently, and found that I've had to look all over the place to find even the most basic troubleshooting information. For example, here is the link to the Troubleshooting chapter of the official documentation. Go read it. If you don't think you have time, you're wrong. It will take you about 30 seconds. I'm writing this post to provide a little information that I've found to be useful. It's certainly not everything you need, but it's enough to get you pretty far, and it is infinitely more information than in the link above.

Environment

Here is a diagram that shows my configuration for CRC:


Other than the memory and CPU specs, this is a pretty common configuration for CRC. 

When troubleshooting CRC, your Physical Host Machine and your virtualization software (VMWare Workstation 16 Pro in my case) don't really come into the picture too much. They generally do their job and are transparent to what you're doing, so I'm not touching on those. The systems you're actually going to look at are your Guest VM, the crc VM, and the crc pods.

Guest VM

What I'm calling the Guest VM is the system on which you've downloaded and plan to run CRC. So in your case, this could actually be your laptop. But Guest VM is what I'm calling it. This is where most of your troubleshooting will be done if you're having problems getting the crc VM to start, which is what I have encountered most often. I am using libvirt, KVM, and qemu, which is the default/normal configuration on Linux. Information on how these three components work together can be found at this link.

crc VM log file

The most important file to know about is the crc VM log file created by qemu. That file is:

/var/log/libvirt/qemu/crc.log

This is the console output of the crc VM, so it will show you exactly what's happening in the VM as it is booting up.
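To watch the boot in real time, tail the file while crc start is running:

# Follow the crc VM's console output as it boots
tail -f /var/log/libvirt/qemu/crc.log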

virsh command

The virsh command is included in the libvirt package. This command allows you to interact with libvirt/KVM/qemu VMs on your system, like the crc VM. The important thing to note is that you need to run this command as root. As root, you can run the following command to see a list of all VMs running:

virsh list --all


virsh also provides an interactive shell (run virsh with no arguments) with many additional subcommands.
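A few other subcommands I've found handy with the crc domain (all run as root; domifaddr only reports an address if libvirt can see the DHCP lease):

virsh dominfo crc      # state, vCPU, and memory allocation
virsh domifaddr crc    # IP address assigned to the crc VM
virsh console crc      # attach to the serial console (exit with Ctrl+])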

virt-manager

I actually found this tool before I found the above log file. Now that I know the log file's location, virt-manager isn't as useful to me, but I wanted to include it anyway.

Install virt-manager on your Guest VM with yum. You can then launch it with the command virt-manager, which will bring up the application window:


You can then click on the crc VM to see its console. You can't actually log in from the console, though, because password login is disabled; the only way in is via the core user's private key over SSH (shown below). Googling around, I see that password access has been requested/suggested, but there appears to be no plan to implement it at this time.

crc VM

crc pods log files

If the crc VM is up and running, you can ssh into it with this command:

ssh -i ~/.crc/machines/crc/id_ecdsa core@api.crc.testing


Once inside the crc VM, you can cd to /var/log/pods and you will see one log file for each pod created.
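Each pod gets a directory named <namespace>_<pod-name>_<uid>, with one subdirectory per container holding the numbered log files. For example (the names below are placeholders and will differ on your cluster):

cd /var/log/pods
ls
tail -f <namespace>_<pod-name>_<uid>/<container-name>/0.log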

That's all for now

As I said in the beginning, I've only included a few tools, but this is more than what's in the product documentation. CRC and OpenShift are complex frameworks built on many components, like libvirt and Kubernetes, so it's understandable why they're hard to troubleshoot. However, I personally believe that the development team could surface more logging information in the web console itself, so that users/operators have access to the data without having to separately open a terminal window.