Thursday, March 16, 2023

Installing additional software on the Rancher docker container

If you read one of my previous posts to install Rancher in a single Docker container, you may have found that the container image doesn't include several commands like ping, netstat, ss, and even apt. And if you run 'uname -a', you might think the image you're in is Ubuntu, but it's not: inside a container, uname reports the host's kernel, not the container's distribution. The image is actually SUSE Linux (SUSE is the same company that maintains Rancher), and the package manager there is accessed via the command 'zypper'. So to install several of the tools you know and love, run the following inside the container:

zypper install net-tools iproute2 bind fping lsof

That's it. Now you have a few more tools for debugging.
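
If you'd like to confirm what the container is actually running, /etc/os-release tells the real story. A quick check from the host (the container ID is a placeholder; get yours from 'docker ps'):

# From the host: show the distribution info for the Rancher container
docker exec <container-id> cat /etc/os-release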

Wednesday, March 15, 2023

Installing Rancher in a Single Docker Container on Ubuntu 20.04

This is MUCH easier than my last couple of posts because it takes just one step after you configure your OS. Rancher is a cloud-native Kubernetes manager and container orchestration platform (it runs on its own K8s/K3s cluster). It is a competitor to Red Hat OpenShift and VMware Tanzu.

This solution is for a DEV/practice environment. 

I've uploaded the script to configure Ubuntu as a gist on GitHub. So all you need to do is start with a working install of Ubuntu 20.04 desktop (my test systems have been configured with 16 cores and 64GB RAM). Your user must have sudo access (you'll be prompted for the password as the script runs), and then you can run this script:


#!/bin/bash
#
# Full list of commands required to install minikube on Ubuntu 20.04
#
sudo groupadd docker
sudo usermod -aG docker $USER
group=docker
if [ "$(id -gn)" != "$group" ]; then
  # re-exec this script under the docker group so docker commands work without logging out and back in
  exec sg $group "$0 $*"
fi
sudo apt update
sudo apt install -y ca-certificates curl gnupg lsb-release net-tools
sudo apt-get update
sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
sudo systemctl status docker --no-pager
# verify docker works
docker run hello-world
sudo apt install -y curl wget apt-transport-https
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version -o yaml
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
# Start minikube: Change memory and cpus to whatever you need
minikube start --addons=ingress,default-storageclass,storage-provisioner --install-addons=true --kubernetes-version=stable --driver=docker --memory 49152 --cpus 16
# configure kubectl bash completion
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc
. ~/.bashrc
# start new shell in docker group
exec newgrp docker

Now run this command:

docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest
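
Give the container a minute or two to initialize. When the UI asks you for the bootstrap password, you retrieve it from the container logs; the command looks like this (substitute your container ID from 'docker ps'):

docker logs <container-id> 2>&1 | grep "Bootstrap Password:"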

Now open your browser to http://localhost and follow the directions. It will show you how to get the initial password, then prompt you to change it, and you're good to go. You have a local Rancher K3s cluster running in a docker container. From the UI you can probe your cluster configuration, install new applications, etc. One application of interest is:

Monitoring - This is similar to (though not identical to) the kube-prometheus-stack, with Prometheus, Grafana, and several Grafana dashboards configured.

To access the cluster from the CLI, you first need to get the container-id of your rancher container with:

docker ps

Then run:

docker exec -it container-id /bin/bash

At this point you have a root shell with access to the kubectl command.
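
If you only have one Rancher container running, you can combine those two steps into a single command. This is just a convenience sketch; it assumes exactly one container started from the rancher/rancher:latest image:

# Find the Rancher container and run kubectl in it directly
docker exec -it $(docker ps -q --filter ancestor=rancher/rancher:latest) kubectl get nodes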

Another application that will probably interest you is Elasticsearch. Be prepared for a LOT of failure if you try to install this one. I simply could not get it to install, and I could not determine why it failed. I couldn't find any useful logs describing where it was getting hung up. If you can figure it out, please let me know. I will keep on trying.

Update 3/16/2023: I was able to get Elasticsearch installed, and I can verify via curl to port 9200 that it's running, but that's it. I can't get any logs sent to it because the Logging app won't let me configure anything. And while I can install Kibana, I cannot figure out how to access the UI once it's installed. I've tried quite a few different things, but it's not working.

To get Elasticsearch installed, you need to perform some additional steps (a shell sketch of the first two follows the list):

1. Create a directory like /home/mypv inside the Rancher docker container.
2. Set the owner of that directory to the user "rancher".
3. Create a PersistentVolume in the Rancher UI as a HostPath that points to /home/mypv, with a size of 30Gi (to match the defaults for the Elasticsearch install).
4. In the Elasticsearch yaml, change the values of these two keys as listed here:

replicas: 1
minimumMasterNodes: 1
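
Steps 1 and 2 look roughly like this from the host (the container ID is a placeholder; I'm also assuming the "rancher" user exists in the image, which it did in mine):

# Open a shell in the Rancher container
docker exec -it <container-id> /bin/bash
# Inside the container: create the directory backing the PersistentVolume and hand it to the rancher user
mkdir -p /home/mypv
chown rancher /home/mypv
exit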

But, like I said, you won't be able to actually do anything with it at this point.

Tuesday, March 14, 2023

Installing the ELK stack and Fluent-Bit on Minikube on Ubuntu 20.04

Background

This should be easy, but it took me a couple of days to get it running successfully, which told me this post was needed. The problems are:

1. There are a LOT of out-of-date articles out there that are now just wrong (this one was written on 3/14/2023 and will itself be obsolete at some point; I apologize in advance if you're reading it after that point of obsolescence). It's not the fault of the authors; components in this space are simply changing very quickly. Even some of the latest HOWTO documentation in the different github repositories is wrong (invalid/deprecated flags used, etc.)

2. The various helm charts include some example yaml files (yay!) that don't work without modification (dammit!).

3. The Fluent Bit helm chart defaults simply do not work with a default Elasticsearch install. Specifically, Elasticsearch requires TLS connections with authentication (and there is no way to disable this), while the Fluent Bit chart is only set up for a plain-HTTP connection to Elasticsearch with NO authentication. A sketch of the kind of change required follows this list.
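
The fix amounts to pointing Fluent Bit's es output at HTTPS with credentials. This is a hedged sketch of the relevant [OUTPUT] settings only (the host and password here are the defaults used later in this post; the gist referenced in the script below contains my actual working values):

[OUTPUT]
    Name         es
    Match        *
    Host         elasticsearch-master
    Port         9200
    HTTP_User    elastic
    HTTP_Passwd  passw0rd
    tls          On
    tls.verify   Off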

So those are some of the reasons for this article.

This solution is for a DEV/practice environment. I can't possibly list all of the reasons why. Those reasons start with "it's on minikube" and include "the Elastic password is in plaintext", among many, many others.

Solution

I've uploaded the scripts as gists on GitHub. So all you need to do is start with a working install of Ubuntu 20.04 desktop (my test systems have been configured with 16 cores and 64GB RAM). Your user must have sudo access (you'll be prompted for the password as the scripts run), and then you can run these two scripts in order:



#!/bin/bash
#
# Full list of commands required to install minikube on Ubuntu 20.04
#
sudo groupadd docker
sudo usermod -aG docker $USER
group=docker
if [ "$(id -gn)" != "$group" ]; then
  # re-exec this script under the docker group so docker commands work without logging out and back in
  exec sg $group "$0 $*"
fi
sudo apt update
sudo apt install -y ca-certificates curl gnupg lsb-release net-tools
sudo apt-get update
sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
sudo systemctl status docker --no-pager
# verify docker works
docker run hello-world
sudo apt install -y curl wget apt-transport-https
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version -o yaml
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
# Start minikube: Change memory and cpus to whatever you need
minikube start --addons=ingress,default-storageclass,storage-provisioner --install-addons=true --kubernetes-version=stable --driver=docker --memory 49152 --cpus 16
# configure kubectl bash completion
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc
. ~/.bashrc
# start new shell in docker group
exec newgrp docker



#!/bin/bash
# install Elastic
# reference URL:
# https://www.bogotobogo.com/DevOps/Docker/Docker_Kubernetes_ElasticSearch_with_Helm_minikube.php
# add helm repo for elastic
helm repo add elastic https://helm.elastic.co
# an example values.yaml for use with minikube, but it didn't work exactly as written for me.
# curl -O https://raw.githubusercontent.com/elastic/helm-charts/master/elasticsearch/examples/minikube/values.yaml
# This is how I created the elasticvalues.yaml file:
# helm show values elastic/elasticsearch | tee -a elasticvalues.yaml
#
# I then edited it to increase the Java opts to 512m. The important setting here seems to be storageClassName: "standard".
# Yep. That's the trick. I saved the YAML file as elasticvalues.yaml.
# I also set the password to "passw0rd" to make life easier. Setting the password requires getting the "full" list of
# values with 'helm show ...' as above.
# enable these addons for minikube.
minikube addons enable default-storageclass
minikube addons enable storage-provisioner
helm install elasticsearch elastic/elasticsearch -f https://gist.githubusercontent.com/franktate/2faaa85e7dfd953ee0115b82d2e989af/raw
# keep checking the status of the elasticsearch pods. They take several minutes to become Ready.
echo Sleeping 5 minutes to wait for the install to complete
sleep 300 # wait 5 minutes
# Once they're Ready, run the following command. This is just needed to test
# the status of elasticsearch. It's not required for normal operations.
kubectl port-forward svc/elasticsearch-master 9200 &
# now install Kibana
helm install kibana elastic/kibana
echo Sleeping 5 minutes to wait for the install to complete
sleep 300 # wait 5 minutes for the install to complete.
# provide access to the Kibana UI
kubectl port-forward deployment/kibana-kibana 5601 &
# Kibana URL: http://localhost:5601
# user: elastic
# get pass with:
# kubectl get secrets --namespace=default elasticsearch-master-credentials -ojsonpath='{.data.password}' | base64 -d
# It's actually hard-coded to "passw0rd" in elasticvalues.yaml. Download the file and change it if needed.
# install metricbeat.
helm install metricbeat elastic/metricbeat
# You can verify metricbeat is working by browsing to https://localhost:9200/_cat/indices?v&pretty (log in as the
# "elastic" user) and checking that at least one index name begins with ".ds-metricbeat"
echo Sleeping 2 minutes to wait for the install to complete
sleep 120 # wait 2 minutes for the install to complete.
# install logstash
# Specifying this values.yaml file to use the OSS image:
# https://github.com/elastic/helm-charts/blob/main/logstash/examples/oss/values.yaml
# The default install looks for a license and other things and causes problems. This one does not.
helm install logstash elastic/logstash -f https://raw.githubusercontent.com/elastic/helm-charts/main/logstash/examples/oss/values.yaml
echo Sleeping 5 minutes to wait for the install to complete
sleep 300 # sleep 5 minutes waiting for the install to really complete. May not take this long.
# We need filebeat installed and feeding logstash. The OSS example is already configured, so use it.
helm install filebeat elastic/filebeat -f https://raw.githubusercontent.com/elastic/helm-charts/main/filebeat/examples/oss/values.yaml
echo Sleeping 30 seconds to wait for the install to complete
sleep 30 # sleep 30 seconds to wait for the install to really finish
# To verify that it worked, run:
# curl --insecure -u elastic:passw0rd "https://localhost:9200/_cat/indices?v&pretty"
# and make sure there's at least one index shown whose name begins with ".ds-filebeat-oss"
# Now that the OSS versions of logstash and filebeat are in place, let's install Fluent Bit
# Install fluent bit
helm repo add fluent https://fluent.github.io/helm-charts
# the fluentbitvalues.yaml file used here was first downloaded with
# curl https://raw.githubusercontent.com/fluent/helm-charts/main/charts/fluent-bit/values.yaml | tee -a fluentbitvalues.yaml
# and then modified. The modifications were just to the two "es" [OUTPUT] stanzas
helm install fluent-bit fluent/fluent-bit -f https://gist.github.com/franktate/0873e0a38234ca8ca57350b6c08a2ef8/raw
# To verify that it worked, run:
# curl --insecure -u elastic:passw0rd "https://localhost:9200/_cat/indices?v&pretty"
# You should see a new index whose name begins with "logstash" (really. It seems odd, and it's configurable, but that's the default).
# That's it! You should be good to go.
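
One refinement you might consider: instead of sleeping for a fixed interval after each helm install, you can wait on pod readiness. This is a sketch, assuming the standard app=elasticsearch-master label that the Elastic chart applies for the release name used above:

# Block until the Elasticsearch pods report Ready (up to 10 minutes) rather than blindly sleeping
kubectl wait --for=condition=Ready pod -l app=elasticsearch-master --timeout=600s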

Monday, March 13, 2023

Installing Minikube and Prometheus on Ubuntu 20.04 as of 3/11/2023

Background

If you think it's strange that I've included a specific date in the title of this post, then you haven't tried to perform this kind of installation at two points in time some number of months apart. See, EVERYTHING in this space is changing rapidly. The latest and greatest way to install Prometheus in Kubernetes (whether it's actual K8s or minikube or anything else) is to install kube-prometheus-stack (https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack) via a helm chart. But the specific details can change at any time. None of the many links I found gave me a working installation without modifying the commands at least a little. So I'm hoping this post is useful to at least one person before one or more changes make it obsolete.

Solution

Here's the script that will get everything installed. You can Google any of the commands to see why they're in here if you're curious. But if you just need a stinkin' cluster with Prometheus installed, the exact script to do it is below.

#!/bin/bash
#
# Full list of commands required to install minikube and kube-prometheus-stack (Prometheus Operator, Grafana, dashboards, etc.)
# on Ubuntu 20.04 valid on 3/11/2023. Since kube-prometheus-stack is updated regularly and without warning, there is no guarantee that this will
# work without modification at any future point in time.
#
sudo usermod -aG docker $USER
group=docker
if [ "$(id -gn)" != "$group" ]; then
  # re-exec this script under the docker group so docker commands work without logging out and back in
  exec sg $group "$0 $*"
fi
sudo apt update
sudo apt install -y ca-certificates curl gnupg lsb-release net-tools
sudo apt-get update
sudo mkdir -m 0755 -p /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
echo \
"deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt update
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin -y
sudo systemctl status docker --no-pager
docker run hello-world
sudo apt install -y curl wget apt-transport-https
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version -o yaml
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
# Start minikube: Change memory and cpus to whatever you need
minikube start --addons=ingress --install-addons=true --kubernetes-version=stable --driver=docker --memory 49152 --cpus 16
# Install kube-prometheus-stack:
helm install prometheus prometheus-community/kube-prometheus-stack --namespace=prometheus --create-namespace --wait
# Access UIs:
# Prometheus
kubectl --namespace prometheus port-forward svc/prometheus-operated 9090 &
# Then access via http://localhost:9090
# Grafana
kubectl port-forward --namespace prometheus svc/prometheus-grafana 8080:80 &
# Then access via http://localhost:8080 and use the default grafana user:password of admin:prom-operator.
# Alert Manager
kubectl --namespace prometheus port-forward svc/prometheus-kube-prometheus-alertmanager 9093 &
#Then access via http://localhost:9093
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl > /dev/null
echo 'alias k=kubectl' >>~/.bashrc
echo 'complete -o default -F __start_kubectl k' >>~/.bashrc
. ~/.bashrc
# start new shell in docker group
exec newgrp docker
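
Once the script finishes (helm was invoked with --wait, so the chart should be fully deployed), it's worth a quick check that everything is Running before you rely on the port-forwards:

# Confirm all of the kube-prometheus-stack pods came up
kubectl --namespace prometheus get pods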

Monday, February 13, 2023

Recent versions of the Netcool Message Bus Probe support Kafka

We are working with a client who needed to send events from their cloud-native application to their legacy on-prem Netcool Operations Insight implementation. After researching a bit, we found that their application was already writing the events of interest to a Kafka topic. The only issue was that they had an old version of the Message Bus Probe. So we installed version 21 of the probe and used the included Nokia NFMP files as a starting point to configure the probe to pull the events from this topic so that they could be processed by Netcool.

Reach out to us if you're using Netcool/Watson AIOps and need some help working through some obstacles.

Friday, February 10, 2023

The Fluent Bit rewrite_tag filter doesn't fully work until version 1.8.12

I'm working with a client who has a packaged Kubernetes distribution that includes Fluent Bit 1.8.3. I tried the config from my last blog post on their system, and it just does NOT work as expected: it creates a new message with the new tag, but then none of the subsequent filters are applied to it. I had been working in the latest version (2.0.9), where everything worked like a champ. So I downloaded 1.8.3 and found that the same configuration didn't work there either. It seemed to only partially invoke the rewrite_tag filter (if I set KEEP to false, it would delete the message, but if I set KEEP to true, it did nothing). The test configuration the docs suggest, using an input of type dummy, actually works exactly as expected; the problem only appears with an input of type tail. And there is no workaround other than upgrading to a newer version: I downloaded and tested every version from 1.8.4 through 1.8.12, and 1.8.12 is the first one that worked correctly. So my client is now working on upgrading to a newer version.
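
If you want to reproduce the test yourself, the minimal pipeline the docs suggest looks something like this (a sketch; the dummy message, tag names, and rule are mine, mirroring the config from my last post):

[INPUT]
    Name   dummy
    Tag    test.log
    Dummy  {"log": "Error writing to disk"}

[FILTER]
    Name   rewrite_tag
    Match  test.*
    Rule   $log ^.*Error\swriting\sto.* INC true

[OUTPUT]
    Name   stdout
    Match  *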

Wednesday, February 8, 2023

Configuring Fluent Bit to send messages to the Netcool Message Bus probe

Background

Fluent Bit is an open source, multi-platform log processor that aims to be a generic Swiss Army knife for log processing and distribution.

It is included with several distributions of Kubernetes, and is used to pull log messages from multiple sources, modify them as needed, and send the records to one or more output destinations. It is amazingly customizable, so you can do just about any processing you want, with a couple of idiosyncrasies, one of which I'll describe here.

The Challenge

What if you have a log message that you want to handle in two different ways:

1. Normalize the fields in the log message for storage in Elasticsearch (or Splunk, etc.).

2. Modify the log message so it has all of the appropriate fields needed for processing by your Netcool environment (fields that you don't necessarily want in your log storage system).

The Solution

Given Fluent Bit's unique restrictions, what you need to do is create a new copy of the log message, preserving the original so that it can go through your "standard" processing while the new message is processed according to your Netcool needs.

The specifics of this solution are to use a rewrite_tag FILTER to create a new, distinct copy of the message with a custom tag within the Fluent Bit pipeline, and then configure the appropriate additional FILTERs and OUTPUTs that only Match this new, custom tag. You also need to modify any existing OUTPUTs to exclude this new tag.

Here's a high-level graphic showing what we're going to do:



Our rewrite_tag FILTER is going to match all tags beginning with "kub". This will exclude our new tag, which will be "INC". So after the rewrite_tag filter, there will be two messages in the pipeline: the original, plus our new one with the custom "INC" tag. We can then specify the appropriate Match statements in later FILTERs to only match the appropriate tag. So in the ES output above, the Match_Regex statement is:

Match_Regex  ^(?!INC).*

The official name of the above is a negative lookahead (sometimes called a "lookahead exclude"). Go ahead and try it out at regex101.com if you want. It will match any tag that does NOT begin with "INC", which is the custom tag for the new messages that we want to send to our HTTP Message Bus probe.
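
In practice, that means your existing Elasticsearch OUTPUT uses Match_Regex in place of a plain Match statement. A hedged sketch (your real es OUTPUT will also carry its own host, port, and credential settings):

[OUTPUT]
    Name         es
    Match_Regex  ^(?!INC).*
    Host         elasticsearch-master
    Port         9200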

The rewrite_tag FILTER will be custom for your environment, but the following may be close in many cases. In my case, I want to match any message that has a log field containing the string "Error writing to". You'll have to analyze your current messages to find the appropriate field and string that you're interested in. But here's my rewrite_tag FILTER stanza:

[FILTER]
    Name rewrite_tag
    Match_Regex ^(?!INC).*
    Rule    $log  ^.*Error\swriting\sto.* INC true

The "Rule" statement is the tricky part here. This statement consists of 4 parts, separated by whitespace:

Rule - the literal string "Rule"
$log - the name of the field you want to search to create a new message, preceded by "$". In this case, we want to search the field named log.
^.*Error\swriting\sto.* - the regular expression we want to match in the specified field. This regular expression CANNOT CONTAIN SPACES. That's why I'm using "\s".
INC - this is the name of the tag to set on the new message. This tag is ONLY used within the Fluent Bit pipeline, so it can literally be anything you want. I chose "INC" because these messages will be sent to the Message Bus proble to eventually create incidents in ServiceNow.
true - this specifies that we want the KEEP the original message. This allows it to continue to be processed as needed.

After you have the rewrite_tag FILTER in place, you will have at least one additional FILTER of type "modify" in your pipeline to allow you to add fields, rename fields, etc. You'll then have an OUTPUT stanza of type "http" to specify the location of the Message Bus probe. Something like the following:

[OUTPUT]
    Name http
    port 80
    Match INC
    host probehost
    uri /probe/webhook/fluentbit
    format json
    json_date_format epoch

The above specifies that the URL that these messages will be sent to is 

http://probehost:80/probe/webhook/fluentbit

In the JSON that's sent in the body of the POST request, there will be a field named date, and it will be in Unix "epoch" format: an integer representing the number of seconds since the Unix epoch began on January 1, 1970 UTC (a "normal" Unix/Linux timestamp).
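
So a record arriving at the probe might look roughly like this (illustrative only; the exact fields depend on the FILTERs in your pipeline):

{"date": 1675900000, "log": "...Error writing to..."}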

That's it. That's all of the basic configuration needed on the Fluent Bit side.

Extra Credit/TLS Config

If your Message Bus probe is using TLS, you just need to add the following two lines to the above OUTPUT stanza:

    tls On
    tls.verify Off

The first line enables TLS encryption, and the second line is a shortcut that allows the connection to succeed without having to add the appropriate certificates to Fluent Bit - it will accept any certificate presented to it by the Message Bus probe, even a self-signed certificate.