
Wednesday, March 29, 2023

Modify kibana.yml after deploying Kibana with Helm

If you deploy Kibana using the Elastic helm chart with default values, you'll find that there's no obvious way to modify the kibana.yml file. For example, if you log into the Kibana pod with

kubectl exec --stdin --tty kibana_podname -- /bin/bash

you'll find that there's no editor available (like vi or even ed). You can cat config/kibana.yml, but the comments state that it is auto-generated. So what are you supposed to do to add a setting to the file? For example, you might need to add a value for xpack.encryptedSavedObjects.encryptionKey so you can configure alerting.
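As an aside, that encryption key needs to be at least 32 characters long. If you need to generate one, something like this will do (just one option among many):

openssl rand -hex 16

That produces 32 hex characters, which satisfies the minimum length.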

The solution I came up with is a multi-step process:

1. Get the default values.yaml file for the chart and store that in a file with the command:

helm show values elastic/kibana > /tmp/kibana.yaml

2. Edit that file to add a section for kibana.yml under kibanaConfig. Originally, kibanaConfig is empty (set to {}). You need to change it to be something like:

   kibanaConfig:
     kibana.yml: |
       xpack.encryptedSavedObjects.encryptionKey: xxxxxxxxxxxxxxxxxxxx

3. Now (unintuitively, at least to me) uninstall the helm chart with:

helm uninstall kibana

4. Then install the helm chart again with:

helm install kibana elastic/kibana -f /tmp/kibana.yaml

And that's it. Your changes will be applied and you're good to go.

I'm pretty sure there's a way to create a configMap and reference it, which would then allow you to just delete the pod to have it re-read the configMap, but I haven't figured out those exact details. Maybe in another post.
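For what it's worth, the raw commands for that approach would look something like the following untested sketch. The ConfigMap name is my own invention, and you'd still need to wire the ConfigMap into the deployment (likely via the chart's extraVolumes/extraVolumeMounts values, though I haven't verified that):

kubectl create configmap kibana-config --from-file=kibana.yml
kubectl delete pod kibana_podname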

Thursday, March 16, 2023

Installing additional software on the Rancher docker container

If you followed one of my previous posts to install Rancher on a single docker container, you may have found that the image doesn't include several commands like ping, netstat, ss, and even apt. And if you run 'uname -a', you might think that the image you're in is Ubuntu, but it's not. It's SUSE Linux (SUSE being the same people who maintain Rancher), and the package manager there is accessed via the command 'zypper'. So to install several of the tools you know and love, run the following:

zypper install net-tools iproute2 bind fping lsof

That's it. Now you have a few more tools for debugging.

Wednesday, March 15, 2023

Installing Rancher in a Single Docker Container on Ubuntu 20.04

This is MUCH easier than my last couple of posts because this just takes one step after you configure your OS. Rancher is a cloud native K8s manager and container orchestration platform (it runs on its own K8s/K3s cluster). It is a competitor to Red Hat OpenShift and VMware Tanzu.

This solution is for a DEV/practice environment. 

I've uploaded the script to configure Ubuntu as a gist to Github. So all you need to do is start with a working install of Ubuntu 20.04 desktop (my test systems have been configured with 16 cores and 64GB RAM). Your user must have sudo access (you'll be prompted for the password as the script runs) and you can run this script:

Now run this command:

docker run -d --restart=unless-stopped \
  -p 80:80 -p 443:443 \
  --privileged \
  rancher/rancher:latest

Now open your browser to http://localhost and follow the directions. It will instruct you how to get the password, then prompt you to change the password, and you're good to go. You have a local Rancher K3s cluster running in a docker container. From the UI you can probe your cluster configuration, install new applications, etc.
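Retrieving that initial bootstrap password boils down to grepping the container logs for it, something like this (the grep string below is what the UI directed me to use; double-check it against what your version displays):

docker logs container-id 2>&1 | grep "Bootstrap Password:"

One application of interest is: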

Monitoring - This is similar to (though not identical to) the kube-prometheus-stack, with Prometheus, Grafana, and several Grafana dashboards configured.

To access the cluster from the CLI, you first need to get the container-id of your rancher container with:

docker ps

Then run:

docker exec -it container-id /bin/bash

At this point you have a root shell with access to the kubectl command.
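If you'd rather skip the copy/paste of the container-id, the two commands can be combined into one (this assumes the Rancher container is the only one running from the rancher/rancher image):

docker exec -it $(docker ps -q --filter ancestor=rancher/rancher) /bin/bash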

Another application that will probably interest you is Elasticsearch. Be prepared for a LOT of failure if you try to install this one. I simply could not get it to install, and I could not determine why it failed. I couldn't find any useful logs describing where it was getting hung up. If you can figure it out, please let me know. I will keep on trying.

Update 3/16/2023: I was able to get Elasticsearch installed, and I can verify via curl to port 9200 that it's running, but that's it. I can't get any logs sent to it because the Logging app won't let me configure anything. And while I can install Kibana, I cannot figure out how to access the UI once it's installed. I've tried quite a few different things, but it's not working.

To get Elasticsearch installed, you need to perform some additional steps:

1. Create a directory like /home/mypv inside the Rancher docker container.
2. Set the owner of that directory to the user "rancher".
3. Create a PersistentVolume in the Rancher UI as a HostPath that points to /home/mypv with a size of 30Gi, to match the defaults for the Elasticsearch install (see the sketch after this list).
4. In the Elasticsearch yaml, change the values of these two keys as listed here:

replicas: 1
minimumMasterNodes: 1
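For the first three steps, here's roughly what I mean (a sketch; the commands run inside the Rancher container, and the PersistentVolume yaml is the hand-written equivalent of what you'd build in the UI, with a name I made up):

mkdir /home/mypv
chown rancher /home/mypv

apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-pv
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /home/mypv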

But, like I said, you won't be able to actually do anything with it at this point.

Tuesday, March 14, 2023

Installing the ELK stack and Fluent-Bit on Minikube on Ubuntu 20.04


This should be easy, but it took me a couple of days to get it running successfully, which told me that I needed to write this post. The problems are:

1. There are a LOT of out-of-date articles out there that are now just wrong (this one was written on 3/14/2023 and will be obsolete at some point; I apologize in advance if you are reading this after that point of obsolescence). It's not the fault of the authors. Components in this space are simply changing very quickly. Even some of the latest HOWTO documentation in the different github repositories is wrong (invalid/deprecated flags used, etc.)

2. The various helm charts include some example yaml files (yay!) that don't work without modification (dammit!).

3. The Fluent Bit helm chart defaults simply do not work with a default Elasticsearch install. Specifically, Elasticsearch requires TLS connections with authentication (and there is no way to disable this), while the Fluent Bit chart is only set up for an HTTP connection to Elasticsearch with NO authentication.
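To give you an idea of the mismatch, the Fluent Bit OUTPUT ultimately needs to look something like this (a sketch; elasticsearch-master is the default service name created by the Elastic chart, and you'd substitute the real elastic password for the placeholder):

    Name        es
    Match       *
    Host        elasticsearch-master
    Port        9200
    HTTP_User   elastic
    HTTP_Passwd changeme
    tls         On
    tls.verify  Off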

So those are some of the reasons for this article.

This solution is for a DEV/practice environment. I can't possibly list all of the reasons why. Those reasons start with "it's on minikube" and include "the Elastic password is in plaintext", among many, many others.


I've uploaded the scripts as gists to Github. So all you need to do is start with a working install of Ubuntu 20.04 desktop (my test systems have been configured with 16 cores and 64GB RAM). Your user must have sudo access (you'll be prompted for the password as the scripts run) and you can run these two scripts in order:

Monday, March 13, 2023

Installing Minikube and Prometheus on Ubuntu 20.04 as of 3/11/2023


You might think it's strange that I've included a specific date in the title of this post. If you do, it means you haven't tried to perform this kind of installation at two points in time some number of months apart. See, EVERYTHING in this space is changing rapidly. The latest and greatest way to install Prometheus in Kubernetes (whether it's actual K8s or minikube or anything else) is to install kube-prometheus-stack via a helm chart. But the specific details can change at any time. None of the many links I found gave me a working installation without modifying the commands at least a little. So I'm hoping this post is useful to at least one person before one or more changes make it obsolete.


Here's the script that will get everything installed. You can Google any of the commands you want to see why they're in here if you're curious. But if you just need a stinkin' cluster with Prometheus installed, the exact script to do it is below.
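For flavor, the core of the installation is the standard chart install commands (a sketch; the release and namespace names here are my own choices):

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

Some additional links I used to get to this point: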

Friday, February 10, 2023

The Fluent Bit rewrite_tag filter doesn't fully work until version 1.8.12

I'm working with a client who has a packaged Kubernetes distribution installed that includes Fluent Bit 1.8.3. I tried the config from my last blog post on their system, and it just does NOT work as expected. On their system, it creates a new message with the new tag, but then none of the subsequent filters are applied. I had been working in the latest version (2.0.9), where everything worked like a champ. So I downloaded 1.8.3 and found that the same configuration didn't work there either. It seemed to only partially apply the rewrite_tag filter (if I set KEEP to false, it would delete the message, but if I set KEEP to true, it did nothing). The test configuration they suggest, using an input of type dummy, actually works exactly as expected. The problem seems to be when you have an INPUT of type tail. And there is no workaround other than upgrading to a newer version. I actually downloaded and tested 1.8.4 through 1.8.12 before it worked correctly. So my client is now working on upgrading to a newer version.
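If you need to check a version yourself, a minimal configuration along these lines will demonstrate the issue (a sketch; the path, tags, and rule are placeholders):

[INPUT]
    Name   tail
    Path   /var/log/test.log
    Tag    kube.test

[FILTER]
    Name        rewrite_tag
    Match_Regex ^(?!INC).*
    Rule        $log ^.*ERROR.* INC true

[OUTPUT]
    Name   stdout
    Match  *

On a working version, appending a line containing ERROR to the file produces two records on stdout, one with each tag. On the broken versions, per the behavior above, the second record either never shows up (KEEP true) or the original is deleted (KEEP false).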

Wednesday, February 8, 2023

Configuring Fluent Bit to send messages to the Netcool Message Bus probe


Fluent Bit is an open source, multi-platform log processor that aims to be a generic Swiss Army knife for log processing and distribution.

It is included with several distributions of Kubernetes, and is used to pull log messages from multiple sources, modify them as needed, and send the records to one or more output destinations. It is amazingly customizable, so you can do just about any processing you want, with a couple of idiosyncrasies, one of which I'll describe here.

The Challenge

What if you have a log message that you want to handle in two different ways:

1. Normalize the fields in the log message for storage in Elasticsearch (or Splunk, etc.).

2. Modify the log message so it has all of the appropriate fields needed for processing by your Netcool environment (fields that you don't necessarily want in your log storage system).

The Solution

Based on all of the unique restrictions in Fluent Bit, what you need to do is create a new copy of the log message, preserving the original so that the original can go through your "standard" processing, and the new message can be processed according to your needs in Netcool.

The specifics of this solution are to use a rewrite_tag FILTER to create a new, distinct copy of the message with a custom tag within the Fluent Bit pipeline, and then configure the appropriate additional FILTERs and OUTPUTs that only Match this new, custom tag. You also need to modify any existing OUTPUTs to exclude this new tag.

Here's a high-level graphic showing what we're going to do:

Our rewrite_tag FILTER is going to match all tags beginning with "kub". This will exclude our new tag, which will be "INC". So after the rewrite_tag filter, there will be two messages in the pipeline: the original plus our new one with our custom "INC" tag. We can then specify the appropriate Match statements in later FILTERs to only match the appropriate tag. So in the ES output above, the Match_Regex statement is:

Match_Regex  ^(?!INC).*

The official name of the above is a "negative lookahead". Go ahead and try it out if you want. It will match any tag that does NOT begin with "INC", which is the custom tag for our new messages that we want to send to our HTTP Message Bus probe.

The rewrite_tag FILTER will be custom for your environment, but the following may be close in many cases. For my case, I want to match any message that has a log field containing the string "Error writing to". You'll have to analyze your current messages to find the appropriate field and string that you're interested in. But here's my rewrite_tag FILTER stanza:

    Name rewrite_tag
    Match_Regex ^(?!INC).*
    Rule    $log  ^.*Error\swriting\sto.* INC true

The "Rule" statement is the tricky part here. This statement consists of 4 parts, separated by whitespace:

Rule - the literal string "Rule"
$log - the name of the field you want to search to create a new message, preceded by "$". In this case, we want to search the field named log.
^.*Error\swriting\sto.* - the regular expression we want to match in the specified field. This regular expression CANNOT CONTAIN SPACES. That's why I'm using "\s".
INC - this is the name of the tag to set on the new message. This tag is ONLY used within the Fluent Bit pipeline, so it can literally be anything you want. I chose "INC" because these messages will be sent to the Message Bus probe to eventually create incidents in ServiceNow.
true - this specifies that we want to KEEP the original message. This allows it to continue to be processed as needed.

After you have the rewrite_tag FILTER in place, you will have at least one additional FILTER of type "modify" in your pipeline to allow you to add fields, rename fields, etc. A sketch of what that might look like (the field names and values here are hypothetical; yours will depend on what your Netcool environment expects):
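    Name    modify
    Match   INC
    Add     AlertGroup FluentBit
    Add     Severity 5
    Rename  log Summary

You'll then have an OUTPUT stanza of type "http" to specify the location of the Message Bus probe. Something like the following: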

    Name http
    port 80
    Match INC
    host probehost
    uri /probe/webhook/fluentbit
    format json
    json_date_format epoch

The above specifies that the URL that these messages will be sent to is http://probehost:80/probe/webhook/fluentbit.


In the json that's sent in the body of the POST request, there will be a field named date, and it will be in Unix "epoch" format: an integer representing the number of seconds since the beginning of the Unix epoch (a "normal" Unix/Linux timestamp).

That's it. That's all of the basic configuration needed on the Fluent Bit side.

Extra Credit/TLS Config

If your Message Bus probe is using TLS, you just need to add the following two lines to the above OUTPUT stanza:

    tls On
    tls.verify Off

The first line enables TLS encryption, and the second line is a shortcut that allows the connection to succeed without having to add the appropriate certificates to Fluent Bit - it will accept any certificate presented to it by the Message Bus probe, even a self-signed certificate.

Tuesday, December 4, 2018

If you run Kubernetes in the cloud, the first major vulnerability found isn't a huge issue

The first major Kubernetes (aka K8s) vulnerability was found yesterday:

It's a pretty big deal and quite scary, but patches were immediately available upon disclosure. What's even better is that the managed Kubernetes services running on AWS, Azure, and Google Cloud Platform have all been patched already. If you're managing your own K8s clusters, however, you need to patch it yourself, which just takes time and know-how.

In my eyes, this is another data point showing how proper use of cloud resources can be extremely beneficial to a company. Specifically, the big cloud players, especially AWS, function very much like a highly competent and agile outsourced IT department. They have offerings that are years ahead of the services you could run onsite, and they've got testing methodologies in place to ensure that those offerings are available 99.9% of the time.

It's true that there can be some issues in moving to the cloud, but many of the problems of the past now have very robust solutions that are included in the offerings. And those offerings are available on a pay-as-you-go basis in many cases. So you can easily keep tabs on exactly how much you're spending even on a per-application basis.

To ensure a successful digital transformation, contact us to get the experienced help that will put you on the right path.