Container challenges workflow
The JointCyberRange allows you to run container-based challenges.
This page describes the major steps in that workflow.
Get your containers
You want to have one or more containers that together implement a challenge.
Presumably one of those containers has the flag.
Creating a container involves writing a Dockerfile in a local directory, along with any associated files.
Here is a basic example:
FROM nginx:latest
EXPOSE 80
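If the flag lives in this container, you can extend the Dockerfile to copy it in at build time. This is just a sketch; the file names flag.txt and index.html are assumptions, not JointCyberRange conventions:

```dockerfile
# Base image: the official nginx web server
FROM nginx:latest

# Hypothetical challenge files: a flag and a landing page
COPY flag.txt /flag.txt
COPY index.html /usr/share/nginx/html/index.html

# nginx listens on port 80 inside the container
EXPOSE 80
```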
Run locally as a docker container
First you will need to build the image so you can run the container locally and interact with it. Make sure to replace <image-name> with your own image name:
docker build -t <image-name> .
Then you can try:
docker run -it -p 1800:80 <image-name>
which runs the web server, accepting requests at port 1800. You should now be able to access it at http://localhost:1800.
Once this runs, you can put the container image in a registry, so it will be possible to run it inside a Kubernetes cluster. You will need to set up an external registry in GitLab; at the moment this is the only supported container registry.
Push the container to a registry
There are many ways of doing this.
- Manually: use the docker push command line to push to your registry.
- Use CI/CD pipelines to push the container image to a GitLab repository (e.g. yours). For instructions, you can look here. The essence is to create a .gitlab-ci.yml file which, when pushed to the repository, fires up the CI/CD pipeline, which in turn produces a container image in the registry.
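As a sketch of the CI/CD route, a minimal .gitlab-ci.yml could build and push the image using GitLab's predefined CI variables. The job name build-image is an assumption; the CI_REGISTRY* variables are provided by GitLab CI itself:

```yaml
build-image:
  image: docker:latest
  services:
    - docker:dind          # Docker-in-Docker, needed to run docker build in CI
  script:
    # Log in to the project's container registry using GitLab's built-in variables
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"
```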
Run as:
docker run -it -p 1800:80 <image-path>
The <image-path> can be found in your GitLab repository under "Packages and registries > Container Registry"; the image is now pulled from your registry rather than from your local cache.
If you get an error message containing manifest unknown, check whether the image was pushed with a tag other than :latest.
Again you should be able to access it through http://localhost:1800.
If the registry is private, you will need credentials to access it. Locally, a docker login to the right registry should help you out. Depending on your setup, cat ~/.docker/config.json will show them to you, or have a look at Stack Overflow, because the registry credentials can be hidden in the Docker Desktop credential store.
Use docker logout registry.gitlab.com to test this out.
Test the docker compose file
Write a proper docker-compose.yaml file. Here is an example; make sure to fill in the <container-name> and <image-path> with your own values.
The label kompose.service.type: nodeport binds the port(s) defined in the “ports” block of a container in the docker-compose.yaml to a port in the range 30000-32767. This ensures that the container will later be accessible outside your local k8s cluster or public CTFd tenant.
version: '3.3'
services:
  html1:
    container_name: <container-name>
    image: <image-path>:latest
    ports:
      - 1900:80
    labels:
      kompose.service.type: nodeport
Start this with docker-compose up (clean up with docker-compose down).
Again, verify this by accessing the container. In this case through http://localhost:1900.
If you get the error below, it indicates you don't have permissions to access the image in the registry.
ERROR: Head "...": denied: access forbidden
See above on how to use docker login.
Look here for a multi-container setup.
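As a rough sketch of such a setup (service names and images are placeholders; see the linked page for the authoritative example), a two-container docker-compose.yaml could look like this, where only the externally reachable container gets a nodeport label:

```yaml
version: '3.3'
services:
  web:
    container_name: <web-container-name>
    image: <web-image-path>:latest
    ports:
      - 1900:80
    labels:
      kompose.service.type: nodeport   # reachable from outside
  db:
    container_name: <db-container-name>
    image: <db-image-path>:latest
    # no ports/label: only reachable from the web container
```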
Run in your K8s cluster
Use kompose to convert the docker-compose.yaml file to Kubernetes manifests.
Kompose script:
kompose convert -f docker-compose.yaml
Alternatively, you could modify the K8s manifest files yourself (at your own risk). Put them to work:
kubectl apply -f html1-deployment.yaml -f html1-service.yaml
To access the container you need to find the random port (NodePort) assigned to your newly created K8s service. Use the command below and copy the NodePort of the service, in this case html1-service.
kubectl get services --all-namespaces
Now you can access your container at http://localhost:<nodeport>. Replace <nodeport> with the port you copied in the previous step.
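If you prefer not to scan the table by eye, the NodePort can be extracted with standard shell tools. The sketch below parses the PORT(S) column (e.g. 80:31234/TCP); the sample line stands in for real kubectl get services output, and the service name html1-service matches the example above:

```shell
# Stand-in for one line of `kubectl get services --all-namespaces` output;
# in practice you would pipe the real command output through the same filters.
sample='default  html1-service  NodePort  10.96.12.34  <none>  80:31234/TCP  5m'

# Field 6 is PORT(S); the NodePort is the part between ':' and '/'
nodeport=$(echo "$sample" | grep 'html1-service' | awk '{print $6}' | cut -d: -f2 | cut -d/ -f1)
echo "$nodeport"   # prints 31234 for the sample line
```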
For this to work, the image needs to come from a public registry (which would expose your challenge images to CTF players), be cached in your local container runtime (flaky), or you need to have a proper K8s secret.
You may need to check the logs (kubectl get pods; kubectl logs POD; kubectl describe pod POD, or use a K8s GUI) to see whether your local K8s cluster has read access to the relevant container registries. Errors such as ErrImagePull or 'failed to pull image' can mean you don't have access to the registry.
You need to create a K8s secret and reference it in the html1-deployment.yaml file; for more details, look here.
Use the line that starts with:
kubectl create secret docker-registry regcred --docker-server
to create the secret, and add the proper line to html1-deployment.yaml.
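For reference, the relevant addition sits in the pod template of html1-deployment.yaml. The secret name regcred matches the command above; the container name and image are placeholders:

```yaml
# Fragment of html1-deployment.yaml: under the pod template's spec,
# reference the secret created with `kubectl create secret docker-registry regcred ...`
spec:
  imagePullSecrets:
    - name: regcred
  containers:
    - name: html1
      image: <image-path>:latest
```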
Don't forget to delete the pod through
kubectl delete -f html1-deployment.yaml -f html1-service.yaml
You do not need to remove the name of the secret from html1-deployment.yaml before trying the next step, because CTFd is fed docker-compose files, not the K8s deployment files.
Run in your local CTFd instance
Your CTFd instance (setup instructions) should be set up for running container challenges. This requires permissions on the GitLab container registry that holds the container images embodying your challenges. Make sure to insert your credentials (deploy tokens) on the CTFd Admin Panel > CC Configuration page. More instructions on how to create these credentials here.
CTFd will generate a secret from these credentials. In your K8s cluster, the secret is probably named private-registry.
Proceed to set up each challenge from the CTFd Admin Panel. Select challenge type container, after that copy the content of the docker-compose.yaml file into the Compose field.
Then select the Challenge Type. This affects how the network information is listed in the CTFd user interface.
- web: This will only show the name of the container and the external IP with the NodePort the container is accessible from. Note that every container in the docker-compose.yaml needs a NodePort label. Only use this option if all of the containers are reachable from the outside, otherwise the challenge will not start.
  - Use case example: a single-container challenge with a website running from the container.
- other: The external IP will be listed in one table. The other table contains the names of the containers with the local port for communication between containers and the NodePort to combine with the external IP to reach each container from the outside.
  - Use case example: a multi-container challenge with one NodePort container as the entry point, which you can ssh into and then reach the other containers.
In your local CTFd instance you can now run the challenge. In K8s this will show up as a namespace starting with chal-user-. In that namespace you can check if the challenge container actually started properly, or review its error messages.
kubectl get pods --all-namespaces
The status of the pod should be 'Running'. If not, you can use kubectl logs or kubectl describe pod, or use a K8s GUI, to figure out the problem.
If you are using a public GitLab Container Registry and get the error ErrImagePull or ImagePullBackOff, make sure to fix or delete any invalid GitLab credentials on the CTFd Admin Panel > CC Configuration page. See above for instructions to get these credentials.
To access the pod you can copy the nodeport from the running challenge and paste it behind http://kubernetes.docker.internal:<nodeport> inside your browser.
Run in a public CTFd tenant
This is the ultimate objective, of course. Again, the main item is the content of the docker-compose.yaml file. For now you can follow the same steps as you would take for your local CTFd instance.
Known issues
Cannot deploy RBAC in challenge container namespace! -> Make sure to enter your GitLab credentials on the CTFd Admin Panel > CC Configuration page.
Cannot extract information from challenge container! -> You are probably exposing ports in your docker-compose.yaml without defining a nodeport label. When using challenge type web, all exposed ports need the NodePort label.
Cannot deploy job in challenge container namespace! -> You are probably trying to expose a port in the docker-compose.yaml which is not available on the container. It could also be that there is no nodeport label in the docker-compose.yaml.