Easy way of managing config in Docker-compose

Ever wanted the config files in your Docker stack to be updated as well when you release a new version of your stack through a CI/CD pipeline? Or were you in a development phase where you were constantly changing things in the config, but you also wanted to make sure you knew which config was used in which release of the pipeline? Well… Maybe you should try the following trick:


 variables:
   HOSTNAME: dockerhost.tuxito.be # Host to deploy docker-compose on

 # Clone or pull the repo to the remote host
 clone:
   image: alpine
   before_script:
     - apk add openssh-client git jq curl
     - eval $(ssh-agent -s)
     - echo "$SSH_PRIVATE_KEY_ANSIBLE" | tr -d '\r' | ssh-add -
     - mkdir -p ~/.ssh
     - chmod 700 ~/.ssh
     - git config --global user.email "$GITLAB_USER_EMAIL"
     - git config --global user.name "$GITLAB_USER_LOGIN"
     - GIT_CLONE_URL="$(curl --silent -XGET "$CI_SERVER_URL/api/v4/projects/$CI_PROJECT_ID?private_token=$ANSIBLE_TOKEN" | jq -r '.ssh_url_to_repo')"
   script:
     - ssh -o StrictHostKeyChecking=no ansible@$HOSTNAME "printf '%s\n    %s\n' 'Host *' 'StrictHostKeyChecking no' > ~/.ssh/config && chmod 600 ~/.ssh/config"
     - ssh -o StrictHostKeyChecking=no ansible@$HOSTNAME "if [ -d $CI_PROJECT_NAME ]; then (rm -rf $CI_PROJECT_NAME; git clone $GIT_CLONE_URL); else git clone $GIT_CLONE_URL; fi"

 # Deploy the stack
 deploy:
   image: alpine
   before_script:
     - apk add openssh-client git jq curl
     - eval $(ssh-agent -s)
     - echo "$SSH_PRIVATE_KEY_ANSIBLE" | tr -d '\r' | ssh-add -
     - mkdir -p ~/.ssh
     - chmod 700 ~/.ssh
   script:
     - ssh -o StrictHostKeyChecking=no ansible@$HOSTNAME "sed -i -E '/CONF_VERSION=/s/=.*/=$CI_JOB_ID/' $CI_PROJECT_NAME/deploy.sh"
     - ssh -o StrictHostKeyChecking=no ansible@$HOSTNAME "cd $CI_PROJECT_NAME; chmod +x deploy.sh; sudo ./deploy.sh"

I’m using a gitlab-ci pipeline file here, but the end result can be achieved with any CI/CD tool; it is just a matter of changing the variable names.

More importantly, this line:

 ssh -o StrictHostKeyChecking=no ansible@$HOSTNAME "sed -i -E '/CONF_VERSION=/s/=.*/=$CI_JOB_ID/' $CI_PROJECT_NAME/deploy.sh"

Here you will find the environment variable CONF_VERSION that is used in the upcoming docker-compose file. It in turn gets the value of $CI_JOB_ID, which is a built-in GitLab variable:

CI_JOB_ID
The unique ID of the current job that GitLab CI/CD uses internally.

If you were using Bamboo, for example, you could use bamboo.buildNumber instead.
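
To make the substitution concrete, this is what the placeholder line in deploy.sh looks like before and after the deploy job runs (job ID 1387 is just an example), plus a rough Bamboo variant:

 # before the deploy job runs (placeholder):
 export CONF_VERSION=1
 # after sed replaced the placeholder with $CI_JOB_ID:
 export CONF_VERSION=1387
 # a Bamboo script task could do the same with the build number,
 # which Bamboo exposes to scripts as an environment variable:
 # sed -i -E "/CONF_VERSION=/s/=.*/=${bamboo_buildNumber}/" deploy.sh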

Now for the trick in your docker-compose file:


 version: "3.7"
 services:
   elasticsearch:
     image: elasticsearch:${ELASTIC_VERSION}
     hostname: elasticsearch
     environment:
       - "discovery.type=single-node"
       - "xpack.monitoring.collection.enabled=true"
     ports:
       - 9200:9200
       - 9300:9300
     networks:
       - elastic
     volumes:
       - type: volume
         source: elasticsearch-data
         target: /usr/share/elasticsearch/data
       - type: volume
         source: snapshots
         target: /snapshots
     deploy:
       mode: replicated
       replicas: 1
       placement:
         constraints: [node.hostname == morsuv1416.agfa.be]
     secrets:
       - source: elasticsearch-config
         target: /usr/share/elasticsearch/config/elasticsearch.yml
         mode: 0644
         uid: "1000"
         gid: "1000"
   filebeat:
     image: docker.elastic.co/beats/filebeat:${ELASTIC_VERSION}
     hostname: "{{.Node.Hostname}}-filebeat"
     ports:
       - "5066:5066"
     user: root
     networks:
       - elastic
     secrets:
       - source: filebeat-config
         target: /usr/share/filebeat/filebeat.yml
     volumes:
       - filebeat:/usr/share/filebeat/data
       - /var/run/docker.sock:/var/run/docker.sock
       - ...
 secrets:
   elasticsearch-config:
     file: configs/elasticsearch.yml
     name: elasticsearch-config-v${CONF_VERSION}
   filebeat-config:
     file: configs/filebeat.yml
     name: filebeat-config-v${CONF_VERSION}

As you can see… The secret’s name in the Docker stack changes on every deploy, but the reference to the secret (aka the config file) stays the same in docker-compose.yml. That is the whole point: swarm secrets are immutable, so Docker refuses to update a secret that is in use; giving it a new versioned name on every deploy forces the services to be redeployed with the new config.
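
If you want to watch the versions pile up on the host, you can list them on a swarm manager:

 # list all versions of the elasticsearch config secret
 docker secret ls --filter name=elasticsearch-config
 # once no service references an old version, it can be removed, e.g.:
 # docker secret rm elasticsearch-config-v<old-job-id>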

So all that is left is the deploy.sh script that we call from the gitlab-ci file, and you are good to go:

 #!/bin/bash
 export ELASTIC_VERSION=7.10.1
 export ELASTICSEARCH_USERNAME=elastic
 export ELASTICSEARCH_PASSWORD=changeme
 export ELASTICSEARCH_HOST=elasticsearch
 export KIBANA_HOST=kibana
 # 1 is a placeholder; it gets changed during deploy
 export CONF_VERSION=1
 docker network ls | grep elastic > /dev/null || docker network create --driver overlay --attachable elastic
 docker stack deploy --prune --compose-file docker-compose.yml elkstack

Happy dockering!

Adding a git tag to a release in Bamboo

If you want to create a new version for your package in Bamboo, you can add the version git tag to your package with the following steps:

  • Add a script task in your build plan with the following contents:

git_tag="$(git ls-remote --tags ${bamboo.planRepository.repositoryUrl} | cut -d/ -f3- | awk '/^[^{]*$/{version=$1}END{print version}')"
echo "git_tag=$git_tag" > git_tag.txt

This grabs the most recent version tag from the remote repository (the awk filter skips the dereferenced ^{} tag entries and keeps the last plain tag) and writes it to a txt file located in the working subdirectory package/apps/myapplication.
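
The resulting git_tag.txt is a simple properties file with a single key=value pair (the tag value here is just an example):

 git_tag=v1.2.3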

  • Next, you’ll need to inject the version into Bamboo. You can do this by adding an “Inject Bamboo variables” task:
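
The task configuration would look roughly like this (field labels may differ slightly between Bamboo versions; the path assumes the subdirectory mentioned above):

 Path to properties file: package/apps/myapplication/git_tag.txt
 Namespace: myapplication
 Scope of variables: Result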

Now we can reuse this Bamboo variable in the build plan by referencing ${bamboo.myapplication.git_tag}.
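
For example, a later script task could tag a Docker image with it (the registry and image names are made up; in script bodies Bamboo exposes the variable as an environment variable with dots replaced by underscores):

 # use the injected git tag as the image version
 docker build -t registry.example.com/myapplication:${bamboo_myapplication_git_tag} .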

My Home Setup

So this started in my head: “I want to learn Kubernetes…” Well… I learned it… But the learning didn’t stop with Kubernetes…

Then the learning started with a course on Udemy (https://www.udemy.com/course/learn-devops-the-complete-kubernetes-course/learn/lecture/11278680?start=0#overview). Excellent course. It taught me the ins and outs of Kubernetes. But then of course… To put learning into practice I needed a Kubernetes cluster. Yes, I could use minikube… But where is the fun in that… 😀

I started with creating a k3s Rancher cluster, because of its low system footprint and an awesome GUI to see what is happening in your cluster.
I used this guide https://rancher.com/docs/rancher/v2.x/en/installation/k8s-install/create-nodes-lb/ and started with “Setup Infrastructure”… Very important… Read EVERYTHING.

After some cursing and reinstalling servers to make sure I had a clean slate to start, I managed to get it up and running… Happy me 🙂

So, now it was time to deploy something on the cluster… And I started with the nice low-footprint Gitea app for storing my Ansible playbooks, Kubernetes apps,… I could have used Helm, the easy way… But I was here to learn, so I created the manifest on my own (with help of my good friend Google…), something along the lines of the sketch below. But I did it. It deployed. I could reach it through my external Nginx load balancer. Happy me… 🙂
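
A minimal sketch of what such a hand-written manifest could look like (the image tag, labels and NodePort are assumptions, not my exact manifest):

 apiVersion: apps/v1
 kind: Deployment
 metadata:
   name: gitea
 spec:
   replicas: 1
   selector:
     matchLabels:
       app: gitea
   template:
     metadata:
       labels:
         app: gitea
     spec:
       containers:
         - name: gitea
           image: gitea/gitea:1.13
           ports:
             - containerPort: 3000
 ---
 apiVersion: v1
 kind: Service
 metadata:
   name: gitea
 spec:
   type: NodePort
   selector:
     app: gitea
   ports:
     - port: 3000
       targetPort: 3000
       nodePort: 30000
 # the external Nginx load balancer then proxies to <node-ip>:30000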

So… Then came Helm… Let’s see how that works… I learned the true value of values.yml… Really… If you start with pre-created Helm charts, take your time to go through the values.yml and edit it to match your environment… It saves some headaches…
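
A typical workflow for that (the chart and release names are just examples, assuming the bitnami repo is added):

 # helm repo add bitnami https://charts.bitnami.com/bitnami
 # dump the chart's default values, edit them, then install with your overrides
 helm show values bitnami/wordpress > values.yml
 # ... edit values.yml to match your environment ...
 helm install my-wordpress bitnami/wordpress -f values.yml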

So, I learned how Helm works, and I was thinking more and more about a full home CI/CD setup. So what did I need… Keeping in mind that, where possible, I wanted low-footprint applications…

  • Docker registry: I went for Harbor. Great tool. Nice GUI. And it scans your images for security flaws. https://goharbor.io/
  • Jenkins (widely used, so…)
  • AWX… To automatically run my playbooks in conjunction with Jenkins. Yes, less low-footprint, but hey… Sometimes you need to sacrifice to get what you want…

So, after all of that was installed on my Kubernetes cluster… I was ready to start learning Jenkins, Kubernetes and of course some WordPress… And by deploying WordPress I learned the auto-scaling functionality of Kubernetes… :-D. So… Happy me… Again 😀
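
For the curious: the quickest way to try that auto-scaling is a Horizontal Pod Autoscaler on the deployment (names and thresholds are examples; it needs metrics-server running in the cluster):

 kubectl autoscale deployment wordpress --cpu-percent=80 --min=1 --max=5
 kubectl get hpa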

The final setup (for now…) looks like this:

And currently the following applications are installed:
