# Zabbix Docker Compose files

Docker Compose files to spin up an instance of Zabbix.

# How to run

Add a `COMPOSE_ENV_FILE` and save its location in a shell variable, along with the location where this repo lives (here for example `/opt/containers/zabbixserver`) and all other variables. You'll find an example environment file at [env/fqdn_context.env.example](env/fqdn_context.env.example).

When everything's ready, start Zabbix with Docker Compose; otherwise head down to [Initial setup](#initial-setup) first.
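
For orientation, a heavily trimmed sketch of what such an environment file might contain; the variable names come from this README, the values are placeholders only:

```
# Excerpt, see env/fqdn_context.env.example for the authoritative list
CONTEXT=ux_vilnius
ZBX_DB_USERNAME_RO=grafana-ro
ZBX_DB_USERNAME_PW=change-me
```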

## Environment

Make sure that Zabbix' upstream repo at [github.com/zabbix/zabbix-docker](https://github.com/zabbix/zabbix-docker) is checked out locally. We're going with example dir `/opt/git/github.com/zabbix/zabbix-docker/branches/latest`. We're also assuming that **_this_** repo exists at `/opt/containers/zabbixserver`.

```
export UPSTREAM_REPO_DIR='/opt/git/github.com/zabbix/zabbix-docker/branches/latest'
export UPSTREAM_COMPOSE_FILE="${UPSTREAM_REPO_DIR%/}"'/docker-compose_v3_alpine_pgsql_latest.yaml'
export UPSTREAM_ENV_FILE="${UPSTREAM_REPO_DIR%/}"'/.env'
export COMPOSE_CTX='ux_vilnius'
export COMPOSE_PROJECT_NAME='zabbixserver-'"${COMPOSE_CTX}"
export COMPOSE_ENV_FILE=<add accordingly>
export COMPOSE_OVERRIDE='/opt/containers/zabbixserver/compose.override.yaml'
```

In Zabbix' Git repo check out the latest tag for whatever version you want to use; we're going with the latest `7.2.*` version.

```
git -C "${UPSTREAM_REPO_DIR}" reset --hard origin/trunk
git -C "${UPSTREAM_REPO_DIR}" checkout trunk
git -C "${UPSTREAM_REPO_DIR}" pull
git -C "${UPSTREAM_REPO_DIR}" checkout "$(git --no-pager -C "${UPSTREAM_REPO_DIR}" tag -l --sort -version:refname | grep -Fi -- '7.2.' | head -n 1)"
```
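
You can confirm which tag you ended up on, for example:

```
git -C "${UPSTREAM_REPO_DIR}" describe --tags
```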

## Context

On your deployment machine create the necessary Docker context to connect to and control the Docker daemon on whatever target host you'll be using, for example:

```
docker context create fully.qualified.domain.name --docker 'host=ssh://root@fully.qualified.domain.name'
```
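
A quick way to check that the new context actually reaches the target daemon:

```
docker --context 'fully.qualified.domain.name' version
```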

## Pull

Pull images from Docker Hub verbatim.

```
docker compose --project-name "${COMPOSE_PROJECT_NAME}" --file "${UPSTREAM_COMPOSE_FILE}" --file "${COMPOSE_OVERRIDE}" --env-file "${UPSTREAM_ENV_FILE}" --env-file "${COMPOSE_ENV_FILE}" pull
```

## Copy to target

Copy images to the target Docker host; this assumes you deploy to a machine that itself has no network route to Docker Hub or your private registry of choice. Copying in its simplest form involves a local `docker save` and a remote `docker load`. Consider the helper mini-project [quico.space/Quico/copy-docker](https://quico.space/Quico/copy-docker) where [copy-docker.sh](https://quico.space/Quico/copy-docker/src/branch/main/copy-docker.sh) enables the following workflow.

```
images="$(docker compose --project-name "${COMPOSE_PROJECT_NAME}" --file "${UPSTREAM_COMPOSE_FILE}" --file "${COMPOSE_OVERRIDE}" --env-file "${UPSTREAM_ENV_FILE}" --env-file "${COMPOSE_ENV_FILE}" config | grep -Pi -- 'image:' | awk '{print $2}' | sort | uniq)"

while IFS= read -u 10 -r image; do
  copy-docker "${image}" fully.qualified.domain.name
done 10<<<"${images}"
```

This will for example copy over:

```
REPOSITORY                     TAG
postgres                       16-alpine
zabbix/zabbix-web-nginx-pgsql  alpine-7.2-latest
zabbix/zabbix-server-pgsql     alpine-7.2-latest
busybox                        latest
```
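
To confirm the images made it over, list them on the target:

```
docker --context 'fully.qualified.domain.name' image ls
```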

## Start

```
docker --context 'fully.qualified.domain.name' compose --project-name "${COMPOSE_PROJECT_NAME}" --file "${UPSTREAM_COMPOSE_FILE}" --file "${COMPOSE_OVERRIDE}" --env-file "${UPSTREAM_ENV_FILE}" --env-file "${COMPOSE_ENV_FILE}" up --detach
```

## Clean up

```
docker --context 'fully.qualified.domain.name' system prune -af
docker system prune -af
```

# Initial setup

We're assuming you run Docker Compose workloads with ZFS-based bind mounts. ZFS management, that is creating a zpool and setting adequate properties for its datasets, is out of scope of this document.

## Datasets

Create ZFS datasets and set permissions as needed.

* Parent dataset

```
export "$(grep -Pi -- '^CONTEXT=' "${COMPOSE_ENV_FILE}")"
zfs create -o canmount=off zpool/data/opt
zfs create -o mountpoint=/opt/docker-data zpool/data/opt/docker-data
```

* Container-specific datasets

```
zfs create -p 'zpool/data/opt/docker-data/zabbixserver-'"${CONTEXT}"'/postgres/config'
zfs create -p 'zpool/data/opt/docker-data/zabbixserver-'"${CONTEXT}"'/postgres/data'
zfs create -p 'zpool/data/opt/docker-data/zabbixserver-'"${CONTEXT}"'/zabbixserver/config'
zfs create -p 'zpool/data/opt/docker-data/zabbixserver-'"${CONTEXT}"'/zabbixserver/data'
zfs create -p 'zpool/data/opt/docker-data/zabbixserver-'"${CONTEXT}"'/zabbixwebnginx/config'
```

* Change ownership

```
chown -R 70:70 '/opt/docker-data/zabbixserver-'"${CONTEXT}"'/postgres/'*
chown -R 101:101 '/opt/docker-data/zabbixserver-'"${CONTEXT}"'/zabbixwebnginx/config/'*
```

The PostgreSQL container runs its processes as user ID 70; the Zabbix web frontend container uses user ID 101.
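
To double-check ownership after the `chown` you can, for example, run:

```
stat -c '%u:%g %n' '/opt/docker-data/zabbixserver-'"${CONTEXT}"'/postgres/'* '/opt/docker-data/zabbixserver-'"${CONTEXT}"'/zabbixwebnginx/config/'*
```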

## Additional files

Per [Datasets](#datasets) your Docker files will live at `'/opt/docker-data/zabbixserver-'"${CONTEXT}"`. Over in [build-context](build-context) you'll find a subdirectory `docker-data` that has an example file and directory structure explaining the layout you'll want to create at `'/opt/docker-data/zabbixserver-'"${CONTEXT}"`. Match its `postgres` dir to your `postgres` dir, its `zabbixserver` dir to your `zabbixserver` dir and lastly its `zabbixwebnginx` dir to yours.

```
docker-data/
├── postgres
│   ├── cert
│   │   ├── .ZBX_DB_CA_FILE
│   │   ├── .ZBX_DB_CERT_FILE
│   │   └── .ZBX_DB_KEY_FILE
│   └── docker-entrypoint-initdb.d
│       └── init-user-db.sh
├── zabbixserver
│   ├── config
│   │   ├── cert
│   │   │   ├── .ZBX_SERVER_CA_FILE
│   │   │   ├── .ZBX_SERVER_CERT_FILE
│   │   │   └── .ZBX_SERVER_KEY_FILE
│   │   └── docker-entrypoint.sh
│   └── data
│       ├── usr
│       │   └── lib
│       │       └── zabbix
│       │           ├── alertscripts
│       │           └── externalscripts
│       └── var
│           └── lib
│               └── zabbix
│                   ├── dbscripts
│                   ├── enc
│                   ├── export
│                   ├── mibs
│                   ├── modules
│                   ├── snmptraps
│                   ├── ssh_keys
│                   └── ssl
│                       ├── certs
│                       ├── keys
│                       └── ssl_ca
└── zabbixwebnginx
    └── config
        ├── cert
        │   ├── dhparam.pem
        │   ├── ssl.crt
        │   └── ssl.key
        └── modules
```

### postgres (PostgreSQL)

In `postgres/cert` place the SSL certificate files that Postgres should serve to TLS-capable database clients for encrypted database connections, for example for a domain `db.zabbix.example.com`. `.ZBX_DB_CA_FILE` is a certificate authority (CA) certificate, `.ZBX_DB_CERT_FILE` is a "full chain" certificate, that is your domain's certificate followed by any intermediate certs concatenated one after the other. Lastly `.ZBX_DB_KEY_FILE` is your cert's unencrypted key file.
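
Assembling such a full chain can be as simple as concatenating the files; the file names here are purely illustrative:

```
cat db.zabbix.example.com.crt intermediate.crt > .ZBX_DB_CERT_FILE
```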

In `postgres/config/docker-entrypoint-initdb.d/init-user-db.sh` you'll find an example script file that - when your Postgres database is uninitialized - will create a second Postgres account in your database. Check out the example environment variables file [env/fqdn_context.env.example](env/fqdn_context.env.example) and specifically `ZBX_DB_USERNAME_PW` and `ZBX_DB_USERNAME_RO` to define a password and a username.
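
For orientation, a minimal sketch of what a script along those lines could look like; the shipped `init-user-db.sh` is authoritative, and the read-only grant shown here is an assumption:

```
#!/bin/bash
# Runs once on first start while the Postgres data dir is still empty,
# via the official postgres image's docker-entrypoint-initdb.d mechanism.
set -e

psql -v ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --dbname "${POSTGRES_DB}" <<EOSQL
    CREATE ROLE "${ZBX_DB_USERNAME_RO}" WITH LOGIN PASSWORD '${ZBX_DB_USERNAME_PW}';
    GRANT pg_read_all_data TO "${ZBX_DB_USERNAME_RO}";
EOSQL
```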

Zabbix' PostgreSQL instance by default doesn't expose a TCP port outside of its container. This setup, however, assumes that you have for example a Grafana instance or a similar entity that wants to directly connect to Postgres. Dedicated read-only database credentials come in handy in that situation.
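
If your Compose override doesn't already publish the port, doing so could look roughly like this; the service name `postgres-server` is taken from the upstream Compose file, adjust it if yours differs:

```
services:
  postgres-server:
    ports:
      - "5432:5432"
```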

### zabbixserver (main Zabbix server daemon)

In `zabbixserver/config/cert` place your SSL cert files. These are what the Zabbix server process serves to clients that connect to it such as `server.zabbix.example.com`. As with [PostgreSQL](#postgres-postgresql) you'll need a CA cert, a domain cert and a key file; file names are `.ZBX_SERVER_CA_FILE`, `.ZBX_SERVER_CERT_FILE` and `.ZBX_SERVER_KEY_FILE`.

In `config` there's also `docker-entrypoint.sh`. This is largely identical to the Zabbix container's internal file as seen in the official upstream GitHub repo at [github.com/zabbix/zabbix-docker commit hash 4236b6d for Dockerfiles/server-pgsql/alpine/docker-entrypoint.sh](https://github.com/zabbix/zabbix-docker/blob/4236b6d502a03ee9a4ab0a3699e740cc45f687a4/Dockerfiles/server-pgsql/alpine/docker-entrypoint.sh) (last retrieved on February 22, 2025).

Our version comments out two Bash `export` commands like so:

```
--- <unnamed>
+++ <unnamed>
@@ -394,8 +394,8 @@
 
     export ZBX_DB_NAME="${DB_SERVER_DBNAME}"
     export ZBX_DB_SCHEMA="${DB_SERVER_SCHEMA}"
-    export ZBX_DB_USER="${DB_SERVER_ZBX_USER}"
-    export ZBX_DB_PASSWORD="${DB_SERVER_ZBX_PASS}"
+    # export ZBX_DB_USER="${DB_SERVER_ZBX_USER}"
+    # export ZBX_DB_PASSWORD="${DB_SERVER_ZBX_PASS}"
 
     : ${ZBX_ENABLE_SNMP_TRAPS:="false"}
     [[ "${ZBX_ENABLE_SNMP_TRAPS,,}" == "true" ]] && export ZBX_STARTSNMPTRAPPER=1
```

This is a sloppy workaround for an issue that's present in the newest 7.2 tags (7.2.2 and 7.2.3): the default `docker-entrypoint.sh` unconditionally `export`s both `ZBX_DB_USER` and `ZBX_DB_PASSWORD`, which are then unconditionally rendered into `/etc/zabbix/zabbix_server_db.conf` inside the container even when HashiCorp Vault is in use:

```
DBUser=${ZBX_DB_USER}
DBPassword=${ZBX_DB_PASSWORD}
```

If HashiCorp Vault is in use, neither `DBUser` nor `DBPassword` may have a value, otherwise Zabbix server will complain and exit. If you have no need for Vault - or Zabbix' official Docker containers are fixed by the time you read this - feel free to skip `docker-entrypoint.sh`.
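
To inspect what actually got rendered, you could for example check the file inside the running container; the service name `zabbix-server` here is an assumption based on the upstream Compose file:

```
docker --context 'fully.qualified.domain.name' compose --project-name "${COMPOSE_PROJECT_NAME}" exec zabbix-server grep -E -- '^DB(User|Password)=' /etc/zabbix/zabbix_server_db.conf
```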

Besides `zabbixserver/config` there's also `zabbixserver/data` with what looks like a daunting number of subdirectories. In our example they are all empty and they all belong to bind mounts that are configured with `create_host_path: true`.

```
- type: bind
  source: /opt/docker-data/zabbixserver-${CONTEXT}/zabbixserver/data/usr/lib/zabbix/alertscripts
  target: /usr/lib/zabbix/alertscripts
  read_only: true
  bind:
--> create_host_path: true
```

If you don't want to mount any files into your Zabbix instance you can leave `zabbixserver/data` alone and Docker will create the necessary subdirs on your Docker host on container start.

If you do want all subdirs up front feel free to go like this:

```
cd '/opt/docker-data/zabbixserver-'"${CONTEXT}"'/zabbixserver/data'
mkdir -p {'./usr/lib/zabbix/'{'alert','external'}'scripts','./var/lib/zabbix/'{'dbscripts','enc','export','mibs','modules','snmptraps','ssh_keys','ssl/'{'certs','keys','ssl_ca'}}}
```

This will create the entire directory tree underneath `zabbixserver/data`:

```
data/
├── usr
│   └── lib
│       └── zabbix
│           ├── alertscripts
│           └── externalscripts
└── var
    └── lib
        └── zabbix
            ├── dbscripts
            ├── enc
            ├── export
            ├── mibs
            ├── modules
            ├── snmptraps
            ├── ssh_keys
            └── ssl
                ├── certs
                ├── keys
                └── ssl_ca
```

### zabbixwebnginx (Nginx web server)

First things first: directory `zabbixwebnginx/config/modules` is empty and, thanks to `create_host_path: true`, will be created anyway if you don't create it yourself, so no worries there. In `zabbixwebnginx/config/cert` - as the name suggests - you'll place frontend SSL cert files. That's the domain certificate you want served when visiting the Zabbix frontend with a web browser. In line with our earlier examples this might be a cert for `zabbix.example.com`.

Note that the file names here look relatively normal as opposed to `.ZBX_SERVER_CERT_FILE` and `.ZBX_DB_CERT_FILE` from before. We will be bind-mounting the entire `cert` directory like so:

```
- type: bind
  source: /opt/docker-data/zabbixserver-${CONTEXT}/zabbixwebnginx/config/cert
  target: /etc/ssl/nginx
  read_only: true
  bind:
    create_host_path: true
```

The `cert` dir ends up getting bind-mounted into `/etc/ssl/nginx` inside the container. Since Zabbix uses a standard Nginx setup we stick to the Nginx way of naming a default cert and key file. Store your full certificate chain as `ssl.crt` and the corresponding unencrypted key as `ssl.key`. Make sure to also save a `dhparam.pem` parameters file. You can get one such file the quick and dirty way from Mozilla at [https://ssl-config.mozilla.org/ffdhe2048.txt](https://ssl-config.mozilla.org/ffdhe2048.txt) - just save it as `dhparam.pem` if you're so inclined. You can alternatively render a file yourself. Assuming the `parallel` binary exists on your machine you can follow [unix.stackexchange.com/a/749156](https://unix.stackexchange.com/a/749156) like so:

```
seq 10000 | parallel -N0 --halt now,success=1 openssl dhparam -out dhparam.pem 4096
```

This starts as many parallel `openssl dhparam` processes as you have CPU cores (assuming you have at most 10,000 cores). The processes essentially race each other, which typically cuts the wait for a finished parameters file by an order of magnitude since only one of them has to finish. On a moderately modern desktop CPU with four cores this will take about 30 seconds.
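
If you'd rather skip `parallel`, a plain single-process run works just as well, it simply takes longer:

```
openssl dhparam -out dhparam.pem 4096
```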

When done head back up to [How to run](#how-to-run).

# Development

## Conventional commits

This project uses [Conventional Commits](https://www.conventionalcommits.org/) for its commit messages.

### Commit types

Commit _types_ besides `fix` and `feat` are:

- `refactor`: Keeping functionality while streamlining or otherwise improving function flow
- `docs`: Documentation for project or components

### Commit scopes

The following _scopes_ are known for this project. A Conventional Commits message may optionally use one of them or none:

- `zabbixserver`: A change to how the `zabbixserver` service component works
- `build`: Build-related changes such as `Dockerfile` fixes and features
- `mount`: Volume or bind mount-related changes
- `net`: Networking, IP addressing, routing changes
- `meta`: Affects the project's repo layout, file names etc.