Create a `COMPOSE_ENV_FILE` and save its location in a shell variable, along with the location where this repo lives (here for example `/opt/containers/zabbixserver`) and all other variables. At [env/fqdn_context.env.example](env/fqdn_context.env.example) you'll find an example environment file.
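A minimal sketch of those variables; `CONTEXT` and `COMPOSE_DIR` are example names, pick whatever suits you:

```shell
# Example variable names (CONTEXT and COMPOSE_DIR are assumptions)
export CONTEXT='fqdn'
export COMPOSE_DIR='/opt/containers/zabbixserver'
export COMPOSE_ENV_FILE="${COMPOSE_DIR}/env/${CONTEXT}_context.env"
echo "${COMPOSE_ENV_FILE}"
```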
Make sure that Zabbix' upstream repo at [github.com/zabbix/zabbix-docker](https://github.com/zabbix/zabbix-docker) is checked out locally. We're going with example dir `/opt/git/github.com/zabbix/zabbix-docker/branches/latest`. We're also assuming that **_this_** repo exists at `/opt/containers/zabbixserver`.
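For example (clone target path taken from above; pin whatever branch or tag you actually need):

```shell
mkdir -p /opt/git/github.com/zabbix/zabbix-docker/branches
git clone https://github.com/zabbix/zabbix-docker.git \
  /opt/git/github.com/zabbix/zabbix-docker/branches/latest
```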
On your deployment machine create the necessary Docker context to connect to and control the Docker daemon on whatever target host you'll be using, for example:
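A sketch with a hypothetical context name and SSH target:

```shell
# Context name and host are assumptions, adjust to your environment
docker context create zabbix-target \
  --description 'Zabbix Docker host' \
  --docker 'host=ssh://root@zabbix.example.com'
docker context use zabbix-target
```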
Copy images to the target Docker host; this assumes you deploy to a machine that itself has no network route to Docker Hub or your private registry of choice. In its simplest form, copying involves a local `docker save` and a remote `docker load`. Consider the helper mini-project [quico.space/Quico/copy-docker](https://quico.space/Quico/copy-docker) where [copy-docker.sh](https://quico.space/Quico/copy-docker/src/branch/main/copy-docker.sh) allows the following workflow.
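In its plain form, without the helper script, the copy might look like this; the image tag and host name are assumptions:

```shell
# Stream an image straight from the local daemon to the remote one
docker save zabbix/zabbix-server-pgsql:alpine-7.2-latest \
  | ssh root@zabbix.example.com 'docker load'
```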
We're assuming you run Docker Compose workloads with ZFS-based bind mounts. ZFS management (creating a zpool and setting adequate properties on its datasets) is out of scope of this document.
Per [Datasets](#datasets) your Docker files will live at `'/opt/docker-data/zabbixserver-'"${CONTEXT}"`. Over in [build-context](build-context) you'll find a subdirectory `docker-data` that has an example file and directory structure explaining the layout you'll want to create at `'/opt/docker-data/zabbixserver-'"${CONTEXT}"`. Match the `postgres` dir to your `postgres` dir, the `zabbixserver` dir to your `zabbixserver` dir and lastly the `zabbixwebnginx` dir to yours.
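The top-level layout can be created in one go, for example (assuming the `CONTEXT` shell variable is set):

```shell
# Creates the three per-service directories in one command
mkdir -p '/opt/docker-data/zabbixserver-'"${CONTEXT}"/{postgres,zabbixserver,zabbixwebnginx}
```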
In `postgres/cert` place the SSL certificate files that Postgres should serve to TLS-capable database clients for encrypted database connections, for example for a domain `db.zabbix.example.com`. `.ZBX_DB_CA_FILE` is a certificate authority (CA) certificate, `.ZBX_DB_CERT_FILE` is a "full chain" certificate, that is your domain's certificate followed by any intermediate certificates concatenated one after the other. Lastly, `.ZBX_DB_KEY_FILE` is your cert's unencrypted key file.
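Assembling the files might look like this; all source paths are hypothetical:

```shell
cd '/opt/docker-data/zabbixserver-'"${CONTEXT}"'/postgres/cert'
cp /path/to/ca.pem .ZBX_DB_CA_FILE
# Domain cert first, then any intermediates, concatenated
cat /path/to/cert.pem /path/to/intermediate.pem > .ZBX_DB_CERT_FILE
cp /path/to/privkey.pem .ZBX_DB_KEY_FILE
# Postgres refuses to use a private key file that is world-readable
chmod 0600 .ZBX_DB_KEY_FILE
```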
In `postgres/config/docker-entrypoint-initdb.d/init-user-db.sh` you'll find an example script file that - when your Postgres database is uninitialized - will create a second Postgres account in your database. Check out the example environment variables file [env/fqdn_context.env.example](env/fqdn_context.env.example) and specifically `ZBX_DB_USERNAME_PW` and `ZBX_DB_USERNAME_RO` to define a password and a username.
Zabbix' PostgreSQL instance by default doesn't expose a TCP port outside of its container. This setup, however, assumes that you have, for example, a Grafana instance or a similar entity that wants to connect directly to Postgres. Dedicated read-only database credentials come in handy in that situation.
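A sketch of what such an init script might contain; the repo's actual `init-user-db.sh` is authoritative, and the exact grants shown here are an assumption:

```shell
#!/bin/bash
# Runs via docker-entrypoint-initdb.d only while the database is
# uninitialized. Creates a read-only account from the env vars
# ZBX_DB_USERNAME_RO and ZBX_DB_USERNAME_PW (see the example env file).
set -e

psql -v ON_ERROR_STOP=1 --username "${POSTGRES_USER}" --dbname "${POSTGRES_DB}" <<-EOSQL
    CREATE USER ${ZBX_DB_USERNAME_RO} WITH PASSWORD '${ZBX_DB_USERNAME_PW}';
    GRANT CONNECT ON DATABASE ${POSTGRES_DB} TO ${ZBX_DB_USERNAME_RO};
    GRANT USAGE ON SCHEMA public TO ${ZBX_DB_USERNAME_RO};
    GRANT SELECT ON ALL TABLES IN SCHEMA public TO ${ZBX_DB_USERNAME_RO};
    ALTER DEFAULT PRIVILEGES IN SCHEMA public GRANT SELECT ON TABLES TO ${ZBX_DB_USERNAME_RO};
EOSQL
```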
In `zabbixserver/config/cert` place your SSL cert files. These are what the Zabbix server process serves to clients that connect to it such as `server.zabbix.example.com`. As with [PostgreSQL](#postgres-postgresql) you'll need a CA cert, a domain cert and a key file; file names are `.ZBX_SERVER_CA_FILE`, `.ZBX_SERVER_CERT_FILE` and `.ZBX_SERVER_KEY_FILE`.
In `config` there's also `docker-entrypoint.sh`. This is largely identical to the Zabbix container's internal file as seen in the official upstream GitHub repo at [github.com/zabbix/zabbix-docker commit hash 4236b6d for Dockerfiles/server-pgsql/alpine/docker-entrypoint.sh](https://github.com/zabbix/zabbix-docker/blob/4236b6d502a03ee9a4ab0a3699e740cc45f687a4/Dockerfiles/server-pgsql/alpine/docker-entrypoint.sh) (last retrieved on February 22, 2025).
This is a sloppy workaround to an issue that's present in the newest 7.2 tags (7.2.2 and 7.2.3) where the default `docker-entrypoint.sh` will unconditionally `export` both `ZBX_DB_USER` and `ZBX_DB_PASSWORD` variables which are then unconditionally rendered into `/etc/zabbix/zabbix_server_db.conf` inside the container even when HashiCorp Vault is in use:
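The gist of the workaround, sketched; the variable names are assumptions, compare them against the shipped `docker-entrypoint.sh`:

```shell
# Only export DB credentials when Vault is not in use, so that
# DBUser/DBPassword stay empty in /etc/zabbix/zabbix_server_db.conf
if [ -z "${ZBX_VAULTURL}" ]; then
    export ZBX_DB_USER="${POSTGRES_USER}"
    export ZBX_DB_PASSWORD="${POSTGRES_PASSWORD}"
fi
```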
If HashiCorp Vault is in use, neither `DBUser` nor `DBPassword` may have a value; otherwise Zabbix server will complain and exit. If you have no need for Vault - or Zabbix' official Docker containers are fixed by the time you read this - feel free to skip `docker-entrypoint.sh`.
Besides `zabbixserver/config` there's also `zabbixserver/data` with what looks like a daunting amount of subdirectories. In our example they are all empty and they all belong to bind mounts that are configured with `create_host_path: true`.
If you don't want to mount any files into your Zabbix instance you can leave `zabbixserver/data` alone and Docker will create the necessary subdirs on your Docker host on container start.
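One such bind mount might be declared like this in Compose; the path is an example, with `CONTEXT` coming from your environment file:

```yaml
services:
  zabbixserver:
    volumes:
      - type: bind
        source: /opt/docker-data/zabbixserver-${CONTEXT}/zabbixserver/data/usr/lib/zabbix/externalscripts
        target: /usr/lib/zabbix/externalscripts
        bind:
          create_host_path: true
```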
First things first: directory `zabbixwebnginx/config/modules` is empty and, due to `create_host_path: true`, will be created anyway if you don't create it yourself, so no worries there. In `zabbixwebnginx/config/cert` - as the name suggests - you'll place frontend SSL cert files. That's the domain certificate you want served when visiting the Zabbix frontend with a web browser. In line with our earlier examples this might be a cert for `zabbix.example.com`.
Note that the file names here look relatively normal as opposed to `.ZBX_SERVER_CERT_FILE` and `.ZBX_DB_CERT_FILE` from before. We will be bind-mounting the entire `cert` directory like so:
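A hedged sketch of that mount; the exact declaration in this repo's Compose file may differ:

```yaml
services:
  zabbixwebnginx:
    volumes:
      - type: bind
        source: /opt/docker-data/zabbixserver-${CONTEXT}/zabbixwebnginx/config/cert
        target: /etc/ssl/nginx
        read_only: true
        bind:
          create_host_path: true
```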
The `cert` dir ends up getting bind-mounted into `/etc/ssl/nginx` inside the container. Since Zabbix uses a standard Nginx setup we stick to the Nginx way of calling a default cert and key file. Store your full certificate chain as `ssl.crt` and the corresponding unencrypted key as `ssl.key`. Make sure to also save a `dhparam.pem` parameters file. You can get one such file the quick and dirty way for example from Mozilla at [https://ssl-config.mozilla.org/ffdhe2048.txt](https://ssl-config.mozilla.org/ffdhe2048.txt) - just save it as `dhparam.pem` if you're so inclined. You can alternatively render a file yourself. Assuming the `parallel` binary exists on your machine you can follow [unix.stackexchange.com/a/749156](https://unix.stackexchange.com/a/749156) like so:
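A reconstruction of that approach; the exact flags are assumptions, compare the linked answer:

```shell
# One openssl process per core races for the result; --halt kills the
# losers as soon as the first job succeeds, so only the winner writes
seq 10000 | parallel --halt now,success=1 -j "$(nproc)" -N0 \
  'openssl dhparam -out dhparam.pem 2048'
```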
This starts as many parallel `openssl dhparam` processes as you have CPU cores (assuming you have at most 10,000 cores). Processes essentially race each other which typically lowers waiting time for a finished parameters file by an order of magnitude since you only need one random process to finish. On a moderately modern desktop CPU with four cores this will take about 30 seconds.