HashiCorp, Inc. Vault Docker Compose files
Docker Compose files to spin up an instance of HashiCorp, Inc. Vault.
How to run
Add a COMPOSE_ENV file and store its location in a shell variable, along with the location where this repo lives (here, for example, /opt/containers/hashicorpvault) and all other variables listed below. An example environment file is available at env/fqdn_context.env.example.
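Such an environment file might look roughly like the following sketch. CONTEXT and HASHICORPVAULT_VERSION are the variables referenced further down in this document; the values shown are illustrative, check env/fqdn_context.env.example for the authoritative list.
# Hypothetical COMPOSE_ENV contents, adjust values to your environment
# Deployment context, also used in dataset and project names
CONTEXT=ux_vilnius
# hashicorp/vault image tag to pull and deploy (example value)
HASHICORPVAULT_VERSION=1.15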
When everything's ready, start HashiCorp, Inc. Vault with Docker Compose; otherwise head down to Initial setup first.
Environment
export COMPOSE_DIR='/opt/containers/hashicorpvault'
export COMPOSE_CTX='ux_vilnius'
export COMPOSE_PROJECT='hashicorpvault-'"${COMPOSE_CTX}"
export COMPOSE_FILE="${COMPOSE_DIR}"'/compose.yaml'
export COMPOSE_ENV=<add accordingly>
Context
On your deployment machine create the necessary Docker context to connect to and control the Docker daemon on whatever target host you'll be using, for example:
docker context create fully.qualified.domain.name --docker 'host=ssh://root@fully.qualified.domain.name'
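To confirm the new context can reach the target daemon, you can for example list known contexts and query the remote engine; both are stock Docker commands, the host name is the example one from above.
docker context ls
docker --context 'fully.qualified.domain.name' version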
Pull
Pull images from Docker Hub verbatim.
docker compose --project-name "${COMPOSE_PROJECT}" --file "${COMPOSE_FILE}" --env-file "${COMPOSE_ENV}" pull
Copy to target
Copy images to the target Docker host, assuming you deploy to a machine that itself has no network route to reach Docker Hub or your private registry of choice. Copying in its simplest form involves a local docker save and a remote docker load. Consider the helper mini-project quico.space/Quico/copy-docker, whose copy-docker.sh script enables the following workflow:
export "$(grep -Pi -- '^HASHICORPVAULT_VERSION=' "${COMPOSE_ENV}")"
copy-docker 'hashicorp/vault:'"${HASHICORPVAULT_VERSION}" fully.qualified.domain.name
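Without the helper, the same transfer can be done by hand with a save/load pipe over SSH, roughly like so (a sketch only; image tag and host name as used elsewhere in this document):
# Stream the image from the deployment machine straight into the remote Docker daemon
docker save 'hashicorp/vault:'"${HASHICORPVAULT_VERSION}" | ssh root@fully.qualified.domain.name 'docker load'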
Start
docker --context 'fully.qualified.domain.name' compose --project-name "${COMPOSE_PROJECT}" --file "${COMPOSE_FILE}" --env-file "${COMPOSE_ENV}" up --detach
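To confirm the service came up, list the project's containers on the target:
docker --context 'fully.qualified.domain.name' compose --project-name "${COMPOSE_PROJECT}" --file "${COMPOSE_FILE}" --env-file "${COMPOSE_ENV}" ps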
Clean up
Get rid of unnecessary images on both the deployment and the target machine:
docker --context 'fully.qualified.domain.name' system prune -af
docker system prune -af
Initial setup
We assume you run Docker Compose workloads with ZFS-based bind mounts. ZFS management, that is creating a zpool and setting adequate properties for its datasets, is out of scope of this document.
Datasets
Create ZFS datasets and set permissions as needed.
- Parent dataset
export "$(grep -Pi -- '^CONTEXT=' "${COMPOSE_ENV}")"
zfs create -o canmount=off zpool/data/opt
zfs create -o mountpoint=/opt/docker-data zpool/data/opt/docker-data
- Container-specific datasets
zfs create -p 'zpool/data/opt/docker-data/hashicorpvault-'"${CONTEXT}"'/config'
zfs create -p 'zpool/data/opt/docker-data/hashicorpvault-'"${CONTEXT}"'/data/file'
zfs create -p 'zpool/data/opt/docker-data/hashicorpvault-'"${CONTEXT}"'/data/logs'
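Once created, you can double-check names and mountpoints by listing the datasets recursively:
zfs list -r -o name,mountpoint 'zpool/data/opt/docker-data/hashicorpvault-'"${CONTEXT}"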
Additional files
Place a vault.hcl file on the target server so that the directory layout ends up like this:
hashicorpvault-ux_vilnius/
├── config
│   └── vault.hcl
└── data
    ├── file
    │   ├── ...
    └── logs
        └── ...
The file may look like so:
backend "file" {
path = "/vault/file"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_disable = 1
}
api_addr = "https://fully.qualified.domain.name"
disable_clustering = true
ui = true
With the api_addr setting in place we assume that you'll be running a separate reverse proxy server that terminates https://fully.qualified.domain.name and forwards traffic to Vault.
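For orientation only, the compose.yaml referenced at the top of this document will map the datasets from the Datasets section into the container roughly along these lines. This is a sketch under the assumptions made elsewhere in this document (image tag, service name, port and paths), not a copy of the actual file.
services:
  vault:
    image: hashicorp/vault:${HASHICORPVAULT_VERSION}
    cap_add:
      - IPC_LOCK          # Vault uses mlock to keep secrets out of swap
    ports:
      - '8200:8200'       # Matches the listener in vault.hcl
    volumes:
      - /opt/docker-data/hashicorpvault-${CONTEXT}/config:/vault/config
      - /opt/docker-data/hashicorpvault-${CONTEXT}/data/file:/vault/file
      - /opt/docker-data/hashicorpvault-${CONTEXT}/data/logs:/vault/logs
    command: server       # Runs Vault in server mode with config from /vault/config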
When done head back up to How to run.
Development
Conventional commits
This project uses Conventional Commits for its commit messages.
Commit types
Commit types besides fix and feat are:
- refactor: Keeping functionality while streamlining or otherwise improving function flow
- docs: Documentation for project or components
Commit scopes
The following scopes are known for this project. A Conventional Commits commit message may optionally use one of the following scopes or none:
- hashicorpvault: A change to how the hashicorpvault service component works
- build: Build-related changes such as Dockerfile fixes and features
- mount: Volume or bind mount-related changes
- net: Networking, IP addressing, routing changes
- meta: Affects the project's repo layout, file names etc.
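A hypothetical commit message using one of these types and scopes might read:
fix(mount): correct the bind mount path for Vault file storage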