
Nginx Docker Compose files

Docker Compose files to spin up an instance of Nginx.

How to run

Create a COMPOSE_ENV file and store its location in a shell variable, along with the location where this repo lives (here, for example, /opt/containers/nginx) and all other variables. You'll find an example environment file at env/fqdn_context.env.example.

When everything's ready, start Nginx with Docker Compose; otherwise head down to Initial setup first.

Environment

export COMPOSE_DIR='/opt/containers/nginx'
export COMPOSE_CTX='ux_vilnius'
export COMPOSE_PROJECT='nginx-'"${COMPOSE_CTX}"
export COMPOSE_FILE="${COMPOSE_DIR}"'/compose.yaml'
export COMPOSE_ENV=<add accordingly>

Context

On your deployment machine, create the necessary Docker context to connect to and control the Docker daemon on whatever target host you'll be using, for example:

docker context create fully.qualified.domain.name --docker 'host=ssh://root@fully.qualified.domain.name'

Pull

Pull images from Docker Hub exactly as referenced in the Compose file.

docker compose --project-name "${COMPOSE_PROJECT}" --file "${COMPOSE_FILE}" --env-file "${COMPOSE_ENV}" pull

Copy to target

Copy images to the target Docker host, assuming you deploy to a machine that itself has no network route to Docker Hub or your private registry of choice. Copying in its simplest form involves a local docker save and a remote docker load. Consider the helper mini-project quico.space/Quico/copy-docker, whose copy-docker.sh allows the following workflow:

copy-docker 'nginx:latest' fully.qualified.domain.name
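Absent the helper script, the save-then-load combination described above can be sketched as a small shell function; image and host names are placeholders:

```shell
# Minimal stand-in for copy-docker: stream an image over SSH from the
# deployment machine into the remote Docker daemon's image store.
copy_docker() {
  local image="$1" host="$2"
  docker save "${image}" | ssh "root@${host}" 'docker load'
}

# Example invocation (placeholders; requires docker locally and SSH
# access to the target):
#   copy_docker 'nginx:latest' 'fully.qualified.domain.name'
```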

Start

docker --context 'fully.qualified.domain.name' compose --project-name "${COMPOSE_PROJECT}" --file "${COMPOSE_FILE}" --env-file "${COMPOSE_ENV}" up --detach

Clean up

Get rid of unnecessary images on both the deployment and the target machine:

docker --context 'fully.qualified.domain.name' system prune -af
docker system prune -af

Initial setup

We're assuming you run Docker Compose workloads with ZFS-based bind mounts. ZFS management, i.e. creating a zpool and setting adequate properties for its datasets, is out of scope for this document.

Datasets

Create ZFS datasets and set permissions as needed.

  • Parent dataset

    export "$(grep -Pi -- '^CONTEXT=' "${COMPOSE_ENV}")"
    zfs create -o canmount=off zpool/data/opt
    zfs create -o mountpoint=/opt/docker-data zpool/data/opt/docker-data
    
  • Container-specific datasets

    zfs create -p 'zpool/data/opt/docker-data/nginx-'"${CONTEXT}"'/nginx/conf'
    zfs create -p 'zpool/data/opt/docker-data/nginx-'"${CONTEXT}"'/nginx/data'
    

    This results in a directory structure like so:

    /opt/docker-data/nginx-loft/nginx
    ├── conf
    └── data
    
  • Create subdirs

    mkdir -p '/opt/docker-data/nginx-'"${CONTEXT}"'/nginx/'{'conf/'{'certs','nginx/'{'conf.d','sites-enabled'}},'data/logs'}
    

    This creates the following dir structure:

    /opt/docker-data/nginx-loft/nginx
    ├── conf
    │   ├── certs
    │   └── nginx
    │       ├── conf.d
    │       └── sites-enabled
    └── data
        └── logs
    
  • Change ownership

    chown -R 101:101 '/opt/docker-data/nginx-'"${CONTEXT}"'/nginx/'*
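
    The dataset and subdirectory steps above can be sanity-checked on the target host. Assuming CONTEXT is still exported, the following lists every directory in the tree, which should match the listings above:

```shell
# List the directory tree created by the zfs create and mkdir steps.
find '/opt/docker-data/nginx-'"${CONTEXT}"'/nginx' -type d | sort
```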
    

Additional files

  • Place an ssl.conf and an nginx.conf file on the target server:

    /opt/docker-data/nginx-loft/nginx
    └── conf
        └── nginx
            ├── conf.d
            │   └── ssl.conf
            └── nginx.conf
    
  • The nginx.conf file may look like so:

    user  nginx;
    worker_processes  auto;
    
    error_log  /var/log/nginx/error.log notice;
    pid        /var/run/nginx.pid;
    
    
    events {
        worker_connections  1024;
    }
    
    
    http {
        include       /etc/nginx/mime.types;
        default_type  application/octet-stream;
    
        log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
    
        access_log  /var/log/nginx/access.log  main;
    
        sendfile        on;
        #tcp_nopush     on;
    
        keepalive_timeout  65;
    
        #gzip  on;
    
        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*.conf;
    }
    
  • An ssl.conf file may look like so:

    server_tokens off;
    
    # For a 100% SSL rating at ssllabs.com
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
    ssl_protocols TLSv1.3;
    ssl_prefer_server_ciphers on;
    ssl_ciphers AES256+EECDH:AES256+EDH:!aNULL;
    ssl_stapling on;
    ssl_stapling_verify on;
    ssl_dhparam sslcerts/dhparam.pem;
    ssl_ecdh_curve secp384r1;
    
    add_header Strict-Transport-Security "max-age=31536000; includeSubdomains";
    # In a Nextcloud instance these two are done internally by PHP nowadays.
    # Nextcloud's admin interface will complain if you do these via the reverse
    # proxy.
    add_header X-Frame-Options DENY;
    add_header X-Content-Type-Options nosniff;
    
    # End 100% SSL rating block
    
  • Store SSL certificates as needed in /opt/docker-data/nginx-${CONTEXT}/nginx/conf/certs

  • Add per-site config files to /opt/docker-data/nginx-${CONTEXT}/nginx/conf/nginx/sites-enabled like so:

    /opt/docker-data/nginx-loft/
    └── nginx
        └── conf
            └── nginx
                └── sites-enabled
                    ├── name.domain.qualified.fully.conf
                    └── name.domain.a.also.conf
    

    An individual file may look like the following; the details largely depend on the application.

    server {
        listen                      80;
        server_name                 fully.qualified.domain.name;
    
        access_log /var/log/nginx/name.domain.qualified.fully_plain_access.log main;
        error_log /var/log/nginx/name.domain.qualified.fully_plain_error.log error;
    
        return 308 https://$server_name$request_uri;
    }
    
    server {
        listen                      443 ssl;
        listen                      [::]:443 ssl;
        http2                       on;
        server_name                 fully.qualified.domain.name;
        ssl_certificate             /etc/nginx/sslcerts/name.domain.qualified.fully_fullchain.cer;
        ssl_certificate_key         /etc/nginx/sslcerts/name.domain.qualified.fully.key;
        ssl_trusted_certificate     /etc/nginx/sslcerts/name.domain.qualified.fully_ca.cer;
    
        access_log /var/log/nginx/name.domain.qualified.fully_ssl_access.log main;
        error_log /var/log/nginx/name.domain.qualified.fully_ssl_error.log error;
    
        location / {
            proxy_pass http://fully.qualified.domain.name:63961;
            proxy_set_header Host $http_host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
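The ssl.conf example above references sslcerts/dhparam.pem, but no other step creates that file. A hedged way to generate it once on the target host; the path mirrors the conf/certs directory created earlier, which we assume is mounted at /etc/nginx/sslcerts inside the container:

```shell
# Generate the DH parameters referenced by ssl.conf's ssl_dhparam
# directive. Assumes the directory layout created above and that
# conf/certs is bind-mounted at /etc/nginx/sslcerts.
make_dhparam() {
  local certdir='/opt/docker-data/nginx-'"${CONTEXT}"'/nginx/conf/certs'
  openssl dhparam -out "${certdir}/dhparam.pem" 2048
}

# Run once on the target host, with CONTEXT exported:
#   make_dhparam
```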
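Once certificates and per-site files are in place, the configuration can be checked and reloaded inside the running container without a restart; context and project names are placeholders as above:

```shell
# Syntax-check the configuration inside the running nginx container,
# then reload it only if the check passes. Context and project names
# are placeholders.
nginx_check_and_reload() {
  docker --context 'fully.qualified.domain.name' compose \
    --project-name "${COMPOSE_PROJECT}" exec nginx nginx -t \
  && docker --context 'fully.qualified.domain.name' compose \
    --project-name "${COMPOSE_PROJECT}" exec nginx nginx -s reload
}

# Run once the stack from the Start section is up:
#   nginx_check_and_reload
```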
    

When done, head back up to How to run.

Development

Conventional commits

This project uses Conventional Commits for its commit messages.

Commit types

Commit types besides fix and feat are:

  • refactor: Keeping functionality while streamlining or otherwise improving function flow
  • docs: Documentation for project or components

Commit scopes

The following scopes are known for this project. A Conventional Commits commit message may optionally use one of them or none at all:

  • nginx: A change to how the nginx service component works
  • build: Build-related changes such as Dockerfile fixes and features
  • mount: Volume or bind mount-related changes
  • net: Networking, IP addressing, routing changes
  • meta: Affects the project's repo layout, file names etc.