Compare commits

...

107 Commits

Author SHA1 Message Date
da60952fe4 refactor(docs): Explain Docker registry use 2023-12-24 03:31:14 +01:00
d6ea3f1853 fix(docker-compose): For a single-service Compose file prefill our example README.md with docker compose --profile 'build' build ...
... instead of docker compose --profile 'build-service' build since the latter won't exist.
2023-12-24 03:18:11 +01:00
adb7bf6795 fix(docker-compose): When we have only one service in Docker Compose set its Dockerfile context dir to just build-context/ instead of build-context/service since the latter doesn't exist 2023-12-24 02:56:03 +01:00
215db1682d refactor(docker-compose): Set default ulimits even when var is set (but set to just an empty string) 2023-12-24 01:30:49 +01:00
36f2eecba1 refactor(compose): Copy Docker images with copy-docker 2023-10-13 02:26:41 +02:00
1f588e90bc refactor(compose): Double-check that VIP is bound on target host 2023-10-13 02:23:57 +02:00
b534a9bccf common-settings.yml is now .yaml 2023-10-13 02:08:12 +02:00
e5e78a0527 feat(compose): Work with a registry 2023-10-13 02:06:56 +02:00
d98de5aff0 refactor(compose): Mention private registry when copying 2023-10-13 01:15:24 +02:00
ffaf43e56f common-settings.yml is now .yaml 2023-10-13 01:13:24 +02:00
20d303e79a fix(compose): Update Dockerfile ref to env example file 2023-10-09 01:33:34 +02:00
117627889f fix(compose): Fix bash var quoting 2023-10-08 18:35:10 +02:00
ab9b1009cb fix(compose): Fix line breaks with multi-component build=yes 2023-10-08 17:54:17 +02:00
6605fe0866 Merge pull request 'fix(compose): Use modern Compose file name (#12)' (#13) from 12-compose-file-name-changes into master
Reviewed-on: #13
2023-10-08 15:42:58 +00:00
782981c6f8 fix(compose): Use modern Compose file name (#12) 2023-10-08 17:41:59 +02:00
aef611f731 Merge pull request '10-compose-add-conventional-commits' (#11) from 10-compose-add-conventional-commits into master
Reviewed-on: #11
2023-10-08 15:29:43 +00:00
c6e93b353d refactor(compose): Adjust spacing in example Environment and Build paragraphs (#10) 2023-10-08 17:28:50 +02:00
e112d0ef8c refactor(compose): Adjust spacing in example Environment paragraph (#10) 2023-10-08 17:28:10 +02:00
387ef3bbbe refactor(compose): Remove trailing newline (#10) 2023-10-08 17:25:50 +02:00
31f67e3aad refactor(compose): Adjust spacing in Environment paragraph (#10) 2023-10-08 17:25:11 +02:00
795585260a refactor(compose): Adjust spacing in Build paragraph (#10) 2023-10-08 17:24:44 +02:00
6c08ba37c6 feat(compose): Update Vault example with Conventional commits paragraph (#10) 2023-10-08 17:23:11 +02:00
e6b8f35906 feat(compose): Update Grafana example with Conventional commits paragraph (#10) 2023-10-08 17:21:46 +02:00
692d115d5c refactor(compose): Adjust line breaks (#10) 2023-10-08 17:18:50 +02:00
0042adba00 feat(compose): Add a README.md section explaining Conventional Commits usage (#10) 2023-10-08 17:16:03 +02:00
96cce1634b docs(docker-compose): More _FIXME_ markers 2023-06-25 00:14:02 +02:00
40ab99bf87 docs(docker-compose): More _FIXME_ markers 2023-06-25 00:13:13 +02:00
c460559b34 docs(docker-compose): Fix image link 2023-06-25 00:11:22 +02:00
9e9ec037b8 docs(docker-compose): Typo 2023-06-25 00:00:43 +02:00
71a2fe1beb docs(docker-compose): In example dir layout focus on important files 2023-06-24 23:57:58 +02:00
0436224728 docs(docker-compose): Add paragraph to copy image to target server 2023-06-24 23:57:02 +02:00
e5dcc062de docs(docker-compose): More _FIXME_ markers 2023-06-24 23:56:43 +02:00
bb6777f389 refactor(docker-compose): No needs for FIXME Markdown italics 2023-06-24 23:37:35 +02:00
467e98b01a docs(docker-compose): Add pull instructions 2023-06-24 23:36:59 +02:00
161967ebac docs(docker-compose): Flesh out individual build processes 2023-06-24 23:30:37 +02:00
3026e30783 docs(docker-compose): Pepper _FIXME_ markers throughout README.md to clean it prior to commit 2023-06-24 23:27:59 +02:00
494228b367 docs(docker-compose): Flesh out individual build processes 2023-06-24 23:22:41 +02:00
024a056d9e docs(docker-compose): Render Docker context creation before build process 2023-06-24 23:12:32 +02:00
0f4b7ac7a5 docs(docker-compose): Add ulimits example to docker-compose.yml 2023-06-24 23:01:58 +02:00
75b31844a0 refactor(docker-compose): By default we want a paragraph about build instructions to be included in README.md 2023-06-24 23:01:29 +02:00
19c38255a2 docs(docker-compose): Set owner for files and dot-files 2023-06-24 23:01:00 +02:00
9fc23aa0b2 docs(docker-compose): No need to ask for a context, we set a dummy context in docs 2023-06-21 00:38:03 +02:00
5c8cf16348 docs(docker-compose): No need to ask for a context, we set a dummy context in docs 2023-06-21 00:35:19 +02:00
5645fba3e1 docs(docker-compose): Add a sane ulimits example 2023-06-21 00:27:37 +02:00
79dc452799 docs(docker-compose): Handle remote deployment via --context 2023-06-21 00:27:23 +02:00
de2b657ec1 docs(docker-compose): Update README.md with example directory structure 2023-06-14 22:11:04 +02:00
9ef85a53e6 refactor(cookiecutter): Typo 2023-06-14 22:07:45 +02:00
1995283c73 docs(docker-compose): Minor consistency changes 2023-06-13 02:11:23 +02:00
d62fc4a8a3 docs(docker-compose): Update Cookiecutter instructions with suppressed prompt 2023-06-13 00:37:37 +02:00
e0d39b389d Merge pull request '7-rewrite-docs-inconsistencies' (#8) from 7-rewrite-docs-inconsistencies into master
Reviewed-on: #8

Fixes #7
2023-06-12 22:35:06 +00:00
4bc93e5546 docs(docker-compose): Update example directory layout (#7) 2023-06-13 00:34:19 +02:00
299c15e30d docs(docker-compose): Update example directory layout (#7) 2023-06-13 00:30:51 +02:00
448401b7b0 docs(docker-compose): Link pip section with a more detailed explanation (#7) 2023-06-13 00:30:18 +02:00
94a802c1e6 docs(docker-compose): Update example directory layout 2023-06-13 00:24:33 +02:00
2ab821e182 docs(docker-compose): Add example README Markdown file 2023-06-12 01:29:22 +02:00
ccfaaac36d Merge pull request 'add-readme-to-docker-compose' (#6) from add-readme-to-docker-compose into master
Reviewed-on: #6
2023-06-11 23:26:51 +00:00
8b09465349 docs(docker-compose): Add example README Markdown file 2023-06-12 01:25:26 +02:00
c871116f9f refactor(docker-compose): Add a subnet to examples 2023-06-11 23:19:49 +02:00
0daecee302 refactor(docker-compose): Slim down Docker bind mount directory structure 2023-06-11 23:19:19 +02:00
7fb2be86ee docs(meta): Better distiguish component_list entry examples 2023-06-11 23:05:23 +02:00
c8e6465ee9 docs(meta): Add dev instructions 2023-06-11 23:03:19 +02:00
1e9e1136f0 docs(meta): H1 headlines all the way 2023-06-11 22:51:51 +02:00
7e2cce9c72 refactor(docker-compose): Let's not commit a pyenv .python-version file 2023-06-11 22:50:42 +02:00
b033946444 feat(docker-compose): Add example for health check between two services 2023-06-11 22:49:45 +02:00
3de179d613 feat(docker-compose): Allow env inside Docker Compose Cookiecutter template 2023-06-11 22:49:04 +02:00
a7a8290f66 feat(docker-compose): Align env example file name with other docs 2023-06-11 22:42:17 +02:00
0c167ec6ab feat(docker-compose): Align env example file name with other docs 2023-06-11 22:41:24 +02:00
63cbe035ad feat(docker-compose): Align env example file name with other docs 2023-06-11 22:17:32 +02:00
8297cf976e docs(build): Update example files 2022-07-05 19:53:40 +02:00
fb26d472f2 docs(build): Update docs with inflect and new layout 2022-07-05 19:49:33 +02:00
bfbc829f91 feat(naive-python): Add grammar via inflect, improve overall handling 2022-07-05 19:42:11 +02:00
fb835f4f40 refactor(debug): Increase log level 2022-07-05 19:38:53 +02:00
b6da0efa29 feat(debug): Load inflect to render grammatically correct text 2022-07-05 19:36:52 +02:00
dab22b8c96 fix(config): Remove duplicate Configparser loading 2022-07-05 19:36:45 +02:00
f2c8b15b84 fix(debug): Add exit code 7 definition 2022-07-05 19:36:42 +02:00
8bf8ae3def fix(debug): Print when Rich is unavailable 2022-07-05 19:36:37 +02:00
1d929f5fb3 refactor(build): Double quotes instead of single quotes 2022-07-05 19:36:33 +02:00
0971286823 feat(config): For empty vars check differentiate between print and rich 2022-07-05 19:11:16 +02:00
deee35fc32 feat(config): Add list of config options where empty value is okay 2022-07-05 19:10:47 +02:00
b830f84ebe feat(config): Add check if setting is incorrectly empty 2022-07-05 19:03:21 +02:00
b64dc5f595 refactor(config): Clearer Cookiecutter variable names 2022-07-05 18:58:11 +02:00
03f11d84ce refactor(config): use_rich_logging instead of rich_logging 2022-07-05 18:50:45 +02:00
10d4d8f718 feat(config): Given a LOGLEVEL env var let user change verbosity, default to INFO 2022-07-05 18:36:29 +02:00
a065550d50 feat(config): In systemd make logging output slimmer 2022-07-05 18:35:15 +02:00
87d5a5bf04 feat(config): CFG_KNOWN_DEFAULTS can now be empty and still valid 2022-07-05 18:32:12 +02:00
71d68a47ed refactor(config): By default lead user to sparingly set CFG_KNOWN_SECTION 2022-07-05 18:26:00 +02:00
687b5bf422 refactor(debug): Be more concise in config description 2022-07-05 18:24:51 +02:00
b20e3abd1f refactor(meta): Update PyCharm meta files 2022-06-20 03:56:48 +02:00
e265dc3853 feat(python-naive): Add concept of non-overridable globals 2022-06-20 03:50:52 +02:00
7e971bc330 docs(python-naive): Typo 2022-06-17 03:32:59 +02:00
ae78b026e6 Merge pull request '4-naive-python' (#5) from 4-naive-python into master
Reviewed-on: #5
2022-06-17 01:19:37 +00:00
c9a8c0707e feat(naive-python): Add Cookiecutter template to create a Python project 2022-06-17 03:17:12 +02:00
7e960f3fce refactor(meta): Add PyCharm specifics 2022-06-17 03:16:06 +02:00
7bbc347dbf refactor(meta): Add gitignore files to deal with PyCharm 2022-06-17 03:13:21 +02:00
c69ae6f05b docs(docker-compose): Typo 2022-06-06 04:48:42 +02:00
aed1a8aafa docs(docker-compose): Typo 2022-06-06 04:45:05 +02:00
945eb67104 docs(docker-compose): Typo 2022-06-06 04:43:49 +02:00
d36e30638c docs(docker-compose): Align pointers 2022-06-06 04:42:40 +02:00
4c88acc64b Merge pull request '2-dir-name-different-from-component-name' (#3) from 2-dir-name-different-from-component-name into master
Reviewed-on: #3
2022-06-06 02:39:07 +00:00
0ea8efffcd feat(docker-compose): Allow directory name different from service or component names 2022-06-06 04:37:03 +02:00
2540cb5ba8 refactor(docker-compose): Introduce public var 'project_slug', base private rendered var on it 2022-06-06 02:50:55 +02:00
885dae2772 docs(docker-compose): Update examples to not use vars in keys 2022-06-06 02:25:20 +02:00
50cc3d409a fix(docker-compose): Remove var from networks key
Fixes #1
2022-06-06 02:02:26 +02:00
08db499fb2 docs(meta): Typo 2022-06-05 00:34:49 +02:00
5ab2f327d5 docs(meta): Typo 2022-06-05 00:29:10 +02:00
e762a6d07d docs(meta): Use CONTEXT env var where sensible 2022-06-05 00:27:19 +02:00
a2183240aa docs(meta): Rearrange docker-compose docs 2022-06-05 00:17:29 +02:00
47 changed files with 2102 additions and 304 deletions

237
.gitignore vendored Normal file

@@ -0,0 +1,237 @@
# ---> JetBrains
# Covers JetBrains IDEs: IntelliJ, RubyMine, PhpStorm, AppCode, PyCharm, CLion, Android Studio, WebStorm and Rider
# Reference: https://intellij-support.jetbrains.com/hc/en-us/articles/206544839
# pyenv
.python-version
# User-specific stuff
.idea/**/workspace.xml
.idea/**/tasks.xml
.idea/**/usage.statistics.xml
.idea/**/dictionaries
.idea/**/shelf
# AWS User-specific
.idea/**/aws.xml
# Generated files
.idea/**/contentModel.xml
# Sensitive or high-churn files
.idea/**/dataSources/
.idea/**/dataSources.ids
.idea/**/dataSources.local.xml
.idea/**/sqlDataSources.xml
.idea/**/dynamic.xml
.idea/**/uiDesigner.xml
.idea/**/dbnavigator.xml
# Gradle
.idea/**/gradle.xml
.idea/**/libraries
# Gradle and Maven with auto-import
# When using Gradle or Maven with auto-import, you should exclude module files,
# since they will be recreated, and may cause churn. Uncomment if using
# auto-import.
# .idea/artifacts
# .idea/compiler.xml
# .idea/jarRepositories.xml
# .idea/modules.xml
# .idea/*.iml
# .idea/modules
# *.iml
# *.ipr
# CMake
cmake-build-*/
# Mongo Explorer plugin
.idea/**/mongoSettings.xml
# File-based project format
*.iws
# IntelliJ
out/
# mpeltonen/sbt-idea plugin
.idea_modules/
# JIRA plugin
atlassian-ide-plugin.xml
# Cursive Clojure plugin
.idea/replstate.xml
# SonarLint plugin
.idea/sonarlint/
# Crashlytics plugin (for Android Studio and IntelliJ)
com_crashlytics_export_strings.xml
crashlytics.properties
crashlytics-build.properties
fabric.properties
# Editor-based Rest Client
.idea/httpRequests
# Android studio 3.1+ serialized cache file
.idea/caches/build_file_checksums.ser
# ---> JetBrainsWorkspace
# Additional coverage for JetBrains IDEs workspace files
.idea/deployment.xml
.idea/misc.xml
.idea/remote-mappings.xml
.idea/*.iml
# ---> Python
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
# C extensions
*.so
# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
share/python-wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST
# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec
# Installer logs
pip-log.txt
pip-delete-this-directory.txt
# Unit test / coverage reports
htmlcov/
.tox/
.nox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
*.py,cover
.hypothesis/
.pytest_cache/
cover/
# Translations
*.mo
*.pot
# Django stuff:
*.log
local_settings.py
db.sqlite3
db.sqlite3-journal
# Flask stuff:
instance/
.webassets-cache
# Scrapy stuff:
.scrapy
# Sphinx documentation
docs/_build/
# PyBuilder
.pybuilder/
target/
# Jupyter Notebook
.ipynb_checkpoints
# IPython
profile_default/
ipython_config.py
# pyenv
# For a library or package, you might want to ignore these files since the code is
# intended to run in multiple environments; otherwise, check them in:
# .python-version
# pipenv
# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
# However, in case of collaboration, if having platform-specific dependencies or dependencies
# having no cross-platform support, pipenv may install dependencies that don't work, or not
# install all needed dependencies.
#Pipfile.lock
# poetry
# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
# This is especially recommended for binary packages to ensure reproducibility, and is more
# commonly ignored for libraries.
# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
#poetry.lock
# PEP 582; used by e.g. github.com/David-OConnor/pyflow
__pypackages__/
# Celery stuff
celerybeat-schedule
celerybeat.pid
# SageMath parsed files
*.sage.py
# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/
!docker-compose/examples/*/env
!docker-compose/{{ cookiecutter.__project_slug }}/env
# Spyder project settings
.spyderproject
.spyproject
# Rope project settings
.ropeproject
# mkdocs documentation
/site
# mypy
.mypy_cache/
.dmypy.json
dmypy.json
# Pyre type checker
.pyre/
# pytype static type analyzer
.pytype/
# Cython debug symbols
cython_debug/

0
.idea/.gitignore generated vendored Normal file

@@ -0,0 +1,6 @@
<component name="InspectionProjectProfileManager">
  <settings>
    <option name="USE_PROJECT_PROFILE" value="false" />
    <version value="1.0" />
  </settings>
</component>

8
.idea/modules.xml generated Normal file

@@ -0,0 +1,8 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="ProjectModuleManager">
    <modules>
      <module fileurl="file://$PROJECT_DIR$/.idea/py-cookiecutter-templates.iml" filepath="$PROJECT_DIR$/.idea/py-cookiecutter-templates.iml" />
    </modules>
  </component>
</project>

12
.idea/vcs.xml generated Normal file

@@ -0,0 +1,12 @@
<?xml version="1.0" encoding="UTF-8"?>
<project version="4">
  <component name="CommitMessageInspectionProfile">
    <profile version="1.0">
      <inspection_tool class="CommitFormat" enabled="true" level="WARNING" enabled_by_default="true" />
      <inspection_tool class="CommitNamingConvention" enabled="true" level="WARNING" enabled_by_default="true" />
    </profile>
  </component>
  <component name="VcsDirectoryMappings">
    <mapping directory="$PROJECT_DIR$" vcs="Git" />
  </component>
</project>

210
README.md

@@ -2,24 +2,26 @@
Project directory structure templates for the Python Cookiecutter package.
## What's Cookiecutter?
# What's Cookiecutter?
The Python package [Cookiecutter](https://github.com/cookiecutter/cookiecutter) assists in creating a directory structure for whatever purpose you need, such as a new Python project, an Ansible role or a `docker-compose` project: anything, really, that benefits from a uniform, reproducible directory structure. If you've ever wanted to put project structure best practices into version control, Cookiecutter is here to help.
Cookiecutter is governed by so-called Cookiecutter templates; most of the magic inside those templates happens via the [Jinja2 template engine](https://palletsprojects.com/p/jinja/). You'll feel right at home if you're familiar with Ansible.
## Repo layout
# Repo layout
Each subdirectory in this repo is a Cookiecutter template; you'll recognize them by their telltale `cookiecutter.json` files. Directories usually also have a README file explaining more about each individual template.
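Since a directory counts as a template exactly when it carries a `cookiecutter.json` file, templates are easy to spot programmatically. A minimal sketch; the `find_templates` helper is hypothetical and not part of this repo:

```python
from pathlib import Path

def find_templates(repo_root: str = ".") -> list[str]:
    # A subdirectory is a Cookiecutter template if it contains cookiecutter.json
    return sorted(
        p.name
        for p in Path(repo_root).iterdir()
        if p.is_dir() and (p / "cookiecutter.json").is_file()
    )
```

Running this against a checkout of the repo would list entries such as `docker-compose`.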
## Get started
# Get started
Get Cookiecutter like so:
```
pip install cookiecutter
```
Execute a template like so, `docker-compose` as an example:
Unfamiliar with Python and `pip`? Check out [Developing](#developing) further down to get started with a virtual environment.
When all is set, execute a template like so, `docker-compose` as an example:
```
cookiecutter https://quico.space/Quico/py-cookiecutter-templates.git --directory 'docker-compose'
```
@@ -28,9 +30,13 @@ Cookiecutter prompts you for whatever info the template needs then generates fil
This is Cookiecutter prompting for info:
```
service []: grafana
project_slug [dir-name]: grafana
service [grafana]:
component_list [grafana]: grafana,nginx
context []: cncf
Select build:
1 - no
2 - yes
Choose from 1, 2 [1]:
```
The end result is a directory structure that has everything you need to hit the ground running.
@@ -50,9 +56,191 @@ The end result is a directory structure that has everything you need to hit the
│   ├── Dockerfile
│   └── extras
│   └── .gitkeep
├── common-settings.yml
├── docker-compose.override.yml
├── docker-compose.yml
└── env
└── fully.qualified.domain.name.example
├── common-settings.yaml
├── compose.override.yaml
├── compose.yaml
├── env
│   └── fqdn_context.env.example
└── README.md
```
# Developing
To change Cookiecutter templates, get yourself an environment, then make, test and commit your changes. First things first: the environment.
## Environment
Get yourself a Python virtual environment. In this example we're assuming you're running a Linux operating system and you'll be using [pyenv](https://github.com/pyenv/pyenv) to manage virtual environments.
### pyenv
- Install pyenv with what they call their automatic installer. Feel free to also read up on pyenv on its [GitHub project page](https://github.com/pyenv/pyenv).
```
curl https://pyenv.run | bash
```
- Following the installer's instructions, add at least the following commands to your `~/.bashrc` file:
```
export PYENV_ROOT="$HOME/.pyenv"
command -v pyenv >/dev/null || export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
```
- You will also likely want to add this line, which makes sure pyenv auto-activates venvs when you navigate into certain directories. More on that down at [venv](#venv).
```
eval "$(pyenv virtualenv-init -)"
```
- Also make sure `~/.bashrc` gets loaded, for example by including this in a `~/.bash_profile` file:
```
[[ -f ~/.bashrc ]] && . ~/.bashrc
```
- Reload `~/.bashrc`
```
source ~/.bashrc
```
### Python
- Update pyenv's package list
```
pyenv update
```
- Pick a Python version you like, for example copy-paste `3.11.4`:
```
pyenv install --list | less
...
3.10.12
3.11.0
3.11-dev
3.11.1
3.11.2
3.11.3
3.11.4
3.12.0b2
3.12-dev
3.13-dev
...
```
- Install Python, wait for compilation to finish on your machine
```
pyenv install 3.11.4
```
### Repo
- Clone this repo
```
git clone https://quico.space/Quico/py-cookiecutter-templates.git ~/py-cookiecutter-templates
```
### venv
- Create a virtual environment where `3.11.4` is the Python version you want to use in this venv and `cookiecutter-3.11.4` is the name of your venv. Adding or dropping the Python version from your venv name comes down to personal preference.
```
pyenv virtualenv 3.11.4 cookiecutter-3.11.4
```
- In your repo path `~/py-cookiecutter-templates` create a `.python-version` file to tell pyenv to always activate your desired venv when inside this dir.
```
cd ~/py-cookiecutter-templates
pyenv local cookiecutter-3.11.4
```
pyenv will immediately prefix your shell's `${PS1}` prompt with the venv name.
```
(cookiecutter-3.11.4) [✔ 23:19 user@machine py-cookiecutter-templates]$
```
It will deactivate the venv and drop its prefix as soon as you navigate out of this dir.
```
(cookiecutter-3.11.4) [✔ 23:19 user@machine py-cookiecutter-templates]$ cd
[✔ 23:19 user@machine ~]$
```
For now, though, stay in `~/py-cookiecutter-templates`; you're going to want to pip-install [cookiecutter](https://pypi.org/project/cookiecutter).
- Upgrade `pip`
```
pip install --upgrade pip
```
- Install [cookiecutter](https://pypi.org/project/cookiecutter)
```
pip install cookiecutter
```
All done, your environment is set up.
## Change
Make some code changes, for example to the Docker Compose Cookiecutter template. When you're happy run your local Cookiecutter template to see how your changes are rendering.
- Create `/tmp/cookiecutter-docker-compose`
```
mkdir '/tmp/cookiecutter-docker-compose'
```
- Render a Docker Compose directory into your output directory, answer Cookiecutter's prompts:
```
# cookiecutter ~/py-cookiecutter-templates \
--directory docker-compose \
--output-dir /tmp/cookiecutter-docker-compose
project_slug [dir-name]: mydir
service [mydir]: myservice
component_list [myservice]: mycomponent_one,mycomponent_two
Select build:
1 - no
2 - yes
Choose from 1, 2 [1]: 2
```
- Observe that in `/tmp/cookiecutter-docker-compose` you now have your rendered Docker Compose dir:
```
# tree -a .
.
└── mydir
├── build-context
│   ├── mycomponent_one
│   │   ├── docker-data
│   │   │   └── .gitkeep
│   │   ├── Dockerfile
│   │   └── extras
│   │   └── .gitkeep
│   └── mycomponent_two
│   ├── docker-data
│   │   └── .gitkeep
│   ├── Dockerfile
│   └── extras
│   └── .gitkeep
├── common-settings.yaml
├── compose.override.yaml
├── compose.yaml
├── env
│   └── fqdn_context.env.example
└── README.md
```
- For rapid testing you will most likely not want to type prompt answers repeatedly. Give them as command line arguments instead, and specify `--no-input` to suppress prompts:
```
cookiecutter ~/py-cookiecutter-templates \
--no-input \
--directory docker-compose \
--output-dir /tmp/cookiecutter-docker-compose \
project_slug=mydir \
service=myservice \
component_list=mycomponent_one,mycomponent_two \
build=yes
```
## Commit prep
When you're about ready to commit changes into this repo check the following bullet points.
- Did you update the [Cookiecutter template README.md file](docker-compose/README.md), here for example the one for Docker Compose?
- Change in behavior?
- An updated example directory layout?
- Updated example file content?
- Did you commit a new, completely rendered example directory structure into the [docker-compose/examples](docker-compose/examples) dir?
- Did you change something that affects existing example directories? If so, rerender them.

@@ -1,11 +1,6 @@
# Cookiecutter docker-compose template
# docker-compose template
## Get started
Get Cookiecutter like so:
```
pip install cookiecutter
```
## Run it
Execute this template like so:
```
@@ -14,24 +9,67 @@ cookiecutter https://quico.space/Quico/py-cookiecutter-templates.git --directory
Cookiecutter interactively prompts you for the following info, here with example answers:
```
service []: grafana
project_slug [dir-name]: grafana
service [grafana]:
component_list [grafana]: grafana,nginx
context []: cncf
Select build:
1 - no
2 - yes
Choose from 1, 2 [1]:
```
Done, directory structure and files for your next `docker-compose` project are ready for you to hit the ground running.
## Explanation and terminology
Each `docker-compose` project forms a *__service__* that may consist of one or more *__components__*.
Your four answers translate as follows into rendered files.
The `service` variable by default is empty. In this example we've chosen to name the service `grafana`. We want `grafana` to consist of two components, namely Grafana itself and Nginx.
1. The `project_slug` is used only as the directory name. A container named `vault` may be fine, but the project directory name `hashicorpvault` might be more descriptive.
```
.
└── hashicorpvault <--- Here
├── build-context
│   ├── docker-data
│   │   └── .gitkeep
│   ├── Dockerfile
...
```
2. The `service` variable by default copies your `project_slug` answer. It's a style decision whether you leave it at that and just hit `Enter`. The service name will come up in rendered `compose.yaml` at purely cosmetic locations such as the `networks:` key, `container_name:` and `/opt/docker-data` volume mount presets, here with `ftp` as the service name:
```
services:
mysql:
image: "mysql:${MYSQL_VERSION}"
container_name: "ftp-mysql-${CONTEXT}" <---
networks:
ftp-default: <---
...
volumes:
# - /opt/docker-data/ftp-mysql-${CONTEXT}/... <---
# - /opt/docker-data/ftp-mysql-${CONTEXT}/... <---
# - /opt/docker-data/ftp-mysql-${CONTEXT}/... <---
...
```
Syntax for a multi-component `component_list` is a comma-separated list without spaces. The string `grafana,nginx` will be treated as a list of two components. Capitalization doesn't matter; we lowercase all strings automatically.
3. Treat `component_list` as the list of Docker images that make up your service. Each `docker-compose` project forms a *__service__* - see above - that consists of one or more *__components__*. They're your `services:`, your container, volume and variable names etc.:
```
services:
grafana: <---
image: "grafana:${GRAFANA_VERSION}" <---
container_name: "grafana-grafana-${CONTEXT}" <---
...
environment:
# GRAFANA_USER: ${GRAFANA_USER} <---
# GRAFANA_PASSWORD: ${GRAFANA_PASSWORD} <---
...
```
The template prefills `component_list` with whatever you previously entered as the `service` name. To create the directory structure for a _single-component service_, confirm the default. If `component_list` and `service` are identical, the directory structure will be that of a _single-component service_.
4. The `build` prompt is a yes-no question. Cookiecutter will create a `README.md` file with copy-pastable Docker Compose commands prefilled. If you answer `yes` to this prompt, `README.md` will contain an example paragraph that explains the build process along the lines of:
```
docker compose ... --profile 'build' build
```
By answering `no` (or just hitting `<Enter>` to accept the default of `no`) no such paragraph will be added to the example Markdown file. Build instructions are only really needed if you locally build a derivative image.
The last prompt for a *__context__* is a generic string to help you distinguish deployments. It can be whatever you want such as for example a team name, here `cncf`.
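Under the hood the template derives private `__*_slug` variables from your answers via plain Python string methods embedded in Jinja2 expressions (see `cookiecutter.json` further down). A stand-alone sketch of that normalization; the `slugify` helper name is hypothetical, the actual logic lives inline in the template:

```python
def slugify(value: str) -> str:
    # Mirrors the Jinja2 expressions in cookiecutter.json: lowercase,
    # then replace spaces and hyphens with underscores.
    return value.lower().replace(" ", "_").replace("-", "_")

# The prompt answers from the example above
service = slugify("Grafana")                       # "grafana"
components = slugify("Grafana,Nginx").split(",")   # ["grafana", "nginx"]
context = slugify("CNCF")                          # "cncf"

# A service is single-component when component_list equals the service name
single_component = components == [service]
```

This is why capitalization in your answers doesn't matter, and why multi-component lists must be comma-separated without spaces.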
Also check out [the Caveats section](#caveats) at the end to learn what this template does not do well.
## Result
@@ -54,15 +92,16 @@ Above example of a multi-component (two in this case) `grafana` service will giv
│   ├── Dockerfile
│   └── extras
│   └── .gitkeep
├── common-settings.yml
├── docker-compose.override.yml
├── docker-compose.yml
└── env
└── fully.qualified.domain.name.example
├── common-settings.yaml
├── compose.override.yaml
├── compose.yaml
├── env
│   └── fqdn_context.env.example
└── README.md
```
Check out file contents over in the [examples/grafana subdir](/examples/grafana).
Check out file contents over in the [examples/grafana](examples/grafana) subdir.
### Single-component
### Single-component service
With an alternative single-component `hashicorpvault` service the result may look like this:
```
@@ -74,10 +113,35 @@ With an alternative single-component `hashicorpvault` service the result may loo
│   ├── Dockerfile
│   └── extras
│   └── .gitkeep
├── common-settings.yml
├── docker-compose.override.yml
├── docker-compose.yml
└── env
└── fully.qualified.domain.name.example
├── common-settings.yaml
├── compose.override.yaml
├── compose.yaml
├── env
│   └── fqdn_context.env.example
└── README.md
```
Check out file contents over in the [examples/hashicorpvault subdir](/examples/hashicorpvault).
Check out file contents over in the [examples/hashicorpvault](examples/hashicorpvault) subdir.
## Caveats
Consider Cookiecutter's project directory and rendered files a starting point. It won't do everything perfectly.
Imagine if you will a service that consists of [Infinispan](https://infinispan.org/) among other things. In Docker Hub's content-addressable image store Infinispan's location is `infinispan/server`, so you obviously want that exact string with a forward slash to show up in your `compose.yaml` as the `image:` key's value; the same goes for your `Dockerfile`. The `image:` key's value comes from what you enter at Cookiecutter's `component_list` prompt. Component strings are then also used to pre-fill the `volumes:` key.
_**This**_ will cause obvious issues (but the `image:` key is kinda correct):
```
services:
infinispan/server:
image: "infinispan/server:${INFINISPAN/SERVER_VERSION}"
container_name: "cacheman-infinispan/server-${CONTEXT}"
```
_**This**_ won't cause issues (but you'll have to then go in and manually change the `image:` key to use `infinispan/server`):
```
services:
infinispan:
image: "infinispan:${INFINISPAN_VERSION}"
container_name: "cacheman-infinispan-${CONTEXT}"
```
You're going to want to keep it simple and go with option 2.
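One way to catch such names ahead of time is to check your prompt answers before running Cookiecutter. This is a sketch only; the template itself performs no such validation, and `validate_components` is a hypothetical helper:

```python
import re

# Component names end up in YAML keys, container names and shell variable
# names, so restrict them to characters that are safe in all three.
SAFE_COMPONENT = re.compile(r"^[a-z0-9_]+$")

def validate_components(component_list: str) -> list[str]:
    components = component_list.lower().split(",")
    for component in components:
        if not SAFE_COMPONENT.match(component):
            raise ValueError(
                f"Component {component!r} contains unsafe characters; "
                "use e.g. 'infinispan' instead of 'infinispan/server'"
            )
    return components

validate_components("grafana,nginx")         # fine
# validate_components("infinispan/server")   # raises ValueError
```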

@@ -1,9 +1,9 @@
{
"service": "",
"project_slug": "dir-name",
"__project_slug": "{{ cookiecutter.project_slug.lower().replace(' ', '_').replace('-', '_') }}",
"service": "{{ cookiecutter.__project_slug }}",
"__service_slug": "{{ cookiecutter.service.lower().replace(' ', '_').replace('-', '_') }}",
"component_list": "{{ cookiecutter.__service_slug }}",
"__component_list_slug": "{{ cookiecutter.component_list.lower().replace(' ', '_').replace('-', '_') }}",
"context": "",
"__context_slug": "{{ cookiecutter.context.lower().replace(' ', '_').replace('-', '_') }}",
"__project_slug": "{{ cookiecutter.__service_slug }}"
"build": ["yes", "no"]
}

@@ -0,0 +1,142 @@
# FIXME
Search and replace all mentions of FIXME with sensible content in this file and in [compose.yaml](compose.yaml).
# Grafana Docker Compose files
Docker Compose files to spin up an instance of Grafana FIXME capitalization FIXME.
# How to run
Add a `COMPOSE_ENV` file and save its location in a shell variable, along with the location where this repo lives (here for example `/opt/containers/grafana`) plus all other variables. At [env/fqdn_context.env.example](env/fqdn_context.env.example) you'll find an example environment file.
When everything's ready, start Grafana with Docker Compose; otherwise head down to [Initial setup](#initial-setup) first.
## Environment
```
export COMPOSE_DIR='/opt/containers/grafana'
export COMPOSE_CTX='ux_vilnius'
export COMPOSE_PROJECT='grafana-'"${COMPOSE_CTX}"
export COMPOSE_FILE="${COMPOSE_DIR}"'/compose.yaml'
export COMPOSE_ENV=<add accordingly>
```
## Context
On your deployment machine create the necessary Docker context to connect to and control the Docker daemon on whatever target host you'll be using, for example:
```
docker context create fully.qualified.domain.name --docker 'host=ssh://root@fully.qualified.domain.name'
```
## Pull
Pull images from Docker Hub verbatim.
```
docker compose --project-name "${COMPOSE_PROJECT}" --file "${COMPOSE_FILE}" --env-file "${COMPOSE_ENV}" --profile 'full' pull
```
## Copy to target
Copy images to the target Docker host; this assumes you deploy to a machine that itself has no network route to Docker Hub or your private registry of choice. Copying in its simplest form involves a local `docker save` and a remote `docker load`. Consider the helper mini-project [quico.space/Quico/copy-docker](https://quico.space/Quico/copy-docker) where [copy-docker.sh](https://quico.space/Quico/copy-docker/src/branch/main/copy-docker.sh) allows the following workflow:
```
source "${COMPOSE_ENV}"
# FIXME Docker Hub image name with or without slash? FIXME
for image in 'grafana:'"${GRAFANA_VERSION}" 'nginx:'"${NGINX_VERSION}"; do
copy-docker "${image}" fully.qualified.domain.name
done
```
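At its core the copy is a local `docker save` piped into a remote `docker load`. A minimal sketch that only assembles the pipeline for review (the image tag is a hypothetical example, and the real copy-docker.sh may add compression and error handling on top):
```
image='grafana:10.1.0'                   # hypothetical version tag
target='fully.qualified.domain.name'
# Assemble the save/load pipeline; running it requires SSH access to the target.
pipeline='docker save '"${image}"' | ssh '"${target}"' docker load'
echo "${pipeline}"
```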
## Start
FIXME Does the service use a virtual IP address? FIXME
Make sure your service's virtual IP address is bound on your target host then start containers.
```
docker --context 'fully.qualified.domain.name' compose --project-name "${COMPOSE_PROJECT}" --file "${COMPOSE_FILE}" --env-file "${COMPOSE_ENV}" --profile 'full' up --detach
```
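Whether the VIP is actually bound can be checked non-destructively first; a hedged sketch (the `ip` tool and the VIP value from the env example are assumptions):
```
GRAFANA_VIP='10.1.1.2'
# Look for the VIP among the host's bound addresses; print a hint
# instead of failing hard.
if ip -brief address 2>/dev/null | grep -qF "${GRAFANA_VIP}"; then
  echo 'VIP bound'
else
  echo 'VIP missing, bind it before starting containers'
fi
```
On the deployment machine you'd typically run this wrapped in an `ssh fully.qualified.domain.name` call against the target host.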
# Initial setup
We're assuming you run Docker Compose workloads with ZFS-based bind mounts. ZFS management (creating a zpool and setting adequate properties for its datasets) is out of scope of this document.
## Datasets
Create ZFS datasets and set permissions as needed.
* Parent dataset
```
zfs create -o mountpoint=/opt/docker-data 'zpool/docker-data'
```
* Container-specific datasets
```
zfs create -p 'zpool/docker-data/grafana-'"${COMPOSE_CTX}"'/grafana/data/db'
zfs create -p 'zpool/docker-data/grafana-'"${COMPOSE_CTX}"'/grafana/data/logs'
zfs create -p 'zpool/docker-data/grafana-'"${COMPOSE_CTX}"'/grafana/config'
zfs create -p 'zpool/docker-data/grafana-'"${COMPOSE_CTX}"'/nginx/data/db'
zfs create -p 'zpool/docker-data/grafana-'"${COMPOSE_CTX}"'/nginx/data/logs'
zfs create -p 'zpool/docker-data/grafana-'"${COMPOSE_CTX}"'/nginx/config'
```
FIXME When changing bind mount locations to real ones remember to also update `volumes:` in [compose.yaml](compose.yaml) FIXME
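The dataset names follow one pattern per component, so a loop can generate them for review before running `zfs create` on the target (a sketch; the component list mirrors [compose.yaml](compose.yaml)):
```
COMPOSE_CTX='ux_vilnius'
# Print every dataset path the steps above create.
for comp in grafana nginx; do
  for sub in data/db data/logs config; do
    echo 'zpool/docker-data/grafana-'"${COMPOSE_CTX}"'/'"${comp}"'/'"${sub}"
  done
done
```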
* Create subdirs
```
mkdir -p '/opt/docker-data/grafana-'"${COMPOSE_CTX}"'/grafana/'{'.ssh','config','data','projects'}
```
* Change ownership
```
chown -R 1000:1000 '/opt/docker-data/grafana-'"${COMPOSE_CTX}"'/grafana/data/'*
```
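A cheap pre-flight check confirms the subdirectories exist before the `up` (a sketch; the path assumes the `ux_vilnius` context from above):
```
base='/opt/docker-data/grafana-ux_vilnius/grafana'
# Report each expected subdirectory as present or missing.
for d in '.ssh' 'config' 'data' 'projects'; do
  if [ -d "${base}/${d}" ]; then
    echo 'ok      '"${d}"
  else
    echo 'missing '"${d}"
  fi
done
```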
## Additional files
Place the following files on the target server. Use the directory structure at [build-context](build-context) as a guide, specifically the `docker-data` subtree.
FIXME Add details about files that aren't self-explanatory FIXME
```
build-context/
├── grafana
│ ├── docker-data
│ | └── config
│ │ └── grafana.cfg
│ ├── ...
│ └── ...
└── nginx
├── docker-data
| └── config
│ └── nginx.cfg
├── ...
└── ...
```
When done head back up to [How to run](#how-to-run).
# Development
## Conventional commits
This project uses [Conventional Commits](https://www.conventionalcommits.org/) for its commit messages.
### Commit types
Commit _types_ besides `fix` and `feat` are:
- `refactor`: Keeping functionality while streamlining or otherwise improving function flow
- `docs`: Documentation for project or components
### Commit scopes
The following _scopes_ are known for this project. A Conventional Commits commit message may optionally use one of the following scopes or none:
- `grafana`: A change to how the `grafana` service component works
- `nginx`: A change to how the `nginx` service component works
- `build`: Build-related changes such as `Dockerfile` fixes and features
- `mount`: Volume or bind mount-related changes
- `net`: Networking, IP addressing, routing changes
- `meta`: Affects the project's repo layout, file names etc.

View File

@@ -1,6 +1,6 @@
# For the remainder of this Dockerfile EXAMPLE_ARG_FOR_DOCKERFILE will be
# available with a value of 'must_be_available_in_dockerfile', check out the env
# file at 'env/fully.qualified.domain.name.example' for reference.
# file at 'env/fqdn_context.env.example' for reference.
# ARG EXAMPLE_ARG_FOR_DOCKERFILE
# Another env var, this one's needed in the example build step below:

View File

@@ -1,6 +1,6 @@
# For the remainder of this Dockerfile EXAMPLE_ARG_FOR_DOCKERFILE will be
# available with a value of 'must_be_available_in_dockerfile', check out the env
# file at 'env/fully.qualified.domain.name.example' for reference.
# file at 'env/fqdn_context.env.example' for reference.
# ARG EXAMPLE_ARG_FOR_DOCKERFILE
# Another env var, this one's needed in the example build step below:

View File

@@ -1,5 +1,6 @@
services:
grafana-build:
# FIXME image name with or without slash? Docker Hub or private registry? With or without *_BUILD_DATE? FIXME
image: "grafana:${GRAFANA_VERSION}"
profiles: ["build", "build-grafana"]
build:
@@ -9,6 +10,7 @@ services:
EXAMPLE_ARG_FOR_DOCKERFILE: "${EXAMPLE_ARG_FROM_ENV_FILE}"
GRAFANA_VERSION: "${GRAFANA_VERSION}"
nginx-build:
# FIXME image name with or without slash? Docker Hub or private registry? With or without *_BUILD_DATE? FIXME
image: "nginx:${NGINX_VERSION}"
profiles: ["build", "build-nginx"]
build:

View File

@@ -0,0 +1,72 @@
services:
grafana:
# FIXME image name with or without slash? Docker Hub or private registry? With or without *_BUILD_DATE? FIXME
image: "grafana:${GRAFANA_VERSION}"
container_name: "grafana-grafana-${CONTEXT}"
networks:
grafana-default:
profiles: ["full", "grafana"]
depends_on:
nginx:
condition: service_healthy
ulimits:
nproc: ${ULIMIT_NPROC:-65535}
nofile:
soft: ${ULIMIT_NPROC:-65535}
hard: ${ULIMIT_NPROC:-65535}
extends:
file: common-settings.yaml
service: common-settings
ports:
# - "8080:80"
volumes:
# When changing bind mount locations to real ones remember to
# also update "Initial setup" section in README.md.
# - /opt/docker-data/grafana-${CONTEXT}/grafana/data/db:/usr/lib/grafana
# - /opt/docker-data/grafana-${CONTEXT}/grafana/data/logs:/var/log/grafana
# - /opt/docker-data/grafana-${CONTEXT}/grafana/config:/etc/grafana
environment:
# GRAFANA_USER: ${GRAFANA_USER}
# GRAFANA_PASSWORD: ${GRAFANA_PASSWORD}
nginx:
# FIXME image name with or without slash? Docker Hub or private registry? With or without *_BUILD_DATE? FIXME
image: "nginx:${NGINX_VERSION}"
container_name: "grafana-nginx-${CONTEXT}"
networks:
grafana-default:
profiles: ["full", "nginx"]
healthcheck:
test: ["CMD", "fping", "--count=1", "${GRAFANA_VIP}", "--period=500", "--quiet"]
interval: 3s
timeout: 1s
retries: 60
start_period: 2s
ulimits:
nproc: ${ULIMIT_NPROC:-65535}
nofile:
soft: ${ULIMIT_NPROC:-65535}
hard: ${ULIMIT_NPROC:-65535}
extends:
file: common-settings.yaml
service: common-settings
ports:
# - "8080:80"
volumes:
# When changing bind mount locations to real ones remember to
# also update "Initial setup" section in README.md.
# - /opt/docker-data/grafana-${CONTEXT}/nginx/data/db:/usr/lib/nginx
# - /opt/docker-data/grafana-${CONTEXT}/nginx/data/logs:/var/log/nginx
# - /opt/docker-data/grafana-${CONTEXT}/nginx/config:/etc/nginx
environment:
# NGINX_USER: ${NGINX_USER}
# NGINX_PASSWORD: ${NGINX_PASSWORD}
networks:
grafana-default:
name: grafana-${CONTEXT}
driver: bridge
driver_opts:
com.docker.network.enable_ipv6: "false"
ipam:
driver: default
config:
- subnet: ${SUBNET}

View File

@@ -1,47 +0,0 @@
services:
grafana:
image: "grafana:${GRAFANA_VERSION}"
container_name: "grafana-grafana-${CONTEXT}"
networks:
grafana-cncf:
profiles: ["full", "grafana"]
extends:
file: common-settings.yml
service: common-settings
ports:
# - "8080:80"
volumes:
# - /opt/docker-data/grafana-grafana-cncf/grafana/data/db:/usr/lib/grafana
# - /opt/docker-data/grafana-grafana-cncf/grafana/data/logs:/var/log/grafana
# - /opt/docker-data/grafana-grafana-cncf/grafana/config:/etc/grafana
environment:
# GRAFANA_USER: ${GRAFANA_USER}
# GRAFANA_PASSWORD: ${GRAFANA_PASSWORD}
nginx:
image: "nginx:${NGINX_VERSION}"
container_name: "grafana-nginx-${CONTEXT}"
networks:
grafana-cncf:
profiles: ["full", "nginx"]
extends:
file: common-settings.yml
service: common-settings
ports:
# - "8080:80"
volumes:
# - /opt/docker-data/grafana-nginx-cncf/nginx/data/db:/usr/lib/nginx
# - /opt/docker-data/grafana-nginx-cncf/nginx/data/logs:/var/log/nginx
# - /opt/docker-data/grafana-nginx-cncf/nginx/config:/etc/nginx
environment:
# NGINX_USER: ${NGINX_USER}
# NGINX_PASSWORD: ${NGINX_PASSWORD}
networks:
grafana-cncf:
name: grafana-cncf
driver: bridge
driver_opts:
com.docker.network.enable_ipv6: "false"
ipam:
driver: default
config:
# - subnet: 172.21.184.0/24

View File

@@ -0,0 +1,34 @@
CONTEXT=ux_vilnius
# Set something sensible here and uncomment
# ---
# GRAFANA_VERSION=x.y.z
# NGINX_VERSION=x.y.z
# GRAFANA_VIP=10.1.1.2
# GRAFANA_BUILD_DATE=20230731
# Feel free to leave defaults. They apply while these vars are commented out
# ---
# RESTARTPOLICY=unless-stopped
# TIMEZONE=Etc/UTC
# Subnet to use for this Docker Compose project. Docker defaults to
# container networks in prefix 172.16.0.0/12 which is 1 million addresses in
# the range from 172.16.0.0 to 172.31.255.255. Docker uses 172.17.0.0/16 for
# itself. Use any sensible prefix in 172.16.0.0/12 here except for Docker's
# own 172.17.0.0/16.
# ---
SUBNET=172.30.95.0/24
# See 'compose.override.yaml' for how to make a variable available in
# a Dockerfile
# ---
# EXAMPLE_ARG_FROM_ENV_FILE=must_be_available_in_dockerfile

View File

@@ -1,30 +0,0 @@
CONTEXT=cncf
# Set something sensible here and uncomment
# ---
# GRAFANA_VERSION=x.y.z
# NGINX_VERSION=x.y.z
# A ${LOCATION} var is usually not needed. It may be helpful when a ${CONTEXT}
# extends over more than one location e.g. to bind-mount location-specific
# config files or certificates into a container.
# ---
# LOCATION=
# Feel free to leave defaults. They apply while these vars are commented out
# ---
# RESTARTPOLICY=unless-stopped
# TIMEZONE=Etc/UTC
# See 'docker-compose.override.yml' for how to make a variable available in
# a Dockerfile
# ---
# EXAMPLE_ARG_FROM_ENV_FILE=must_be_available_in_dockerfile

View File

@@ -0,0 +1,149 @@
# FIXME
Search and replace all mentions of FIXME with sensible content in this file and in [compose.yaml](compose.yaml).
# Vault Docker Compose files
Docker Compose files to spin up an instance of Vault FIXME capitalization FIXME.
# How to run
Create a `COMPOSE_ENV` file and store its location in a shell variable, along with the location where this repo lives (here, for example, `/opt/containers/hashicorpvault`) and all other variables. You'll find an example environment file at [env/fqdn_context.env.example](env/fqdn_context.env.example).
When everything's ready, start Vault with Docker Compose; otherwise head down to [Initial setup](#initial-setup) first.
## Environment
```
export COMPOSE_DIR='/opt/containers/hashicorpvault'
export COMPOSE_CTX='ux_vilnius'
export COMPOSE_PROJECT='vault-'"${COMPOSE_CTX}"
export COMPOSE_FILE="${COMPOSE_DIR}"'/compose.yaml'
export COMPOSE_OVERRIDE="${COMPOSE_DIR%/}"'/compose.override.yaml'
export COMPOSE_ENV=<add accordingly>
```
## Context
On your deployment machine create the necessary Docker context to connect to and control the Docker daemon on whatever target host you'll be using, for example:
```
docker context create fully.qualified.domain.name --docker 'host=ssh://root@fully.qualified.domain.name'
```
## Build
> Skip to [Pull](#pull) if you already have images in your private registry ready to use. Otherwise read on to build them now.
FIXME We build the `vault` image locally. Our adjustment to the official image is simply adding `/tmp/vault` to it. See [build-context/Dockerfile](build-context/Dockerfile). We use `/tmp/vault` to bind-mount a dedicated ZFS dataset for the application's `tmpdir` location.
```
docker compose --project-name "${COMPOSE_PROJECT}" --file "${COMPOSE_FILE}" --file "${COMPOSE_OVERRIDE}" --env-file "${COMPOSE_ENV}" --profile 'build' build
```
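The Dockerfile change this refers to can be as small as the following sketch; the actual [build-context/Dockerfile](build-context/Dockerfile) is authoritative and may differ:
```
# Hypothetical minimal adjustment: official image plus /tmp/vault
ARG VAULT_VERSION
FROM "vault:${VAULT_VERSION}"
RUN mkdir -p /tmp/vault
```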
## Push
Push to Docker Hub or your private registry. Setting up a private registry is out of scope of this repo. Once you have a registry available you can use it like so:
- On your OS install a Docker credential helper per [github.com/docker/docker-credential-helpers](https://github.com/docker/docker-credential-helpers). This ensures credentials aren't stored merely base64-encoded (effectively unencrypted) in `~/.docker/config.json`. On an Arch Linux machine where D-Bus Secret Service exists, for example, this comes via something like the [docker-credential-secretservice-bin](https://aur.archlinux.org/packages/docker-credential-secretservice-bin) Arch User Repository package; just install it and you're done.
- Do a `docker login registry.example.com`, enter username and password, confirm login.
```
source "${COMPOSE_ENV}"
docker push "registry.example.com/project/vault:${VAULT_BUILD_DATE}-${VAULT_VERSION}"
```
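The tag format matches what compose.override.yaml builds; assembling it in shell makes the pattern explicit (date and version below are example values):
```
VAULT_BUILD_DATE='20230731'              # example build date
VAULT_VERSION='1.14.0'                   # hypothetical version
# registry/project/name:builddate-version, as used by the push above.
tag='registry.example.com/project/vault:'"${VAULT_BUILD_DATE}"'-'"${VAULT_VERSION}"
echo "${tag}"
```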
## Pull
> Skip this step if you just built images that still exist locally on your build host.
FIXME Rewrite either [Build](#build) or this paragraph for which images are built and which ones pulled, `--profile 'full'` may not make sense.
```
docker compose --project-name "${COMPOSE_PROJECT}" --file "${COMPOSE_FILE}" --env-file "${COMPOSE_ENV}" --profile 'full' pull
```
## Copy to target
Copy images to the target Docker host; this assumes you deploy to a machine that itself has no network route to Docker Hub or your private registry of choice. Copying in its simplest form involves a local `docker save` and a remote `docker load`. Consider the helper mini-project [quico.space/Quico/copy-docker](https://quico.space/Quico/copy-docker) where [copy-docker.sh](https://quico.space/Quico/copy-docker/src/branch/main/copy-docker.sh) allows the following workflow:
```
source "${COMPOSE_ENV}"
# FIXME Docker Hub image name with or without slash? FIXME
copy-docker 'vault:'"${VAULT_VERSION}" fully.qualified.domain.name
```
## Start
```
docker --context 'fully.qualified.domain.name' compose --project-name "${COMPOSE_PROJECT}" --file "${COMPOSE_FILE}" --env-file "${COMPOSE_ENV}" up --detach
```
# Initial setup
We're assuming you run Docker Compose workloads with ZFS-based bind mounts. ZFS management (creating a zpool and setting adequate properties for its datasets) is out of scope of this document.
## Datasets
Create ZFS datasets and set permissions as needed.
* Parent dataset
```
zfs create -o mountpoint=/opt/docker-data 'zpool/docker-data'
```
* Container-specific datasets
```
zfs create -p 'zpool/docker-data/vault-'"${COMPOSE_CTX}"'/vault/data/db'
zfs create -p 'zpool/docker-data/vault-'"${COMPOSE_CTX}"'/vault/data/logs'
zfs create -p 'zpool/docker-data/vault-'"${COMPOSE_CTX}"'/vault/config'
```
FIXME When changing bind mount locations to real ones remember to also update `volumes:` in [compose.yaml](compose.yaml) FIXME
* Create subdirs
```
mkdir -p '/opt/docker-data/vault-'"${COMPOSE_CTX}"'/vault/'{'.ssh','config','data','projects'}
```
* Change ownership
```
chown -R 1000:1000 '/opt/docker-data/vault-'"${COMPOSE_CTX}"'/vault/data/'*
```
## Additional files
Place the following files on the target server. Use the directory structure at [build-context](build-context) as a guide, specifically the `docker-data` subtree.
FIXME Add details about files that aren't self-explanatory FIXME
```
build-context/
├── docker-data
│ └── config
│ └── vault.cfg
├── ...
└── ...
```
When done head back up to [How to run](#how-to-run).
# Development
## Conventional commits
This project uses [Conventional Commits](https://www.conventionalcommits.org/) for its commit messages.
### Commit types
Commit _types_ besides `fix` and `feat` are:
- `refactor`: Keeping functionality while streamlining or otherwise improving function flow
- `docs`: Documentation for project or components
### Commit scopes
The following _scopes_ are known for this project. A Conventional Commits commit message may optionally use one of the following scopes or none:
- `vault`: A change to how the `vault` service component works
- `build`: Build-related changes such as `Dockerfile` fixes and features
- `mount`: Volume or bind mount-related changes
- `net`: Networking, IP addressing, routing changes
- `meta`: Affects the project's repo layout, file names etc.

View File

@@ -1,13 +1,13 @@
# For the remainder of this Dockerfile EXAMPLE_ARG_FOR_DOCKERFILE will be
# available with a value of 'must_be_available_in_dockerfile', check out the env
# file at 'env/fully.qualified.domain.name.example' for reference.
# file at 'env/fqdn_context.env.example' for reference.
# ARG EXAMPLE_ARG_FOR_DOCKERFILE
# Another env var, this one's needed in the example build step below:
# ARG HASHICORPVAULT_VERSION
# ARG VAULT_VERSION
# Example
# FROM "hashicorpvault:${HASHICORPVAULT_VERSION}"
# FROM "vault:${VAULT_VERSION}"
# RUN apt-get update && \
# apt-get -y install \
# somepackage-6.q16-6-extra && \

View File

@@ -0,0 +1,11 @@
services:
vault-build:
# FIXME image name with or without slash? Docker Hub or private registry? With or without *_BUILD_DATE? FIXME
image: "registry.example.com/project/vault:${VAULT_BUILD_DATE}-${VAULT_VERSION}"
profiles: ["build"]
build:
context: "build-context"
dockerfile: Dockerfile
args:
EXAMPLE_ARG_FOR_DOCKERFILE: "${EXAMPLE_ARG_FROM_ENV_FILE}"
VAULT_VERSION: "${VAULT_VERSION}"

View File

@@ -0,0 +1,36 @@
services:
vault:
# FIXME image name with or without slash? Docker Hub or private registry? With or without *_BUILD_DATE? FIXME
image: "registry.example.com/project/vault:${VAULT_BUILD_DATE}-${VAULT_VERSION}"
container_name: "vault-${CONTEXT}"
networks:
vault-default:
ulimits:
nproc: ${ULIMIT_NPROC:-65535}
nofile:
soft: ${ULIMIT_NPROC:-65535}
hard: ${ULIMIT_NPROC:-65535}
extends:
file: common-settings.yaml
service: common-settings
ports:
# - "8080:80"
volumes:
# When changing bind mount locations to real ones remember to
# also update "Initial setup" section in README.md.
# - /opt/docker-data/vault-${CONTEXT}/vault/data/db:/usr/lib/vault
# - /opt/docker-data/vault-${CONTEXT}/vault/data/logs:/var/log/vault
# - /opt/docker-data/vault-${CONTEXT}/vault/config:/etc/vault
environment:
# VAULT_USER: ${VAULT_USER}
# VAULT_PASSWORD: ${VAULT_PASSWORD}
networks:
vault-default:
name: vault-${CONTEXT}
driver: bridge
driver_opts:
com.docker.network.enable_ipv6: "false"
ipam:
driver: default
config:
- subnet: ${SUBNET}

View File

@@ -1,10 +0,0 @@
services:
hashicorpvault-build:
image: "hashicorpvault:${HASHICORPVAULT_VERSION}"
profiles: ["build"]
build:
context: "build-context/hashicorpvault"
dockerfile: Dockerfile
args:
EXAMPLE_ARG_FOR_DOCKERFILE: "${EXAMPLE_ARG_FROM_ENV_FILE}"
HASHICORPVAULT_VERSION: "${HASHICORPVAULT_VERSION}"

View File

@@ -1,28 +0,0 @@
services:
hashicorpvault:
image: "hashicorpvault:${HASHICORPVAULT_VERSION}"
container_name: "hashicorpvault-${CONTEXT}"
networks:
hashicorpvault-fsf:
extends:
file: common-settings.yml
service: common-settings
ports:
# - "8080:80"
volumes:
# - /opt/docker-data/hashicorpvault-fsf/data/db:/usr/lib/hashicorpvault
# - /opt/docker-data/hashicorpvault-fsf/data/logs:/var/log/hashicorpvault
# - /opt/docker-data/hashicorpvault-fsf/config:/etc/hashicorpvault
environment:
# HASHICORPVAULT_USER: ${HASHICORPVAULT_USER}
# HASHICORPVAULT_PASSWORD: ${HASHICORPVAULT_PASSWORD}
networks:
hashicorpvault-fsf:
name: hashicorpvault-fsf
driver: bridge
driver_opts:
com.docker.network.enable_ipv6: "false"
ipam:
driver: default
config:
# - subnet: 172.21.184.0/24

View File

@@ -0,0 +1,33 @@
CONTEXT=ux_vilnius
# Set something sensible here and uncomment
# ---
# VAULT_VERSION=x.y.z
# VAULT_VIP=10.1.1.2
# VAULT_BUILD_DATE=20230731
# Feel free to leave defaults. They apply while these vars are commented out
# ---
# RESTARTPOLICY=unless-stopped
# TIMEZONE=Etc/UTC
# Subnet to use for this Docker Compose project. Docker defaults to
# container networks in prefix 172.16.0.0/12 which is 1 million addresses in
# the range from 172.16.0.0 to 172.31.255.255. Docker uses 172.17.0.0/16 for
# itself. Use any sensible prefix in 172.16.0.0/12 here except for Docker's
# own 172.17.0.0/16.
# ---
SUBNET=172.30.95.0/24
# See 'compose.override.yaml' for how to make a variable available in
# a Dockerfile
# ---
# EXAMPLE_ARG_FROM_ENV_FILE=must_be_available_in_dockerfile

View File

@@ -1,29 +0,0 @@
CONTEXT=fsf
# Set something sensible here and uncomment
# ---
# HASHICORPVAULT_VERSION=x.y.z
# A ${LOCATION} var is usually not needed. It may be helpful when a ${CONTEXT}
# extends over more than one location e.g. to bind-mount location-specific
# config files or certificates into a container.
# ---
# LOCATION=
# Feel free to leave defaults. They apply while these vars are commented out
# ---
# RESTARTPOLICY=unless-stopped
# TIMEZONE=Etc/UTC
# See 'docker-compose.override.yml' for how to make a variable available in
# a Dockerfile
# ---
# EXAMPLE_ARG_FROM_ENV_FILE=must_be_available_in_dockerfile

View File

@@ -2,8 +2,7 @@ import sys
service_slug = "{{ cookiecutter.__service_slug }}"
component_list_slug = "{{ cookiecutter.__component_list_slug }}"
context_slug = "{{ cookiecutter.__context_slug }}"
for v in (service_slug, component_list_slug, context_slug):
for v in (service_slug, component_list_slug):
if not v:
print(f"Please answer all prompts with a non-empty string. Aborting and existing 3 ...")
print(f"Please answer all prompts with a non-empty string. Aborting and exiting 3 ...")
sys.exit(3)

View File

@@ -0,0 +1,228 @@
# FIXME
Search and replace all mentions of FIXME with sensible content in this file and in [compose.yaml](compose.yaml).
# {{ cookiecutter.__service_slug.capitalize() }} Docker Compose files
Docker Compose files to spin up an instance of {{ cookiecutter.__service_slug.capitalize() }} FIXME capitalization FIXME.
# How to run
Create a `COMPOSE_ENV` file and store its location in a shell variable, along with the location where this repo lives (here, for example, `/opt/containers/{{ cookiecutter.__project_slug }}`) and all other variables. You'll find an example environment file at [env/fqdn_context.env.example](env/fqdn_context.env.example).
When everything's ready, start {{ cookiecutter.__service_slug.capitalize() }} with Docker Compose; otherwise head down to [Initial setup](#initial-setup) first.
## Environment
```
export COMPOSE_DIR='/opt/containers/{{ cookiecutter.__project_slug }}'
export COMPOSE_CTX='ux_vilnius'
export COMPOSE_PROJECT='{{ cookiecutter.__service_slug }}-'"${COMPOSE_CTX}"
export COMPOSE_FILE="${COMPOSE_DIR}"'/compose.yaml'{% if cookiecutter.build == "yes" %}
export COMPOSE_OVERRIDE="${COMPOSE_DIR%/}"'/compose.override.yaml'{% endif %}
export COMPOSE_ENV=<add accordingly>
```
## Context
On your deployment machine create the necessary Docker context to connect to and control the Docker daemon on whatever target host you'll be using, for example:
```
docker context create fully.qualified.domain.name --docker 'host=ssh://root@fully.qualified.domain.name'
```
{%- if cookiecutter.build == "yes" %}
## Build
{% set components = cookiecutter.__component_list_slug.split(',') -%}
{%- for component in components %}
{%- if loop.first %}
> Skip to [Pull](#pull) if you already have images in your private registry ready to use. Otherwise read on to build them now.
FIXME We build the `{{ cookiecutter.__service_slug }}` image locally. Our adjustment to the official image is simply adding `/tmp/{{ cookiecutter.__service_slug }}` to it. See {% if ',' in cookiecutter.__component_list_slug %}[build-context/{{ cookiecutter.__service_slug }}/Dockerfile](build-context/{{ cookiecutter.__service_slug }}/Dockerfile){%- else %}[build-context/Dockerfile](build-context/Dockerfile){%- endif %}. We use `/tmp/{{ cookiecutter.__service_slug }}` to bind-mount a dedicated ZFS dataset for the application's `tmpdir` location.
```
docker compose --project-name "${COMPOSE_PROJECT}" --file "${COMPOSE_FILE}" --file "${COMPOSE_OVERRIDE}" --env-file "${COMPOSE_ENV}" --profile 'build{% if ',' in cookiecutter.__component_list_slug %}-{{ cookiecutter.__service_slug }}{%- endif %}' build
```
{%- endif %}
{% endfor %}
## Push
Push to Docker Hub or your private registry. Setting up a private registry is out of scope of this repo. Once you have a registry available you can use it like so:
- On your OS install a Docker credential helper per [github.com/docker/docker-credential-helpers](https://github.com/docker/docker-credential-helpers). This ensures credentials aren't stored merely base64-encoded (effectively unencrypted) in `~/.docker/config.json`. On an Arch Linux machine where D-Bus Secret Service exists, for example, this comes via something like the [docker-credential-secretservice-bin](https://aur.archlinux.org/packages/docker-credential-secretservice-bin) Arch User Repository package; just install it and you're done.
- Do a `docker login registry.example.com`, enter username and password, confirm login.
```
source "${COMPOSE_ENV}"
{%- set components = cookiecutter.__component_list_slug.split(',') -%}
{%- if ',' in cookiecutter.__component_list_slug %}
for image in{% for component in components %} \
'{%- if cookiecutter.build == "yes" -%}{%- if loop.first -%}registry.example.com/project/{%- endif -%}{%- endif -%}{{ component }}:'"{%- if cookiecutter.build == "yes" -%}{%- if loop.first -%}${% raw %}{{% endraw %}{{ component.upper() }}_BUILD_DATE{% raw %}}{% endraw %}-{%- endif -%}{%- endif -%}${% raw %}{{% endraw %}{{ component.upper() }}_VERSION{% raw %}}{% endraw %}"{%- endfor %}; do
docker push 'registry.example.com/project/'"${image}"
done
{%- else %}
docker push "{%- if cookiecutter.build == "yes" -%}registry.example.com/project/{%- endif -%}{{ cookiecutter.__component_list_slug }}:{%- if cookiecutter.build == "yes" -%}${% raw %}{{% endraw %}{{ cookiecutter.__component_list_slug.upper() }}_BUILD_DATE{% raw %}}{% endraw %}-{%- endif -%}${% raw %}{{% endraw %}{{ cookiecutter.__component_list_slug.upper() }}_VERSION{% raw %}}{% endraw %}"
{%- endif %}
```
{%- endif %}
## Pull
{% if cookiecutter.build == "yes" %}> Skip this step if you just built images that still exist locally on your build host.
FIXME Rewrite either [Build](#build) or this paragraph for which images are built and which ones pulled, `--profile 'full'` may not make sense.{% else %}Pull images from Docker Hub verbatim.{% endif %}
```
docker compose --project-name "${COMPOSE_PROJECT}" --file "${COMPOSE_FILE}" --env-file "${COMPOSE_ENV}" --profile 'full' pull
```
## Copy to target
Copy images to the target Docker host; this assumes you deploy to a machine that itself has no network route to Docker Hub or your private registry of choice. Copying in its simplest form involves a local `docker save` and a remote `docker load`. Consider the helper mini-project [quico.space/Quico/copy-docker](https://quico.space/Quico/copy-docker) where [copy-docker.sh](https://quico.space/Quico/copy-docker/src/branch/main/copy-docker.sh) allows the following workflow:
```
source "${COMPOSE_ENV}"
# FIXME Docker Hub image name with or without slash? FIXME
{%- set components = cookiecutter.__component_list_slug.split(',') -%}
{%- if ',' in cookiecutter.__component_list_slug %}
for image in{% for component in components %} '{{ component }}:'"${% raw %}{{% endraw %}{{ component.upper() }}_VERSION{% raw %}}{% endraw %}"{%- endfor %}; do
copy-docker "${image}" fully.qualified.domain.name
done
{%- else %}
copy-docker '{{ cookiecutter.__component_list_slug }}:'"${% raw %}{{% endraw %}{{ cookiecutter.__component_list_slug.upper() }}_VERSION{% raw %}}{% endraw %}" fully.qualified.domain.name
{%- endif %}
```
## Start
{%- if ',' in cookiecutter.__component_list_slug %}
FIXME Does the service use a virtual IP address? FIXME
Make sure your service's virtual IP address is bound on your target host then start containers.
{%- endif %}
```
{%- if ',' in cookiecutter.__component_list_slug %}
docker --context 'fully.qualified.domain.name' compose --project-name "${COMPOSE_PROJECT}" --file "${COMPOSE_FILE}" --env-file "${COMPOSE_ENV}" --profile 'full' up --detach
{%- else %}
docker --context 'fully.qualified.domain.name' compose --project-name "${COMPOSE_PROJECT}" --file "${COMPOSE_FILE}" --env-file "${COMPOSE_ENV}" up --detach
{%- endif %}
```
# Initial setup
We're assuming you run Docker Compose workloads with ZFS-based bind mounts. ZFS management (creating a zpool and setting adequate properties for its datasets) is out of scope of this document.
## Datasets
Create ZFS datasets and set permissions as needed.
* Parent dataset
```
zfs create -o mountpoint=/opt/docker-data 'zpool/docker-data'
```
* Container-specific datasets
```
{%- if ',' in cookiecutter.__component_list_slug -%}
{%- set components = cookiecutter.__component_list_slug.split(',') -%}
{%- for component in components %}
zfs create -p 'zpool/docker-data/{{ cookiecutter.__service_slug }}-'"${COMPOSE_CTX}"'/{{ component }}/data/db'
zfs create -p 'zpool/docker-data/{{ cookiecutter.__service_slug }}-'"${COMPOSE_CTX}"'/{{ component }}/data/logs'
zfs create -p 'zpool/docker-data/{{ cookiecutter.__service_slug }}-'"${COMPOSE_CTX}"'/{{ component }}/config'
{%- endfor -%}
{%- else %}
zfs create -p 'zpool/docker-data/{{ cookiecutter.__service_slug }}-'"${COMPOSE_CTX}"'/{{ cookiecutter.__service_slug }}/data/db'
zfs create -p 'zpool/docker-data/{{ cookiecutter.__service_slug }}-'"${COMPOSE_CTX}"'/{{ cookiecutter.__service_slug }}/data/logs'
zfs create -p 'zpool/docker-data/{{ cookiecutter.__service_slug }}-'"${COMPOSE_CTX}"'/{{ cookiecutter.__service_slug }}/config'
{%- endif %}
```
FIXME When changing bind mount locations to real ones remember to also update `volumes:` in [compose.yaml](compose.yaml) FIXME
* Create subdirs
```
{%- set components = cookiecutter.__component_list_slug.split(',') -%}
{% for component in components %}
{%- if loop.first %}
mkdir -p '/opt/docker-data/{{ cookiecutter.__service_slug }}-'"${COMPOSE_CTX}"'/{{ cookiecutter.__service_slug }}/'{'.ssh','config','data','projects'}
{%- endif %}
{%- endfor %}
```
* Change ownership
```
{%- set components = cookiecutter.__component_list_slug.split(',') -%}
{% for component in components %}
{%- if loop.first %}
chown -R 1000:1000 '/opt/docker-data/{{ cookiecutter.__service_slug }}-'"${COMPOSE_CTX}"'/{{ cookiecutter.__service_slug }}/data/'*
{%- endif %}
{%- endfor %}
```
## Additional files
Place the following files on the target server. Use the directory structure at [build-context](build-context) as a guide, specifically the `docker-data` subtree.
FIXME Add details about files that aren't self-explanatory FIXME
```
build-context/
{%- if ',' in cookiecutter.__component_list_slug -%}
{%- set components = cookiecutter.__component_list_slug.split(',') -%}
{%- for component in components %}
{%- if not loop.last %}
├── {{ component }}
│ ├── docker-data
│ | └── config
│ │ └── {{ component }}.cfg
│ ├── ...
│ └── ...
{%- else %}
└── {{ component }}
├── docker-data
| └── config
│ └── {{ component }}.cfg
├── ...
└── ...
{%- endif %}
{%- endfor %}
{%- else %}
├── docker-data
│ └── config
│ └── {{ cookiecutter.__service_slug }}.cfg
├── ...
└── ...
{%- endif %}
```
When done head back up to [How to run](#how-to-run).
# Development
## Conventional commits
This project uses [Conventional Commits](https://www.conventionalcommits.org/) for its commit messages.
### Commit types
Commit _types_ besides `fix` and `feat` are:
- `refactor`: Keeping functionality while streamlining or otherwise improving function flow
- `docs`: Documentation for project or components
### Commit scopes
The following _scopes_ are known for this project. A Conventional Commits commit message may optionally use one of the following scopes or none:
{%if ',' in cookiecutter.__component_list_slug -%}
{%- set components = cookiecutter.__component_list_slug.split(',') -%}
{%- for component in components %}
- `{{ component }}`: A change to how the `{{ component }}` service component works
{%- endfor -%}
{%- else %}
- `{{ cookiecutter.__service_slug }}`: A change to how the `{{ cookiecutter.__service_slug }}` service component works
{%- endif %}
- `build`: Build-related changes such as `Dockerfile` fixes and features
- `mount`: Volume or bind mount-related changes
- `net`: Networking, IP addressing, routing changes
- `meta`: Affects the project's repo layout, file names etc.
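A full message following these conventions might, for example, look like this:
```
fix(net): Use a unique subnet per Compose context

The shared example prefix clashed with an existing bridge network.
```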


@@ -1,6 +1,6 @@
# For the remainder of this Dockerfile EXAMPLE_ARG_FOR_DOCKERFILE will be
# available with a value of 'must_be_available_in_dockerfile', check out the env
# file at 'env/fully.qualified.domain.name.example' for reference.
# file at 'env/fqdn_context.env.example' for reference.
# ARG EXAMPLE_ARG_FOR_DOCKERFILE
# Another env var, this one's needed in the example build step below:


@@ -0,0 +1,27 @@
services:
{%- if ',' in cookiecutter.__component_list_slug -%}
{%- set components = cookiecutter.__component_list_slug.split(',') -%}
{% for component in components %}
{{ component }}-build:
# FIXME image name with or without slash? Docker Hub or private registry? With or without *_BUILD_DATE? FIXME
image: "{%- if cookiecutter.build == "yes" -%}{%- if loop.first -%}registry.example.com/project/{%- endif -%}{%- endif -%}{{ component }}:{%- if cookiecutter.build == "yes" -%}{%- if loop.first -%}${% raw %}{{% endraw %}{{ component.upper() }}_BUILD_DATE{% raw %}}{% endraw %}-{%- endif -%}{%- endif -%}${% raw %}{{% endraw %}{{ component.upper() }}_VERSION{% raw %}}{% endraw %}"
profiles: ["build", "build-{{ component }}"]
build:
context: "build-context/{{ component }}"
dockerfile: Dockerfile
args:
EXAMPLE_ARG_FOR_DOCKERFILE: "${EXAMPLE_ARG_FROM_ENV_FILE}"
{{ component.upper() }}_VERSION: "${% raw %}{{% endraw %}{{ component.upper() }}_VERSION{% raw %}}{% endraw %}"
{%- endfor %}
{%- else %}
{{ cookiecutter.__component_list_slug }}-build:
# FIXME image name with or without slash? Docker Hub or private registry? With or without *_BUILD_DATE? FIXME
image: "{%- if cookiecutter.build == "yes" -%}registry.example.com/project/{%- endif -%}{{ cookiecutter.__component_list_slug }}:{%- if cookiecutter.build == "yes" -%}${% raw %}{{% endraw %}{{ cookiecutter.__component_list_slug.upper() }}_BUILD_DATE{% raw %}}{% endraw %}-{%- endif -%}${% raw %}{{% endraw %}{{ cookiecutter.__component_list_slug.upper() }}_VERSION{% raw %}}{% endraw %}"
profiles: ["build"]
build:
context: "build-context"
dockerfile: Dockerfile
args:
EXAMPLE_ARG_FOR_DOCKERFILE: "${EXAMPLE_ARG_FROM_ENV_FILE}"
{{ cookiecutter.__component_list_slug.upper() }}_VERSION: "${% raw %}{{% endraw %}{{ cookiecutter.__component_list_slug.upper() }}_VERSION{% raw %}}{% endraw %}"
{%- endif %}


@@ -0,0 +1,89 @@
services:
{%- if ',' in cookiecutter.__component_list_slug -%}
{%- set components = cookiecutter.__component_list_slug.split(',') -%}
{%- set ns = namespace(found=false) -%}
{%- for component in components %}
{%- if loop.first -%}
{%- set ns.first_component = component -%}
{%- elif not ns.found -%}
{%- set ns.second_component = component -%}
{%- set ns.found = true -%}
{%- endif -%}
{%- endfor -%}
{%- for component in components %}
{{ component }}:
# FIXME image name with or without slash? Docker Hub or private registry? With or without *_BUILD_DATE? FIXME
image: "{%- if cookiecutter.build == "yes" -%}{%- if loop.first -%}registry.example.com/project/{%- endif -%}{%- endif -%}{{ component }}:{%- if cookiecutter.build == "yes" -%}{%- if loop.first -%}${% raw %}{{% endraw %}{{ component.upper() }}_BUILD_DATE{% raw %}}{% endraw %}-{%- endif -%}{%- endif -%}${% raw %}{{% endraw %}{{ component.upper() }}_VERSION{% raw %}}{% endraw %}"
container_name: "{{ cookiecutter.__service_slug }}-{{ component }}-${CONTEXT}"
networks:
{{ cookiecutter.__service_slug }}-default:
profiles: ["full", "{{ component }}"]
{% if loop.first -%}
depends_on:
{{ ns.second_component }}:
condition: service_healthy
{%- else -%}
healthcheck:
test: ["CMD", "fping", "--count=1", "${% raw %}{{% endraw %}{{ ns.first_component.upper() }}_VIP{% raw %}}{% endraw %}", "--period=500", "--quiet"]
interval: 3s
timeout: 1s
retries: 60
start_period: 2s
{%- endif %}
ulimits:
nproc: ${ULIMIT_NPROC:-65535}
nofile:
soft: ${ULIMIT_NOFILE:-65535}
hard: ${ULIMIT_NOFILE:-65535}
extends:
file: common-settings.yaml
service: common-settings
ports:
# - "8080:80"
volumes:
# When changing bind mount locations to real ones remember to
# also update "Initial setup" section in README.md.
# - /opt/docker-data/{{ cookiecutter.__service_slug }}-${CONTEXT}/{{ component }}/data/db:/usr/lib/{{ component }}
# - /opt/docker-data/{{ cookiecutter.__service_slug }}-${CONTEXT}/{{ component }}/data/logs:/var/log/{{ component }}
# - /opt/docker-data/{{ cookiecutter.__service_slug }}-${CONTEXT}/{{ component }}/config:/etc/{{ component }}
environment:
# {{ component.upper() }}_USER: ${% raw %}{{% endraw %}{{ component.upper() }}_USER{% raw %}}{% endraw %}
# {{ component.upper() }}_PASSWORD: ${% raw %}{{% endraw %}{{ component.upper() }}_PASSWORD{% raw %}}{% endraw %}
{%- endfor -%}
{%- else %}
{{ cookiecutter.__component_list_slug }}:
# FIXME image name with or without slash? Docker Hub or private registry? With or without *_BUILD_DATE? FIXME
image: "{%- if cookiecutter.build == "yes" -%}registry.example.com/project/{%- endif -%}{{ cookiecutter.__component_list_slug }}:{%- if cookiecutter.build == "yes" -%}${% raw %}{{% endraw %}{{ cookiecutter.__component_list_slug.upper() }}_BUILD_DATE{% raw %}}{% endraw %}-{%- endif -%}${% raw %}{{% endraw %}{{ cookiecutter.__component_list_slug.upper() }}_VERSION{% raw %}}{% endraw %}"
container_name: "{{ cookiecutter.__service_slug }}-${CONTEXT}"
networks:
{{ cookiecutter.__service_slug }}-default:
ulimits:
nproc: ${ULIMIT_NPROC:-65535}
nofile:
soft: ${ULIMIT_NOFILE:-65535}
hard: ${ULIMIT_NOFILE:-65535}
extends:
file: common-settings.yaml
service: common-settings
ports:
# - "8080:80"
volumes:
# When changing bind mount locations to real ones remember to
# also update "Initial setup" section in README.md.
# - /opt/docker-data/{{ cookiecutter.__service_slug }}-${CONTEXT}/{{ cookiecutter.__service_slug }}/data/db:/usr/lib/{{ cookiecutter.__service_slug }}
# - /opt/docker-data/{{ cookiecutter.__service_slug }}-${CONTEXT}/{{ cookiecutter.__service_slug }}/data/logs:/var/log/{{ cookiecutter.__service_slug }}
# - /opt/docker-data/{{ cookiecutter.__service_slug }}-${CONTEXT}/{{ cookiecutter.__service_slug }}/config:/etc/{{ cookiecutter.__service_slug }}
environment:
# {{ cookiecutter.__component_list_slug.upper() }}_USER: ${% raw %}{{% endraw %}{{ cookiecutter.__component_list_slug.upper() }}_USER{% raw %}}{% endraw %}
# {{ cookiecutter.__component_list_slug.upper() }}_PASSWORD: ${% raw %}{{% endraw %}{{ cookiecutter.__component_list_slug.upper() }}_PASSWORD{% raw %}}{% endraw %}
{%- endif %}
networks:
{{ cookiecutter.__service_slug }}-default:
name: {{ cookiecutter.__service_slug }}-${CONTEXT}
driver: bridge
driver_opts:
com.docker.network.enable_ipv6: "false"
ipam:
driver: default
config:
- subnet: ${SUBNET}
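With the profiles defined above, the typical lifecycle in the multi-component case is to build images via the `build` profile and then start everything via `full` (a single-service project only gets the `build` profile, and its runtime service starts with a plain `up`):
```
docker compose --profile 'build' build
docker compose --profile 'full' up -d
```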


@@ -1,25 +0,0 @@
services:
{%- if ',' in cookiecutter.__component_list_slug -%}
{%- set components = cookiecutter.__component_list_slug.split(',') -%}
{% for component in components %}
{{ component }}-build:
image: "{{ component }}:${% raw %}{{% endraw %}{{ component.upper() }}_VERSION{% raw %}}{% endraw %}"
profiles: ["build", "build-{{ component }}"]
build:
context: "build-context/{{ component }}"
dockerfile: Dockerfile
args:
EXAMPLE_ARG_FOR_DOCKERFILE: "${EXAMPLE_ARG_FROM_ENV_FILE}"
{{ component.upper() }}_VERSION: "${% raw %}{{% endraw %}{{ component.upper() }}_VERSION{% raw %}}{% endraw %}"
{%- endfor %}
{%- else %}
{{ cookiecutter.__component_list_slug }}-build:
image: "{{ cookiecutter.__component_list_slug }}:${% raw %}{{% endraw %}{{ cookiecutter.__component_list_slug.upper() }}_VERSION{% raw %}}{% endraw %}"
profiles: ["build"]
build:
context: "build-context/{{ cookiecutter.__component_list_slug }}"
dockerfile: Dockerfile
args:
EXAMPLE_ARG_FOR_DOCKERFILE: "${EXAMPLE_ARG_FROM_ENV_FILE}"
{{ cookiecutter.__component_list_slug.upper() }}_VERSION: "${% raw %}{{% endraw %}{{ cookiecutter.__component_list_slug.upper() }}_VERSION{% raw %}}{% endraw %}"
{%- endif %}


@@ -1,52 +0,0 @@
services:
{%- if ',' in cookiecutter.__component_list_slug -%}
{%- set components = cookiecutter.__component_list_slug.split(',') -%}
{%- for component in components %}
{{ component }}:
image: "{{ component }}:${% raw %}{{% endraw %}{{ component.upper() }}_VERSION{% raw %}}{% endraw %}"
container_name: "{{ cookiecutter.__service_slug }}-{{ component }}-${CONTEXT}"
networks:
{{ cookiecutter.__service_slug }}-{{ cookiecutter.__context_slug }}:
profiles: ["full", "{{ component }}"]
extends:
file: common-settings.yml
service: common-settings
ports:
# - "8080:80"
volumes:
# - /opt/docker-data/{{ cookiecutter.__service_slug }}-{{ component }}-{{ cookiecutter.__context_slug }}/{{ component }}/data/db:/usr/lib/{{ component }}
# - /opt/docker-data/{{ cookiecutter.__service_slug }}-{{ component }}-{{ cookiecutter.__context_slug }}/{{ component }}/data/logs:/var/log/{{ component }}
# - /opt/docker-data/{{ cookiecutter.__service_slug }}-{{ component }}-{{ cookiecutter.__context_slug }}/{{ component }}/config:/etc/{{ component }}
environment:
# {{ component.upper() }}_USER: ${% raw %}{{% endraw %}{{ component.upper() }}_USER{% raw %}}{% endraw %}
# {{ component.upper() }}_PASSWORD: ${% raw %}{{% endraw %}{{ component.upper() }}_PASSWORD{% raw %}}{% endraw %}
{%- endfor -%}
{%- else %}
{{ cookiecutter.__component_list_slug }}:
image: "{{ cookiecutter.__component_list_slug }}:${% raw %}{{% endraw %}{{ cookiecutter.__component_list_slug.upper() }}_VERSION{% raw %}}{% endraw %}"
container_name: "{{ cookiecutter.__service_slug }}-${CONTEXT}"
networks:
{{ cookiecutter.__service_slug }}-{{ cookiecutter.__context_slug }}:
extends:
file: common-settings.yml
service: common-settings
ports:
# - "8080:80"
volumes:
# - /opt/docker-data/{{ cookiecutter.__service_slug }}-{{ cookiecutter.__context_slug }}/data/db:/usr/lib/{{ cookiecutter.__service_slug }}
# - /opt/docker-data/{{ cookiecutter.__service_slug }}-{{ cookiecutter.__context_slug }}/data/logs:/var/log/{{ cookiecutter.__service_slug }}
# - /opt/docker-data/{{ cookiecutter.__service_slug }}-{{ cookiecutter.__context_slug }}/config:/etc/{{ cookiecutter.__service_slug }}
environment:
# {{ cookiecutter.__component_list_slug.upper() }}_USER: ${% raw %}{{% endraw %}{{ cookiecutter.__component_list_slug.upper() }}_USER{% raw %}}{% endraw %}
# {{ cookiecutter.__component_list_slug.upper() }}_PASSWORD: ${% raw %}{{% endraw %}{{ cookiecutter.__component_list_slug.upper() }}_PASSWORD{% raw %}}{% endraw %}
{%- endif %}
networks:
{{ cookiecutter.__service_slug }}-{{ cookiecutter.__context_slug }}:
name: {{ cookiecutter.__service_slug }}-{{ cookiecutter.__context_slug }}
driver: bridge
driver_opts:
com.docker.network.enable_ipv6: "false"
ipam:
driver: default
config:
# - subnet: 172.21.184.0/24


@@ -0,0 +1,40 @@
CONTEXT=ux_vilnius
# Set something sensible here and uncomment
# ---
{%- set components = cookiecutter.__component_list_slug.split(',') -%}
{% for component in components %}
# {{ component.upper() }}_VERSION=x.y.z
{%- endfor %}
{%- for component in components %}
{%- if loop.first %}
# {{ component.upper() }}_VIP=10.1.1.2
# {{ component.upper() }}_BUILD_DATE=20230731
{%- endif %}
{%- endfor %}
# Feel free to leave defaults. They apply while these vars are commented out
# ---
# RESTARTPOLICY=unless-stopped
# TIMEZONE=Etc/UTC
# Subnet to use for this Docker Compose project. Docker defaults to
# container networks in prefix 172.16.0.0/12, which is roughly 1 million
# addresses in the range from 172.16.0.0 to 172.31.255.255. Docker uses
# 172.17.0.0/16 for itself. Use any sensible prefix in 172.16.0.0/12 here
# except for Docker's own 172.17.0.0/16.
# ---
SUBNET=172.30.95.0/24
# See 'compose.override.yaml' for how to make a variable available in
# a Dockerfile
# ---
# EXAMPLE_ARG_FROM_ENV_FILE=must_be_available_in_dockerfile
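The subnet guidance in the comments above can be sanity-checked with Python's standard `ipaddress` module (the /24 below is the example SUBNET from this file):

```python
import ipaddress

# Docker's default pool for container networks and the subnet it keeps for itself
docker_pool = ipaddress.ip_network("172.16.0.0/12")
docker_own = ipaddress.ip_network("172.17.0.0/16")
# The example SUBNET from this env file
subnet = ipaddress.ip_network("172.30.95.0/24")

print(docker_pool.num_addresses)        # 1048576, i.e. roughly 1 million
print(docker_pool[0], docker_pool[-1])  # 172.16.0.0 172.31.255.255
print(subnet.subnet_of(docker_pool))    # True
print(subnet.overlaps(docker_own))      # False, avoids Docker's own range
```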


@@ -1,32 +0,0 @@
CONTEXT={{ cookiecutter.__context_slug }}
# Set something sensible here and uncomment
# ---
{%- set components = cookiecutter.__component_list_slug.split(',') -%}
{% for component in components %}
# {{ component.upper() }}_VERSION=x.y.z
{%- endfor %}
# A ${LOCATION} var is usually not needed. It may be helpful when a ${CONTEXT}
# extends over more than one location e.g. to bind-mount location-specific
# config files or certificates into a container.
# ---
# LOCATION=
# Feel free to leave defaults. They apply while these vars are commented out
# ---
# RESTARTPOLICY=unless-stopped
# TIMEZONE=Etc/UTC
# See 'docker-compose.override.yml' for how to make a variable available in
# a Dockerfile
# ---
# EXAMPLE_ARG_FROM_ENV_FILE=must_be_available_in_dockerfile

python-naive/README.md Normal file

@@ -0,0 +1,65 @@
# Naive Python template
## Run it
Execute this template like so:
```
cookiecutter https://quico.space/Quico/py-cookiecutter-templates.git --directory 'python-naive'
```
Cookiecutter interactively prompts you for the following info, here with example answers:
```
project_slug [project-slug]: update-firewall-source
Select use_rich_logging:
1 - yes
2 - no
Choose from 1, 2 [1]:
Select use_config_ini:
1 - yes
2 - no
Choose from 1, 2 [1]:
Select use_inflect:
1 - yes
2 - no
Choose from 1, 2 [1]:
```
Done, directory structure and files for your next Python project are ready for you to hit the ground running.
## Explanation and terminology
Your answers translate as follows into rendered files.
1. The `project_slug` is used as a directory name for your Python project where spaces and underscores are replaced-with-dashes. It's also used for a few example variables where `we_use_underscores` instead.
```
.
└── update-firewall-source
├── examples
│   └── config.ini.example
├── requirements.in
├── requirements.txt
└── update-firewall-source.py
```
2. The `use_rich_logging` variable adds settings and examples that make ample use of the [Rich package](https://github.com/Textualize/rich/) for beautiful logging. You typically want this so it defaults to `yes`. Just hit `Enter` to confirm. The setting also adds necessary requirements.
3. With `use_config_ini` you're getting a boatload of functions, presets, variables and examples that integrate a `config.ini` file via the `configparser` module.
4. Lastly, with `use_inflect` you're adding the `inflect` module, which handles grammatically correct text rendering such as plural and singular forms. It also includes a few examples.
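The `configparser` wiring from item 3 builds on the standard DEFAULT-plus-sections pattern with `%(...)s` interpolation; a minimal sketch using option names from the example `config.ini`:

```python
import configparser

# [DEFAULT] values are visible in every section and interpolate into each other
ini_text = """
[DEFAULT]
self_name = update-firewall-source
state_base_dir = /var/lib/%(self_name)s

[this-is-a-section]
min_duration = 1200
"""

config = configparser.ConfigParser()
config.read_string(ini_text)

# The section inherits the interpolated default ...
print(config.get("this-is-a-section", "state_base_dir"))  # /var/lib/update-firewall-source
# ... and typed getters parse its own options
print(config.getint("this-is-a-section", "min_duration"))  # 1200
```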
## Result
### Enable Rich, configparser and inflect
Above example of a Python project with all of Rich, `configparser` and `inflect` enabled will give you a directory structure like this:
```
.
└── update-firewall-source
├── examples
│   └── config.ini.example
├── requirements.in
├── requirements.txt
└── update-firewall-source.py
```
You can see real-life example file content over at [examples/update-firewall-source](examples/update-firewall-source). Cookiecutter has generated all necessary dependencies with pinned versions and an `update-firewall-source.py` script file to get you started.


@@ -0,0 +1,8 @@
{
"project_slug": "project-slug",
"__project_slug": "{{ cookiecutter.project_slug.lower().replace(' ', '-').replace('_', '-') }}",
"__project_slug_under": "{{ cookiecutter.project_slug.lower().replace(' ', '_').replace('-', '_') }}",
"use_rich_logging": ["yes", "no"],
"use_config_ini": ["yes", "no"],
"use_inflect": ["yes", "no"]
}


@@ -0,0 +1,18 @@
[DEFAULT]
self_name = update-firewall-source
tmp_base_dir = /tmp/%(self_name)s
state_base_dir = /var/lib/%(self_name)s
state_files_dir = %(state_base_dir)s/state
state_file_retention = 50
state_file_name_prefix = state-
state_file_name_suffix = .log
update_firewall_source_some_option = "http://localhost:8000/api/query"
another_option = "first"
[this-is-a-section]
min_duration = 1200
max_duration = 3000
title_not_regex = this|that|somethingelse
query = @http-payload.json
dl_dir = /tmp/some/dir
another_option = "overwriting_from_default"


@@ -0,0 +1,2 @@
rich
inflect


@@ -0,0 +1,14 @@
#
# This file is autogenerated by pip-compile with python 3.10
# To update, run:
#
# pip-compile
#
commonmark==0.9.1
# via rich
inflect==5.6.0
# via -r requirements.in
pygments==2.12.0
# via rich
rich==12.4.4
# via -r requirements.in


@@ -0,0 +1,237 @@
# Path and env manipulation
import os
# Use a config file
import configparser
# Exit with various exit codes
import sys
# Manipulate style and content of logs
import logging
from rich.logging import RichHandler
# Correctly generate plurals, singular nouns etc.
import inflect
# Exit codes
# 1: Config file invalid, it has no sections
# 2: Config file invalid, sections must define at least CONST.CFG_MANDATORY
# 7: An option that must have a non-null value is either unset or null
class CONST(object):
__slots__ = ()
LOG_FORMAT = "%(message)s"
# How to find a config file
CFG_THIS_FILE_DIRNAME = os.path.dirname(__file__)
CFG_DEFAULT_FILENAME = "config.ini"
CFG_DEFAULT_ABS_PATH = os.path.join(CFG_THIS_FILE_DIRNAME, CFG_DEFAULT_FILENAME)
# Values you don't have to set, these are their internal defaults. You may optionally add a key 'is_global' equal
# to either True or False. By default if left off it'll be assumed False. Script will treat values where
# 'is_global' equals True as not being overridable in a '[section]'. It's a setting that only makes sense in a
# global context for the entire script. An option where 'empty_ok' equals True can safely be unset or set to
# an empty string. An example config.ini file may give a sane config example value here, removing that value
# still results in a valid file.
CFG_KNOWN_DEFAULTS = [
{"key": "self_name", "value": "update-firewall-source", "empty_ok": False},
{"key": "tmp_base_dir", "value": os.path.join(CFG_THIS_FILE_DIRNAME, "data/tmp/%(self_name)s"),
"empty_ok": False},
{"key": "state_base_dir", "value": os.path.join(CFG_THIS_FILE_DIRNAME, "data/var/lib/%(self_name)s"),
"empty_ok": False},
{"key": "state_files_dir", "value": "%(state_base_dir)s/state", "is_global": False, "empty_ok": False},
{"key": "state_file_retention", "value": "50", "is_global": False, "empty_ok": True},
{"key": "state_file_name_prefix", "value": "state-", "is_global": False, "empty_ok": True},
{"key": "state_file_name_suffix", "value": ".log", "is_global": False, "empty_ok": True},
{"key": "update_firewall_source_some_option", "value": "http://localhost:8000/api/query", "is_global": True,
"empty_ok": False},
{"key": "another_option", "value": "first", "is_global": True, "empty_ok": True}
]
# In all sections other than 'default' the following settings are known and accepted. We ignore other settings.
# Per CFG_KNOWN_DEFAULTS above most '[DEFAULT]' options are accepted by virtue of being defaults and overridable.
# The only exception are options where "is_global" equals True, they can't be overridden in '[sections]'; any
# attempt at doing it anyway will be ignored. The main purpose of this list is to name settings that do not have
# a default value but can - if set - influence how a '[section]' behaves. Repeating a '[DEFAULT]' here does not
# make sense. We use 'is_mandatory' to determine if we have to raise errors on missing settings. Here
# 'is_mandatory' means the setting must be given in a '[section]'. It may be empty.
CFG_KNOWN_SECTION = [
# {"key": "an_option", "is_mandatory": True},
# {"key": "another_one", "is_mandatory": False}
]
CFG_MANDATORY = [section_cfg["key"] for section_cfg in CFG_KNOWN_SECTION if section_cfg["is_mandatory"]]
is_systemd = any([systemd_env_var in os.environ for systemd_env_var in ["SYSTEMD_EXEC_PID", "INVOCATION_ID"]])
logging.basicConfig(
# Default for all modules is NOTSET so log everything
level="NOTSET",
format=CONST.LOG_FORMAT,
datefmt="[%X]",
handlers=[RichHandler(
show_time=False if is_systemd else True,
show_path=False if is_systemd else True,
show_level=False if is_systemd else True,
rich_tracebacks=True
)]
)
log = logging.getLogger("rich")
# Our own code logs with this level
log.setLevel(os.environ.get("LOGLEVEL", logging.INFO))
p = inflect.engine()
# Use this version of class ConfigParser to log.debug contents of our config file. When parsing sections other than
# 'default' we don't want to reprint defaults over and over again. This custom class achieves that.
class ConfigParser(
configparser.ConfigParser):
"""Can get options() without defaults
Taken from https://stackoverflow.com/a/12600066.
"""
def options(self, section, no_defaults=False, **kwargs):
if no_defaults:
try:
return list(self._sections[section].keys())
except KeyError:
raise configparser.NoSectionError(section)
else:
return super().options(section)
ini_defaults = []
internal_defaults = {default["key"]: default["value"] for default in CONST.CFG_KNOWN_DEFAULTS}
internal_globals = [default["key"] for default in CONST.CFG_KNOWN_DEFAULTS if default.get("is_global", False)]
internal_empty_ok = [default["key"] for default in CONST.CFG_KNOWN_DEFAULTS if default["empty_ok"]]
config = ConfigParser(defaults=internal_defaults,
converters={'list': lambda x: [i.strip() for i in x.split(',') if len(x) > 0]})
config.read(CONST.CFG_DEFAULT_ABS_PATH)
def print_section_header(
header: str) -> str:
return f"Loading config section '[{header}]' ..."
def validate_default_section(
config_obj: configparser.ConfigParser) -> None:
log.debug(f"Loading config from file '{CONST.CFG_DEFAULT_ABS_PATH}' ...")
if not config_obj.sections():
log.debug(f"No config sections found in '{CONST.CFG_DEFAULT_ABS_PATH}'. Exiting 1 ...")
sys.exit(1)
if config.defaults():
log.debug(f"Symbol legend:\n"
f"* Default from section '[{config_obj.default_section}]'\n"
f": Global option from '[{config_obj.default_section}]', can not be overridden in local sections\n"
f"~ Local option, doesn't exist in '[{config_obj.default_section}]'\n"
f"+ Local override of a value from '[{config_obj.default_section}]'\n"
f"= Local override, same value as in '[{config_obj.default_section}]'\n"
f"# Local attempt at overriding a global, will be ignored")
log.debug(print_section_header(config_obj.default_section))
for default in config_obj.defaults():
ini_defaults.append({default: config_obj[config_obj.default_section][default]})
if default in internal_globals:
log.debug(f": {default} = {config_obj[config_obj.default_section][default]}")
else:
log.debug(f"* {default} = {config_obj[config_obj.default_section][default]}")
else:
log.debug(f"No defaults defined")
def config_has_valid_section(
config_obj: configparser.ConfigParser) -> bool:
has_valid_section = False
for config_obj_section in config_obj.sections():
if set(CONST.CFG_MANDATORY).issubset(config_obj.options(config_obj_section)):
has_valid_section = True
break
return has_valid_section
def is_default(
config_key: str) -> bool:
return any(config_key in ini_default for ini_default in ini_defaults)
def is_global(
config_key: str) -> bool:
return config_key in internal_globals
def is_same_as_default(
config_kv_pair: dict) -> bool:
return config_kv_pair in ini_defaults
def we_have_unset_options(
config_obj: configparser.ConfigParser,
section_name: str) -> list:
options_must_be_non_empty = []
for option in config_obj.options(section_name):
if not config_obj.get(section_name, option):
if option not in internal_empty_ok:
log.warning(f"In section '[{section_name}]' option '{option}' is empty, it mustn't be.")
options_must_be_non_empty.append(option)
return options_must_be_non_empty
def validate_config_sections(
config_obj: configparser.ConfigParser) -> None:
for this_section in config_obj.sections():
log.debug(print_section_header(this_section))
unset_options = we_have_unset_options(config_obj, this_section)
if unset_options:
log.error(f"""{p.plural("Option", len(unset_options))} {unset_options} """
f"""{p.plural("is", len(unset_options))} unset. """
f"""{p.singular_noun("They", len(unset_options))} """
f"must have a non-null value. "
f"""{p.plural("Default", len(unset_options))} {p.plural("is", len(unset_options))}:""")
for unset_option in unset_options:
log.error(f"{unset_option} = {internal_defaults[unset_option]}")
log.error(f"Exiting 7 ...")
sys.exit(7)
if not set(CONST.CFG_MANDATORY).issubset(config_obj.options(this_section, no_defaults=True)):
log.warning(f"Config section '[{this_section}]' does not have all mandatory options "
f"{CONST.CFG_MANDATORY} set, skipping section ...")
config_obj.remove_section(this_section)
else:
for key in config_obj.options(this_section, no_defaults=True):
kv_prefix = "~"
remove_from_section = False
if is_global(key):
kv_prefix = "#"
remove_from_section = True
elif is_default(key):
kv_prefix = "+"
if is_same_as_default({key: config_obj[this_section][key]}):
kv_prefix = "="
log.debug(f"{kv_prefix} {key} = {config_obj[this_section][key]}")
if remove_from_section:
config_obj.remove_option(this_section, key)
def an_important_function(
section_name: str,
config_obj: configparser.ConfigParser,
whatever: str) -> list:
min_duration = config_obj.getint(section_name, "min_duration")
max_duration = config_obj.getint(section_name, "max_duration")
return ["I", "am", "a", "list"]
if __name__ == "__main__":
validate_default_section(config)
if config_has_valid_section(config):
validate_config_sections(config)
else:
log.error(f"No valid config section found. A valid config section has at least the mandatory options "
f"{CONST.CFG_MANDATORY} set. Exiting 2 ...")
sys.exit(2)
log.debug(f"Iterating over config sections ...")
for section in config.sections():
log.info(f"Processing section '[{section}]' ...")
# ...
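The custom `ConfigParser.options(no_defaults=True)` override above can be exercised on its own; this standalone sketch repeats the subclass and shows the difference against the stock behaviour:

```python
import configparser

class ConfigParser(configparser.ConfigParser):
    """Can get options() without defaults
    Taken from https://stackoverflow.com/a/12600066.
    """
    def options(self, section, no_defaults=False, **kwargs):
        if no_defaults:
            try:
                # Only the keys defined in the section itself
                return list(self._sections[section].keys())
            except KeyError:
                raise configparser.NoSectionError(section)
        return super().options(section)

cfg = ConfigParser(defaults={"self_name": "demo"})
cfg.read_string("[a]\nfoo = 1\n")

print(sorted(cfg.options("a")))            # ['foo', 'self_name'], defaults included
print(cfg.options("a", no_defaults=True))  # ['foo'], section keys only
```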


@@ -0,0 +1,17 @@
import os
project_dir = os.getcwd()
examples_dir_name = "examples"
config_ini_file_name = "config.ini.example"
examples_dir_abs = os.path.join(project_dir, examples_dir_name)
config_ini_file_abs = os.path.join(project_dir, examples_dir_name, config_ini_file_name)
if {% if cookiecutter.use_config_ini == "yes" -%}False{% else -%}True{%- endif -%}:
try:
os.remove(config_ini_file_abs)
try:
os.rmdir(examples_dir_abs)
except OSError:
pass
except OSError:
pass
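The hook's nested try/except cleanup above can be reproduced with a throwaway directory; this sketch mirrors the remove-file-then-directory logic:

```python
import os
import tempfile

# Build the same layout the template generates: <project>/examples/config.ini.example
base = tempfile.mkdtemp()
examples_dir_abs = os.path.join(base, "examples")
config_ini_file_abs = os.path.join(examples_dir_abs, "config.ini.example")
os.makedirs(examples_dir_abs)
open(config_ini_file_abs, "w").close()

# Same pattern as the hook: remove the example file, then its directory;
# swallow OSError so a missing file or non-empty directory is not fatal
try:
    os.remove(config_ini_file_abs)
    try:
        os.rmdir(examples_dir_abs)
    except OSError:
        pass
except OSError:
    pass

print(os.path.exists(examples_dir_abs))  # False
```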


@@ -0,0 +1,18 @@
[DEFAULT]
self_name = {{ cookiecutter.__project_slug }}
tmp_base_dir = /tmp/%(self_name)s
state_base_dir = /var/lib/%(self_name)s
state_files_dir = %(state_base_dir)s/state
state_file_retention = 50
state_file_name_prefix = state-
state_file_name_suffix = .log
{{ cookiecutter.__project_slug_under }}_some_option = "http://localhost:8000/api/query"
another_option = "first"
[this-is-a-section]
min_duration = 1200
max_duration = 3000
title_not_regex = this|that|somethingelse
query = @http-payload.json
dl_dir = /tmp/some/dir
another_option = "overwriting_from_default"


@@ -0,0 +1,6 @@
{%- if cookiecutter.use_rich_logging == "yes" -%}
rich
{% endif -%}
{%- if cookiecutter.use_inflect == "yes" -%}
inflect
{% endif -%}


@@ -0,0 +1,22 @@
{%- if cookiecutter.use_rich_logging == "yes" or cookiecutter.use_inflect == "yes" -%}
#
# This file is autogenerated by pip-compile with python 3.10
# To update, run:
#
# pip-compile
#
{% endif -%}
{%- if cookiecutter.use_rich_logging == "yes" -%}
commonmark==0.9.1
# via rich
{% endif -%}
{%- if cookiecutter.use_inflect == "yes" -%}
inflect==5.6.0
# via -r requirements.in
{% endif -%}
{%- if cookiecutter.use_rich_logging == "yes" -%}
pygments==2.12.0
# via rich
rich==12.4.4
# via -r requirements.in
{% endif -%}


@@ -0,0 +1,267 @@
{% if cookiecutter.use_config_ini == "yes" -%}
# Path and env manipulation
import os
# Use a config file
import configparser
# Exit with various exit codes
import sys
{%- endif %}
{%- if cookiecutter.use_rich_logging == "yes" %}
# Manipulate style and content of logs
import logging
from rich.logging import RichHandler
{%- endif %}
{%- if cookiecutter.use_inflect == "yes" %}
# Correctly generate plurals, singular nouns etc.
import inflect
{%- endif %}
{%- if cookiecutter.use_rich_logging == "yes" or cookiecutter.use_config_ini == "yes" %}
# Exit codes
# 1: Config file invalid, it has no sections
# 2: Config file invalid, sections must define at least CONST.CFG_MANDATORY
# 7: An option that must have a non-null value is either unset or null
class CONST(object):
__slots__ = ()
{%- endif %}
{%- if cookiecutter.use_rich_logging == "yes" %}
LOG_FORMAT = "%(message)s"
{%- endif %}
{%- if cookiecutter.use_config_ini == "yes" %}
# How to find a config file
CFG_THIS_FILE_DIRNAME = os.path.dirname(__file__)
CFG_DEFAULT_FILENAME = "config.ini"
CFG_DEFAULT_ABS_PATH = os.path.join(CFG_THIS_FILE_DIRNAME, CFG_DEFAULT_FILENAME)
# Values you don't have to set, these are their internal defaults. You may optionally add a key 'is_global' equal
# to either True or False. By default if left off it'll be assumed False. Script will treat values where
# 'is_global' equals True as not being overridable in a '[section]'. It's a setting that only makes sense in a
# global context for the entire script. An option where 'empty_ok' equals True can safely be unset or set to
# an empty string. An example config.ini file may give a sane config example value here, removing that value
# still results in a valid file.
CFG_KNOWN_DEFAULTS = [
{"key": "self_name", "value": "{{ cookiecutter.__project_slug }}", "empty_ok": False},
{"key": "tmp_base_dir", "value": os.path.join(CFG_THIS_FILE_DIRNAME, "data/tmp/%(self_name)s"),
"empty_ok": False},
{"key": "state_base_dir", "value": os.path.join(CFG_THIS_FILE_DIRNAME, "data/var/lib/%(self_name)s"),
"empty_ok": False},
{"key": "state_files_dir", "value": "%(state_base_dir)s/state", "is_global": False, "empty_ok": False},
{"key": "state_file_retention", "value": "50", "is_global": False, "empty_ok": True},
{"key": "state_file_name_prefix", "value": "state-", "is_global": False, "empty_ok": True},
{"key": "state_file_name_suffix", "value": ".log", "is_global": False, "empty_ok": True},
{"key": "{{ cookiecutter.__project_slug_under }}_some_option", "value": "http://localhost:8000/api/query", "is_global": True,
"empty_ok": False},
{"key": "another_option", "value": "first", "is_global": True, "empty_ok": True}
]
# In all sections other than 'default' the following settings are known and accepted. We ignore other settings.
# Per CFG_KNOWN_DEFAULTS above most '[DEFAULT]' options are accepted by virtue of being defaults and overridable.
# The only exception are options where "is_global" equals True, they can't be overridden in '[sections]'; any
# attempt at doing it anyway will be ignored. The main purpose of this list is to name settings that do not have
# a default value but can - if set - influence how a '[section]' behaves. Repeating a '[DEFAULT]' here does not
# make sense. We use 'is_mandatory' to determine if we have to raise errors on missing settings. Here
# 'is_mandatory' means the setting must be given in a '[section]'. It may be empty.
CFG_KNOWN_SECTION = [
# {"key": "an_option", "is_mandatory": True},
# {"key": "another_one", "is_mandatory": False}
]
CFG_MANDATORY = [section_cfg["key"] for section_cfg in CFG_KNOWN_SECTION if section_cfg["is_mandatory"]]
{%- endif %}
{%- if cookiecutter.use_rich_logging == "yes" %}
is_systemd = any([systemd_env_var in os.environ for systemd_env_var in ["SYSTEMD_EXEC_PID", "INVOCATION_ID"]])
logging.basicConfig(
# Default for all modules is NOTSET so log everything
level="NOTSET",
format=CONST.LOG_FORMAT,
datefmt="[%X]",
handlers=[RichHandler(
show_time=False if is_systemd else True,
show_path=False if is_systemd else True,
show_level=False if is_systemd else True,
rich_tracebacks=True
)]
)
log = logging.getLogger("rich")
# Our own code logs with this level
log.setLevel(os.environ.get("LOGLEVEL", logging.INFO))
{%- endif %}{%- if cookiecutter.use_rich_logging == "no" %}
{% endif %}
{%- if cookiecutter.use_inflect == "yes" %}
p = inflect.engine()
{%- endif %}
{%- if cookiecutter.use_config_ini == "yes" %}
# Use this version of class ConfigParser to {% if cookiecutter.use_rich_logging == "yes" -%}log.debug{%- else -%}print{%- endif %} contents of our config file. When parsing sections other than
# 'default' we don't want to reprint defaults over and over again. This custom class achieves that.
class ConfigParser(
configparser.ConfigParser):
"""Can get options() without defaults
Taken from https://stackoverflow.com/a/12600066.
"""
def options(self, section, no_defaults=False, **kwargs):
if no_defaults:
try:
return list(self._sections[section].keys())
except KeyError:
raise configparser.NoSectionError(section)
else:
return super().options(section)
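# Illustrative sketch of the behavior above (not executed here): with
# no_defaults=True options() returns only keys defined in the section itself,
# without keys inherited from '[DEFAULT]':
#
#     cp = ConfigParser()
#     cp.read_string("[DEFAULT]\nfoo = 1\n\n[svc]\nbar = 2\n")
#     cp.options("svc")                    # contains both 'bar' and 'foo'
#     cp.options("svc", no_defaults=True)  # ['bar'] only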
ini_defaults = []
internal_defaults = {default["key"]: default["value"] for default in CONST.CFG_KNOWN_DEFAULTS}
internal_globals = [default["key"] for default in CONST.CFG_KNOWN_DEFAULTS if default["is_global"]]
internal_empty_ok = [default["key"] for default in CONST.CFG_KNOWN_DEFAULTS if default["empty_ok"]]
config = ConfigParser(defaults=internal_defaults,
                      converters={'list': lambda x: [i.strip() for i in x.split(',')] if x else []})
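# The 'list' converter above makes configparser expose a getlist() method, e.g.
# (illustrative): config.getlist("some_section", "some_option") turns a raw
# value of "a, b" into ['a', 'b'].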
config.read(CONST.CFG_DEFAULT_ABS_PATH)
def print_section_header(
header: str) -> str:
return f"Loading config section '[{header}]' ..."
def validate_default_section(
        config_obj: configparser.ConfigParser) -> None:
{% if cookiecutter.use_rich_logging == "yes" -%}log.debug{%- else -%}print{%- endif %}(f"Loading config from file '{CONST.CFG_DEFAULT_ABS_PATH}' ...")
if not config_obj.sections():
{% if cookiecutter.use_rich_logging == "yes" -%}log.debug{%- else -%}print{%- endif %}(f"No config sections found in '{CONST.CFG_DEFAULT_ABS_PATH}'. Exiting 1 ...")
sys.exit(1)
    if config_obj.defaults():
{% if cookiecutter.use_rich_logging == "yes" -%}log.debug{%- else -%}print{%- endif %}(f"Symbol legend:\n"
{% if cookiecutter.use_rich_logging == "yes" %} {% endif %}f"* Default from section '[{config_obj.default_section}]'\n"
{% if cookiecutter.use_rich_logging == "yes" %} {% endif %}f": Global option from '[{config_obj.default_section}]', can not be overridden in local sections\n"
{% if cookiecutter.use_rich_logging == "yes" %} {% endif %}f"~ Local option, doesn't exist in '[{config_obj.default_section}]'\n"
{% if cookiecutter.use_rich_logging == "yes" %} {% endif %}f"+ Local override of a value from '[{config_obj.default_section}]'\n"
{% if cookiecutter.use_rich_logging == "yes" %} {% endif %}f"= Local override, same value as in '[{config_obj.default_section}]'\n"
{% if cookiecutter.use_rich_logging == "yes" %} {% endif %}f"# Local attempt at overriding a global, will be ignored")
{% if cookiecutter.use_rich_logging == "yes" -%}log.debug{%- else -%}print{%- endif %}(print_section_header(config_obj.default_section))
for default in config_obj.defaults():
ini_defaults.append({default: config_obj[config_obj.default_section][default]})
if default in internal_globals:
{% if cookiecutter.use_rich_logging == "yes" -%}log.debug{%- else -%}print{%- endif %}(f": {default} = {config_obj[config_obj.default_section][default]}")
else:
{% if cookiecutter.use_rich_logging == "yes" -%}log.debug{%- else -%}print{%- endif %}(f"* {default} = {config_obj[config_obj.default_section][default]}")
else:
{% if cookiecutter.use_rich_logging == "yes" -%}log.debug{%- else -%}print{%- endif %}(f"No defaults defined")
def config_has_valid_section(
        config_obj: configparser.ConfigParser) -> bool:
has_valid_section = False
for config_obj_section in config_obj.sections():
if set(CONST.CFG_MANDATORY).issubset(config_obj.options(config_obj_section)):
has_valid_section = True
break
return has_valid_section
def is_default(
config_key: str) -> bool:
return any(config_key in ini_default for ini_default in ini_defaults)
def is_global(
config_key: str) -> bool:
return config_key in internal_globals
def is_same_as_default(
config_kv_pair: dict) -> bool:
return config_kv_pair in ini_defaults
def we_have_unset_options(
        config_obj: configparser.ConfigParser,
section_name: str) -> list:
options_must_be_non_empty = []
for option in config_obj.options(section_name):
if not config_obj.get(section_name, option):
if option not in internal_empty_ok:
{% if cookiecutter.use_rich_logging == "yes" -%}log.warning{%- else -%}print{%- endif %}(f"In section '[{section_name}]' option '{option}' is empty, it mustn't be.")
options_must_be_non_empty.append(option)
return options_must_be_non_empty
def validate_config_sections(
        config_obj: configparser.ConfigParser) -> None:
for this_section in config_obj.sections():
{% if cookiecutter.use_rich_logging == "yes" -%}log.debug{%- else -%}print{%- endif %}(print_section_header(this_section))
unset_options = we_have_unset_options(config_obj, this_section)
if unset_options:
{% if cookiecutter.use_rich_logging == "yes" -%}log.error{%- else -%}print{%- endif %}(f"""{% if cookiecutter.use_inflect == "yes" %}{p.plural("Option", len(unset_options))}{% else %}Options{% endif %} {unset_options} """
{% if cookiecutter.use_rich_logging == "yes" %} {% endif %}f"""{% if cookiecutter.use_inflect == "yes" %}{p.plural("is", len(unset_options))}{% else %}are{% endif %} unset. """
{% if cookiecutter.use_rich_logging == "yes" %} {% endif %}f"""{% if cookiecutter.use_inflect == "yes" %}{p.singular_noun("They", len(unset_options))}{% else %}They{% endif %} """
{% if cookiecutter.use_rich_logging == "yes" %} {% endif %}f"must have a non-null value. "
{% if cookiecutter.use_rich_logging == "yes" %} {% endif %}f"""{% if cookiecutter.use_inflect == "yes" %}{p.plural("Default", len(unset_options))} {p.plural("is", len(unset_options))}{% else %}Defaults are{% endif %}:""")
for unset_option in unset_options:
{% if cookiecutter.use_rich_logging == "yes" -%}log.error{%- else -%}print{%- endif %}(f"{unset_option} = {internal_defaults[unset_option]}")
{% if cookiecutter.use_rich_logging == "yes" -%}log.error{%- else -%}print{%- endif %}(f"Exiting 7 ...")
sys.exit(7)
if not set(CONST.CFG_MANDATORY).issubset(config_obj.options(this_section, no_defaults=True)):
{% if cookiecutter.use_rich_logging == "yes" -%}log.warning{%- else -%}print{%- endif %}(f"Config section '[{this_section}]' does not have all mandatory options "
{% if cookiecutter.use_rich_logging == "yes" %} {% endif %}f"{CONST.CFG_MANDATORY} set, skipping section ...")
config_obj.remove_section(this_section)
else:
for key in config_obj.options(this_section, no_defaults=True):
kv_prefix = "~"
remove_from_section = False
if is_global(key):
kv_prefix = "#"
remove_from_section = True
elif is_default(key):
kv_prefix = "+"
if is_same_as_default({key: config_obj[this_section][key]}):
kv_prefix = "="
{% if cookiecutter.use_rich_logging == "yes" -%}log.debug{%- else -%}print{%- endif %}(f"{kv_prefix} {key} = {config_obj[this_section][key]}")
if remove_from_section:
config_obj.remove_option(this_section, key)
{%- endif %}
def an_important_function(
section_name: str,
{%- if cookiecutter.use_config_ini == "yes" %}
        config_obj: configparser.ConfigParser,
{%- endif %}
        whatever: str) -> list:
    """Example placeholder showing how to read per-section settings; replace with real logic."""
{%- if cookiecutter.use_config_ini == "yes" %}
min_duration = config_obj.getint(section_name, "min_duration")
max_duration = config_obj.getint(section_name, "max_duration")
{%- else %}
min_duration = 10
max_duration = 20
{%- endif %}
return ["I", "am", "a", "list"]
if __name__ == "__main__":
{% if cookiecutter.use_config_ini == "yes" -%}
validate_default_section(config)
if config_has_valid_section(config):
validate_config_sections(config)
else:
{% if cookiecutter.use_rich_logging == "yes" -%}log.error{%- else -%}print{%- endif %}(f"No valid config section found. A valid config section has at least the mandatory options "
{% if cookiecutter.use_rich_logging == "yes" %} {% endif %}f"{CONST.CFG_MANDATORY} set. Exiting 2 ...")
sys.exit(2)
{% if cookiecutter.use_rich_logging == "yes" -%}log.debug{%- else -%}print{%- endif %}(f"Iterating over config sections ...")
for section in config.sections():
{% if cookiecutter.use_rich_logging == "yes" -%}log.info{%- else -%}print{%- endif %}(f"Processing section '[{section}]' ...")
# ...
{%- else -%}
pass
{%- endif %}