Compare commits

No commits in common. "1-get-base-version-going" and "main" have entirely different histories.

7 changed files with 1 addition and 721 deletions

.gitignore

@@ -1 +0,0 @@
.idea

README.md

@@ -1,217 +1,3 @@
# zfs-pacman-hook
Arch Linux pacman hook for automatic ZFS snapshots
# Setup
Get started like so:
1. Install dependency `jq`
1. Clone the repo into an arbitrary path `<repo>`
1. Make `pacman-zfs-snapshot.sh` executable
```
chmod +x <repo>/pacman-zfs-snapshot.sh
```
1. Symlink to files, for example
```
sudo ln -s <repo>/pacman-zfs-snapshot.sh /usr/local/bin/pacman-zfs-snapshot
sudo ln -s <repo>/pacman-zfs-snapshot-install.hook /usr/share/libalpm/hooks/00-pacman-zfs-snapshot-install.hook
sudo ln -s <repo>/pacman-zfs-snapshot-remove.hook /usr/share/libalpm/hooks/00-pacman-zfs-snapshot-remove.hook
sudo ln -s <repo>/pacman-zfs-snapshot-upgrade.hook /usr/share/libalpm/hooks/00-pacman-zfs-snapshot-upgrade.hook
sudo ln -s <repo>/pacman-zfs-snapshot.conf /etc/pacman-zfs-snapshot.conf
```
Note that while you may choose arbitrary locations for the symlinks, the `00-pacman-zfs-snapshot-*.hook` files reference `/usr/local/bin/pacman-zfs-snapshot`. Change that path accordingly if you need to.
1. For datasets you want auto-snapshotted, add the property `space.quico:auto-snapshot=true`
```
zfs set space.quico:auto-snapshot=true zpool/root/archlinux
```
Datasets with any other property or any other value will not be auto-snapshotted.
1. Adjust `pacman-zfs-snapshot.conf` to your liking. You may want to set `do_dry_run='true'` for a start and just reinstall a benign package to get a feel for what this hook would do.
# What's it do?
On every `pacman` `PreTransaction` event, meaning right before any actual operation on a package begins, we trigger a ZFS snapshot. This happens via a so-called hook, which is a plain text config file. Hook files make use of the Arch Linux Package Management (ALPM) library, also known as `libalpm`, for which `pacman` is a frontend. By default hooks are stored in `/usr/share/libalpm/hooks`. Additionally, `/etc/pacman.conf` has a directory configured as:
```
#HookDir = /etc/pacman.d/hooks/
```
Hook files from both directories are collectively parsed and executed in lexicographical order. Hook names from _this_ repo begin with `00-*`, so on a default Arch Linux system they are the first to be executed during `pacman` transactions.
For ZFS snapshots intended to save your bacon the `00-*` naming convention is particularly critical. In `/usr/share/libalpm/hooks` you can see, for example, that when a kernel upgrade happens `60-mkinitcpio-remove.hook` is executed, deleting your existing `vmlinuz-*` kernel image (for example at `/boot/vmlinuz-linux`). After that, if you're using the `zfs-dkms` package, which itself requires `dkms` and thereby installs `71-dkms-remove.hook`, that hook removes your ZFS kernel module files. Both the `60-*` hook and optionally the `71-*` hook (for `zfs-dkms` users) run early due to their naming. If we don't create a snapshot before these hooks run, we end up creating a snapshot without a kernel image and without ZFS kernel module files. Our `00-*` hook files are executed early enough, ensuring that a snapshot can safely return you to a working system.
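The ordering can be sketched in plain shell; `sort` merely stands in for `libalpm`'s lexicographical ordering here, and the `60-*`/`71-*` filenames are the ones mentioned above:
```
# Sketch: ALPM runs hooks in lexicographical filename order, so a
# 00-* hook sorts before the mkinitcpio and dkms removal hooks.
hooks='60-mkinitcpio-remove.hook
00-pacman-zfs-snapshot-upgrade.hook
71-dkms-remove.hook'
ordered="$(printf -- '%s\n' "${hooks}" | sort)"
# The 00-* snapshot hook sorts first, so it runs before anything is removed.
printf -- '%s\n' "${ordered}"
```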
## Snapshot selection
We snapshot datasets that have the `space.quico:auto-snapshot` property set to `true`. By default we further limit datasets to only those that are currently mounted in your active operating system. We identify these by asking `findmnt` for a list of mounted file systems with `fstype=="zfs"`, which for example returns:
```
# findmnt --json --list --output 'fstype,source,target' | \
jq --raw-output '.[][] | select(.fstype=="zfs") | .source'
zpool/root/archlinux
```
If no dataset (or no _local_ dataset) has the property set correctly, no snapshots are taken. The script will print an info-level message to that effect on `pacman` transactions.
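The property filter itself can be tried offline. This is the same `awk` expression `pacman-zfs-snapshot.sh` applies to `zfs get -H -o name,value` output, here fed a canned sample instead of a live pool:
```
# Keep datasets whose property value is exactly 'true' and skip
# snapshots, i.e. names containing an '@' character.
snappable="$(printf -- 'zpool\tfalse\nzpool/root/archlinux\ttrue\nzpool/root/archlinux@old\ttrue\n' \
  | awk -F'\t' '{if($2 ~ /^true$/ && $1 !~ /@/) print $1}')"
printf -- '%s\n' "${snappable}"
```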
## Snapshot chains
We retain two different snapshot chains, one for `pacman` transactions that only affect what we are calling _trivial_ packages and a separate chain for _important_ packages. By default only the exact regular expression package name match `^(linux(-zen)?(-headers)?|systemd|zfs-(linux(-zen)?|dkms|utils))$` is considered important, so in plain English any one of:
- `linux`
- `linux-headers`
- `linux-zen`
- `linux-zen-headers`
- `systemd`
- `zfs-linux`
- `zfs-linux-zen`
- `zfs-dkms`
- `zfs-utils`
Whenever an important package is affected by a transaction, a snapshot goes into the corresponding chain. In all other cases - when no important package is affected - snapshots go into the trivial chain.
The _trivial_ snapshot chain by default keeps 25 snapshots, the _important_ chain keeps 10. The thought process here is that you will likely not futz around with a kernel every day, whereas you may very well install arbitrary packages multiple times a day. Snapshots should keep you safe for a low number of weeks up to maybe a full month on an average daily driver system, hence the defaults of 25 and 10 snapshots, respectively.
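A sketch of that classification, using the default `important_names` expression; the script does the same anchored match with `grep -Piq`, while this sketch uses `grep -Eiq` since the pattern needs no PCRE features:
```
# Classify a package name as important or trivial via the same anchored
# regex match the hook performs (default important_names shown).
important_names='linux(-zen)?(-headers)?|systemd|zfs-(linux(-zen)?|dkms|utils)'
classify () {
  if printf -- '%s' "${1}" | grep -Eiq -- '^('"${important_names}"')$'; then
    printf -- '%s' 'important'
  else
    printf -- '%s' 'trivial'
  fi
}
classify 'linux-zen-headers'   # important
```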
## Dataset naming and uniqueness
Snapshots may look like so:
```
$ zfs list -o name -t all
NAME snap_date_format='%F-%H%M' ┌─── Important because systemd
zpool | | is on our list of
zpool/root ▼ ┌ Counter | important packages
zpool/root/archlinux ┌─────────────┐ ▼ ▼▼▼
zpool/root/archlinux@pacman_2023-03-07-0113_1_op:upgr_sev:imp_pkgs:systemd:bind:enchant:grep
zpool/root/archlinux@pacman_2023-03-07-0113_1_op:upgr_sev:trv_pkgs:jdk17-temurin
zpool/root/archlinux@pacman_2023-03-07-0114_1_op:inst_sev:trv_pkgs:docker-credential-secretser...
zpool/root/archlinux@pacman_2023-03-07-0115_1_op:upgr_sev:trv_pkgs:proton-ge-custom-bin
▲▲▲▲ ▲▲▲ └────────────────────────────┘
| | Max. 30 characters per our
Pacman operation that triggered this snapshot ───┘ | pacman-zfs-snapshot.conf
| setting 'pkgs_list_max_length'
Severity based on affected packages, here trivial ───────┘
```
Notice how in this case the _counter_ is `1` for all four snapshots. The counter is used as the distinguishing factor for snapshots that are otherwise identical. This avoids naming collisions by incrementing it as needed. In day-to-day operations you will typically see it at `1` as there rarely is a need to avoid collisions unless you purposely limit the timestamp length and/or package list length to the point that successive snapshots may appear identical. See [Avoiding naming collisions](#avoiding-naming-collisions) for more details.
Notice also how snapshot line 3 ends in `docker-credential-secretser...`. This snapshot was triggered on installation of the Arch User Repository package [docker-credential-secretservice-bin](https://aur.archlinux.org/packages/docker-credential-secretservice-bin), whose package name is 35 characters long. In this example our `pkgs_list_max_length` setting limits the packages string to a maximum of `30` characters. If we can't naturally fit package names into this limit by removing packages from the list, we instead cut off part of the package name and add an ellipsis (three dots, `...`). The default setting is `pkgs_list_max_length='30'`. In case the user wants three characters or fewer, thus making an ellipsis impractical, we simply trim the package name to that many characters:
```
pkgs_list_max_length='7': dock...
pkgs_list_max_length='6': doc...
pkgs_list_max_length='5': do...
pkgs_list_max_length='4': d...
pkgs_list_max_length='3': doc
pkgs_list_max_length='2': do
pkgs_list_max_length='1': d
```
With a package list allowance of 0 characters the entire `pkgs` field is removed. The above example will then look like so:
```
$ zfs list -o name -t all
NAME
zpool
zpool/root
zpool/root/archlinux
zpool/root/archlinux@pacman_2023-03-07-0113_1_op:upgr_sev:imp
zpool/root/archlinux@pacman_2023-03-07-0113_1_op:upgr_sev:trv
zpool/root/archlinux@pacman_2023-03-07-0114_1_op:inst_sev:trv
zpool/root/archlinux@pacman_2023-03-07-0115_1_op:upgr_sev:trv
```
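The trimming rules can be sketched with bash substring expansion much like the script's `trim_single_remaining_package_name` does it; `shorten` is a hypothetical helper name for illustration:
```
# Hypothetical helper mirroring the trimming rules: 4+ characters leave
# room for an ellipsis, 1 to 3 characters just cut, 0 drops the field.
shorten () {
  local pkg="${1}" max="${2}"
  if [ "${max}" -le '0' ]; then
    printf -- '%s' ''
  elif [ "${#pkg}" -le "${max}" ]; then
    printf -- '%s' "${pkg}"
  elif [ "${max}" -le '3' ]; then
    printf -- '%s' "${pkg:0:${max}}"
  else
    printf -- '%s...' "${pkg:0:$(( max - 3 ))}"
  fi
}
shorten 'docker-credential-secretservice-bin' '7'   # dock...
```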
Whatever you set as your `pkgs_list_max_length` is still just a best effort, as it is subject to ZFS' internal maximum for dataset name length, currently 255 characters. For a snapshot, the dataset name in front of the `@` character plus everything from the `@` character until the end counts against the limit. If you'd like e.g. 200 characters allocated to the package list, chances are you'll see fewer characters than that, depending on how long your dataset names are on their own.
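As a worked example of that budget, assuming the dataset and field layout from the listings above (the character counts are illustrative, not computed by the hook this way):
```
# Everything before and after the '@' counts against ZFS' 255-character
# snapshot name limit; what's left over is the room for the pkgs list.
max_len='255'
dataset='zpool/root/archlinux'                          # 20 characters
fixed='pacman_2023-03-07-0113_1_op:upgr_sev:trv_pkgs:'  # 46 characters
available="$(( max_len - ${#dataset} - 1 - ${#fixed} ))" # -1 for the '@'
printf -- '%s\n' "${available}"
```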
## Special characters in package names
Arch Linux has no qualms with at (`@`) and plus (`+`) characters in package names, but ZFS very much takes issue with those. Just a heads-up: when constructing a ZFS snapshot name we replace all `@` characters in package names with one dot each (`.`) and all `+` characters with one underscore each (`_`).
A snapshot name that would appear like so:
```
$ zfs list -o name -t all
NAME
zpool
zpool/root
zpool/root/archlinux
zpool/root/archlinux@pacman_2023-03-07-0113_1_op:upgr_sev:trv_pkgs:jdk17-temurin:libc++
~~~~~~
```
We'll instead create it like so:
```
$ zfs list -o name -t all
NAME
zpool
zpool/root
zpool/root/archlinux
zpool/root/archlinux@pacman_2023-03-07-0113_1_op:upgr_sev:trv_pkgs:jdk17-temurin:libc__
~~~~~~
```
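The substitution itself is plain bash parameter expansion, just like in the script; `sanitize` is a hypothetical wrapper name for illustration:
```
# Hypothetical wrapper around the two substitutions the hook performs:
# '+' becomes '_' and '@' becomes '.' so the result is ZFS-safe.
sanitize () {
  local pkg="${1}"
  pkg="${pkg//+/_}"
  pkg="${pkg//@/.}"
  printf -- '%s' "${pkg}"
}
sanitize 'libc++'   # libc__
```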
Have a look at `pacman-zfs-snapshot.conf` as well, its comments should be clear enough to get you going.
# Avoiding naming collisions
By default snapshot names contain a timestamp formatted like so: `2023-03-07-0114`. This makes snapshot names reasonably unique. You can change both the timestamp format and timezone in `pacman-zfs-snapshot.conf` where the format defaults to:
```
snap_date_format='%F-%H%M'
```
And the timezone defaults to:
```
snap_timezone='Etc/UTC'
```
With these settings it is possible to cause ZFS snapshot name collisions (meaning reuse of the exact same snapshot name) when all of the following conditions are true for any two `pacman` operations:
- They occur within the same minute
- They cover the same type of operation (_Install_, _Remove_ or _Upgrade_)
- They cover the same list of packages
The script safeguards against naming collisions by adding a monotonically incrementing counter after the timestamp string.
For example, by running `pacman -S tmux` three times within the same minute (once for an _Install_ operation and two more times for two identical _Upgrade_ operations), your system may generate the following example snapshots:
```
zpool/root/archlinux@pacman_2023-03-07-0116_1_op:inst_sev:trv_pkgs:tmux
zpool/root/archlinux@pacman_2023-03-07-0116_1_op:upgr_sev:trv_pkgs:tmux
zpool/root/archlinux@pacman_2023-03-07-0116_2_op:upgr_sev:trv_pkgs:tmux
~~~
```
Notice that lines 2 and 3 would collide since their dataset names are identical except for the counter suffix, which was incremented by 1 to avoid the collision.
This facilitates a hands-off approach to using this script on a daily driver system without risking missed snapshots or having to employ other, more involved approaches to avoiding naming collisions.
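The counter mechanics can be sketched against a canned list of existing snapshot names; the script consults `zfs list -t all` output instead:
```
# Increment the counter until the candidate name is free; here the name
# with counter 1 already exists, so we end up at counter 2.
existing='zpool/root/archlinux@pacman_2023-03-07-0116_1_op:upgr_sev:trv_pkgs:tmux'
base='zpool/root/archlinux@pacman_2023-03-07-0116'
suffix='op:upgr_sev:trv_pkgs:tmux'
counter='0'
candidate=''
until [ -n "${candidate}" ] && ! printf -- '%s\n' "${existing}" | grep -Fxq -- "${candidate}"; do
  counter="$(( counter + 1 ))"
  candidate="${base}_${counter}_${suffix}"
done
printf -- '%s\n' "${candidate}"
```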
# Rollback
After a rollback, for example via the excellent [ZFSBootMenu](https://docs.zfsbootmenu.org/), `pacman` and any AUR helpers you may be using will consider the `pacman` database to be locked. No `pacman` transactions can start; you will for example see:
- In `pacman`
```
# pacman -Syu
:: Synchronizing package databases...
error: failed to synchronize all databases (unable to lock database)
```
- In `paru`
```
$ paru
:: Pacman is currently in use, please wait...
```
The moment the snapshot was created, `pacman` was already in a transaction, so it had already written its lock file to `/var/lib/pacman/db.lck`. After a clean finish `pacman` would have deleted that lock itself, but since you rolled back to a point mid-transaction it's still there. Just delete the file and you're good to go:
```
sudo rm /var/lib/pacman/db.lck
```
# Development
## Conventional commits
This project uses [Conventional Commits](https://www.conventionalcommits.org/) for its commit messages.
### Commit types
Commit _types_ besides `fix` and `feat` are:
- `build`: Project structure, directory layout, build instructions for roll-out
- `refactor`: Keeping functionality while streamlining or otherwise improving function flow
- `test`: Working on test coverage
- `docs`: Documentation for project or components
### Commit scopes
The following _scopes_ are known for this project. A Conventional Commits commit message may optionally use one of the following scopes or none:
- `conf`: How we deal with script config
- `script`: Any other script work that doesn't specifically fall into the above scopes
- `hook`: Configuring the hook(s)
- `meta`: Affects the project's repo layout, readme content, file names etc.

@@ -1,12 +0,0 @@
[Trigger]
Operation = Install
Type = Package
Target = *
[Action]
Description = Create ZFS snapshot(s)
When = PreTransaction
Exec = /bin/sh -c 'while read -r f; do echo "$f"; done | /usr/local/bin/pacman-zfs-snapshot install'
Depends = jq
AbortOnFail
NeedsTargets

@@ -1,12 +0,0 @@
[Trigger]
Operation = Remove
Type = Package
Target = *
[Action]
Description = Create ZFS snapshot(s)
When = PreTransaction
Exec = /bin/sh -c 'while read -r f; do echo "$f"; done | /usr/local/bin/pacman-zfs-snapshot remove'
Depends = jq
AbortOnFail
NeedsTargets

@@ -1,12 +0,0 @@
[Trigger]
Operation = Upgrade
Type = Package
Target = *
[Action]
Description = Create ZFS snapshot(s)
When = PreTransaction
Exec = /bin/sh -c 'while read -r f; do echo "$f"; done | /usr/local/bin/pacman-zfs-snapshot upgrade'
Depends = jq
AbortOnFail
NeedsTargets

@@ -1,57 +0,0 @@
# Set to 'true' to do nothing and just print messages during pacman
# operations. Helpful to get a feel for what these hooks do. This defaults
# to 'false', so if you set this to an empty string, remove it or comment
# it out in this conf file it'll equal 'false'.
do_dry_run='false'
# Pipe-separated list of package names we consider important. Will be
# matched against regular expression ^(this_var_here)$. Snapshots taken
# before a pacman transaction on an important package have a separate
# retention from snapshots for trivial packages. Lends itself to keeping
# high-risk updates separate from everything else.
important_names='linux(-zen)?(-headers)?|systemd|zfs-(linux(-zen)?|dkms|utils)'
# Number of snapshots to keep
snaps_trivial_keep='25'
snaps_important_keep='10'
# Which suffix to use in snapshot names to identify snapshots taken
# before a trivial pacman operation and before an important pacman operation.
snaps_trivial_suffix='trv'
snaps_important_suffix='imp'
# Snapshot name will contain the list of affected packages trimmed to at
# most this many characters.
pkgs_list_max_length='30'
# Hook will by default snapshot all datasets that have the property
# 'space.quico:auto-snapshot=true' set, even the ones that are not currently
# mounted and may belong to unrelated operating systems. Set
# snap_only_local_datasets='true' to limit snapshots to only those datasets
# that have aforementioned property and at the same time are currently
# mounted in your running OS. Currently mounted is defined as:
# findmnt --json --list --output 'fstype,source,target' | \
# jq --raw-output '.[][] | select(.fstype=="zfs") | .source'
snap_only_local_datasets='true'
# Which characters do we want to use to separate snapshot name fields
snap_field_separator='_'
# Prefix all our snapshots with this string to keep them separate from
# snapshots done by any other means
snap_name_prefix='pacman'
# We do "$(date +<whatever>)" to put a timestamp into snapshot names.
# Defaults to "$(date +'%F-%H%M')" which returns '2023-03-07-0050'.
snap_date_format='%F-%H%M'
# The tzdata-formatted timezone name used to add timestamps to snapshot
# names. Check for example 'timedatectl list-timezones' to get a list of
# valid names on your system. Format looks like 'America/Fortaleza',
# 'Asia/Magadan' or 'Australia/Sydney'. Defaults to 'Etc/UTC'. Can also be
# the empty string (as in snap_timezone='') in which case we'll use your
# system's timezone setting.
snap_timezone='Etc/UTC'
# Which strings we want to use to differentiate the pacman operations
# Install, Remove and Upgrade
snap_op_installation_suffix='inst'
snap_op_remove_suffix='rmvl'
snap_op_upgrade_suffix='upgr'

@@ -1,412 +0,0 @@
#!/bin/bash
declare -a pkgs
while read -r pkg; do
pkgs+=("${pkg}")
done
declare conf_file
conf_file='/etc/pacman-zfs-snapshot.conf'
declare important_names snaps_trivial_keep snaps_important_keep snaps_trivial_suffix snaps_important_suffix
if [[ -r "${conf_file}" ]]; then
source "${conf_file}"
fi
# User-defined
do_dry_run="${do_dry_run:-false}"
important_names="${important_names:-linux|systemd|zfs-(dkms|utils)}"
snaps_trivial_keep="${snaps_trivial_keep:-15}"
snaps_important_keep="${snaps_important_keep:-5}"
snaps_trivial_suffix="${snaps_trivial_suffix:-trv}"
snaps_important_suffix="${snaps_important_suffix:-imp}"
pkgs_list_max_length="${pkgs_list_max_length:-30}"
snap_only_local_datasets="${snap_only_local_datasets:-true}"
snap_field_separator="${snap_field_separator:-_}"
snap_name_prefix="${snap_name_prefix:-pacman}"
snap_date_format="${snap_date_format:-%F-%H%M}"
snap_timezone="${snap_timezone:-Etc/UTC}"
snap_op_installation_suffix="${snap_op_installation_suffix:-inst}"
snap_op_remove_suffix="${snap_op_remove_suffix:-rmvl}"
snap_op_upgrade_suffix="${snap_op_upgrade_suffix:-upgr}"
# Internal
declare zfs_prop pkg_separator max_zfs_snapshot_name_length color_reset color_lyellow color_red
zfs_prop='space.quico:auto-snapshot'
pkg_separator=':'
max_zfs_snapshot_name_length='255'
color_reset='\e[0m'
color_lyellow='\e[93m'
color_red='\e[31m'
declare operation conf_op_suffix
operation="${1}"
case "${operation}" in
install)
conf_op_suffix="${snap_op_installation_suffix}"
;;
remove)
conf_op_suffix="${snap_op_remove_suffix}"
;;
upgrade)
conf_op_suffix="${snap_op_upgrade_suffix}"
;;
esac
function pprint () {
local style msg exit_code
style="${1:?}"
msg="${2:?}"
exit_code="${3}"
case "${style}" in
warn)
printf -- "${color_lyellow}"'[WARN]'"${color_reset}"' %s\n' "${msg}"
;;
err)
printf -- "${color_red}"'[ERR] '"${color_reset}"' %s\n' "${msg}"
;;
info)
printf -- '[INFO] %s\n' "${msg}"
;;
esac
[[ "${exit_code}" ]] && exit "${exit_code}"
}
function split_pkgs_by_importance () {
local pkgs_in_transaction
pkgs_in_transaction=("${@}")
for pkg in "${pkgs_in_transaction[@]}"; do
if grep -Piq -- '^('"${important_names}"')$' <<<"${pkg}"; then
important_pkgs_in_transaction+=("${pkg}")
else
trivial_pkgs_in_transaction+=("${pkg}")
fi
done
}
function set_severity () {
if [[ "${#important_pkgs_in_transaction[@]}" -ge '1' ]]; then
severity="${snaps_important_suffix}"
else
severity="${snaps_trivial_suffix}"
fi
}
function get_globally_snappable_datasets () {
local datasets_list
# For all datasets show their "${zfs_prop}" property; only print dataset
# name in column 1 and property value in column 2. In awk limit this
# list to datasets where tab-delimited column 2 has exact string
# '^true$' then further limit output by eliminating snapshots from list,
# i.e. dataset names that contain an '@' character.
datasets_list="$(zfs get -H -o 'name,value' "${zfs_prop}" | \
awk -F'\t' '{if($2 ~ /^true$/ && $1 !~ /@/) print $1}')"
while IFS= read -u10 -r dataset; do
globally_snappable_datasets+=("${dataset}")
done 10<<<"${datasets_list}"
}
function get_local_snappable_datasets () {
local datasets_list
datasets_list="$(findmnt --json --list --output 'fstype,source,target' | \
jq --raw-output '.[][] | select(.fstype=="zfs") | .source')"
while IFS= read -u10 -r dataset; do
local_snappable_datasets+=("${dataset}")
done 10<<<"${datasets_list}"
}
function trim_globally_snappable_datasets () {
for global_dataset in "${globally_snappable_datasets[@]}"; do
for local_dataset in "${local_snappable_datasets[@]}"; do
if grep -Piq -- '^'"${local_dataset}"'$' <<<"${global_dataset}"; then
snappable_datasets+=("${global_dataset}")
fi
done
done
}
function write_pkg_list_oneline () {
if [[ "${severity}" == "${snaps_important_suffix}" ]]; then
for pkg in "${important_pkgs_in_transaction[@]}"; do
if [[ "${unabridged_pkg_list_oneline}" ]]; then
unabridged_pkg_list_oneline="${unabridged_pkg_list_oneline}${pkg_separator}${pkg}"
else
unabridged_pkg_list_oneline="${pkg}"
fi
done
fi
if [[ "${#trivial_pkgs_in_transaction[@]}" -ge '1' ]]; then
for pkg in "${trivial_pkgs_in_transaction[@]}"; do
if [[ "${unabridged_pkg_list_oneline}" ]]; then
unabridged_pkg_list_oneline="${unabridged_pkg_list_oneline}${pkg_separator}${pkg}"
else
unabridged_pkg_list_oneline="${pkg}"
fi
done
fi
}
function trim_single_remaining_package_name () {
local pkg_name
pkg_name="${shorter_pkg_list}"
case 1 in
# Trim to 1 to 3 characters, no trailing ellipsis (...)
$(( 1<=pkgs_list_max_length && pkgs_list_max_length<=3 )))
pkg_name="${pkg_name::${pkgs_list_max_length}}"
;;
# Show as many pkg name characters as we can while also
# fitting an ellipsis into the name (...) to indicate
# that we've cut the pkg name off at the end.
$(( pkgs_list_max_length>=4 )))
pkg_name="${pkg_name::$(( pkgs_list_max_length - 3 ))}"'...'
;;
esac
shorter_pkg_list="${pkg_name}"
}
function trim_pkg_list_oneline () {
local available_pkg_list_length
available_pkg_list_length="${1}"
if [[ "${available_pkg_list_length}" -lt "${pkgs_list_max_length}" ]]; then
# If we have fewer characters available before hitting the
# ZFS internal maximum snapshot name length than the user
# wants limit package list length.
pkgs_list_max_length="${available_pkg_list_length}"
fi
local shorter_pkg_list
if [[ "${pkgs_list_max_length}" -le '0' ]]; then
# User wants zero characters of pkg names in snapshot name,
# no need to even find an appropriate pkg name string. Just
# set to empty string and we're done here.
shorter_pkg_list=''
else
shorter_pkg_list="${unabridged_pkg_list_oneline}"
while [[ "${#shorter_pkg_list}" -gt "${pkgs_list_max_length}" ]]; do
shorter_pkg_list="${shorter_pkg_list%"${pkg_separator}"*}"
if ! grep -Piq -- "${pkg_separator}" <<<"${shorter_pkg_list}"; then
# Only one package remains in package list, no need to continue
break
fi
done
# If pkg name is still too long trim it. If there's enough
# space for an ellipsis (...) we add that to indicate we've
# trimmed the name, otherwise we just take however many
# characters of the pkg name we can get.
if [[ "${#shorter_pkg_list}" -gt "${pkgs_list_max_length}" ]]; then
trim_single_remaining_package_name
fi
fi
trimmed_pkg_list_oneline="${shorter_pkg_list}"
}
function test_snap_names_for_validity () {
local snap_counter max_dataset_name_length trimmed_pkg_list_oneline dataset_name_no_pkgs dataset_name_with_pkgs
snap_counter="${1}"
max_dataset_name_length='0'
for dataset in "${snappable_datasets[@]}"; do
# Begin building snapshot name
dataset_name_no_pkgs="${dataset}"'@'"${snap_name_prefix}${snap_field_separator}${date_string}"
# Append counter
dataset_name_no_pkgs="${dataset_name_no_pkgs}${snap_field_separator}${snap_counter}"
# Append operation, severity and packages fields
dataset_name_no_pkgs="${dataset_name_no_pkgs}${snap_field_separator}"'op:'"${conf_op_suffix}${snap_field_separator}"'sev:'"${severity}"
# Update the longest snapshot name seen so far. We add an automatic
# +6 to string length (or more exactly ${#snap_field_separator}+5)
# to account for the fact that by default the dataset will end in
# the separator string "${snap_field_separator}" plus 'pkgs:' for a
# total of 6 additional characters. If these additional characters
# cause us to reach or go over the ZFS dataset name length limit
# there's no point in attempting to add package names to snapshots.
# We calculate as if these additional characters existed and we add
# dataset names to our planned_snaps array as if they don't.
if [[ "$(( ${#dataset_name_no_pkgs}+${#snap_field_separator}+5 ))" -gt "${max_dataset_name_length}" ]]; then
max_dataset_name_length="$(( ${#dataset_name_no_pkgs}+${#snap_field_separator}+5 ))"
fi
planned_snaps+=("${dataset_name_no_pkgs}")
done
# Abort if this is longer than what ZFS allows
if [[ "${max_dataset_name_length}" -gt "${max_zfs_snapshot_name_length}" ]]; then
pprint 'err' 'Snapshot name would exceed ZFS '"${max_zfs_snapshot_name_length}"' chars limit. Aborting ...' '1'
fi
if [[ "${max_dataset_name_length}" -eq "${max_zfs_snapshot_name_length}" ]]; then
for planned_snap in "${planned_snaps[@]}"; do
if grep -Piq -- '^'"${planned_snap}"'$' <<<"${existing_snaps}"; then
# This snapshot name already exists. Unset array and break.
# Try again with next higher counter suffix.
unset 'planned_snaps[@]'
break
fi
done
# If planned_snaps array still has members we take the snapshot
# names already generated. If not we return without array in which
# case this function will run again with the snapshot counter
# incremented by one. Maximum length seen across all snapshot names
# is exactly the ZFS snapshot character limit. We won't be able to
# add packages to snapshot names but they will all fit perfectly.
# This is good enough.
return
else
# We have enough room to add package names.
local available_pkg_list_length
available_pkg_list_length="${pkgs_list_max_length}"
if [[ "${max_dataset_name_length}" -gt $(( max_zfs_snapshot_name_length - pkgs_list_max_length )) ]]; then
available_pkg_list_length="$(( max_zfs_snapshot_name_length - max_dataset_name_length ))"
fi
trim_pkg_list_oneline "${available_pkg_list_length}"
for planned_snap_id in "${!planned_snaps[@]}"; do
planned_snaps["${planned_snap_id}"]="${planned_snaps[${planned_snap_id}]}${snap_field_separator}"'pkgs:'"${trimmed_pkg_list_oneline}"
if grep -Piq -- '^'"${planned_snaps[${planned_snap_id}]}"'$' <<<"${existing_snaps}"; then
# This snapshot name already exists. Unset array and break.
# Try again with next higher counter suffix.
unset 'planned_snaps[@]'
break
fi
done
fi
}
function generate_snap_names () {
local snap_counter existing_snaps
snap_counter='0'
existing_snaps="$(zfs list -t all -oname -H)"
until [[ "${#planned_snaps[@]}" -gt '0' ]]; do
snap_counter="$(( snap_counter+1 ))"
test_snap_names_for_validity "${snap_counter}"
done
}
function do_snaps () {
local snap_return_code
if [[ "${do_dry_run}" == 'true' ]]; then
pprint 'info' 'Dry-run, pretending to atomically do ZFS snapshot:'
for planned_snap in "${planned_snaps[@]}"; do
pprint 'info' ' '"${planned_snap}"
done
else
zfs snapshot "${planned_snaps[@]}"
snap_return_code="${?}"
if [[ "${snap_return_code}" -eq '0' ]]; then
successfully_snapped_datasets=("${snappable_datasets[@]}")
pprint 'info' 'ZFS snapshot atomically done:'
for planned_snap in "${planned_snaps[@]}"; do
pprint 'info' ' '"${planned_snap}"
done
else
pprint 'warn' 'ZFS snapshot failed:'
for planned_snap in "${planned_snaps[@]}"; do
pprint 'warn' ' '"${planned_snap}"
done
fi
fi
}
function get_snaps_in_cur_sev () {
local dataset_to_query
dataset_to_query="${1:?}"
snap_list="$(zfs list -H -o 'name' -t snapshot "${dataset_to_query}")"
snaps_done_by_us="$(grep -Pi -- '@'"${snap_name_prefix}${snap_field_separator}" <<<"${snap_list}")"
snaps_in_cur_sev="$(grep -Pi -- "${snap_field_separator}"'sev:'"${severity}${snap_field_separator}" <<<"${snaps_done_by_us}")"
printf -- '%s\n' "${snaps_in_cur_sev}"
}
function do_retention () {
local snap_list snaps_done_by_us snaps_in_cur_sev snaps_limit oldest_snap snap_return_code
local -a destroyed_snaps failed_to_destroy_snaps
if [[ "${do_dry_run}" == 'true' ]]; then
pprint 'info' 'Dry-run, skipping potential ZFS destroy operations ...'
else
for successfully_snapped_dataset in "${successfully_snapped_datasets[@]}"; do
snaps_in_cur_sev="$(get_snaps_in_cur_sev "${successfully_snapped_dataset}")"
if [[ "${severity}" == "${snaps_important_suffix}" ]]; then
snaps_limit="${snaps_important_keep}"
else
snaps_limit="${snaps_trivial_keep}"
fi
while [[ "$(get_snaps_in_cur_sev "${successfully_snapped_dataset}" | wc -l)" -gt "${snaps_limit}" ]]; do
oldest_snap="$(get_snaps_in_cur_sev "${successfully_snapped_dataset}" | head -n1)"
zfs destroy "${oldest_snap}"
snap_return_code="${?}"
if [[ "${snap_return_code}" -eq '0' ]]; then
destroyed_snaps+=("${oldest_snap}")
else
failed_to_destroy_snaps+=("${oldest_snap}")
fi
done
if [[ "${#destroyed_snaps[@]}" -gt '0' ]]; then
pprint 'info' 'Oldest ZFS snapshot'"$([[ "${#destroyed_snaps[@]}" -gt '1' ]] && printf -- '%s' 's')"' in chain '"'"'sev:'"${severity}"''"'"' destroyed:'
for destroyed_snap in "${destroyed_snaps[@]}"; do
pprint 'info' ' '"${destroyed_snap}"
done
fi
if [[ "${#failed_to_destroy_snaps[@]}" -gt '0' ]]; then
pprint 'warn' 'Failed to prune ZFS snapshot'"$([[ "${#failed_to_destroy_snaps[@]}" -gt '1' ]] && printf -- '%s' 's')"' in chain '"'"'sev:'"${severity}"''"'"':'
for failed_to_destroy_snap in "${failed_to_destroy_snaps[@]}"; do
pprint 'warn' ' '"${failed_to_destroy_snap}"
done
fi
done
fi
return 0
}
function main () {
local pkgs_in_transaction
pkgs_in_transaction=("${@}")
# Replace characters that are valid as Arch Linux package names but invalid as ZFS dataset names with something ZFS
# doesn't mind. Replace at characters ('@') indiscriminately with one dot each ('.'), replace plus characters ('+')
# with one underscore each ('_').
pkgs_in_transaction=("${pkgs_in_transaction[@]//+/_}")
pkgs_in_transaction=("${pkgs_in_transaction[@]//@/.}")
local -a important_pkgs_in_transaction trivial_pkgs_in_transaction
split_pkgs_by_importance "${pkgs_in_transaction[@]}"
local severity
set_severity
local -a globally_snappable_datasets
get_globally_snappable_datasets
local -a snappable_datasets
if [[ "${snap_only_local_datasets}" == 'true' ]]; then
local -a local_snappable_datasets
get_local_snappable_datasets
trim_globally_snappable_datasets
if [[ "${#snappable_datasets[@]}" -eq '0' ]]; then
pprint 'info' 'ZFS snapshot skipped, no local (= currently mounted) dataset has'
pprint 'info' 'property '"'"''"${zfs_prop}"''"'"' set to '"'"'true'"'"'. At the same'
pprint 'info' 'time option '"'"'snap_only_local_datasets'"'"' equals '"'"'true'"'"' so'
pprint 'info' 'we must only snapshot local datasets. Nothing to do here while'
pprint 'info' 'none of them have '"'"''"${zfs_prop}"''"'"' set to '"'"'true'"'"'.' '0'
fi
else
snappable_datasets=("${globally_snappable_datasets[@]}")
if [[ "${#snappable_datasets[@]}" -eq '0' ]]; then
pprint 'info' 'ZFS snapshot skipped, no dataset has property '"'"''"${zfs_prop}"''"'"' set to '"'"'true'"'"'.' '0'
fi
fi
local unabridged_pkg_list_oneline
write_pkg_list_oneline
local date_string
local -a planned_snaps
date_string="$(if [[ "${snap_timezone}" ]]; then TZ="${snap_timezone}" date +"${snap_date_format}"; else date +"${snap_date_format}"; fi)"
generate_snap_names
local -a successfully_snapped_datasets
do_snaps
do_retention
}
main "${pkgs[@]}"