Merge remote-tracking branch 'origin/1-get-base-version-going' into 1-get-base-version-going
commit 83e11af519

README.md
@@ -38,11 +38,14 @@ Hook files from both directories are collectively parsed and executed in lexicog
 
 For ZFS snapshots intended to save your bacon, the `00-*` naming convention is particularly critical. In `/usr/share/libalpm/hooks` you can see, for example, that when a kernel upgrade happens `60-mkinitcpio-remove.hook` is executed (deleting your existing `vmlinuz-*` kernel image, for example at `/boot/vmlinuz-linux`). After that, if you're using the `zfs-dkms` package, which itself requires `dkms`, which in turn installs `71-dkms-remove.hook`, this hook removes your ZFS kernel module files. Both the `60-*` and optionally the `71-*` hook (for `zfs-dkms` users) run early due to their naming. If we don't create a snapshot before these hooks run, we end up with a snapshot that has neither a kernel image nor ZFS kernel module files. Our `00-*` hook files are executed early enough to ensure that a snapshot can safely return you to a working system.
 
-By default we identify the active system dataset by doing `findmnt / --noheadings --output source` which for example returns:
+We snapshot datasets that have the `space.quico:auto-snapshot` property set to `true`. By default we further limit datasets to only those that are currently mounted in your active operating system. We identify these by asking `findmnt` for a list of mounted file systems of `fstype=="zfs"`, which for example returns:
 
 ```
+# findmnt --json --list --output 'fstype,source,target' | \
+    jq --raw-output '.[][] | select(.fstype=="zfs") | .source'
+
 zpool/root/archlinux
 ```
 
-If exactly one source returns that is the exact name of a ZFS dataset in an imported zpool we create a snapshot on it. If no source returns we silently exit. If more than one source returns we raise an error and halt the `pacman` transaction.
+If no dataset (or no _local_ dataset) has the property set correctly, no snapshots are done. The script will print an info-level message about that on `pacman` transactions.
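The `findmnt`/`jq` selection shown in the README can be approximated without `jq` by filtering the tabular (non-JSON) `findmnt` output instead; the sample data below is hypothetical and merely stands in for live output:

```shell
# Hypothetical sample of 'findmnt --list --noheadings --output fstype,source,target'
# output (tabular rather than JSON, so no jq needed).
findmnt_sample=$'zfs zpool/root/archlinux /\next4 /dev/sda1 /boot\nzfs zpool/data/home /home'

# Keep the 'source' column of every line whose fstype is 'zfs', mirroring
# the jq filter select(.fstype=="zfs") | .source from the README.
printf '%s\n' "${findmnt_sample}" | awk '$1 == "zfs" {print $2}'
```

This prints `zpool/root/archlinux` and `zpool/data/home`, the same sources the jq pipeline would extract from the JSON variant.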
 
 We retain two different snapshot chains, one for `pacman` transactions that only affect what we are calling _trivial_ packages and a separate chain for _important_ packages. By default only the exact regular expression package name match `^(linux(-zen)?(-headers)?|systemd|zfs-(linux(-zen)?|dkms|utils))$` is considered important, so in plain English any one of:
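The default important-packages pattern can be exercised directly with `grep -E`; the helper name below is made up for illustration:

```shell
# The default important-packages pattern quoted above.
important_regex='^(linux(-zen)?(-headers)?|systemd|zfs-(linux(-zen)?|dkms|utils))$'

# Hypothetical helper: succeeds when the full package name matches exactly.
is_important () {
    printf '%s\n' "${1}" | grep -qE "${important_regex}"
}

is_important 'linux-zen-headers' && echo 'important'
is_important 'tmux'              || echo 'trivial'
```

Because the pattern is anchored with `^` and `$`, only exact package names match; `linux-firmware`, for instance, would be trivial.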
@@ -63,22 +66,24 @@ The _trivial_ snapshot chain by default keeps 25 snapshots, the _important_ chai
 
 Snapshots may look like so:
 
 ```
 $ zfs list -o name -t all
-NAME                                                    ┌─── Important because systemd
-zpool    snap_date_format='%F-%H%M'                     |   is on our list of
-zpool/root              ▼                               |   important packages
-zpool/root/archlinux    ┌─────────────┐                 ▼▼▼
-zpool/root/archlinux@pacman_2023-03-07-0113_op:upgr_sev:imp_pkgs:systemd:bind:enchant:grep
-zpool/root/archlinux@pacman_2023-03-07-0113_op:upgr_sev:trv_pkgs:jdk17-temurin
-zpool/root/archlinux@pacman_2023-03-07-0114_op:inst_sev:trv_pkgs:docker-credential-secretser...
-zpool/root/archlinux@pacman_2023-03-07-0115_op:upgr_sev:trv_pkgs:proton-ge-custom-bin
+NAME    snap_date_format='%F-%H%M'                        ┌─── Important because systemd
+zpool                         |                           |   is on our list of
+zpool/root                    ▼         ┌ Counter         |   important packages
+zpool/root/archlinux    ┌─────────────┐ ▼                 ▼▼▼
+zpool/root/archlinux@pacman_2023-03-07-0113_1_op:upgr_sev:imp_pkgs:systemd:bind:enchant:grep
+zpool/root/archlinux@pacman_2023-03-07-0113_1_op:upgr_sev:trv_pkgs:jdk17-temurin
+zpool/root/archlinux@pacman_2023-03-07-0114_1_op:inst_sev:trv_pkgs:docker-credential-secretser...
+zpool/root/archlinux@pacman_2023-03-07-0115_1_op:upgr_sev:trv_pkgs:proton-ge-custom-bin
                                                  ▲▲▲▲     ▲▲▲      └────────────────────────────┘
                                                  |         |       Max. 30 characters per our
 Pacman operation that triggered this snapshot ───┘         |       pacman-zfs-snapshot.conf
                                                            |       setting 'pkgs_list_max_length'
 Severity based on affected packages, here trivial ─────────┘
 ```
 
-Notice how snapshot line 3 ends in `docker-credential-secretser...`. This snapshot was triggered on installation of the Arch User Repository package [docker-credential-secretservice-bin](https://aur.archlinux.org/packages/docker-credential-secretservice-bin) whose package name is 35 characters long. In this example our `pkgs_list_max_length` setting limits the maximum length of the packages string to `30` characters. If we can't naturally fit package names into this limit by removing packages from the list we instead cut off part of the package name and add an ellipsis (three dots `...`). The default setting is `pkgs_list_max_length='30'`. In case the user wants three characters or fewer, thus making an ellipsis impractical, we simply trim the package name to that many characters:
+Notice how in this case the _counter_ is `1` for all four snapshots. The counter is used as the distinguishing factor for snapshots that are otherwise identical. This avoids naming collisions by incrementing it as needed. In day-to-day operations you will typically see it at `1`, as there rarely is a need to avoid collisions unless you purposely limit the timestamp length and/or package list length to the point that successive snapshots may appear identical. See [Avoiding naming collisions](#avoiding-naming-collisions) for more details.
 
+Notice also how snapshot line 3 ends in `docker-credential-secretser...`. This snapshot was triggered on installation of the Arch User Repository package [docker-credential-secretservice-bin](https://aur.archlinux.org/packages/docker-credential-secretservice-bin) whose package name is 35 characters long. In this example our `pkgs_list_max_length` setting limits the maximum length of the packages string to `30` characters. If we can't naturally fit package names into this limit by removing packages from the list we instead cut off part of the package name and add an ellipsis (three dots `...`). The default setting is `pkgs_list_max_length='30'`. In case the user wants three characters or fewer, thus making an ellipsis impractical, we simply trim the package name to that many characters:
 ```
 pkgs_list_max_length='7': dock...
 pkgs_list_max_length='6': doc...
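The trimming rule described above can be sketched as a small helper; the function name is hypothetical and is not the script's actual implementation:

```shell
# Hypothetical re-implementation of the trimming rule: cut the name and
# append '...' unless the limit is 3 or fewer characters, in which case
# the name is simply truncated to that many characters.
trim_pkg_name () {
    local name="${1}" limit="${2}"
    if [ "${#name}" -le "${limit}" ]; then
        printf '%s\n' "${name}"          # Fits as-is, nothing to trim.
    elif [ "${limit}" -le 3 ]; then
        printf '%s\n' "${name:0:limit}"  # Too short for an ellipsis.
    else
        printf '%s...\n' "${name:0:limit-3}"
    fi
}

trim_pkg_name 'docker-credential-secretservice-bin' 30  # docker-credential-secretser...
trim_pkg_name 'docker-credential-secretservice-bin' 7   # dock...
trim_pkg_name 'docker-credential-secretservice-bin' 3   # doc
```

Note that the ellipsis counts against the limit, so a limit of 30 leaves 27 characters of the actual name.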
@@ -95,10 +100,10 @@ NAME
 zpool
 zpool/root
 zpool/root/archlinux
-zpool/root/archlinux@pacman_2023-03-07-0113_op:upgr_sev:imp
-zpool/root/archlinux@pacman_2023-03-07-0113_op:upgr_sev:trv
-zpool/root/archlinux@pacman_2023-03-07-0114_op:inst_sev:trv
-zpool/root/archlinux@pacman_2023-03-07-0115_op:upgr_sev:trv
+zpool/root/archlinux@pacman_2023-03-07-0113_1_op:upgr_sev:imp
+zpool/root/archlinux@pacman_2023-03-07-0113_1_op:upgr_sev:trv
+zpool/root/archlinux@pacman_2023-03-07-0114_1_op:inst_sev:trv
+zpool/root/archlinux@pacman_2023-03-07-0115_1_op:upgr_sev:trv
 ```
 
 Whatever you set as your `pkgs_list_max_length` is still just a best effort, as it is subject to ZFS' internal maximum for dataset name length. This limit is currently 255 characters. For a snapshot, the dataset name in front of the `@` character plus everything else starting with the `@` character till the end counts against the limit. If you'd like e.g. 200 characters allocated to the package list, chances are that you'll see fewer characters than that depending on how long your dataset names are on their own.
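The budget arithmetic works out as follows; the 255-character limit is ZFS', while the dataset and field names below are hypothetical examples:

```shell
# Worked example of the length budget, assuming the 255-character ZFS limit
# and a hypothetical snapshot name prefix. Everything up to and including
# 'pkgs:' is fixed overhead; the package list gets whatever remains, capped
# by pkgs_list_max_length.
max_zfs_snapshot_name_length='255'
pkgs_list_max_length='200'
name_no_pkgs='zpool/root/archlinux@pacman_2023-03-07-0113_1_op:upgr_sev:imp_pkgs:'
available="$(( max_zfs_snapshot_name_length - ${#name_no_pkgs} ))"
if [ "${available}" -lt "${pkgs_list_max_length}" ]; then
    echo "package list capped at ${available} characters"
else
    echo "package list may use the full ${pkgs_list_max_length} characters"
fi
```

Here `${#name_no_pkgs}` is 67, so despite the 200-character setting the package list is capped at 188 characters; longer dataset names shrink that further.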
@@ -121,61 +126,19 @@ With these settings it is possible to cause ZFS snapshot name collisions (meanin
 - They cover the same type of operation (_Install_, _Remove_ or _Upgrade_)
 - They cover the same list of packages
 
+The script safeguards against naming collisions by adding a monotonically incrementing counter after the timestamp string.
 
 For example by running `pacman -S tmux` three times within the same minute (once for an _Install_ operation and two more times for two identical _Upgrade_ operations) your system may generate the following example snapshots:
 
 ```
-zpool/root/archlinux@pacman_2023-03-07-0116_op:inst_sev:trv_pkgs:tmux
-zpool/root/archlinux@pacman_2023-03-07-0116_op:upgr_sev:trv_pkgs:tmux
+zpool/root/archlinux@pacman_2023-03-07-0116_1_op:inst_sev:trv_pkgs:tmux
+zpool/root/archlinux@pacman_2023-03-07-0116_1_op:upgr_sev:trv_pkgs:tmux
+zpool/root/archlinux@pacman_2023-03-07-0116_2_op:upgr_sev:trv_pkgs:tmux
+~~~
 ```
 
-Notice that there is no third snapshot for the second identical _Upgrade_ operation as this script skipped snapshot creation.
+Notice that lines 2 and 3 would collide since their dataset names are virtually identical, other than the counter suffix, which was incremented by 1 to avoid a collision.
 
-The rationale is that you're doing the exact same operation twice or more. There's most likely no reasonable expectation that your operating system enters a different state on successive `pacman` operations so there's no need to deal with multiple snapshots capturing the same state.
+This facilitates a hands-off approach to using this script on a daily driver system without risking missing snapshots or employing other more involved approaches to avoid naming collisions.
 
-Your `pacman` command line output will show this like so:
-
-```
-:: Running pre-transaction hooks...
-(1/1) Create ZFS snapshot(s)
-[WARN] ZFS snapshot skipped (same operation exists at 2023-03-07-0116):
-[WARN]         zpool/root/archlinux@pacman_2023-03-07-0116_op:upgr_sev:trv_pkgs:tmux
-[WARN] No ZFS snapshot left to do after accounting for identical operations at 2023-03-07-0116.
-```
-
-Note that this script will not blindly skip doing **_all_** snapshots in this situation. It will still happily create snapshots that don't cause naming collisions, for example when affected snapshots were already deleted or when you're adding an additional dataset to the list of datasets you want to snapshot. In `pacman` command line output you'll then see warnings as needed and regular info-level messages for newly created snapshots where possible:
-
-```
-:: Running pre-transaction hooks...
-(1/1) Create ZFS snapshot(s)
-[WARN] ZFS snapshot skipped (same operation exists at 2023-03-07-0116):
-[WARN]         zpool/root/archlinux@pacman_2023-03-07-0116_op:upgr_sev:trv_pkgs:tmux
-[WARN]         zpool/root/archlinux/pacman-cache@pacman_2023-03-07-0116_op:upgr_sev:trv_pkgs:tmux
-[INFO] ZFS snapshot atomically done:
-[INFO]         zpool/data/var/lib/docker@pacman_2023-03-07-0116_op:upgr_sev:trv_pkgs:tmux
-```
-
-This behavior is not configurable. During testing and development we considered adding a monotonically increasing counter to timestamps such as:
-
-```
-...2023-03-07-0116-1...
-...2023-03-07-0116-2...
-...2023-03-07-0116-3...
-```
-
-While this would effectively avoid naming collisions we decided against it. Weighing pros and cons the _skip_ approach seems ever so slightly simpler than the _counter_ approach.
-
-## A word of warning
-
-Note that skipping snapshot creation to avoid naming collisions can become overly dangerous if you strip away too many unique features from snapshot names. This may happen mostly in two ways:
-1. Remove the package name list by setting `pkgs_list_max_length='0'`.
-1. Remove distinguishing characters from timestamps via `snap_date_format='%F-%H%M'`.
-
-Without a package list two consecutive snapshots may look like so:
-
-```
-zpool/root/archlinux@pacman_2023-03-07-0116_op:inst_sev:trv
-zpool/root/archlinux@pacman_2023-03-07-0116_op:upgr_sev:trv
-```
-
-If you then install any unrelated package within the same minute the `pacman` operation will be treated as identical to line 1 and this script will skip snapshot creation. Similarly if you lower timestamp fidelity to e.g. `%Y%m%d` (`20230307` instead of `2023-03-07-0116`) above example snapshots will look like so:
-
-```
-zpool/root/archlinux@pacman_20230307_op:inst_sev:trv
-zpool/root/archlinux@pacman_20230307_op:upgr_sev:trv
-```
-
-All future _Install_ or _Upgrade_ operations within the same day will then also not be covered by snapshots. While this script will print warnings when it skips snapshot creation we suggest you change `pkgs_list_max_length` and `snap_date_format` options carefully. Defaults have proven to work well on example daily driver systems.
-
 # Rollback
 
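The counter behavior introduced by this diff can be modeled in isolation; `next_snap_name` and the canned `existing_snaps` list below are hypothetical stand-ins, not the script's actual function or live `zfs list` output:

```shell
# Toy model of the collision-avoiding counter, assuming a hypothetical
# list of snapshot names that already exist (a stand-in for
# 'zfs list -t all -o name -H').
existing_snaps='zpool/root/archlinux@pacman_2023-03-07-0116_1_op:upgr_sev:trv_pkgs:tmux'

# Hypothetical helper: bump the counter until the candidate name is unused.
next_snap_name () {
    local base="${1}" suffix="${2}" counter='0' candidate
    while :; do
        counter="$(( counter + 1 ))"
        candidate="${base}_${counter}_${suffix}"
        if ! grep -qxF -- "${candidate}" <<<"${existing_snaps}"; then
            printf '%s\n' "${candidate}"
            return
        fi
    done
}

# Counter 1 is taken by the earlier identical upgrade, so this one gets 2:
next_snap_name 'zpool/root/archlinux@pacman_2023-03-07-0116' 'op:upgr_sev:trv_pkgs:tmux'
```

Since the counter only increments on an actual clash, the common case stays at `1` and names remain short.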
@@ -31,7 +31,8 @@ snap_op_remove_suffix="${snap_op_remove_suffix:-rmvl}"
 snap_op_upgrade_suffix="${snap_op_upgrade_suffix:-upgr}"
 
 # Internal
-declare pkg_separator max_zfs_snapshot_name_length color_reset color_lyellow color_red
+declare zfs_prop pkg_separator max_zfs_snapshot_name_length color_reset color_lyellow color_red
+zfs_prop='space.quico:auto-snapshot'
 pkg_separator=':'
 max_zfs_snapshot_name_length='255'
 color_reset='\e[0m'
@@ -63,7 +64,7 @@ function pprint () {
             printf -- "${color_lyellow}"'[WARN]'"${color_reset}"' %s\n' "${msg}"
             ;;
         err)
-            printf -- "${color_red}"'[ERR]'"${color_reset}"' %s\n' "${msg}"
+            printf -- "${color_red}"'[ERR] '"${color_reset}"' %s\n' "${msg}"
             ;;
         info)
             printf -- '[INFO] %s\n' "${msg}"
@@ -95,12 +96,12 @@ function set_severity () {
 
 function get_globally_snappable_datasets () {
     local datasets_list
-    # For all datasets show their 'space.quico:auto-snapshot' property; only
-    # print dataset name in column 1 and property value in column 2. In awk
-    # limit this list to datasets where tab-delimited column 2 has exact
-    # string '^true$' then further limit output by eliminating snapshots
-    # from list, i.e. dataset names that contain an '@' character.
-    datasets_list="$(zfs get -H -o 'name,value' 'space.quico:auto-snapshot' | \
+    # For all datasets show their "${zfs_prop}" property; only print dataset
+    # name in column 1 and property value in column 2. In awk limit this
+    # list to datasets where tab-delimited column 2 has exact string
+    # '^true$' then further limit output by eliminating snapshots from list,
+    # i.e. dataset names that contain an '@' character.
+    datasets_list="$(zfs get -H -o 'name,value' "${zfs_prop}" | \
                      awk -F'\t' '{if($2 ~ /^true$/ && $1 !~ /@/) print $1}')"
     while IFS= read -u10 -r dataset; do
         globally_snappable_datasets+=("${dataset}")
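The awk filter in `get_globally_snappable_datasets` can be exercised against canned input; the pool layout below is a hypothetical sample of tab-separated `zfs get -H -o name,value` output:

```shell
# Simulated 'zfs get -H -o name,value space.quico:auto-snapshot' output,
# tab-separated as with -H: name in column 1, property value in column 2.
zfs_get_sample=$'zpool\tfalse\nzpool/root/archlinux\ttrue\nzpool/root/archlinux@pacman_x\ttrue\nzpool/data/home\ttrue'

# Same filter the function uses: keep names whose value is exactly 'true'
# and which are not snapshots (no '@' in the name).
printf '%s\n' "${zfs_get_sample}" | \
    awk -F'\t' '{if($2 ~ /^true$/ && $1 !~ /@/) print $1}'
```

This prints `zpool/root/archlinux` and `zpool/data/home`; the snapshot entry is excluded even though its inherited property value is `true`.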
@@ -147,39 +148,6 @@ function write_pkg_list_oneline () {
     fi
 }
 
-function find_max_dataset_name_length () {
-    local longest_op_suffix op_suffix_string
-    longest_op_suffix='0'
-    for op_suffix in "${snap_op_installation_suffix}" "${snap_op_remove_suffix}" "${snap_op_upgrade_suffix}"; do
-        if [[ "${#op_suffix}" -gt "${longest_op_suffix}" ]]; then
-            longest_op_suffix="${#op_suffix}"
-        fi
-    done
-    op_suffix_string="$(head -c "${longest_op_suffix}" '/dev/zero' | tr '\0' '_')"
-
-    local longest_sev_suffix sev_suffix_string
-    longest_sev_suffix='0'
-    for sev_suffix in "${snaps_trivial_suffix}" "${snaps_important_suffix}"; do
-        if [[ "${#sev_suffix}" -gt "${longest_sev_suffix}" ]]; then
-            longest_sev_suffix="${#sev_suffix}"
-        fi
-    done
-    sev_suffix_string="$(head -c "${longest_sev_suffix}" '/dev/zero' | tr '\0' '_')"
-
-    local dataset_name_no_pkgs
-    max_dataset_name_length='0'
-    for dataset in "${snappable_datasets[@]}"; do
-        dataset_name_no_pkgs="${dataset}"'@'"${snap_name_prefix}${snap_field_separator}${date_string}${snap_field_separator}"'op:'"${op_suffix_string}${snap_field_separator}"'sev:'"${sev_suffix_string}${snap_field_separator}"'pkgs:'
-        if [[ "${#dataset_name_no_pkgs}" -gt "${max_dataset_name_length}" ]]; then
-            max_dataset_name_length="${#dataset_name_no_pkgs}"
-        fi
-    done
-
-    if [[ "${max_dataset_name_length}" -gt "${max_zfs_snapshot_name_length}" ]]; then
-        pprint 'warn' 'Snapshot name would exceed ZFS '"${max_zfs_snapshot_name_length}"' chars limit. Skipping snapshots ...' '0'
-    fi
-}
-
 function trim_single_remaining_package_name () {
     local pkg_name
     pkg_name="${shorter_pkg_list}"
@@ -200,7 +168,7 @@ function trim_single_remaining_package_name () {
 
 function trim_pkg_list_oneline () {
     local available_pkg_list_length
-    available_pkg_list_length="$((${max_zfs_snapshot_name_length} - ${max_dataset_name_length}))"
+    available_pkg_list_length="${1}"
     if [[ "${available_pkg_list_length}" -lt "${pkgs_list_max_length}" ]]; then
         # If we have fewer characters available before hitting the
         # ZFS internal maximum snapshot name length than the user
@@ -235,76 +203,111 @@ function trim_pkg_list_oneline () {
     trimmed_pkg_list_oneline="${shorter_pkg_list}"
 }
 
-function omit_duplicate_snaps () {
-    local existing_snaps
-    local -a unneeded_snaps
-    existing_snaps="$(zfs list -t all -oname -H)"
-
-    for planned_snap in "${planned_snaps[@]}"; do
-        if grep -Piq -- '^'"${planned_snap}"'$' <<<"${existing_snaps}"; then
-            unneeded_snaps+=("${planned_snap}")
-        else
-            needed_snaps+=("${planned_snap}")
-        fi
-    done
-
-    if [[ "${#unneeded_snaps[@]}" -gt '0' ]]; then
-        if [[ "${do_dry_run}" == 'true' ]]; then
-            pprint 'warn' 'Dry-run, ZFS snapshot skipped (same operation exists at '"${date_string}"'):'
-        else
-            pprint 'warn' 'ZFS snapshot skipped (same operation exists at '"${date_string}"'):'
-        fi
-        for unneeded_snap in "${unneeded_snaps[@]}"; do
-            pprint 'warn' ' '"${unneeded_snap}"
-        done
-    fi
-}
+function test_snap_names_for_validity () {
+    local snap_counter max_dataset_name_length trimmed_pkg_list_oneline dataset_name_no_pkgs dataset_name_with_pkgs
+    snap_counter="${1}"
+    max_dataset_name_length='0'
+    for dataset in "${snappable_datasets[@]}"; do
+        # Begin building snapshot name
+        dataset_name_no_pkgs="${dataset}"'@'"${snap_name_prefix}${snap_field_separator}${date_string}"
+
+        # Append counter
+        dataset_name_no_pkgs="${dataset_name_no_pkgs}${snap_field_separator}${snap_counter}"
+
+        # Append operation, severity and packages fields
+        dataset_name_no_pkgs="${dataset_name_no_pkgs}${snap_field_separator}"'op:'"${conf_op_suffix}${snap_field_separator}"'sev:'"${severity}"
+
+        # Update the longest snapshot name seen so far. We add an automatic
+        # +6 to string length (or more exactly ${#snap_field_separator}+5)
+        # to account for the fact that by default the dataset will end in
+        # the separator string "${snap_field_separator}" plus 'pkgs:' for a
+        # total of 6 additional characters. If these additional characters
+        # cause us to reach or go over the ZFS dataset name length limit
+        # there's no point in attempting to add package names to snapshots.
+        # We calculate as if these additional characters existed and we add
+        # dataset names to our planned_snaps array as if they don't.
+        if [[ "$(( ${#dataset_name_no_pkgs}+${#snap_field_separator}+5 ))" -gt "${max_dataset_name_length}" ]]; then
+            max_dataset_name_length="$(( ${#dataset_name_no_pkgs}+${#snap_field_separator}+5 ))"
+        fi
+
+        planned_snaps+=("${dataset_name_no_pkgs}")
+    done
+
+    # Abort if this is longer than what ZFS allows
+    if [[ "${max_dataset_name_length}" -gt "${max_zfs_snapshot_name_length}" ]]; then
+        pprint 'err' 'Snapshot name would exceed ZFS '"${max_zfs_snapshot_name_length}"' chars limit. Aborting ...' '1'
+    fi
+
+    if [[ "${max_dataset_name_length}" -eq "${max_zfs_snapshot_name_length}" ]]; then
+        for planned_snap in "${planned_snaps[@]}"; do
+            if grep -Piq -- '^'"${planned_snap}"'$' <<<"${existing_snaps}"; then
+                # This snapshot name already exists. Unset array and break.
+                # Try again with next higher counter suffix.
+                unset planned_snaps[@]
+                break
+            fi
+        done
+        # If planned_snaps array still has members we take the snapshot
+        # names already generated. If not we return without array in which
+        # case this function will run again with the snapshot counter
+        # incremented by one. Maximum length seen across all snapshot names
+        # is exactly the ZFS snapshot character limit. We won't be able to
+        # add packages to snapshot names but they will all fit perfectly.
+        # This is good enough.
+        return
+    else
+        # We have enough room to add package names.
+        local available_pkg_list_length
+        available_pkg_list_length="${pkgs_list_max_length}"
+        if [[ "${max_dataset_name_length}" -gt $(( max_zfs_snapshot_name_length - pkgs_list_max_length )) ]]; then
+            available_pkg_list_length="$(( max_zfs_snapshot_name_length - max_dataset_name_length ))"
+        fi
+        trim_pkg_list_oneline "${available_pkg_list_length}"
+        for planned_snap_id in "${!planned_snaps[@]}"; do
+            planned_snaps["${planned_snap_id}"]="${planned_snaps[${planned_snap_id}]}${snap_field_separator}"'pkgs:'"${trimmed_pkg_list_oneline}"
+            if grep -Piq -- '^'"${planned_snaps[${planned_snap_id}]}"'$' <<<"${existing_snaps}"; then
+                # This snapshot name already exists. Unset array and break.
+                # Try again with next higher counter suffix.
+                unset planned_snaps[@]
+                break
+            fi
+        done
+    fi
+}
 
-function do_snaps () {
-    local snap_name snap_return_code
-    local -a planned_snaps
-    for snappable_dataset_id in "${!snappable_datasets[@]}"; do
-        snap_name="${snappable_datasets[${snappable_dataset_id}]}"'@'"${snap_name_prefix}${snap_field_separator}${date_string}${snap_field_separator}"'op:'"${conf_op_suffix}${snap_field_separator}"'sev:'"${severity}"
-        # If we have at least one pkg name character to append we do
-        # so now but if we're not even allowed to append a single
-        # character we might as well skip the 'pkgs' field
-        # altogether.
-        if [[ "${pkgs_list_max_length}" -ge '1' ]]; then
-            snap_name="${snap_name}${snap_field_separator}"'pkgs:'"${trimmed_pkg_list_oneline}"
-        fi
-        planned_snaps["${snappable_dataset_id}"]="${snap_name}"
-    done
-    local -a needed_snaps
-    omit_duplicate_snaps
-    if [[ "${#needed_snaps[@]}" -gt '0' ]]; then
-        if [[ "${do_dry_run}" == 'true' ]]; then
-            pprint 'info' 'Dry-run, pretending to atomically do ZFS snapshot:'
-            for needed_snap in "${needed_snaps[@]}"; do
-                pprint 'info' ' '"${needed_snap}"
-            done
-        else
-            zfs snapshot "${needed_snaps[@]}"
-            snap_return_code="${?}"
-            if [[ "${snap_return_code}" -eq '0' ]]; then
-                successfully_snapped_datasets=("${snappable_datasets[@]}")
-                pprint 'info' 'ZFS snapshot atomically done:'
-                for needed_snap in "${needed_snaps[@]}"; do
-                    pprint 'info' ' '"${needed_snap}"
-                done
-            else
-                pprint 'warn' 'ZFS snapshot failed:'
-                for needed_snap in "${needed_snaps[@]}"; do
-                    pprint 'warn' ' '"${needed_snap}"
-                done
-            fi
-        fi
-    else
-        if [[ "${do_dry_run}" == 'true' ]]; then
-            pprint 'warn' 'Dry-run, no ZFS snapshot left to do after accounting for identical operations at '"${date_string}"'.'
-        else
-            pprint 'warn' 'No ZFS snapshot left to do after accounting for identical operations at '"${date_string}"'.'
-        fi
-    fi
-}
+function generate_snap_names () {
+    local snap_counter existing_snaps
+    snap_counter='0'
+    existing_snaps="$(zfs list -t all -oname -H)"
+    until [[ "${#planned_snaps[@]}" -gt '0' ]]; do
+        snap_counter="$(( snap_counter+1 ))"
+        test_snap_names_for_validity "${snap_counter}"
+    done
+}
+
+function do_snaps () {
+    local snap_return_code
+    if [[ "${do_dry_run}" == 'true' ]]; then
+        pprint 'info' 'Dry-run, pretending to atomically do ZFS snapshot:'
+        for planned_snap in "${planned_snaps[@]}"; do
+            pprint 'info' ' '"${planned_snap}"
+        done
+    else
+        zfs snapshot "${planned_snaps[@]}"
+        snap_return_code="${?}"
+        if [[ "${snap_return_code}" -eq '0' ]]; then
+            successfully_snapped_datasets=("${snappable_datasets[@]}")
+            pprint 'info' 'ZFS snapshot atomically done:'
+            for planned_snap in "${planned_snaps[@]}"; do
+                pprint 'info' ' '"${planned_snap}"
+            done
+        else
+            pprint 'warn' 'ZFS snapshot failed:'
+            for planned_snap in "${planned_snaps[@]}"; do
+                pprint 'warn' ' '"${planned_snap}"
+            done
+        fi
+    fi
+}
 
 function get_snaps_in_cur_sev () {
@@ -376,19 +379,27 @@ function main () {
         local local_snappable_datasets
         get_local_snappable_datasets
         trim_globally_snappable_datasets
+        if [[ "${#snappable_datasets[@]}" -eq '0' ]]; then
+            pprint 'info' 'ZFS snapshot skipped, no local (= currently mounted) dataset has'
+            pprint 'info' 'property '"'"''"${zfs_prop}"''"'"' set to '"'"'true'"'"'. At the same'
+            pprint 'info' 'time option '"'"'snap_only_local_datasets'"'"' equals '"'"'true'"'"' so'
+            pprint 'info' 'we must only snapshot local datasets. Nothing to do here while'
+            pprint 'info' 'none of them have '"'"''"${zfs_prop}"''"'"' set to '"'"'true'"'"'.' '0'
+        fi
     else
         snappable_datasets=("${globally_snappable_datasets}")
+        if [[ "${#snappable_datasets[@]}" -eq '0' ]]; then
+            pprint 'info' 'ZFS snapshot skipped, no dataset has property '"'"''"${zfs_prop}"''"'"' set to '"'"'true'"'"'.' '0'
+        fi
     fi
 
     local unabridged_pkg_list_oneline
     write_pkg_list_oneline
 
-    local date_string max_dataset_name_length
+    local date_string
+    local -a planned_snaps
     date_string="$($([[ "${snap_timezone}" ]] && printf -- 'export TZ='"${snap_timezone}"); date +"${snap_date_format}")"
-    find_max_dataset_name_length
-
-    local trimmed_pkg_list_oneline
-    trim_pkg_list_oneline
+    generate_snap_names
 
     local -a successfully_snapped_datasets
     do_snaps