* Create one `userpass` username named after your alias, define your own password
* Add your own entity to group `administrators`
Log out. Never again use the `root` token unless there's a good reason.
Get the Vault command-line client via [vaultproject.io/downloads](https://www.vaultproject.io/downloads). It'll install the Vault service itself along with the command-line client. Just ignore the service or keep it disabled via `systemctl disable --now vault.service`. You only need the `vault` binary.
We're going to allow all human users to change their own `userpass` password. The policy to do so is at [policies/human/change-own-password.hcl](policies/human/change-own-password.hcl). For a hands-on example of an actual password change via HTTP API see [Hands-on](#hands-on) but first:
* Before you can load the policy into Vault you need to replace the string `ACCESSOR` in it with _your_ particular `userpass` accessor. Get it like so:
```
# List auth methods
vault auth list
# Expected result similar to:
Path         Type        Accessor                  Description
----         ----        --------                  -----------
token/       token       auth_token_d3aad127       token based credentials
userpass/    userpass    auth_userpass_6671d643    n/a
```
Over in [policies/human/change-own-password.hcl](policies/human/change-own-password.hcl) replace `ACCESSOR` with what you're seeing here in the Accessor column. Feel free to read up on [templated policies](https://www.vaultproject.io/docs/concepts/policies#templated-policies) for more info.
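For orientation, the relevant stanza in [policies/human/change-own-password.hcl](policies/human/change-own-password.hcl) might look roughly like this once templated - a sketch using the example accessor from the listing above, not necessarily the file's verbatim content:
```
# Allow each entity to update only its own userpass password
path "auth/userpass/users/{{identity.entity.aliases.auth_userpass_6671d643.name}}/password" {
  capabilities = ["update"]
}
```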
Optionally, [policies/cfgmgmt/cfgmgmt.hcl](policies/cfgmgmt/cfgmgmt.hcl) gets you started with read-only secrets access, for example for a configuration management tool like Ansible.
You'll want to create an Ansible entity and a `userpass` alias. Think of the alias as glue that ties an auth method to an entity. This in turn allows you to specify policies that apply to the entity, get inherited by its aliases and lastly by the auth methods behind them.
In this simple use case create a user in the `userpass` auth method, using the same name as the entity and its alias. Use that user to authenticate against Vault and retrieve a token. You'll likely want a distinct group of which your Ansible entity becomes a member and which uses a policy such as the example at [policies/cfgmgmt/cfgmgmt.hcl](policies/cfgmgmt/cfgmgmt.hcl).
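A minimal sketch of these steps via the `vault` CLI, assuming `ansible` as the shared name and the example `userpass` accessor from above (both will differ in your setup):
```
# Create the user in the userpass auth method
vault write auth/userpass/users/ansible password="example-password"
# Create the entity
vault write identity/entity name="ansible"
# Glue the user and the entity together with an alias of the same name
vault write identity/entity-alias name="ansible" \
    canonical_id="$(vault read -field=id identity/entity/name/ansible)" \
    mount_accessor="auth_userpass_6671d643"
```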
Optionally from [policies/kv-writer/kv-writer.hcl](policies/kv-writer/kv-writer.hcl) load a policy that allows affected entities to create `kv` secrets, create new versions for existing secrets and to traverse the UI directory structure of secrets. Entities with this policy will not be able to read secrets nor see if versions exist at a given location.
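As a rough idea of what such permissions look like for a `kv` version 2 engine mounted at `kv` - a sketch, not necessarily the file's verbatim content:
```
# Create secrets and new versions of existing secrets, no read access
path "kv/data/*" {
  capabilities = ["create", "update"]
}
# Traverse the directory structure of secrets, for example in the UI
path "kv/metadata/*" {
  capabilities = ["list"]
}
```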
As a similar narrowly scoped use case consider a Zabbix monitoring instance that may need access to credentials, session IDs, tokens or other forms of authentication to monitor machines and services.
In Vault with a user that has sufficient permissions:
* Create an entity `zabbix` without a policy.
* Add an alias of type `userpass` to the entity.
* Within the `userpass` auth method create a user (an account if you will) with the same name as the alias you just created, in this case `zabbix`, and set a password for the account
Now tie it all together by creating a group named `rbacgroup_zabbix`. Add the `zabbix` entity to it and have it use the policy `zabbix`. At this point the policy does not yet exist, which is fine: you can set a policy name and Vault will simply link your group to it. You'll get to creating the policy in a minute.
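Via the `vault` CLI this could look like the following sketch, assuming the `zabbix` entity from above exists:
```
vault write identity/group name="rbacgroup_zabbix" \
    policies="zabbix" \
    member_entity_ids="$(vault read -field=id identity/entity/name/zabbix)"
```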
Next up check out [policies/zabbix/zabbix.hcl](policies/zabbix/zabbix.hcl). Do some light replacement before importing it into Vault. The policy file contains a few occurrences of the string `GROUPID`, replace them with the group ID of `rbacgroup_zabbix`.
* Via Vault's UI you can get the group ID at `Access > Groups > rbacgroup_zabbix`.
* Via the `vault` command-line client you can get it like so; the `id` value is what you're after:
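```
# One way to read the group ID (sketch)
vault read -field=id identity/group/name/rbacgroup_zabbix
```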
With your ID in hand and [policies/zabbix/zabbix.hcl](policies/zabbix/zabbix.hcl) updated, import it as a new policy. You're going to want to save it under the same policy name you assigned earlier to `rbacgroup_zabbix`, which was `zabbix`. This policy will grant read-only access to secrets underneath a folder `for_rbacgroup_zabbix` which in our example lives inside a `kv` version 2 secrets engine mounted at its default location `kv`.
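One way to import it, assuming you're working from this repository's root directory:
```
vault policy write zabbix policies/zabbix/zabbix.hcl
```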
Now whenever your Zabbix instance needs access to something store secrets underneath `kv/for_rbacgroup_zabbix`. The policy will make sure only the group with correct ID will have access to secrets underneath that directory.
Log in to Vault with `userpass` and the `zabbix` account from above, get the account's token and lastly double-check that `zabbix` with its token can read a secret:
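```
# Log in; the response includes the account's token (sketch)
vault login -method=userpass username=zabbix
# With that token active, read a secret - the path below is a
# hypothetical example, store a secret there first
vault kv get kv/for_rbacgroup_zabbix/example-secret
```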
Side note, if your token regularly expires you may want to store the token itself in Vault and let Zabbix monitor token expiry via the Zabbix equivalent of:
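```
# Sketch: look up the token Vault-side; the response's data.ttl field
# holds the remaining lifetime in seconds, alert when it gets low
curl --header "X-Vault-Token: $VAULT_TOKEN" \
     https://fully.qualified.domain.name/v1/auth/token/lookup-self
```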
Users wishing to browse the `for_rbacgroup_zabbix` directory structure via Vault's UI will need to manually begin their browsing at `kv/list/for_rbacgroup_zabbix`. Users with higher privileges such as administrators will be able to list all directories underneath the root `kv` object in Vault's web UI. This will include not only `zabbix`-specific data but also directories intended for other users, which is why `kv/list` access is not granted to `rbacgroup_zabbix`.
Their `list` permission only begins one level deeper at `kv/list/for_rbacgroup_zabbix`. It may make sense to communicate an entrypoint link to end users that - in this case - will look like:
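```
https://fully.qualified.domain.name/ui/vault/secrets/kv/list/for_rbacgroup_zabbix
```
This assumes Vault's web UI is reachable at the `api_addr` used elsewhere in this document; adjust the host name to your instance.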
The next example will explain orphan tokens. If you've followed examples above your Vault instance will have an `administrators` group with an `administrator` policy assigned to it. Users in that group will already have `write` access to `auth/token/create-orphan` so you can just use one of your `administrators` entities to follow along.
By default a token is associated with the entity that created it; as a consequence a token cannot outlive the maximum time to live configured for its parent entity. To decouple a token from its parent entity and let it live longer than its parent, you can make it an _orphan_ token.
A token created with a `period` on the other hand - a time value that becomes its time to live after each renewal - is considered a _periodic_ token. Unless such a token is also orphaned its time to live still remains limited to that of its parent entity.
For services that cannot handle renewal natively you will want both at the same time: a periodic orphan token. The first few prep steps will sound familiar from the Zabbix paragraph above. For example's sake let's assume we want the `remco` file generator (see [github.com/HeavyHorst/remco](https://github.com/HeavyHorst/remco)) to have Vault access:
* Create an entity `remco` without a policy
* Add an alias of type `userpass` also named `remco` to the entity
* Within the `userpass` auth method create a user (an account if you will) with the same name as the alias you just created, in this case `remco`, and set a password for the account
Create a group named `rbacgroup_remco`. Add the `remco` entity to it and have it use the policy `remco`. At this point the policy does not yet exist, which is fine: you can set a policy name and Vault will simply link your group to it. You'll get to creating the policy in a minute.
Next up we'll be working with [policies/remco/remco.hcl](policies/remco/remco.hcl). Notice that contrary to the policy file we used for [Zabbix credentials storage](#zabbix-credentials-storage) this one is not completely templated. An orphan token has no parent `entity_id` so it does not inherit its parent's group membership. As a result dynamically granting access based on group ID doesn't work for an orphan token; we specify the hard-coded path as a concession to this. The templated part of the policy file still needs to be adjusted: replace `GROUPID` with the actual group ID of `rbacgroup_remco`; refer to paragraph [Group ID replacement](#group-id-replacement) for a how-to guide. Continue reading there until you reach the point about logging in to Vault as the newly created user, then return here.
For our `remco` example application we want a periodic orphan token.
Log in to Vault with `userpass` and an account that has `write` access to `auth/token/create-orphan`, for example an administrator as detailed in [Permission to create orphan tokens](#permission-to-create-orphan-tokens) above. Get the account's token. Send the following API command to create a periodic orphan token:
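```
# Sketch: create a periodic orphan token with the remco policy attached
curl --header "X-Vault-Token: $VAULT_TOKEN" \
     --request POST \
     --data '{"policies":["remco"],"period":"768h"}' \
     https://fully.qualified.domain.name/v1/auth/token/create-orphan
```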
Note that we do not specify `"renewable":true` as periodic tokens are implicitly renewable. We also don't specify `"ttl":"768h"` or similar values as the period in our periodic token will override a time-to-live value anyway rendering the time-to-live irrelevant. We can get our desired token by simply specifying for example `"period":"768h"`.
Revoke an orphan token like so via Vault CLI client. See [Authenticate against Vault](#authenticate-against-vault) at the top for how to authenticate your Vault CLI client and then:
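```
# Revoke by token value, or via its accessor if you don't have the token
vault token revoke <orphan_token>
vault token revoke -accessor <accessor>
```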
Find all orphan tokens by their accessor like so. This requires `list` access to `auth/token/accessors`. Members of the `administrators` group outlined above have this.
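```
# List all token accessors, then inspect each one; orphan tokens
# show "orphan true" in the lookup output (sketch)
vault list auth/token/accessors
vault token lookup -accessor <accessor>
```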
So far we've unsealed Vault after every daemon start with multiple Shamir's Secret Sharing unseal keys or Shamir unseal keys for short. We can have a Vault instance auto-unseal itself by accessing Amazon Web Services (AWS) Key Management Service (KMS) for example via public Internet. Vault supports other providers, we're going with AWS KMS as an example.
> [!WARNING]
> Per [aws.amazon.com/kms/pricing](https://aws.amazon.com/kms/pricing) having one KMS key costs USD 1 per month pro-rated per hour of key existence. The following example assumes you're creating a symmetric encryption and decryption key with KMS as key material origin in a single region. As such API actions `kms:Encrypt`, `kms:Decrypt` and `kms:DescribeKey` (among others that we don't care about) are included in AWS KMS' Free Tier for up to 20,000 requests monthly. As long as Vault auto-unseals at most 20,000 times a month these operations won't cost you anything. A flat fee of USD 1 for the existence of a KMS key, however, applies.
#### Create resources
In AWS one way to get the necessary resources is to:
* In Key Management Service (KMS)
  * Create one key
    * Symmetric
    * Encryption and decryption
    * Key material origin KMS
    * Single region
  * Write down its Amazon Resource Name (ARN)
* In Identity and Access Management (IAM)
  * Create an IAM policy in plain JSON, replace `<arn>` with ARN from above:

    ```
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "EnableKMSForVaultAutoUnseal",
                "Effect": "Allow",
                "Action": [
                    "kms:Encrypt",
                    "kms:Decrypt",
                    "kms:DescribeKey"
                ],
                "Resource": "<arn>"
            }
        ]
    }
    ```

  * Create a user group, attach above customer-managed policy to it
One way to configure Vault to do AWS KMS auto-unsealing is to specify the following environment variables. All but the fifth one (`VAULT_SEAL_TYPE=awskms`) depend on the KMS key you created, so fill in the blanks with your specific key material. We're assuming that you have a Vault instance running with Docker Compose or a similar mechanism; we'll leave the implementation details of getting these five environment variables into your Vault instance to you. Note that `VAULT_AWSKMS_SEAL_KEY_ID` is the key's Amazon Resource Name.
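One plausible set, with blanks to fill in - the three `AWS_*` variables are the standard AWS SDK credential and region variables:
```
VAULT_AWSKMS_SEAL_KEY_ID=<your key's ARN>
AWS_REGION=<your key's AWS region>
AWS_ACCESS_KEY_ID=<your IAM user's access key ID>
AWS_SECRET_ACCESS_KEY=<your IAM user's secret access key>
VAULT_SEAL_TYPE=awskms
```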
(Re-)Start Vault to make the variables take effect. Vault (for example in its Docker log) will then indicate that it's entering seal migration mode:
```
core: entering seal migration mode; Vault will not automatically unseal even if using an autoseal: from_barrier_type=shamir to_barrier_type=awskms
```
At first glance Vault in this mode will appear to behave like normal in that its web UI will be up and available. However, any attempt to unseal Vault with your previous unseal keys will do nothing.
Log in to Vault with the `vault` CLI client, see [Authenticate against Vault](#authenticate-against-vault) above. Issue the migration command as many times as your configured unseal threshold requires, entering one Shamir unseal key at a time.
```
# vault operator unseal -migrate
# Which will prompt for:
Unseal Key (will be hidden):
```
After each `vault operator unseal -migrate` command the `vault` binary will print the current migration status like so (here after entering one of two Shamir unseal keys):
```
Key                           Value
---                           -----
Seal Type                     awskms
Recovery Seal Type            shamir
Initialized                   true
--> Sealed                    true
Total Recovery Shares         3
Threshold                     2
--> Unseal Progress           1/2
Unseal Nonce                  dc..86
Seal Migration in Progress    true
Version                       1.18.3
Build Date                    2024-12-16T14:00:53Z
Storage Type                  file
HA Enabled                    false
```
After entering the last unseal key Vault will be unsealed and migrated. This will look like so:
```
# vault status
Key                      Value
---                      -----
Seal Type                awskms
Recovery Seal Type       shamir
Initialized              true
--> Sealed               false
Total Recovery Shares    3
Threshold                2
Version                  1.18.3
Build Date               2024-12-16T14:00:53Z
Storage Type             file
Cluster Name             vault-cluster
Cluster ID               9a..01
HA Enabled               false
```
In its log output Vault will indicate that seal migration is complete. That's all there is to it: after every subsequent daemon start Vault will access AWS KMS via public Internet and auto-unseal itself.
Once a Vault instance is configured to auto-unseal with AWS KMS or any other key management provider, Vault will not unseal via any other mechanism. If you lose Internet connectivity to AWS or the key gets deleted at AWS, your Vault instance will remain sealed. Your way out in this case is to migrate back to Shamir unseal keys.
The setup roughly goes like so:
* Configure Vault to **_not_** use AWS KMS
* (Re-)Start Vault daemon
* Via CLI migrate unseal mechanism to Shamir unseal keys
We'll assume that you're running Vault with Docker Compose and that you have environment variables set like so, obviously with real values instead of blanks:
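```
VAULT_AWSKMS_SEAL_KEY_ID=
AWS_REGION=
AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
VAULT_SEAL_TYPE=awskms
```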
Your container also has a bind-mounted HCL-formatted config file `/vault/config/vault.hcl`:
```
backend "file" {
path = "/vault/file"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_disable = 1
}
api_addr = "https://fully.qualified.domain.name"
disable_clustering = true
ui = true
```
Add this section to the file:
```
seal "awskms" {
disabled = true
}
```
So that the end result looks like so:
```
backend "file" {
path = "/vault/file"
}
listener "tcp" {
address = "0.0.0.0:8200"
tls_disable = 1
}
api_addr = "https://fully.qualified.domain.name"
disable_clustering = true
ui = true
seal "awskms" {
disabled = true
}
```
Also remove the environment variable `VAULT_SEAL_TYPE` from your container or at least change it to an empty string (`VAULT_SEAL_TYPE=`). Now (re-)start Vault and it'll report:
```
core: entering seal migration mode; Vault will not automatically unseal even if using an autoseal: from_barrier_type=awskms to_barrier_type=shamir
```
The rest is the reverse of [Migrate Vault to auto-unseal with AWS KMS](#migrate-vault-to-auto-unseal-with-aws-kms): log in to Vault via the CLI `vault` binary and do a few incantations of `vault operator unseal -migrate`. Use your Shamir unseal keys. After successful migration the `vault` binary will print:
```
Key             Value
---             -----
--> Seal Type   shamir
Initialized     true
Sealed          false
Total Shares    3
Threshold       2
Version         1.18.3
Build Date      2024-12-16T14:00:53Z
Storage Type    file
Cluster Name    vault-cluster
Cluster ID      9a..01
HA Enabled      false
```
You now have decoupled Vault from AWS KMS and are back to using Shamir unseal keys. After every Vault daemon restart you'll have to manually unseal your instance again.
If during any of the above steps you've used the Vault command-line client to authenticate against Vault with your `root` token make sure that client's `~/.vault-token` file is deleted. It contains the verbatim `root` token.
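To round things off, here's a sketch of the hands-on password change mentioned at the top - `<your-username>` and the host name are placeholders for your setup:
```
# Change your own userpass password via the HTTP API (sketch)
curl --header "X-Vault-Token: $VAULT_TOKEN" \
     --request POST \
     --data '{"password":"new-password"}' \
     https://fully.qualified.domain.name/v1/auth/userpass/users/<your-username>/password
```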
If successful, Vault will not return data. You may want to make response headers visible via `curl --include`. A successful password change results in an HTTP status code 204.