Share Secrets Safely

...and here is how!

share secrets safely, or sheesy for short, is a command-line tool that makes gpg-powered cryptography easy to use.

It's there for you and your team to help you avoid putting plain-text secrets into your repository from day one.

However, there is more to it and this guide will give you an overview of the difficulties associated with shared secrets, and how to overcome them.

About shared secrets!

A secret is knowledge whose possession yields value. It can be credentials providing access to databases, which in turn may contain confidential data or all of your customers' payment information. Or it can be a token granting full access to a big company's AWS account with unlimited credit.

The first line of defense is to never, ever store secrets in plain text! This cannot be stressed enough. Never do it, ever; instead, read the 'First Steps' to learn how to avoid it from day one.
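To see how easily plain-text secrets slip in, here is a minimal sketch of a pre-commit style scan. The patterns and the demo file are purely illustrative and are not part of sheesy:

```shell
# Minimal sketch of a plain-text secret scan (illustrative patterns only).
# Create a demo file containing an obvious credential.
mkdir -p demo && printf 'AWS_SECRET_ACCESS_KEY=abc123\n' > demo/config.env

found=no
# Search recursively for assignment-style lines mentioning common secret names.
if grep -rEq '(SECRET|PASSWORD|TOKEN)[A-Z_]*=' demo; then
  found=yes
  echo "possible plain-text secret found"
fi

rm -rf demo
```

A scan like this catches only the obvious cases; the real fix is to keep secrets encrypted from the start.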

Rotation, Rotation, Rotation!

An interesting property of shared secrets is that once they have been read, they must be considered leaked. By using sheesy you try to assure that no unauthorized party can easily read them, but authorized parties still read them eventually, adding the risk of leakage each time.

After adding sheesy to your workflow, your next goal must be to make it easy (ideally automatic) to rotate these secrets. Rotation invalidates any leak and acts like a reset. The shorter secrets remain valid, the lower the risk of leakage.

Taken further, the safest secrets are the ones that never stay valid for an extended period of time, and which are tied to a specific entity.
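A rotation can be as simple as generating a fresh value and re-adding it to the vault. The sketch below assumes a vault already exists in the current directory and uses a hypothetical resource name 'db-password'; the sy invocation is only attempted when the binary is installed:

```shell
# Sketch: rotate a secret by generating a new random value and storing it
# in the vault under the hypothetical name 'db-password'.
new_secret=$(head -c 24 /dev/urandom | base64)

if command -v sy >/dev/null 2>&1; then
  # Re-encrypts the fresh value for all current recipients.
  printf '%s' "$new_secret" | sy vault add :db-password
else
  echo "sy not installed - would store the rotated value as 'db-password'"
fi
```

Run from cron or your CI system, a script like this keeps the window in which any leaked value is useful as short as possible.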

The Tools

Fortunately, all the tooling needed to avoid plain-text secrets and make sharing them a little safer already exists. sheesy does nothing more than bring it together in a single binary.

Those are, namely:

  • gpg cryptography
    • Provides user identification with public keys as well as proven cryptography.
    • The Web of Trust helps to conveniently verify that public keys actually belong to their owners, ensuring nobody sneaks in.
  • the sheesy command-line tool
    • A binary communicating with the gpg cryptography engine via gpgme.
    • It provides a great user experience, making gpg easy to use even without prior knowledge.
    • It provides capabilities that make it easy to use sheesy vaults from your build pipeline.
  • pass - the standard unix password manager
    • pass is a shell script which drives the gpg program to make it easier to use in teams. sheesy vaults are fully compatible with pass vaults.

You might ask yourself why you would choose sheesy over other tools. The comparisons that follow should help you decide what's best for you.

Pass

The standard unix pass is a shell script which requires the presence of various standard unix tools, among them tree and getopt. The latter are not necessarily present, and even if they are, they may not produce exactly the same results. On OSX, for example, the gpg file suffix is shown instead of hidden, and pass goes into an endless loop if getopt is broken, which it is by default unless brew reinstall gnu-getopt has been invoked.

sheesy has only one dependency: gpg. Even there it does not depend on the executable, but rather on the gpg-agent. It does not invoke the gpg command, and thus will not be confused by changes in the way gpg interprets its arguments between minor version increments.

Besides, as pass just invokes gpg, it suffers from the horrible and hard-to-grok error messages gpg produces. Using pass and gpg requires overcoming a significant learning barrier: you are required to know and understand the 'Web of Trust' and all the error messages that come with not having one big enough to encrypt for the desired recipients.

sheesy is built with great user experience as a first-class requirement. Even though you will always see the underlying gpg error, it will explain what it means and provide hints to solve the issue. When encryption fails, it will list exactly which recipients you cannot encrypt for, and why.

pass does have a few tests, but it's unclear when and where they run. sheesy is developed in a test-driven fashion, with user-centric tests that model real-world interaction. This is why those interactions are designed to be understandable, consistent and easy to remember.

Gopass

gopass is 'the slightly more awesome standard unix password manager for teams', as claimed on the project's GitHub page. As I have never used it beyond trying it locally, this paragraph may lack detail. However, a first impression is worth something, so here we go.

As it is a Go program, it comes without any dependencies except for the gpg executable. It invokes that executable directly, and thus is vulnerable to changes in the way gpg parses its arguments.

It's feature-laden and seems overwhelming at first; it is clearly not centered around user experience, otherwise the user journey would be much more streamlined and easier to comprehend. Many advanced features I certainly don't get to enjoy that way.

A sibling of the issue above seems to be that it is hell-bent on being a personal password store. Thus it stores meta-data in your home directory and really wants a root store, which is placed in your home by default. So-called 'mounts' are really just a way to let it know about other pass-compatible vaults, and I believe that makes them a buzzword. Nonetheless, this made it hard for me to get started with it, and I still feel highly uncomfortable using it thanks to its opinionatedness.

Last but not least, the issue that I find makes the case against gopass is that it actually abandons the Web of Trust in favor of simplicity for the user. Even though I understand why one would do that, I think the Web of Trust is an awesome idea with a terrible user experience, which just begs to be made usable for the masses through better tooling.

Additionally, gopass just aims to be slightly more awesome than pass, which shows: it is basically pass written in Go with more features.

Even though it certainly is better than pass, I wouldn't want to use it in its place because it adds so much complexity while entirely removing the 'Web of Trust'. The latter seems to have happened rather sneakily, which I find problematic.

It should be valued that they actively increase test coverage, but it also means they don't do test-driven development, which feeds my doubts about the quality of the software.

Via HomeBrew (OSX and Linux)

This is by far the easiest way to get the binary. Just execute the following code:

brew tap share-secrets-safely/cli
brew install sheesy

This will install gpg as well, which is required for the sheesy vault to work.

Via Git Clone

Thanks to the getting-started repository, obtaining the release binaries on demand becomes a breeze. This is particularly useful for quickly fetching sheesy for use within containers.

git clone https://github.com/share-secrets-safely/getting-started
./getting-started/sy
Cloning into 'getting-started'...
Downloading https://github.com/share-secrets-safely/cli/releases/download/4.0.10/sy-cli-Linux-x86_64.tar.gz...
error: 'sy' requires a subcommand, but one was not provided

USAGE:
    sy <SUBCOMMAND>

For more information try --help

The binaries are downloaded when ./sy is first executed, and you will find them in ./bin/$(uname -s)/* for further use.
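To call the fetched binary from anywhere in the current shell session, you can prepend that platform directory to your PATH. The path below assumes the clone lives in the current directory:

```shell
# Sketch: put the platform-specific binary directory on the PATH for this
# session (assumes ./getting-started was cloned into the current directory).
PLATFORM_BIN="$PWD/getting-started/bin/$(uname -s)"
export PATH="$PLATFORM_BIN:$PATH"
echo "added $PLATFORM_BIN to PATH"
```

Add the export line to your shell profile if you want the change to persist across sessions.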

Via Releases

Please note that in order to use sy, you will need a working installation of gpg.

Navigate to the releases page and download a release binary suitable for your system. A full example for Linux looks like this:

curl --fail -Lso sy.tar.gz https://github.com/share-secrets-safely/cli/releases/download/4.0.5/sy-cli-Linux-x86_64.tar.gz
curl --fail -Lso sy.tar.gz.gpg https://github.com/share-secrets-safely/cli/releases/download/4.0.5/sy-cli-Linux-x86_64.tar.gz.gpg
# verify 'sy' was built by one of the maintainers
gpg --import <(curl -s https://raw.githubusercontent.com/share-secrets-safely/cli/master/signing-keys.asc) 2>/dev/null
gpg --sign-key --yes --batch 763629FEC8788FC35128B5F6EE029D1E5EB40300 &>/dev/null
gpg --verify ./sy.tar.gz.gpg sy.tar.gz
# now that we know it's the real thing, let's use it.
tar xzf sy.tar.gz
# This will print out that the file was created by one of the maintainers. If you chose to
# trust the respective key after verifying it belongs to the maintainers, gpg will tell you
# it is verified.

# Finally put the executable into your PATH
mv ./sy /usr/local/bin
gpg: Signature made Fri Aug 16 09:58:02 2019 UTC
gpg:                using RSA key 763629FEC8788FC35128B5F6EE029D1E5EB40300
gpg: checking the trustdb
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   1  signed:   1  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: depth: 1  valid:   1  signed:   0  trust: 1-, 0q, 0n, 0m, 0f, 0u
gpg: Good signature from "Sebastian Thiel (Yubikey USB-C) <sthiel@thoughtworks.com>" [full]

Now the sy executable is available in your PATH.

sy --help
sy 4.0.10
Sebastian Thiel <byronimo@gmail.com>
The 'share-secrets-safely' CLI to interact with GPG/pass-like vaults.

USAGE:
    sy <SUBCOMMAND>

FLAGS:
    -h, --help       Prints help information
    -V, --version    Prints version information

SUBCOMMANDS:
    vault          Various commands to store and retrieve secrets and control who has access.
    substitute     Substitutes templates using structured data. The idea is to build a tree of data that is used to
                   substitute in various templates, using multiple inputs and outputs.That way, secrets (like
                   credentials) can be extracted from the vault just once and used wherever needed without them
                   touching disk.Liquid is used as template engine, and it's possible to refer to and inherit from
                   other templates by their file-stem. Read more on their website at
                   https://shopify.github.io/liquid .
    process        Merge JSON or YAML files from standard input from specified files. Multi-document YAML files are
                   supported. Merging a single file is explicitly valid and can be used to check for syntax errors.
    extract        Extract scalar or complex values from any JSON or YAML file. Multi-document YAML files are
                   supported.
    completions    generate completions for supported shell
    help           Prints this message or the help of the given subcommand(s)

Read more on https://share-secrets-safely.github.io/cli

Via Cargo

If you already have cargo available, installation is as easy as the following:

cargo install sheesy-cli

This installation should be preferred as it makes updating the binary much easier. If you don't have cargo yet, you can install it via the instructions on rustup.rs.

Please note that for building on OSX you are required to install certain dependencies locally, which is also the case on Linux systems.

First of all, we assume that you have installed the sy command-line program already. If not, please have a look at the chapter about installation.

Initializing the vault

Assuming your current directory is empty, just running sy vault will result in an error.

sy vault
error: Failed to read vault configuration file at './sy-vault.yml'

Caused by: 
 1: No such file or directory (os error 2)

What you want to do is initialize the vault. This will add yourself as the first recipient and add some state to the directory you chose, or the one you are in.

sy vault init
error: No existing GPG key found for which you have a secret key. Please create one with 'gpg --gen-key' and try again.

Assuming we have followed the instructions, or already have set up a gpg key, you will get quite a different result.

sy vault init
Exported public key for user Tester (for testing only) <tester@example.com> to '.gpg-keys/D6339718E9B58FCE3C66C78AAA5B7BF150F48332'
vault initialized at './sy-vault.yml'

Adding Resources

Usually what happens next is to add some resources. For now they will only be encrypted for you, and thus can only be read by you.

Resources are added via resource specs, or can be created by editing.

There are various ways to add new resources...

...by gathering output from a program...

echo hi | sy vault add :from-program
Added 'from-program'.

...or from existing files.

sy vault add $SRC_DIR/README.md:README.md
Added 'README.md'.
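The resource spec takes the form '<src>:<dst>': a bare '<src>' is shorthand for '<src>:<src>', and an empty source (':<dst>') reads from standard input. Here is a small illustration of how those forms expand, written as plain shell; this is my interpretation for illustration, not sheesy's actual parser:

```shell
# Illustration only (not sheesy's actual parser): how the spec forms map a
# source to a destination inside the vault.
expand_spec() {
  case "$1" in
    :*)  printf 'stdin -> %s\n' "${1#:}" ;;           # ':<dst>' reads standard input
    *:*) printf '%s -> %s\n' "${1%%:*}" "${1#*:}" ;;  # '<src>:<dst>'
    *)   printf '%s -> %s\n' "$1" "$1" ;;             # '<src>' means '<src>:<src>'
  esac
}

expand_spec ':from-program'             # → stdin -> from-program
expand_spec 'README.md:docs/README.md'  # → README.md -> docs/README.md
expand_spec 'notes.txt'                 # → notes.txt -> notes.txt
```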

You can list existing resources...

sy vault list
syv://.
README.md
from-program

or you can show them.

sy vault show from-program
hi

Meet Alexis!

Even though secrets that are not shared with anyone (but yourself) are great for security, they are not too useful in many settings.

So we will add our first recipient, Alexis!

As always, Alexis will require their own gpg key, and for the sake of simplicity we will assume it is already present.

Usually it's also easiest to let new recipients 'knock at the door of the vault', and leave it to existing recipients of the vault to 'let them in'.

In this analogy, 'knocking at the door' is equivalent to placing their key in the vault's keys directory. 'Letting them in' means re-encrypting all resources for the current and the new recipients after verifying that their key truly belongs to them.

That's quite a lot to digest, so let's start and make small steps.

Let's let Alexis look at the vault:

sy vault
syv://.
README.md
from-program

What a good start! We can list resources, but does that mean we can also see them?

sy vault show from-program
error: Export your public key using 'sy vault recipient init', then ask one of the existing recipients to import your public key using 'sy vault recipients add <your-userid>.'
Caused by: 
 2: The content was not encrypted for you.
 1: No secret key (gpg error 17)

Phew, that's good actually! It's also good that it tells you right away how to solve the issue. Let's follow along.

sy vault recipient init
Exported public key for user c <c@example.com> (905E53FE2FC0A500100AB80B056F92A52DF04D4E).

Cough, let's ignore that this key seems to belong to 'user c <c@example.com>'; Alexis loves simplicity!

Let's see what changed - where is this key exactly?

tree -a
.
├── .gpg-id
├── .gpg-keys
│   ├── 905E53FE2FC0A500100AB80B056F92A52DF04D4E
│   └── D6339718E9B58FCE3C66C78AAA5B7BF150F48332
├── README.md.gpg
├── from-program.gpg
└── sy-vault.yml

1 directory, 6 files

It looks like the keyfile is actually stored in a hidden directory. But as you can see, it's just something that can be configured to your liking.

cat sy-vault.yml
---
name: ~
auto_import: true
trust_model: always
secrets: "."
gpg_keys: ".gpg-keys"
recipients: "./.gpg-id"

That's all we can do here; now back to the prime recipient.

Adding Alexis

Back with the very first recipient of the vault, who has already been informed about Alexis' arrival. We received an e-mail and know it's c@example.com, so maybe we can use that.

sy vault recipient add c@example.com
error: Fingerprint 'c@example.com' must only contain characters a-f, A-F and 0-9.

Looks like it doesn't like the format. The problem is that for verification purposes, it wants you to add the fingerprint, which you should also have received from Alexis. This links the key (identified by its fingerprint) to Alexis.

Let's spell it out:

sy vault recipient add 2DF04D4E
Imported recipient key at path '.gpg-keys/905E53FE2FC0A500100AB80B056F92A52DF04D4E'
Signed recipients key user c <c@example.com> (905E53FE2FC0A500100AB80B056F92A52DF04D4E) with signing key Tester (for testing only) <tester@example.com> (D6339718E9B58FCE3C66C78AAA5B7BF150F48332)
Exported public key for user user c <c@example.com> to '.gpg-keys/905E53FE2FC0A500100AB80B056F92A52DF04D4E'
Added recipient user c <c@example.com>
Re-encrypted 'README.md' for new recipient(s)
Re-encrypted 'from-program' for new recipient(s)

Let's look at the steps it performs in detail:

  • import
    • It finds Alexis' key, as identified by the fingerprint, in the vault's keys directory and imports it into the gpg keychain.
  • signing
    • Alexis' key is signed with the prime recipient's key, which indicates we verified that it truly belongs to Alexis.
  • export
    • Alexis' key is re-exported, as it now also contains said signature. The fact that the prime recipient believes the key belongs to Alexis is communicated to others that way, which helps build the Web of Trust.
  • encrypt
    • Each resource of the vault is re-encrypted for all recipients. This means Alexis will be able to peek inside.

(If Alexis' key were already in our keychain and signed, we could also add them more easily using their e-mail alongside the --verified flag. Find all possible flags of sy vault recipients add here.)

Back with Alexis, the latest recipient of the vault

Now that Alexis has been added as a recipient, it should be possible to peek at the secrets it contains!

sy vault show from-program
hi

Beautiful!

And what's even better: Alexis now can add recipients on their own!

Next steps

This is just the beginning! Feel free to add more resources and recipients, add the contents of the vault to git and distribute it that way, or add it to your tooling to extract secrets when building your software.
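Distributing a vault via git can look like the sketch below. The directory and file names mirror the walkthrough above (stand-in files are created so the sketch is self-contained), and the git steps are only attempted when git is available:

```shell
# Sketch: put a freshly created vault under version control for distribution.
mkdir -p vault-demo && cd vault-demo
# Stand-ins for the files the walkthrough produced.
touch sy-vault.yml .gpg-id README.md.gpg
mkdir -p .gpg-keys

if command -v git >/dev/null 2>&1; then
  git init -q .
  # Everything in the vault is safe to track: secrets are encrypted on disk.
  git add .
  echo "vault files staged for commit"
else
  echo "git not installed - would track sy-vault.yml, .gpg-id and *.gpg files"
fi
cd ..
```

Because resources only ever exist encrypted on disk, the whole directory can be pushed to a shared remote without further precautions.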

The vault sub-command is quite a complex one, as it implements all interactions with vaults. A vault contains shared secrets and is compatible with the standard unix password manager.

It provides subcommands for dealing with two kinds of items:

  • resources
  • recipients

About Resources

Most of the time, when using the vault, you will deal with the resources contained within. A resource is a secret encrypted such that it is readable and writable by all recipients.

Resources can be added, removed, edited, listed and shown.

As they are used most of the time, they are found directly in the vault sub-command.

About Recipients

Each recipient is identified by their gpg-key, which is tied to their identity. New recipients can only be added by existing recipients of the vault, which also requires them to verify that the new key truly belongs to the respective person.

Recipients may indicate trust relationships between each other, which allows encrypting for recipients whose keys have not been explicitly verified. This is called the Web of Trust, a feature that sheesy makes easier to use.

As they are used less often, they are tucked away in the recipients sub-command.

The vault sub-command

As the vault sub-command is only a hub, we recommend looking at its sub-commands instead.

sy vault --help
sy-vault 4.0.10
Sebastian Thiel <byronimo@gmail.com>
Various commands to store and retrieve secrets and control who has access.

USAGE:
    sy vault [OPTIONS] --config-file <path> [SUBCOMMAND]

FLAGS:
    -h, --help    Prints help information

OPTIONS:
    -s, --select <selector>     Specify the vault which should be the leader.This is particularly relevant for
                                operations with partitions.It has no effect during 'vault init'.A vault can be selected
                                by the directory used to stored its resources, by its name (which may be ambiguous), or
                                by the index in the vault description file. [default: 0]
    -c, --config-file <path>    Path to the vault configuration YAML file. [default: ./sy-vault.yml]

SUBCOMMANDS:
    init          Initialize the vault in the current directory. If --gpg-key-id is unset, we will use the only key
                  that you have a secret key for, assuming it is yours.If you have multiple keys, the --gpg-key-id
                  must be specified to make the input unambiguous.
    add           Add a new resource to the vault.
    edit          Edit a resource. This will decrypt the resource to a temporary file, open up the $EDITOR you have
                  specified, and re-encrypt the changed content before deleting it on disk.
    show          Decrypt a resource
    list          List the vault's content.
    remove        Delete a resource from the vault.
    recipients    Interact with recipients of a vault. They can encrypt and decrypt its contents.
    partitions    A partition is essentially another vault, as such it has its own recipients list, name, keys
                  directory place to store resources.Its major promise is that it is non-overlapping with any other
                  partition.Its main benefit is that it allows one recipients list per resource directory,
                  effectively emulating simple access control lists.
    help          Prints this message or the help of the given subcommand(s)
sy vault init --help
sy-vault-init 
Initialize the vault in the current directory. If --gpg-key-id is unset, we will use the only key that you have a secret
key for, assuming it is yours.If you have multiple keys, the --gpg-key-id must be specified to make the input
unambiguous.

USAGE:
    sy vault --config-file <path> init [FLAGS] [OPTIONS]

FLAGS:
    -p, --first-partition    Setting this flag indicates that you want to add partitions later.It enforces a
                             configuration which makes your vault suitable, namely it assures that you set an explicit
                             secrets directory.
    -h, --help               Prints help information
        --no-auto-import     If set, missing keys will not automatically be imported to your keychain. This may make
                             attempts to encrypt resources fail.

OPTIONS:
    -i, --gpg-key-id <userid>...      The key-id of the public key identifying a recipient in your gpg keychain.
    -k, --gpg-keys-dir <directory>    The directory to hold the public keys of all recipients.Please note that these
                                      keys are exported with signatures. [default: .gpg-keys]
    -n, --name <name>                 The name of the vault. It can be used to identify the vault more easily, and its
                                      primary purpose is convenience.
    -r, --recipients-file <path>      The path to the file to hold the fingerprints of all recipients. If set to just
                                      the file name, like 'recipients', it will be interpreted as relative to the
                                      --secrets-dir. If a path is given, like './recipients', it is used as is.
                                      [default: .gpg-id]
    -s, --secrets-dir <path>          The directory which stores the vaults secrets. [default: .]
        --trust-model <model>         The model by which keys to encrypt for are verified to truly belong to the person.
                                      If unset, it defaults to 'always'.'always': whenever a key has been added to the
                                      vault, it is trusted without your intervention. 'web-of-trust': the standard GPG
                                      web of trust with default rules. In the most simple case, you will need to sign a
                                      key prior to be able to encrypt for it. [default: always]  [possible values: web-
                                      of-trust, always]
sy vault add --help
sy-vault-add 
Add a new resource to the vault.

USAGE:
    sy vault --config-file <path> add <spec>

FLAGS:
    -h, --help    Prints help information

ARGS:
    <spec>    A specification identifying a mapping from a source file to be stored in a location of the vault. It
              takes the form '<src>:<dst>', where '<src>' is equivalent to '<src>:<src>'.<dst> should be vault-
              relative paths, whereas <src> must point to a readable file and can be empty to read from
              standard input, such as in ':<dst>'.If standard input is a TTY, it will open the editor as defined by
              the EDITOR environment variable.
sy vault edit --help
sy-vault-edit 
Edit a resource. This will decrypt the resource to a temporary file, open up the $EDITOR you have specified, and re-
encrypt the changed content before deleting it on disk.

USAGE:
    sy vault --config-file <path> edit [FLAGS] [OPTIONS] <path>

FLAGS:
    -h, --help              Prints help information
        --no-create         If set, the resource you are editing must exist. Otherwise it will be created on the fly,
                            allowing you to add new resources by editing them.
        --no-try-encrypt    Unless set, we will assure encryption works prior to launching the editor. This assures you
                            do not accidentally loose your edits.

OPTIONS:
    -e, --editor <editor>    The path to your editor program. It receives the decrypted content as first argument and is
                             expected to write the changes back to that file before quitting. [default: vim]

ARGS:
    <path>    Either a vault-relative path to the file as displayed by 'vault show',a vault-relative path with the
              '.gpg' suffix, or an absolute path with or without the '.gpg' suffix.
sy vault list --help
sy-vault-list 
List the vault's content.

USAGE:
    sy vault --config-file <path> list

FLAGS:
    -h, --help    Prints help information
sy vault remove --help
sy-vault-remove 
Delete a resource from the vault.

USAGE:
    sy vault --config-file <path> remove <path>...

FLAGS:
    -h, --help    Prints help information

ARGS:
    <path>...    The vault-relative path of a resource in the vault
sy vault show --help
sy-vault-show 
Decrypt a resource

USAGE:
    sy vault --config-file <path> show <path>

FLAGS:
    -h, --help    Prints help information

ARGS:
    <path>    Either a vault-relative path to the file as displayed by 'vault show',a vault-relative path with the
              '.gpg' suffix, or an absolute path with or without the '.gpg' suffix.
sy vault recipients --help
sy-vault-recipients 
Interact with recipients of a vault. They can encrypt and decrypt its contents.

USAGE:
    sy vault --config-file <path> recipients [SUBCOMMAND]

FLAGS:
    -h, --help    Prints help information

SUBCOMMANDS:
    init      Add your single (or the given) recipient key to the vault by exporting the public key and storing it
              in the vaults local gpg key database. Afterwards someone able to decrypt the vault contents can re-
              encrypt the content for you.
    add       Add a new recipient. This will re-encrypt all the vaults content.If the '--verified' flag is unset,
              you will have to specify the fingerprint directly (as opposed to allowing the recipients email address
              or name) to indicate you have assured yourself that it actually belongs to the person.Otherwise the
              respective key as identified by its fingerprint will then be imported and signed. It is expected that
              you have assured the keys fingerprint belongs to the recipient. Keys will always be exported into the
              vaults key directory (if set), which includes signatures.Signatures allow others to use the 'Web of
              Trust' for convenient encryption.
    list      List the vaults recipients as identified by the recipients file.
    remove    Remove the given recipient. This will re-encrypt all the vaults content for the remaining
              recipients.The gpg keychain will not be altered, thus the trust-relationship with the removed
              recipient is left intact.However, the recipients key file will be removed from the vault.
    help      Prints this message or the help of the given subcommand(s)
sy vault recipients init --help
sy-vault-recipients-init 
Add your single (or the given) recipient key to the vault by exporting the public key and storing it in the vaults local
gpg key database. Afterwards someone able to decrypt the vault contents can re-encrypt the content for you.

USAGE:
    sy vault recipients init [userid]...

FLAGS:
    -h, --help    Prints help information

ARGS:
    <userid>...    The key-id of the public key identifying a recipient in your gpg keychain.
sy vault recipients add --help
sy-vault-recipients-add 
Add a new recipient. This will re-encrypt all the vaults content.If the '--verified' flag is unset, you will have to
specify the fingerprint directly (as opposed to allowing the recipients email address or name) to indicate you have
assured yourself that it actually belongs to the person.Otherwise the respective key as identified by its fingerprint
will then be imported and signed. It is expected that you have assured the keys fingerprint belongs to the recipient.
Keys will always be exported into the vaults key directory (if set), which includes signatures.Signatures allow others
to use the 'Web of Trust' for convenient encryption.

USAGE:
    sy vault recipients add [FLAGS] [OPTIONS] <userid>...

FLAGS:
    -h, --help        Prints help information
        --verified    If specified, you indicate that the user id to be added truly belongs to a person you know and
                      that you have verified that relationship already. You have used `gpg --sign-key <recipient>` or
                      have set the owner trust to ultimate so that you can encrypt for the recipient.

OPTIONS:
    -p, --partition=<partition>...     Identifies the partition to add the recipient to. This can be done either using
                                       its name or its secrets directory.If unset, the recipient will be added to
                                       naturally selected vault, see the --select flag.
        --signing-key <signing-key>    The userid or fingerprint of the key to use for signing not-yet-verified keys. It
                                       must only be specified if you have access to multiple secret keys which are also
                                       current recipients.

ARGS:
    <userid>...    The key-id of the public key identifying a recipient in your gpg keychain.
sy vault recipients list --help
sy-vault-recipients-list 
List the vaults recipients as identified by the recipients file.

USAGE:
    sy vault recipients list

FLAGS:
    -h, --help    Prints help information
sy vault recipients remove --help
sy-vault-recipients-remove 
Remove the given recipient. This will re-encrypt all the vaults content for the remaining recipients.The gpg keychain
will not be altered, thus the trust-relationship with the removed recipient is left intact.However, the recipients key
file will be removed from the vault.

USAGE:
    sy vault recipients remove [OPTIONS] <userid>...

FLAGS:
    -h, --help    Prints help information

OPTIONS:
    -f, --partition=<partition>...    Identifies the partition to remove the recipient from. This can be done either
                                      using its name or its secrets directory.If unset, the recipient will be added to
                                      naturally selected vault, see the --select flag.

ARGS:
    <userid>...    The key-id of the public key identifying a recipient in your gpg keychain.
sy vault partitions --help
sy-vault-partitions 
A partition is essentially another vault, as such it has its own recipients list, name, keys directory place to store
resources.Its major promise is that it is non-overlapping with any other partition.Its main benefit is that it allows
one recipients list per resource directory, effectively emulating simple access control lists.

USAGE:
    sy vault --config-file <path> partitions [SUBCOMMAND]

FLAGS:
    -h, --help    Prints help information

SUBCOMMANDS:
    add       Adds a partition to the vault.
    remove    Removes a partition.Please note that this will not delete any files on disk, but change the vault
              description file accordingly.
    help      Prints this message or the help of the given subcommand(s)
sy vault partitions add --help
sy-vault-partitions-add 
Adds a partition to the vault.

USAGE:
    sy vault partitions add [OPTIONS] <partition-path>

FLAGS:
    -h, --help    
            Prints help information


OPTIONS:
    -i, --gpg-key-id <userid>...    
            The fingerprint or user ids of the members of the partition. If unset, it will default to your key, if there
            is no ambiguity.
    -n, --name <name>               
            The name of the partitions vault to use. If unset, it will default to the basename of the partitions
            resources directory.
    -r, --recipients-file <path>    
            The path to the file to hold the fingerprints of all recipients. If set to just the file name, like
            'recipients', it will be interpreted as relative to the --secrets-dir. If a path is given, like
            './recipients', it is used as is.

ARGS:
    <partition-path>    
            The path at which the partition should store resources. It may be a relative path which will be adjusted to
            be relative to the root of the resource directory of the master vault. It may also be an absolute directory,
            which works but removes portability.
sy vault partitions remove --help
sy-vault-partitions-remove 
Removes a partition. Please note that this will not delete any files on disk, but change the vault description file
accordingly.

USAGE:
    sy vault partitions remove <partition-selector>

FLAGS:
    -h, --help    Prints help information

ARGS:
    <partition-selector>    A partition can be selected by the directory used to store its resources, by its name
                            (which may be ambiguous), or by the index in the vault description file.

Tools are everything not directly related to managing secrets; they help you use secrets while preventing them from touching disk.

This can be achieved by putting the following capabilities together:

  1. Context Creation
    • A context is just a bunch of properties in a structure.
    • Used to instantiate and customize templates.
    • Parts of it may be secret.
    • It can live in multiple places, such as files and in-memory as it is produced in real-time by programs. The latter can be 'sheesy' decrypting a file on the fly.
  2. Template Substitution
    • Using a templating engine and a set of templates, the data can be placed in any kind of file to be consumed by other tools.
    • It's also useful to maintain a library of templates controlled by contexts, which change depending on the use-case.

As an abstract example, this is how a build-pipeline for Kubernetes could look:

stage=production
merge \
    <(show-secret $stage/infrastructure.yml) \
    etc/team.json \
    etc/stages/$stage.yml \
| substitute \
    --separator $'---\n' \
    etc/template/k8s-shared-infrastructure.yml \
    etc/template/k8s-$stage-infrastructure.yml \
| kubectl --kubeconfig <(show-secret $stage/kube.config) apply -f -

Read on to learn more about the individual tools to merge, substitute and extract.

sy substitute --help
sy-substitute 
Substitutes templates using structured data. The idea is to build a tree of data that is used to substitute in various
templates, using multiple inputs and outputs. That way, secrets (like credentials) can be extracted from the vault just
once and used wherever needed without them touching disk. Liquid is used as template engine, and it's possible to refer
to and inherit from other templates by their file-stem. Read more on their website at https://shopify.github.io/liquid .

USAGE:
    sy substitute [FLAGS] [OPTIONS] [--] [template-spec]...

FLAGS:
    -h, --help        
            Prints help information

        --no-stdin    
            If set, we will not try to read structured data from standard input. This may be required in some situations
            where we are blockingly reading from a standard input which is attached to a pseudo-terminal.
    -v, --validate    
            If set, the instantiated template will be parsed as YAML or JSON. If both of them are invalid, the command
            will fail.

OPTIONS:
    -d, --data=<data>
            Structured data in YAML or JSON format to use when instantiating/substituting the template. If set,
            everything from standard input is interpreted as template.
    -e, --engine=<name>
            The choice of engine used for the substitution. Valid values are 'handlebars' and 'liquid'. 'liquid', the
            default, comes with batteries included and is very good at handling one template at a time. 'handlebars'
            supports referencing other templates using partials, which is useful for sharing common functionality.
            [default: liquid]  [possible values: handlebars, liquid]
        --partial=<template>...
            A file to be read as a partial template, whose name will be its file stem. It can then be included from
            another template, and thus act as a function call.
        --replace=<find-this:replace-with-that>...
            A simple find & replace for values for the string data to be placed into the template. The word to find is
            the first specified argument, the second one is the word to replace it with, e.g. -r=foo:bar.
    -s, --separator=<separator>
            The string to use to separate multiple documents that are written to the same stream. This can be useful to
            output a multi-document YAML file from multiple input templates to stdout if the separator is '---'. The
            separator is also used when writing multiple templates into the same file, like in 'a:out b:out'. [default: 
            ]

ARGS:
    <template-spec>...    
            Identifies how to map template files to output. The syntax is '<src>:<dst>', where <src> and <dst> are
            relative or absolute paths to the source template or destination file respectively. If <src> is
            unspecified, the template will be read from stdin, e.g. ':output'. Only one spec can read from stdin. If
            <dst> is unspecified, the substituted template will be output to stdout, e.g. 'input.hbs:' or 'input.hbs'.
            Multiple templates are separated by the '--separator' accordingly. This is particularly useful for YAML
            files, where the separator should be `$'---\n'`

You can also use this alias: sub.

Control your output

template-specs are the bread and butter of this substitution engine. They allow you to not only specify the input templates, like ./some-file.tpl, but also to set the output location.

By default, this is standard output, but it can easily be some-file.yml, as in ./some-file.tpl:out/some-file.yml.

You can have any number of template-specs, all of which share the same, possibly expensive, data-model.

Separating YAML Documents

At first sight, it might not seem so useful to output multiple templates to standard output. However, some formats are built for just that use case, provided you separate the documents correctly.

If there are multiple YAML files for instance, you can separate them like this:

echo 'value: 42' \
| sy substitute --separator=$'---\n' <(echo 'first: {{value}}') <(echo 'second: {{value}}')
first: 42
---
second: 42

Also note the explicit newline in the separator, which might call for special syntax depending on which shell you use.
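The `$'…'` form is ANSI-C quoting, available in bash and zsh but not in plain POSIX sh. A portable sketch of building the same separator uses `printf` with a guard character, since command substitution would otherwise strip the trailing newline:

```shell
# bash/zsh: sep=$'---\n' expands \n to a real newline.
# POSIX sh equivalent: printf '%b' interprets the escape, and the
# guard dot protects the newline from command-substitution stripping.
sep=$(printf '%b' '---\n.'); sep=${sep%.}
# the separator is now four bytes: '-', '-', '-', newline
printf '%s' "$sep" | wc -c
```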

Validating YAML or JSON Documents

In the example above, how great would it be to protect ourselves from accidentally creating invalid YAML or JSON documents?

Fortunately, sheesy has got you covered with the --validate flag.

echo 'value: 42' \
| sy substitute --validate <(echo '{"first":"{{value}}}') 
error: Validation of template output at 'stream' failed. It's neither valid YAML, nor JSON
Caused by: 
 1: while scanning a quoted scalar, found unexpected end of stream at line 1 column 10

Protecting against 'special' values

When generating structured data files like YAML or JSON, even with a valid template you are vulnerable to the values contained in the data-model. Some passwords, for instance, may contain characters which break your output. Even though --validate can tell you right away, how can you make this work without pre-processing your data?

--replace to the rescue. The following example fails to validate because the password now contains a character that is special in the JSON context:

echo 'password: xyz"abc' \
| sy substitute --validate <(echo '{"pw":"{{password}}"}') 
error: Validation of template output at 'stream' failed. It's neither valid YAML, nor JSON
Caused by: 
 1: while parsing a flow mapping, did not find expected ',' or '}' at line 1 column 12

Here is how it looks without validation:

echo 'password: xyz"abc' \
| sy substitute <(echo '{"pw":"{{password}}"}') 
{"pw":"xyz"abc"}

You can fix it by replacing all offending characters with their escaped version:

echo 'password: xyz"abc' \
| sy substitute --replace='":\"' --validate <(echo '{"pw":"{{password}}"}') 
{"pw":"xyz\"abc"}
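The --replace flag is a plain find & replace. The equivalent pre-processing outside of sheesy would be a sed substitution over the value before it enters the template (the variable names here are just for illustration):

```shell
# escape every double quote in the value so it is safe inside a JSON string
password='xyz"abc'
escaped=$(printf '%s' "$password" | sed 's/"/\\"/g')
printf '{"pw":"%s"}\n' "$escaped"
# prints: {"pw":"xyz\"abc"}
```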

How to use multi-file data in your templates

You have probably seen this coming from a mile away, but this is a great opportunity for a shameless plug to advertise sy merge.

sy merge allows you to merge multiple files into one, and even to apply some additional processing along the way. That way you can use the combined data as the model during template substitution.

sy merge --at=team team.yml --at=project project.yml --at=env --environment \
| sy substitute kubernetes-manifest.yaml.tpl
apiVersion: v1
data:
  game.properties: |
    enemies=aliens
    lives=3
    enemies.cheat=true
  ui.properties: |
    color.good=purple
    color.bad=yellow
kind: ConfigMap
metadata:
  name: game-config
  namespace: default
  labels:
    team: awesomenessies
    department: finance
    project: fantasti-project
    kind: AI-research

Templates from STDIN ? Sure thing...

By default, we read the data model from stdin and expect all templates to be provided via template-specs. However, sometimes doing exactly the opposite might be what you need.

In this case, just use the -d flag to feed the data model, which automatically turns standard input into the template source.

echo '{{greeting | capitalize}} {{name}}' | sy substitute -d <(echo '{"greeting":"hello", "name":"Hans"}')
Hello Hans

Meet the engines

The substitution can be performed by various engines, each with their distinct advantages and disadvantages.

This section sums up their highlights.

Liquid (default)

The Liquid template engine was originally created for web-shops and is both easy to use and fully-featured.

Its main benefit is its various filters, which can be used to put something into uppercase ({{ "something" | upcase }}), or to encode text into base64 ({{ "text" | base64 }}).

There are a few filters which have been added for convenience:

  • base64
    • Converts anything into its base64 representation.
    • No arguments are supported.
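Assuming the filter produces standard Base64, you can cross-check what {{ "text" | base64 }} should expand to with the ubiquitous base64 tool (printf avoids echo's trailing newline):

```shell
# expected expansion of {{ "text" | base64 }}, assuming standard Base64
printf '%s' 'text' | base64
# prints: dGV4dA==
```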

Handlebars

The first optional template engine is handlebars. Compared to Liquid, it's rather bare-bones and does not support any filters; its syntax also makes chained transformations more cumbersome.

However, it allows you to use partials, which are good to model something like multiple sites, which share a header and a footer. The shared portions are filled with data that contextually originates in the page that uses them.

For example, in an invocation like this you can declare headers and footers without rendering them, and then output multiple pages that use them.

Here is the content of the files used:

cat data.json
{
  "title" : "Main Heading",
  "parent" : "base0"
}
cat base0.hbs
<html>
  <head>{{title}}</head>
  <body>
    <div><h1>Derived from base0.hbs</h1></div>
    {{~> page}}
  </body>
</html>
cat template.hbs
{{#*inline "page"}}
  <p>Rendered in partial, parent is {{parent}}</p>
{{/inline}}
{{~> (parent)~}}

When using these in substitution, this is the output:

sy substitute --engine=handlebars -d data.json base0.hbs:/dev/null template.hbs
<html>
  <head>Main Heading</head>
  <body>
    <div><h1>Derived from base0.hbs</h1></div>
  <p>Rendered in partial, parent is base0</p>

  </body>
</html>

The perceived disadvantage of having close to zero available filters can be compensated for with a processing program which takes the data and adds all the variations you need in your templates:

./data-processor < data.json | sy substitute template.tpl
The normal title: Main Heading
The lowercase title: main heading

The data-processor in the example just adds transformed values for all fields it sees:

./data-processor < data.json
{
  "title" : "Main Heading",
  "title_lowercase" : "main heading",
}
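The data-processor itself is hypothetical, and the transformation it applies is trivial. A sketch of just the lowercasing step in shell:

```shell
# the only transformation the hypothetical data-processor applies per field
title='Main Heading'
printf '%s\n' "$title" | tr '[:upper:]' '[:lower:]'
# prints: main heading
```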
sy process --help
sy-process 
Merge JSON or YAML files from standard input and from specified files. Multi-document YAML files are supported. Merging
a single file is explicitly valid and can be used to check for syntax errors.

USAGE:
    sy process [FLAGS] [OPTIONS] [--] [path-or-value]...

FLAGS:
    -h, --help            Prints help information
        --no-overwrite    If set, values in the merged document may not overwrite values already present. This is
                          enabled by default, and can be explicitly turned off with --overwrite.
        --no-stdin        If set, we will not try to read structured data from standard input. This may be required in
                          some situations where we are blockingly reading from a standard input which is attached to a
                          pseudo-terminal.
        --overwrite       If set, values in the merged document can overwrite values already present. This is disabled
                          by default, and can be explicitly turned off with --no-overwrite.

OPTIONS:
    -a, --at=<pointer>...            Use a JSON pointer to specify an existing mapping at which the next merged value
                                     should be placed. This affects only the next following --environment or <path>.
                                     Valid specifications are for example '0/a/b/4' or 'a.b.0'. If it is specified last,
                                     without a following merged value, the entire aggregated value so far is moved.
    -e, --environment=<filter>...    Import all environment variables matching the given filter. If no filter is set,
                                     all variables are imported. Otherwise it is applied as a glob, e.g. 'FOO*' includes
                                     'FOO_BAR', but not 'BAZ_BAR'. Other valid meta characters are '?' to match any
                                     single character, e.g. 'FO?' matches 'FOO'. [default: *]
    -o, --output=<mode>              Specifies how the merged result should be serialized. [default: json]  [possible
                                     values: json, yaml]
    -s, --select=<pointer>...        Use a JSON pointer to specify which sub-value to use. This affects only the next
                                     following --environment or <path>. Valid specifications are for example '0/a/b/4'
                                     or 'a.b.0', and they must point to a valid value. If it is specified last, without
                                     a following merged value, a sub-value is selected from the aggregated value.

ARGS:
    <path-or-value>...    The path to the file to include, or '-' to read from standard input. It must be in a
                          format that can be output using the --output flag. Alternatively it can be a value
                          assignment like 'a=42' or a.b.c=value.

You can also use these aliases:

  • merge
  • show

This powerful command is easier to use once you understand its mindset a little.

  • it wants to produce a single document (JSON or YAML) from multiple input documents (JSON or YAML)
  • by default, it will refuse to overwrite existing values
  • multi-document YAML files are fully supported
  • standard input is a valid source for documents
  • the order of arguments matters, as this program is implemented as a state machine
  • by default it produces JSON output

This program helps you quickly manipulate various inputs and produce a new output, which can then more easily be used in programs like sy substitute, or to generate configuration files.

Now let's look at typical use cases.

Merge multiple documents into one

This case often arises from the mere necessity of keeping things separate. Even though keeping all data in a single structured data file would work just fine, in practice not all information is under your direct control, and it is thus pulled in separately from other locations.

For substitution, multiple files are not viable, which is why a single file should be produced instead:

sy merge --at=project project.yml --at=team team.yml
{
  "project": {
    "kind": "ios-game",
    "name": "super-punch"
  },
  "team": {
    "name": "dept. nine",
    "product-owner": "Max Owner"
  }
}

As you can see, we use the --at flag to put the contents of both files into their own namespaces. Without that, we would have a clashing name field which makes the program fail.

sy merge project.yml team.yml
error: The merge failed due to conflicts
Caused by: 
 1: Refusing to merge due to the following clashing keys:
name


Overwriting individual values (or creating new ones)

Sometimes during testing, it's useful to change a single value in the configuration. You can easily do this using the key=value specification.

sy merge game-config.yml player.lives=99
error: The merge failed due to conflicts
Caused by: 
 1: Refusing to merge due to the following clashing keys:
player.lives


However, the above fails as we won't ever overwrite existing values. Let's try to argue with the program to make it work nonetheless:

sy merge game-config.yml player.lives=99 --overwrite
error: The merge failed due to conflicts
Caused by: 
 1: Refusing to merge due to the following clashing keys:
player.lives


This might appear unexpected, but it is not once you have understood that the order matters. In the example above, --overwrite simply applies too late to overwrite the value. If we swap their positions, it works.

sy merge game-config.yml  --overwrite player.lives=99 player.invincible=true
{
  "enemies": {
    "cheat": true,
    "type": "aliens"
  },
  "player": {
    "invincible": true,
    "lives": 99
  }
}

Please note that --overwrite acts as a toggle and affects all following arguments. You can toggle it back off with the --no-overwrite flag.

sy merge game-config.yml  --overwrite player.lives=99 \
        --no-overwrite --at=project project.yml --at=team team.yml \
        -o yaml
---
enemies:
  cheat: true
  type: aliens
player:
  lives: 99
project:
  kind: ios-game
  name: super-punch
team:
  name: dept. nine
  product-owner: Max Owner

Converting YAML to JSON (or vice versa)

As a side-effect, you can easily convert YAML to JSON, like so...

sy process < game-config.yml
{
  "enemies": {
    "cheat": true,
    "type": "aliens"
  },
  "player": {
    "lives": 3
  }
}

...or the other way around:

sy process < game-config.yml | sy process -o yaml
---
enemies:
  cheat: true
  type: aliens
player:
  lives: 3

Accessing environment variables

More often than not, the environment contains information you will want to make use of in your configuration. It's easy to bring it into your data model, and to filter variables by name.

sy process --environment=HO*
{
  "HOME": "/root",
  "HOSTNAME": "52170f35ed69"
}

Of course this can be combined with all other flags:

sy process --at env --environment=HO* env.NEW=value
{
  "env": {
    "HOME": "/root",
    "HOSTNAME": "52170f35ed69",
    "NEW": "value"
  }
}

'Pulling up' values to allow general substitution

It's most common to have different sets of configuration for different environments. For example, most deploy to at least two stages: pre-production and production.

When using sy process for generating configuration to be used by tooling, it's not practical to force the tooling to know the stage.

Imagine the following configuration file:

cat multi-stage-config.yml
pre-production:
  replicas: 1
  max-cpu: 200mi
  max-memory: 64M
production:
  replicas: 3
  max-cpu: 2000mi
  max-memory: 1024M

Tools either get to know which stage configuration to use, or you 'pull it up' for them:

sy process --select=production multi-stage-config.yml -o yaml
---
max-cpu: 2000mi
max-memory: 1024M
replicas: 3

Using multi-document YAML files as input

A feature that is still rare in the wild, probably due to lacking tool support, is multi-document YAML files.

We fully support them, but will merge them into a single document before processing it any further.

A file like this...

cat multi-document.yml
---
pre-production:
  replicas: 1
  max-memory: 64M
---
production:
  replicas: 3
  max-memory: 1024M

...looks like this when processed. Clashing keys will cause an error unless --overwrite is set.

sy process multi-document.yml -o yaml
---
pre-production:
  max-memory: 64M
  replicas: 1
production:
  max-memory: 1024M
  replicas: 3

Controlling standard input

We will read JSON or YAML from standard input if possible. To make it more flexible, any non-path flags are applied to standard input. This may lead to unexpected output if more than one document source is specified.

Let's start with a simple case:

cat team.yml | sy process --at=team-from-stdin
{
  "team-from-stdin": {
    "name": "dept. nine",
    "product-owner": "Max Owner"
  }
}

The moment another file is added for processing, it becomes a bit more difficult to control which argument applies where:

cat team.yml | sy process --at=team-from-stdin --at=project project.yml
{
  "kind": "ios-game",
  "name": "super-punch",
  "project": {
    "name": "dept. nine",
    "product-owner": "Max Owner"
  }
}

As you can see, the project key is used for standard input, and the team-from-stdin is seemingly ignored.

To fix this, be explicit about what you expect:

cat team.yml | sy process --at=team-from-stdin - --at=project project.yml
{
  "project": {
    "kind": "ios-game",
    "name": "super-punch"
  },
  "team-from-stdin": {
    "name": "dept. nine",
    "product-owner": "Max Owner"
  }
}

Note the single dash (-), which indicates when to read from standard input. As standard input is always consumed entirely, it can only be specified once.

sy extract --help
sy-extract 
Extract scalar or complex values from any JSON or YAML file. Multi-document YAML files are supported.

USAGE:
    sy extract [FLAGS] [OPTIONS] <pointer>...

FLAGS:
    -h, --help        Prints help information
        --no-stdin    If set, we will not try to read structured data from standard input. This may be required in some
                      situations where we are blockingly reading from a standard input which is attached to a pseudo-
                      terminal.

OPTIONS:
    -f, --file=<file>...    The path to the file to include, or '-' to read from standard input. It must be in a format
                            that can be output using the --output flag.
    -o, --output=<mode>     Specifies how the extracted result should be serialized. If the output format is not
                            explicitly set, the output will be a single scalar value per line. If the output contains a
                            complex value, the default serialization format will be used. [possible values: json, yaml]

ARGS:
    <pointer>...    Use a JSON pointer to specify which value to extract. Valid specifications are for example
                    '0/a/b/4' or 'a.b.0', and they must point to a valid value.

You can also use this alias: fetch.

The extract sub-command is meant for those cases when you need individual values from a file with structured data, for example when consuming credentials in scripts.

When extracting scalar values, those will be output one per line. If an extracted value is not scalar, e.g. an object or array, the output mode of all extracted values will change to JSON (default) or YAML with --output.

All values are specified using JSON-pointer notation, with the added convenience that slashes (/) can be exchanged for dots (.).
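The two notations are equivalent; a dotted pointer is just the slash form with the separators swapped, so a shell translation between them is a one-liner:

```shell
# 'user.name' and 'user/name' address the same value
printf '%s\n' 'user.name' | tr '.' '/'
# prints: user/name
```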

Extracting username and password

Given an input file like this, here is how you can extract username and password for usage in scripts:

cat credentials.yml
user:
  name: Hans
  password: geheim

Extraction using the JSON pointer syntax is quite straightforward, and rather forgiving:

sy extract -f=credentials.yml user.name /user/password
Hans
geheim

From here it should be easy to assign individual values to variables for use in scripts:

password="$(sy extract user.password < credentials.yml)"
username="$(sy extract user.name < credentials.yml)"
echo Authorization: Basic $(echo $username:$password | base64)
Authorization: Basic SGFuczpnZWhlaW0K
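Note the trailing K in the encoded value: echo appended a newline, which became part of the encoded credentials and which some servers may not accept. Using printf instead encodes exactly username:password:

```shell
# printf avoids the trailing newline that echo adds to the credentials
username=Hans
password=geheim
printf '%s' "$username:$password" | base64
# prints: SGFuczpnZWhlaW0=
```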

Collecting individual values into a structured format

By default, and as long as you are not extracting a non-scalar value, the output will be a single line per value. Otherwise, you will get a list of JSON or YAML values.

sy extract user.name user/password -o json < credentials.yml
[
  "Hans",
  "geheim"
]