Ensure Content Trust on Kubernetes using Notary and Open Policy Agent

A detailed guide to help you ensure that only signed images can be deployed on your cluster

Maximilian Siegert
20 min read · Jun 21, 2020

by Daniel Geiger and Maximilian Siegert

In this blog post we want to show you how to enforce image trust on your Kubernetes cluster by fully relying on two well-known CNCF-hosted open source solutions: Notary and Open Policy Agent (OPA). The main idea of this approach is to use OPA’s flexibility to define your own fine-grained content trust restrictions.

The blog post is structured as follows:

1. Requirements to reconstruct our installation

2. Basic concepts for image trust with Notary

3. Installing Notary on Kubernetes

4. Basic concepts of Admission Controls and Open Policy Agent

5. Installing Open Policy Agent on Kubernetes

6. Define the Validating Admission Controls to enforce content trust

7. Define the Mutating Admission Webhook to automate content trust

8. Summary and Outlook

If you are already familiar with Notary and/or Open Policy Agent you might want to skip the part on “Basic concepts for image trust with Notary” or “Basic concepts of Admission Controls and Open Policy Agent”.

Requirements to reconstruct our installation

If you want to follow along with the installation process in this blog, you will need the following components:

  1. A Kubernetes Cluster or Minikube with enabled admission-plugins:
  • You should have at least the following admission-plugins enabled:
MutatingAdmissionWebhook, ValidatingAdmissionWebhook 
  • If you decide to use Minikube you can start it as follows:
minikube start --extra-config=apiserver.enable-admission-plugins=MutatingAdmissionWebhook,ValidatingAdmissionWebhook

2. A private registry or at least a Docker Hub ID to push your signed images to.

3. The helm charts to install opa, notary and the notary-wrapper from our git repository (cloned in the installation section below).

Basic concepts for image trust with Notary

(Skip the Notary basics and go to the Notary installation)

Signing your code, executables, or scripts and enforcing that only trusted software artifacts are installed is considered best practice to ensure authenticity and integrity. The concept of signing software artifacts is not new: there is a plethora of vendors offering solutions and services on the market, and basically every organization has its own way of handling the signing and trust enforcement of software artifacts. However, if you take a look at content trust in the area of containerized applications, you’ll probably find that the number of available options gets much thinner.

What is Notary?
You might have already heard of Notary, an open source signing solution for software artifacts based on The Update Framework (TUF) [1].

How does Notary work?
Let’s start with the core concept of Notary. Notary uses roles and metadata files to sign the content of trusted collections, each of which is identified by a Globally Unique Name (GUN).

If you take a Docker image as an example, an image reference has the form:

[registry]/[repository name]:[tag]

[registry] is the URL of your image registry and [repository name] is the name of your image; the [tag] labels the image (typically with its version). The GUN itself corresponds to [registry]/[repository name] (e.g. docker.io/library/nginx), while the individual tags are the targets stored within that trusted collection.

Notary performs the signing of an image by using TUF’s roles and key hierarchy. There are five types of keys, each used to sign a corresponding metadata file; the signed metadata files are stored (as .json) in the Notary database [2]. The graphic below illustrates the key hierarchy and the typical location where the keys are stored:

Key hierarchy in Notary
  1. Root Keys: Each GUN has its own root role and key. Root keys are the root of all trust and are used to sign the root metadata file, which lists the IDs of the root, targets, snapshot and timestamp public keys. The root key is typically held by a collection (GUN) owner and kept offline (e.g. in a local directory such as ~/.docker/trust/private or on a Yubikey).
  2. Targets Key: The targets key signs the targets metadata file, which lists all filenames in the collection along with their sizes and respective hashes. This metadata file is used to verify the integrity of all of the actual contents of the repository, which also means that it contains an entry for each image tag. Targets keys can further be used to delegate trust to other collaborators via delegation roles. The targets key is also held by a collection (GUN) owner/administrator and kept locally (e.g. in a local directory such as ~/.docker/trust/private or on a Yubikey).
  3. Delegation Keys: As stated above, targets keys can optionally delegate trust to other delegation roles. These roles have their own keys to sign delegation metadata files, which list filenames in the collection along with their sizes and respective hashes. The delegation metadata files are then used to verify the integrity of some or all of the actual contents of the repository. The keys are held by anyone from the collection owner to the collection collaborators (e.g. in a local directory such as ~/.docker/trust/private or on a Yubikey).
  4. Snapshot Keys: The snapshot key signs the snapshot metadata file, which enumerates the filenames, sizes, and hashes of the root, targets, and delegation metadata files for each collection (GUN). The primary objective of this metadata file is to verify the integrity of the other metadata files. The snapshot key is held either by the collection owner (locally) or by the Notary service if you work with multiple collaborators via delegation roles.
  5. Timestamp Keys: The timestamp key signs the timestamp metadata file, which provides freshness guarantees for the collection. It has the shortest expiry time of any piece of metadata and specifies the filename, size, and hash of the most recent snapshot for the collection. This metadata file is used to verify the integrity of the snapshot file. The timestamp key is held by the Notary service so that it can be automatically re-generated when requested by the server.

The actual Notary service to manage the key architecture consists of:

  1. A Notary Server that stores and updates the signed metadata files for your trusted collections (GUNs) in an associated database
  2. A Notary Signer that stores the private keys used to sign metadata for the Notary Server.

The diagram from the Docker documentation of Notary summarizes the communication between a client and both the Notary Signer and Server quite well. Below is an adapted, simplified version of that flow [2]:

Simplified client server communication for notary
  1. Notary Server optionally supports authentication of clients using JWT tokens. If you don’t have this feature enabled, clients can simply upload new metadata files. When clients upload new metadata files, the Notary Server checks them against any previous versions for conflicts, and verifies the signatures, checksums, and validity of the uploaded metadata.
  2. Once all the uploaded metadata has been validated, Notary Server generates the timestamp (and in case of delegations the snapshot) metadata. It sends this generated metadata to the Notary Signer for signing.
  3. Notary Signer retrieves the necessary encrypted private keys from its database if available, decrypts the keys, and uses them to sign the metadata. If successful, it sends the signatures back to Notary Server.
  4. Notary Server is the source of truth for the state of all trusted collections (GUNs) storing both client-uploaded and server-generated metadata in the TUF database. The generated timestamp and snapshot metadata certify that the metadata files the client uploaded are the most recent for that trusted collection. Notary Server notifies the client that their upload was successful.
  5. The client can now immediately download the latest metadata from the server. Notary Server only needs to obtain the metadata from the database, since none of the metadata has expired.

If the timestamp has expired, Notary Server goes through the entire sequence: it generates a new timestamp, requests a signature from the Notary Signer, and stores the newly signed timestamp in the database. It then sends this new timestamp, along with the rest of the stored metadata, to the requesting client.

Even though signing with Notary may seem complex, the biggest advantage of using Notary to sign your images is probably its existing integration in the Docker client. You can easily enforce image trust on your local devices by setting the environment variables:

  • DOCKER_CONTENT_TRUST=1 to activate content trust on your client and,
  • DOCKER_CONTENT_TRUST_SERVER="<url-to-your-Notary-server>" to provide your own Notary installation as the source of trust.

Once these variables are set, your Docker client will verify the signature before every pull and request your signing credentials to sign each build before every push. Docker Hub even provides its own default DOCKER_CONTENT_TRUST_SERVER="https://notary.docker.io" which, if content trust is activated, is used to sign your pushed images.

If a pulled image is signed, you can simply call docker trust inspect <GUN> and check the output, for example:

docker trust inspect nginx:latest

# output
[
  {
    "Name": "nginx:latest",
    "SignedTags": [
      {
        "SignedTag": "latest",
        "Digest": "b2xxxxxxxxxxxxx4a0395f18b9f7999b768f2",
        "Signers": [
          "Repo Admin"
        ]
      }
    ],
    "Signers": [],
    "AdministrativeKeys": [
      {
        "Name": "Root",
        "Keys": [
          {
            "ID": "d2fxxxxxxx042989d4655a176e8aad40d"
          }
        ]
      },
      ...
    ]
  }
]

Besides using the docker trust command, you can also download the Notary Client and communicate directly with your Notary Server.

Installing Notary on Kubernetes

Now that you have a basic understanding of how Notary works, we can go ahead and install our own private Notary service on Kubernetes. We prepared two shell scripts and several helm charts to ensure a straightforward installation from scratch. If you have not done so yet, please clone our repository:

git clone https://github.com/k8s-gadgets/k8s-content-trust

Installation

Navigate into the notary-k8s subdirectory.
[Optional] Build Notary and push it to your own registry: To build the latest Notary images from scratch, navigate to the build directory. If you want to build and push the Notary images to your own private registry, open the build.sh file in your editor, edit the REGISTRY flag to match your own registry and execute the build.sh script.

bash build.sh

Next, navigate to the helm/notary directory and generate TLS certificates to secure the communication with the Notary service and between its components:

cd helm/notary
bash generateCerts.sh

With the Docker images prepared and fresh TLS certificates loaded into the charts, we can continue with the deployment to Kubernetes using helm. As with every helm installation, you might want to have a look at the values.yaml file to override some parameters, such as the default passwords (passwordalias1Name, passwordalias1Value) or your private registry.

Next we can create the namespace and install the helm charts:

kubectl create namespace notary
# switch to namespace notary
helm install notary notary

Check if your images are up and running:

kubectl get pods -n notary

Congratulations! If all pods are running, you have just installed Notary on your cluster. However, before we try out our Notary Server, we should apply the last generated template for the Notary Wrapper as well.

The Notary Wrapper is an extension that we developed so that OPA can later interact with the Notary service. It is a slim REST interface with minimal capabilities to retrieve signed image digests and check for trust data on the server - nothing more.

Copy the certificates from the notary-k8s/helm/notary/certs/ directory to a helm/notary-wrapper/certs/ directory:

  • notary-wrapper.crt
  • notary-wrapper.key
  • root-ca.crt

Go to the notary-wrapper subdirectory in our repository. Create the opa namespace (as we will use it later only in combination with OPA) and perform the helm installation:

kubectl create namespace opa
# switch to namespace opa
helm install notary-wrapper notary-wrapper

Done!

Testing Notary
As soon as all of our components are installed, we can start filling Notary with our trust data. The illustration below shows how the procedure works:

Simplified interaction to sign images using your private Notary

Initially, we need some local images to sign, so we will pull a few examples from Docker Hub:

Note: If you have already enabled DOCKER_CONTENT_TRUST and either did not specify a DOCKER_CONTENT_TRUST_SERVER or already pointed it at your new server, the pull might not work.

docker pull nginx:latest
docker pull busybox:latest

Next we have to establish the connection between our Notary Client and the running Notary Service:

  1. Add Notary Server to your /etc/hosts file:
127.0.0.1 notary-server-svc

2. Open a second tab in your terminal and create a port forwarding for the Notary Server pod to access it locally (the server is in the namespace notary):

# switch to namespace notary
kubectl port-forward notary-server-<...> 4443:4443

3. The first time you want to sign, you’ll need to copy your root-ca.crt from the installation to your .docker/tls directory:

mkdir -p $HOME/.docker/tls/notary-server-svc:4443
cp <...>/helm/notary/certs/root-ca.crt $HOME/.docker/tls/notary-server-svc:4443/

4. Go back to your first tab and enforce content trust:

export DOCKER_CONTENT_TRUST_SERVER=https://notary-server-svc:4443
export DOCKER_CONTENT_TRUST=1

Notary is now activated and you should no longer be able to pull any images that have not been signed by your Notary service. This means that we can proceed to tag, sign and push our images (in our example we will simply push to our own Docker Hub space using our own image signatures):

docker tag nginx:latest docker.io/<hub-id>/nginx:1
docker push docker.io/<hub-id>/nginx:1
docker tag busybox:latest docker.io/<hub-id>/busybox:1
docker push docker.io/<hub-id>/busybox:1

Each push command should prompt you to set passphrases for the signing keys. Once those are set, the images are pushed to Docker Hub and the trust data to our Notary Server. To verify this, you can check your images with the already mentioned docker trust inspect command, or use notary list if you have installed the Notary Client. The exact command should look something like this:

notary -s https://notary-server-svc:4443 --tlscacert $HOME/.docker/tls/notary-server-svc:4443/root-ca.crt list docker.io/<hub-id>/nginx

# output
NAME    DIGEST                              SIZE (BYTES)    ROLE
----    ------                              ------------    ----
1       cccef6d6bdea671c394954b0dxxxxxxxx   948             targets

Tip: If you have to re-install Notary for some reason and want to sign images with new keys, you’ll have to go to your .docker/tls directory and delete the previously stored certificate. You should also go to .docker/trust/tuf and delete the existing trust data for the images you want to sign anew.

Now, you may also want to test the Notary Wrapper. Analogous to the Notary Server, open a new tab in your console and add the Notary Wrapper to your /etc/hosts file:

127.0.0.1 notary-wrapper-svc

Save, close and create a port-forwarding for port 4445 to access it (the wrapper is in the namespace opa):

# switch to namespace opa
kubectl port-forward notary-wrapper-<...> 4445:4445

Now you can perform two operations to check for trust data using your GUN, tag or digest. As we use a TLS connection, you will also need to pass the root-ca.crt we generated during the installation:

  • Use the URL https://notary-wrapper-svc:4445/list together with your GUN and tag to list the trust data (digest, size, role) stored for that tag.
    Curl Example:

curl -X POST https://notary-wrapper-svc:4445/list -H "Content-Type: application/json" -d '{"GUN":"docker.io/<hub-id>/nginx", "Tag":"1", "notaryServer":"notary-server-svc.notary.svc:4443"}' --cacert PATH/TO/YOUR/NOTARY/certs/root-ca.crt

# output - one item
{
  "Name": "1",
  "Digest": "cccef6d6bdexxxxxx422",
  "Size": "948",
  "Role": "targets"
}
  • Use the URL https://notary-wrapper-svc:4445/verify together with your GUN and the RepoDigest to validate whether trust data for the digest exists; it returns a 200 or 404 status code. If you don’t know your RepoDigest, perform docker inspect GUN:Tag and check the "RepoDigests" field.
    Curl Example:
curl -X POST https://notary-wrapper-svc:4445/verify -H "Content-Type: application/json" -d '{"GUN":"docker.io/<hub-id>/nginx", "SHA":"<your-RepoDigest>", "notaryServer":"notary-server-svc.notary.svc:4443"}' --cacert PATH/TO/YOUR/NOTARY/certs/root-ca.crt

The Notary Wrapper will be mandatory for enforcing content trust later. Once you have finished testing it, you can close your port-forwarding and continue.

Enforcing content trust on Kubernetes
Even though we can now sign our images and generate trust data, we are still missing one piece of the puzzle: the enforcement of content trust on the Kubernetes cluster. On this last mile you are confronted with the first limitation, as plain Kubernetes does not provide any flag to activate content trust.
The only possible way of doing so would be to rely on the underlying Docker engine and fork the Image Authorization plugin to enforce DOCKER_CONTENT_TRUST (see also this discussion). This, however, comes with some drawbacks:

  1. You have to rely on the Docker engine on your worker nodes for enforcement.
  2. DOCKER_CONTENT_TRUST is a binary flag that is either on or off. You will not be able to deploy any images that are not signed by your Notary Server on these nodes.
  3. DOCKER_CONTENT_TRUST is only able to check whether signature metadata for an image (tag) exists, but not whether the actual signature belongs to the desired image (tag).

To overcome these drawbacks we have to look at Kubernetes Admission controls.

Basic concepts of Admission Controls and Open Policy Agent

(Skip the OPA basics and go to the OPA installation)

In short, Kubernetes Admission Controllers are plugins that, when enabled, manage and enforce how resources must be configured on a cluster. They are part of the Kubernetes API request lifecycle and act as gatekeepers for request objects. Besides the roughly 30 built-in admission controllers Kubernetes ships with (e.g. PodSecurityPolicy), you may also find the need to create your own specific control rules. In this case the Admission Controller can use Mutating and Validating Webhooks:

  • Mutating Admission Controller: Mutation webhooks alter a request object to fulfill a desired configuration.
  • Validating Admission Controller: Validation webhooks decide whether the request object fulfills all desired configurations and deny the deployment otherwise.

It is very important to know the order in which the admission controls are triggered:

Kube API Lifecycle

Kubernetes always performs mutating admission controls before any validating admission controls. This ensures that any mutated request object fulfills your validation policies [3]. One of the best ways to implement your Mutating and Validating Webhooks is by using Open Policy Agent (OPA).

What is OPA?

OPA is a general-purpose policy engine that unifies policy enforcement across the stack. It uses a high-level declarative language (Rego) to specify your policies. The illustration below shows you how OPA integrates with the Kubernetes API-Lifecycle:

Kube API Lifecycle with OPA
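To get a first feeling for Rego before we write our own rules, here is a minimal, purely illustrative admission policy. It is not part of our helm charts, and the registry prefix is just a placeholder; it simply shows the declarative style of a deny rule evaluated against the AdmissionReview input:

package kubernetes.admission

# Illustrative only: deny Pods whose images do not come from docker.io.
deny[msg] {
    input.request.kind.kind == "Pod"
    image := input.request.object.spec.containers[_].image
    not startswith(image, "docker.io/")
    msg := sprintf("image '%v' does not come from our trusted registry", [image])
}

Every deny rule that evaluates to true contributes a message, and the webhook configuration decides whether the request is rejected.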

Installing Open Policy Agent on Kubernetes

We will take advantage of OPA’s flexibility to enforce content trust in Kubernetes with Rego policies. However, before we can start with that, we need to install OPA into our cluster.

Assuming you have the admission plugins from the requirements chapter enabled and the opa namespace created during the notary-wrapper installation, you can start directly with the open-policy-agent installation by navigating to its directory in our repository.

As the communication between Kubernetes and OPA must be secured using TLS, you need to create an additional certificate/key pair for OPA.

# copy the root-ca
cp ~/PATH/TO/k8s-content-trust/notary-k8s/helm/notary/certs/root-ca.crt ~/PATH/TO/k8s-content-trust/open-policy-agent/helm/opa/certs
# generate the additional OPA certs
cd helm/opa
bash generateCerts.sh

As OPA becomes effective immediately after installation, you should exclude the namespaces that don’t need to be checked against our admission controls:

kubectl label ns kube-system openpolicyagent.org/webhook=ignore
kubectl label ns opa openpolicyagent.org/webhook=ignore
kubectl label ns notary openpolicyagent.org/webhook=ignore

Next we ensure that the parameters validating and mutating in the values.yaml are configured as below (we will get to mutating: true a bit later) and run helm to install OPA:

values.yml
# switch to namespace opa
helm upgrade --install opa opa

The helm chart covers the installation guide found on the OPA site but also contains the needed rule sets for validation and mutation (found in open-policy-agent/rego).

After the installation is finished you can open a new tab in your terminal and follow the OPA logs to see incoming webhook requests being issued by the Kubernetes API server:

# ctrl-c to exit
kubectl logs -n opa -f opa-deploy-<...> opa

Validating Admission Controls to enforce content trust

Now we get to the fun part: enforcing content trust. With our Notary service and OPA set to go, we will first start denying any image that we don’t trust. Before that, you should know one thing about the relationship between Docker tags and digests.

Typically, we are all used to deploying images using their GUN and tag. However, most people tend to ignore the fact that an image tag is not immutable: it can be overwritten (if not prohibited by the registry). A collection owner would therefore be able to push a signed image with altered content under the same tag twice to your registry. In order to avoid this behaviour, you should pull your images using their unique digests.

To reflect this in a Validating Webhook, we will define two Rego rules:

  1. Deny any deployment that references an image via a tag (or the implicit latest)
  2. Deny any image deployment that uses a digest which is not signed by our Notary Server

Note: All controls were already installed using helm!

Let’s start with rule number one (see helm/opa/policy/validating/rules.rego):

rules.rego

The rule above checks whether an API request attempts to CREATE or UPDATE a resource of kind “Pod” or “Deployment”.

Depending on the resource type, the helper rule get_images[x] ensures that we iterate over all images within the request (e.g. when using a pod with multiple sidecars), verifying that each image is pulled directly via its digest:

[GUN]@sha256:[digest hash]

Therefore, we simply need to check whether the image pull specification makes use of the “@sha256:” marker for each image. Otherwise, we can assume that the request is attempting to deploy an image using a tag or the plain GUN. If the rule fires, the sender of the deployment request receives an error message.
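Since the embedded rules.rego gist may not render here, the following is a minimal sketch of how such a rule can be written. The rule and helper names (deny, get_images) follow the description above, but the exact code shipped in helm/opa/policy/validating/rules.rego may differ:

package kubernetes.admission

operations := {"CREATE", "UPDATE"}

# Rule 1: deny any image that is not pinned by its digest.
deny[msg] {
    operations[input.request.operation]
    image := get_images[_]
    not contains(image, "@sha256:")
    msg := sprintf("image '%v' must be deployed via its digest ([GUN]@sha256:[digest hash])", [image])
}

# Collect all container images from a Pod request ...
get_images[image] {
    input.request.kind.kind == "Pod"
    image := input.request.object.spec.containers[_].image
}

# ... and from a Deployment request (init containers omitted for brevity).
get_images[image] {
    input.request.kind.kind == "Deployment"
    image := input.request.object.spec.template.spec.containers[_].image
}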

Next, we define the second rule, which denies digests that are not signed by our trusted entity (aka our Notary).

In this Rego rule we make use of our Notary Wrapper by calling OPA’s built-in http.send function in the get_checksum_status(image) helper rule. The first part retrieves the digest from each image specified within the deployment request. Then get_checksum_status(image) sends the image GUN and digest as payload via http.send to the Notary Wrapper, which validates whether each image has been signed. If the request does not return status_code 200 for all digests, the rule fires and denies the deployment.
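Again, since the gist may not render, here is a hedged sketch of how this second rule and its helper could look, building on the operations and get_images helpers from the previous sketch. The payload fields (GUN, SHA, notaryServer) mirror the /verify curl example above; the in-cluster service URL, the SHA format and the TLS handling are assumptions and may differ from the actual rules in the repository:

# Rule 2: deny any digest that is not signed by our Notary.
deny[msg] {
    operations[input.request.operation]
    image := get_images[_]
    contains(image, "@sha256:")
    get_checksum_status(image) != 200
    msg := sprintf("image '%v' is not signed by our Notary", [image])
}

# Ask the Notary Wrapper (/verify) whether trust data exists for the given digest.
get_checksum_status(image) = status {
    parts := split(image, "@")   # parts[0] = GUN, parts[1] = "sha256:<hash>"
    response := http.send({
        "method": "post",
        "url": "https://notary-wrapper-svc.opa.svc:4445/verify",   # assumed in-cluster address
        "headers": {"Content-Type": "application/json"},
        "body": {
            "GUN": parts[0],
            "SHA": parts[1],
            "notaryServer": "notary-server-svc.notary.svc:4443"
        }
        # TLS options (e.g. the root CA) omitted for brevity
    })
    status := response.status_code
}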

A short comment on the http.send function: as of now, http.send is not designed to return a response when the target is not available (for further information follow this feature request on the OPA git repo). As long as the Notary Wrapper can be reached, you will not face any issues with this. However, if the Notary Wrapper is unavailable for any reason, OPA will fail, which is then caught by the failurePolicy: Fail defined in the ValidatingWebhookConfiguration.

The two validation-rules described above are all you need to enforce content trust on your Kubernetes cluster!

To test them, you can simply deploy a new pod, pulling your image either via a tag, a bad digest or a good digest:

trust-pinning-test.yml

Note: There is also a directory with various test scenarios you can adjust to your needs in our repository (open-policy-agent/tests).

The following illustration summarizes our efforts so far:

OPA image verification process

Each pod deployment is passed as an API request to the Kubernetes API server and starts the validation process:

  1. The request triggers the validating webhook, which calls OPA
  2. OPA checks the image pull specification and, in case of a pull via digest, asks the Notary Wrapper for trust data. The Notary Wrapper looks into the Notary Server and returns the response to OPA for the deployment decision. If no Rego rule is triggered, Kubernetes continues its regular deployment.
  3. Pull the image via its valid digest from your registry (in our example Docker Hub)
  4. Deploy the pod.

Basically, we could stop here, as content trust is now enforced. However, always looking up the valid RepoDigest for the image version you want to deploy feels rather inconvenient. Wouldn’t it be better to keep using your image tags the way you probably always have and still ensure that content trust is enforced?

Define the Mutating Admission Webhook to automate content trust

Well, you can, and this is where mutating admission webhooks come in handy. As previously mentioned, Kubernetes offers the capability to use mutating admission controls that can patch API requests before any other validation is processed.

We will use this to our advantage by writing a mutation webhook that is triggered every time a user attempts to deploy an image with a tag. In this case, the mutating webhook kicks in and automatically alters the pull specification to use the valid digest belonging to the tag. This leaves us with the following mutation flow:

OPA trusted image alteration on pod deployments

Any request to the Kube API server flows through the mutation webhook:

  1. If the request has kind “Pod” and the operation is a create or update request containing an image with a tag, OPA will be called by the mutating webhook (before any validation control!).
  2. OPA will check the image pull specification for the used tags. Then, OPA will send a new http.send request for each used tag in the deployment scheme to the Notary Wrapper performing a look up on the Notary Server.
  3. If the Notary Wrapper finds an entry for the image tag on the Notary Server, it returns the latest corresponding RepoDigest back to OPA, otherwise an error message.
  4. OPA will then perform a patch on the deployment scheme replacing the image tag with the corresponding digest and return the altered request back to the Kube API server.
  5. The Kube API server continues with its regular lifecycle performing the object schema validation and its validating admission controls. If the request is valid, it starts pulling the image via the RepoDigest from the trusted registry and deploying the pod.

As we already registered OPA as a mutating webhook during our installation, we only have to add a new Rego rule for the mutating admission webhook. The easiest way of doing so is to go back to your local open-policy-agent helm directory, enable the parameter mutating and perform a helm upgrade:

values.yml
# switch to namespace opa
helm upgrade --install opa opa

A mutating admission webhook in OPA is always part of a Rego main method that performs the alteration of the API request. The helm upgrade will add the new ruleset to your installation:

main.rego

Let’s briefly break down what the new main method does (a simplified sketch follows after the list):

  1. OPA performs our admission reviews, with the expected response behaviour reflected in the response rules.
  2. The first response rule allows any API request that does not require a mutation.
  3. The second response rule calls the patch rule.
  4. The patch rule attempts to patch any API request that contains a CREATE or UPDATE operation on an object of kind “Pod” or “Deployment”. The result parameter first retrieves the images of the API request and verifies whether each image is pulled via its digest (using “@sha256:” in the pull string).
  5. If this condition is not true, the rule continues to extract the image name and tag using the helper rule split_image.
  6. The array returned by split_image is used to look up the digest in Notary via the get_digest rule, which sends the request to the notary-wrapper using the http.send function. If Notary has a digest for the image and tag, it is returned by the get_digest rule and assigned to the patchedImage parameter. If there is no corresponding digest in Notary, the 404 message from the notary-wrapper is assigned instead.
  7. Kubernetes requires each patch to be in JSON format. Hence, the JSON patch (assigned to p) performs a “replace” operation on the path parameter, replacing its value with the new pull string. Because the path parameter for images is nested differently in a pod (“spec/containers/<id>/[pull-string]”) and in a deployment request (“spec/template/spec/containers/<id>/[pull-string]”), we needed to create two get_digest and get_path helper rules covering both types of requests.
  8. OPA encodes all patches and returns the response as a mutated API request to the Kube API server, where it continues in the API request lifecycle.
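Because the embedded main.rego is not shown here, the sketch below illustrates the overall shape described in the steps above: an AdmissionReview wrapper, two response rules and a patch rule that swaps a tag for the digest returned by the Notary Wrapper. Names, the wrapper URL and payload, and the digest format are assumptions, and only the Pod case is shown for brevity (the Deployment case only differs in the patch path, spec/template/spec/containers/...):

package system

# Wrap the (possibly patched) decision into the AdmissionReview object Kubernetes expects.
main = {
    "apiVersion": "admission.k8s.io/v1beta1",
    "kind": "AdmissionReview",
    "response": response
}

# Response rule 1: allow any request that does not require a mutation.
response = {"allowed": true, "uid": input.request.uid} {
    count(patch) == 0
}

# Response rule 2: return the base64-encoded JSON patch.
response = {
    "allowed": true,
    "uid": input.request.uid,
    "patchType": "JSONPatch",
    "patch": base64.encode(json.marshal(patch))
} {
    count(patch) > 0
}

# Build one "replace" operation per container image that is still pulled via a tag.
patch[p] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[i]
    not contains(container.image, "@sha256:")
    pair := split_image(container.image)
    digest := get_digest(pair[0], pair[1])
    p := {
        "op": "replace",
        "path": sprintf("/spec/containers/%v/image", [i]),
        "value": sprintf("%v@%v", [pair[0], digest])
    }
}

# Split "gun:tag" into [name, tag]; images without a tag default to "latest" (registry ports not handled).
split_image(image) = [name, tag] {
    parts := split(image, ":")
    count(parts) == 2
    name := parts[0]
    tag := parts[1]
}

split_image(image) = [image, "latest"] {
    not contains(image, ":")
}

# Look up the digest for a tag via the Notary Wrapper's /list endpoint (payload as in the curl example).
get_digest(gun, tag) = digest {
    response := http.send({
        "method": "post",
        "url": "https://notary-wrapper-svc.opa.svc:4445/list",   # assumed in-cluster address, TLS omitted
        "headers": {"Content-Type": "application/json"},
        "body": {"GUN": gun, "Tag": tag, "notaryServer": "notary-server-svc.notary.svc:4443"}
    })
    response.status_code == 200
    digest := sprintf("sha256:%v", [response.body.Digest])   # assumes the wrapper returns the bare hex digest
}

If the wrapper returns no digest, get_digest is undefined, so no patch is produced and the unmodified request falls through to the validating rules from the previous chapter, which then reject it.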

If you want to test the mutating webhook described above, you can simply look at the open-policy-agent/tests folder and adapt one of the files in the pod or deployment folders to your repository. If you kept your validating webhooks from the previous chapter, you can even play around a little and check what happens if you combine valid and invalid tags or digests. The matrix below summarizes how the webhooks behave:

Summary and Outlook

Finally, we have a Kubernetes cluster that ensures content trust, without even interfering with your regular deployment habits. Since we fully relied on OPA for trust pinning, you can now extend the rules further to your needs or combine them with your other validation rules.

We know that this has been a very long post, but we wanted to give you as many details as possible to get yourself going. In our opinion, image signing and trust pinning is one of the largest blind spots in the current discussions around container security. Even though we have many tools available to scan and harden our container images, we tend to neglect verifying their integrity.

Where do we go from here? We also want to encourage the community to contribute, as there are still a lot of details we did not cover so far, like:

  • Performance: Performance testing and increasing the speed of the introduced admission controls for both validations and mutations
  • Production-Readiness: Making the Notary setup highly available and attaching the Notary Client (incl. the Docker CLI) to an HSM
  • CI/CD-Integration: Automating Notary signing in a CI/CD-pipeline (e.g. to Jenkins, Azure DevOps, Kaniko etc.).

Thanks for staying to the end of this post. We hope you enjoyed it. We want to especially thank Asad, Torin and Jeff from OPA/Styra who supported us all the way specifically by defining the rules we used.
