
Validating Configuration and Enforcing Policies

ConfigHub Triggers enable you to automatically validate configuration data and enforce policies across your Spaces. This guide covers the practicalities of setting up, managing, and troubleshooting Triggers, Apply Gates, and related workflows.

For background on the concepts, see:

  • Triggers — what Triggers are and how they work
  • Functions — the code that Triggers execute
  • Gates — how Apply Gates, Apply Warnings, and other gates protect operations
  • Working with Functions — how to discover, invoke, and create Triggers

Creating Triggers

A Trigger associates a Function with an Event type (e.g., Mutation, PostClone) and a configuration format (ToolchainType). When the specified event occurs on a Unit, the Function is automatically invoked on that Unit.

cub trigger create --space my-space check-replicas Mutation Kubernetes/YAML vet-celexpr 'r.kind != "Deployment" || r.spec.replicas > 1'

Description

Use --description to explain what the Trigger checks and how to fix failures. The description is shown in the UI when hovering over an Apply Gate.

cub trigger create --space my-space check-replicas Mutation Kubernetes/YAML \
  --description "Ensures Deployments have more than one replica for high availability" \
  vet-celexpr 'r.kind != "Deployment" || r.spec.replicas > 1'

When no description is provided, the UI shows the function name and arguments as a fallback.

Warn mode

Use --warn to create a non-blocking Trigger that produces Apply Warnings instead of Apply Gates. This is useful for advisory policies or gradual rollouts:

cub trigger create --space my-space soft-check Mutation Kubernetes/YAML --warn vet-celexpr '...'

Toggle warn mode on an existing Trigger with --warn or --unwarn on update.

Filtering which Units a Trigger applies to

By default, a Trigger applies to all Units in its Space (and in any other Spaces or Targets that select the Trigger). You can restrict which Units a Trigger applies to using:

  • --where-unit — a filter expression evaluated against Unit metadata:

    cub trigger create --space my-space check-prod Mutation Kubernetes/YAML \
      --where-unit "Labels.Environment = 'production'" \
      vet-celexpr '...'
    
  • --unit-filter — a reference to a saved Filter entity with From=Unit:

    cub filter create --space my-space prod-units Unit --where-field "Labels.Environment = 'production'"
    cub trigger create --space my-space check-prod Mutation Kubernetes/YAML \
      --unit-filter my-space/prod-units \
      vet-celexpr '...'
    
  • --where-resource — restricts which resources within a Unit's configuration data the function operates on, using metadata path expressions.

Worker-executed Triggers

Triggers can execute on a Bridge Worker instead of the built-in function executor:

cub trigger create --space my-space worker-check Mutation Kubernetes/YAML \
  --worker my-space/my-worker \
  vet-celexpr '...'

See Worker disconnection behavior for how ConfigHub handles unavailable workers.

Preventing mistakes with built-in validation

These built-in functions catch common errors and are recommended as Triggers for all Spaces:

vet-schemas — runs kubeconform to validate Kubernetes resource schemas. This catches typos in field names, incorrect types, missing required fields, and invalid API versions before configuration is applied:

cub trigger create --space my-space valid-k8s Mutation Kubernetes/YAML vet-schemas

vet-placeholders — detects placeholder values that have not yet been replaced by the user or resolved through a linked configuration. Placeholder values (such as confighubplaceholder or 999999999) indicate that a value still needs to be provided. This prevents deploying incomplete configuration:

cub trigger create --space my-space complete-k8s Mutation Kubernetes/YAML vet-placeholders
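The kind of scan vet-placeholders performs can be sketched in Python. This is an illustrative sketch, not ConfigHub's implementation: it walks a parsed resource and reports the paths of any values still set to a known sentinel (the document names confighubplaceholder and 999999999 as examples):

```python
# Illustrative sketch of a placeholder scan (not ConfigHub's implementation):
# walk a parsed resource and collect paths whose value is a known sentinel.
PLACEHOLDERS = {"confighubplaceholder", 999999999}

def find_placeholders(node, path=""):
    """Return the paths of any placeholder values left in the resource."""
    if isinstance(node, dict):
        return [p for k, v in node.items()
                for p in find_placeholders(v, f"{path}.{k}")]
    if isinstance(node, list):
        return [p for i, v in enumerate(node)
                for p in find_placeholders(v, f"{path}[{i}]")]
    return [path] if node in PLACEHOLDERS else []

resource = {
    "kind": "Deployment",
    "spec": {"replicas": 999999999,
             "template": {"spec": {"serviceAccountName": "confighubplaceholder"}}},
}
assert find_placeholders(resource) == [
    ".spec.replicas", ".spec.template.spec.serviceAccountName"]
```

A validating function of this shape fails (and gates the apply) whenever the returned list is non-empty.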

Custom policy expressions with CEL

vet-celexpr — evaluates a simple boolean CEL expression against each resource. The resource is available as r. Returns true (pass) or false (fail). Use this for straightforward constraints:

cub trigger create --space my-space check-replicas Mutation Kubernetes/YAML \
  vet-celexpr 'r.kind != "Deployment" || r.spec.replicas > 1'

A more realistic example enforcing that Deployments run as non-root:

cub trigger create --space my-space ensure-nonroot Mutation Kubernetes/YAML \
  --description "Ensures Deployment containers run as non-root" \
  vet-celexpr 'r.kind != "Deployment" || (r.spec.template.spec.securityContext.runAsNonRoot == true && r.spec.template.spec.containers.all(container, !has(container.securityContext.runAsNonRoot) || container.securityContext.runAsNonRoot == true)) || r.spec.template.spec.containers.all(container, has(container.securityContext.runAsNonRoot) && container.securityContext.runAsNonRoot == true)'
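The CEL expression above packs two alternative pass conditions into one line. Re-expressed in Python for readability (ConfigHub evaluates the CEL form; this sketch only restates the logic), a Deployment passes if the pod-level securityContext opts in and no container overrides it to false, or if every container opts in explicitly:

```python
# Python re-expression of the vet-celexpr non-root policy above,
# shown only to make the two pass conditions readable.
def runs_as_nonroot(r: dict) -> bool:
    if r.get("kind") != "Deployment":
        return True
    pod = r["spec"]["template"]["spec"]
    containers = pod.get("containers", [])
    pod_level = pod.get("securityContext", {}).get("runAsNonRoot") is True
    # Condition 1: pod opts in and no container sets runAsNonRoot to false.
    no_override = all(c.get("securityContext", {}).get("runAsNonRoot", True) is True
                      for c in containers)
    # Condition 2: every container sets runAsNonRoot to true explicitly.
    all_explicit = all(c.get("securityContext", {}).get("runAsNonRoot") is True
                       for c in containers)
    return (pod_level and no_override) or all_explicit

pod_ok = {"kind": "Deployment", "spec": {"template": {"spec": {
    "securityContext": {"runAsNonRoot": True},
    "containers": [{"name": "app"}]}}}}
root_allowed = {"kind": "Deployment", "spec": {"template": {"spec": {
    "containers": [{"name": "app"}]}}}}
assert runs_as_nonroot(pod_ok)
assert not runs_as_nonroot(root_allowed)
```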

vet-cel — a more capable CEL function that can return a ValidationResult with detailed failure information. The expression can return a simple bool, or a map with passed (bool) and details (list of strings). Parameters can be passed via --param key=value and accessed in the expression as params.key:

cub trigger create --space my-space check-labels Mutation Kubernetes/YAML \
  vet-cel 'has(r.metadata.labels) && has(r.metadata.labels.team) ? {"passed": true} : {"passed": false, "details": ["Missing required label: team"]}'
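The map returned by that expression mirrors the ValidationResult shape: passed plus an optional details list. The same check re-expressed in Python (a sketch of the logic, not how ConfigHub runs it):

```python
# Python re-expression of the vet-cel label check above; the returned dict
# mirrors the ValidationResult shape (passed, optional details).
def check_team_label(r: dict) -> dict:
    labels = r.get("metadata", {}).get("labels", {})
    if "team" in labels:
        return {"passed": True}
    return {"passed": False, "details": ["Missing required label: team"]}

assert check_team_label({"metadata": {"labels": {"team": "payments"}}}) == {"passed": True}
assert check_team_label({"metadata": {}})["details"] == ["Missing required label: team"]
```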

Custom policy logic with Starlark

vet-starlark — validates resources using a Starlark program (a Python-like language created by Google). The program must define a validate(r) function that returns a dict with passed (bool) and optionally details (list of strings). Starlark is a good choice for policies that involve more complex logic than a single CEL expression can express:

cub trigger create --space my-space check-resources Mutation Kubernetes/YAML \
  vet-starlark 'def validate(r):
    if r["kind"] != "Deployment":
        return {"passed": True}
    containers = r["spec"]["template"]["spec"]["containers"]
    missing = [c["name"] for c in containers if "resources" not in c or "requests" not in c.get("resources", {})]
    if missing:
        return {"passed": False, "details": ["Containers missing resource requests: " + ", ".join(missing)]}
    return {"passed": True}'
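Because Starlark's core syntax overlaps with Python, a validate program like the one above can be smoke-tested locally as plain Python before being attached to a Trigger (sandboxing and built-in differences aside):

```python
# The validate() program above happens to also be valid Python, so it can be
# exercised locally against sample resources before wiring it into a Trigger.
def validate(r):
    if r["kind"] != "Deployment":
        return {"passed": True}
    containers = r["spec"]["template"]["spec"]["containers"]
    missing = [c["name"] for c in containers
               if "resources" not in c or "requests" not in c.get("resources", {})]
    if missing:
        return {"passed": False,
                "details": ["Containers missing resource requests: " + ", ".join(missing)]}
    return {"passed": True}

good = {"kind": "Deployment", "spec": {"template": {"spec": {"containers": [
    {"name": "app", "resources": {"requests": {"cpu": "100m"}}}]}}}}
bad = {"kind": "Deployment", "spec": {"template": {"spec": {"containers": [
    {"name": "app"}]}}}}
assert validate(good) == {"passed": True}
assert validate(bad)["details"] == ["Containers missing resource requests: app"]
```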

Policy engines: Kyverno and OPA Gatekeeper

For organizations that already use Kyverno or OPA Gatekeeper to enforce policies at Kubernetes admission time, ConfigHub can validate configuration against the same policies before deployment. This gives you a single set of policies for both admission control and pre-deployment validation.

Two approaches are available:

Standard worker with --worker-functions — install a standard ConfigHub worker with the built-in vet-kyverno-server or vet-opa-gatekeeper functions. These functions call the policy engine's admission webhook endpoint to validate resources:

cub worker install --space my-space \
  --unit kyverno-worker-unit \
  --target my-target \
  -n kyverno \
  --worker-functions vet-kyverno-server \
  -e "KYVERNO_URL=https://kyverno-svc.kyverno.svc:443" \
  -e "KYVERNO_SKIP_TLS_VERIFY=true" \
  my-kyverno-worker
cub worker install --space my-space \
  --unit gatekeeper-worker-unit \
  --target my-target \
  -n gatekeeper-system \
  --worker-functions vet-opa-gatekeeper \
  -e "GATEKEEPER_URL=https://gatekeeper-webhook-service.gatekeeper-system.svc:443" \
  -e "GATEKEEPER_SKIP_TLS_VERIFY=true" \
  my-gatekeeper-worker

Custom workers — build and deploy custom worker images using the SDK examples. The examples repo includes ready-to-use custom workers for:

  • kyverno-server — validates against Kyverno policies via its webhook server
  • opa-gatekeeper — validates against OPA Gatekeeper constraints
  • kyverno — validates using Kyverno as a library (no running server needed)
  • kube-score — scores Kubernetes resources against best-practice checks

Once a worker is running, create a Trigger to validate configuration automatically:

cub trigger create --space my-space validate-kyverno Mutation Kubernetes/YAML \
  --worker my-space/my-kyverno-worker \
  vet-kyverno-server

For details on deploying webhook validation workers, see the admission webhook functions guide. For background on Kubernetes policy enforcement, see What are state-based policy constraints good for? and Vetting Kubernetes configuration with Kyverno prior to deployment.

Organizing Triggers across Spaces and Targets

Dedicated Trigger Space

Rather than scattering Triggers across application Spaces, create a dedicated Space to hold Triggers, along with related entities such as Filters, Invocations, Views, Attributes, Tags, and ChangeSets. This Space should not contain Units, BridgeWorkers, or Targets — it serves purely as an organizational home for operational tooling:

cub space create app-common --description "Shared Triggers, Filters, and operational tooling"

Use Labels on Triggers to categorize them by purpose, making it easier to select subsets with Filters:

cub trigger create --space app-common valid-k8s Mutation Kubernetes/YAML \
  --label Purpose=validation \
  vet-schemas

cub trigger create --space app-common complete-k8s Mutation Kubernetes/YAML \
  --label Purpose=validation \
  vet-placeholders

cub trigger create --space app-common check-replicas Mutation Kubernetes/YAML \
  --label Purpose=policy --label Scope=production \
  vet-celexpr 'r.kind != "Deployment" || r.spec.replicas > 1'

cub trigger create --space app-common validate-kyverno Mutation Kubernetes/YAML \
  --worker my-space/my-kyverno-worker \
  --label Purpose=policy --label Engine=kyverno \
  vet-kyverno-server

Validation on Spaces, policies on Targets

In scenarios where application teams own Units in Spaces and a platform team owns the Kubernetes clusters (along with their Workers and Targets), a useful pattern is:

  • Attach validation Triggers to Spaces — functions like vet-schemas and vet-placeholders catch mistakes in configuration authoring. These apply regardless of where the configuration is deployed.
  • Attach policy Triggers to Targets — functions like vet-kyverno-server, vet-opa-gatekeeper, and environment-specific vet-celexpr constraints enforce what is allowed on a particular cluster. The platform team controls these independently of the application teams.

This separation lets application teams manage their own validation while the platform team enforces deployment policies centrally.

Using Filters to select Triggers

Since Triggers are ToolchainType-specific, and you may want different subsets of Triggers for different Spaces or Targets, use Filters with From=Trigger to select the right Triggers. Then attach the Filter to Spaces and Targets using --trigger-filter:

# Create Filters for different purposes
cub filter create --space app-common validation-triggers Trigger \
  --where-field "Labels.Purpose = 'validation'"

cub filter create --space app-common policy-triggers Trigger \
  --where-field "Labels.Purpose = 'policy'"

cub filter create --space app-common prod-policy-triggers Trigger \
  --where-field "Labels.Purpose = 'policy' AND Labels.Scope = 'production'"

Attach Filters to Spaces and Targets:

# Application Spaces get validation Triggers
cub space create my-app --trigger-filter app-common/validation-triggers

# Production Targets get policy Triggers
cub target create --space my-app prod-cluster \
  --trigger-filter app-common/prod-policy-triggers

To update an existing Space or Target:

cub space update --patch --trigger-filter app-common/validation-triggers my-app
cub target update --patch --space my-app --trigger-filter app-common/policy-triggers prod-cluster

When Triggers are added or modified in the Trigger Space, refresh the consuming Spaces and Targets to pick up the changes:

cub space update --patch --refresh-triggers my-app
cub target update --patch --space my-app --refresh-triggers prod-cluster

How Triggers are applied to Spaces and Targets

Triggers are associated with Spaces and Targets through WhereTrigger and TriggerFilterID fields. These fields control which Triggers from across the organization are selected for a given Space or Target.

Default behavior

When a Space has no explicit WhereTrigger or TriggerFilterID, ConfigHub automatically sets a default WhereTrigger that selects all Triggers within the Space. This default is set when the first Trigger is created in the Space.

Cross-Space Trigger selection

Triggers can be selected by Spaces and Targets other than the Space they were created in:

cub space update --patch --where-trigger "Space.Slug = 'shared-policies' AND Event = 'Mutation'" my-app-space

Refreshing Trigger lists

When a Trigger is created, updated, or deleted within its own Space, the Space's Trigger list (TriggerIDs) and hash (TriggerHash) are automatically refreshed, and affected Units are enqueued for re-evaluation.

However, explicit refresh is needed in two main scenarios:

  1. Cross-Space Trigger changes: When Triggers are modified in one Space but consumed by another Space or Target through WhereTrigger or TriggerFilterID, the consuming Space or Target does not automatically detect the change. The manager of the consuming Space or Target must explicitly refresh:
cub space update --patch --refresh-triggers my-app-space
cub target update --patch --space my-space --refresh-triggers my-target

This is by design: since Triggers can affect many Units across many Spaces, delaying re-evaluation gives managers of consuming Spaces an opportunity to verify which Triggers match and potentially update their filters before applying the changes.

  2. Permission verification: Refreshing checks that the caller has Edit permission on the Space or Target. This ensures that only authorized users can change which Triggers are actively applied.

Trigger hashing

ConfigHub computes a TriggerHash for each Space and Target based on the combined hashes of all selected Triggers. When the hash changes (due to Trigger creation, update, deletion, or refresh), all affected Units are enqueued for asynchronous trigger re-evaluation.
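The exact TriggerHash algorithm is not documented here, but the mechanism can be illustrated with a sketch: any hash computed over the set of selected Trigger hashes changes whenever a Trigger is created, updated, deleted, or the selection itself changes, which is what drives re-evaluation:

```python
import hashlib

# Illustrative only -- ConfigHub's actual TriggerHash algorithm is an
# assumption here. The point: a combined hash over the selected Triggers'
# hashes moves whenever any Trigger (or the selection) changes.
def combined_hash(trigger_hashes: list) -> str:
    h = hashlib.sha256()
    for th in sorted(trigger_hashes):   # sort so selection order is irrelevant
        h.update(th.encode())
    return h.hexdigest()

before = combined_hash(["aaa", "bbb"])
after = combined_hash(["aaa", "bbb", "ccc"])   # a new Trigger was selected
assert before != after
assert combined_hash(["bbb", "aaa"]) == before
```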

Apply Gates and Apply Warnings

When a validating Trigger function fails, ConfigHub creates an Apply Gate on the Unit. Apply Gates block Apply operations until the issue is resolved.

Gate naming

Apply Gates use a 3-part name format: <space-slug>/<trigger-slug>/<function-name>. For example:

my-space/check-replicas/vet-celexpr

This format identifies which Trigger and function produced the gate, and from which Space the Trigger originates.
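To make the format concrete, a gate name splits back into its three parts; parse_gate_name is a hypothetical helper, not part of any ConfigHub API:

```python
# Hypothetical helper (not a ConfigHub API): split a 3-part gate name
# <space-slug>/<trigger-slug>/<function-name> into its components.
def parse_gate_name(name: str) -> dict:
    space, trigger, function = name.split("/", 2)
    return {"space": space, "trigger": trigger, "function": function}

parts = parse_gate_name("my-space/check-replicas/vet-celexpr")
assert parts == {"space": "my-space", "trigger": "check-replicas",
                 "function": "vet-celexpr"}
```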

The awaiting/triggers gate

When configuration data changes, ConfigHub sets a temporary awaiting/triggers Apply Gate on the Unit. This gate indicates that Triggers have been enqueued for asynchronous evaluation but have not yet completed. It is automatically removed when all Triggers finish executing.

The awaiting/triggers gate also appears after approval, when Triggers are re-evaluated to determine whether approval-related gates should be cleared.

Waiting for Triggers

Many CLI commands support --wait (enabled by default) to wait for the awaiting/triggers gate to clear:

cub unit approve --space my-space --wait my-unit
cub unit update --space my-space --wait my-unit ...

When --wait is active, the CLI polls the Unit until the awaiting/triggers gate is removed, then reports any remaining Apply Gates or Apply Warnings.
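The polling behavior can be sketched as a loop; fetch_gates below is a hypothetical stand-in for whatever API call returns the Unit's current Apply Gates, and the interval and timeout values are illustrative, not the CLI's actual settings:

```python
import time

# Sketch of a --wait-style loop. fetch_gates() is a hypothetical stand-in
# for an API call returning the Unit's current Apply Gate names.
def wait_for_triggers(fetch_gates, interval=0.01, timeout=1.0):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        gates = fetch_gates()
        if "awaiting/triggers" not in gates:
            return gates   # remaining gates/warnings to report
        time.sleep(interval)
    raise TimeoutError("awaiting/triggers gate did not clear")

# Simulated Unit whose Triggers finish after two polls, leaving one real gate.
responses = iter([["awaiting/triggers"], ["awaiting/triggers"],
                  ["my-space/check-replicas/vet-celexpr"]])
remaining = wait_for_triggers(lambda: next(responses))
assert remaining == ["my-space/check-replicas/vet-celexpr"]
```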

When Triggers are re-evaluated

Triggers run asynchronously in the background. They are re-evaluated when:

  • Unit data changes — any mutation to a Unit's configuration data (create, update, function invocations, link resolution, clone)
  • Links are added or removed — resolved data from linked Units may affect Trigger results
  • Unit is approved — the ApprovedBy list changes, which may affect approval-checking Triggers like vet-approvedby
  • Apply is attempted with outstanding Apply Gates — the Unit is re-enqueued for Trigger re-evaluation so that gates can be re-checked with the current Trigger configuration
  • Trigger configuration changes — creating, updating, or deleting a Trigger in the Space automatically re-evaluates all affected Units
  • Space or Target Trigger list is refreshed — after --refresh-triggers, all Units in the Space (or with the Target) are re-evaluated

Worker disconnection behavior

When a Bridge Worker responsible for executing a Trigger's function is disconnected or unresponsive:

  • Other Triggers continue to execute — only Triggers for the unavailable worker are affected
  • Validating and mutating Triggers for the unavailable worker produce Apply Gates — reflecting that the validation or mutation could not be performed
  • After the fail-open duration (default 6 hours), the Trigger is automatically disabled

Configuring fail-open duration

Use --fail-open-after to set a per-Trigger duration (requires --worker):

cub trigger create --space my-space worker-check Mutation Kubernetes/YAML \
  --worker my-space/my-worker \
  --fail-open-after 30m \
  vet-celexpr '...'

When not set, the default duration of 6 hours is used.
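The fail-open rule amounts to a simple comparison, sketched here for illustration: once the worker has been unreachable longer than the Trigger's fail-open duration, the Trigger is disabled rather than gating applies indefinitely:

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch of the fail-open decision: disable the Trigger once
# the worker has been unreachable longer than the configured duration.
DEFAULT_FAIL_OPEN = timedelta(hours=6)

def should_disable(disconnected_at: datetime,
                   now: datetime,
                   fail_open_after: timedelta = DEFAULT_FAIL_OPEN) -> bool:
    return now - disconnected_at > fail_open_after

t0 = datetime(2024, 1, 1, tzinfo=timezone.utc)
assert not should_disable(t0, t0 + timedelta(hours=1))          # still gating
assert should_disable(t0, t0 + timedelta(hours=7))              # past default
assert should_disable(t0, t0 + timedelta(minutes=31),
                      timedelta(minutes=30))                    # --fail-open-after 30m
```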

Invoking Triggers imperatively

Triggers can be invoked on demand using cub function do with the --trigger flag:

cub function do --space my-space --where "Slug = 'my-unit'" --trigger my-space/check-replicas

Updating Apply Gates imperatively

Add --update-apply-gates to selectively update Apply Gates based on the Trigger results:

cub function do --space my-space --where "Slug = 'my-unit'" \
  --trigger my-space/check-replicas \
  --update-apply-gates

This is useful when a worker comes back online and you want to re-evaluate Triggers and update gates without waiting for the next automatic re-evaluation. Unlike automatic trigger evaluation, imperative --update-apply-gates only modifies gates for the invoked Triggers — it does not clear gates from other Triggers.

Permissions

  • Creating, updating, or deleting a Trigger requires Edit permission on the Space containing the Trigger.
  • Refreshing Triggers on a Space or Target requires Edit permission on that Space or Target.
  • Approving a Unit requires Approve permission on the Unit.
  • Invoking Triggers with --update-apply-gates requires Edit permission on the affected Units.

Troubleshooting

Units have unexpected Apply Gates

Use cub trigger list to see which Triggers are active in a Space:

cub trigger list --space my-space

Check the validation results on a specific Unit:

cub unit get --space my-space --jq ".Unit.ValidationResults" my-unit

Triggers are not running

  • Check that the Trigger is not disabled: cub trigger get --space my-space my-trigger
  • Check that the Space's WhereTrigger or TriggerFilterID selects the Trigger: cub space get --jq ".Space.TriggerIDs" my-space
  • If using cross-Space Triggers, ensure the consuming Space has been refreshed: cub space update --patch --refresh-triggers my-space

Worker Triggers produce gates after worker restart

When a worker reconnects, existing Units are not automatically re-evaluated. Use imperative invocation with --update-apply-gates to re-evaluate:

cub function do --space my-space --trigger my-space/worker-check --update-apply-gates

Or refresh the Space's Triggers to enqueue all Units:

cub space update --patch --refresh-triggers my-space