Validating Configuration with Kubernetes Admission Webhooks
Kubernetes admission webhooks enforce policies at deploy time by intercepting API requests. Policy engines like Kyverno and OPA Gatekeeper use admission webhooks to validate resources against policies or constraints defined in the cluster.
ConfigHub can call these same webhooks to validate configuration before it is deployed, using worker functions that send AdmissionReview requests to a running webhook server. This gives you a single set of policies for both admission control and pre-deployment validation.
The SDK provides two built-in functions (vet-kyverno-server and vet-opa-gatekeeper) and a shared library (k8s-admission-webhook) for building your own.
Using the built-in functions
vet-kyverno-server
The vet-kyverno-server function validates Kubernetes resources against policies deployed in a Kyverno cluster. It supports both ClusterPolicy and ValidatingPolicy resources.
The function takes no parameters. It validates all resources in the unit against all matching Kyverno policies:
cub function do vet-kyverno-server --space $SPACE --unit my-unit --worker "$SPACE/my-kyverno-worker"
A passing result looks like:
Function(s) succeeded on unit ...
OUTPUT
------
true 0
A failing result includes the policy and rule that failed:
Function(s) succeeded on unit ...
OUTPUT
------
false 0 policy "require-labels" rule "validation": The label 'team' is required.
Attributes:
<nil> default/bad-pod v1/Pod
The worker needs access to both the Kyverno webhook service and the Kubernetes API (for discovering which webhook configurations exist). It uses these environment variables:
| Variable | Required | Description |
|---|---|---|
| KYVERNO_URL | Yes | Base URL of the Kyverno webhook (e.g., https://kyverno-svc.kyverno.svc:443) |
| KYVERNO_CA_CERT_PATH | No | Path to a CA certificate file for TLS verification |
| KYVERNO_SKIP_TLS_VERIFY | No | Set to true to skip TLS certificate verification (development only) |
vet-opa-gatekeeper
The vet-opa-gatekeeper function works the same way, but validates against OPA Gatekeeper constraint templates and constraints:
cub function do vet-opa-gatekeeper --space $SPACE --unit my-unit --worker "$SPACE/my-gatekeeper-worker"
A failing result includes the constraint name:
Function(s) succeeded on unit ...
OUTPUT
------
false 0 constraint "require-team-label": Missing required labels: {"team"}
Attributes:
<nil> default/bad-deploy apps/v1/Deployment
Environment variables:
| Variable | Required | Description |
|---|---|---|
| GATEKEEPER_URL | Yes | Base URL of the Gatekeeper webhook (e.g., https://gatekeeper-webhook-service.gatekeeper-system.svc:443) |
| GATEKEEPER_CA_CERT_PATH | No | Path to a CA certificate file for TLS verification |
| GATEKEEPER_SKIP_TLS_VERIFY | No | Set to true to skip TLS certificate verification (development only) |
Deploying a webhook validation worker
There are two ways to run a webhook validation worker: in-cluster (alongside the webhook server) or out-of-cluster (for development).
In-cluster deployment
The worker runs in the same cluster as the policy engine. This is the recommended approach for production. The examples below use Kyverno, but the same pattern applies to Gatekeeper.
Build and push a container image for the worker. The SDK includes example Dockerfiles at examples/kyverno-server and examples/opa-gatekeeper.
Install the worker using cub worker install:
cub worker install --space $SPACE \
  --unit kyverno-worker-unit \
  --target $TARGET \
  -n kyverno-worker \
  --image my-registry/kyverno-server-worker:latest \
  -e "KYVERNO_URL=https://kyverno-svc.kyverno.svc:443" \
  -e "KYVERNO_SKIP_TLS_VERIFY=true" \
  my-kyverno-worker
cub unit apply --space $SPACE kyverno-worker-unit
The worker also needs permission to list ValidatingWebhookConfigurations so it can discover which webhooks to call. Create a ClusterRole and ClusterRoleBinding:
kubectl create clusterrole webhook-reader \
  --verb=list,watch --resource=validatingwebhookconfigurations.admissionregistration.k8s.io

kubectl create clusterrolebinding worker-webhook-reader \
  --clusterrole=webhook-reader \
  --group="system:serviceaccounts:kyverno-worker"
Then install the ConfigHub connection secret and wait for the worker to be ready:
kubectl -n kyverno-worker wait --for=create deployment/my-kyverno-worker --timeout=120s
cub worker install --space $SPACE \
  --export-secret-only \
  -n kyverno-worker \
  my-kyverno-worker 2>/dev/null | kubectl apply -f -
kubectl -n kyverno-worker rollout status deployment/my-kyverno-worker --timeout=120s
For a complete working demo, see the demo.sh scripts in the SDK examples.
For more details on installing workers, see the Custom Workers guide.
Out-of-cluster development
For local development, use kubectl port-forward to reach the webhook service, then run the worker locally:
kubectl -n kyverno port-forward svc/kyverno-svc 8443:443 &
cub worker run --space $SPACE \
  --executable ./my-kyverno-worker \
  -e "KYVERNO_URL=https://localhost:8443" \
  -e "KYVERNO_SKIP_TLS_VERIFY=true" \
  my-kyverno-worker
The worker automatically connects to ConfigHub and registers the function. The kubeconfig from your default context is used for webhook discovery.
Using validation in triggers
Once a worker is running and registered, you can use the validation function in Triggers to automatically validate configuration before it is applied.
Create a Mutation trigger so that units are validated whenever they are modified:
cub trigger create --space $SPACE \
  --worker "$SPACE/my-kyverno-worker" \
  validate-kyverno Mutation Kubernetes/YAML vet-kyverno-server
Any Kubernetes/YAML unit in the space will be validated against Kyverno policies when its configuration is changed. If validation fails, the change is flagged.
See the Functions guide for more on triggers and apply gates.
Building a custom admission webhook function
If you use a policy engine other than Kyverno or OPA Gatekeeper, you can build your own validation function using the k8s-admission-webhook library from the SDK.
Project setup
Create a new Go module for your worker:
mkdir my-webhook-worker && cd my-webhook-worker
go mod init example.com/my-webhook-worker
go get github.com/confighub/sdk
go get github.com/confighub/sdk/worker-function-impl
Writing the function
The library provides three main components:
- admissionwebhook.WebhookClient sends AdmissionReview requests to a webhook endpoint.
- admissionwebhook.K8sDiscoveryClient discovers ValidatingWebhookConfigurations from the Kubernetes API. It supports both in-cluster and kubeconfig-based authentication. The companion admissionwebhook.WebhookWatcher watches for changes so the endpoint list stays current.
- admissionwebhook.ValidateResources iterates over all resources in the configuration data, matches them against discovered webhooks, calls the webhooks, and aggregates the results into a ValidationResult.
Here is a minimal example:
package mypolicyengine

import (
	"context"
	"strings"

	"github.com/cockroachdb/errors"
	"github.com/confighub/sdk/configkit/k8skit"
	"github.com/confighub/sdk/function/api"
	"github.com/confighub/sdk/function/handler"
	"github.com/confighub/sdk/third_party/gaby"

	admissionwebhook "github.com/confighub/sdk/worker-function-impl/k8s-admission-webhook"
)

const (
	envURL           = "MY_POLICY_URL"
	envCACertPath    = "MY_POLICY_CA_CERT_PATH"
	envSkipTLSVerify = "MY_POLICY_SKIP_TLS_VERIFY"
)

var (
	webhookClient *admissionwebhook.WebhookClient
	watcher       *admissionwebhook.WebhookWatcher
)

// Init creates the webhook client and starts watching for webhook changes.
// It is called once at startup via FunctionRegistration.FunctionInit.
func Init() error {
	client, err := admissionwebhook.NewWebhookClient(
		envURL, envCACertPath, envSkipTLSVerify,
	)
	if err != nil {
		return err
	}

	discoveryClient, err := admissionwebhook.NewK8sDiscoveryClient()
	if err != nil {
		return errors.Wrap(err, "failed to create K8s discovery client")
	}

	// Configure the selector to match your policy engine's VWCs.
	selector := admissionwebhook.WebhookSelector{
		LabelSelector: "my-policy-engine.io/managed-by%3Dcontroller",
		ConfigNames:   []string{"my-policy-engine-webhook-config"},
	}

	w := admissionwebhook.NewWebhookWatcher(discoveryClient, selector)
	if err := w.Start(context.Background()); err != nil {
		return errors.Wrap(err, "failed to start webhook watcher")
	}

	webhookClient = client
	watcher = w
	return nil
}

// GetSignature returns the function signature.
func GetSignature() api.FunctionSignature {
	return api.FunctionSignature{
		FunctionName:       "vet-my-policy-engine",
		Parameters:         []api.FunctionParameter{},
		RequiredParameters: 0,
		OutputInfo: &api.FunctionOutput{
			ResultName:  "passed",
			Description: "True if all resources pass policy validation",
			OutputType:  api.OutputTypeValidationResult,
		},
		Mutating:              false,
		Validating:            true,
		Hermetic:              false,
		Idempotent:            true,
		Description:           "Validates Kubernetes resources against my-policy-engine.",
		FunctionType:          api.FunctionTypeCustom,
		AffectedResourceTypes: []api.ResourceType{api.ResourceTypeAny},
	}
}

// Function is the function implementation.
func Function(fArgs handler.FunctionImplementationArguments) (gaby.Container, any, error) {
	if webhookClient == nil || watcher == nil {
		return fArgs.ParsedData, nil, errors.New("not initialized")
	}

	webhooks := watcher.GetEndpoints()
	return admissionwebhook.ValidateResources(
		webhookClient, webhooks,
		k8skit.NewK8sResourceProvider(), fArgs.ParsedData,
		responseConverter,
	)
}

// responseConverter parses your policy engine's response format into
// details and failed attributes.
func responseConverter(
	resp *admissionwebhook.AdmissionResponse,
	resourceInfo api.ResourceInfo,
) ([]string, []api.AttributeValue) {
	var details []string
	var failedAttrs []api.AttributeValue

	if resp.Status != nil && resp.Status.Message != "" {
		// Parse the message according to your policy engine's format.
		// For example, split on newlines and extract structured fields.
		for _, line := range strings.Split(resp.Status.Message, "\n") {
			line = strings.TrimSpace(line)
			if line == "" {
				continue
			}
			details = append(details, line)
			failedAttrs = append(failedAttrs, api.AttributeValue{
				AttributeInfo: api.AttributeInfo{
					AttributeIdentifier: api.AttributeIdentifier{
						ResourceInfo: resourceInfo,
					},
					AttributeMetadata: api.AttributeMetadata{
						AttributeName: api.AttributeNameNone,
					},
				},
				Issues: []api.Issue{{
					Identifier: "policy-violation",
					Message:    line,
				}},
			})
		}
	}
	return details, failedAttrs
}
The ResponseConverter callback is where you parse your policy engine's specific message format. Kyverno nests rule messages under the policy name (policy-name: on one line, then rule-name: message on the next); Gatekeeper prefixes each message with the constraint name ([constraint-name] message). Your engine will have its own format.
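As a concrete illustration, here is how a converter for a hypothetical engine that emits Gatekeeper-style "[constraint-name] message" lines might parse a denial message into structured violations. The types and function names here are illustrative, not part of the SDK:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// violation holds one parsed policy violation.
type violation struct {
	Constraint string
	Message    string
}

// linePattern matches lines of the form "[constraint-name] message".
var linePattern = regexp.MustCompile(`^\[([^\]]+)\]\s*(.+)$`)

// parseViolations splits a webhook denial message into structured violations.
// Lines that do not match the pattern are kept with an empty constraint name,
// so no detail from the engine is silently dropped.
func parseViolations(message string) []violation {
	var out []violation
	for _, line := range strings.Split(message, "\n") {
		line = strings.TrimSpace(line)
		if line == "" {
			continue
		}
		if m := linePattern.FindStringSubmatch(line); m != nil {
			out = append(out, violation{Constraint: m[1], Message: m[2]})
		} else {
			out = append(out, violation{Message: line})
		}
	}
	return out
}

func main() {
	msg := "[require-team-label] Missing required labels: {\"team\"}\nsome unstructured detail"
	for _, v := range parseViolations(msg) {
		fmt.Printf("constraint=%q message=%q\n", v.Constraint, v.Message)
	}
}
```

In a real converter, each parsed violation would become one detail string and one api.AttributeValue, as in the responseConverter shown earlier.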
Writing the main.go
The main.go creates an executor, registers the function (with FunctionInit for one-time initialization), and starts the worker connector:
package main

import (
	"log"
	"os"

	"github.com/confighub/sdk/configkit/k8skit"
	"github.com/confighub/sdk/function/executor"
	"github.com/confighub/sdk/function/handler"
	"github.com/confighub/sdk/worker"
	"github.com/confighub/sdk/workerapi"

	mypolicyengine "example.com/my-webhook-worker/mypolicyengine"
)

func main() {
	exec := executor.NewEmptyExecutor()
	exec.RegisterToolchain(k8skit.NewK8sResourceProvider())

	if err := exec.RegisterFunction(workerapi.ToolchainKubernetesYAML, handler.FunctionRegistration{
		FunctionSignature: mypolicyengine.GetSignature(),
		Function:          mypolicyengine.Function,
		FunctionInit:      mypolicyengine.Init,
	}); err != nil {
		log.Fatalf("Failed to register function: %v", err)
	}

	connector, err := worker.NewConnector(worker.ConnectorOptions{
		WorkerID:         os.Getenv("CONFIGHUB_WORKER_ID"),
		WorkerSecret:     os.Getenv("CONFIGHUB_WORKER_SECRET"),
		ConfigHubURL:     os.Getenv("CONFIGHUB_URL"),
		FunctionExecutor: exec,
	})
	if err != nil {
		log.Fatalf("Failed to create connector: %v", err)
	}

	if err := connector.Start(); err != nil {
		log.Fatalf("Failed to start connector: %v", err)
	}
}
The FunctionInit field causes Init() to be called once when RegisterFunction is called, before the worker connects to ConfigHub. This initializes the webhook client and starts the webhook watcher.
Key design decisions
WebhookSelector controls which ValidatingWebhookConfigurations are discovered:
- LabelSelector filters VWCs by label (URL-encoded, e.g., key%3Dvalue). This is passed to the Kubernetes API as a query parameter.
- ConfigNames filters VWCs by name after retrieval. Many policy engines create multiple VWCs (for policies, exceptions, etc.) but only one handles user resource validation.
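Because the label selector value travels as a query parameter, the "=" must be percent-encoded. A tiny hedged sketch (encodeLabelSelector is a hypothetical helper, not an SDK function):

```go
package main

import (
	"fmt"
	"strings"
)

// encodeLabelSelector percent-encodes the "=" in a label selector so it can
// be supplied to WebhookSelector.LabelSelector as a query-parameter value.
func encodeLabelSelector(selector string) string {
	return strings.ReplaceAll(selector, "=", "%3D")
}

func main() {
	fmt.Println(encodeLabelSelector("my-policy-engine.io/managed-by=controller"))
	// my-policy-engine.io/managed-by%3Dcontroller
}
```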
WebhookWatcher keeps the endpoint list current without re-discovering on every function invocation. It uses the Kubernetes watch API and falls back to periodic re-listing (every 30 seconds) if the watch stream breaks.
ResponseConverter is the only part that differs between policy engines. Everything else (client, discovery, webhook matching, AdmissionReview construction, result aggregation) is handled by the shared library.
Containerizing and deploying
Build a Docker image following the same pattern as the SDK examples, then deploy using cub worker install as described above.