Create, Run, and Manage Workers
To apply a configuration unit and create or update live resources, a target must be attached to the unit. The Target represents a deployment target, typically (for now) a Kubernetes cluster.
The Apply action is implemented by a bridge built into a worker. The worker runs in your environment with credentials you provide to it and performs actions on your behalf. It's similar in some ways to a GitOps controller (e.g., the FluxCD controllers) or a CI runner, but it performs specific built-in actions.
Live-resource actions performed on units through the ConfigHub API, specifically apply, destroy, refresh, and import, are relayed to connected workers. Configuration data to be applied is sent to the worker from ConfigHub, and the worker sends back action progress/status, current live state, and, in the case of refresh and import, updated configuration data.
Workers can also run custom functions that you write, which can be invoked imperatively via the ConfigHub API or as triggers. In the case of functions, configuration data and unit metadata are sent to the worker, and updated configuration data and function output are sent back to ConfigHub.
Create a Worker
You can create a worker entity via the UI (click Add on the workers page) or via the CLI:
cub worker create --space platform-dev cluster-worker
This just registers the worker with ConfigHub and creates an identity and a secret for it. You should then see it in the worker list:
cub worker list --space "*"
Run
You need to run the worker for it to perform bridge actions and/or execute functions. Running the worker also automatically creates the targets the worker is responsible for. That is the main way targets are created in ConfigHub, though you can also copy targets or create similar targets with different parameters. Targets are created in the same space as the worker, but may be attached to units in any space.
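Once the worker is running, you can inspect the targets it has created. The command below is a sketch that follows the same list pattern as the other cub commands in this guide; verify the exact form with cub target --help:

```shell
cub target list --space platform-dev
```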
You can run your worker however you like. To run our default worker build in your cluster, we provide a convenience command, cub worker install.
To bootstrap our worker, you can execute:
cub worker install cluster-worker --space platform-dev --env IN_CLUSTER_TARGET_NAME=dev-cluster --export --include-secret | kubectl apply -f -
By default, cub worker install --export doesn't output the secret. Use --include-secret if you want it included with the rest of the configuration, or --export-secret-only to output just the secret separately. You can manage the secret using your secret management solution of choice, such as the External Secrets Operator.
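For example, with the External Secrets Operator you could sync the worker secret from an external store using an ExternalSecret resource. This is a hypothetical sketch: the namespace, store name, target Secret name, and remote key below are placeholders, not values ConfigHub defines:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: cluster-worker-secret       # placeholder name
  namespace: confighub              # placeholder namespace
spec:
  refreshInterval: 1h
  secretStoreRef:
    kind: ClusterSecretStore
    name: my-secret-store           # placeholder store
  target:
    name: cluster-worker-secret     # Secret referenced by the worker Deployment
  data:
    - secretKey: worker-secret      # placeholder key in the resulting Secret
      remoteRef:
        key: confighub/cluster-worker   # placeholder key in the external store
```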
By default, the worker image is pinned to the release that was latest at install time. To upgrade to a newer build, you will need to update the image tag. You can do that using cub:
cub worker upgrade --filename worker.yaml
You have the option to manage the worker's configuration in ConfigHub. The cub worker install --unit <name> command will store the configuration in a unit. See below for an example. At the moment, you still need to bootstrap the worker in the cluster.
cub worker install --space platform-dev cluster-worker --unit cluster-worker
cub worker install --space platform-dev cluster-worker --export-secret-only | kubectl apply -f -
cub unit get --space platform-dev cluster-worker --data-only | kubectl apply -f -
After the worker has been bootstrapped, you can update the configuration and apply it through ConfigHub. The worker is able to apply its own configuration, with the existing replica handing off to an updated one. To upgrade the unit:
cub worker upgrade --space platform-dev --unit cluster-worker
cub unit apply --space platform-dev cluster-worker
With the configuration in a unit, you can also use functions normally to modify the configuration.
Workload-related functions that may be useful include set-image, set-image-reference, set-env-var, set-container-resources, set-container-volume-mount-path, set-container-port, set-pod-defaults, and vet-schemas.
You can also use cub function local to execute these functions on the configuration locally.
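For example, you could bump the worker image in the unit's configuration with set-image. The invocation below is a sketch with placeholder arguments; the exact argument names and input handling may differ, so check cub function local --help:

```shell
cub function local set-image <container-name> <image:tag> < worker.yaml
```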
Manage
ConfigHub will report the connection status of your worker. The condition should be Ready if the worker is responding to actions and heartbeat messages. The condition will be Connected just after it first connects, Disconnected if it is not connected, Unresponsive if it is connected but not responding to heartbeat messages, and NotReady if the worker responds to a heartbeat with an error message.
% cub worker list --space "*"
NAME            CONDITION  SPACE          LAST-SEEN
cluster-worker  Ready      platform-prod  2025-10-23 22:05:44
cluster-worker  Ready      platform-dev   2025-10-23 22:05:33
If a worker has been in a condition other than Ready for long enough, actions directed at the worker will start to fail immediately and triggers dependent on the worker will be automatically disabled.
Bridge Capabilities and Target ConfigTypes
A worker advertises one or more bridge capabilities (selected at install time via cub worker install -t <list>, e.g. -t kubernetes,argocdrenderer,argocdoci). Each capability is a bridge implementation that can handle Units of a particular ProviderType, ToolchainType, and LiveStateType.
These capabilities register on each Target the worker creates as ConfigTypes. A single Target can therefore serve multiple bridges — a Unit's ProviderType selects which bridge on that Target processes the Unit. See Target → Config Types for the full model.
Each ConfigType may expose Options — typed, named parameters that configure the bridge's behavior. Options are set on the Target and can be overridden per-Unit via TargetOptions. For example, the ArgoCDRenderer and FluxRenderer bridges accept IsAuthoritative, which controls whether the bridge creates/updates/deletes the Application / HelmRelease / Kustomization in the cluster (and, when true, disables ArgoCD autosync or sets spec.suspend=true on the Flux resource automatically). See Target → Bridge Options.
Set an option on a Target:
cub target update --space <space> --patch --option IsAuthoritative=true <target-slug>
The schema for ConfigTypes and BridgeOptions is defined in core/worker/api/bridge_worker_info.go in the SDK.
Using Targets and Workers
To attach a target to a unit, set the TargetID field on the unit. This is typically done with a patch unit API call. In the UI, you can select Update after selecting units on the unit list page or on the Overview tab of the unit details page.
In the CLI you can use the set-target command to attach the target to one or more units:
cub unit set-target --space "*" --where "Space.Labels.Environment = 'dev'" platform-dev/dev-cluster
If you added custom functions to your worker, you can use them in triggers.
cub trigger create --space app-dev --worker platform-dev/cluster-worker custom-check Mutation Kubernetes/YAML my-custom-function
Deleting Workers
If you no longer need a worker and it is no longer running, you can delete it. Deleting the worker will delete any targets associated with it and remove them from units they are attached to.
You won't be able to delete a worker in use by triggers, however, without first deleting those triggers or updating them to not use the worker.
cub worker delete --space platform-dev cluster-worker
Worker Replicas and High Availability
ConfigHub supports running multiple Worker instances with identical credentials to ensure continuous operation. When Workers connect using the same worker_id and secret, the system automatically manages active and standby connections for seamless failover.
The default worker configuration generated by cub worker install runs one replica in the active state, but sets maxSurge to 1 and maxUnavailable to 0 so that an additional replica is created before the running replica is terminated.
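In Deployment terms, that corresponds to a rolling-update strategy like the following fragment. The values are taken from the description above; the rest of the Deployment is whatever cub worker install generates:

```yaml
spec:
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # create the replacement replica first...
      maxUnavailable: 0  # ...before terminating the existing replica
```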
Connection Registration and Management
First Connection (Active)
The first Worker to connect becomes the active connection immediately. This Worker processes all operations, maintains the SSE stream, and updates Worker information and Targets in the database.
Subsequent Connections (Standby)
Additional Workers connecting with the same credentials register as standby connections. ConfigHub manages these as standby Workers, which do not update Worker information or Targets. Standby Workers enter a waiting state, ready to take over if the active Worker fails.
ConfigHub sends keepalive messages to standby Workers every 30 seconds to maintain the connection. Workers log these keepalives to stdout and discard them without further processing. These keepalive messages are distinct from heartbeat messages, which are sent by Workers to ConfigHub to report their health and status.
Standby Mode Behavior
When a Worker enters standby mode, ConfigHub sends an initial "standby" event to inform the Worker of its status. The Worker remains in standby, waiting for ConfigHub to promote it to active. While in standby mode, Workers do not process operations or update Worker data.
Failover Mechanism
ConfigHub handles failover automatically when the active Worker connection fails:
1. Detection and Removal
When the active connection disconnects, ConfigHub detects the failure and removes the disconnected Worker from active management.
2. Health Check and Promotion
ConfigHub checks the health of remaining standby Workers and promotes the first healthy standby to become the new active Worker.
3. Active Worker Initialization
After promotion, the newly active Worker updates Worker information and Targets in the database, re-queues any operations that were in-flight to ensure reliability, and begins processing operations immediately.
Re-queue Mechanism
When a new Worker is promoted to active, ConfigHub re-queues operations that were sent to the previous active Worker but not yet completed or failed. For more details about how operations are queued and handled, see Unit Action.