The core concept underlying ConfigHub is **Configuration as Data**.

## Background

In ConfigHub, configuration is represented, stored, and managed as data. It is serialized using standard data formats, such as YAML, and stored in a database within ConfigHub. Code that operates on configuration is separate from the data, and the data is the source of record. Configuration data is not [parameterized](https://itnext.io/the-tension-between-flexibility-and-simplicity-in-infrastructure-as-code-6cec841e3d16). It contains literal values for every field in the configuration.

This is in contrast to Infrastructure as Code (IaC), which represents configuration as code or in a code-like format. IaC configuration is stored and managed as code in source control systems, and frequently deployed via Continuous Integration (CI) [pipelines](https://itnext.io/the-factory-metaphor-makes-sense-for-building-applications-but-not-for-deployment-and-operations-8d800e3772d7) or [GitOps tools](https://itnext.io/is-gitops-actually-useful-a1c851ba99d8).

ConfigHub enables configuration updates and queries to be performed via API rather than through code-oriented processes and tools. The configuration data is not kept in git, for much the same reason that database tables aren't maintained in git as templates or programs that generate CSV files.

## Example

### IaC

Take [Helm](https://helm.sh) as an example. There's a detailed walkthrough of a relatively simple case in [this blog post](https://itnext.io/complexity-and-toil-in-infrastructure-as-code-6ca9a6d2af37), but here's an even simpler, common one.

Let's say you want to automatically update a container image tag to roll out each new release of your application. You'd template that field of the [Kubernetes Deployment](https://kubernetes.io/docs/concepts/workloads/controllers/deployment/) something like this:

```
      containers:
        - image: {{ .Values.deployment.main.image }}
```

And add a parameter to the `values.yaml` file for that environment like this:

```
deployment:
    main:
        image: ghcr.io/myorg/myapp:release12345
```

Now if you wanted to set that image automatically in CI/CD, there are a few common approaches:

1. You could set it on the `helm upgrade` command line with `--set "deployment.main.image=ghcr.io/myorg/myapp:$TAG"`.
2. You could use `yq`, `sed`, `awk`, or a similar tool [to change](https://itnext.io/configuration-editing-is-imperative-fa9db379fbe4) the `values.yaml` file: `yq -i ".deployment.main.image = \"ghcr.io/myorg/myapp:$TAG\"" values.yaml`
3. You could template the `values.yaml` file itself and use `envsubst` or a similar tool to set the value from an environment variable.
4. You could write a file that sets just that one input value, and use it to override the value in the main `values.yaml` file.
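As a sketch of the last approach (the file name, chart path, and release name here are hypothetical), the override file holds just the one value, and later `-f` files take precedence:

```shell
# Write a small override file containing only the image value:
TAG=release12346   # in CI this would come from the pipeline
cat > image-override.yaml <<EOF
deployment:
  main:
    image: ghcr.io/myorg/myapp:$TAG
EOF
# Then pass it after the main values file; later -f files win:
#   helm upgrade myapp ./chart -f values.yaml -f image-override.yaml
```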

In any of those cases, the new image value wouldn't normally be recorded or versioned. To do that, an updated `values.yaml` file would need to be committed back to git. That is often done manually: clone or pull, branch, edit, commit, push, review, merge.
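That round trip looks something like this, sketched in a throwaway local repository (in practice you'd clone the real config repo and open a pull request; names here are illustrative):

```shell
# Simulate the manual git round trip in a scratch repository:
git init -q config-repo
git -C config-repo config user.email ci@example.com
git -C config-repo config user.name "CI Bot"
printf 'deployment:\n  main:\n    image: ghcr.io/myorg/myapp:release12345\n' > config-repo/values.yaml
git -C config-repo add values.yaml
git -C config-repo commit -qm "Initial values"
git -C config-repo checkout -qb bump-myapp-image           # branch
sed -i 's/release12345/release12346/' config-repo/values.yaml  # edit
git -C config-repo commit -qam "Bump myapp image to release12346"
# ...then push, open a PR, wait for review, merge, and redeploy.
```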

Also, because the Helm chart template syntax is no longer just YAML, it's hard to check it for correctness. It could be misindented, contain invalid properties, specify invalid values, be templated incorrectly, violate organizational policies, etc. It needs to be rendered to YAML first using `helm template`. The output can then be run through [validation tools](https://itnext.io/kubernetes-configuration-linting-tools-699ddeedaeec) like [kubeconform](https://github.com/yannh/kubeconform) and/or [policy tools](https://itnext.io/what-are-state-based-policy-constraints-good-for-019b70f0b698) like [kyverno](https://kyverno.io/).
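In miniature, the template fragment only becomes checkable YAML after rendering (here the render step is simulated with `sed` for a single value; a real pipeline would run `helm template` instead):

```shell
# A templated fragment is not parseable YAML until it's rendered:
cat > fragment.tpl <<'EOF'
      containers:
        - image: {{ .Values.deployment.main.image }}
EOF
# Stand-in for `helm template`, substituting the one value:
sed 's|{{ .Values.deployment.main.image }}|ghcr.io/myorg/myapp:release12345|' \
  fragment.tpl > rendered.yaml
# Only rendered.yaml can be fed to tools like kubeconform or kyverno.
```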

Then if a problem is found, it needs to be traced back to the template and/or values and fixed manually, due to the [unidirectional approach](https://itnext.io/the-unidirectionality-of-infrastructure-as-code-creates-asymmetry-40c9f5eed959) of IaC and the lack of an explicit relationship between deployed resources and their configuration sources. The change can't be made directly to the clusters or other systems under management, because such changes are considered to produce undesirable [drift](https://itnext.io/why-configuration-drift-is-so-hard-to-avoid-in-practice-443248cafc9c).

### ConfigHub

On the other hand, with ConfigHub, the full standard Kubernetes Deployment YAML would be stored, including the image.

```
apiVersion: apps/v1
kind: Deployment
...
      containers:
      - name: main
        image: ghcr.io/myorg/myapp:release12345
```

Then to change the image, you can just do:

```
cub function do --unit myapp set-image main "ghcr.io/myorg/myapp:$TAG"
```

And the change is stored, versioned, and validated automatically. You don't need to deal with complicated templates. You don't need to worry about messing up the YAML indentation. You don't need to perform 10 git commands to make a one-line change.

If you want to search for where a specific release was deployed, you could do something like:

```
cub unit list --space "*" --resource-type apps/v1/Deployment --where-data "spec.template.spec.containers.*.image#reference = ':$TAG'"
```

If you're thinking, "but I prefer code to YAML," you can write your own functions, like `set-image`, to perform common transformations. Functions can not only perform transformations, but also validate or introspect configuration. For instance, you can get the current image value with a function:

```
cub function do --where "Slug='myapp'" get-image main --show values
```

One way you can think of [functions](entities/function.md) is as mini tools to perform common configuration tasks. They can be single-purpose, like `set-image` and `get-image`, or multi-purpose, if desired. It's up to you.

You can also write or read configuration data through client surfaces (GUIs, CLIs), other tools, and automation external to ConfigHub.

The configuration data can be queried, analyzed, and updated through interoperable, reusable automation, individually and in bulk.

## Conclusion

ConfigHub provides the database, API, and SDK to unleash automation of your configuration data.
