
validator (AKA Validation Controller) monitors ValidationResults created by one or more validator plugins and uploads them to a sink of your choosing, e.g., Slack or Alertmanager.

(Figure: validator architecture diagram)

Description

The validator repository is fairly minimal - all the heavy lifting is done by the validator plugins. Installation of validator and one or more plugins is accomplished via Helm.

Plugins: see the validator-labs GitHub organization for the available validator plugins.

Installation

Connected

For connected installations, two options are supported: the validator CLI, validatorctl, and Helm. Using validatorctl is recommended, as it provides a text-based user interface (TUI) for configuring validator.

Validator CLI

  1. Download the latest release of validatorctl from https://github.com/validator-labs/validatorctl/releases
  2. Execute validatorctl
    validatorctl install

Helm

Install validator via Helm by adding the chart repository and installing the latest version of the chart in your cluster:

helm repo add validator https://validator-labs.github.io/validator
helm repo update
helm install validator validator/validator -n validator --create-namespace
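
To verify the installation, you can check the Helm release and confirm the controller pod is running (standard Helm and kubectl commands; the validator namespace matches the install command above):

helm list -n validator
kubectl get pods -n validator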

Check out the Helm install guide for step-by-step instructions on installing and using validator.

Air-gapped

For air-gapped installations, the recommended approach is to use Hauler. Hauls containing all validator artifacts (container images, Helm charts, and the validator CLI) are generated for multiple platforms (linux/amd64 and linux/arm64) during each validator release.

Prerequisites:

Once the prerequisites are met, follow these steps to perform an air-gapped installation:

  1. Download the Hauler Store (then transfer it across the air gap)
    # Download the Haul for your chosen release and platform, e.g.:
    curl -L https://github.com/validator-labs/validator/releases/download/v0.0.46/validator-haul-linux-amd64.tar.zst -o validator-haul-linux-amd64.tar.zst
  2. Load the Hauler Store (on the air-gapped workstation)
    # Load the air-gapped content to your local hauler store.
    hauler store load validator-haul-linux-amd64.tar.zst
  3. Extract validatorctl from the Hauler Store
    # Extract the validator CLI binary, validatorctl, from the hauler store.
    # It's always tagged as "latest" within the store, despite being versioned.
    # This is a hauler defect. The version can be verified via `validatorctl version`.
    hauler store extract -s store hauler/validatorctl:latest
    chmod +x validatorctl && mv validatorctl /usr/local/bin
  4. Serve the Hauler Store
    # Serve the content as a registry from the hauler store.
    # (Defaults to <FQDN or IP>:5000).
    nohup hauler store serve registry | tee -a hauler.log &
    
    # Optionally tail the hauler registry logs
    tail -f hauler.log
  5. Execute validatorctl
    validatorctl install

Sinks

Validator can be configured to emit updates to various event sinks whenever a ValidationResult is created or updated. See configuration details below for each supported sink.

Alertmanager

Integrate with the Alertmanager API to emit alerts to all supported Alertmanager receivers, including generic webhooks. The only required configuration is an Alertmanager endpoint. HTTP basic authentication and TLS are also supported. See values.yaml for configuration details.
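
A minimal sink configuration might look like the sketch below. The field names are illustrative only, not taken from the chart; consult values.yaml for the authoritative keys and defaults.

sink:
  # Illustrative keys only -- see values.yaml for the real field names.
  type: alertmanager
  # Alertmanager endpoint; the only required setting. The service address shown
  # here is an example and depends on where Alertmanager is installed.
  endpoint: http://alertmanager.alertmanager.svc:9093
  # Optional HTTP basic authentication
  username: ""
  password: ""
  # Optional TLS settings
  caCert: ""
  insecureSkipVerify: false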

Sample Output

(Screenshot: sample Alertmanager sink output)

Setup

  1. Install Alertmanager in your cluster (if it isn't installed already)

  2. Configure Alertmanager alert content. Alerts can be formatted/customized via the following labels and annotations:

    Labels

    • alertname
    • plugin
    • validation_result
    • expected_results

    Annotations

    • state
    • validation_rule
    • validation_type
    • message
    • status
    • detail
      • pipe-delimited array of detail messages, see sample config for parsing example
    • failure (also pipe-delimited)
    • last_validation_time

    Example Alertmanager ConfigMap used to produce the sample output above:

    apiVersion: v1
    data:
      alertmanager.yml: |
        global:
          slack_api_url: https://slack.com/api/chat.postMessage
        receivers:
        - name: default-receiver
          slack_configs:
          - channel: <channel-id>
            text: |-
              {{ range .Alerts.Firing -}}
              *Validation Result: {{ .Labels.validation_result }}/{{ .Labels.expected_results }}*

              {{ range $k, $v := .Annotations }}
              {{- if $v }}*{{ $k | title }}*:
              {{- if match "\\|" $v }}
              - {{ reReplaceAll "\\|" "\n- " $v -}}
              {{- else }}
              {{- printf " %s" $v -}}
              {{- end }}
              {{- end }}
              {{ end }}

              {{ end }}
            title: "{{ (index .Alerts 0).Labels.plugin }}: {{ (index .Alerts 0).Labels.alertname }}\n"
            http_config:
              authorization:
                credentials: xoxb--<bot>-<token>
            send_resolved: false
        route:
          group_interval: 10s
          group_wait: 10s
          receiver: default-receiver
          repeat_interval: 1h
        templates:
        - /etc/alertmanager/*.tmpl
    kind: ConfigMap
    metadata:
      name: alertmanager
      namespace: alertmanager
  3. Install validator and/or upgrade your validator Helm release, configuring values.sink accordingly (an example command follows below).
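
    For example, assuming the sink settings are saved in a custom values file (the file name sink-values.yaml is arbitrary), the release can be installed or upgraded with a standard Helm command:

    helm upgrade --install validator validator/validator -n validator --create-namespace -f sink-values.yaml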

Slack

Sample Output

(Screenshots: sample Slack sink notifications)

Setup

  1. Go to https://api.slack.com/apps and click Create New App, then select From scratch. Pick an App Name and Slack Workspace, then click Create App.

  2. Go to OAuth & Permissions and copy the Bot User OAuth Token under the OAuth Tokens for Your Workspace section. Save it somewhere for later. Scroll down to Scopes and click Add an OAuth Scope. Enable the chat:write scope for your bot.

  3. Find and/or create a channel in Slack and note its Channel ID (at the very bottom of the modal when you view channel details). Add the bot you just created to the channel via View channel details > Integrations > Apps > Add apps.

  4. Install validator and/or upgrade your validator Helm release, configuring values.sink accordingly (see the configuration sketch below).
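
    As with Alertmanager, the exact field names live in the chart's values.yaml; the sketch below is illustrative only, showing where the Bot User OAuth Token and Channel ID from the previous steps would go. Apply it with the same helm upgrade --install ... -f command shown in the Alertmanager section.

    sink:
      # Illustrative keys only -- see values.yaml for the real field names.
      type: slack
      # Bot User OAuth Token from step 2
      apiToken: xoxb-<bot-token>
      # Channel ID from step 3
      channelId: <channel-id>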

Development

You’ll need a Kubernetes cluster to run against. You can use kind to get a local cluster for testing, or run against a remote cluster. Note: Your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster kubectl cluster-info shows).
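
For example, a local test cluster can be created with kind (the cluster name is arbitrary):

kind create cluster --name validator-dev
kubectl cluster-info --context kind-validator-dev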

Running on the cluster

  1. Install instances of Custom Resources:
    kubectl apply -f config/samples/
  2. Build and push your image to the location specified by IMG:
    make docker-build docker-push IMG=<some-registry>/validator:tag
  3. Deploy the controller to the cluster with the image specified by IMG:
    make deploy IMG=<some-registry>/validator:tag

Uninstall CRDs

To delete the CRDs from the cluster:

make uninstall

Undeploy controller

Undeploy the controller from the cluster:

make undeploy

Contributing

All contributions are welcome! Feel free to reach out on the Spectro Cloud community Slack.

Make sure pre-commit is installed.

Install the pre-commit scripts:

pre-commit install --hook-type commit-msg
pre-commit install --hook-type pre-commit

How it works

This project aims to follow the Kubernetes Operator pattern.

It uses Controllers, which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.

Test It Out

  1. Install the CRDs into the cluster:
    make install
  2. Run your controller (this will run in the foreground, so switch to a new terminal if you want to leave it running):
    make run

NOTE: You can also run this in one step by running: make install run

Modifying the API definitions

If you are editing the API definitions, generate the manifests such as CRs or CRDs using:

make manifests

NOTE: Run make --help for more information on all potential make targets

More information can be found via the Kubebuilder Documentation