feat: Introduce new Dump function to run pods with two containers
In order to separate data dump generation (performed by database tools) from export
(performed by `kando`), a pod performing a database dump can have two separate containers,
with the pipe set up over a file instead of an anonymous pipe.
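The file-based pipe described here can be sketched outside Kubernetes with a plain FIFO; the following is an illustrative sketch (paths and data are made up, this is not Kanister code) of how a writer and a reader in two separate processes hand data off through a named pipe on a shared filesystem:

```shell
# Simulate the two-container handoff with a named pipe (FIFO) on a shared path.
dir=$(mktemp -d)
mkfifo "$dir/pipe-file"

# "dump" side: writes data into the FIFO; open() blocks until a reader appears.
(printf 'row-%s\n' 1 2 3 > "$dir/pipe-file") &

# "export" side: reads the stream and would forward it to the destination.
cat "$dir/pipe-file"

wait
rm -r "$dir"
```

Because a FIFO lives on the filesystem, the two sides only need a shared mount point, not a shared process ancestry as with an anonymous pipe.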
hairyhum committed Sep 6, 2024
1 parent 40ece11 commit 32d48b2
Showing 7 changed files with 583 additions and 14 deletions.
44 changes: 34 additions & 10 deletions docs/functions.rst
@@ -108,23 +108,32 @@ Example:
- |
echo "Example"
Dump
----

Dump spins up a new pod with two containers connected via a shared emptyDir volume.
It is similar to KubeTask, but allows using multiple images to move backup data.
The "dump" container is responsible for generating the data, while the "export"
container exports it to the destination.
The main difference between the two is that phase outputs can only be generated
from the "export" container's output.

.. csv-table::
:header: "Argument", "Required", "Type", "Description"
:align: left
:widths: 5,5,5,15

`namespace`, No, `string`, namespace in which to execute (the pod will be created in controller's namespace if not specified)
`dumpImage`, Yes, `string`, image to be used in "dump" container
`dumpCommand`, Yes, `[]string`, command list to execute in "dump" container
`exportImage`, Yes, `string`, image to be used in "export" container
`exportCommand`, Yes, `[]string`, command list to execute in "export" container
`podOverride`, No, `map[string]interface{}`, specs to override default pod specs with
`podAnnotations`, No, `map[string]string`, custom annotations for the temporary pod that gets created
`podLabels`, No, `map[string]string`, custom labels for the temporary pod that gets created
`sharedStorageMedium`, No, `string`, medium setting for shared volume; see https://kubernetes.io/docs/concepts/storage/volumes/#emptydir
`sharedStorageSize`, No, `string`, sizeLimit setting for shared volume
`sharedStorageDir`, No, `string`, directory to mount shared volume, defaults to `/tmp`

Example:

@@ -135,20 +144,35 @@ Example:
  name: examplePhase
  args:
    namespace: "{{ .Deployment.Namespace }}"
    podOverride:
      containers:
      - name: export
        imagePullPolicy: IfNotPresent
    podAnnotations:
      annKey: annValue
    podLabels:
      labelKey: labelValue
    sharedStorageMedium: Memory
    sharedStorageSize: 1Gi
    sharedStorageDir: /tmp/
    dumpImage: ubuntu
    dumpCommand:
      - bash
      - -c
      - |
        mkfifo /tmp/pipe-file
        for i in {1..10}
        do
          echo $i
          sleep 0.1
        done > /tmp/pipe-file
    exportImage: ubuntu
    exportCommand:
      - bash
      - -c
      - |
        while [ ! -e /tmp/pipe-file ]; do sleep 1; done
        cat /tmp/pipe-file
ScaleWorkload
-------------
64 changes: 64 additions & 0 deletions docs_new/functions.md
@@ -129,6 +129,70 @@ Example:
echo "Example"
```
### Dump
Dump spins up a new pod with two containers connected via a shared emptyDir volume.
It is similar to KubeTask, but allows using multiple images to move backup data.
The "dump" container is responsible for generating the data, while the "export"
container exports it to the destination.
The main difference between the two is that phase outputs can only be generated
from the "export" container's output.

| Argument | Required | Type | Description |
| ----------- | :------: | ----------------------- | ----------- |
| namespace | No | string | namespace in which to execute (the pod will be created in controller's namespace if not specified) |
| dumpImage | Yes | string | image to be used in "dump" container |
| dumpCommand | Yes | []string | command list to execute in "dump" container |
| exportImage | Yes | string | image to be used in "export" container |
| exportCommand | Yes | []string | command list to execute in "export" container |
| podOverride | No | map[string]interface{} | specs to override default pod specs with |
| podAnnotations | No | map[string]string | custom annotations for the temporary pod that gets created |
| podLabels | No | map[string]string | custom labels for the temporary pod that gets created |
| sharedStorageMedium | No | string | medium setting for shared volume, see https://kubernetes.io/docs/concepts/storage/volumes/#emptydir |
| sharedStorageSize | No | string | sizeLimit setting for shared volume |
| sharedStorageDir | No | string | directory to mount shared volume, defaults to `/tmp` |
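The three `sharedStorage*` arguments presumably map onto a Kubernetes `emptyDir` volume mounted into both containers. A rough sketch of the resulting pod spec fragment, under that assumption (the container and volume names here are illustrative, not necessarily what Kanister generates):

``` yaml
volumes:
  - name: shared-storage        # illustrative name
    emptyDir:
      medium: Memory            # from sharedStorageMedium
      sizeLimit: 1Gi            # from sharedStorageSize
containers:
  - name: dump
    volumeMounts:
      - name: shared-storage
        mountPath: /tmp         # from sharedStorageDir
  - name: export
    volumeMounts:
      - name: shared-storage
        mountPath: /tmp
```

With `medium: Memory` the volume is backed by tmpfs, so the FIFO handoff between the containers never touches disk.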


Example:

``` yaml
- func: Dump
  name: examplePhase
  args:
    namespace: "{{ .Deployment.Namespace }}"
    podOverride:
      containers:
      - name: export
        imagePullPolicy: IfNotPresent
    podAnnotations:
      annKey: annValue
    podLabels:
      labelKey: labelValue
    sharedStorageMedium: Memory
    sharedStorageSize: 1Gi
    sharedStorageDir: /tmp/
    dumpImage: ubuntu
    dumpCommand:
      - bash
      - -c
      - |
        mkfifo /tmp/pipe-file
        for i in {1..10}
        do
          echo $i
          sleep 0.1
        done > /tmp/pipe-file
    exportImage: ubuntu
    exportCommand:
      - bash
      - -c
      - |
        while [ ! -e /tmp/pipe-file ]; do sleep 1; done
        cat /tmp/pipe-file
```


### ScaleWorkload

ScaleWorkload is used to scale up or scale down a Kubernetes workload.