
Readd validate inventory and display plan, remove os_images and release_images and set default installer to agent based. #307

Merged
6 changes: 4 additions & 2 deletions deploy_cluster.yml
Original file line number Diff line number Diff line change
@@ -1,6 +1,8 @@
---
- import_playbook: playbooks/validate_inventory.yml

- import_playbook: playbooks/deploy_cluster_agent_based_installer.yml
when: (use_agent_based_installer | default(false)) | bool
when: (use_agent_based_installer | default(true)) | bool

- import_playbook: playbooks/deploy_cluster_assisted_installer.yml
when: not ((use_agent_based_installer | default(false)) | bool)
when: not ((use_agent_based_installer | default(true)) | bool)
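
With this change the agent-based installer becomes the default installation path. To keep using the on-prem assisted installer instead, the flag can be pinned explicitly in the inventory — a minimal sketch, assuming the `all`-section variable layout used elsewhere in these docs:

```yaml
all:
  vars:
    # Opt out of the new agent-based default and fall back to the
    # on-prem assisted installer flow.
    use_agent_based_installer: false
```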
2 changes: 2 additions & 0 deletions deploy_day2_workers.yml
@@ -1,4 +1,6 @@
---
- import_playbook: playbooks/validate_inventory.yml

- import_playbook: playbooks/create_vms.yml
when: groups['day2_workers'] | default([]) | length > 0
vars:
2 changes: 1 addition & 1 deletion deploy_prerequisites.yml
@@ -21,6 +21,6 @@
- import_playbook: playbooks/deploy_registry.yml

- import_playbook: playbooks/deploy_assisted_installer_onprem.yml
when: not ((use_agent_based_installer | default(false)) | bool)
when: not ((use_agent_based_installer | default(true)) | bool)

- import_playbook: playbooks/deploy_sushy_tools.yml
154 changes: 111 additions & 43 deletions docs/inventory.md
@@ -18,6 +18,63 @@ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.

## Inventory Validation

### OS Image and Release Image requirements
You are now required to provide the `os_images` and `release_images` for the OpenShift version you want to deploy.

Note: We have provided a script in the `hack` directory which automates steps 1 to 8; however, it has some dependencies.

The os_image for a release can be generated by:
1. Navigating to https://mirror.openshift.com/pub/openshift-v4/<ARCH>/dependencies/rhcos/, where `<ARCH>` is the architecture you wish to deploy onto.
- For example, using `x86_64` produces:
[https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/](https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/)
2. Selecting the Y stream you wish to deploy, e.g. 4.15.
3. Selecting the latest Z stream version less than or equal to the version you wish to deploy, e.g. for 4.15.5 you could select 4.15.0.
4. Navigating into that directory, where you can find the `live iso` and `live rootfs image` files; note down their URLs.
5. Going to the following URL after replacing the placeholders, where `<ARCH>` is the same as before and `<OS_VERSION>` is the version you selected in the previous step:
https://mirror.openshift.com/pub/openshift-v4/<ARCH>/clients/ocp/<OS_VERSION>/release.txt

- For example, using the arch as before and `4.15.0`, producing: [https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/4.15.0/release.txt](https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/4.15.0/release.txt)
6. Gathering the machine-os version from the release.txt file, in this case `415.92.202402201450-0`.
7. You can now produce the `os_images` entry using the following template:
```yaml
os_images:
  - openshift_version: <Y STREAM VERSION>
    cpu_architecture: <ARCH>
    url: <URL FOR LIVE ISO FILE from step 4>
    rootfs_url: <URL FOR ROOTFS IMAGE from step 4>
    version: <MACHINE-OS VERSION from step 6>
```
For the 4.15.5 example this would look like:
```yaml
os_images:
  - openshift_version: "4.15"
    cpu_architecture: "x86_64"
    url: "https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.15/4.15.0/rhcos-4.15.0-x86_64-live.x86_64.iso"
    rootfs_url: "https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.15/4.15.0/rhcos-live-rootfs.x86_64.img"
    version: "415.92.202402201450-0"
```
8. You can build your `release_images` entry using the following template:
```yaml
release_images:
  - openshift_version: <Y STREAM VERSION>
    cpu_architecture: <ARCH>
    cpu_architectures:
      - <ARCH>
    url: "quay.io/openshift-release-dev/ocp-release:<Z STREAM VERSION>-<ARCH>"
    version: <Z STREAM VERSION>
```
For the 4.15.5 example this would look like:
```yaml
release_images:
  - openshift_version: "4.15"
    cpu_architecture: "x86_64"
    cpu_architectures:
      - "x86_64"
    url: "quay.io/openshift-release-dev/ocp-release:4.15.5-x86_64"
    version: "4.15.5"
```
9. Insert `os_images` and `release_images` into the `all` section of your inventory.
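
The version-to-fields mapping in steps 7 and 8 can be sketched as a small helper; `release_image_entry` is a hypothetical name, and the quay.io URL template is the one shown in step 8 (the machine-os version for `os_images` must still be read from release.txt by hand):

```python
def release_image_entry(z_version: str, arch: str) -> dict:
    """Build a release_images entry from a Z-stream version and CPU arch,
    following the template in step 8."""
    # Derive the Y stream from the Z stream, e.g. "4.15.5" -> "4.15".
    y_version = ".".join(z_version.split(".")[:2])
    return {
        "openshift_version": y_version,
        "cpu_architecture": arch,
        "cpu_architectures": [arch],
        "url": f"quay.io/openshift-release-dev/ocp-release:{z_version}-{arch}",
        "version": z_version,
    }

entry = release_image_entry("4.15.5", "x86_64")
print(entry["url"])  # -> quay.io/openshift-release-dev/ocp-release:4.15.5-x86_64
```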

### Cluster config checks:

#### Highly Available OpenShift cluster node checks
@@ -46,32 +103,7 @@ In addition to that, the following checks must be met for both HA and SNO deploy
- All values of `role` are supported
- If any nodes are virtual (vendor = KVM) then a vm_host is defined

There three possible groups of nodes are `masters`, `workers` and `day2_workers`.

#### Day 2 nodes

Day 2 nodes are added to an existing cluster. The reason why the installation of day 2 nodes is built into the main path of our automation, is that for assisted installer day 2 nodes can be on a different L2 network which the main flow does not allow.

Add a second ISO name parameter to the inventory to avoid conflict with the original:

```yaml
# day2 workers require custom parameter
day2_discovery_iso_name: "discovery/day2_discovery-image.iso"
```

Then add the stanza for day2 workers:

```yaml
day2_workers:
vars:
role: worker
vendor: HPE
hosts:
worker3: # Ensure this does not conflict with any existing workers
ansible_host: 10.60.0.106
bmc_address: 172.28.11.26
mac: 3C:FD:FE:78:AB:05
```
The three possible groups of nodes are `masters`, `workers`, and `day2_workers` (`day2_workers` are on-prem assisted installer only).

### Network checks

@@ -195,7 +227,7 @@ network_config:
- name: ens1f0
type: ethernet
mac: "40:A6:B7:3D:B3:70"
state: down
state: down
- name: ens1f1
type: ethernet
mac: "40:A6:B7:3D:B3:71"
@@ -595,17 +627,6 @@ The basic network configuration of the inventory for the fully bare metal deploy
bmc_address: 172.30.10.7
# ...
```
## Additional Partition Deployment

For OCP 4.8+ deployments you can set partitions if required on the nodes. You do this by adding the snippet below to the node definition. Please ensure you provide the correct label and size(MiB) for the additional partitions you want to create. The device can either be the drive in which RHCOS image needs to be installed or it can be any additional drive on the node that requires partitioning. In the case that the device is equal to the host's `installation_disk_path` then a partition will be added defined by `disks_rhcos_root`. All additional partitions must be added under `extra_partitions` key as per the example below.
```yaml
disks:
- device: "{{ installation_disk_path }}"
extra_partitions:
partition_1: 1024
partition_2: 1024
```

## PXE Deployment
You must have these services when using PXE deployment
@@ -651,9 +672,9 @@ You must have these services when using PXE deployment
vendor: pxe
bmc_address: "nfvpe-21.oot.lab.eng.bos.redhat.com"
bmc_port: 8082

```
> **Note**: that the BMCs of the nodes in the cluster must be routable from the bastion host and the HTTP Store must be routable from the BMCs
> **Note**: The BMCs of the nodes in the cluster must be routable from the bastion host, and the HTTP Store must be routable from the BMCs.

These two examples are not the only type of clusters that can be deployed using Crucible. A hybrid cluster can be created by mixing virtual and bare metal nodes.

@@ -727,12 +748,59 @@ all:
ansible_host: 192.168.10.17
bmc_ip: 172.30.10.7
```
# Defining a password for the discovery iso.

# On-prem assisted installer only
These features require the on-prem assisted installer option.
To use them, set `use_agent_based_installer: false` in the `all` section of the inventory.

## Defining a password for the discovery ISO

If users wish to provide a password for the discovery ISO, they must define `hashed_discovery_password` in the `all` section of the inventory.
The value provided in `hashed_discovery_password` can be created by using `mkpasswd --method=SHA-512 MyAwesomePassword`.
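
If `mkpasswd` is not available on your system, `openssl passwd -6` produces a hash in the same SHA-512 crypt format (an alternative, assuming OpenSSL 1.1.1 or newer):

```shell
# Produce a SHA-512 crypt hash suitable for hashed_discovery_password.
# Note: passing secrets on the command line exposes them to the process list.
hashed_discovery_password="$(openssl passwd -6 'MyAwesomePassword')"
echo "${hashed_discovery_password}"  # prints a hash beginning with $6$
```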


# Operators
## Operators

It is possible to install a few operators as part of the cluster installation. These operators are the Local Storage Operator (`install_lso: True`), OpenShift Data Foundation (`install_odf: True`), and OpenShift Virtualization (`install_cnv: True`).
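
A sketch of how these flags might sit in the inventory — assuming, as with the other toggles in this document, that they belong in the `all` section vars:

```yaml
all:
  vars:
    install_lso: True   # Local Storage Operator
    install_odf: True   # OpenShift Data Foundation
    install_cnv: True   # OpenShift Virtualization
```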

## Day 2 nodes

Day 2 nodes are added to an existing cluster.
Day 2 node installation is built into the main path of our automation because, with the assisted installer, day 2 nodes can be on a different L2 network, which the main flow does not allow.

Add a second ISO name parameter to the inventory to avoid conflict with the original:

```yaml
# day2 workers require custom parameter
day2_discovery_iso_name: "discovery/day2_discovery-image.iso"
```

Then add the stanza for day2 workers:

```yaml
day2_workers:
vars:
role: worker
vendor: HPE
hosts:
worker3: # Ensure this does not conflict with any existing workers
ansible_host: 10.60.0.106
bmc_address: 172.28.11.26
mac: 3C:FD:FE:78:AB:05
```

## Additional Partition Deployment

For OCP 4.8+ deployments you can set partitions on the nodes if required.
You do this by adding the snippet below to the node definition.
Please ensure you provide the correct label and size (MiB) for the additional partitions you want to create.
The device can either be the drive on which the RHCOS image is to be installed, or any additional drive on the node that requires partitioning.
If the device equals the host's `installation_disk_path`, a partition defined by `disks_rhcos_root` will be added.
All additional partitions must be added under the `extra_partitions` key, as in the example below.

```yaml
disks:
- device: "{{ installation_disk_path }}"
extra_partitions:
partition_1: 1024
partition_2: 1024
```
15 changes: 15 additions & 0 deletions hack/README.md
@@ -0,0 +1,15 @@
# generate_os_release_images.py

## Requirements

```shell
pip install semver beautifulsoup4
```

## Usage
Can be used to generate `os_images` and `release_images`.

Here's an example for multiple different OCP versions:
```shell
./generate_os_release_images.py -a x86_64 -v 4.12.29 -v 4.11.30 -v 4.13.2 -v 4.14.12 -v 4.15.1
```