Add a script to help users generate os_images and release_images
nocturnalastro committed Mar 26, 2024
1 parent f796a8c commit 0c0be05
Showing 4 changed files with 284 additions and 31 deletions.
123 changes: 92 additions & 31 deletions docs/inventory.md
@@ -18,6 +18,63 @@ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.

## Inventory Validation

### OS Image and Release Image requirements
You are now required to provide `os_images` and `release_images` entries for the OpenShift version you want to deploy.

Note: A script that automates steps 1 to 8 is provided in the `hack` directory; it has a few dependencies (see `hack/README.md`).

The `os_images` entry for a release can be generated by:
1. Navigating to https://mirror.openshift.com/pub/openshift-v4/<ARCH>/dependencies/rhcos/, where `<ARCH>` is the architecture you wish to deploy onto.
   - For example, using `x86_64` produces:
     [https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/](https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/)
2. Selecting the Y-stream you wish to deploy, e.g. `4.15`.
3. Selecting the latest Z-stream version that is <= the version you wish to deploy, e.g. for `4.15.5` you could select `4.15.0`.
4. Navigating into that directory, where you will find the `live iso` and `live rootfs image` files; note down their URLs.
5. Going to the following URL, replacing the placeholders, where `<ARCH>` is the same as before and `<OS_VERSION>` is the version you selected in the previous step:
   https://mirror.openshift.com/pub/openshift-v4/<ARCH>/clients/ocp/<OS_VERSION>/release.txt

   - For example, using the same arch and `4.15.0` produces: [https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/4.15.0/release.txt](https://mirror.openshift.com/pub/openshift-v4/x86_64/clients/ocp/4.15.0/release.txt)
6. Gathering the machine-os version from the release.txt, in this case `415.92.202402201450-0`.
7. Producing the `os_images` entry using the following template:
```yaml
os_images:
  - openshift_version: <Y STREAM VERSION>
    cpu_architecture: <ARCH>
    url: <URL FOR LIVE ISO FILE from step 4>
    rootfs_url: <URL FOR ROOTFS IMAGE from step 4>
    version: <MACHINE-OS VERSION from step 6>
```
For the 4.15.5 example this would look like:
```yaml
os_images:
  - openshift_version: "4.15"
    cpu_architecture: "x86_64"
    url: "https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.15/4.15.0/rhcos-4.15.0-x86_64-live.x86_64.iso"
    rootfs_url: "https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.15/4.15.0/rhcos-live-rootfs.x86_64.img"
    version: "415.92.202402201450-0"
```
8. Building the `release_images` entry using the following template:
```yaml
release_images:
  - openshift_version: <Y STREAM VERSION>
    cpu_architecture: <ARCH>
    cpu_architectures:
      - <ARCH>
    url: "quay.io/openshift-release-dev/ocp-release:<Z STREAM VERSION>-<ARCH>"
    version: <Z STREAM VERSION>
```
For the 4.15.5 example this would look like:
```yaml
release_images:
  - openshift_version: "4.15"
    cpu_architecture: "x86_64"
    cpu_architectures:
      - "x86_64"
    url: "quay.io/openshift-release-dev/ocp-release:4.15.5-x86_64"
    version: "4.15.5"
```
9. Inserting `os_images` and `release_images` into the `all` section of your inventory, as shown in the sketch below.
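
For illustration, a minimal sketch of how these entries might sit in the inventory, assuming variables are placed under `all.vars` as in a standard Ansible YAML inventory; the values are taken from the 4.15.5 example above:

```yaml
all:
  vars:
    os_images:
      - openshift_version: "4.15"
        cpu_architecture: "x86_64"
        url: "https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.15/4.15.0/rhcos-4.15.0-x86_64-live.x86_64.iso"
        rootfs_url: "https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.15/4.15.0/rhcos-live-rootfs.x86_64.img"
        version: "415.92.202402201450-0"
    release_images:
      - openshift_version: "4.15"
        cpu_architecture: "x86_64"
        cpu_architectures:
          - "x86_64"
        url: "quay.io/openshift-release-dev/ocp-release:4.15.5-x86_64"
        version: "4.15.5"
```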

### Cluster config checks:

#### Highly Available OpenShift cluster node checks
@@ -46,32 +103,7 @@ In addition to that, the following checks must be met for both HA and SNO deploy
- All values of `role` are supported
- If any nodes are virtual (vendor = KVM) then a vm_host is defined

The three possible groups of nodes are `masters`, `workers` and `day2_workers` (`day2_workers` are supported with the on-prem assisted installer only).
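
For illustration, a minimal sketch of a `masters` group; the host name, addresses and `vm_host` value are invented, and the field layout follows the `day2_workers` example later in this document (the `vm_host` line shows what the checks require when `vendor` is KVM):

```yaml
masters:
  vars:
    role: master
    vendor: KVM
  hosts:
    master1:
      ansible_host: 10.60.0.101
      bmc_address: 172.28.11.21
      mac: 3C:FD:FE:78:AB:01
      vm_host: vm_host1  # a vm_host must be defined because vendor is KVM
```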

### Network checks

@@ -195,7 +227,7 @@ network_config:
- name: ens1f0
type: ethernet
mac: "40:A6:B7:3D:B3:70"
          state: down
- name: ens1f1
type: ethernet
mac: "40:A6:B7:3D:B3:71"
@@ -651,7 +683,7 @@ You must have these services when using PXE deployment
vendor: pxe
bmc_address: "nfvpe-21.oot.lab.eng.bos.redhat.com"
bmc_port: 8082
```
> **Note**: The BMCs of the nodes in the cluster must be routable from the bastion host, and the HTTP Store must be routable from the BMCs.

@@ -727,12 +759,41 @@ all:
ansible_host: 192.168.10.17
bmc_ip: 172.30.10.7
```

# On-prem assisted installer only

These features are only available with the on-prem assisted installer. To use them, set `use_agent_based_installer: false` in the `all` section of the inventory.
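
A minimal sketch of how this might look, assuming inventory variables live under `all.vars`:

```yaml
all:
  vars:
    use_agent_based_installer: false
```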

## Defining a password for the discovery ISO

If users wish to provide a password for the discovery ISO, they must define `hashed_discovery_password` in the `all` section of the inventory.
The value for `hashed_discovery_password` can be created by using `mkpasswd --method=SHA-512 MyAwesomePassword`.
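
A minimal sketch of how this might look, assuming inventory variables live under `all.vars`; the hash below is an illustrative placeholder, not real `mkpasswd` output:

```yaml
all:
  vars:
    # Placeholder value -- generate a real one with: mkpasswd --method=SHA-512 MyAwesomePassword
    hashed_discovery_password: "$6$<salt>$<sha512-digest>"
```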


## Operators

It is possible to install a few operators as part of the cluster installation. These operators are the Local Storage Operator (`install_lso: True`), OpenShift Data Foundation (`install_odf: True`) and OpenShift Virtualization (`install_cnv: True`).
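
A minimal sketch enabling all three operators, assuming these flags live under `all.vars` like the other inventory variables:

```yaml
all:
  vars:
    install_lso: True
    install_odf: True
    install_cnv: True
```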

## Day 2 nodes

Day 2 nodes are added to an existing cluster. The installation of day 2 nodes is built into the main path of our automation because, with the assisted installer, day 2 nodes can be on a different L2 network, which the main flow does not allow.

Add a second ISO name parameter to the inventory to avoid conflict with the original:

```yaml
# day2 workers require custom parameter
day2_discovery_iso_name: "discovery/day2_discovery-image.iso"
```

Then add the stanza for day2 workers:

```yaml
day2_workers:
vars:
role: worker
vendor: HPE
hosts:
worker3: # Ensure this does not conflict with any existing workers
ansible_host: 10.60.0.106
bmc_address: 172.28.11.26
mac: 3C:FD:FE:78:AB:05
```
15 changes: 15 additions & 0 deletions hack/README.md
@@ -0,0 +1,15 @@
# generate_os_release_images.py

## Requirements

```shell
pip install -r ./pip-requirements.txt
```

## Usage
The script can be used to generate `os_images` and `release_images`.

Here's an example for multiple different OCP versions:
```shell
./generate_os_release_images.py -a x86_64 -v 4.12.29 -v 4.11.30 -v 4.13.2 -v 4.14.12 -v 4.15.1
```
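
The script prints YAML to stdout, which can then be placed in the `all` section of the inventory. For a single version the output has roughly this shape (values are illustrative, taken from the 4.15.5/x86_64 walkthrough in `docs/inventory.md`):

```yaml
os_images:
  - openshift_version: "4.15"
    cpu_architecture: "x86_64"
    url: "https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.15/4.15.0/rhcos-4.15.0-x86_64-live.x86_64.iso"
    rootfs_url: "https://mirror.openshift.com/pub/openshift-v4/x86_64/dependencies/rhcos/4.15/4.15.0/rhcos-live-rootfs.x86_64.img"
    version: "415.92.202402201450-0"
release_images:
  - openshift_version: "4.15"
    cpu_architecture: "x86_64"
    cpu_architectures:
      - "x86_64"
    url: "quay.io/openshift-release-dev/ocp-release:4.15.5-x86_64"
    version: "4.15.5"
```
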
175 changes: 175 additions & 0 deletions hack/generate_os_release_images.py
@@ -0,0 +1,175 @@
#! /usr/bin/env python3
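"""Generate os_images and release_images inventory entries.

For each requested OCP version, find the newest RHCOS release on
mirror.openshift.com that is not newer than the requested version, read the
machine-os version from its release.txt, and print the combined result as
YAML on stdout.
"""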

try:
from BeautifulSoup import BeautifulSoup
except ImportError:
from bs4 import BeautifulSoup

import yaml
import semver
import requests
import re
import argparse

DEBUG = False

def generate_image_values(ocp_version, arch):
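    """Build the os_images and release_images data for one OCP version and architecture."""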
    rhcos = requests.get(
        # List the RHCOS builds for this Y-stream (arch-specific path, matching the image URLs built below)
        f"https://mirror.openshift.com/pub/openshift-v4/{arch}/dependencies/rhcos/{ocp_version.major}.{ocp_version.minor}"
    )
if not rhcos.ok:
raise ValueError(
f"Failed to find rhcos dependencies for version: {ocp_version.major}.{ocp_version.minor}"
)

page = BeautifulSoup(rhcos.content, "lxml")
    # Collect the version directory names, dropping the first and last anchors
    versions = [a["href"].strip("/") for a in page.find_all("a")[1:-1]]

os_version = None
for v in versions:
ver = semver.Version.parse(v)
if ver.compare(ocp_version) < 1 and (
os_version is None or os_version.compare(ver) == -1
):
os_version = ver

if os_version is None:
raise ValueError(
f"Failed to find a version <= {ocp_version} in {versions.join(', ')}"
)

release_info = requests.get(
f"https://mirror.openshift.com/pub/openshift-v4/{arch}/clients/ocp/{os_version}/release.txt"
)
if not release_info.ok:
raise ValueError(f"Failed to find release.txt for version: {os_version}")

rhcos_version_match = re.search(
r"^\s+machine-os (?P<rhcos_version>.+) Red Hat Enterprise Linux CoreOS$",
release_info.content.decode(),
re.MULTILINE,
)
    if rhcos_version_match is None:
        raise ValueError(f"Failed to find the machine-os version in release.txt for {os_version}")
    rhcos_version = rhcos_version_match.groupdict()["rhcos_version"]

if DEBUG:
print(arch)
print(ocp_version)
print(os_version)
print(rhcos_version)

result = {
"os_images": {
str(os_version): {
"openshift_version": f"{os_version.major}.{os_version.minor}",
"cpu_architecture": f"{arch}",
"url": f"https://mirror.openshift.com/pub/openshift-v4/{arch}/dependencies/rhcos/{os_version.major}.{os_version.minor}/{os_version}/rhcos-{os_version}-{arch}-live.{arch}.iso",
"rootfs_url": f"https://mirror.openshift.com/pub/openshift-v4/{arch}/dependencies/rhcos/{os_version.major}.{os_version.minor}/{os_version}/rhcos-live-rootfs.{arch}.img",
"version": f"{rhcos_version}",
},
},
"release_images": [
{
"openshift_version": f"{ocp_version.major}.{ocp_version.minor}",
"cpu_architecture": arch,
"cpu_architectures": [arch],
"url": f"quay.io/openshift-release-dev/ocp-release:{ocp_version}-{arch}",
"version": str(ocp_version),
},
],
}

return result


def merge_results(results):
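    """Merge per-version results, de-duplicating os_images entries by RHCOS version."""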
merged = {
"os_images": {},
"release_images": [],
}

for r in results:
for os_v, os in r["os_images"].items():
merged["os_images"][os_v] = os
for os in r["release_images"]:
merged["release_images"].append(os)

res = {
"os_images": [],
"release_images": merged["release_images"],
}

for os in merged["os_images"].values():
res["os_images"].append(os)

return res


def verify_urls(merged):
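    """HEAD-request each generated os image URL to confirm the files exist."""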
for os in merged["os_images"]:
url_head = requests.head(os["url"])
if not url_head.ok:
raise ValueError(f"file not found at expected url {os['url']}")
rootfs_url_head = requests.head(os["rootfs_url"])
if not rootfs_url_head.ok:
raise ValueError(f"file not found at expected url {os['rootfs_url']}")

    # Release image URLs are container image references (e.g. quay.io/...), not
    # HTTP URLs, so they are not checked with a HEAD request here.


def main(ocp_versions, arch, verify):
results = []
for v in ocp_versions:
results.append(generate_image_values(v, arch))

if DEBUG:
print(results)

merged_results = merge_results(results)
if DEBUG:
print(merged_results)

class IndentDumper(yaml.Dumper):
def increase_indent(self, flow=False, indentless=False):
return super(IndentDumper, self).increase_indent(flow, False)

if verify:
verify_urls(merged_results)

print(yaml.dump(merged_results, Dumper=IndentDumper))


if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
"-a",
"--arch",
help="target archictecture",
)
parser.add_argument(
"-v",
"--version",
action="append",
)
parser.add_argument(
"--skip-verify",
action="store_false",
default=True,
)
parser.add_argument(
"--debug",
action="store_true",
default=False,
)

args = parser.parse_args()

DEBUG = args.debug

ocp_versions = []
for v in args.version:
ocp_versions.append(semver.Version.parse(v))

main(ocp_versions=ocp_versions, arch=args.arch, verify=args.skip_verify)
2 changes: 2 additions & 0 deletions hack/pip-requirments.txt
@@ -0,0 +1,2 @@
semver==3.0.2
beautifulsoup4==4.12.3
# Also needed by generate_os_release_images.py: PyYAML and requests are imported
# directly, and lxml is the parser handed to BeautifulSoup.
PyYAML
requests
lxml
