---
title: packet-ipi
authors:
- "@displague"
- TBD
reviewers:
- TBD
approvers:
- TBD
creation-date: 2020-08-13
last-updated: 2020-08-13
status: provisional
---

# Packet IPI

## Release Signoff Checklist

- [ ] Enhancement is `implementable`
- [ ] Design details are appropriately documented from clear requirements
- [ ] Test plan is defined
- [ ] Graduation criteria
- [ ] User-facing documentation is created in
[openshift-docs](https://github.com/openshift/openshift-docs/)

## Summary

[Support for OpenShift 4](https://github.com/RedHatSI/openshift-packet-deploy)
on Packet was initially provided through the user-provisioned infrastructure
(UPI) workflow.

This enhancement proposes adding tooling and documentation to help users deploy
OpenShift 4 on Packet using the installer provisioned infrastructure (IPI)
workflow.

## Motivation

Users who want to deploy OpenShift on single-tenant cloud infrastructure in
facilities where high levels of interconnection are possible may wish to take
advantage of Packet's bare metal and virtual networking infrastructure.

We can help these users simplify the process of installing OpenShift on Packet
by introducing an installer-provisioned option that takes advantage of Packet's
compute, memory, GPU, and storage device classes, hardware and IP reservations,
virtual networks, and fast direct-attached storage.

Currently, users who want to [deploy OpenShift on
Packet](https://www.packet.com/solutions/openshift/) must follow the [OpenShift
via Terraform on
Packet](https://github.com/RedHatSI/openshift-packet-deploy/blob/master/terraform/README.md)
UPI instructions. This process is not streamlined and cannot be integrated into
a simple experience like the [Try OpenShift 4](https://www.openshift.com/try)
workflow.

### Goals

The main goal of the Packet IPI is to provide users with an easier path to
running Red Hat OpenShift 4 on a horizontally scalable bare metal architecture
in data centers.

As a first step, we would add Packet IPI support to the installer codebase.
With accompanying documentation, this makes the Packet option available through
the CLI and enables users to deploy OpenShift with IPI in a way that is very
similar to UPI but simplifies the process.

Following the pattern of other IPI installers, this first step would include:

- Making Packet IPI documentation available here:
  <https://github.com/openshift/installer/blob/master/docs/user/packet/install_ipi.md>
- Adapting Packet's sig-cluster-lifecycle Cluster API v1alpha2 provider for use
  as OpenShift's Cluster API v1beta1 Machine driver (a hypothetical sketch of
  such a provider spec follows this list)
- Preparing the Terraform code and Packet types necessary for an IPI installer
- Creating a CI job that executes the provisioning scripts to test the IPI
  installer
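
To make the Machine driver adaptation above more concrete, the following is a
minimal Go sketch of what a Packet provider spec for the Machine API might
carry. All type and field names here are illustrative assumptions, not the
final API adapted from Packet's Cluster API provider.

```go
// Hypothetical sketch only: type and field names are illustrative, not the
// final API adapted from Packet's Cluster API provider.
package v1beta1

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PacketMachineProviderSpec would be embedded in an OpenShift Machine object
// to describe a Packet-backed node.
type PacketMachineProviderSpec struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// Plan is the Packet device plan (hardware model) to provision.
	Plan string `json:"plan"`

	// Facility is the Packet facility (data center) to provision in.
	Facility string `json:"facility"`

	// VirtualNetworks lists Packet VLANs to attach to the device.
	VirtualNetworks []string `json:"virtualNetworks,omitempty"`

	// CredentialsSecret references a secret holding the Packet API key.
	CredentialsSecret *corev1.LocalObjectReference `json:"credentialsSecret,omitempty"`
}
```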

### Non-Goals

It is outside the scope of this enhancement to explain the installation of
infrastructure elements that are considered required and owned by the user
(e.g. DNS, load balancer).

Additional enhancement requests will be created to determine the best approach
for fulfilling each need.

## Proposal

- Define and implement Packet types in the openshift/installer (a sketch of
  these types follows this list)
  - High-level variables should include:
    - API Key
    - Project ID
  - Bootstrap node variables include:
    - Device plan (defaulting to the minimum required)
    - Facility
    - Virtual Networks
  - Control plane variables build on the bootstrap variables and add:
    - Device plan (defaulting to provide a good experience)
  - Additional cluster creation should include:
    - Device plan (defaulting to provide a good experience)
    - Facility
    - Virtual Networks
  - Future: IP Reservations, Hardware Reservations, Spot Market Requests
- Write documentation to help users use and understand the Packet installer,
  including:
  - Packet usage basics (accounts and API keys)
  - Packet resource requirements and options
  - OpenShift Installer options
  - Non-Packet components that build on the experience (DNS configuration)
- Set up the CI job to have a running test suite
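
As a rough illustration of the Packet types mentioned in the first item above,
the following Go sketch shows how the platform and machine-pool settings might
be modeled in the installer. The names and fields are assumptions for
discussion, not a final design.

```go
// Hypothetical sketch only: names and fields are illustrative, not a final
// design for the openshift/installer Packet platform types.
package packet

// Platform holds the project-wide settings named in this proposal.
type Platform struct {
	// ProjectID identifies the Packet project to provision into; the API key
	// itself would be supplied out of band (e.g. via the environment) rather
	// than stored in the install-config.
	ProjectID string `json:"projectID"`

	// Facility is the default Packet facility (data center) for all machines.
	Facility string `json:"facility"`

	// VirtualNetworks lists Packet VLANs to attach to provisioned devices.
	VirtualNetworks []string `json:"virtualNetworks,omitempty"`

	// DefaultMachinePlatform applies to any machine pool that does not set
	// its own values.
	DefaultMachinePlatform *MachinePool `json:"defaultMachinePlatform,omitempty"`
}

// MachinePool holds per-pool overrides (bootstrap, control plane, compute).
type MachinePool struct {
	// Plan is the Packet device plan (hardware model); per this proposal it
	// defaults to the minimum required for bootstrap and to a plan that
	// provides a good experience for the control plane.
	Plan string `json:"plan,omitempty"`

	// Facility overrides the platform-wide facility for this pool.
	Facility string `json:"facility,omitempty"`
}
```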

### Implementation Details/Notes/Constraints

- RHCOS must be made available and maintained. Images available for
  provisioning may be custom or official. Official images, such as those for
  Container Linux and RancherOS, can be hosted outside of Packet; more details
  are available here: <https://github.com/packethost/packet-images>.
  Custom images can be hosted on git, as described here:
  - <https://www.packet.com/developers/docs/servers/operating-systems/custom-images/>
  - <https://www.packet.com/developers/changelog/custom-image-handling/>
- Packet does not offer a managed DNS solution. Route53, CloudFlare, RFC 2136
  (DNS Update), and NS-record-based zone forwarding are under consideration.
  Existing IPI drivers, such as BareMetal, have similar limitations.
- Load balancing can be achieved with the Packet CCM, which creates and
  maintains a MetalLB deployment within the cluster.
- Both public and private IPv4 and IPv6 addresses are made available for each
  node. Private and VLAN networks can be restricted at the project level (the
  default) or node by node. While custom ASNs and BGP routing features are
  supported, these and other networking features will not be exposed through
  the IPI. <https://www.packet.com/cloud/network/>
- OpenShift will run on RHCOS directly on bare metal, without the need for a
  VM layer.

### Risks and Mitigations

The official Packet Cluster API, CCM, and CSI drivers are all relatively new
and may undergo substantial design changes. This project will either need to
adopt and maintain the current releases, or adopt newer releases developed
alongside this project.

The CI will need to ensure that Packet resources are cleanly released after
both failed and successful builds. Packet is currently used in the bare metal
provisioning tests, so there is some prior art available. If the same account
is used for testing the IPI, it may need account quota increases.

## Design Details

### Test Plan

We will take inspiration for our testing strategy from existing platforms,
such as OpenStack, AWS, and GCP:

- A new e2e job will be created
- At the moment, we think unit tests will probably not be necessary. However, we
will cover any required changes to the existing codebase with appropriate
tests.

### Graduation Criteria

The proposal is to follow a graduation process based on the existence of a
running CI suite with end-to-end jobs. We will evaluate its feedback along with
feedback from QE and testers.

We consider the following as part of the necessary steps:

- CI jobs present and regularly scheduled.
- IPI document published in the OpenShift repo.
- End-to-end jobs are stable, passing, and evaluated with the same criteria as
  comparable IPI providers.
- Developers on the team have successfully used the IPI to deploy on Packet by
  following the documented procedure.

## Implementation History

Significant milestones in the life cycle of a proposal should be tracked in
`Implementation History`.

## Drawbacks

The IPI implementation provisions bare metal, which faces resource availability
limitations beyond those of a VM environment. CI, QE, documentation, and tests
will need to be generous when defining provisioning times. Tests and
documentation should also remain flexible about facility and device model
choices to avoid physical capacity limitations.

## Alternatives

People not using the IPI workflow can follow the [Packet
UPI](https://github.com/RedHatSI/openshift-packet-deploy/blob/master/terraform/README.md)
document. This requires more manual work and the knowledge necessary to
identify the Packet-specific parts, without any help from automation.

Users may also follow along with the [Deploying OpenShift 4.4 on
Packet](https://www.openshift.com/blog/deploying-openshift-4.4-on-packet) blog
post, but there is no automation provided.

ProjectID is being defined as a high-level variable. Arguably, OrganizationID
could take this position, as it represents the billing account. In the current
proposal, OrganizationID is inferred from ProjectID. In a variation where
OrganizationID is required at install time, ProjectID could become a required
or optional (inherited) property for each cluster. Keep in mind that projects
are one way to share a private network between nodes.

## Infrastructure Needed

As demonstrated in the Packet UPI, users will need access to a Packet project,
an API key, and no fewer than four servers. One of these is used for
bootstrapping, while the other three make up the control plane. The bootstrap
node can be removed once the cluster is installed.

In the UPI driver, an additional server was used as a bastion node for
installation needs. This IPI implementation will seek to avoid the need for
such a node through use of the CSI driver and a hosted RHCOS image.
