This is a page to track high-level work around repurposing our OpenStack cloud into a community OpenShift cluster.
Point of contact: Kevin Fenzi
Why?
We currently have an old RHOSP5 cloud running various things: Copr and its builders, test machines for package maintainers, development instances for our application developers, and so on. We were planning to move this to a newer OpenStack version, but we determined that it would be more of a maintenance burden than we would like going forward. We already have 4 OpenShift clusters deployed internally that we are moving applications to, so maintaining one more should not add much additional burden. This instance would be isolated from all our other machines and thus be a good place to allow community applications, development, or proof of concept work. Additionally, we can trial kubevirt for VM needs inside OpenShift.
Our internal OpenShift instances are all tightly controlled on permissions, and all applications are deployed via Ansible, allowing us to completely rebuild a cluster from Ansible. This community instance could be much more open and leave backups and deployment to the maintainers of each application/instance.
What?
- Redeploy machines that were used in OpenStack testing
  - move mgmt interfaces back to the normal mgmt network
  - reimage machines
  - 2 virthosts on one chassis and 1 on another, each with an os-master and an os-node
  - 1 ppc64le virthost
  - 2 aarch64 virthosts
  - 8 nodes (6 and 2) as bare metal OpenShift nodes
- Set up playbooks and deploy OpenShift 3.11 (see the inventory sketch after this list)
- Test deploying / migrating some applications from the old cloud
- Add kubevirt and test (see the example VM definition after this list)
- Determine policies
  - We will want to make sure Service Level Expectations (SLE) are clearly set for things hosted here
- Migrate old cloud workloads to community OpenShift or another location
- Retire old cloud hardware that is EOL and add hardware that is still supported to community OpenShift
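As a rough illustration of the OpenShift 3.11 deployment step, here is a minimal openshift-ansible inventory sketch. The hostnames are hypothetical placeholders (not our real machine names) and the variable set is far from complete; it only shows the general shape of a 3.11 install:

 [OSEv3:children]
 masters
 nodes
 
 [OSEv3:vars]
 ansible_user=root
 openshift_deployment_type=origin
 openshift_release=v3.11
 
 [masters]
 # hypothetical hostname for illustration only
 os-master01.example.fedoraproject.org
 
 [nodes]
 os-master01.example.fedoraproject.org openshift_node_group_name='node-config-master'
 os-node01.example.fedoraproject.org openshift_node_group_name='node-config-compute'
 os-node02.example.fedoraproject.org openshift_node_group_name='node-config-compute'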
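For the kubevirt trial, a test VM is defined as a VirtualMachine object. Below is a minimal sketch using the public cirros demo container disk; the name and memory size are placeholder values, and the API version reflects what kubevirt shipped around this time:

 apiVersion: kubevirt.io/v1alpha3
 kind: VirtualMachine
 metadata:
   name: testvm            # placeholder name
 spec:
   running: false          # create stopped; start later with: virtctl start testvm
   template:
     spec:
       domain:
         devices:
           disks:
           - name: containerdisk
             disk:
               bus: virtio
         resources:
           requests:
             memory: 64Mi  # tiny demo-sized VM
       volumes:
       - name: containerdisk
         containerDisk:
           image: kubevirt/cirros-container-disk-demo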
Outstanding Questions
- What name should we use? We could re-use fedorainfracloud, but that's kind of generic
- External IP blocks and how to use them best
- Policy around access and SLE
When?
Starting now
Who?
Kevin, Smooge, Patrick, and Rick
How?
Deploy via our regular Ansible playbooks.
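In practice that means an ansible-playbook run against the cluster inventory, something like the sketch below. The playbook path is illustrative only, not the actual playbook name in our ansible repo:

 # playbook path is a placeholder for whichever group playbook we end up writing
 ansible-playbook -i inventory playbooks/groups/os-cluster.yml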
Status
Still in the planning stages / initial deployment.