Latest revision as of 14:16, 14 October 2020
dashboard
Sep 2020 update, "About the Future of Communishift": https://lists.fedoraproject.org/archives/list/devel@lists.fedoraproject.org/message/Q74QYG54MNBCY7UX2GITPAZYKHD6BYFH/
As of Oct 2020, https://console-openshift-console.apps.os.fedorainfracloud.org/ is not available.
communishift
communishift is the name for the OpenShift community cluster run by the Fedora project.
It's intended to be a place where community members can test, deploy, and run things that benefit the community, in a self-service manner and at a lower SLE (Service Level Expectation) than services directly run and supported by Infrastructure.
It's also an incubator for applications that may someday be more fully supported once they prove their worth.
Finally, it's a place for Infrastructure folks to learn, test, and discover OpenShift in a less constrained setting than our production clusters.
Technical Details
- Running OpenShift 4.3
- 8 compute nodes and 3 master nodes that are all dell fx1 blades
- Total vCPU: 552 vCPU
- Total Memory: 1.72 TiB
Access
For now, users in the group 'communishift' will be granted access via Fedora Ipsilon and have a quota of 5 projects, 10 pods, and 5 volume claims. If you need additional resources, please file a ticket, and make sure to note what you need them for.
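A first session might look like the sketch below. The API server URL here is an assumption inferred from the console URL mentioned above (OpenShift 4 clusters conventionally expose the API at `api.<cluster-domain>:6443`), and the project name is made up; substitute your own token from the web console's "Copy Login Command" dialog.

```shell
# Assumed API endpoint, derived from the console hostname on this page:
SERVER=https://api.os.fedorainfracloud.org:6443

# Paste the token shown by the web console's "Copy Login Command" dialog:
oc login --token="$TOKEN" --server="$SERVER"

# Create a project (counts against the 5-project quota):
oc new-project my-test-app

# Inspect the quota objects applied to the new project:
oc get resourcequota -n my-test-app
```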
We hope to open access to larger groups soon after a trial period.
We reserve the right to remove any access, application, or content at any time, for any reason.
Support
- It is 100% up to the community member or group to write, maintain, deploy, back up, and manage their app (see Best Practices below).
- Fedora Infrastructure will keep the cluster up and functioning, anything above that is up to YOU.
Best Practices
- You can maintain your application however you like, but here are some best practices:
- Create a pagure/github/gitlab project for your application and use its git to track your config. Allow people to file tickets and PRs against it.
- Make sure your application has a clear note on where to file issues or how to contact the owner(s).
- Back up your application config and data often (see 'oc export').
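A minimal backup sketch along those lines is below. Note that `oc export` was removed from the `oc` client in OpenShift 4 (this cluster runs 4.3), so the usual substitute is `oc get -o yaml`. The project name is hypothetical; the resource list is one reasonable choice, not a complete export of every object type.

```shell
# Hypothetical project name -- substitute your own:
PROJECT=myapp

# Dated backup file, e.g. myapp-backup-20201014.yaml
BACKUP_FILE="${PROJECT}-backup-$(date +%Y%m%d).yaml"

# Dump the common object types in the project to a single YAML file.
# ('oc export' no longer exists in OpenShift 4; 'oc get -o yaml' replaces it.)
oc get all,configmap,secret,pvc,route -n "$PROJECT" -o yaml > "$BACKUP_FILE"
```

Keeping that file in the git repository you created for the application (secrets excluded or encrypted) means a cluster reinstall costs you a redeploy, not your work.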
Questions and Answers
- How does this relate to the Fedora Infrastructure Private Cloud?
- It replaces it. We found it difficult to deploy and manage OpenStack with the resources available to us, so we elected to switch over to OpenShift.
- My app/thing is super popular and I want it to be more supported, what can I do?
- Submit an RFR asking Infrastructure to take it on and move it to a higher-SLE state.
- The cluster was reinstalled and I lost my work, what can I do?
- You should have backed up your application and data as we asked you to. Sorry.
- I need more (cpu/storage/apps/privs), what can I do?
- Ask for them in a ticket.