OpenShift

OpenShift is a Kubernetes-based platform for running containers. Red Hat bases its OpenShift Container Platform product on the upstream project, OpenShift Origin (now known as OKD). Fedora runs OpenShift Container Platform rather than OpenShift Origin.

Getting Started

If you’ve never used OpenShift before, a good place to start is Minishift, which deploys OpenShift Origin in a virtual machine.
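For example, once Minishift is installed, a typical first session looks something like this (a sketch; the exact steps depend on your hypervisor setup):

$ minishift start
$ eval $(minishift oc-env)   # adds the bundled oc client to your PATH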

OpenShift in Fedora Infrastructure

Fedora has two OpenShift deployments: Staging OpenShift and Production OpenShift. In addition to being the staging deployment of OpenShift itself, the staging deployment is intended to be a place for developers to deploy the staging version of their applications.

Some OpenShift features are not functional in Fedora’s deployment, mainly due to the lack of HTTP/2 support at the time of this writing. Additionally, users are not allowed to alter configuration, roll out new deployments, run builds, and so on through the web UI or the CLI.

Web User Interface

Some of the web user interface is currently non-functional since it requires HTTP/2. The rest is locked down to be read-only, making it of limited usefulness.

Command-line Interface

Although the CLI is also locked down to be read-only, it is possible to view logs and request debugging containers from os-control01 or your local machine. For example, to view the logs of a deployment in staging:

$ ssh os-control01.iad2.fedoraproject.org
$ oc login api.ocp.fedoraproject.org:6443
You must obtain an API token by visiting https://oauth-openshift.apps.ocp.fedoraproject.org/oauth/token/request

$ oc login api.ocp.fedoraproject.org:6443 --token=<Your token here>
$ oc get pods
librariesio2fedmsg-28-bfj52          1/1       Running     522        28d
$ oc logs librariesio2fedmsg-28-bfj52
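A debugging container can be requested in the same way with oc debug, which starts a copy of a pod and opens an interactive shell in it (the pod name below is the one from the example above):

$ oc debug pod/librariesio2fedmsg-28-bfj52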

Deploying Your Application

Applications are deployed to OpenShift using Ansible playbooks. You will need to create an Ansible Role for your application. A role is made up of several YAML files that define OpenShift objects. To create these YAML objects you have two options:

  1. Copy and paste an existing role and do your best to rewrite all the files to work for your application. You will likely make mistakes that you won’t find until you run the playbook, and when you learn that your configuration is invalid, it won’t be clear where you went wrong.

  2. Set up your own deployment of OpenShift where you can click through the web UI to create your application (and occasionally use the built-in text editor when the UI doesn’t have buttons for a feature you need). Once you’ve done that, you can export all the configuration files and drop them into the infra ansible repository. They will be "messy" with lots of additional data OpenShift adds for you (including old revisions of the configuration).

Both approaches have their downsides. #1 has a very long feedback cycle: you edit the file, commit it to the infra repository, and then run the playbook. #2 generates most of the configuration, but produces crufty files. Additionally, your own OpenShift deployment will likely not be set up the same way Fedora’s is, so you may still produce configurations that won’t work.
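With approach #2, objects can be exported from your own deployment using oc get; for example, assuming a hypothetical application named myapp:

$ oc get buildconfig/myapp -o yaml > buildconfig.yml
$ oc get service/myapp -o yaml > service.yml

The exported files will still contain the extra data OpenShift adds (status fields, old revisions, and so on), which should be stripped before dropping them into the infra ansible repository.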

You will likely need (at a minimum) the following objects:

  • A BuildConfig

    • This defines how your container is built.

  • An ImageStream

    • This references a "stream" of container images and lets you trigger deployments or image builds based on changes in a stream.

  • A Deployment

    • This defines how your container is deployed (how many replicas, what ports are available, etc.).

    • Note: DeploymentConfigs are deprecated, do not use them!

  • A Service

    • An internal load balancer that routes traffic to your pods.

  • A Route

    • This exposes a Service at a host name.

  • Storage

    • On the Fedora Infra clusters in both staging and production, an automated storage provisioning system is in place. To use it, simply create a PersistentVolumeClaim (PVC):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: PVCNAME-UPDATE
spec:
  volumeName: PVCNAME-VOL-UPDATE
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: 'ocs-storagecluster-cephfs'
  volumeMode: Filesystem
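As a rough sketch of the Service and Route objects described above, here is a minimal pair (the name myapp and the port 8080 are hypothetical placeholders, not Fedora’s actual configuration):

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
  - port: 8080
    targetPort: 8080
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: myapp
spec:
  to:
    kind: Service
    name: myapp
  port:
    targetPort: 8080

When no host is specified, OpenShift generates one from the route name, the project, and the cluster’s default routing subdomain.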