Don't allow installing Che with OpenShift OAuth when no OAuth user exists (#74)

* First round of impl in the Go code
* Correct management of the status
* Add `users` permissions in cluster role
* Upgrade `operator-sdk` CLI pre-req to `v0.10.0`
* Produce a CSV diff to help reviews
* Fix trailing spaces that break `gen-csv` description management
* Update nightly CSVs
* Add the new OLM descriptors in the new nightly CSVs

Signed-off-by: David Festal <dfestal@redhat.com>
David Festal 2019-09-02 18:47:30 +02:00 committed by GitHub
parent 09a90beea6
commit 9682f3448f
20 changed files with 1173 additions and 98 deletions


@ -31,3 +31,9 @@ rules:
- infrastructures
verbs:
- get
- apiGroups:
- user.openshift.io
resources:
- users
verbs:
- list


@ -3,7 +3,7 @@
The OLM package scripts use some required dependencies that need to be installed:
- [curl](https://curl.haxx.se/)
- [https://github.com/kislyuk/yq](https://github.com/kislyuk/yq) and not [http://mikefarah.github.io/yq/](http://mikefarah.github.io/yq/)
- [Operator SDK v0.8.2](https://github.com/operator-framework/operator-sdk/blob/v0.8.2/doc/user/install-operator-sdk.md)
- [Operator SDK v0.10.0](https://github.com/operator-framework/operator-sdk/blob/v0.10.0/doc/user/install-operator-sdk.md)
If these dependencies are not installed, `docker-run.sh` can be used as a container bootstrap to run a given script with the appropriate dependencies.


@ -17,7 +17,7 @@ GIT_ROOT_DIRECTORY=$(git rev-parse --show-toplevel)
IMAGE_NAME="eclipse/che-operator-olm-build"
# Operator SDK
OPERATOR_SDK_VERSION=v0.8.2
OPERATOR_SDK_VERSION=v0.10.0
init() {
BLUE='\033[1;34m'


@ -1,3 +1,3 @@
role-path: generated/current-role.yaml
role-paths: [ "generated/roles/role.yaml" ]
operator-path: ../../deploy/operator.yaml
crd-cr-paths: ["../../deploy/crds/org_v1_che_crd.yaml"]


@ -134,7 +134,7 @@ spec:
```
***important:*** The operator only tracks resources in its own namespace; a CheCluster created in any other namespace is ignored.
The operator will now create pods for Eclipse Che. The deployment status can be tracked in the Operator logs with the command:
```
```
$ kubectl logs -n my-eclipse-che che-operator-554c564476-fl98z
```
***important:*** pod name is different on each installation
@ -145,7 +145,7 @@ spec:
Eclipse Che URL can be tracked by searching for available trace:
```
$ kubectl logs -f -n my-eclipse-che che-operator-7b6b4bcb9c-m4m2m | grep "Eclipse Che is now available"
time="2019-08-01T13:31:05Z" level=info msg="Eclipse Che is now available at: http://che-my-eclipse-che.gcp.my-ide.cloud"
time="2019-08-01T13:31:05Z" level=info msg="Eclipse Che is now available at: http://che-my-eclipse-che.gcp.my-ide.cloud"
```
When Eclipse Che is ready, the Eclipse Che URL is displayed in the CheCluster resource, in the `status` section
```


@ -0,0 +1,15 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: checlusters.org.eclipse.che
spec:
group: org.eclipse.che
names:
kind: CheCluster
listKind: CheClusterList
plural: checlusters
singular: checluster
scope: Namespaced
version: v1
subresources:
status: {}


@ -0,0 +1,119 @@
--- /home/dfestal/go/src/github.com/eclipse/che-operator/olm/eclipse-che-preview-kubernetes/deploy/olm-catalog/eclipse-che-preview-kubernetes/9.9.9-nightly.1564753341/eclipse-che-preview-kubernetes.v9.9.9-nightly.1564753341.clusterserviceversion.yaml 2019-09-02 16:52:51.000000000 +0200
+++ /home/dfestal/go/src/github.com/eclipse/che-operator/olm/eclipse-che-preview-kubernetes/deploy/olm-catalog/eclipse-che-preview-kubernetes/9.9.9-nightly.1567437268/eclipse-che-preview-kubernetes.v9.9.9-nightly.1567437268.clusterserviceversion.yaml 2019-09-02 17:14:29.000000000 +0200
@@ -49,12 +49,12 @@
categories: Developer Tools
certified: "false"
containerImage: quay.io/eclipse/che-operator:nightly
- createdAt: "2019-08-02T13:42:21Z"
+ createdAt: "2019-09-02T15:14:29Z"
description: A Kube-native development solution that delivers portable and collaborative
developer workspaces.
repository: https://github.com/eclipse/che-operator
support: Eclipse Foundation
- name: eclipse-che-preview-kubernetes.v9.9.9-nightly.1564753341
+ name: eclipse-che-preview-kubernetes.v9.9.9-nightly.1567437268
namespace: placeholder
spec:
apiservicedefinitions: {}
@@ -91,6 +91,21 @@
path: cheClusterRunning
x-descriptors:
- urn:alm:descriptor:io.kubernetes.phase
+ - description: Reason of the current status
+ displayName: Reason
+ path: reason
+ x-descriptors:
+ - 'urn:alm:descriptor:text'
+ - description: Message explaining the current status
+ displayName: Message
+ path: message
+ x-descriptors:
+ - 'urn:alm:descriptor:text'
+ - description: Link providing help related to the current status
+ displayName: Help link
+ path: helpLink
+ x-descriptors:
+ - 'urn:alm:descriptor:org.w3:link'
version: v1
description: |
A collaborative Kubernetes-native development solution that delivers Kubernetes workspaces and in-browser IDE for rapid cloud application development.
@@ -98,7 +113,7 @@
## Prerequisites
- Operator Lifecycle Manager (OLM) needs to be installed.
- Kubernetes Platform. For OpenShift, the installation is directly made from OperatorHub UI in the admin console.
-
+
OLM installation can be checked by running the command:
```
$ kubectl get pods --all-namespaces | grep olm
@@ -109,23 +124,23 @@
olm packageserver-5c5f64947b-trghp 1/1 Running 0 9m56s
olm packageserver-5c5f64947b-zqvxg 1/1 Running 0 9m56s
```
-
+
## How to Install
Install `Eclipse Che Operator` by following instructions in top right button `Install`.
-
+
A new pod che-operator is created in `my-eclipse-che` namespace
-
+
```
$ kubectl get pods --all-namespaces | grep my-eclipse-che
my-eclipse-che che-operator-554c564476-fl98z 1/1 Running 0 13s
```
-
+
The operator is now providing new Custom Resources Definitions: `checluster.org.eclipse.che`
-
+
Create a new Eclipse Che instance by creating a new CheCluster resource:
-
+
On the bottom of this page, there is a section `Custom Resource Definitions` with `Eclipse Che Cluster` name.
-
+
Click on `View YAML Example` *Link* and copy the content to a new file named `my-eclipse-che.yaml`
**Important!** Make sure you provide **K8s.ingressDomain** which is a global ingress domain of your k8s cluster, for example, `gcp.my-ide.cloud`
Create the new CheCluster by creating the resource in the `my-eclipse-che` namespace :
@@ -138,10 +153,10 @@
$ kubectl logs -n my-eclipse-che che-operator-554c564476-fl98z
```
***important:*** pod name is different on each installation
-
+
When all Eclipse Che containers are running, the Eclipse Che URL is printed
-
-
+
+
Eclipse Che URL can be tracked by searching for available trace:
```
$ kubectl logs -f -n my-eclipse-che che-operator-7b6b4bcb9c-m4m2m | grep "Eclipse Che is now available"
@@ -151,7 +166,7 @@
```
$ kubectl describe checluster/eclipse-che -n my-eclipse-che
```
-
+
```
Status:
Che Cluster Running: Available
@@ -159,7 +174,7 @@
Che Version: 7.0.0
...
```
-
+
By opening this URL in a web browser, Eclipse Che is ready to use.
## Defaults
By default, the operator deploys Eclipse Che with:
@@ -335,5 +350,5 @@
maturity: stable
provider:
name: Eclipse Foundation
- replaces: eclipse-che-preview-kubernetes.v9.9.9-nightly.1563883405
- version: 9.9.9-nightly.1564753341
+ replaces: eclipse-che-preview-kubernetes.v9.9.9-nightly.1564753341
+ version: 9.9.9-nightly.1567437268
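
The description above tells users to copy the `View YAML Example` content into a file named `my-eclipse-che.yaml` and to set **K8s.ingressDomain**. As a rough illustration only — the `apiVersion`, `kind`, and namespace come from the CRD and instructions above, while the exact `k8s.ingressDomain` field path and values are assumptions; copy the real example from the UI — such a file might look like:

```yaml
apiVersion: org.eclipse.che/v1
kind: CheCluster
metadata:
  name: eclipse-che
  namespace: my-eclipse-che
spec:
  k8s:
    # Global ingress domain of the k8s cluster (assumed field path)
    ingressDomain: 'gcp.my-ide.cloud'
```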


@ -1,7 +1,7 @@
packageName: eclipse-che-preview-kubernetes
channels:
- name: stable
currentCSV: eclipse-che-preview-kubernetes.v7.0.0
- name: nightly
currentCSV: eclipse-che-preview-kubernetes.v9.9.9-nightly.1564753341
- currentCSV: eclipse-che-preview-kubernetes.v9.9.9-nightly.1567437268
name: nightly
- currentCSV: eclipse-che-preview-kubernetes.v7.0.0
name: stable
defaultChannel: stable
packageName: eclipse-che-preview-kubernetes


@ -1,3 +1,3 @@
operator-path: ../../deploy/operator.yaml
role-path: generated/current-role.yaml
role-paths: [ "generated/roles/role.yaml", "generated/roles/cluster_role.yaml"]
crd-cr-paths: ["../../deploy/crds/org_v1_che_crd.yaml"]


@ -0,0 +1,15 @@
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
name: checlusters.org.eclipse.che
spec:
group: org.eclipse.che
names:
kind: CheCluster
listKind: CheClusterList
plural: checlusters
singular: checluster
scope: Namespaced
version: v1
subresources:
status: {}


@ -0,0 +1,60 @@
--- /home/dfestal/go/src/github.com/eclipse/che-operator/olm/eclipse-che-preview-openshift/deploy/olm-catalog/eclipse-che-preview-openshift/9.9.9-nightly.1564753341/eclipse-che-preview-openshift.v9.9.9-nightly.1564753341.clusterserviceversion.yaml 2019-08-28 12:17:35.000000000 +0200
+++ /home/dfestal/go/src/github.com/eclipse/che-operator/olm/eclipse-che-preview-openshift/deploy/olm-catalog/eclipse-che-preview-openshift/9.9.9-nightly.1567437269/eclipse-che-preview-openshift.v9.9.9-nightly.1567437269.clusterserviceversion.yaml 2019-09-02 17:14:29.000000000 +0200
@@ -46,12 +46,12 @@
categories: Developer Tools, OpenShift Optional
certified: "false"
containerImage: quay.io/eclipse/che-operator:nightly
- createdAt: "2019-08-02T13:42:22Z"
+ createdAt: "2019-09-02T15:14:29Z"
description: A Kube-native development solution that delivers portable and collaborative
developer workspaces in OpenShift.
repository: https://github.com/eclipse/che-operator
support: Eclipse Foundation
- name: eclipse-che-preview-openshift.v9.9.9-nightly.1564753341
+ name: eclipse-che-preview-openshift.v9.9.9-nightly.1567437269
namespace: placeholder
spec:
apiservicedefinitions: {}
@@ -93,6 +93,21 @@
path: cheClusterRunning
x-descriptors:
- urn:alm:descriptor:io.kubernetes.phase
+ - description: Reason of the current status
+ displayName: Reason
+ path: reason
+ x-descriptors:
+ - 'urn:alm:descriptor:text'
+ - description: Message explaining the current status
+ displayName: Message
+ path: message
+ x-descriptors:
+ - 'urn:alm:descriptor:text'
+ - description: Link providing help related to the current status
+ displayName: Help link
+ path: helpLink
+ x-descriptors:
+ - 'urn:alm:descriptor:org.w3:link'
version: v1
description: |
A collaborative Kubernetes-native development solution that delivers OpenShift workspaces and in-browser IDE for rapid cloud application development.
@@ -227,6 +242,12 @@
- infrastructures
verbs:
- get
+ - apiGroups:
+ - user.openshift.io
+ resources:
+ - users
+ verbs:
+ - list
serviceAccountName: che-operator
deployments:
- name: che-operator
@@ -363,5 +384,5 @@
maturity: stable
provider:
name: Eclipse Foundation
- replaces: eclipse-che-preview-openshift.v9.9.9-nightly.1563883406
- version: 9.9.9-nightly.1564753341
+ replaces: eclipse-che-preview-openshift.v9.9.9-nightly.1564753341
+ version: 9.9.9-nightly.1567437269


@ -1,7 +1,7 @@
packageName: eclipse-che-preview-openshift
channels:
- name: stable
currentCSV: eclipse-che-preview-openshift.v7.0.0
- name: nightly
currentCSV: eclipse-che-preview-openshift.v9.9.9-nightly.1564753341
- currentCSV: eclipse-che-preview-openshift.v9.9.9-nightly.1567437269
name: nightly
- currentCSV: eclipse-che-preview-openshift.v7.0.0
name: stable
defaultChannel: stable
packageName: eclipse-che-preview-openshift


@ -73,5 +73,9 @@ do
echo " - Updating the 'stable' channel with new release in the package descriptor: ${packageFilePath}"
sed -e "s/${lastPackagePreReleaseVersion}/${RELEASE}/" "${packageFilePath}" > "${packageFilePath}.new"
mv "${packageFilePath}.new" "${packageFilePath}"
diff -u "${packageFolderPath}/${lastPackageNightlyVersion}/${packageName}.v${lastPackageNightlyVersion}.clusterserviceversion.yaml" \
"${packageFolderPath}/${RELEASE}/${packageName}.v${RELEASE}.clusterserviceversion.yaml" \
> "${packageFolderPath}/${RELEASE}/${packageName}.v${RELEASE}.clusterserviceversion.yaml.diff" || true
done
cd "${CURRENT_DIR}"


@ -30,31 +30,29 @@ do
newNightlyPackageVersion="9.9.9-nightly.$(date +%s)"
echo " => will create a new version: ${newNightlyPackageVersion}"
./build-roles.sh
for role in "$(pwd)"/generated/roles/*.yaml
do
echo " - Updating new package version with roles defined in: ${role}"
cp "$role" generated/current-role.yaml
operator-sdk olm-catalog gen-csv --csv-version "${newNightlyPackageVersion}" --from-version="${lastPackageVersion}" 2>&1 | sed -e 's/^/ /'
containerImage=$(sed -n 's|^ *image: *\([^ ]*/che-operator:[^ ]*\) *|\1|p' "${packageFolderPath}/${newNightlyPackageVersion}/${packageName}.v${newNightlyPackageVersion}.clusterserviceversion.yaml")
createdAt=$(date -u +%FT%TZ)
echo " - Updating new package version fields:"
echo " - containerImage => ${containerImage}"
echo " - createdAt => ${createdAt}"
sed \
-e "s|containerImage:.*$|containerImage: ${containerImage}|" \
-e "s/createdAt:.*$/createdAt: \"${createdAt}\"/" \
"${packageFolderPath}/${newNightlyPackageVersion}/${packageName}.v${newNightlyPackageVersion}.clusterserviceversion.yaml" \
> "${packageFolderPath}/${newNightlyPackageVersion}/${packageName}.v${newNightlyPackageVersion}.clusterserviceversion.yaml.new"
mv "${packageFolderPath}/${newNightlyPackageVersion}/${packageName}.v${newNightlyPackageVersion}.clusterserviceversion.yaml.new" \
"${packageFolderPath}/${newNightlyPackageVersion}/${packageName}.v${newNightlyPackageVersion}.clusterserviceversion.yaml"
done
echo " - Updating new package version with roles defined in: ${role}"
operator-sdk olm-catalog gen-csv --csv-version "${newNightlyPackageVersion}" --from-version="${lastPackageVersion}" 2>&1 | sed -e 's/^/ /'
containerImage=$(sed -n 's|^ *image: *\([^ ]*/che-operator:[^ ]*\) *|\1|p' "${packageFolderPath}/${newNightlyPackageVersion}/${packageName}.v${newNightlyPackageVersion}.clusterserviceversion.yaml")
createdAt=$(date -u +%FT%TZ)
echo " - Updating new package version fields:"
echo " - containerImage => ${containerImage}"
echo " - createdAt => ${createdAt}"
sed \
-e "s|containerImage:.*$|containerImage: ${containerImage}|" \
-e "s/createdAt:.*$/createdAt: \"${createdAt}\"/" \
"${packageFolderPath}/${newNightlyPackageVersion}/${packageName}.v${newNightlyPackageVersion}.clusterserviceversion.yaml" \
> "${packageFolderPath}/${newNightlyPackageVersion}/${packageName}.v${newNightlyPackageVersion}.clusterserviceversion.yaml.new"
mv "${packageFolderPath}/${newNightlyPackageVersion}/${packageName}.v${newNightlyPackageVersion}.clusterserviceversion.yaml.new" \
"${packageFolderPath}/${newNightlyPackageVersion}/${packageName}.v${newNightlyPackageVersion}.clusterserviceversion.yaml"
echo " - Copying the CRD file"
cp "${packageFolderPath}/${lastPackageVersion}/eclipse-che-preview-${platform}.crd.yaml" "${packageFolderPath}/${newNightlyPackageVersion}/eclipse-che-preview-${platform}.crd.yaml"
echo " - Updating the 'nightly' channel with new version in the package descriptor: ${packageFilePath}"
sed -e "s/${lastPackageVersion}/${newNightlyPackageVersion}/" "${packageFilePath}" > "${packageFilePath}.new"
mv "${packageFilePath}.new" "${packageFilePath}"
diff -u "${packageFolderPath}/${lastPackageVersion}/${packageName}.v${lastPackageVersion}.clusterserviceversion.yaml" \
"${packageFolderPath}/${newNightlyPackageVersion}/${packageName}.v${newNightlyPackageVersion}.clusterserviceversion.yaml" \
> "${packageFolderPath}/${newNightlyPackageVersion}/${packageName}.v${newNightlyPackageVersion}.clusterserviceversion.yaml.diff" || true
done
cd "${CURRENT_DIR}"


@ -205,6 +205,16 @@ type CheClusterStatus struct {
DevfileRegistryURL string `json:"devfileRegistryURL"`
// PluginRegistryURL is the Plugin registry protocol+route/ingress
PluginRegistryURL string `json:"pluginRegistryURL"`
// A human readable message indicating details about why the CheCluster is in this state.
// +optional
Message string `json:"message,omitempty"`
// A brief CamelCase message indicating details about why the CheCluster is in this state.
// e.g. 'InstallOrUpdateFailed'
// +optional
Reason string `json:"reason,omitempty"`
// A URL pointing to help related to the current Operator status.
// +optional
HelpLink string `json:"helpLink,omitempty"`
}
// +k8s:deepcopy-gen:interfaces=k8s.io/apimachinery/pkg/runtime.Object


@ -12,19 +12,22 @@
package che
import (
"k8s.io/apimachinery/pkg/api/resource"
"context"
"time"
orgv1 "github.com/eclipse/che-operator/pkg/apis/org/v1"
"github.com/eclipse/che-operator/pkg/deploy"
"github.com/eclipse/che-operator/pkg/util"
oauth "github.com/openshift/api/oauth/v1"
routev1 "github.com/openshift/api/route/v1"
userv1 "github.com/openshift/api/user/v1"
"github.com/sirupsen/logrus"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
"k8s.io/api/extensions/v1beta1"
rbac "k8s.io/api/rbac/v1"
"k8s.io/apimachinery/pkg/api/errors"
"k8s.io/apimachinery/pkg/api/resource"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/apimachinery/pkg/types"
@ -36,7 +39,6 @@ import (
"sigs.k8s.io/controller-runtime/pkg/reconcile"
logf "sigs.k8s.io/controller-runtime/pkg/runtime/log"
"sigs.k8s.io/controller-runtime/pkg/source"
"time"
)
var log = logf.Log.WithName("controller_che")
@ -47,12 +49,24 @@ var (
// Add creates a new CheCluster Controller and adds it to the Manager. The Manager will set fields on the Controller
// and Start it when the Manager is Started.
func Add(mgr manager.Manager) error {
return add(mgr, newReconciler(mgr))
reconciler, err := newReconciler(mgr)
if err != nil {
return err
}
return add(mgr, reconciler)
}
// newReconciler returns a new reconcile.Reconciler
func newReconciler(mgr manager.Manager) reconcile.Reconciler {
return &ReconcileChe{client: mgr.GetClient(), scheme: mgr.GetScheme()}
func newReconciler(mgr manager.Manager) (reconcile.Reconciler, error) {
noncachedClient, err := client.New(mgr.GetConfig(), client.Options{})
if err != nil {
return nil, err
}
return &ReconcileChe{
client: mgr.GetClient(),
nonCachedClient: noncachedClient,
scheme: mgr.GetScheme(),
}, nil
}
// add adds a new Controller to mgr with r as the reconcile.Reconciler
@ -73,7 +87,10 @@ func add(mgr manager.Manager, r reconcile.Reconciler) error {
logrus.Errorf("Failed to add OpenShift route to scheme: %s", err)
}
if err := oauth.AddToScheme(mgr.GetScheme()); err != nil {
logrus.Errorf("Failed to add oAuth to scheme: %s", err)
logrus.Errorf("Failed to add OpenShift OAuth to scheme: %s", err)
}
if err := userv1.AddToScheme(mgr.GetScheme()); err != nil {
logrus.Errorf("Failed to add OpenShift User to scheme: %s", err)
}
}
@ -180,10 +197,23 @@ type ReconcileChe struct {
// This client, initialized using mgr.Client() above, is a split client
// that reads objects from the cache and writes to the apiserver
client client.Client
// This client is a plain, non-cached client
// that reads objects directly from the API server,
// for objects that we don't intend
// to further watch
nonCachedClient client.Client
scheme *runtime.Scheme
tests bool
}
const (
failedNoOpenshiftUserReason = "InstallOrUpdateFailed"
failedNoOpenshiftUserMessage = "No real user exists in the OpenShift cluster." +
" Either disable OpenShift OAuth integration or add at least one user (details in the Help link)"
howToCreateAUserLinkOS4 = "https://docs.openshift.com/container-platform/4.1/authentication/understanding-identity-provider.html#identity-provider-overview_understanding-identity-provider"
howToCreateAUserLinkOS3 = "https://docs.openshift.com/container-platform/3.11/install_config/configuring_authentication.html"
)
// Reconcile reads that state of the cluster for a CheCluster object and makes changes based on the state read
// and what is in the CheCluster.Spec. The Controller will requeue the Request to be processed again if the returned error is non-nil or
// Result.Requeue is true, otherwise upon completion it will remove the work from the queue.
@ -222,12 +252,46 @@ func (r *ReconcileChe) Reconcile(request reconcile.Request) (reconcile.Result, e
// To use Openshift v4 OAuth, the OAuth endpoints are served from a namespace
// and NOT from the Openshift API Master URL (as in v3)
// So we also need the self-signed certificate to access them (same as the Che server)
(isOpenShift4 && instance.Spec.Auth.OpenShiftOauth && ! instance.Spec.Server.TlsSupport) {
(isOpenShift4 && instance.Spec.Auth.OpenShiftOauth && !instance.Spec.Server.TlsSupport) {
if err := r.CreateTLSSecret(instance, "", "self-signed-certificate"); err != nil {
return reconcile.Result{}, err
}
}
if !tests {
deployment := &appsv1.Deployment{}
name := "che"
cheFlavor := instance.Spec.Server.CheFlavor
if cheFlavor == "codeready" {
name = cheFlavor
}
err = r.client.Get(context.TODO(), types.NamespacedName{Name: name, Namespace: instance.Namespace}, deployment)
if err != nil && instance.Status.CheClusterRunning != UnavailableStatus {
if err := r.SetCheUnavailableStatus(instance, request); err != nil {
return reconcile.Result{Requeue: true, RequeueAfter: time.Second * 1}, err
}
}
}
if instance.Spec.Auth.OpenShiftOauth {
users := &userv1.UserList{}
listOptions := &client.ListOptions{}
if err := r.nonCachedClient.List(context.TODO(), listOptions, users); err != nil {
return reconcile.Result{}, err
}
if len(users.Items) < 1 {
helpLink := ""
if isOpenShift4 {
helpLink = howToCreateAUserLinkOS4
} else {
helpLink = howToCreateAUserLinkOS3
}
if err := r.SetStatusDetails(instance, request, failedNoOpenshiftUserReason, failedNoOpenshiftUserMessage, helpLink); err != nil {
return reconcile.Result{}, err
}
return reconcile.Result{Requeue: true, RequeueAfter: time.Second * 10}, nil
}
// create a secret with OpenShift API crt to be added to keystore that RH SSO will consume
baseURL, err := util.GetClusterPublicHostname(isOpenShift4)
if err != nil {
@ -239,21 +303,11 @@ func (r *ReconcileChe) Reconcile(request reconcile.Request) (reconcile.Result, e
}
}
}
if !tests {
deployment := &appsv1.Deployment{}
name := "che"
cheFlavor := instance.Spec.Server.CheFlavor
if cheFlavor == "codeready" {
name = cheFlavor
}
err = r.client.Get(context.TODO(), types.NamespacedName{Name: name, Namespace: instance.Namespace}, deployment)
if err != nil && instance.Status.CheClusterRunning != UnavailableStatus {
if err := r.SetCheUnavailableStatus(instance, request); err != nil {
return reconcile.Result{Requeue: true, RequeueAfter: time.Second * 1}, err
}
}
if err := r.SetStatusDetails(instance, request, "", "", ""); err != nil {
return reconcile.Result{}, err
}
// create service accounts:
// che is the one which token is used to create workspace objects
// che-workspace is SA used by plugins like exec and terminal with limited privileges
@ -346,7 +400,7 @@ func (r *ReconcileChe) Reconcile(request reconcile.Request) (reconcile.Result, e
return reconcile.Result{Requeue: true, RequeueAfter: time.Second * 5}, err
}
}
desiredImage := util.GetValue(instance.Spec.Database.PostgresImage, deploy.DefaultPostgresImage(cheFlavor))
effectiveImage := pgDeployment.Spec.Template.Spec.Containers[0].Image
desiredImagePullPolicy := util.GetValue(string(instance.Spec.Database.PostgresImagePullPolicy), deploy.DefaultPullPolicyFromDockerImage(desiredImage))
@ -402,7 +456,7 @@ func (r *ReconcileChe) Reconcile(request reconcile.Request) (reconcile.Result, e
protocol = "https"
}
addRegistryRoute := func (registryType string) (string, error) {
addRegistryRoute := func(registryType string) (string, error) {
registryName := registryType + "-registry"
host := ""
if !isOpenShift {
@ -431,7 +485,7 @@ func (r *ReconcileChe) Reconcile(request reconcile.Request) (reconcile.Result, e
return protocol + "://" + host, nil
}
addRegistryDeployment := func (
addRegistryDeployment := func(
registryType string,
registryImage string,
registryImagePullPolicy corev1.PullPolicy,
@ -444,7 +498,7 @@ func (r *ReconcileChe) Reconcile(request reconcile.Request) (reconcile.Result, e
// Create a new registry service
registryLabels := deploy.GetLabels(instance, registryName)
registryService := deploy.NewService(instance, registryName, []string{"http"}, []int32{8080}, registryLabels)
if err := r.CreateService(instance,registryService); err != nil {
if err := r.CreateService(instance, registryService); err != nil {
return &reconcile.Result{}, err
}
// Create a new registry deployment
@ -478,7 +532,7 @@ func (r *ReconcileChe) Reconcile(request reconcile.Request) (reconcile.Result, e
logrus.Infof("Deployment %s is in the rolling update state", registryName)
k8sclient.GetDeploymentRollingUpdateStatus(registryName, instance.Namespace)
}
desiredMemRequest, err := resource.ParseQuantity(registryMemoryRequest)
if err != nil {
logrus.Errorf("Wrong quantity for %s deployment Memory Request: %s", registryName, err)
@ -715,7 +769,7 @@ func (r *ReconcileChe) Reconcile(request reconcile.Request) (reconcile.Result, e
logrus.Infof("Deployment %s is in the rolling update state", "keycloak")
k8sclient.GetDeploymentRollingUpdateStatus("keycloak", instance.Namespace)
}
desiredImage := util.GetValue(instance.Spec.Auth.KeycloakImage, deploy.DefaultKeycloakImage(cheFlavor))
effectiveImage := effectiveKeycloakDeployment.Spec.Template.Spec.Containers[0].Image
desiredImagePullPolicy := util.GetValue(string(instance.Spec.Auth.KeycloakImagePullPolicy), deploy.DefaultPullPolicyFromDockerImage(desiredImage))
@ -726,7 +780,7 @@ func (r *ReconcileChe) Reconcile(request reconcile.Request) (reconcile.Result, e
storedOpenshiftApiCertSecretVersion := effectiveKeycloakDeployment.Annotations["che.openshift-api-crt.version"]
if effectiveImage != desiredImage ||
effectiveImagePullPolicy != desiredImagePullPolicy ||
cheCertSecretVersion != storedCheCertSecretVersion ||
cheCertSecretVersion != storedCheCertSecretVersion ||
openshiftApiCertSecretVersion != storedOpenshiftApiCertSecretVersion {
newKeycloakDeployment := deploy.NewKeycloakDeployment(instance, keycloakPostgresPassword, keycloakAdminPassword, cheFlavor, cheCertSecretVersion, openshiftApiCertSecretVersion)
logrus.Infof(`Updating Keycloak deployment with:
@ -810,31 +864,8 @@ func (r *ReconcileChe) Reconcile(request reconcile.Request) (reconcile.Result, e
return reconcile.Result{}, err
}
if !tests {
if effectiveCheDeployment.Status.AvailableReplicas != 1 {
instance, _ := r.GetCR(request)
if err := r.SetCheUnavailableStatus(instance, request); err != nil {
return reconcile.Result{Requeue: true, RequeueAfter: time.Second * 1}, err
}
scaled := k8sclient.GetDeploymentStatus(cheDeploymentToCreate.Name, instance.Namespace)
if !scaled {
return reconcile.Result{Requeue: true, RequeueAfter: time.Second * 5}, err
}
err = r.client.Get(context.TODO(), types.NamespacedName{Name: cheDeploymentToCreate.Name, Namespace: instance.Namespace}, effectiveCheDeployment)
if effectiveCheDeployment.Status.AvailableReplicas == 1 {
if err := r.SetCheAvailableStatus(instance, request, protocol, cheHost); err != nil {
instance, _ = r.GetCR(request)
return reconcile.Result{Requeue: true, RequeueAfter: time.Second * 1}, err
}
if instance.Status.CheVersion != cheImageTag {
instance.Status.CheVersion = cheImageTag
if err := r.UpdateCheCRStatus(instance, "version", cheImageTag); err != nil {
instance, _ = r.GetCR(request)
return reconcile.Result{Requeue: true, RequeueAfter: time.Second * 1}, err
}
}
}
}
if effectiveCheDeployment.Status.Replicas > 1 {
// Specific case: a Rolling update is happening
logrus.Infof("Deployment %s is in the rolling update state", cheDeploymentToCreate.Name)
if err := r.SetCheRollingUpdateStatus(instance, request); err != nil {
instance, _ = r.GetCR(request)
@ -848,6 +879,33 @@ func (r *ReconcileChe) Reconcile(request reconcile.Request) (reconcile.Result, e
return reconcile.Result{Requeue: true, RequeueAfter: time.Second * 1}, err
}
}
} else {
if effectiveCheDeployment.Status.AvailableReplicas < 1 {
// Deployment was just created
instance, _ := r.GetCR(request)
if err := r.SetCheUnavailableStatus(instance, request); err != nil {
return reconcile.Result{Requeue: true, RequeueAfter: time.Second * 1}, err
}
scaled := k8sclient.GetDeploymentStatus(cheDeploymentToCreate.Name, instance.Namespace)
if !scaled {
return reconcile.Result{Requeue: true, RequeueAfter: time.Second * 5}, err
}
effectiveCheDeployment, err = r.GetEffectiveDeployment(instance, cheDeploymentToCreate.Name)
}
if effectiveCheDeployment.Status.AvailableReplicas == 1 &&
instance.Status.CheClusterRunning != AvailableStatus {
if err := r.SetCheAvailableStatus(instance, request, protocol, cheHost); err != nil {
instance, _ = r.GetCR(request)
return reconcile.Result{Requeue: true, RequeueAfter: time.Second * 1}, err
}
if instance.Status.CheVersion != cheImageTag {
instance.Status.CheVersion = cheImageTag
if err := r.UpdateCheCRStatus(instance, "version", cheImageTag); err != nil {
instance, _ = r.GetCR(request)
return reconcile.Result{Requeue: true, RequeueAfter: time.Second * 1}, err
}
}
}
}
}
if effectiveCheDeployment.Spec.Template.Spec.Containers[0].Image != cheDeploymentToCreate.Spec.Template.Spec.Containers[0].Image {
@ -949,7 +1007,7 @@ func (r *ReconcileChe) Reconcile(request reconcile.Request) (reconcile.Result, e
logrus.Errorf("Wrong quantity for Che deployment Memory Limit: %s", err)
return reconcile.Result{}, err
}
desiredImagePullPolicy := util.GetValue(string(instance.Spec.Server.CheImagePullPolicy), deploy.DefaultPullPolicyFromDockerImage(cheImageRepo + ":" + cheImageTag))
desiredImagePullPolicy := util.GetValue(string(instance.Spec.Server.CheImagePullPolicy), deploy.DefaultPullPolicyFromDockerImage(cheImageRepo+":"+cheImageTag))
effectiveImagePullPolicy := string(effectiveCheDeployment.Spec.Template.Spec.Containers[0].ImagePullPolicy)
desiredSelfSignedCert := instance.Spec.Server.SelfSignedCert
effectiveSelfSignedCert := r.GetDeploymentEnvVarSource(effectiveCheDeployment, "CHE_SELF__SIGNED__CERT") != nil


@ -18,6 +18,7 @@ import (
orgv1 "github.com/eclipse/che-operator/pkg/apis/org/v1"
oauth "github.com/openshift/api/oauth/v1"
routev1 "github.com/openshift/api/route/v1"
userv1 "github.com/openshift/api/user/v1"
"github.com/sirupsen/logrus"
appsv1 "k8s.io/api/apps/v1"
corev1 "k8s.io/api/core/v1"
@ -70,26 +71,45 @@ func TestCheController(t *testing.T) {
},
},
}
userList := &userv1.UserList{
Items: []userv1.User{
userv1.User{
ObjectMeta: metav1.ObjectMeta{
Name: "user1",
},
},
userv1.User{
ObjectMeta: metav1.ObjectMeta{
Name: "user2",
},
},
},
}
// Objects to track in the fake client.
objs := []runtime.Object{
cheCR, pgPod,
cheCR, pgPod, userList,
}
route := &routev1.Route{}
oAuthClient := &oauth.OAuthClient{}
users := &userv1.UserList{}
user := &userv1.User{}
// Register operator types with the runtime scheme
s := scheme.Scheme
s.AddKnownTypes(orgv1.SchemeGroupVersion, cheCR)
s.AddKnownTypes(routev1.SchemeGroupVersion, route)
s.AddKnownTypes(oauth.SchemeGroupVersion, oAuthClient)
s.AddKnownTypes(userv1.SchemeGroupVersion, users, user)
// Create a fake client to mock API calls
cl := fake.NewFakeClient(objs...)
tests := true
// Create a ReconcileChe object with the scheme and fake client
r := &ReconcileChe{client: cl, scheme: s, tests: tests}
r := &ReconcileChe{client: cl, nonCachedClient: cl, scheme: s, tests: tests}
// Mock request to simulate Reconcile() being called on an event for a
// watched resource .


@ -53,18 +53,46 @@ func (r *ReconcileChe) SetCheAvailableStatus(instance *orgv1.CheCluster, request
}
func (r *ReconcileChe) SetCheUnavailableStatus(instance *orgv1.CheCluster, request reconcile.Request) (err error) {
instance.Status.CheClusterRunning = UnavailableStatus
if err:= r.UpdateCheCRStatus(instance, "status: Che API", UnavailableStatus); err != nil {
instance, _ = r.GetCR(request)
return err
if instance.Status.CheClusterRunning != UnavailableStatus {
instance.Status.CheClusterRunning = UnavailableStatus
if err := r.UpdateCheCRStatus(instance, "status: Che API", UnavailableStatus); err != nil {
instance, _ = r.GetCR(request)
return err
}
}
return nil
}
func (r *ReconcileChe) SetCheRollingUpdateStatus(instance *orgv1.CheCluster, request reconcile.Request) (err error){
func (r *ReconcileChe) SetStatusDetails(instance *orgv1.CheCluster, request reconcile.Request, reason string, message string, helpLink string) (err error) {
if reason != instance.Status.Reason {
instance.Status.Reason = reason
if err := r.UpdateCheCRStatus(instance, "status: Reason", reason); err != nil {
instance, _ = r.GetCR(request)
return err
}
}
if message != instance.Status.Message {
instance.Status.Message = message
if err := r.UpdateCheCRStatus(instance, "status: Message", message); err != nil {
instance, _ = r.GetCR(request)
return err
}
}
if helpLink != instance.Status.HelpLink {
instance.Status.HelpLink = helpLink
if err := r.UpdateCheCRStatus(instance, "status: HelpLink", helpLink); err != nil {
instance, _ = r.GetCR(request)
return err
}
}
return nil
}
func (r *ReconcileChe) SetCheRollingUpdateStatus(instance *orgv1.CheCluster, request reconcile.Request) (err error) {
instance.Status.CheClusterRunning = RollingUpdateInProgressStatus
if err:= r.UpdateCheCRStatus(instance, "status", RollingUpdateInProgressStatus); err != nil {
if err := r.UpdateCheCRStatus(instance, "status", RollingUpdateInProgressStatus); err != nil {
instance, _ = r.GetCR(request)
return err
}