
Deploying the Medical Diagnosis pattern

Table of contents

  1. Prerequisites
    1. Setting up the storage for OpenShift Data Foundation
  2. How to deploy
    1. Using OpenShift GitOps to check on Application progress
    2. Viewing the Grafana-based dashboard
    3. Making some changes on the dashboard

Prerequisites

  1. An OpenShift cluster (go to https://console.redhat.com/openshift/create). See also sizing your cluster.
  2. A GitHub account (and a token for it with repo permissions, to read from and write to your fork)
  3. Storage set up in your public/private cloud for the x-ray images
  4. The helm binary, see https://helm.sh/docs/intro/install/

The use of this blueprint depends on having at least one running Red Hat OpenShift cluster. It is desirable to have one cluster for deploying the GitOps management hub assets and a separate cluster (or clusters) for the medical edge facilities.

If you do not have a running Red Hat OpenShift cluster you can start one on a public or private cloud by using Red Hat’s cloud service.

Setting up the storage for OpenShift Data Foundation

Red Hat OpenShift Data Foundation relies on underlying object-based storage provided by cloud providers. This storage needs to be publicly accessible. The following links provide information on how to create the cloud storage required for this validated pattern on several cloud providers.

There are some utilities that have been created for the validated patterns effort to speed the process.

If you are using the utilities, you first need to set some environment variables for your cloud provider keys.

For AWS (replace with your keys):

export AWS_ACCESS_KEY_ID=AKXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=gkXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
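
If you have the AWS CLI installed (it is not required by this pattern, but it is a convenient check), you can verify that the exported keys are usable:

# Prints the account and ARN associated with the exported credentials
aws sts get-caller-identity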

Next, create the S3 bucket and copy the data from the validated patterns public bucket into the bucket you created for your demo. You can do this in the cloud provider's console or use the scripts provided in the validated-patterns-utilities repo.

python s3-create.py -b mytest-bucket -r us-west-2 -p
python s3-sync-buckets.py -s com.validated-patterns.xray-source -t mytest-bucket -r us-west-2

The output should look similar to the following edited and compressed output.

[Bucket setup: /videos/bucket-setup.svg]
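
If you prefer not to use the helper scripts, roughly equivalent AWS CLI commands are sketched below (the bucket name and region are the same placeholders used above; adjust them for your environment):

# Create the destination bucket in your region
aws s3 mb s3://mytest-bucket --region us-west-2

# Copy the sample x-ray images from the public source bucket into your bucket
aws s3 sync s3://com.validated-patterns.xray-source s3://mytest-bucket --region us-west-2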

There is some key information you will need to note down, because it is required by the values-global.yaml file: the URL for the bucket and its name. At the very end of the values-global.yaml file you will see an s3: section where these values need to be changed.
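
For an AWS bucket, the base URL generally follows the pattern https://s3.<region>.amazonaws.com. As a purely illustrative way to note both values before editing values-global.yaml (the variable names here are arbitrary):

# Placeholders only: substitute your own bucket name and region
BUCKET=mytest-bucket
REGION=us-west-2
echo "bucket name: ${BUCKET}"
echo "bucket URL:  https://s3.${REGION}.amazonaws.com"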

How to deploy

  1. Fork this repo on GitHub. It is necessary to fork because your fork will be updated as part of the GitOps and DevOps processes.

  2. Clone the forked copy of this repo.

    git clone git@github.com:<your-username>/medical-diagnosis.git
    
  3. Create a local copy of the Helm values file that can safely include credentials

    DO NOT COMMIT THIS FILE

    You do not want to push personal credentials to GitHub.

    cp values-secret.yaml.template ~/values-secret.yaml
    vi ~/values-secret.yaml
    

    When you edit the file you can make changes to the various DB passwords if you wish.

  4. Customize the deployment for your cluster. Remember to use the data obtained from the cloud storage creation (S3, Blob Storage, Cloud Storage) when updating the YAML file. There are comments in the file highlighting what changes need to be made.

    vi values-global.yaml
    git add values-global.yaml
    git commit values-global.yaml
    git push
    
  5. Preview the changes that will be made to the Helm charts.
    make show
    
  6. Log in to your cluster using oc login or by exporting KUBECONFIG

    oc login
    

    or set KUBECONFIG to the path to your kubeconfig file. For example:

    export KUBECONFIG=~/my-ocp-env/auth/kubeconfig
    
  7. Apply the changes to your cluster

    make install
    

    If the install fails, go back over the instructions to see what was missed, fix it, and then run make update to continue the installation.

  8. This takes some time, especially for the OpenShift Data Foundation operator components to install and synchronize. The make install command provides some progress updates during the install and can take up to twenty minutes. Compare your make install run with the following video showing a successful install.

    Demo

  9. Check that the operators have been installed in the UI.

    OpenShift UI -> Installed Operators
    

    The main operator to watch is OpenShift Data Foundation; a CLI check is sketched at the end of this section.
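
If you prefer the command line over the UI, a rough equivalent of this check is shown below (operator names vary slightly between versions):

# List the ClusterServiceVersions; the OpenShift Data Foundation operator should eventually report Succeeded
oc get csv -n openshift-storage

# Or scan all namespaces for the operators installed by the pattern
oc get csv -A | grep -iE 'odf|gitops'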

Using OpenShift GitOps to check on Application progress

You can also check on the progress of the various applications deployed by using OpenShift GitOps.

  1. Obtain the ArgoCD URLs and passwords.

    The URLs and login credentials for ArgoCD change depending on the pattern name and the site names they control. Follow the instructions below to find them, however you choose to deploy the pattern.

    Display the fully qualified domain names, and matching login credentials, for all ArgoCD instances:

    ARGO_CMD=`oc get secrets -A -o jsonpath='{range .items[*]}{"oc get -n "}{.metadata.namespace}{" routes; oc -n "}{.metadata.namespace}{" extract secrets/"}{.metadata.name}{" --to=-\\n"}{end}' | grep gitops-cluster`
    CMD=`echo $ARGO_CMD | sed 's|- oc|-;oc|g'`
    eval $CMD
    
    

    The result should look something like:

    NAME                       HOST/PORT                                                                                      PATH   SERVICES                   PORT    TERMINATION            WILDCARD
    datacenter-gitops-server   datacenter-gitops-server-medical-diagnosis-datacenter.apps.wh-medctr.blueprints.rhecoeng.com          datacenter-gitops-server   https   passthrough/Redirect   None
    # admin.password
    xsyYU6eSWtwniEk1X3jL0c2TGfQgVpDH
    NAME                      HOST/PORT                                                                         PATH   SERVICES                  PORT    TERMINATION            WILDCARD
    cluster                   cluster-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com                          cluster                   8080    reencrypt/Allow        None
    kam                       kam-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com                              kam                       8443    passthrough/None       None
    openshift-gitops-server   openshift-gitops-server-openshift-gitops.apps.wh-medctr.blueprints.rhecoeng.com          openshift-gitops-server   https   passthrough/Redirect   None
    # admin.password
    FdGgWHsBYkeqOczE3PuRpU1jLn7C2fD6
    

    The most important ArgoCD instance to examine at this point is the datacenter one (datacenter-gitops-server in the example output above). This is where all the applications for the hub can be tracked.

  2. Check that all applications are synchronized. There are eleven different ArgoCD applications deployed as part of this pattern; a CLI check is sketched below.
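
You can see the same information from the CLI by listing the Argo CD Application resources created by OpenShift GitOps; every application should eventually report Synced and Healthy:

# List all Argo CD applications across namespaces with their sync and health status
oc get applications.argoproj.io -A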

Viewing the Grafana-based dashboard

  1. First we need to accept the SSL certificates in the browser for the dashboard. In the OpenShift console, go to Routes for the openshift-storage project and click on the URL for the s3-rgw route.

    Storage Routes

    Make sure that you see some XML and not an access denied message. (A CLI check for these routes is sketched after this list.)

    Storage Routes

  2. While still looking at Routes, change the project to xraylab-1. Click on the URL for the image-server. Make sure you do not see an access denied message. You ought to see a Hello World message.

    Storage Routes

  3. Turn on the image file flow. There are three ways to go about this.

    You can use the command line (make sure you have KUBECONFIG set, or are logged into the cluster):

    oc scale deploymentconfig/image-generator --replicas=1
    

    Or you can go to the OpenShift UI and change the view from Administrator to Developer and select Topology. From there select the xraylab-1 project.

    Xraylab-1 Topology

    Right click on the image-generator pod icon and select Edit Pod count.

    Pod menu

    Increase the pod count from 0 to 1 and save.

    Pod count

    Alternatively, you can achieve the same outcome from the Administrator console.

    Go to the OpenShift UI under Workloads, select Deploymentconfigs for Project xraylab-1. Click on image-generator and increase the pod count to 1.

    Image Pod
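
As noted above, the routes used in this section can also be checked from the CLI. A sketch, using the route and project names from the steps above:

# Print the route hosts for the two services used in this section
oc get route s3-rgw -n openshift-storage -o jsonpath='{.spec.host}{"\n"}'
oc get route image-server -n xraylab-1 -o jsonpath='{.spec.host}{"\n"}'

# Fetch the s3-rgw endpoint, ignoring its self-signed certificate; expect XML rather than an access denied error
curl -sk "https://$(oc get route s3-rgw -n openshift-storage -o jsonpath='{.spec.host}')"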

Making some changes on the dashboard

You can change some of the parameters and watch how the changes affect the dashboard.

  1. You can increase or decrease the number of image generators.

    oc scale deploymentconfig/image-generator --replicas=2
    

    Check the dashboard.

    oc scale deploymentconfig/image-generator --replicas=0
    

    Watch the dashboard stop processing images.

  2. You can also simulate a change of the AI model version, since it is only an environment variable in the Serverless Service configuration.

    oc patch service.serving.knative.dev/risk-assessment --type=json -p '[{"op":"replace","path":"/spec/template/metadata/annotations/revisionTimestamp","value":"'"$(date +%F_%T)"'"},{"op":"replace","path":"/spec/template/spec/containers/0/env/0/value","value":"v2"}]'
    

    This changes the model version value, as well as the revisionTimestamp annotation, which triggers a redeployment of the service. (A CLI check of the new revision is sketched below.)
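
To confirm that the patch rolled out a new revision, you can inspect the Knative resources (this assumes the service runs in the xraylab-1 project):

# A new revision should appear after the patch is applied
oc get revisions.serving.knative.dev -n xraylab-1

# The service should report the new revision as its latest ready revision
oc get ksvc risk-assessment -n xraylab-1 -o jsonpath='{.status.latestReadyRevisionName}{"\n"}'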