Merge pull request #10643 from cBioPortal/update-deployment-procedure-docs

Remove old info from deployment procedure
inodb authored Feb 20, 2024
2 parents f0701d9 + a859769 commit 3885397
Showing 1 changed file with 7 additions and 74 deletions.
81 changes: 7 additions & 74 deletions docs/development/Deployment-Procedure.md
@@ -5,16 +5,12 @@ see e.g. [Deploying the web application](/deployment/deploy-without-docker/Deplo
Docker](/deployment/docker/).

We deploy the master branch of backend and the master branch of frontend to
production. The public portal (https://www.cbioportal.org) runs on AWS inside
kubernetes. The configuration can be found in the knowledgesystems repo:
production. The public portal (https://www.cbioportal.org) runs on AWS EKS. The configuration
can be found in the knowledgesystems repo:

https://github.com/knowledgesystems/knowledgesystems-k8s-deployment
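
If you do not yet have a local checkout of that repository, cloning it is the usual first step (a minimal sketch, assuming you have access to the repo; the `kubectl apply` command later in this page is run from its root):

```
git clone https://github.com/knowledgesystems/knowledgesystems-k8s-deployment.git
cd knowledgesystems-k8s-deployment
```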

Other portals run at MSKCC on two internal machines called dashi and dashi2.
Since we're running several apps in several tomcats internally the procedure
for updating them is different from the public portal on AWS. The configuration
is in the mercurial portal-configuration repo. To make changes, ask Ben for
access.
Other internal MSK portals run on AWS EKS as well.

The frontend and backend can be upgraded independently. We have the following
events that can require a new deployment:
@@ -56,13 +52,12 @@ Once the backend repo has been tagged on GitHub, a docker image gets built on Do

After that, if you have access to the kubernetes cluster you can change the image in the configuration of the kubernetes cluster:


https://github.com/knowledgesystems/knowledgesystems-k8s-deployment/blob/master/cbioportal/cbioportal_spring_boot.yaml
https://github.com/knowledgesystems/knowledgesystems-k8s-deployment/blob/master/public-eks/cbioportal-prod/cbioportal_spring_boot.yaml

Point this line to the new tag on Docker Hub, e.g.:

```
image: cbioportal/cbioportal:3.0.3-web-shenandoah
image: cbioportal/cbioportal:6.0.2-web-shenandoah
```

Make sure it is an image with the postfix `-web-shenandoah`. This is the image that only has the web part of cBioPortal and uses the Shenandoah garbage collector.
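
For orientation, this is roughly where that `image:` field sits inside a Kubernetes Deployment manifest. This is an illustrative sketch only, not the actual contents of `cbioportal_spring_boot.yaml`; all names below are placeholders:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cbioportal-spring-boot   # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cbioportal
  template:
    metadata:
      labels:
        app: cbioportal
    spec:
      containers:
        - name: cbioportal
          # point this tag at the new release on Docker Hub
          image: cbioportal/cbioportal:6.0.2-web-shenandoah
```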
@@ -76,7 +71,7 @@ Also remove the `-Dfrontend.url` parameter such that the frontend version inside
Then running this command applies the changes to the cluster:

```
kubectl apply -f cbioportal/cbioportal_spring_boot.yaml
kubectl apply -f public-eks/cbioportal-prod/cbioportal_spring_boot.yaml
```

You can keep track of what's happening by looking at the pods:
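
For example (standard kubectl commands; the pod and deployment names depend on the manifest above):

```
# watch the pods cycle as the new image rolls out
kubectl get pods -w
# or follow the logs of a specific pod once it is running
kubectl logs -f <pod-name>
```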
@@ -109,71 +104,9 @@ Make sure to commit your changes to the knowledgesystems-k8s-deployment repo
and push them to the main repo, so that other people making changes to the
kubernetes config will be using the latest version.
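
A minimal sketch of that commit-and-push step (standard git; the commit message and branch name are illustrative):

```
git add public-eks/cbioportal-prod/cbioportal_spring_boot.yaml
git commit -m 'Bump cbioportal image to 6.0.2-web-shenandoah'
git push origin master
```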

### Private Portal Backend Upgrade
First update the frontend portal configuration to point to a new file. It's
fine if this file does not exist yet, because if it doesn't the frontend
bundled with the war will be used. We can later point the file to netlify, once
we've determined everything looks ok.

You can use this for loop to update the frontend url in all properties files
(set it to a file that doesn't exist yet and give it a sensible name e.g. `frontend_url_version_x_y_z.txt`):

```
for f in $(grep frontend.url.runtime properties/*/application.properties | grep -v beta | cut -d: -f1); do sed -i 's|frontend.url.runtime=/srv/www/msk-tomcat/frontend_url_version_2_0_0.txt|frontend.url.runtime=/srv/www/msk-tomcat/frontend_url_version_2_1_0.txt|g' $f; done
```
Same for triage-tomcat (again, set the correct file name):

```
for f in $(grep frontend.url.runtime properties/*/application.properties | grep -v beta | cut -d: -f1); do sed -i 's|frontend.url.runtime=/srv/www/triage-tomcat/frontend_url_version_2_0_0.txt|frontend.url.runtime=/srv/www/triage-tomcat/frontend_url_version_2_1_0.txt|g' $f; done
```

Make sure you see the frontend url file updated correctly:

```
hg diff
```

Then commit and push your changes to the mercurial repo:
```
hg commit -u username -m 'update frontend url files for new release'
hg push
```

If you have your public key added for the relevant deploy scripts, you should be able to deploy with the following command on dashi-dev:

```
# set PORTAL_CONFIG_HOME and PORTAL_HOME to your own directories
unset PROJECT_VERSION && export PORTAL_HOME=/data/debruiji/git/cbioportal && export PORTAL_CONFIG_HOME=/data/debruiji/hg/portal-configuration && cd ${PORTAL_CONFIG_HOME}/buildwars && hg pull && hg update && export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk.x86_64 && bash buildproductionwars.sh master && bash ${PORTAL_CONFIG_HOME}/deploywars-remotely/deployproductionportals.sh
```

If you don't have an SSH key set up to run the deploy script, ask Ino.

If everything looks ok, you can update the frontend url file to point to
Netlify. Log in to dashi and become msk-tomcat with `sudo su - msk-tomcat`.
Then change the update script:

```
vi /data/cbio-portal-data/portal-configuration/deploy-scripts/updatefrontendurl.sh
```
to point `oldurlfile=/srv/www/msk-tomcat/frontend_url_version_2_0_0.txt` to the
new frontend url file you supplied above.
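
For example, the equivalent edit with sed instead of vi (assuming `frontend_url_version_2_1_0.txt` is the new file created above):

```
sed -i 's|oldurlfile=/srv/www/msk-tomcat/frontend_url_version_2_0_0.txt|oldurlfile=/srv/www/msk-tomcat/frontend_url_version_2_1_0.txt|g' /data/cbio-portal-data/portal-configuration/deploy-scripts/updatefrontendurl.sh
```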

Then update the url like:

```
./updatefrontendurl.sh "https://frontend.cbioportal.org"
```

Do the same thing on dashi2.

The last step is to modify the frontend url file for the triage portal. Log in to the pipelines machine, switch to the triage-tomcat user with `sudo su - triage-tomcat`, and update the frontend url file there:

```
echo 'https://frontend.cbioportal.org' > /srv/www/triage-tomcat/frontend_url_version_2_1_0.txt
```

## Upgrading Related Backend Components
Backend upgrades involving the database schema, DAO classes, etc. require updates to databases and importers. CBioPortal has multiple databases (located both internally on pipelines and in AWS) backing different portals. Similarly there are multiple importers responsible for loading portal-specific data. Every database must be manually migrated on an individual basis; all importers/data fetchers can be updated simultaenously through an existing deployment script.
Backend upgrades involving the database schema, DAO classes, etc. require updates to databases and importers. CBioPortal has multiple MySQL databases (all using AWS RDS) backing different portals. Similarly, there are multiple importers responsible for loading portal-specific data. Every database must be manually migrated on an individual basis; all importers/data fetchers can be updated simultaneously through an existing deployment script.
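
As an illustration of the per-database step, a single schema is typically migrated with the `migrate_db.py` script that ships with cBioPortal; the paths below are placeholders, so check the installation-update docs for the exact invocation:

```
python migrate_db.py \
  --properties-file /path/to/portal.properties \
  --sql /path/to/cbioportal/db-scripts/src/main/resources/migration.sql
```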

Before upgrading, make sure to turn off import jobs in the crontab and alert the backend pipelines team (Avery, Angelica, Rob, Manda).

