
Project 4

Deploying Custos on Jetstream

  • Spawn four medium-size instances on Jetstream 1
  • Install Rancher on one of the instances.

To install Rancher, please refer to our peer team Terra's writeup: https://github.com/airavata-courses/terra/wiki/Installing-Rancher---Step--1

The only difference is that this is Jetstream 1, so you need to set the SSH password yourself using

  • sudo passwd "username"

Replace "username" with your own username in the above command.

While adding the nodes to the cluster, choose the Calico network option.


Now that Rancher and the cluster are set up, log in to the master node.

> git clone https://github.com/airavata-courses/DSDummies.git

> cd DSDummies

> git checkout project-4

> cd CUSTOS/custos_deploy/

On all the nodes, create the directories that will back the persistent volumes:

sudo mkdir -p /bitnami/mysql
sudo mkdir -p /bitnami/postgresql
sudo mkdir -p /hashicorp/consul/data

sudo chmod -R 777 /hashicorp

Make sure the permissions are changed recursively for all directories under /hashicorp, including /hashicorp/consul/data.

Deploy cert-manager

cd cert-manager
> kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml

Check output:

kubectl get all -n cert-manager

All the pods should be in the Running phase. If not, check the pod logs with kubectl logs to debug the issue.

It should look like this (screenshot: after cert-manager installation):

Create ClusterIssuer

> kubectl apply -f issuer.yaml
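
The issuer.yaml shipped in the repo is the source of truth; for reference only, a minimal self-signed ClusterIssuer looks roughly like the sketch below (the name selfsigned-issuer is just an example and may differ from the repo's manifest):

# Reference sketch only -- deploy the repo's issuer.yaml as shown above.
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer   # example name; issuer.yaml may use a different one
spec:
  selfSigned: {}
EOF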

Deploy keycloak

cd ..
cd keycloak
helm repo add bitnami https://charts.bitnami.com/bitnami
cd postgres
  • Create PVs: create three PVs (one per node) for the /bitnami/postgresql mount point; a sketch of one such PV follows the command below.

> kubectl apply -f pv.yaml,pv1.yaml,pv2.yaml
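
The PV manifests live in the repo; as a rough sketch, each one is a hostPath volume pointing at /bitnami/postgresql on a node. The name, size, and other fields below are assumptions, so match them to the repo's pv.yaml and the chart's values.yaml:

# Sketch of one PV; the repo's pv.yaml is authoritative.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolume
metadata:
  name: keycloak-db-pv-0        # example name
spec:
  capacity:
    storage: 8Gi                # assumed size
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /bitnami/postgresql   # directory created on each node earlier
EOF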

Check output:

Then deploy postgresql

> helm install keycloak-db-postgresql bitnami/postgresql -f values.yaml -n keycloak --version 10.12.3
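
Note that the chart is installed into the keycloak namespace; if that namespace does not exist yet on your cluster, create it first (or append --create-namespace to the helm install command):

kubectl create namespace keycloak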

Check output:

* cd ..

* kubectl create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/crds.yaml

* kubectl create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/olm.yaml

* git clone https://github.com/keycloak/keycloak-operator

* cp operator.yaml keycloak-operator/deploy/

* cd keycloak-operator

* make cluster/prepare

* kubectl apply -f deploy/operator.yaml -n keycloak

* cd ..

* kubectl apply -f keycloak-db-secret.yaml -n keycloak

* kubectl apply -f custos-keycloak.yaml -n keycloak

* Replace the hostname in ingress.yaml

* kubectl apply -f ingress.yaml -n keycloak

Check output:

user: admin

Get admin password.

* kubectl get secret credential-custos-keycloak -o yaml -n keycloak

* echo "passwordhere" | base64 --decode (replace "passwordhere" with the base64-encoded password value from the secret)
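
Alternatively, assuming the operator stores the password under the ADMIN_PASSWORD key (the keycloak-operator default), the two steps can be combined:

kubectl get secret credential-custos-keycloak -n keycloak \
  -o jsonpath='{.data.ADMIN_PASSWORD}' | base64 --decode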

**Store this password, it will be used in further steps.**

Deploy Consul

> helm repo add hashicorp https://helm.releases.hashicorp.com
  • Make sure the directory /hashicorp/consul/data exists on each of your nodes (created earlier)
> sudo chmod -R 777 /hashicorp
> kubectl apply -f pv.yaml,pv1.yaml
> kubectl apply -f storage.yaml
> helm install consul hashicorp/consul --version 0.31.1 -n vault --values config.yaml
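
As with keycloak, the chart targets the vault namespace; if it does not exist yet, create it first (or add --create-namespace to the helm install):

kubectl create namespace vault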

Check output:

Deploy vault

> helm install vault hashicorp/vault --namespace vault -f values.yaml --version 0.10.0

Change the hostname in ingress.yaml.

Deploy Ingress

kubectl apply -f ingress.yaml -n vault

At this point, your output should look something like this:

  • Follow the instructions in the UI, which is hosted on port 443, to generate the vault token.

  • Enter 5 and 3 to initialize the keys. This generates 5 unseal keys; download them as a file.

  • In the next step, enter the keys in the UI one by one to unseal the vault (a CLI alternative is sketched below).
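
If you prefer the CLI over the UI, the same init/unseal flow can be driven through kubectl exec; this sketch assumes the Vault pod is named vault-0 (the chart default):

# Initialize with 5 key shares and a threshold of 3, same as the UI values above
kubectl exec -n vault vault-0 -- vault operator init -key-shares=5 -key-threshold=3

# Unseal by supplying any 3 of the 5 generated keys, one at a time
kubectl exec -n vault vault-0 -- vault operator unseal <unseal-key-1>
kubectl exec -n vault vault-0 -- vault operator unseal <unseal-key-2>
kubectl exec -n vault vault-0 -- vault operator unseal <unseal-key-3>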

After this step, your UI should look like this:

The root_token to be used can be found at the end of the file you downloaded.

Check output for unsealed vault:

Deploy mysql

  • kubectl apply -f pv.yaml,pv1.yaml

Check output:

  • helm install mysql bitnami/mysql -f values.yaml -n custos --version 8.8.8
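
The release goes into the custos namespace; create it first if it does not already exist:

kubectl create namespace custos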

Check output:

Deploy custos

On the master node, execute these steps:

kubectl delete all --all -n ingress-nginx

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml
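
The baremetal manifest exposes the controller through a NodePort service. Later steps assume the HTTPS entry point is reachable on port 30079, so check which ports were assigned and patch the service or adjust the URLs if they differ:

kubectl get svc -n ingress-nginx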

Now make these code changes on a spare VM on Jetstream:

git clone https://github.com/apache/airavata-custos.git
cd airavata-custos
git checkout develop

Parameters to be added in the root pom.xml:

UserProfile: dev
Hostname: {Hostname}
SSH_user: {user}
SSH_user_password: {your passwd}
SSH_key: your key path
Keycloak_username: admin 
Keycloak_password: keycloak_pass
vault_token: vault_token
MYSQL_user: user
MYSQL_password: pass
docker_repo: {your docker hub repo }

Changes to be made:

  1. custos-core-services/utility-services/custos-configuration-service/pom.xml --> change skipped to false

  2. custos-core-services/utility-services/custos-configuration-service/resource/*-dev.properties

    custos-core-services/utility-services/custos-configuration-service/resource/*-staging.properties

    change iam.server.url=https://{host-name}:30079/auth/

  3. Open custos-integration-services/tenant-management-service-parent/tenant-management-service/src/main/java/tasks/TenantActivationTask.java

    comment lines 225-249

  4. In pom.xml, make sure you change these lines:
             <vault.scheme>http</vault.scheme>
             <vault.host>vault.vault.svc.cluster.local</vault.host>
             <vault.port>8200</vault.port>
             <vault.uri>http://vault.vault.svc.cluster.local:8200</vault.uri>
    
  5. Create the folder custos/artifacts in the home directory of the master node and give it 777 permissions.

  6. Create a new Jetstream instance where you will run the next steps, as they do not work locally.

  7. On the new instance, execute the following commands:

  • sudo apt-get install maven
  • Generate an ssh key with the command below, as the normal ssh private key does not work:
  • ssh-keygen -t rsa -b 4096 -m PEM
  • Log in to Docker
docker login
  • Build the code
    `mvn clean install -P container`
  • Push the code images to your repo
   `mvn dockerfile:push -P container`
  • Deploy the artifacts
   `mvn antrun:run -P scp-to-remote`

Custos deployed on dev:

Once your dev custos pods are running, run the following command:

helm install cluster-management-core-service /home/ssh_user/custos/artifacts/cluster-management-core-service-1.1-SNAPSHOT.tgz -n keycloak

Now delete the following two services, which are currently deployed with the dev profile, and redeploy them with the staging profile:

iam-admin-core-service:

Make this change in the root pom.xml (switch the active profile to staging):

  • <spring.profiles.active>staging</spring.profiles.active>

Then uninstall, rebuild, and redeploy the service:

> helm uninstall iam-admin-core-service -n custos
> cd iam-admin-core-service/
> sudo mvn clean install -P container
> sudo mvn dockerfile:push -P container
> sudo mvn antrun:run -P scp-to-remote

identity-core-service:

> helm uninstall identity-core-service -n custos
> cd identity-core-service/
> sudo mvn clean install -P container
> sudo mvn dockerfile:push -P container
> sudo mvn antrun:run -P scp-to-remote
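
Once the redeploy finishes, you can confirm that both services came back up with the staging profile:

kubectl get pods -n custos
helm list -n custos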

Custos deployed with 2 staging pods and the rest on dev:

Final step

  • In Vault, enable two new KV secrets engines named "secret" and "resourcesecret", both using version 1.
  • Send a POST request to register a tenant at https://{hostname}:30079/tenant-management/v1.0.0/oauth2/tenant with the body below (a CLI sketch for both steps follows the JSON):
{
    "client_name": "dsdummiesproj",
    "requester_email": "[email protected]",
    "admin_username": "umang",
    "admin_first_name": "Umang",
    "admin_last_name": "Sharma",
    "admin_email": "[email protected]",
    "contacts": ["[email protected]", "[email protected]"],
    "redirect_uris": ["http://localhost:8080/callback*", "{hostname}/callback*"],
    "scope": "openid profile email org.cilogon.userinfo",
    "domain": "{hostname}",
    "admin_password": "dsdummies",
    "client_uri": "https://{hostname}",
    "logo_uri": "https://{hostname}",
    "application_type": "web",
    "comment": "Custos super tenant for production"
}
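
The two secrets engines can also be enabled from the Vault CLI instead of the UI, and the registration request can be sent with curl. This is only a sketch: it assumes the JSON above is saved as tenant.json and that VAULT_ADDR and VAULT_TOKEN are exported to point at your Vault endpoint and the root token from the unseal step:

# Enable the two KV secrets engines; "kv" defaults to version 1
vault secrets enable -path=secret kv
vault secrets enable -path=resourcesecret kv

# Register the tenant; -k because the cluster uses a self-signed certificate
curl -k -X POST "https://{hostname}:30079/tenant-management/v1.0.0/oauth2/tenant" \
  -H "Content-Type: application/json" \
  -d @tenant.json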