
Project 4

Deploying Custos on Jetstream

  • Spawn four medium-size instances on Jetstream 1.
  • Install Rancher on one of the instances.

To install Rancher, refer to our peer team Terra's writeup: https://github.com/airavata-courses/terra/wiki/Installing-Rancher---Step--1

The only difference is that this is Jetstream 1, so you need to set the SSH password yourself:

  • sudo passwd "username"

Replace "username" with your own username in the command above.

While adding the nodes to the cluster, choose the Calico network provider.


Now that Rancher and the cluster are set up, log in to the master node.

git clone https://github.com/airavata-courses/DSDummies.git

cd DSDummies

git checkout custos-deploy-analysis

cd CUSTOS/custos_deploy/

On all the nodes, create the mount directories:

sudo mkdir -p /bitnami/mysql
sudo mkdir -p /bitnami/postgresql
sudo mkdir -p /hashicorp/consul/data

sudo chmod -R 777 /hashicorp

Make sure the permissions are changed recursively for all directories under /hashicorp/consul/data.

Also, whichever service you are deploying, always change into that service's directory before executing its steps.

Deploy cert-manager

cd cert-manager
> kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.8.0/cert-manager.yaml

Check the output:

kubectl get all -n cert-manager

All the pods should be in the Running phase. If not, there is an error; check the pod logs with kubectl logs to debug the issue.

It should look like this (screenshot: after cert-manager installation).

Create ClusterIssuer

> kubectl apply -f issuer.yaml
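
For reference, a minimal ClusterIssuer looks roughly like the sketch below; the name and issuer type are assumptions, so use the repo's issuer.yaml as the source of truth.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: selfsigned-issuer   # assumption: the real name is whatever issuer.yaml defines
spec:
  selfSigned: {}            # assumption: the repo may instead configure an ACME issuer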

Deploy keycloak

cd ..
cd keycloak
helm repo add bitnami https://charts.bitnami.com/bitnami
cd postgres
  • Create PVs: create three PVs, one for each node's mount point /bitnami/postgresql (a sketch follows the apply command below).
kubectl apply -f pv.yaml,pv1.yaml,pv2.yaml
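
Each pv*.yaml is roughly a hostPath PersistentVolume like this sketch; the name and storage size are assumptions, so defer to the repo's files:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: postgres-pv             # assumption: pv1.yaml and pv2.yaml differ only in name
spec:
  capacity:
    storage: 8Gi                # assumption: use the size from the repo's pv.yaml
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /bitnami/postgresql   # the mount directory created on each node earlier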

Check the output:

Then deploy PostgreSQL:

helm install keycloak-db-postgresql bitnami/postgresql -f values.yaml -n keycloak --version 10.12.3

Check the output:

cd ..

kubectl create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/crds.yaml

kubectl create -f https://raw.githubusercontent.com/operator-framework/operator-lifecycle-manager/master/deploy/upstream/quickstart/olm.yaml

git clone https://github.com/keycloak/keycloak-operator

cp operator.yaml keycloak-operator/deploy/

cd keycloak-operator

make cluster/prepare

kubectl apply -f deploy/operator.yaml -n keycloak

cd ..

kubectl apply -f keycloak-db-secret.yaml -n keycloak

kubectl apply -f custos-keycloak.yaml -n keycloak

  • Replace the hostname in ingress.yaml (a hedged sketch of the manifest follows the apply command below).

kubectl apply -f ingress.yaml -n keycloak
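
For reference, ingress.yaml is roughly an Ingress like this sketch; the resource, secret, and backend service names are assumptions, so defer to the repo's file:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: keycloak-ingress                # assumption
  namespace: keycloak
  annotations:
    cert-manager.io/cluster-issuer: selfsigned-issuer   # assumption: the ClusterIssuer created earlier
spec:
  tls:
    - hosts:
        - <your-hostname>               # replace with your hostname, as noted above
      secretName: keycloak-tls          # assumption
  rules:
    - host: <your-hostname>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: keycloak          # assumption: the Service created by the Keycloak operator
                port:
                  number: 8443          # assumption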

Check the output:

user: admin

Get the admin password:

kubectl get secret credential-custos-keycloak -o yaml -n keycloak

echo "passwordhere" | base64 --decode
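
Alternatively, decode it in one step; the data key ADMIN_PASSWORD is an assumption based on how the Keycloak operator usually names it:

kubectl get secret credential-custos-keycloak -n keycloak -o jsonpath='{.data.ADMIN_PASSWORD}' | base64 --decode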

**Store this password; it will be needed in later steps.**

Deploy Consul

cd consul
helm repo add hashicorp https://helm.releases.hashicorp.com
  • The directory /hashicorp/consul/data should already exist on each of your nodes (created earlier); make sure it is writable:
sudo chmod -R 777 /hashicorp
kubectl apply -f pv.yaml,pv1.yaml
kubectl apply -f storage.yaml
helm install consul hashicorp/consul --version 0.31.1 -n vault --values config.yaml

Check the output:

Deploy vault

cd vault
helm install vault hashicorp/vault --namespace vault -f values.yaml --version 0.10.0

Change the hostname in ingress.yaml.

Deploy Ingress

kubectl apply -f ingress.yaml -n vault

At this point, your output should look something like this:

  • Follow the instructions in the UI (served on port 443) to generate the vault token.

  • Enter 5 and 3 to initialize the keys. This generates 5 unseal keys; download the keys file.

  • Next, enter the keys in the UI one by one until the vault is unsealed. The same flow can be run from the CLI, as sketched after this list.
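
An equivalent CLI flow, assuming the server pod is named vault-0 (the helm chart's default):

kubectl exec -n vault vault-0 -- vault operator init -key-shares=5 -key-threshold=3
# Repeat with 3 different unseal keys from the init output:
kubectl exec -n vault vault-0 -- vault operator unseal <unseal-key>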

After this step, your UI should look like this:

The root_token to use is at the end of the keys file you downloaded.

Check output for unsealed vault:

Deploy mysql

cd mysql
kubectl apply -f pv.yaml,pv1.yaml

Check the output:

helm install mysql bitnami/mysql -f values.yaml -n custos --version 8.8.8

Check the output:

Deploy custos

**Label all the nodes in your cluster; replace node_name with each node's name:**

kubectl label nodes node_name custosServiceWorker="enabled"
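
To label every node at once instead of repeating the command per node, a one-line loop works:

for n in $(kubectl get nodes -o name); do kubectl label "$n" custosServiceWorker="enabled"; done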

On the master node, execute these steps:

kubectl delete all --all -n ingress-nginx

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v0.44.0/deploy/static/provider/baremetal/deploy.yaml

Now make these code changes on a spare VM on Jetstream:

git clone https://github.com/apache/airavata-custos.git
cd airavata-custos
git checkout develop

Parameters to be added in the root pom.xml:

UserProfile: dev
Hostname: {Hostname}
SSH_user: {user}
SSH_user_password: {your passwd}
SSH_key: your key path
Keycloak_username: admin 
Keycloak_password: keycloak_pass
vault_token: vault_token
MYSQL_user: user
MYSQL_password: pass
docker_repo: {your docker hub repo}

Changes to be made:

  1. custos-core-services/utility-services/custos-configuration-service/pom.xml --> change skipped to false

  2. custos-core-services/utility-services/custos-configuration-service/resource/*-dev.properties

    custos-core-services/utility-services/custos-configuration-service/resource/*-staging.properties

    change iam.server.url=https://{host-name}:30079/auth/

  3. Open custos-integration-services/tenant-management-service-parent/tenant-management-service/src/main/java/tasks/TenantActivationTask.java

    comment lines 225-249

  4. In pom.xml, make sure you change these lines:
             <vault.scheme>http</vault.scheme>
             <vault.host>vault.vault.svc.cluster.local</vault.host>
             <vault.port>8200</vault.port>
             <vault.uri>http://vault.vault.svc.cluster.local:8200</vault.uri>
    
  5. Create the folder custos/artifacts in the master node's home directory and give it 777 permissions.

  6. Create a new Jetstream instance to run the next steps on, as they don't work locally.

  7. On the new instance, execute the following commands:

  • Install Maven:
sudo apt-get install maven
  • Generate an SSH key with the command below, since the normal SSH private key doesn't work:
ssh-keygen -t rsa -b 4096 -m PEM
  • Log in to Docker:
docker login
  • Build the code:
mvn clean install -P container
  • Push the images to your repo:
mvn dockerfile:push -P container
  • Deploy the artifacts:
mvn antrun:run -P scp-to-remote

Custos deployed on dev:

Once your dev Custos pods are running, run the following command:

helm install cluster-management-core-service /home/ssh_user/custos/artifacts/cluster-management-core-service-1.1-SNAPSHOT.tgz -n keycloak
Now delete the following services one by one (these are deployed on dev):

iam-admin-core-service (on the master node):

 helm uninstall iam-admin-core-service -n custos 

Make this code change in the root pom.xml:

  • <spring.profiles.active>staging</spring.profiles.active>
cd iam-admin-core-service/
sudo mvn clean install -P container
sudo mvn dockerfile:push -P container
sudo mvn antrun:run -P scp-to-remote

identity-core-service (on the master node):

helm uninstall identity-core-service -n custos 
cd identity-core-service/
sudo mvn clean install -P container
sudo mvn dockerfile:push -P container
sudo mvn antrun:run -P scp-to-remote

Custos deployed with 2 staging pods and the rest on dev:

Final step

  • In the Vault UI, enable two new secrets engines: one named "secret" (version 1) and one named "resourcesecret" (also version 1).
  • Send a POST request to register a tenant: https://{hostname}:30079/tenant-management/v1.0.0/oauth2/tenant
{
    "client_name": "dsdummiesproj",
    "requester_email": "[email protected]",
    "admin_username": "umang",
    "admin_first_name": "Umang",
    "admin_last_name": "Sharma",
    "admin_email": "[email protected]",
    "contacts": ["[email protected]", "[email protected]"],
    "redirect_uris": ["http://localhost:8080/callback*", "{hostname}/callback*"],
    "scope": "openid profile email org.cilogon.userinfo",
    "domain": "{hostname}",
    "admin_password": "dsdummies",
    "client_uri": "https://{hostname}",
    "logo_uri": "https://{hostname}",
    "application_type": "web",
    "comment": "Custos super tenant for production"
}
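
For example, with the body above saved as tenant.json, the request can be sent with curl (-k because the endpoint may be serving a self-signed certificate):

curl -k -X POST "https://{hostname}:30079/tenant-management/v1.0.0/oauth2/tenant" \
  -H "Content-Type: application/json" \
  -d @tenant.json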

In Vault, open the secret entry 100000 and change supertenant to "true".

Then set the super tenant to active:

POST https://{host_name}:30079/tenant-management/v1.0.0/status

{
"client_id":"{client id you got in response to last POST request}",
"status":"ACTIVE",
"super_tenant":true,
"updatedBy":"{admin_username}"
}
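
As a sketch, with the body above saved as status.json; note this activation call may additionally require an Authorization header with admin credentials (an assumption — check the Custos documentation):

curl -k -X POST "https://{host_name}:30079/tenant-management/v1.0.0/status" \
  -H "Content-Type: application/json" \
  -d @status.json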

It should activate the tenant, and the output should be:

{
    "tenant_id": "10000000",
    "status": "ACTIVE"
}

Custos SDK setup

We used Custos Sharing Service to impose fine-grained authorization to protect resources and provide specific permissions for users and groups to access a protected resource.

We referred to the Custos documentation for achieving this.

To set up the Custos SDK, run the code linked below with the following values filled in:

  • Custos ID (custos_client_id)
  • Custos Secret (custos_client_sec)
  • Admin Username (admin_user_name)
  • Admin Password (admin_password)

The code for setting up the Custos SDK can be found here.

The output of this script should look like this:

Registering user: parth
Registering user: umang
Registering user: prerna
Creating group: Admin
Creating group: Read Only Admin
Creating group: Gateway User
Assigning user parth to group Admin
Assigning user umang to group Admin
Assigning user prerna to group Read Only Admin
Assigning child group Admin to parent group Read Only Admin
Creating permission OWNER
Creating permission READ
Creating permission WRITE
Creating entity types PROJECT
Creating entity types EXPERIMENT
Register resources SEAGRD_EXP generated ID : OSEAGRD_EXPNSEAGRD_EXPLSEAGRD_EXPdSEAGRD_EXPR
Sharing entity OSEAGRD_EXPNSEAGRD_EXPLSEAGRD_EXPdSEAGRD_EXPR with user parth with permission READ
Sharing entity OSEAGRD_EXPNSEAGRD_EXPLSEAGRD_EXPdSEAGRD_EXPR with group Read Only Admin with permission READ
Access for user parth : True
Access for user umang : True
Access for user prerna : True

Custos Testing

For testing Custos, we created a RESTful Flask API that interacts with the deployed tenant. For our analysis, we exposed four endpoints:

1) Register User:

Endpoint: /register-user

Request Type: POST

Request:

{
    "username": "kapil_user1",
    "first_name": "kapil_user",
    "last_name": "abc",
    "password": "12345678",
    "email": "[email protected]"
}

Response:

{
    "code": "success",
    "message": "User Registered!"
}

2) Check if user exists:

Endpoint: /user-exist

Request Type: POST

Request:

{
    "username": "kapil_user1"
}

Response:

{
    "code": "success",
    "message": true
}

3) Check user access:

Endpoint: /user-access

Request Type: POST

Request:

{
    "username": "kapil_user"
}

Response:

{
    "code": "success",
    "message": "Access for user kapil_user : False"
}

4) Get user data:

Endpoint: /get-user

Request Type: POST

Request:

{
    "username": "kapil_user"
}

Response:

{
    "code": "success",
    "message": {
                "username": "kapil_user",
                "first_name": "kapil_user",
                "last_name": "ABC",
                "email": "[email protected]",
                "realm_roles": ["offline_access", "uma_authorization"],
                "state": "ACTIVE",
                "creation_time": 1651617661171.0
               }
}
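
Putting the four endpoints together, a minimal smoke test might look like the sketch below; the base URL and the email value are assumptions, so point them at your own deployment.

# Minimal smoke test for the four Flask endpoints described above.
import requests

BASE = "http://localhost:5000"  # assumption: wherever the Flask app is served

# 1) Register a user
print(requests.post(f"{BASE}/register-user", json={
    "username": "kapil_user1",
    "first_name": "kapil_user",
    "last_name": "abc",
    "password": "12345678",
    "email": "[email protected]",  # hypothetical address
}).json())

# 2) Check that the user exists
print(requests.post(f"{BASE}/user-exist", json={"username": "kapil_user1"}).json())

# 3) Check the user's access
print(requests.post(f"{BASE}/user-access", json={"username": "kapil_user"}).json())

# 4) Fetch the user's data
print(requests.post(f"{BASE}/get-user", json={"username": "kapil_user"}).json())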