  For the installation walkthrough, see: https://www.cnblogs.com/dukuan/p/9856269.html

kubeadm-highavailiability - Kubernetes high availability deployment based on kubeadm, for Kubernetes versions v1.11.x/v1.9.x/v1.7.x/v1.6.x

k8s logo



  • These instructions are for a v1.11.x Kubernetes cluster

The v1.11.x version now supports deploying a TLS-secured etcd cluster on the control plane

category

  1. deployment architecture
    1. deployment architecture summary
    2. detailed deployment architecture
    3. hosts list
  2. prerequisites
    1. version info
    2. required docker images
    3. system configuration
  3. kubernetes installation
    1. firewalld and iptables settings
    2. kubernetes and related services installation
    3. master hosts mutual trust
  4. masters high availability installation
    1. create configuration files
    2. kubeadm initialization
    3. high availability configuration
  5. masters load balance settings
    1. keepalived installation
    2. nginx load balance settings
    3. kube-proxy HA settings
    4. high availability verification
    5. kubernetes addons installation
  6. workers join kubernetes cluster
    1. workers join HA cluster
  7. verify kubernetes cluster installation
    1. verify kubernetes cluster high availability installation

deployment architecture

deployment architecture summary

ha logo


category

detailed deployment architecture

k8s ha

  • kubernetes components:

kube-apiserver: exposes the Kubernetes API and is the front end of the Kubernetes control plane. It is designed to scale horizontally, that is, by deploying more instances.
etcd: the backing store for all cluster data. Always have a backup plan for etcd's data in your Kubernetes cluster.
kube-scheduler: watches newly created pods that have no node assigned and selects a node for them to run on.
kube-controller-manager: runs the controllers, the background threads that handle routine tasks in the cluster. Logically each controller is a separate process, but to reduce complexity they are all compiled into a single binary and run as a single process.
kubelet: the primary node agent. It watches for pods that have been assigned to its node (either by the apiserver or via a local configuration file) and makes sure their containers are running and healthy.
kube-proxy: enables the Kubernetes Service abstraction by maintaining network rules on the host and performing connection forwarding.
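Once the control plane is up (after the kubeadm steps later in this guide), a quick way to see these components in action from any master is to list the static pods and component health; a minimal sketch:

# control-plane components run as static pods in the kube-system namespace
$ kubectl get pods -n kube-system -o wide

# scheduler, controller-manager and etcd health at a glance
$ kubectl get componentstatuses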

  • load balancer

The keepalived cluster configures a virtual IP address (192.168.20.10) that floats across k8s-master01, k8s-master02 and k8s-master03. An nginx service acts as the load balancer in front of the apiservers of k8s-master01, k8s-master02 and k8s-master03. The kubernetes services on the other nodes connect to the keepalived virtual IP address (192.168.20.10) and the port exposed by nginx (16443) to communicate with the master cluster's apiservers.


category

hosts list

HostName IPAddress Notes Components
k8s-master01 ~ 03 192.168.20.20 ~ 22 master nodes * 3 keepalived, nginx, etcd, kubelet, kube-apiserver
k8s-master-lb 192.168.20.10 keepalived virtual IP N/A
k8s-node01 ~ 08 192.168.20.30 ~ 37 worker nodes * 8 kubelet
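If these hostnames are not resolvable through DNS, add them to /etc/hosts on every node first; a sketch that follows the address plan above:

$ cat <<EOF >> /etc/hosts
192.168.20.10 k8s-master-lb
192.168.20.20 k8s-master01
192.168.20.21 k8s-master02
192.168.20.22 k8s-master03
192.168.20.30 k8s-node01
192.168.20.31 k8s-node02
192.168.20.32 k8s-node03
192.168.20.33 k8s-node04
192.168.20.34 k8s-node05
192.168.20.35 k8s-node06
192.168.20.36 k8s-node07
192.168.20.37 k8s-node08
EOF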

category

prerequisites

version info

  • Linux version: CentOS 7.4.1708

  • Kernel version: 4.6.4-1.el7.elrepo.x86_64

$ cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)

$ uname -r
4.6.4-1.el7.elrepo.x86_64
  • docker version: 17.12.0-ce-rc2
$ docker version
Client:
 Version:	17.12.0-ce-rc2
 API version:	1.35
 Go version:	go1.9.2
 Git commit:	f9cde63
 Built:	Tue Dec 12 06:42:20 2017
 OS/Arch:	linux/amd64

Server:
 Engine:
  Version:	17.12.0-ce-rc2
  API version:	1.35 (minimum version 1.12)
  Go version:	go1.9.2
  Git commit:	f9cde63
  Built:	Tue Dec 12 06:44:50 2017
  OS/Arch:	linux/amd64
  Experimental:	false
  • kubeadm version: v1.11.1
$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"11", GitVersion:"v1.11.1", GitCommit:"b1b29978270dc22fecc592ac55d903350454310a", GitTreeState:"clean", BuildDate:"2018-07-17T18:50:16Z", GoVersion:"go1.10.3", Compiler:"gc", Platform:"linux/amd64"}
  • kubelet version: v1.11.1
$ kubelet --version
Kubernetes v1.11.1
  • network addon

calico


category

required docker images

  • required docker images and tags
# kubernetes basic components

# use kubeadm to list all required docker images
$ kubeadm config images list --kubernetes-version=v1.11.1
k8s.gcr.io/kube-apiserver-amd64:v1.11.1
k8s.gcr.io/kube-controller-manager-amd64:v1.11.1
k8s.gcr.io/kube-scheduler-amd64:v1.11.1
k8s.gcr.io/kube-proxy-amd64:v1.11.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd-amd64:3.2.18
k8s.gcr.io/coredns:1.1.3

# use kubeadm to pull all required docker images
$ kubeadm config images pull --kubernetes-version=v1.11.1

# kubernetes networks addons
$ docker pull quay.io/calico/typha:v0.7.4
$ docker pull quay.io/calico/node:v3.1.3
$ docker pull quay.io/calico/cni:v3.1.3

# kubernetes metrics server
$ docker pull gcr.io/google_containers/metrics-server-amd64:v0.2.1

# kubernetes dashboard
$ docker pull gcr.io/google_containers/kubernetes-dashboard-amd64:v1.8.3

# kubernetes heapster
$ docker pull k8s.gcr.io/heapster-amd64:v1.5.4
$ docker pull k8s.gcr.io/heapster-influxdb-amd64:v1.5.2
$ docker pull k8s.gcr.io/heapster-grafana-amd64:v5.0.4

# kubernetes apiserver load balancer
$ docker pull nginx:latest

# prometheus
$ docker pull prom/prometheus:v2.3.1

# traefik
$ docker pull traefik:v1.6.3

# istio
$ docker pull docker.io/jaegertracing/all-in-one:1.5
$ docker pull docker.io/prom/prometheus:v2.3.1
$ docker pull docker.io/prom/statsd-exporter:v0.6.0
$ docker pull gcr.io/istio-release/citadel:1.0.0
$ docker pull gcr.io/istio-release/galley:1.0.0
$ docker pull gcr.io/istio-release/grafana:1.0.0
$ docker pull gcr.io/istio-release/mixer:1.0.0
$ docker pull gcr.io/istio-release/pilot:1.0.0
$ docker pull gcr.io/istio-release/proxy_init:1.0.0
$ docker pull gcr.io/istio-release/proxyv2:1.0.0
$ docker pull gcr.io/istio-release/servicegraph:1.0.0
$ docker pull gcr.io/istio-release/sidecar_injector:1.0.0
$ docker pull quay.io/coreos/hyperkube:v1.7.6_coreos.0
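If some nodes cannot reach k8s.gcr.io, gcr.io or quay.io directly, one option is to pull the images on a host that can and stream them to the other nodes over SSH; a minimal sketch for a single image (repeat per image and per target host):

# copy an already-pulled image to another node without a registry
$ docker save k8s.gcr.io/kube-proxy-amd64:v1.11.1 | ssh k8s-node01 'docker load'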

category

system configuration

  • on all kubernetes nodes: add kubernetes' repository
$ cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
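The docker-ce package installed later is not in the stock CentOS repositories; if it is not already mirrored locally, one way to make it available is to add Docker's own yum repository (a sketch, assuming direct internet access):

$ yum install -y yum-utils
$ yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo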
  • on all kubernetes nodes: update system
$ yum update -y
  • on all kubernetes nodes: set SELINUX to permissive mode
$ vi /etc/selinux/config
SELINUX=permissive

$ setenforce 0
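The same change can be made non-interactively, which is convenient when preparing many nodes; a sketch assuming the default SELINUX=enforcing line:

$ sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
$ getenforce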
  • on all kubernetes nodes: set iptables parameters
$ cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF

$ sysctl --system
  • on all kubernetes nodes: disable swap
$ swapoff -a

# disable swap mount point in /etc/fstab
$ vi /etc/fstab
#/dev/mapper/centos-swap swap                    swap    defaults        0 0

# check swap is disabled
$ cat /proc/swaps
Filename                Type        Size    Used    Priority
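The fstab edit can also be scripted; a sketch that comments out any swap entry and keeps a backup of the original file:

$ sed -i.bak '/\sswap\s/s/^/#/' /etc/fstab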
  • on all kubernetes nodes: reboot hosts
# reboot hosts
$ reboot

category

kubernetes installation

firewalld and iptables settings

  • on all kubernetes nodes: enable firewalld
# restart firewalld service
$ systemctl enable firewalld
$ systemctl restart firewalld
$ systemctl status firewalld
  • master ports list
Protocol Direction Port Comment
TCP Inbound 16443* Load balancer Kubernetes API server port
TCP Inbound 6443* Kubernetes API server
TCP Inbound 4001 etcd listen client port
TCP Inbound 2379-2380 etcd server client API
TCP Inbound 10250 Kubelet API
TCP Inbound 10251 kube-scheduler
TCP Inbound 10252 kube-controller-manager
TCP Inbound 10255 Read-only Kubelet API (Deprecated)
TCP Inbound 30000-32767 NodePort Services
  • on all master nodes: set firewalld policy
$ firewall-cmd --zone=public --add-port=16443/tcp --permanent
$ firewall-cmd --zone=public --add-port=6443/tcp --permanent
$ firewall-cmd --zone=public --add-port=4001/tcp --permanent
$ firewall-cmd --zone=public --add-port=2379-2380/tcp --permanent
$ firewall-cmd --zone=public --add-port=10250/tcp --permanent
$ firewall-cmd --zone=public --add-port=10251/tcp --permanent
$ firewall-cmd --zone=public --add-port=10252/tcp --permanent
$ firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent

$ firewall-cmd --reload

$ firewall-cmd --list-all --zone=public
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens2f1 ens1f0 nm-bond
  sources:
  services: ssh dhcpv6-client
  ports: 4001/tcp 6443/tcp 2379-2380/tcp 10250/tcp 10251/tcp 10252/tcp 30000-32767/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
  • worker ports list
Protocol Direction Port Comment
TCP Inbound 10250 Kubelet API
TCP Inbound 30000-32767 NodePort Services
  • on all worker nodes: set firewalld policy
$ firewall-cmd --zone=public --add-port=10250/tcp --permanent
$ firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent

$ firewall-cmd --reload

$ firewall-cmd --list-all --zone=public
public (active)
  target: default
  icmp-block-inversion: no
  interfaces: ens2f1 ens1f0 nm-bond
  sources:
  services: ssh dhcpv6-client
  ports: 10250/tcp 30000-32767/tcp
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:
  • on all kubernetes nodes: set firewalld to allow kube-proxy port forwarding
$ firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 1 -i docker0 -j ACCEPT -m comment --comment "kube-proxy redirects"
$ firewall-cmd --permanent --direct --add-rule ipv4 filter FORWARD 1 -o docker0 -j ACCEPT -m comment --comment "docker subnet"
$ firewall-cmd --reload

$ firewall-cmd --direct --get-all-rules
ipv4 filter INPUT 1 -i docker0 -j ACCEPT -m comment --comment 'kube-proxy redirects'
ipv4 filter FORWARD 1 -o docker0 -j ACCEPT -m comment --comment 'docker subnet'

# restart firewalld service
$ systemctl restart firewalld
  • on all kubernetes nodes: remove the iptables REJECT rule below; it prevents kube-proxy NodePort forwarding. (Notice: this must be repeated every time firewalld restarts, so we set up a crontab for it.)
$ crontab -e
0,5,10,15,20,25,30,35,40,45,50,55 * * * * /usr/sbin/iptables -D INPUT -j REJECT --reject-with icmp-host-prohibited
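To confirm the rule has actually been removed after the cron job runs (a sketch; once the rule is gone, the iptables -D in the cron job simply fails, which is harmless):

$ iptables -L INPUT -n --line-numbers | grep icmp-host-prohibited || echo "rule removed"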

category

kubernetes and related services installation

  • on all kubernetes nodes: install kubernetes and related services, then start up kubelet and docker daemon
$ yum install -y docker-ce-17.12.0.ce-0.2.rc2.el7.centos.x86_64
$ yum install -y docker-compose-1.9.0-5.el7.noarch
$ systemctl enable docker && systemctl start docker

# --disableexcludes=kubernetes is needed because the kubernetes repo file above sets exclude=kube*
$ yum install -y kubelet-1.11.1-0.x86_64 kubeadm-1.11.1-0.x86_64 kubectl-1.11.1-0.x86_64 --disableexcludes=kubernetes
$ systemctl enable kubelet && systemctl start kubelet
  • on all master nodes: install and start keepalived service
$ yum install -y keepalived
$ systemctl enable keepalived && systemctl restart keepalived

master hosts mutual trust

  • on k8s-master01: set hosts mutual trust
$ rm -rf /root/.ssh/*
$ ssh k8s-master01 pwd
$ ssh k8s-master02 rm -rf /root/.ssh/*
$ ssh k8s-master03 rm -rf /root/.ssh/*
$ ssh k8s-master02 mkdir -p /root/.ssh/
$ ssh k8s-master03 mkdir -p /root/.ssh/

$ scp /root/.ssh/known_hosts root@k8s-master02:/root/.ssh/
$ scp /root/.ssh/known_hosts root@k8s-master03:/root/.ssh/

$ ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa
$ cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
$ scp /root/.ssh/authorized_keys root@k8s-master02:/root/.ssh/
  • on k8s-master02: set hosts mutual trust
$ ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa
$ cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
$ scp /root/.ssh/authorized_keys root@k8s-master03:/root/.ssh/
  • on k8s-master03: set hosts mutual trust
$ ssh-keygen -t rsa -P '' -f /root/.ssh/id_rsa
$ cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
$ scp /root/.ssh/authorized_keys root@k8s-master01:/root/.ssh/
$ scp /root/.ssh/authorized_keys root@k8s-master02:/root/.ssh/
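A quick check that passwordless SSH now works between all three masters; a sketch to run on each of them:

$ for h in k8s-master01 k8s-master02 k8s-master03; do ssh -o BatchMode=yes $h hostname; done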

category

masters high availability installation

create configuration files

  • on k8s-master01: clone kubeadm-ha project source code
$ git clone https://github.com/cookeem/kubeadm-ha
  • on k8s-master01: use create-config.sh to create the related config files. The script generates all configuration files; follow the comments in it and make sure the parameters are set correctly (note: the example values below use a 192.168.60.x network, so for the address plan in this guide they must be changed to the 192.168.20.x addresses and the virtual IP 192.168.20.10).
$ cd kubeadm-ha

$ vi create-config.sh
# master keepalived virtual ip address
export K8SHA_VIP=192.168.60.79
# master01 ip address
export K8SHA_IP1=192.168.60.72
# master02 ip address
export K8SHA_IP2=192.168.60.77
# master03 ip address
export K8SHA_IP3=192.168.60.78
# master keepalived virtual ip hostname
export K8SHA_VHOST=k8s-master-lb
# master01 hostname
export K8SHA_HOST1=k8s-master01
# master02 hostname
export K8SHA_HOST2=k8s-master02
# master03 hostname
export K8SHA_HOST3=k8s-master03
# master01 network interface name
export K8SHA_NETINF1=nm-bond
# master02 network interface name
export K8SHA_NETINF2=nm-bond
# master03 network interface name
export K8SHA_NETINF3=nm-bond
# keepalived auth_pass config
export K8SHA_KEEPALIVED_AUTH=412f7dc3bfed32194d1600c483e10ad1d
# calico reachable ip address
export K8SHA_CALICO_REACHABLE_IP=192.168.60.1
# kubernetes CIDR pod subnet, if CIDR pod subnet is "172.168.0.0/16" please set to "172.168.0.0"
export K8SHA_CIDR=172.168.0.0

# run the script; it creates the 3 masters' kubeadm config files, keepalived config files, nginx load balancer config files, and the calico config file.
$ ./create-config.sh
create kubeadm-config.yaml files success. config/k8s-master01/kubeadm-config.yaml
create kubeadm-config.yaml files success. config/k8s-master02/kubeadm-config.yaml
create kubeadm-config.yaml files success. config/k8s-master03/kubeadm-config.yaml
create keepalived files success. config/k8s-master01/keepalived/
create keepalived files success. config/k8s-master02/keepalived/
create keepalived files success. config/k8s-master03/keepalived/
create nginx-lb files success. config/k8s-master01/nginx-lb/
create nginx-lb files success. config/k8s-master02/nginx-lb/
create nginx-lb files success. config/k8s-master03/nginx-lb/
create calico.yaml file success. calico/calico.yaml

# set hostname environment variables
$ export HOST1=k8s-master01
$ export HOST2=k8s-master02
$ export HOST3=k8s-master03

# copy kubeadm config files to all master nodes, path is /root/
$ scp -r config/$HOST1/kubeadm-config.yaml $HOST1:/root/
$ scp -r config/$HOST2/kubeadm-config.yaml $HOST2:/root/
$ scp -r config/$HOST3/kubeadm-config.yaml $HOST3:/root/

# copy keepalived config files to all master nodes, path is /etc/keepalived/
$ scp -r config/$HOST1/keepalived/* $HOST1:/etc/keepalived/
$ scp -r config/$HOST2/keepalived/* $HOST2:/etc/keepalived/
$ scp -r config/$HOST3/keepalived/* $HOST3:/etc/keepalived/

# copy nginx load balance config files to all master nodes, path is /root/
$ scp -r config/$HOST1/nginx-lb $HOST1:/root/
$ scp -r config/$HOST2/nginx-lb $HOST2:/root/
$ scp -r config/$HOST3/nginx-lb $HOST3:/root/
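Before moving on, it can be worth confirming that the generated files actually landed on every master; a sketch:

$ for h in $HOST1 $HOST2 $HOST3; do echo "== $h =="; ssh $h "ls /root/kubeadm-config.yaml; ls /etc/keepalived/; ls /root/nginx-lb/"; done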

category

kubeadm initialization

  • on k8s-master01: use kubeadm to init a kubernetes cluster
# notice: save the following output: kubeadm join --token ${YOUR_TOKEN} --discovery-token-ca-cert-hash ${YOUR_DISCOVERY_TOKEN_CA_CERT_HASH} ; this command will be used later.
$ kubeadm init --config /root/kubeadm-config.yaml
kubeadm join 192.168.20.20:6443 --token ${YOUR_TOKEN} --discovery-token-ca-cert-hash sha256:${YOUR_DISCOVERY_TOKEN_CA_CERT_HASH}
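If the join command is lost, it can be regenerated later on k8s-master01; a sketch (prints a fresh kubeadm join line with a new token):

$ kubeadm token create --print-join-command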
  • on all master nodes: set kubectl client environment variable
$ cat <<EOF >> ~/.bashrc
export KUBECONFIG=/etc/kubernetes/admin.conf
EOF

$ source ~/.bashrc

# kubectl can now connect to the kubernetes cluster
$ kubectl get nodes
  • on k8s-master01: wait until etcd, kube-apiserver, kube-controller-manager and kube-scheduler start up
$ kubectl get pods -n kube-system -o wide
NAME                                   READY     STATUS    RESTARTS   AGE       IP              NODE
...
etcd-k8s-master01                      1/1       Running   0          18m       192.168.20.20   k8s-master01
kube-apiserver-k8s-master01            1/1       Running   0          18m       192.168.20.20   k8s-master01
kube-controller-manager-k8s-master01   1/1       Running   0          18m       192.168.20.20   k8s-master01
kube-scheduler-k8s-master01            1/1       Running   1          18m       192.168.20.20   k8s-master01
...

category

high availability configuration

  • on k8s-master01: copy certificates to other master nodes
# set master nodes hostname
$ export CONTROL_PLANE_IPS="k8s-master02 k8s-master03"

# copy certificates to other master nodes
$ for host in ${CONTROL_PLANE_IPS}; do
  scp /etc/kubernetes/pki/ca.crt $host:/etc/kubernetes/pki/ca.crt
  scp /etc/kubernetes/pki/ca.key $host:/etc/kubernetes/pki/ca.key
  scp /etc/kubernetes/pki/sa.key $host:/etc/kubernetes/pki/sa.key
  scp /etc/kubernetes/pki/sa.pub $host:/etc/kubernetes/pki/sa.pub
  scp /etc/kubernetes/pki/front-proxy-ca.crt $host:/etc/kubernetes/pki/front-proxy-ca.crt
  scp /etc/kubernetes/pki/front-proxy-ca.key $host:/etc/kubernetes/pki/front-proxy-ca.key
  scp /etc/kubernetes/pki/etcd/ca.crt $host:/etc/kubernetes/pki/etcd/ca.crt
  scp /etc/kubernetes/pki/etcd/ca.key $host:/etc/kubernetes/pki/etcd/ca.key
  scp /etc/kubernetes/admin.conf $host:/etc/kubernetes/admin.conf
done
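If scp complains that /etc/kubernetes/pki/etcd does not exist on a target host yet, create the directory structure first; a sketch:

$ for host in ${CONTROL_PLANE_IPS}; do ssh $host "mkdir -p /etc/kubernetes/pki/etcd"; done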
  • on k8s-master02: join this master node to the cluster
# create all certificates and kubelet config files
$ kubeadm alpha phase certs all --config /root/kubeadm-config.yaml
$ kubeadm alpha phase kubeconfig controller-manager --config /root/kubeadm-config.yaml
$ kubeadm alpha phase kubeconfig scheduler --config /root/kubeadm-config.yaml
$ kubeadm alpha phase kubelet config write-to-disk --config /root/kubeadm-config.yaml
$ kubeadm alpha phase kubelet write-env-file --config /root/kubeadm-config.yaml
$ kubeadm alpha phase kubeconfig kubelet --config /root/kubeadm-config.yaml
$ systemctl restart kubelet

# set k8s-master01 and k8s-master02 HOSTNAME and ip address
$ export CP0_IP=192.168.20.20
$ export CP0_HOSTNAME=k8s-master01
$ export CP1_IP=192.168.20.21
$ export CP1_HOSTNAME=k8s-master02

# add etcd member to the cluster
$ kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP1_HOSTNAME} https://${CP1_IP}:2380
$ kubeadm alpha phase etcd local --config /root/kubeadm-config.yaml

# prepare to start master
$ kubeadm alpha phase kubeconfig all --config /root/kubeadm-config.yaml
$ kubeadm alpha phase controlplane all --config /root/kubeadm-config.yaml
$ kubeadm alpha phase mark-master --config /root/kubeadm-config.yaml

# modify /etc/kubernetes/admin.conf server settings
$ sed -i "s/192.168.20.20:6443/192.168.20.21:6443/g" /etc/kubernetes/admin.conf
  • on k8s-master03: join this master node to the cluster
# create all certificates and kubelet config files
$ kubeadm alpha phase certs all --config /root/kubeadm-config.yaml
$ kubeadm alpha phase kubeconfig controller-manager --config /root/kubeadm-config.yaml
$ kubeadm alpha phase kubeconfig scheduler --config /root/kubeadm-config.yaml
$ kubeadm alpha phase kubelet config write-to-disk --config /root/kubeadm-config.yaml
$ kubeadm alpha phase kubelet write-env-file --config /root/kubeadm-config.yaml
$ kubeadm alpha phase kubeconfig kubelet --config /root/kubeadm-config.yaml
$ systemctl restart kubelet

# set k8s-master01 and k8s-master03 HOSTNAME and ip address
$ export CP0_IP=192.168.20.20
$ export CP0_HOSTNAME=k8s-master01
$ export CP2_IP=192.168.20.22
$ export CP2_HOSTNAME=k8s-master03

# add etcd member to the cluster
$ kubectl exec -n kube-system etcd-${CP0_HOSTNAME} -- etcdctl --ca-file /etc/kubernetes/pki/etcd/ca.crt --cert-file /etc/kubernetes/pki/etcd/peer.crt --key-file /etc/kubernetes/pki/etcd/peer.key --endpoints=https://${CP0_IP}:2379 member add ${CP2_HOSTNAME} https://${CP2_IP}:2380
$ kubeadm alpha phase etcd local --config /root/kubeadm-config.yaml

# prepare to start master
$ kubeadm alpha phase kubeconfig all --config /root/kubeadm-config.yaml
$ kubeadm alpha phase controlplane all --config /root/kubeadm-config.yaml
$ kubeadm alpha phase mark-master --config /root/kubeadm-config.yaml

# modify /etc/kubernetes/admin.conf server settings
$ sed -i "s/192.168.20.20:6443/192.168.20.22:6443/g" /etc/kubernetes/admin.conf
  • on all master nodes: enable HPA to collect performance data from the apiserver; add the config below to /etc/kubernetes/manifests/kube-controller-manager.yaml
$ vi /etc/kubernetes/manifests/kube-controller-manager.yaml
    - --horizontal-pod-autoscaler-use-rest-clients=false
  • on all master nodes: enable istio auto-injection; add the config below to /etc/kubernetes/manifests/kube-apiserver.yaml
$ vi /etc/kubernetes/manifests/kube-apiserver.yaml
    - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota

# restart kubelet service
$ systemctl restart kubelet
  • on any master node: install the calico network addon; once it is installed the cluster nodes' status will become Ready
$ kubectl apply -f calico/
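It can take a minute or two for the calico pods to come up; watching the nodes switch to Ready is a convenient check (a sketch; press Ctrl-C to stop watching):

$ kubectl get pods -n kube-system -o wide | grep calico
$ kubectl get nodes -w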

category

masters load balance settings

keepalived installation

  • on all master nodes: restart keepalived service
$ systemctl restart keepalived
$ systemctl status keepalived

# check keepalived vip
$ curl -k https://k8s-master-lb:6443

category

nginx load balance settings

  • on all master nodes: start up the nginx load balancer
# use docker-compose to start up the nginx load balancer
$ docker-compose --file=/root/nginx-lb/docker-compose.yaml up -d
$ docker-compose --file=/root/nginx-lb/docker-compose.yaml ps

# check nginx load balance
$ curl -k https://k8s-master-lb:16443

category

kube-proxy HA settings

  • on any master node: set the kube-proxy server setting; make sure it uses the keepalived virtual IP and the nginx load balancer port (here: https://192.168.20.10:16443)
$ kubectl edit -n kube-system configmap/kube-proxy
    server: https://192.168.20.10:16443
  • on any master node: restart the kube-proxy pods
# find all kube-proxy pods
$ kubectl get pods --all-namespaces -o wide | grep proxy

# delete and restart all kube-proxy pods
$ kubectl delete pod -n kube-system kube-proxy-XXX
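Instead of deleting the pods one by one, the whole set can be restarted in a single command via the label kubeadm places on them (a sketch; assumes the standard k8s-app=kube-proxy label):

$ kubectl delete pod -n kube-system -l k8s-app=kube-proxy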

category

high availability verification

  • on any master node: check the cluster's running status
# check kubernetes nodes status
$ kubectl get nodes
NAME           STATUS    ROLES     AGE       VERSION
k8s-master01   Ready     master    1h        v1.11.1
k8s-master02   Ready     master    58m       v1.11.1
k8s-master03   Ready     master    55m       v1.11.1

# check kube-system pods running status
$ kubectl get pods -n kube-system -o wide
NAME                                   READY     STATUS    RESTARTS   AGE       IP              NODE
calico-node-nxskr                      2/2       Running   0          46m       192.168.20.22   k8s-master03
calico-node-xv5xt                      2/2       Running   0          46m       192.168.20.20   k8s-master01
calico-node-zsmgp                      2/2       Running   0          46m       192.168.20.21   k8s-master02
coredns-78fcdf6894-kfzc7               1/1       Running   0          1h        172.168.2.3     k8s-master03
coredns-78fcdf6894-t957l               1/1       Running   0          46m       172.168.1.2     k8s-master02
etcd-k8s-master01                      1/1       Running   0          1h        192.168.20.20   k8s-master01
etcd-k8s-master02                      1/1       Running   0          58m       192.168.20.21   k8s-master02
etcd-k8s-master03                      1/1       Running   0          54m       192.168.20.22   k8s-master03
kube-apiserver-k8s-master01            1/1       Running   0          52m       192.168.20.20   k8s-master01
kube-apiserver-k8s-master02            1/1       Running   0          52m       192.168.20.21   k8s-master02
kube-apiserver-k8s-master03            1/1       Running   0          51m       192.168.20.22   k8s-master03
kube-controller-manager-k8s-master01   1/1       Running   0          34m       192.168.20.20   k8s-master01
kube-controller-manager-k8s-master02   1/1       Running   0          33m       192.168.20.21   k8s-master02
kube-controller-manager-k8s-master03   1/1       Running   0          33m       192.168.20.22   k8s-master03
kube-proxy-g9749                       1/1       Running   0          36m       192.168.20.22   k8s-master03
kube-proxy-lhzhb                       1/1       Running   0          35m       192.168.20.20   k8s-master01
kube-proxy-x8jwt                       1/1       Running   0          36m       192.168.20.21   k8s-master02
kube-scheduler-k8s-master01            1/1       Running   1          1h        192.168.20.20   k8s-master01
kube-scheduler-k8s-master02            1/1       Running   0          57m       192.168.20.21   k8s-master02
kube-scheduler-k8s-master03            1/1       Running   1          54m       192.168.20.22   k8s-master03
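Beyond checking that everything is Running, a simple failover test is to find the master that currently holds the virtual IP, stop keepalived there, and confirm that the VIP moves and the apiserver stays reachable through the load balancer; a sketch (k8s-master01 assumed to be the current VIP holder; restart keepalived afterwards):

# which master holds the VIP right now
$ for h in k8s-master01 k8s-master02 k8s-master03; do ssh $h "ip addr | grep -q 192.168.20.10 && hostname"; done

# stop keepalived on the VIP holder, then check that the VIP moved and the apiserver still answers
$ ssh k8s-master01 "systemctl stop keepalived"
$ for h in k8s-master02 k8s-master03; do ssh $h "ip addr | grep -q 192.168.20.10 && hostname"; done
$ curl -k https://192.168.20.10:16443

# restore keepalived on k8s-master01
$ ssh k8s-master01 "systemctl start keepalived"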

category

kubernetes addons installation

  • on any master node: allow pods to be scheduled on the master nodes
$ kubectl taint nodes --all node-role.kubernetes.io/master-
  • on any master node: install metrics-server; since v1.11.0 heapster is deprecated for performance data collection and metrics-server is used instead
$ kubectl apply -f metrics-server/

# wait about 5 minutes, then use kubectl top to check pod resource usage
$ kubectl top pods -n kube-system
NAME                                    CPU(cores)   MEMORY(bytes)
calico-node-wkstv                       47m          113Mi
calico-node-x2sn5                       36m          104Mi
calico-node-xnh6s                       32m          106Mi
coredns-78fcdf6894-2xc6s                14m          30Mi
coredns-78fcdf6894-rk6ch                10m          22Mi
kube-apiserver-k8s-master01             163m         816Mi
kube-apiserver-k8s-master02             79m          617Mi
kube-apiserver-k8s-master03             73m          614Mi
kube-controller-manager-k8s-master01    52m          141Mi
kube-controller-manager-k8s-master02    0m           14Mi
kube-controller-manager-k8s-master03    0m           13Mi
kube-proxy-269t2                        4m           21Mi
kube-proxy-6jc8n                        9m           37Mi
kube-proxy-7n8xb                        9m           39Mi
kube-scheduler-k8s-master01             20m          25Mi
kube-scheduler-k8s-master02             15m          19Mi
kube-scheduler-k8s-master03             15m          19Mi
metrics-server-77b77f5fc6-jm8t6         3m           43Mi
  • on any master node: install heapster. Although heapster is deprecated for performance data collection since v1.11.0 in favor of metrics-server, kube-dashboard still uses heapster to display performance info, so we install it anyway.
# install heapster, wait for 5 minutes
$ kubectl apply -f heapster/
  • on any master node: install kube-dashboard
# install kube-dashboard
$ kubectl apply -f dashboard/

after installation, open kube-dashboard in a web browser; it requires a token to log in: https://k8s-master-lb:30000/

dashboard-login

  • on any master node: get the kube-dashboard login token
# get kube-dashboard login token
$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')

after logging in to kube-dashboard, you can see performance metrics for all pods

dashboard

  • on any master node: install traefik
# create k8s-master-lb domain certificate
$ openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=k8s-master-lb"

# create kubernetes secret
$ kubectl -n kube-system create secret generic traefik-cert --from-file=tls.key --from-file=tls.crt

# install traefik
$ kubectl apply -f traefik/

after installation, open the traefik admin web UI in a browser: http://k8s-master-lb:30011/

traefik

  • on any master node: install istio
# install istio
$ kubectl apply -f istio/

# check all istio pods
$ kubectl get pods -n istio-system
NAME                                        READY     STATUS      RESTARTS   AGE
grafana-69c856fc69-jbx49                    1/1       Running     1          21m
istio-citadel-7c4fc8957b-vdbhp              1/1       Running     1          21m
istio-cleanup-secrets-5g95n                 0/1       Completed   0          21m
istio-egressgateway-64674bd988-44fg8        1/1       Running     0          18m
istio-egressgateway-64674bd988-dgvfm        1/1       Running     1          16m
istio-egressgateway-64674bd988-fprtc        1/1       Running     0          18m
istio-egressgateway-64674bd988-kl6pw        1/1       Running     3          16m
istio-egressgateway-64674bd988-nphpk        1/1       Running     3          16m
istio-galley-595b94cddf-c5ctw               1/1       Running     70         21m
istio-grafana-post-install-nhs47            0/1       Completed   0          21m
istio-ingressgateway-4vtk5                  1/1       Running     2          21m
istio-ingressgateway-5rscp                  1/1       Running     3          21m
istio-ingressgateway-6z95f                  1/1       Running     3          21m
istio-policy-589977bff5-jx5fd               2/2       Running     3          21m
istio-policy-589977bff5-n74q8               2/2       Running     3          21m
istio-sidecar-injector-86c4d57d56-mfnbp     1/1       Running     39         21m
istio-statsd-prom-bridge-5698d5798c-xdpp6   1/1       Running     1          21m
istio-telemetry-85d6475bfd-8lvsm            2/2       Running     2          21m
istio-telemetry-85d6475bfd-bfjsn            2/2       Running     2          21m
istio-telemetry-85d6475bfd-d9ld9            2/2       Running     2          21m
istio-tracing-bd5765b5b-cmszp               1/1       Running     1          21m
prometheus-77c5fc7cd-zf7zr                  1/1       Running     1          21m
servicegraph-6b99c87849-l6zm6               1/1       Running     1          21m
  • on any master node: install prometheus
# install prometheus
$ kubectl apply -f prometheus/

after installation, open the prometheus admin web UI: http://k8s-master-lb:30013/

prometheus

open the grafana admin web UI (user and password are both admin): http://k8s-master-lb:30006/ ; after login, add the prometheus datasource: http://k8s-master-lb:30006/datasources

grafana-datasource

import dashboards at http://k8s-master-lb:30006/dashboard/import : import all files under the heapster/grafana-dashboard directory, i.e. the dashboards Kubernetes App Metrics and Kubernetes cluster monitoring (via Prometheus)

grafana-import

the dashboards you imported:

grafana-cluster

grafana-app


category

workers join kubernetes cluster

workers join HA cluster

  • on all worker nodes: join kubernetes cluster
$ kubeadm reset

# use kubeadm to join the cluster, here we use the k8s-master01 apiserver address and port.
$ kubeadm join 192.168.20.20:6443 --token ${YOUR_TOKEN} --discovery-token-ca-cert-hash sha256:${YOUR_DISCOVERY_TOKEN_CA_CERT_HASH}


# set the `/etc/kubernetes/*.conf` server settings; make sure they use the keepalived virtual IP and the nginx load balancer port (here: https://192.168.20.10:16443)
$ sed -i "s/192.168.20.20:6443/192.168.20.10:16443/g" /etc/kubernetes/bootstrap-kubelet.conf
$ sed -i "s/192.168.20.20:6443/192.168.20.10:16443/g" /etc/kubernetes/kubelet.conf

# restart docker and kubelet service
$ systemctl restart docker kubelet
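To double-check that the kubelet on the worker now talks to the load balancer rather than directly to k8s-master01, a sketch:

$ grep server: /etc/kubernetes/kubelet.conf
    server: https://192.168.20.10:16443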
  • on any master node: check all nodes' status
$ kubectl get nodes
NAME           STATUS    ROLES     AGE       VERSION
k8s-master01   Ready     master    1h        v1.11.1
k8s-master02   Ready     master    58m       v1.11.1
k8s-master03   Ready     master    55m       v1.11.1
k8s-node01     Ready     <none>    30m       v1.11.1
k8s-node02     Ready     <none>    24m       v1.11.1
k8s-node03     Ready     <none>    22m       v1.11.1
k8s-node04     Ready     <none>    22m       v1.11.1
k8s-node05     Ready     <none>    16m       v1.11.1
k8s-node06     Ready     <none>    13m       v1.11.1
k8s-node07     Ready     <none>    11m       v1.11.1
k8s-node08     Ready     <none>    10m       v1.11.1

category

verify kubernetes cluster installation

verify kubernetes cluster high availability installation

  • NodePort testing
# create a nginx deployment, replicas=3
$ kubectl run nginx --image=nginx --replicas=3 --port=80
deployment "nginx" created

# check nginx pods status
$ kubectl get pods -l=run=nginx -o wide
NAME                     READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-58b94844fd-jvlqh   1/1       Running   0          9s        172.168.7.2    k8s-node05
nginx-58b94844fd-mkt72   1/1       Running   0          9s        172.168.9.2    k8s-node07
nginx-58b94844fd-xhb8x   1/1       Running   0          9s        172.168.11.2   k8s-node09

# create nginx NodePort service
$ kubectl expose deployment nginx --type=NodePort --port=80
service "nginx" exposed

# check nginx service status
$ kubectl get svc -l=run=nginx -o wide
NAME      TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE       SELECTOR
nginx     NodePort   10.106.129.121   <none>        80:31443/TCP   7s        run=nginx

# check nginx NodePort service accessibility
$ curl k8s-master-lb:31443
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
  • pods connectivity testing
$ kubectl run nginx-client -ti --rm --image=alpine -- ash
/ # wget -O - nginx
Connecting to nginx (10.102.101.78:80)
index.html           100% |*****************************************|   612   0:00:00 ETA

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

# remove the test nginx deployment and service
$ kubectl delete deploy,svc nginx
  • HPA testing
# create a test nginx-server deployment and service
$ kubectl run nginx-server --requests=cpu=10m --image=nginx --port=80
$ kubectl expose deployment nginx-server --port=80

# create the hpa
$ kubectl autoscale deployment nginx-server --cpu-percent=10 --min=1 --max=10
$ kubectl get hpa
$ kubectl describe hpa nginx-server

# increase the nginx-server load (the wget commands run inside the load-generator container)
$ kubectl run -ti --rm load-generator --image=busybox -- ash
/ # wget -q -O- http://nginx-server.default.svc.cluster.local > /dev/null
/ # while true; do wget -q -O- http://nginx-server.default.svc.cluster.local > /dev/null; done

# it may take a few minutes for the number of replicas to stabilize; since the load is not controlled in any way, the final number of replicas may differ from this example
$ kubectl get hpa -w

# remove the test deployment, service and HPA
$ kubectl delete deploy,svc,hpa nginx-server

category

  • the kubernetes high availability cluster is now set up successfully 😃
