
Deploying a highly available Kubernetes 1.14.1 cluster with kubeadm

Kubernetes high-availability cluster deployment

Deployment architecture:

Master components:

  • kube-apiserver

The unified entry point for the Kubernetes API and the coordinator of the other cluster components. It exposes cluster functionality over an HTTP API; all create, update, delete, query, and watch operations on resource objects go through the API server and are then persisted in etcd.

  • kube-controller-manager

Handles routine background tasks in the cluster. Each resource type has a corresponding controller, and the controller manager is responsible for running and managing these controllers.

  • kube-scheduler

Selects a node for each newly created Pod according to the scheduling algorithm.

Node components:

  • kubelet

The kubelet is the Master's agent on each node. It manages the lifecycle of the containers running there: creating containers, mounting the Pod's volumes, downloading Secrets, reporting node and container status, and so on. The kubelet turns each Pod into a group of containers.

  • kube-proxy

Implements the Pod network proxy on each node, maintaining network rules and layer-4 load balancing.

  • docker

Runs the containers.

Third-party services:

  • etcd

A distributed key-value store used to hold the cluster state, such as Pod and Service object data.

Figure: Kubernetes architecture and the communication protocols between components.

1. Environment planning

 

Role           IP             Components
K8S-MASTER01   10.247.74.48   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, flannel, nginx, keepalived
K8S-MASTER02   10.247.74.49   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, flannel, nginx, keepalived
K8S-MASTER03   10.247.74.50   kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, flannel, nginx, keepalived
K8S-NODE01     10.247.74.53   kubelet, kube-proxy, docker, flannel
K8S-NODE02     10.247.74.54   kubelet, kube-proxy, docker, flannel
K8S-NODE03     10.247.74.55   kubelet, kube-proxy, docker, flannel
K8S-NODE04     10.247.74.56   kubelet, kube-proxy, docker, flannel
K8S-VIP        10.247.74.51   (virtual IP only)

Software version information

Software                 Version
Linux operating system   Red Hat Enterprise Linux 7.6 x86_64
Kubernetes               1.14.1
Docker                   18.06.3-ce
etcd                     3.0
nginx                    1.16.0

1.1 Prepare the system environment (all nodes)

# Add host name entries, disable SELinux and the swap partition, and enable NTP

cat <<EOF >> /etc/hosts

10.247.74.48  TWDSCPA203V

10.247.74.49  TWDSCPA204V

10.247.74.50  TWDSCPA205V

10.247.74.53  TWDSCPA206V

10.247.74.54  TWDSCPA207V

10.247.74.55  TWDSCPA208V

10.247.74.56  TWDSCPA209V

10.247.74.51  K8S-VIP

EOF

sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

swapoff -a

sed -i 's/\/dev\/mapper\/centos-swap/\#\/dev\/mapper\/centos-swap/g' /etc/fstab

systemctl enable ntpd

systemctl start ntpd 

# Set the kernel parameters

echo "* soft nofile 32768" >> /etc/security/limits.conf

echo "* hard nofile 65535" >> /etc/security/limits.conf

echo "* soft nproc 32768" >> /etc/security/limits.conf

echo "* hard nproc 65535" >> /etc/security/limits.conf

# bridge and forwarding settings required by kubeadm
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system

# Load the IPVS kernel modules

Run the following script on all Kubernetes nodes (if the kernel is 4.19 or newer, replace nf_conntrack_ipv4 with nf_conntrack):

cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF

# Make the script executable, run it, and verify the modules are loaded

chmod 755  /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

# Install the IPVS management tools

yum install ipset ipvsadm -y
reboot

1.2 Install Docker (all nodes)

# Step 1: Install the required system tools

yum install -y yum-utils device-mapper-persistent-data lvm2
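
The docker-ce package installed in the next step comes from the Docker CE yum repository, which is not shown being added here; a minimal sketch, assuming the Aliyun mirror of that repository is acceptable:

# Add the Docker CE repository (assumption: Aliyun mirror; any Docker CE repo works)
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast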

# Step 2: Install Docker

yum update -y && yum install -y docker-ce-18.06.3.ce

# Step 3: Configure the Docker registry mirror and image storage path

mkdir -p /mnt/sscp/data/docker
# point Docker's image and container storage at the directory created above;
# add a "registry-mirrors" entry here as well if a registry mirror is required
cat > /etc/docker/daemon.json << EOF
{
  "data-root": "/mnt/sscp/data/docker"
}
EOF

# Step 4: Restart Docker and enable it at boot

systemctl restart docker
systemctl enable docker 
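
To confirm that Docker is actually using the storage path configured in step 3, a quick check (not part of the original steps) is to look at the root directory it reports:

docker info | grep -i "Docker Root Dir"    # expected: /mnt/sscp/data/docker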

1.3 Deploy nginx (master nodes)

Step 1: Install the build dependencies
yum install -y gcc gcc-c++ pcre pcre-devel zlib zlib-devel openssl openssl-devel
Step 2: Download the source package from the official site
wget https://nginx.org/download/nginx-1.16.0.tar.gz
Step 3: Extract, compile, and install
tar zxvf nginx-1.16.0.tar.gz
cd nginx-1.16.0
./configure --prefix=/usr/local/nginx --with-http_stub_status_module --with-http_ssl_module --with-http_realip_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --with-stream --with-stream_ssl_module
make && make install
Step 4: Configure a TCP reverse proxy for kube-apiserver by adding the following stream block to /usr/local/nginx/conf/nginx.conf (at the top level, outside the http block):
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {
        server 10.247.74.48:6443;
        server 10.247.74.49:6443;
        server 10.247.74.50:6443;
    }

    server {
        listen 0.0.0.0:8443;
        proxy_pass k8s-apiserver;
    }
}
Step 5: Start the nginx service

/usr/local/nginx/sbin/nginx
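
Before moving on, it is worth confirming that nginx is listening on the load-balancer port on each master (a simple check, not part of the original steps):

ss -lntp | grep 8443    # nginx should be listening on 0.0.0.0:8443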

1.4 Deploy keepalived (master nodes)

Step 1: Download the source package:

wget https://www.keepalived.org/software/keepalived-2.0.16.tar.gz
Step 2: Extract and install

tar xf keepalived-2.0.16.tar.gz

cd keepalived-2.0.16

./configure --prefix=/usr/local/keepalived

make && make install

cp /root/keepalived-2.0.16/keepalived/etc/init.d/keepalived /etc/init.d/

cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

mkdir /etc/keepalived

cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/

cp /usr/local/keepalived/sbin/keepalived /usr/sbin/

Step 3: Add the configuration file

vim /etc/keepalived/keepalived.conf

MASTER:

vrrp_instance VI_1 {

    state MASTER

    interface ens32    

    virtual_router_id 51

    priority 100   

    advert_int 1   

    authentication {

        auth_type PASS     

        auth_pass 1111

    } 

    virtual_ipaddress {

        10.247.74.51/24

    }
}

BACKUP:

vrrp_instance VI_1 {

    state BACKUP

    interface ens32    

    virtual_router_id 51

    priority 90   

    advert_int 1   

    authentication {

        auth_type PASS     

        auth_pass 1111

    } 

    virtual_ipaddress {

        10.247.74.51/24

    }
}
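
With both configuration files in place, start keepalived on all three masters and check that the VIP is held only by the MASTER node; a minimal sketch, assuming the init script copied above is picked up by systemd:

systemctl start keepalived
systemctl enable keepalived
ip addr show ens32 | grep 10.247.74.51    # the VIP should appear only on the MASTER node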

1.5 Deploy kubeadm (all nodes)

The following operations are performed on all nodes.

# The official repository is not reachable from mainland China, so the Aliyun yum mirror is used instead:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# Install kubeadm, kubelet, and kubectl; note that version 1.14.1 is installed here:

yum install -y kubelet-1.14.1 kubeadm-1.14.1 kubectl-1.14.1
systemctl enable kubelet && systemctl start kubelet

1.6 Master node deployment

Initialization references:
https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-init/
https://godoc.org/k8s.io/kubernetes/cmd/kubeadm/app/apis/kubeadm/v1beta1

Create the initialization configuration file. A default configuration can be generated with the following command:

kubeadm config print init-defaults > kubeadm-config.yaml

Modify it according to the actual deployment environment:

apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.247.74.48
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: TWDSCPA203V
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "10.247.74.51:8443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

Configuration instructions:

controlPlaneEndpoint: the VIP address and the port nginx listens on (8443)
imageRepository: the Google registry k8s.gcr.io is not reachable from mainland China, so the Aliyun mirror registry.aliyuncs.com/google_containers is specified
podSubnet: the address range must match the network plugin deployed later; flannel is used here, so it is set to 10.244.0.0/16
mode: ipvs: the appended KubeProxyConfiguration enables IPVS mode for kube-proxy

The images required by the configuration file can be pulled in advance with the following command:

kubeadm config images pull --config kubeadm-config.yaml  # pre-pull the images from the Aliyun mirror
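
To see exactly which images will be pulled before pulling them, kubeadm can also list them from the same configuration file (optional):

kubeadm config images list --config kubeadm-config.yaml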

Initialize the Master01 node

The tee command is appended so that the kubeadm init output is also saved to kubeadm-init.log for later reference (optional).

kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log

This command initializes the cluster with the configuration file above; the --experimental-upload-certs flag uploads the control-plane certificates so that they are distributed automatically when additional master nodes join.
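
When initialization succeeds, kubeadm prints the join commands used in sections 1.7 and 1.8 below, together with the usual kubectl setup for this first master:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config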

1.7 Add the other Master nodes

Execute the following command on K8S-MASTER02 and K8S-MASTER03:

kubeadm join 10.247.74.51:8443 --token ocb5tz.pv252zn76rl4l3f6 \
--discovery-token-ca-cert-hash sha256:141bbeb79bf58d81d551f33ace207c7b19bee1cfd7790112ce26a6a300eee5a2 \
--experimental-control-plane --certificate-key 20366c9cdbfdc1435a6f6d616d988d027f2785e34e2df9383f784cf61bab9826

Then set up kubectl access on the new master:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

1.8 Add the worker nodes

Execute the following command on each worker node:

kubeadm join 10.247.74.51:8443 --token ocb5tz.pv252zn76rl4l3f6 \
    --discovery-token-ca-cert-hash sha256:141bbeb79bf58d81d551f33ace207c7b19bee1cfd7790112ce26a6a300eee5a2

1.9 Deploy flannel and check the cluster status

Step 1: Deploy flannel

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml

Step 2: Check the cluster status

# kubectl get node
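
Besides the node list, it is also useful to confirm that the flannel and CoreDNS pods in kube-system are all Running (not shown in the original steps):

# kubectl get pods -n kube-system -o wide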

1.10 Follow-up

The default token expires after 24 hours. If a new node needs to join after that, regenerate the join command on a master:

kubeadm token create --print-join-command
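
Existing tokens and their expiry times can be inspected with (optional):

kubeadm token list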
		
