Building a Highly Available Kubernetes Cluster from Binaries (Pitfall-Free)



K8s Node Server Plan


1. Host plan

| IP address    | Hostname | Specs | Role      | Software |
| ------------- | -------- | ----- | --------- | -------- |
| 192.168.1.201 | Master1  | 4C8G  | Master+LB | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, docker, keepalived, haproxy |
| 192.168.1.202 | Master2  | 4C8G  | Master+LB | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, docker, keepalived, haproxy |
| 192.168.1.203 | Master3  | 4C8G  | Master+LB | kube-apiserver, kube-controller-manager, kube-scheduler, etcd, kubelet, kube-proxy, docker, keepalived, haproxy |
| 192.168.1.204 | Node1    | 4C16G | Worker    | kubelet, kube-proxy, docker |
| 192.168.1.205 | Node2    | 8C16G | Worker    | kubelet, kube-proxy, docker |
| 192.168.1.206 | Node3    | 4C8G  | Worker    | kubelet, kube-proxy, docker |
| 192.168.1.200 | /        | /     | VIP       | / |

2. Cluster network allocation

| Network         | CIDR           | Notes |
| --------------- | -------------- | ----- |
| Node network    | 192.168.1.0/24 |       |
| Service network | 10.40.0.0/16   |       |
| Pod network     | 172.40.0.0/16  |       |

I. Environment Initialization

cat >> /etc/hosts << EOF

192.168.1.201 k8s-master1

192.168.1.202 k8s-master2

192.168.1.203 k8s-master3

192.168.1.204 k8s-node1

192.168.1.205 k8s-node2

192.168.1.206 k8s-node3

192.168.1.200 k8s-vip

EOF



systemctl stop firewalld && systemctl disable firewalld && setenforce 0 && sed -ri 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config



swapoff -a && sed -ri 's/.*swap.*/#&/' /etc/fstab



cat <<EOF >> /etc/security/limits.conf

* soft nofile 655360

* hard nofile 655360

* soft nproc 655350

* hard nproc 655350

* soft memlock unlimited

* hard memlock unlimited

EOF



# Upgrade the kernel

rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org

yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm

yum  --enablerepo="elrepo-kernel"  -y install kernel-ml.x86_64

grub2-set-default 0

grub2-mkconfig -o /boot/grub2/grub.cfg

reboot  # reboot for the new kernel to take effect



cat >/etc/modules-load.d/ipvs.conf <<EOF

ip_vs

ip_vs_lc

ip_vs_wlc

ip_vs_rr

ip_vs_wrr

ip_vs_lblc

ip_vs_lblcr

ip_vs_dh

ip_vs_sh

ip_vs_fo

ip_vs_nq

ip_vs_sed

ip_vs_ftp


nf_conntrack

ip_tables

ip_set

xt_set

ipt_set

ipt_rpfilter

ipt_REJECT

ipip

EOF



cat > /etc/modules-load.d/containerd.conf << EOF

overlay

br_netfilter

EOF



cat <<EOF > /etc/sysctl.d/k8s.conf

net.ipv4.ip_forward = 1

net.bridge.bridge-nf-call-iptables = 1

net.bridge.bridge-nf-call-ip6tables = 1

fs.may_detach_mounts = 1

vm.overcommit_memory=1



vm.panic_on_oom=0

fs.inotify.max_user_watches=89100

fs.file-max=52706963

fs.nr_open=52706963

net.netfilter.nf_conntrack_max=2310720

net.ipv4.tcp_keepalive_time = 600

net.ipv4.tcp_keepalive_probes = 3

net.ipv4.tcp_keepalive_intvl =15

net.ipv4.tcp_max_tw_buckets = 36000

net.ipv4.tcp_tw_reuse = 1

net.ipv4.tcp_max_orphans = 327680

net.ipv4.tcp_orphan_retries = 3

net.ipv4.tcp_syncookies = 1

net.ipv4.tcp_max_syn_backlog = 16384


net.ipv4.tcp_timestamps = 0

net.core.somaxconn = 16384

EOF
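The three files above are only read at boot. A minimal sketch to load the modules and apply the sysctl keys immediately, without waiting for the next reboot:

# Load the modules declared in /etc/modules-load.d/ and apply all sysctl drop-ins
systemctl restart systemd-modules-load
sysctl --system

# Spot-check a couple of the keys
sysctl net.ipv4.ip_forward net.bridge.bridge-nf-call-iptables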

II. Load Balancer Preparation


1. Install haproxy and keepalived (on the master nodes)

yum -y install haproxy keepalived


2. HAProxy configuration

cat > /etc/haproxy/haproxy.cfg << EOF

global

 maxconn 2000

 ulimit-n 16384

 log 127.0.0.1 local0 err

 stats timeout 30s



defaults

 log global

 mode http

 option httplog

 timeout connect 5000

 timeout client 50000

 timeout server 50000

 timeout http-request 15s

 timeout http-keep-alive 15s



frontend monitor-in

 bind *:33305

 mode http

 option httplog

 monitor-uri /monitor



frontend k8s-master

 bind 0.0.0.0:16443

 bind 127.0.0.1:16443

 mode tcp

 option tcplog

 tcp-request inspect-delay 5s

 default_backend k8s-master



backend k8s-master

 mode tcp

 option tcplog

 option tcp-check

 balance roundrobin

 default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100

 server  k8s-master1  192.168.1.201:6443 check

 server  k8s-master2  192.168.1.202:6443 check

 server  k8s-master3  192.168.1.203:6443 check

EOF
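Before starting anything, the generated file can be syntax-checked; haproxy's -c flag parses the config and exits:

haproxy -c -f /etc/haproxy/haproxy.cfg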


3. Keepalived configuration (the MASTER and BACKUP configs differ, so pay attention)

# Primary (master1)

cat > /etc/keepalived/keepalived.conf << EOF

! Configuration File for keepalived

global_defs {

   router_id LVS_DEVEL

   script_user root

   enable_script_security

}

vrrp_script check_apiserver {

   script "/etc/keepalived/check_apiserver.sh"

   interval 5

   weight -5

   fall 2

   rise 1

}

vrrp_instance VI_1 {

   state MASTER

   interface ens192

   mcast_src_ip 192.168.1.201

   virtual_router_id 51

   priority 100

   advert_int 2

   authentication {

       auth_type PASS

       auth_pass K8SHA_KA_AUTH

   }

   virtual_ipaddress {

       192.168.1.200

   }

   track_script {

      check_apiserver

   }

}

EOF

# Backup 1 (master2):

cat >/etc/keepalived/keepalived.conf << EOF

! Configuration File for keepalived

global_defs {

   router_id LVS_DEVEL

   script_user root

   enable_script_security

}

vrrp_script check_apiserver {

   script "/etc/keepalived/check_apiserver.sh"

   interval 5

   weight -5

   fall 2

   rise 1

}

vrrp_instance VI_1 {

   state BACKUP

   interface ens192

   mcast_src_ip 192.168.1.202

   virtual_router_id 51

   priority 99

   advert_int 2

   authentication {

       auth_type PASS

       auth_pass K8SHA_KA_AUTH

   }

   virtual_ipaddress {

       192.168.1.200

   }

   track_script {

      check_apiserver

   }

}

EOF

# Backup 2 (master3):

cat >/etc/keepalived/keepalived.conf << EOF

! Configuration File for keepalived

global_defs {

   router_id LVS_DEVEL

   script_user root

   enable_script_security

}

vrrp_script check_apiserver {

   script "/etc/keepalived/check_apiserver.sh"

   interval 5

   weight -5

   fall 2

   rise 1

}

vrrp_instance VI_1 {

   state BACKUP

   interface ens192

   mcast_src_ip 192.168.1.203

   virtual_router_id 51

   priority 98

   advert_int 2

   authentication {

       auth_type PASS

       auth_pass K8SHA_KA_AUTH

   }

   virtual_ipaddress {

       192.168.1.200

   }

   track_script {

      check_apiserver

   }

}

EOF


4. Health-check script (needed on every master node)

cat > /etc/keepalived/check_apiserver.sh << "EOF"

#!/bin/bash

err=0

for k in $(seq 1 3)

do

   check_code=$(pgrep haproxy)

   if [[ $check_code == "" ]]; then

       err=$(expr $err + 1)

       sleep 1

       continue

   else

       err=0

       break

   fi

done



if [[ $err != "0" ]]; then

   echo "systemctl stop keepalived"

   /usr/bin/systemctl stop keepalived

   exit 1

else

   exit 0

fi

EOF

chmod +x /etc/keepalived/check_apiserver.sh

# Start the services

systemctl enable --now haproxy keepalived
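A quick sanity check that the VIP and the monitor frontend behave as expected (run on master1; ens192 is the interface configured above):

# The VIP should be bound on the current MASTER node
ip addr show ens192 | grep 192.168.1.200

# The monitor-uri frontend from the haproxy config should answer with 200
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.1.201:33305/monitor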

III. Configure Passwordless SSH on All Nodes


# Generate the key pair

ssh-keygen


# Distribute the public key

for i in k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3;do ssh-copy-id $i;done
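A quick loop to confirm the keys landed; each ssh should print the remote hostname without prompting for a password:

for i in k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3;do ssh $i hostname;done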

IV. Deploy the etcd Cluster


# The preparation below is done on k8s-master1 only


1. Download and install the certificate tools


# Create a working directory

mkdir /opt/packages


# Fetch the cfssl tools

cd /opt/packages/

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64


# Make them executable

chmod +x cf*


# Copy the cfssl tools to /usr/local/bin

cp ./cfssl_linux-amd64 /usr/local/bin/cfssl

cp ./cfssljson_linux-amd64 /usr/local/bin/cfssljson

cp ./cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo


# Check the cfssl version

cfssl version


2. Create the etcd certificates


# Write the CA certificate signing request

mkdir /opt/packages/cert; cd /opt/packages/cert

 cat > ca-csr.json << EOF

{

  "CN": "kubernetes",

  "key": {

      "algo": "rsa",

      "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "Beijing",

      "L": "Beijing",

      "O": "kubemsb",

      "OU": "CN"

    }

  ],

  "ca": {

          "expiry": "87600h"

  }

}

EOF


# Generate the CA certificate

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

# Configure the CA signing policy

 cat > ca-config.json << EOF

{

  "signing": {

      "default": {

          "expiry": "87600h"

        },

      "profiles": {

          "kubernetes": {

              "usages": [

                  "signing",

                  "key encipherment",

                  "server auth",

                  "client auth"

              ],

              "expiry": "87600h"

          }

      }

  }

}

EOF


# Write the etcd certificate signing request

  cat > etcd-csr.json << EOF

{

  "CN": "etcd",

  "hosts": [

    "127.0.0.1",

    "192.168.1.201",

    "192.168.1.202",

    "192.168.1.203"     

  ],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [{

    "C": "CN",

    "ST": "Beijing",

    "L": "Beijing",

    "O": "kubemsb",

    "OU": "CN"

  }]

}

EOF


# Generate the etcd certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson  -bare etcd
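Optionally inspect the result with cfssl-certinfo to confirm the SANs and validity period match the request:

# "sans" should list 127.0.0.1 and the three master IPs
cfssl-certinfo -cert etcd.pem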


3. Install etcd


# Download the etcd release

wget https://github.com/etcd-io/etcd/releases/download/v3.5.2/etcd-v3.5.2-linux-amd64.tar.gz

tar xf etcd-v3.5.2-linux-amd64.tar.gz

cp ./etcd-v3.5.2-linux-amd64/etcd* /usr/local/bin/


# Check the installed etcd version

etcdctl version


# Distribute the etcd binaries to the other master nodes

for i in k8s-master2 k8s-master3;do scp ./etcd-v3.5.2-linux-amd64/etcd* root@$i:/usr/local/bin/ ;done


4. Configure etcd


# Create the etcd config and data directories (on every master node)

mkdir -p /etc/etcd/ssl && mkdir -p /opt/data/etcd/default.etcd


# master1 config file

cat >  /etc/etcd/etcd.conf << EOF

# Member information

#[Member]

ETCD_NAME="etcd1"

ETCD_DATA_DIR="/opt/data/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="https://192.168.1.201:2380"

ETCD_LISTEN_CLIENT_URLS="https://192.168.1.201:2379,http://127.0.0.1:2379"



# Cluster information

#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.201:2380"

ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.201:2379"

ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.201:2380,etcd2=https://192.168.1.202:2380,etcd3=https://192.168.1.203:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"

ETCD_ENABLE_V2=true

EOF


# master2 config file

cat >  /etc/etcd/etcd.conf << EOF

#[Member]

ETCD_NAME="etcd2"

ETCD_DATA_DIR="/opt/data/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="https://192.168.1.202:2380"

ETCD_LISTEN_CLIENT_URLS="https://192.168.1.202:2379,http://127.0.0.1:2379"



#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.202:2380"

ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.202:2379"

ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.201:2380,etcd2=https://192.168.1.202:2380,etcd3=https://192.168.1.203:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"

ETCD_ENABLE_V2=true

EOF


# master3 config file

cat >  /etc/etcd/etcd.conf << EOF

#[Member]

ETCD_NAME="etcd3"

ETCD_DATA_DIR="/opt/data/etcd/default.etcd"

ETCD_LISTEN_PEER_URLS="https://192.168.1.203:2380"

ETCD_LISTEN_CLIENT_URLS="https://192.168.1.203:2379,http://127.0.0.1:2379"



#[Clustering]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.203:2380"

ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.203:2379"

ETCD_INITIAL_CLUSTER="etcd1=https://192.168.1.201:2380,etcd2=https://192.168.1.202:2380,etcd3=https://192.168.1.203:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_INITIAL_CLUSTER_STATE="new"

ETCD_ENABLE_V2=true

EOF


# Copy the certificates into place

cp /opt/packages/cert/ca*.pem /etc/etcd/ssl/

cp /opt/packages/cert/etcd*.pem /etc/etcd/ssl/


# Sync the certificates to the other master nodes

for i in k8s-master2 k8s-master3;do scp -r /etc/etcd/ssl/* $i:/etc/etcd/ssl/;done


# Write the systemd unit file (needed on every master node)

cat > /usr/lib/systemd/system/etcd.service << "EOF"

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target



[Service]

Type=simple

EnvironmentFile=/etc/etcd/etcd.conf

WorkingDirectory=/opt/data/etcd/

ExecStart=/usr/local/bin/etcd  \

 --cert-file=/etc/etcd/ssl/etcd.pem \

 --key-file=/etc/etcd/ssl/etcd-key.pem \

 --trusted-ca-file=/etc/etcd/ssl/ca.pem \

 --peer-cert-file=/etc/etcd/ssl/etcd.pem \

 --peer-key-file=/etc/etcd/ssl/etcd-key.pem \

 --peer-trusted-ca-file=/etc/etcd/ssl/ca.pem \

 --peer-client-cert-auth --client-cert-auth

Restart=on-failure

RestartSec=5

LimitNOFILE=65536



[Install]

WantedBy=multi-user.target

EOF


# Start the etcd service

systemctl daemon-reload && systemctl start etcd && systemctl status etcd


# Verify cluster state

# Check endpoint health (ETCDCTL_API pins the API version; it may be omitted)

ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table \
--cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints=https://192.168.1.201:2379,https://192.168.1.202:2379,https://192.168.1.203:2379 endpoint health


# List the etcd members

ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table \
--cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints=https://192.168.1.201:2379,https://192.168.1.202:2379,https://192.168.1.203:2379 member list


# Show etcd cluster status

ETCDCTL_API=3 /usr/local/bin/etcdctl --write-out=table \
--cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem \
--key=/etc/etcd/ssl/etcd-key.pem \
--endpoints=https://192.168.1.201:2379,https://192.168.1.202:2379,https://192.168.1.203:2379 endpoint status
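Beyond the health and status checks, a put/get round trip is a simple smoke test; a sketch reusing the same TLS flags:

export ETCDCTL_API=3
FLAGS="--cacert=/etc/etcd/ssl/ca.pem --cert=/etc/etcd/ssl/etcd.pem --key=/etc/etcd/ssl/etcd-key.pem --endpoints=https://192.168.1.201:2379"
etcdctl $FLAGS put /smoke-test ok    # write a test key
etcdctl $FLAGS get /smoke-test       # should print /smoke-test and ok
etcdctl $FLAGS del /smoke-test       # clean up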

V. Download and Install the Kubernetes Packages


# Download the release tarball (also available from the official site)

wget https://dl.k8s.io/v1.21.10/kubernetes-server-linux-amd64.tar.gz


# Unpack it

tar -xf kubernetes-server-linux-amd64.tar.gz


# Copy kube-apiserver, kube-controller-manager, kube-scheduler and kubectl to the master nodes

cd /opt/packages/kubernetes/server/bin

cp kube-apiserver kube-controller-manager kubectl kube-scheduler /usr/local/bin/

for i in k8s-master2 k8s-master3;do scp  kube-apiserver kube-controller-manager kubectl kube-scheduler $i:/usr/local/bin/;done


# Copy kubelet and kube-proxy to the worker nodes (the masters may get them as well)

for i in k8s-master1 k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3;do scp kubelet kube-proxy $i:/usr/local/bin/;done


# Create directories on every cluster node

mkdir -p /etc/kubernetes/  && mkdir -p /etc/kubernetes/ssl && mkdir -p /opt/log/kubernetes

VI. Deploy kube-apiserver


1. Create the apiserver certificate signing request

cat > /opt/packages/cert/kube-apiserver-csr.json << "EOF"

{

"CN": "kubernetes",

  "hosts": [

    "127.0.0.1",

    "192.168.1.201",

    "192.168.1.202",

    "192.168.1.203",

    "192.168.1.204",

    "192.168.1.205",

    "192.168.1.206",

    "192.168.1.200",

    "10.40.0.1",

    "kubernetes",

    "kubernetes.default",

    "kubernetes.default.svc",

    "kubernetes.default.svc.cluster",

    "kubernetes.default.svc.cluster.local"

  ],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "Beijing",

      "L": "Beijing",

      "O": "kubemsb",

      "OU": "CN"

    }

  ]

}

EOF


# Generate the apiserver certificate and the token file

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-apiserver-csr.json | cfssljson -bare kube-apiserver

cat > token.csv << EOF

$(head -c 16 /dev/urandom | od -An -t x | tr -d ' '),kubelet-bootstrap,10001,"system:kubelet-bootstrap"

EOF


2. Create the apiserver service configuration


# Create the log directory on all three masters

mkdir -p /opt/log/api-server/


# master1 config file (adjust the IPs and certificate paths to match)

 cat > /etc/kubernetes/kube-apiserver.conf << "EOF"

KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \

 --anonymous-auth=false \

 --bind-address=192.168.1.201 \

 --secure-port=6443 \

 --advertise-address=192.168.1.201 \

 --insecure-port=0 \

 --authorization-mode=Node,RBAC \

 --runtime-config=api/all=true \

 --enable-bootstrap-token-auth \

 --service-cluster-ip-range=10.40.0.0/16 \

 --token-auth-file=/etc/kubernetes/token.csv \

 --service-node-port-range=30000-50000 \

 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \

 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \

 --client-ca-file=/etc/kubernetes/ssl/ca.pem \

 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \

 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \

 --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \

 --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \

 --service-account-issuer=api --etcd-cafile=/etc/etcd/ssl/ca.pem \

 --etcd-certfile=/etc/etcd/ssl/etcd.pem \

 --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \

 --etcd-servers=https://192.168.1.201:2379,https://192.168.1.202:2379,https://192.168.1.203:2379 \

 --enable-swagger-ui=true \

 --allow-privileged=true \

 --apiserver-count=3 \

 --audit-log-maxage=30 \

 --audit-log-maxbackup=3 \

 --audit-log-maxsize=100 \

 --audit-log-path=/opt/log/api-server/kube-apiserver-audit.log \

 --event-ttl=1h \

 --alsologtostderr=true \

 --logtostderr=false \

 --log-dir=/opt/log/api-server \

 --v=4"

EOF


# master2 config file (adjust the IPs and certificate paths to match)

cat > /etc/kubernetes/kube-apiserver.conf << "EOF"

KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \

 --anonymous-auth=false \

 --bind-address=192.168.1.202 \

 --secure-port=6443 \

 --advertise-address=192.168.1.202 \

 --insecure-port=0 \

 --authorization-mode=Node,RBAC \

 --runtime-config=api/all=true \

 --enable-bootstrap-token-auth \

 --service-cluster-ip-range=10.40.0.0/16 \

 --token-auth-file=/etc/kubernetes/token.csv \

 --service-node-port-range=30000-50000 \

 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \

 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \

 --client-ca-file=/etc/kubernetes/ssl/ca.pem \

 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \

 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \

 --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \

 --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \

 --service-account-issuer=api --etcd-cafile=/etc/etcd/ssl/ca.pem \

 --etcd-certfile=/etc/etcd/ssl/etcd.pem \

 --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \

 --etcd-servers=https://192.168.1.201:2379,https://192.168.1.202:2379,https://192.168.1.203:2379 \

 --enable-swagger-ui=true \

 --allow-privileged=true \

 --apiserver-count=3 \

 --audit-log-maxage=30 \

 --audit-log-maxbackup=3 \

 --audit-log-maxsize=100 \

 --audit-log-path=/opt/log/api-server/kube-apiserver-audit.log \

 --event-ttl=1h \

 --alsologtostderr=true \

 --logtostderr=false \

 --log-dir=/opt/log/api-server \

 --v=4"

EOF


# master3 config file (adjust the IPs and certificate paths to match)

cat > /etc/kubernetes/kube-apiserver.conf << "EOF"

KUBE_APISERVER_OPTS="--enable-admission-plugins=NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \

 --anonymous-auth=false \

 --bind-address=192.168.1.203 \

 --secure-port=6443 \

 --advertise-address=192.168.1.203 \

 --insecure-port=0 \

 --authorization-mode=Node,RBAC \

 --runtime-config=api/all=true \

 --enable-bootstrap-token-auth \

 --service-cluster-ip-range=10.40.0.0/16 \

 --token-auth-file=/etc/kubernetes/token.csv \

 --service-node-port-range=30000-50000 \

 --tls-cert-file=/etc/kubernetes/ssl/kube-apiserver.pem  \

 --tls-private-key-file=/etc/kubernetes/ssl/kube-apiserver-key.pem \

 --client-ca-file=/etc/kubernetes/ssl/ca.pem \

 --kubelet-client-certificate=/etc/kubernetes/ssl/kube-apiserver.pem \

 --kubelet-client-key=/etc/kubernetes/ssl/kube-apiserver-key.pem \

 --service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \

 --service-account-signing-key-file=/etc/kubernetes/ssl/ca-key.pem  \

 --service-account-issuer=api --etcd-cafile=/etc/etcd/ssl/ca.pem \

 --etcd-certfile=/etc/etcd/ssl/etcd.pem \

 --etcd-keyfile=/etc/etcd/ssl/etcd-key.pem \

 --etcd-servers=https://192.168.1.201:2379,https://192.168.1.202:2379,https://192.168.1.203:2379 \

 --enable-swagger-ui=true \

 --allow-privileged=true \

 --apiserver-count=3 \

 --audit-log-maxage=30 \

 --audit-log-maxbackup=3 \

 --audit-log-maxsize=100 \

 --audit-log-path=/opt/log/api-server/kube-apiserver-audit.log \

 --event-ttl=1h \

 --alsologtostderr=true \

 --logtostderr=false \

 --log-dir=/opt/log/api-server \

 --v=4"

EOF


# Create the apiserver systemd unit file (run on all three masters)

cat > /usr/lib/systemd/system/kube-apiserver.service << "EOF"

[Unit]

Description=Kubernetes API Server

Documentation=https://github.com/kubernetes/kubernetes

After=network.target etcd.service



[Service]

EnvironmentFile=/etc/kubernetes/kube-apiserver.conf

ExecStart=/usr/local/bin/kube-apiserver $KUBE_APISERVER_OPTS

Restart=on-failure

RestartSec=5

Type=notify

LimitNOFILE=65536



[Install]

WantedBy=multi-user.target

EOF


# Sync the files to the three master nodes

cd /opt/packages/cert/

cp ca*.pem /etc/kubernetes/ssl/

cp kube-apiserver*.pem /etc/kubernetes/ssl/

cp token.csv /etc/kubernetes/

for i in k8s-master2 k8s-master3;do 
scp ca*.pem kube-apiserver*.pem $i:/etc/kubernetes/ssl/ && scp token.csv $i:/etc/kubernetes/;done

# Start the apiserver service (on all three masters)

systemctl daemon-reload && systemctl enable --now kube-apiserver && systemctl status kube-apiserver


# Test

curl --insecure https://192.168.1.200:16443

curl --insecure https://192.168.1.201:6443

curl --insecure https://192.168.1.202:6443

curl --insecure https://192.168.1.203:6443
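Because --anonymous-auth=false is set, each endpoint should answer with an HTTP 401 JSON body rather than hanging or refusing the connection. A compact status-code check:

for ep in https://192.168.1.200:16443 https://192.168.1.201:6443 https://192.168.1.202:6443 https://192.168.1.203:6443;do
curl -k -s -o /dev/null -w "$ep -> %{http_code}\n" $ep
done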

VII. Deploy kubectl


1. Create the kubectl certificate signing request

cd /opt/packages/cert

cat > admin-csr.json << "EOF"

{

  "CN": "admin",

  "hosts": [],

  "key": {

    "algo": "rsa",

    "size": 2048

  },

  "names": [

    {

      "C": "CN",

      "ST": "Beijing",

      "L": "Beijing",

      "O": "system:masters",            

      "OU": "system"

    }

  ]

}

EOF


# Generate the certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin


# Copy the files into place

cp admin*pem /etc/kubernetes/ssl/


2. Generate the kubeconfig file


# kube.config is kubectl's configuration file; it contains everything needed to reach the apiserver: the apiserver address, the CA certificate and the client certificate kubectl uses


# Point the kubeconfig at the cluster: CA certificate and apiserver endpoint

mkdir -p /root/.kube
kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.1.200:16443 --kubeconfig=/root/.kube/config


# Set the admin credentials

kubectl config set-credentials admin --client-certificate=/etc/kubernetes/ssl/admin.pem --client-key=/etc/kubernetes/ssl/admin-key.pem --embed-certs=true --kubeconfig=/root/.kube/config


# Set the context

kubectl config set-context kubernetes --cluster=kubernetes --user=admin --kubeconfig=/root/.kube/config


# Use the context

kubectl config use-context kubernetes --kubeconfig=/root/.kube/config


# Bind the role

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes --kubeconfig=/root/.kube/config


# Check component status

kubectl get componentstatuses


# List resources across all namespaces

kubectl get all --all-namespaces


3. Sync the kubectl config to the other master nodes


# Sync the certificate files

for i in k8s-master2 k8s-master3;do scp /etc/kubernetes/ssl/admin* $i:/etc/kubernetes/ssl/;done

for i in k8s-master2 k8s-master3;do scp -r /root/.kube/ $i:/root/;done


# Verify on the other two master nodes

export KUBECONFIG=$HOME/.kube/config

kubectl cluster-info

VIII. Deploy kube-controller-manager


1. Create the kube-controller-manager certificate signing request and cluster config

cd /opt/packages/cert

cat > kube-controller-manager-csr.json << "EOF"

{

    "CN": "system:kube-controller-manager",

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "hosts": [

      "127.0.0.1",

      "192.168.1.200",

      "192.168.1.201",

      "192.168.1.202",

         "192.168.1.203"

    ],

    "names": [

      {

        "C": "CN",

        "ST": "Beijing",

        "L": "Beijing",

        "O": "system:kube-controller-manager",

        "OU": "system"

      }

    ]

}

EOF


# Generate the kube-controller-manager certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager


# Copy the certificates to /etc/kubernetes/ssl and sync them to the other masters

cp kube-controller-manager-key.pem kube-controller-manager.pem /etc/kubernetes/ssl/

for i in k8s-master2 k8s-master3;do scp kube-controller-manager-key.pem kube-controller-manager.pem $i:/etc/kubernetes/ssl/;done


# Point the kubeconfig at the cluster: CA certificate and apiserver endpoint

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.1.200:16443 --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig


# Set the credentials

kubectl config set-credentials system:kube-controller-manager --client-certificate=/etc/kubernetes/ssl/kube-controller-manager.pem --client-key=/etc/kubernetes/ssl/kube-controller-manager-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig


# Set the context

kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig


# Use the context

kubectl config use-context system:kube-controller-manager --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig


2. Create the kube-controller-manager config file

mkdir /opt/log/control-manager

cat > /etc/kubernetes/kube-controller-manager.conf << "EOF"

KUBE_CONTROLLER_MANAGER_OPTS="--port=10252 \

  --secure-port=10257 \

  --bind-address=127.0.0.1 \

  --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \

  --service-cluster-ip-range=10.40.0.0/16 \

  --cluster-name=kubernetes \

  --cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem \

  --cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem \

  --allocate-node-cidrs=true \

  --cluster-cidr=172.40.0.0/16 \

  --experimental-cluster-signing-duration=87600h \

  --root-ca-file=/etc/kubernetes/ssl/ca.pem \

  --service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem \

  --leader-elect=true \

  --feature-gates=RotateKubeletServerCertificate=true \

  --controllers=*,bootstrapsigner,tokencleaner \

  --horizontal-pod-autoscaler-use-rest-clients=true \

  --horizontal-pod-autoscaler-sync-period=10s \

  --tls-cert-file=/etc/kubernetes/ssl/kube-controller-manager.pem \

  --tls-private-key-file=/etc/kubernetes/ssl/kube-controller-manager-key.pem \

  --use-service-account-credentials=true \

  --alsologtostderr=true \

  --logtostderr=false \

  --log-dir=/opt/log/control-manager \

  --v=2"

EOF


# Create the systemd unit file (run on all three masters)

cat > /usr/lib/systemd/system/kube-controller-manager.service << "EOF"
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/local/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

3. Start the service and verify

# Sync the files to the other nodes

for i in k8s-master2 k8s-master3;do scp /etc/kubernetes/kube-controller-manager.* $i:/etc/kubernetes/;done

# Start

systemctl daemon-reload && systemctl enable --now kube-controller-manager && systemctl status kube-controller-manager

# Verify

kubectl get componentstatuses

IX. Deploy kube-scheduler


1. Create the kube-scheduler certificate signing request


# The CSR file

cd /opt/packages/cert

cat > kube-scheduler-csr.json << "EOF"

{

    "CN": "system:kube-scheduler",

    "hosts": [

      "127.0.0.1",

      "192.168.1.201",

      "192.168.1.202",

         "192.168.1.203"

    ],

    "key": {

        "algo": "rsa",

        "size": 2048

    },

    "names": [

      {

        "C": "CN",

        "ST": "Beijing",

        "L": "Beijing",

        "O": "system:kube-scheduler",

        "OU": "system"

      }

    ]

}

EOF


# Generate the kube-scheduler certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler


# Copy the generated kube-scheduler certificates to /etc/kubernetes/ssl

cp kube-scheduler.pem kube-scheduler-key.pem /etc/kubernetes/ssl/


# Point the kubeconfig at the cluster: CA certificate and apiserver endpoint

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.1.200:16443 --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig


# Set the credentials

kubectl config set-credentials system:kube-scheduler --client-certificate=/etc/kubernetes/ssl/kube-scheduler.pem --client-key=/etc/kubernetes/ssl/kube-scheduler-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig


# Set the context

kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig


# Use the context

kubectl config use-context system:kube-scheduler --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig


2. Create the service config and unit files


# Create the kube-scheduler log directory (run on all three masters)

mkdir /opt/log/scheduler


# Create the service config file

cat > /etc/kubernetes/kube-scheduler.conf << "EOF"

KUBE_SCHEDULER_OPTS="--address=127.0.0.1 \

  --kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \

  --leader-elect=true \

  --alsologtostderr=true \

  --logtostderr=false \

  --log-dir=/opt/log/scheduler \

  --v=2"

EOF


# Create the systemd unit file

 cat > /usr/lib/systemd/system/kube-scheduler.service << "EOF"

[Unit]

Description=Kubernetes Scheduler

Documentation=https://github.com/kubernetes/kubernetes



[Service]

EnvironmentFile=/etc/kubernetes/kube-scheduler.conf

ExecStart=/usr/local/bin/kube-scheduler $KUBE_SCHEDULER_OPTS

Restart=on-failure

RestartSec=5



[Install]

WantedBy=multi-user.target

EOF


3. Sync the files to the other master nodes


# Sync the certificates, the config file and the unit file

for i in k8s-master2 k8s-master3;do scp /etc/kubernetes/ssl/kube-scheduler* $i:/etc/kubernetes/ssl/;done

for i in k8s-master2 k8s-master3;do scp /etc/kubernetes/kube-scheduler.* $i:/etc/kubernetes/;done

for i in k8s-master2 k8s-master3;do scp /usr/lib/systemd/system/kube-scheduler.service $i:/usr/lib/systemd/system/;done


4. Start the scheduler service and verify

systemctl daemon-reload && systemctl enable --now kube-scheduler && systemctl status kube-scheduler


# Verify

kubectl get componentstatuses

X. Install Docker (required on all nodes)


# Install docker

wget -P /etc/yum.repos.d/ https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum -y install docker-ce-19.03.*


# Create the data directory (run on all six nodes)

mkdir /home/data ;ln -s /home/data /data && mkdir /data/docker


# Configure /etc/docker/daemon.json

cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": [
    "https://651cp807.mirror.aliyuncs.com",
    "https://docker.mirrors.ustc.edu.cn",
    "http://hub-mirror.c.163.com"
  ],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "max-concurrent-downloads": 10,
  "log-driver": "json-file",
  "log-level": "warn",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "data-root": "/data/docker"
}
EOF


# Start the docker service

systemctl daemon-reload && systemctl enable --now docker
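A quick check that daemon.json was picked up:

# Expect "Cgroup Driver: systemd" and "Docker Root Dir: /data/docker"
docker info 2>/dev/null | grep -E 'Cgroup Driver|Docker Root Dir'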

XI. Deploy kubelet


1. Create kubelet-bootstrap.kubeconfig (run on master1)


# Extract the token stored in token.csv

BOOTSTRAP_TOKEN=$(awk -F "," '{print $1}' /etc/kubernetes/token.csv)


# Point the kubeconfig at the cluster: CA certificate and apiserver endpoint

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.1.200:16443 --kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig


# Set the credentials

kubectl config set-credentials kubelet-bootstrap --token=${BOOTSTRAP_TOKEN} --kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig


# Set the context

kubectl config set-context default --cluster=kubernetes --user=kubelet-bootstrap --kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig


# Use the context

kubectl config use-context default --kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig


# Create the cluster-system-anonymous role binding

kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=kubelet-bootstrap


# Create the kubelet-bootstrap role binding

kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap --kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig


# Verify

kubectl describe clusterrolebinding cluster-system-anonymous

kubectl describe clusterrolebinding kubelet-bootstrap


2. Create the kubelet configuration



# Each node's IP and hostname must match the actual node; see the sed sketch after the sync loop below

mkdir -p /opt/log/kubelet


# Create the systemd unit file

cat > /usr/lib/systemd/system/kubelet.service << "EOF"
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet \
  --logtostderr=true --v=2 --log-dir=/opt/log/kubelet \
  --hostname-override=k8s-master1 --network-plugin=cni \
  --kubeconfig=/etc/kubernetes/kubelet.kubeconfig \
  --bootstrap-kubeconfig=/etc/kubernetes/kubelet-bootstrap.kubeconfig \
  --config=/etc/kubernetes/kubelet-config.yml \
  --cert-dir=/etc/kubernetes/ssl \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.2
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF



# Write the kubelet parameters file

cat > /etc/kubernetes/kubelet-config.yml << EOF

kind: KubeletConfiguration

apiVersion: kubelet.config.k8s.io/v1beta1

address: 192.168.1.201

port: 10250

readOnlyPort: 10255

cgroupDriver: systemd

clusterDNS:

- 10.40.0.2

clusterDomain: cluster.local

failSwapOn: false

authentication:

  anonymous:

    enabled: false

  webhook:

    cacheTTL: 2m0s

    enabled: true

  x509:

    clientCAFile: /etc/kubernetes/ssl/ca.pem

authorization:

  mode: Webhook

  webhook:

    cacheAuthorizedTTL: 5m0s

    cacheUnauthorizedTTL: 30s

evictionHard:

  imagefs.available: 15%

  memory.available: 100Mi

  nodefs.available: 10%

  nodefs.inodesFree: 5%

maxOpenFiles: 1000000

maxPods: 110

EOF


# Sync to the other nodes, then fix each node's hostname and IP in the config files


# Create the directory on the three worker nodes

mkdir -p /etc/kubernetes/ssl

for i in k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3;do

scp /etc/kubernetes/kubelet* $i:/etc/kubernetes

scp /etc/kubernetes/ssl/ca.pem $i:/etc/kubernetes/ssl

scp /usr/lib/systemd/system/kubelet.service $i:/usr/lib/systemd/system/

done
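On each node that received the files, the address in kubelet-config.yml and the --hostname-override in the unit file still point at master1. A sketch of fixing both locally, mirroring the sed trick used for kube-proxy below (assumes hostname -I returns the node's own IP first and the hostnames match /etc/hosts):

export IP=$(hostname -I|awk '{print $1}')
sed -i 's/192.168.1.201/'"$IP"'/g' /etc/kubernetes/kubelet-config.yml
sed -i 's/k8s-master1/'"$(hostname)"'/g' /usr/lib/systemd/system/kubelet.service
systemctl daemon-reload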


3. Start and verify

# Start

systemctl daemon-reload && systemctl enable --now kubelet && systemctl status kubelet

# Node status

kubectl get csr

kubectl get node
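If the nodes don't show up, the bootstrap CSRs are likely sitting in Pending; in this setup it is safe to approve them by hand:

# Approve every pending CSR, then re-check the nodes
kubectl get csr -o name | xargs kubectl certificate approve
kubectl get node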

XII. Deploy kube-proxy (required on every cluster node)


1. Generate the certificate (run on master1)


# Create the kube-proxy certificate signing request

cd /opt/packages/cert

cat > kube-proxy-csr.json << EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing",
      "O": "kubemsb",
      "OU": "CN"
    }
  ]
}
EOF


# Generate the kube-proxy certificate

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy


# Create the kubeconfig file

kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.1.200:16443 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig

cp kube-proxy-key.pem kube-proxy.pem /etc/kubernetes/ssl/


# Set the credentials

kubectl config set-credentials kube-proxy --client-certificate=/etc/kubernetes/ssl/kube-proxy.pem --client-key=/etc/kubernetes/ssl/kube-proxy-key.pem --embed-certs=true --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig


# Set the context

kubectl config set-context default --cluster=kubernetes --user=kube-proxy --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig


# Use the context

kubectl config use-context default --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig


2. Create the service config file (adjust the IPs when syncing it to other nodes)


# Create the service config file (on master1)

cat > /etc/kubernetes/kube-proxy.yaml << "EOF"
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.1.201
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
clusterCIDR: 172.40.0.0/16
healthzBindAddress: 192.168.1.201:10256
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.1.201:10249
mode: "ipvs"
EOF


# Create the systemd unit file (on master1)

cat > /usr/lib/systemd/system/kube-proxy.service << "EOF"
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/kubernetes/kubernetes
After=network.target

[Service]
WorkingDirectory=/opt/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/kube-proxy.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/opt/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF


# Create directories on all worker nodes

mkdir -p /opt/log/kubernetes && mkdir -p /opt/kube-proxy


# Create the directory on the master nodes

mkdir -p /opt/kube-proxy


# Sync the config files to the other nodes and start the service

for i in  k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3;

do

scp /etc/kubernetes/kube-proxy.* $i:/etc/kubernetes/

scp /etc/kubernetes/ssl/kube-proxy* $i:/etc/kubernetes/ssl/

scp /usr/lib/systemd/system/kube-proxy.service $i:/usr/lib/systemd/system

done


# Rewrite the config with each host's own IP (run on all the remaining nodes)

export IP=$(hostname -I|awk '{print $1}') && sed -i 's/192.168.1.201/'"$IP"'/g' /etc/kubernetes/kube-proxy.yaml && cat /etc/kubernetes/kube-proxy.yaml


# Reload and start the service

systemctl daemon-reload && systemctl enable --now kube-proxy && systemctl status kube-proxy.service
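Since mode is "ipvs", the Service rules should materialize as IPVS virtual servers; a quick look (assumes the ipvsadm package is installed):

yum -y install ipvsadm

# The kubernetes Service VIP 10.40.0.1:443 should list the master apiservers as real servers
ipvsadm -Ln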

XIII. Deploy flannel and CoreDNS


1. Download the network plugin manifest and change its Pod CIDR (the Network field) to 172.40.0.0/16

wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

kubectl apply -f kube-flannel.yml
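Watch the flannel DaemonSet come up before judging node state (the namespace varies across manifest versions, so search all of them):

kubectl get pods -A -o wide | grep flannel
kubectl get node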



2. After the plugin pods are running, the nodes may still sit in NotReady. Troubleshooting went like this:

(1) systemctl status kubelet shows:

network plugin is not ready: cni config uninitialized

while all the flannel pods are running without errors.

(2) The kubelet log keeps printing an error (tailf /opt/log/kubelet/kubelet.INFO):

err="[failed to find plugin \"portmap\" in path [/opt/cni/bin]]

Cause: the plugin executables are missing from /opt/cni/bin.

(3) Install the missing plugins (portmap, loopback, bridge, host-local); see the download sketch after the sync loop below. Once they are in place the errors stop and the nodes turn Ready. Sync them to the other nodes:

for i in  k8s-master2 k8s-master3 k8s-node1 k8s-node2 k8s-node3;

do

scp /opt/cni/bin/* $i:/opt/cni/bin/

done
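The four plugins named above ship in the containernetworking/plugins release bundle; a sketch of fetching it on master1 first (v1.1.1 assumed here; any recent release carries portmap, loopback, bridge and host-local):

mkdir -p /opt/cni/bin
wget https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz
tar -xf cni-plugins-linux-amd64-v1.1.1.tgz -C /opt/cni/bin/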


3. Install the CoreDNS add-on

wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/coredns/coredns.yaml.base

cp coredns.yaml.base  coredns.yaml

# Change the cluster domain suffix __DNS__DOMAIN__, usually cluster.local

#77         kubernetes __DNS__DOMAIN__ in-addr.arpa ip6.arpa {

kubernetes cluster.local in-addr.arpa ip6.arpa {

# Swap the Google registry image for the Docker Hub one, which is easier to pull

#142         image: k8s.gcr.io/coredns/coredns:v1.8.6

image: coredns/coredns:1.8.6

# Set the pod memory limit; 300Mi is enough

#146             memory: __DNS__MEMORY__LIMIT__

memory: 300Mi

# Set the CoreDNS service IP, usually the second address of the Service CIDR (10.40.0.2); the first is taken by the apiserver's kubernetes service

#212   clusterIP: __DNS__SERVER__

clusterIP: 10.40.0.2
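With the four edits in place, apply the manifest and give DNS a functional test (busybox:1.28 is a common choice because its nslookup behaves well):

kubectl apply -f coredns.yaml
kubectl get pods -n kube-system -l k8s-app=kube-dns

# Resolve the kubernetes service through the new 10.40.0.2 resolver
kubectl run -it --rm dns-test --image=busybox:1.28 -- nslookup kubernetes.default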


4. Other add-ons

For Ingress installation, see: https://www.yht7.com/news/199332






Copyright notice: this is an original article by weixin_47415962, licensed under CC 4.0 BY-SA; please include a link to the original source and this notice when republishing.