Deploying a Highly Available Multi-Master Kubernetes Cluster from Binaries with HAProxy



The previous two articles set up etcd and the flannel network; this article configures the Kubernetes master cluster.

For the etcd cluster setup, see:

Building a Multi-Master Kubernetes Cluster from Binaries, Part 1: Setting Up an etcd Cluster with TLS Certificates

For the flannel network configuration, see:

Building a Multi-Master Kubernetes Cluster from Binaries, Part 2: Configuring the flannel Network

This article deploys the Kubernetes master cluster on the following hosts:

k8s-master1:192.168.80.7

k8s-master2:192.168.80.8

k8s-master3:192.168.80.9


Configuring the Kubernetes Master Cluster

A Kubernetes master node runs the following components:

  • kube-apiserver
  • kube-scheduler
  • kube-controller-manager

For now, these three components must be deployed on the same machines.

  • kube-scheduler, kube-controller-manager, and kube-apiserver are tightly related in function;
  • only one kube-scheduler process and one kube-controller-manager process may be active at a time; if multiple instances run, a leader must be elected among them.


I. Deploying the kubectl command-line tool

kubectl is the command-line management tool for a Kubernetes cluster; this section covers installing and configuring it.

By default, kubectl reads the kube-apiserver address, certificate, and user name from the ~/.kube/config file; if that file is missing, kubectl commands may fail.

~/.kube/config only needs to be generated once; it can then be copied to the other masters.

1. Download kubectl

wget https://dl.k8s.io/v1.12.3/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kube-apiserver kubeadm kube-controller-manager kubectl kube-scheduler /usr/local/bin
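A quick sanity check that the binaries are installed and on the PATH (both should report v1.12.3, the version downloaded above):

kubectl version --client
kube-apiserver --version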

2. Create the admin certificate signing request

[root@k8s-master1 ssl]# cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "4Paradigm"
    }
  ]
}
EOF

  • O is system:masters; kube-apiserver sets the Group of requests made with this certificate to system:masters;
  • the predefined ClusterRoleBinding cluster-admin binds the Group system:masters to the ClusterRole cluster-admin, which grants access to all APIs;
  • this certificate is only used by kubectl as a client certificate, so the hosts field is empty.

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin

3. Create the ~/.kube/config file

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://114.67.81.105:8443 \
  --kubeconfig=kubectl.kubeconfig

# Set client authentication parameters
kubectl config set-credentials admin \
  --client-certificate=admin.pem \
  --client-key=admin-key.pem \
  --embed-certs=true \
  --kubeconfig=kubectl.kubeconfig

# Set context parameters
kubectl config set-context kubernetes \
  --cluster=kubernetes \
  --user=admin \
  --kubeconfig=kubectl.kubeconfig
  
# Set the default context
kubectl config use-context kubernetes --kubeconfig=kubectl.kubeconfig

4. Distribute the ~/.kube/config file

[root@k8s-master1 temp]# cp kubectl.kubeconfig ~/.kube/config
[root@k8s-master1 temp]# scp kubectl.kubeconfig k8s-master2:~/.kube/config
kubectl.kubeconfig                                                                                                                                                                                    100% 6285     2.2MB/s   00:00    
[root@k8s-master1 temp]# scp kubectl.kubeconfig k8s-master3:~/.kube/config
kubectl.kubeconfig 
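As a quick check that kubectl picks up the new file on each master:

kubectl config current-context    # should print "kubernetes"
kubectl config view --minify      # shows the cluster, user, and server address in use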


II. Deploying kube-apiserver

1. Create the kube-apiserver certificate signing request:

[root@k8s-master1 ssl]# cat > kubernetes-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "192.168.80.7",
    "192.168.80.8",
    "192.168.80.9",
    "192.168.80.13",
    "114.67.81.105",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

  • the hosts field lists the IPs and domain names authorized to use the certificate; here it contains the VIP, the apiserver node IPs, and the kubernetes service IP and domain names;
  • a domain name must not end with a dot . (e.g. kubernetes.default.svc.cluster.local. is invalid); otherwise parsing fails with: x509: cannot parse dnsName "kubernetes.default.svc.cluster.local."
  • if you use a domain other than cluster.local, such as bqding.com, change the last two names in the list to kubernetes.default.svc.bqding and kubernetes.default.svc.bqding.com;
  • the host entries (originally highlighted in red) are, in order, the master node IPs (192.168.80.7-9), the load balancer's internal IP (192.168.80.13), and its public IP (114.67.81.105).

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

2. Copy the generated certificate and key files to the master nodes:

[root@k8s-master1 ssl]# cp kubernetes*.pem /etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kubernetes*.pem k8s-master2:/etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kubernetes*.pem k8s-master3:/etc/kubernetes/cert/

3. Create the encryption configuration file

[root@k8s-master1 ssl]# cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: $(head -c 32 /dev/urandom | base64)
      - identity: {}
EOF
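Once kube-apiserver is running with this file (steps 5-8 below), you can verify that newly written Secrets really are stored encrypted by reading one straight out of etcd. A minimal sketch, assuming the etcd client certificates used elsewhere in this series:

kubectl create secret generic test-secret --from-literal=key=value
ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.80.4:2379 \
  --cacert=/etc/kubernetes/cert/ca.pem \
  --cert=/etc/kubernetes/cert/kubernetes.pem \
  --key=/etc/kubernetes/cert/kubernetes-key.pem \
  get /registry/secrets/default/test-secret | hexdump -C | head
# the stored value should begin with k8s:enc:aescbc:v1:key1 rather than plain text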

4. Distribute the encryption config file to the master nodes

[root@k8s-master1 ssl]# cp encryption-config.yaml /etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp encryption-config.yaml k8s-master2:/etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp encryption-config.yaml k8s-master3:/etc/kubernetes/cert/

5. Create the kube-apiserver systemd unit file

[root@k8s-master1 ssl]# cat > /etc/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
ExecStart=/usr/local/bin/kube-apiserver \
  --enable-admission-plugins=Initializers,NamespaceLifecycle,NodeRestriction,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota \
  --anonymous-auth=false \
  --experimental-encryption-provider-config=/etc/kubernetes/cert/encryption-config.yaml \
  --advertise-address=192.168.80.7 \
  --bind-address=192.168.80.7 \
  --insecure-port=0 \
  --authorization-mode=Node,RBAC \
  --runtime-config=api/all \
  --enable-bootstrap-token-auth \
  --service-cluster-ip-range=10.254.0.0/16 \
  --service-node-port-range=30000-32700 \
  --tls-cert-file=/etc/kubernetes/cert/kubernetes.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kubernetes-key.pem \
  --client-ca-file=/etc/kubernetes/cert/ca.pem \
  --kubelet-client-certificate=/etc/kubernetes/cert/kubernetes.pem \
  --kubelet-client-key=/etc/kubernetes/cert/kubernetes-key.pem \
  --service-account-key-file=/etc/kubernetes/cert/ca-key.pem \
  --etcd-cafile=/etc/kubernetes/cert/ca.pem \
  --etcd-certfile=/etc/kubernetes/cert/kubernetes.pem \
  --etcd-keyfile=/etc/kubernetes/cert/kubernetes-key.pem \
  --etcd-servers=https://192.168.80.4:2379,https://192.168.80.5:2379,https://192.168.80.6:2379 \
  --enable-swagger-ui=true \
  --allow-privileged=true \
  --apiserver-count=3 \
  --audit-log-maxage=30 \
  --audit-log-maxbackup=3 \
  --audit-log-maxsize=100 \
  --audit-log-path=/var/log/kube-apiserver-audit.log \
  --event-ttl=1h \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF


  • --experimental-encryption-provider-config: enables encryption at rest for Secrets;
  • --authorization-mode=Node,RBAC: enables the Node and RBAC authorization modes, rejecting unauthorized requests;
  • --enable-admission-plugins: enables the ServiceAccount and NodeRestriction admission plugins, among others;
  • --service-account-key-file: the public key file used to verify ServiceAccount tokens; it pairs with the private key file given to kube-controller-manager via --service-account-private-key-file;
  • --tls-*-file: the certificate, private key, and CA files used by the apiserver; --client-ca-file verifies the certificates presented by clients (kube-controller-manager, kube-scheduler, kubelet, kube-proxy, etc.);
  • --kubelet-client-certificate and --kubelet-client-key: when set, the apiserver uses https to access kubelet APIs; RBAC rules must be defined for the user in that certificate (the kubernetes*.pem certificates above use the user kubernetes), otherwise calls to the kubelet API fail as unauthorized;
  • --bind-address: must not be 127.0.0.1, otherwise the secure port 6443 cannot be reached from outside;
  • --insecure-port=0: disables the insecure port (8080);
  • --service-cluster-ip-range: the Service cluster IP range;
  • --service-node-port-range: the NodePort port range;
  • --runtime-config=api/all: enables all API versions, e.g. autoscaling/v2alpha1;
  • --enable-bootstrap-token-auth: enables token authentication for kubelet bootstrapping;
  • --apiserver-count=3: the number of apiserver instances in the cluster (unlike kube-scheduler and kube-controller-manager, all apiserver instances serve requests concurrently; there is no leader election);
  • --advertise-address and --bind-address (192.168.80.7 above, originally highlighted in red) must be changed to each master's own IP.

6. Distribute kube-apiserver.service to the other masters

[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-apiserver.service k8s-master2:/etc/systemd/system/kube-apiserver.service
[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-apiserver.service k8s-master3:/etc/systemd/system/kube-apiserver.service

7. Create the log directory

mkdir -p /var/log/kubernetes

8. Start the kube-apiserver service

[root@k8s-master1 ssl]# systemctl daemon-reload
[root@k8s-master1 ssl]# systemctl enable kube-apiserver
[root@k8s-master1 ssl]# systemctl start kube-apiserver

9. Check kube-apiserver and cluster status

[root@k8s-master1 ssl]# netstat -ptln | grep kube-apiserve
tcp        0      0 192.168.80.9:6443       0.0.0.0:*               LISTEN      22348/kube-apiserve

[root@k8s-master1 ssl]# kubectl cluster-info
Kubernetes master is running at https://114.67.81.105:8443

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

10. Grant the kubernetes certificate access to the kubelet API

kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes
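To confirm the binding exists:

kubectl describe clusterrolebinding kube-apiserver:kubelet-apis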


III. Deploying kube-controller-manager

This component runs on all three master nodes: after startup a leader is elected through competition and the other instances block; when the leader becomes unavailable, the remaining instances elect a new leader, keeping the service available.

To secure communication, we first generate an x509 certificate and private key. kube-controller-manager uses the certificate in two cases:

  1. when communicating with kube-apiserver's secure port;
  2. when serving Prometheus-format metrics on its secure port (https, 10257; see the netstat output in step 6);

1. Create the kube-controller-manager certificate signing request:

[root@k8s-master1 ssl]# cat > kube-controller-manager-csr.json << EOF
{
    "CN": "system:kube-controller-manager",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "hosts": [
      "127.0.0.1",
      "192.168.80.7",
      "192.168.80.8",
      "192.168.80.9"
    ],
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-controller-manager",
        "OU": "4Paradigm"
      }
    ]
}
EOF

  • the hosts list contains all kube-controller-manager node IPs;
  • CN and O are both system:kube-controller-manager; the built-in ClusterRoleBinding system:kube-controller-manager grants kube-controller-manager the permissions it needs.

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager

2. Distribute the certificate and key to all master nodes

[root@k8s-master1 ssl]# cp kube-controller-manager*.pem /etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-controller-manager*.pem k8s-master2:/etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-controller-manager*.pem k8s-master3:/etc/kubernetes/cert/

3. Create and distribute the kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://114.67.81.105:8443 \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
  --client-certificate=kube-controller-manager.pem \
  --client-key=kube-controller-manager-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context system:kube-controller-manager \
  --cluster=kubernetes \
  --user=system:kube-controller-manager \
  --kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig

Distribute kube-controller-manager.kubeconfig to all master nodes:

[root@k8s-master1 ssl]# cp kube-controller-manager.kubeconfig /etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-controller-manager.kubeconfig k8s-master2:/etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-controller-manager.kubeconfig k8s-master3:/etc/kubernetes/cert/

4. Create and distribute the kube-controller-manager systemd unit file

[root@k8s-master1 ssl]# cat > /etc/systemd/system/kube-controller-manager.service  << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \
  --address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/cert/kube-controller-manager.kubeconfig \
  --authentication-kubeconfig=/etc/kubernetes/cert/kube-controller-manager.kubeconfig \
  --service-cluster-ip-range=10.254.0.0/16 \
  --cluster-name=kubernetes \
  --cluster-signing-cert-file=/etc/kubernetes/cert/ca.pem \
  --cluster-signing-key-file=/etc/kubernetes/cert/ca-key.pem \
  --experimental-cluster-signing-duration=8760h \
  --root-ca-file=/etc/kubernetes/cert/ca.pem \
  --service-account-private-key-file=/etc/kubernetes/cert/ca-key.pem \
  --leader-elect=true \
  --feature-gates=RotateKubeletServerCertificate=true \
  --controllers=*,bootstrapsigner,tokencleaner \
  --horizontal-pod-autoscaler-use-rest-clients=true \
  --horizontal-pod-autoscaler-sync-period=10s \
  --tls-cert-file=/etc/kubernetes/cert/kube-controller-manager.pem \
  --tls-private-key-file=/etc/kubernetes/cert/kube-controller-manager-key.pem \
  --use-service-account-credentials=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF


  • --port=0: disables the insecure http /metrics listener; when set, --address is ignored and --bind-address takes effect (neither --port nor --secure-port is set in the unit above);
  • --secure-port=10252 together with --bind-address=0.0.0.0: would serve https /metrics on port 10252 on all interfaces;
  • --address: sets the listen address to 127.0.0.1;
  • --kubeconfig: the kubeconfig file kube-controller-manager uses to connect to and authenticate against kube-apiserver;
  • --cluster-signing-*-file: the CA certificate and key used to sign certificates created by TLS bootstrapping;
  • --experimental-cluster-signing-duration: validity period of the TLS bootstrapping certificates;
  • --root-ca-file: the CA certificate placed into container ServiceAccounts, used to verify kube-apiserver's certificate;
  • --service-account-private-key-file: the private key used to sign ServiceAccount tokens; it must pair with the public key given to kube-apiserver via --service-account-key-file;
  • --service-cluster-ip-range: the Service cluster IP range; must match the same flag on kube-apiserver;
  • --leader-elect=true: enables leader election; the elected leader does the work while the other instances block;
  • --feature-gates=RotateKubeletServerCertificate=true: enables automatic rotation of kubelet server certificates;
  • --controllers=*,bootstrapsigner,tokencleaner: the list of enabled controllers; tokencleaner automatically cleans up expired bootstrap tokens;
  • --horizontal-pod-autoscaler-*: custom-metrics-related flags; enable support for autoscaling/v2alpha1;
  • --tls-cert-file and --tls-private-key-file: the server certificate and key used when serving metrics over https;
  • --use-service-account-credentials=true: run each controller with its own ServiceAccount credentials.

Distribute the kube-controller-manager systemd unit file:

[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-controller-manager.service k8s-master2:/etc/systemd/system/kube-controller-manager.service
[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-controller-manager.service k8s-master3:/etc/systemd/system/kube-controller-manager.service

5. Start the kube-controller-manager service

[root@k8s-master1 ssl]# systemctl daemon-reload
[root@k8s-master1 ssl]# systemctl enable kube-controller-manager
[root@k8s-master1 ssl]# systemctl start kube-controller-manager

6. Check the kube-controller-manager service

[root@k8s-master1 ssl]# netstat -lnpt|grep kube-controll
tcp        0      0 127.0.0.1:10252         0.0.0.0:*               LISTEN      17906/kube-controll 
tcp6       0      0 :::10257                :::*                    LISTEN      17906/kube-controll

7. View the current kube-controller-manager leader

[root@k8s-master1 ssl]# kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master3_d19698f1-0379-11e9-9c06-fa163e0a2feb","leaseDurationSeconds":15,"acquireTime":"2018-12-19T10:40:15Z","renewTime":"2018-12-19T11:12:43Z","leaderTransitions":5}'
  creationTimestamp: 2018-12-19T08:53:45Z
  name: kube-controller-manager
  namespace: kube-system
  resourceVersion: "9860"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager
  uid: 97ef4bad-036b-11e9-90aa-fa163e5caede

As shown, the current leader is the k8s-master3 node.


IV. Deploying kube-scheduler

This component also runs on all three master nodes: a leader is elected at startup, the other instances block, and a new leader is elected if the current one becomes unavailable, keeping the service available.

To secure communication, we first generate an x509 certificate and private key. kube-scheduler uses the certificate in two cases:

  1. when communicating with kube-apiserver's secure port;
  2. when serving Prometheus-format metrics (note: in this version kube-scheduler serves metrics over plain http on port 10251, as described below);

1. Create the kube-scheduler certificate signing request

[root@k8s-master1 ssl]# cat > kube-scheduler-csr.json << EOF
{
    "CN": "system:kube-scheduler",
    "hosts": [
      "127.0.0.1",
      "192.168.80.7",
      "192.168.80.8",
      "192.168.80.9"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
      {
        "C": "CN",
        "ST": "BeiJing",
        "L": "BeiJing",
        "O": "system:kube-scheduler",
        "OU": "4Paradigm"
      }
    ]
}
EOF

  • the hosts list contains all kube-scheduler node IPs;
  • CN and O are both system:kube-scheduler; the built-in ClusterRoleBinding system:kube-scheduler grants kube-scheduler the permissions it needs.

Generate the certificate and private key:

cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler

2. Create and distribute the kube-scheduler.kubeconfig file

kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://114.67.81.105:8443 \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
  --client-certificate=kube-scheduler.pem \
  --client-key=kube-scheduler-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context system:kube-scheduler \
  --cluster=kubernetes \
  --user=system:kube-scheduler \
  --kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig

  • the certificate and key created in the previous step, along with the kube-apiserver address, are embedded in the kubeconfig file;

Distribute the kubeconfig to all master nodes:

[root@k8s-master1 ssl]# cp kube-scheduler.kubeconfig /etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-scheduler.kubeconfig k8s-master2:/etc/kubernetes/cert/
[root@k8s-master1 ssl]# scp kube-scheduler.kubeconfig k8s-master3:/etc/kubernetes/cert/

3. Create and distribute the kube-scheduler systemd unit file

[root@k8s-master1 ssl]# cat > /etc/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \
  --address=127.0.0.1 \
  --kubeconfig=/etc/kubernetes/cert/kube-scheduler.kubeconfig \
  --leader-elect=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF


  • --address: serve http /metrics on 127.0.0.1:10251; kube-scheduler does not yet accept https requests;
  • --kubeconfig: the kubeconfig file kube-scheduler uses to connect to and authenticate against kube-apiserver;
  • --leader-elect=true: enables leader election; the elected leader does the work while the other instances block;

Distribute the systemd unit file to all master nodes:

[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-scheduler.service k8s-master2:/etc/systemd/system/kube-scheduler.service
[root@k8s-master1 ssl]# scp /etc/systemd/system/kube-scheduler.service k8s-master3:/etc/systemd/system/kube-scheduler.service

4. Start the kube-scheduler service

[root@k8s-master1 ssl]# systemctl daemon-reload
[root@k8s-master1 ssl]# systemctl enable kube-scheduler
[root@k8s-master1 ssl]# systemctl start kube-scheduler

5. Check the kube-scheduler listening port

[root@k8s-master1 ssl]# netstat -lnpt|grep kube-sche
tcp        0      0 127.0.0.1:10251         0.0.0.0:*               LISTEN      17921/kube-schedule

6. View the current kube-scheduler leader

[root@k8s-master1 ssl]# kubectl get endpoints kube-scheduler --namespace=kube-system  -o yaml
apiVersion: v1
kind: Endpoints
metadata:
  annotations:
    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master1_d41f4473-0379-11e9-a19b-fa163e0a2feb","leaseDurationSeconds":15,"acquireTime":"2018-12-19T10:38:27Z","renewTime":"2018-12-19T11:14:06Z","leaderTransitions":2}'
  creationTimestamp: 2018-12-19T09:10:56Z
  name: kube-scheduler
  namespace: kube-system
  resourceVersion: "9961"
  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler
  uid: fe267870-036d-11e9-90aa-fa163e5caede

As shown, the current leader is the k8s-master1 node.


V. Verify functionality on all master nodes

[root@k8s-master1 ~]#  kubectl get componentstatuses
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-1               Healthy   {"health":"true"}   
etcd-0               Healthy   {"health":"true"}   
etcd-2               Healthy   {"health":"true"}


VI. Configuring master high availability with HAProxy + Keepalived

(Perform these steps on every master; change the host-specific values, originally highlighted in red, to match each machine.)

  • keepalived provides the VIP through which kube-apiserver is exposed;
  • haproxy listens on the VIP and proxies to all kube-apiserver instances behind it, providing health checks and load balancing;

Nodes running keepalived and haproxy are called LB nodes. Because keepalived runs in a one-master, multi-backup mode, at least two LB nodes are required.

This document reuses the three master machines as LB nodes. The port haproxy listens on (8443) must differ from kube-apiserver's port (6443) to avoid a conflict.

While running, keepalived periodically checks the health of the local haproxy process; if haproxy looks abnormal, a new master election is triggered and the VIP floats to the newly elected node, keeping the VIP highly available.

All components (kubectl, apiserver, controller-manager, scheduler, and so on) reach kube-apiserver through the VIP and haproxy's port 8443.

1. Install haproxy and keepalived

yum install -y keepalived haproxy

2. Configure haproxy on all three masters to proxy kube-apiserver

[root@k8s-master1 ~]# cat /etc/haproxy/haproxy.cfg 
global
    log /dev/log    local0
    log /dev/log    local1 notice
    chroot /var/lib/haproxy
    stats socket /var/run/haproxy-admin.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1

defaults
    log     global
    timeout connect 5000
    timeout client  10m
    timeout server  10m

listen  admin_stats
    bind 0.0.0.0:10080
    mode http
    log 127.0.0.1 local0 err
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth admin:123456
    stats hide-version
    stats admin if TRUE

listen kube-master
    bind 0.0.0.0:8443
    mode tcp
    option tcplog
    balance roundrobin
    server 192.168.80.7 192.168.80.7:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.80.8 192.168.80.8:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.80.9 192.168.80.9:6443 check inter 2000 fall 2 rise 2 weight 1

  • haproxy serves status information on port 10080;
  • haproxy listens on port 8443 on all interfaces; this port must match the one in the ${KUBE_APISERVER} environment variable;
  • the server lines list the listen IPs and ports of all kube-apiserver instances;
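Once haproxy is running (step 4), the stats endpoint configured above can confirm that all three backends are UP; a minimal check (the ;csv suffix is standard haproxy stats output, and field 18 is the backend status):

curl -s -u admin:123456 'http://127.0.0.1:10080/status;csv' | cut -d, -f1,2,18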

3. Configure keepalived on all three masters

[root@k8s-master1 ~]# cat /etc/keepalived/keepalived.conf 
global_defs {
    router_id lb-master-105
}

vrrp_script check-haproxy {
    script "killall -0 haproxy"
    interval 3
}

vrrp_instance VI-kube-master {
    state BACKUP
    nopreempt    # non-preemptive mode; set only on the BACKUP node with the highest priority
    priority 120
    dont_track_primary
    interface ens192
    virtual_router_id 68
    advert_int 3
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        114.67.81.105    # VIP; kube-apiserver is reached via this IP
    }
}

  • killall -0 haproxy checks whether the haproxy process on this node is alive.
  • router_id and virtual_router_id identify the keepalived instances belonging to this HA group; if several keepalived HA groups exist, each group's values must be distinct;
  • on the other two backups, remove nopreempt and set priority to 110 and 100 respectively, as in the snippet below.
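For example, the instance block on k8s-master2 would look like this (a sketch; k8s-master3 is identical except for priority 100):

vrrp_instance VI-kube-master {
    state BACKUP
    priority 110         # no nopreempt here; only the highest-priority backup carries it
    dont_track_primary
    interface ens192
    virtual_router_id 68
    advert_int 3
    track_script {
        check-haproxy
    }
    virtual_ipaddress {
        114.67.81.105
    }
}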

4. Start the haproxy and keepalived services

#haproxy
systemctl enable haproxy
systemctl start haproxy

#keepalived
systemctl enable keepalived
systemctl start keepalived

5. Check the haproxy and keepalived service status

systemctl status haproxy|grep Active
systemctl status keepalived|grep Active

Active: active (running) means the service is healthy.

6. Check which node holds the VIP

ip addr show | grep 114.67.81.105

In my case the VIP is on 192.168.80.7.

To verify the HA setup, stop the haproxy service on 192.168.80.7: the VIP floats to 192.168.80.8. When 192.168.80.7 is fixed and comes back, it does not reclaim the VIP, because it is configured with nopreempt.
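A minimal failover test, run as root (assuming the VIP currently sits on 192.168.80.7):

# on 192.168.80.7: stop haproxy and confirm the VIP leaves
systemctl stop haproxy
ip addr show | grep 114.67.81.105    # should print nothing after a few seconds

# on 192.168.80.8: the VIP should now be present
ip addr show | grep 114.67.81.105

# back on 192.168.80.7: restart haproxy; with nopreempt set, it does not reclaim the VIP
systemctl start haproxy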

Note: if the cluster runs on a public cloud, use the provider's SLB service for this HA layer instead; haproxy + keepalived may not work there, because the cloud's underlying network typically blocks the VRRP traffic keepalived relies on.


Deploying the Worker Nodes

This part deploys the Kubernetes worker nodes on the following hosts:

k8s-node1:192.168.80.10

k8s-node2:192.168.80.11

k8s-node3:192.168.80.12

All kubeadm and kubectl commands below are executed on k8s-master1.

A Kubernetes worker node runs the following components:

  • docker
  • kubelet
  • kube-proxy
  • flannel

For docker and flannel deployment (including installing docker-ce), see:

Building a Multi-Master Kubernetes Cluster from Binaries, Part 2: Configuring the flannel Network


I. Install dependency packages

yum install -y epel-release wget conntrack ipvsadm ipset jq iptables curl sysstat libseccomp && /usr/sbin/modprobe ip_vs


II. Deploying the kubelet component

kubelet runs on every worker node: it receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run, and logs.

On startup, kubelet automatically registers node information with kube-apiserver; its built-in cAdvisor collects and monitors the node's resource usage.

For security, this document only opens kubelet's secure https port; requests are authenticated and authorized, and unauthorized access (e.g. by apiserver or heapster without proper credentials) is rejected.


1. Download and distribute the kubelet binaries

wget https://dl.k8s.io/v1.12.3/kubernetes-server-linux-amd64.tar.gz
tar -xzvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp kubelet kube-proxy /usr/local/bin
scp  kubelet kube-proxy k8s-node2:/usr/local/bin
scp  kubelet kube-proxy k8s-node3:/usr/local/bin


2. Create the kubelet bootstrap kubeconfig file (run on k8s-master1)

# Create a token
export BOOTSTRAP_TOKEN=$(kubeadm token create \
  --description kubelet-bootstrap-token \
  --groups system:bootstrappers:k8s-master1 \
  --kubeconfig ~/.kube/config)

# Set cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://114.67.81.105:8443 \
  --kubeconfig=kubelet-bootstrap-k8s-master1.kubeconfig

# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=kubelet-bootstrap-k8s-master1.kubeconfig

# Set context parameters
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=kubelet-bootstrap-k8s-master1.kubeconfig

# Set the default context
kubectl config use-context default --kubeconfig=kubelet-bootstrap-k8s-master1.kubeconfig

  • create the kubelet bootstrap kubeconfig three times, replacing k8s-master1 with k8s-master2 and k8s-master3 (or use the loop sketched below);
  • a token, rather than a certificate, is written into the kubeconfig; the certificate is created later by kube-controller-manager.
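The three files can also be generated in one loop, which simply repeats the commands above once per master (a sketch):

for master in k8s-master1 k8s-master2 k8s-master3; do
  BOOTSTRAP_TOKEN=$(kubeadm token create \
    --description kubelet-bootstrap-token \
    --groups system:bootstrappers:${master} \
    --kubeconfig ~/.kube/config)

  kubectl config set-cluster kubernetes \
    --certificate-authority=/etc/kubernetes/cert/ca.pem \
    --embed-certs=true \
    --server=https://114.67.81.105:8443 \
    --kubeconfig=kubelet-bootstrap-${master}.kubeconfig

  kubectl config set-credentials kubelet-bootstrap \
    --token=${BOOTSTRAP_TOKEN} \
    --kubeconfig=kubelet-bootstrap-${master}.kubeconfig

  kubectl config set-context default \
    --cluster=kubernetes \
    --user=kubelet-bootstrap \
    --kubeconfig=kubelet-bootstrap-${master}.kubeconfig

  kubectl config use-context default \
    --kubeconfig=kubelet-bootstrap-${master}.kubeconfig
done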


3. View the tokens kubeadm created for each node:

[root@k8s-master1 ~]# kubeadm token list --kubeconfig ~/.kube/config
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS
8w6j3n.ruh4ne95icbae4ie   23h       2018-12-21T20:42:29+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-master3
e7n0o5.1y8sjblh43z8ftz1   23h       2018-12-21T20:41:53+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-master2
ydbwyk.yz8e97df5d5u2o70   22h       2018-12-21T19:28:43+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-master1
  • the created tokens are valid for one day; once expired they can no longer be used and are cleaned up by kube-controller-manager's tokencleaner (if that controller is enabled);
  • when kube-apiserver receives a kubelet bootstrap token, it sets the request's user to system:bootstrap:<token ID> and its group to system:bootstrappers;

View the Secrets associated with each token (the bootstrap-token-* entries below correspond to the tokens created above):

[root@k8s-master1 ~]# kubectl get secrets  -n kube-system
NAME                                             TYPE                                  DATA   AGE
attachdetach-controller-token-z2w72              kubernetes.io/service-account-token   3      119m
bootstrap-signer-token-hz8dr                     kubernetes.io/service-account-token   3      119m
bootstrap-token-8w6j3n                           bootstrap.kubernetes.io/token         7      20m
bootstrap-token-e7n0o5                           bootstrap.kubernetes.io/token         7      20m
bootstrap-token-ydbwyk                           bootstrap.kubernetes.io/token         7      93m
certificate-controller-token-bjhbq               kubernetes.io/service-account-token   3      119m
clusterrole-aggregation-controller-token-qkqxg   kubernetes.io/service-account-token   3      119m
cronjob-controller-token-v7vz5                   kubernetes.io/service-account-token   3      119m
daemon-set-controller-token-7khdh                kubernetes.io/service-account-token   3      119m
default-token-nwqsr                              kubernetes.io/service-account-token   3      119m


4. Distribute the bootstrap kubeconfig files

[root@k8s-master1 ~]# scp kubelet-bootstrap-k8s-master1.kubeconfig k8s-node1:/etc/kubernetes/cert/kubelet-bootstrap.kubeconfig
[root@k8s-master1 ~]# scp kubelet-bootstrap-k8s-master2.kubeconfig k8s-node2:/etc/kubernetes/cert/kubelet-bootstrap.kubeconfig
[root@k8s-master1 ~]# scp kubelet-bootstrap-k8s-master3.kubeconfig k8s-node3:/etc/kubernetes/cert/kubelet-bootstrap.kubeconfig


5. Create and distribute the kubelet configuration file

Starting with v1.10, some kubelet parameters must be set in a configuration file; kubelet --help warns:

DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag

Create the kubelet configuration template (the address field, originally highlighted in red, must be changed to each node's own IP; see the loop after the scp commands below):

cat > kubelet.config.json <<EOF
{
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1",
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/cert/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "address": "192.168.80.10",
  "port": 10250,
  "readOnlyPort": 0,
  "cgroupDriver": "cgroupfs",
  "hairpinMode": "promiscuous-bridge",
  "serializeImagePulls": false,
  "featureGates": {
    "RotateKubeletClientCertificate": true,
    "RotateKubeletServerCertificate": true
  },
  "clusterDomain": "cluster.local.",
  "clusterDNS": ["10.254.0.2"]
}
EOF

  • address: the kubelet API listen address; must not be 127.0.0.1, otherwise kube-apiserver, heapster, etc. cannot call the kubelet API;
  • readOnlyPort=0: disables the read-only port (legacy default 10255); with the config file this is equivalent to leaving it unset;
  • authentication.anonymous.enabled: set to false to forbid anonymous access to port 10250;
  • authentication.x509.clientCAFile: the CA certificate that signs client certificates; enables x509 client-certificate authentication;
  • authentication.webhook.enabled=true: enables https bearer-token authentication;
  • requests that pass neither x509 nor webhook authentication (whether from kube-apiserver or any other client) are rejected with Unauthorized;
  • authorization.mode=Webhook: kubelet uses the SubjectAccessReview API to ask kube-apiserver whether a user/group may operate on a resource (RBAC);
  • featureGates.RotateKubeletClientCertificate and featureGates.RotateKubeletServerCertificate: rotate certificates automatically; their validity is governed by kube-controller-manager's --experimental-cluster-signing-duration flag;
  • kubelet must run as root.

Create and distribute the kubelet config file for each node:

scp kubelet.config.json k8s-node1:/etc/kubernetes/cert/kubelet.config.json
scp kubelet.config.json k8s-node2:/etc/kubernetes/cert/kubelet.config.json
scp kubelet.config.json k8s-node3:/etc/kubernetes/cert/kubelet.config.json
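Since the address field must differ per node, the three files can be generated with sed before copying; a sketch using the node IPs from this article:

for node in "k8s-node1 192.168.80.10" "k8s-node2 192.168.80.11" "k8s-node3 192.168.80.12"; do
  set -- ${node}   # $1 = host name, $2 = node IP
  sed "s/192.168.80.10/$2/" kubelet.config.json > kubelet.config.json.$1
  scp kubelet.config.json.$1 $1:/etc/kubernetes/cert/kubelet.config.json
done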


6. Create and distribute the kubelet systemd unit file

(change --hostname-override to the corresponding node IP)

[root@k8s-node1 ~]# cat /etc/systemd/system/kubelet.service 
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/local/bin/kubelet \
  --bootstrap-kubeconfig=/etc/kubernetes/cert/kubelet-bootstrap.kubeconfig \
  --cert-dir=/etc/kubernetes/cert \
  --kubeconfig=/etc/kubernetes/cert/kubelet.kubeconfig \
  --config=/etc/kubernetes/cert/kubelet.config.json \
  --hostname-override=192.168.80.10 \
  --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:3.1 \
  --allow-privileged=true \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/log/kubernetes \
  --v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

  • if --hostname-override is set, kube-proxy must be given the same value, otherwise the Node will not be found;
  • --bootstrap-kubeconfig: points to the bootstrap kubeconfig file; kubelet uses the user name and token in it to send a TLS bootstrapping request to kube-apiserver;
  • after Kubernetes approves the kubelet's CSR, the certificate and private key are written into the --cert-dir directory and the --kubeconfig file is then written;

Distribute the kubelet systemd unit file to each node:

scp /etc/systemd/system/kubelet.service k8s-node2:/etc/systemd/system/kubelet.service
scp /etc/systemd/system/kubelet.service k8s-node3:/etc/systemd/system/kubelet.service


7. Bootstrap token auth and granting permissions

At startup, kubelet checks whether the file given by --kubeconfig exists; if it does not, kubelet uses --bootstrap-kubeconfig to send a certificate signing request (CSR) to kube-apiserver.

When kube-apiserver receives the CSR, it authenticates the embedded token (the one created with kubeadm earlier); on success it sets the request's user to system:bootstrap:<token ID> and its group to system:bootstrappers. This process is called bootstrap token auth.

By default, this user and group have no permission to create CSRs, so kubelet fails to start with errors like:

sudo journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'
May 06 06:42:36 kube-node1 kubelet[26986]: F0506 06:42:36.314378   26986 server.go:233] failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:lemy40" cannot create certificatesigningrequests.certificates.k8s.io at the cluster scope
May 06 06:42:36 kube-node1 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/n/a
May 06 06:42:36 kube-node1 systemd[1]: kubelet.service: Failed with result 'exit-code'.

The fix is to create a clusterrolebinding that binds the group system:bootstrappers to the clusterrole system:node-bootstrapper:

[root@k8s-master1 ~]#  kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers


8. Start the kubelet service

mkdir -p /var/log/kubernetes && mkdir -p /var/lib/kubelet
systemctl daemon-reload 
systemctl enable kubelet 
systemctl restart kubelet
  • disable the swap partition, otherwise kubelet fails to start;
  • the working and log directories must be created first;

After starting, kubelet sends a CSR to kube-apiserver via --bootstrap-kubeconfig; once the CSR is approved, kube-controller-manager creates the kubelet's TLS client certificate and key and the file referenced by --kubeconfig.

Note: kube-controller-manager must be configured with the --cluster-signing-cert-file and --cluster-signing-key-file flags, or it will not create certificates and private keys for TLS bootstrapping.

  • at this point the CSRs of all three worker nodes are in the Pending state;

The kubelet process is running, but its listening ports are not yet up; the next step is required.


9. Approve the kubelet CSRs

CSRs can be approved manually or automatically. The automatic approach is recommended, because from v1.8 onward the certificates generated after CSR approval can be rotated automatically.


i. Manually approving CSRs

List the CSRs:

[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR                 CONDITION
node-csr-P7XcQAc2yNlXn1pUmQFxXNCdGyyt8ccVuW3bmoUZiK4   30m   system:bootstrap:e7n0o5   Pending
node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM   79m   system:bootstrap:ydbwyk   Pending
node-csr-u2sVzVkFYnMxPIYWjXHbqRJROtTZBYzA1s2vATPLzyo   30m   system:bootstrap:8w6j3n   Pending

Approve a CSR:

[root@k8s-master1 ~]# kubectl certificate approve node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM
certificatesigningrequest.certificates.k8s.io "node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM" approved

Check the approval result:

[root@k8s-master1 ~]# kubectl describe csr node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM
Name:               node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM
Labels:             <none>
Annotations:        <none>
CreationTimestamp:  Thu, 20 Dec 2018 19:55:39 +0800
Requesting User:    system:bootstrap:ydbwyk
Status:             Approved,Issued
Subject:
         Common Name:    system:node:192.168.80.10
         Serial Number:  
         Organization:   system:nodes
Events:  <none>


  • Requesting User: the user that submitted the CSR, which kube-apiserver authenticates and authorizes;
  • Subject: the certificate information being requested;
  • the certificate's CN is system:node:192.168.80.10 and its Organization is system:nodes; kube-apiserver's Node authorization mode grants the certificate its permissions based on these.


ii. Automatically approving CSRs

Create three ClusterRoleBindings, used to automatically approve client certificates, renew client certificates, and renew server certificates respectively:

[root@k8s-master1 ~]# cat > csr-crb.yaml <<EOF
 # Approve all CSRs for the group "system:bootstrappers"
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: auto-approve-csrs-for-group
 subjects:
 - kind: Group
   name: system:bootstrappers
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:nodeclient
   apiGroup: rbac.authorization.k8s.io
---
 # To let a node of the group "system:nodes" renew its own credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-client-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: system:certificates.k8s.io:certificatesigningrequests:selfnodeclient
   apiGroup: rbac.authorization.k8s.io
---
# A ClusterRole which instructs the CSR approver to approve a node requesting a
# serving cert matching its client cert.
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: approve-node-server-renewal-csr
rules:
- apiGroups: ["certificates.k8s.io"]
  resources: ["certificatesigningrequests/selfnodeserver"]
  verbs: ["create"]
---
 # To let a node of the group "system:nodes" renew its own server credentials
 kind: ClusterRoleBinding
 apiVersion: rbac.authorization.k8s.io/v1
 metadata:
   name: node-server-cert-renewal
 subjects:
 - kind: Group
   name: system:nodes
   apiGroup: rbac.authorization.k8s.io
 roleRef:
   kind: ClusterRole
   name: approve-node-server-renewal-csr
   apiGroup: rbac.authorization.k8s.io
EOF

  • auto-approve-csrs-for-group: automatically approves a node's first CSR; note that on the first CSR the requesting Group is system:bootstrappers;
  • node-client-cert-renewal: automatically approves renewal of a node's expiring client certificate; the generated certificate's Group is system:nodes;
  • node-server-cert-renewal: automatically approves renewal of a node's expiring server certificate; the generated certificate's Group is system:nodes;

Apply the configuration:

[root@k8s-master1 ~]# kubectl apply -f csr-crb.yaml


10. Check kubelet status

After a while (1-10 minutes), the CSRs of all three nodes are automatically approved:

[root@k8s-master1 ~]# kubectl get csr
NAME                                                   AGE   REQUESTOR                 CONDITION
node-csr-P7XcQAc2yNlXn1pUmQFxXNCdGyyt8ccVuW3bmoUZiK4   35m   system:bootstrap:e7n0o5   Approved,Issued
node-csr-gD18nmcyPUNWNyDQvCo2BMYiiA4K59BNkclFRWv1SAM   84m   system:bootstrap:ydbwyk   Approved,Issued
node-csr-u2sVzVkFYnMxPIYWjXHbqRJROtTZBYzA1s2vATPLzyo   35m   system:bootstrap:8w6j3n   Approved,Issued

All nodes are Ready:

[root@k8s-master1 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
192.168.80.10   Ready    <none>   69m   v1.12.3
192.168.80.11   Ready    <none>   36m   v1.12.3
192.168.80.12   Ready    <none>   36m   v1.12.3

kube-controller-manager has generated a kubeconfig file and key pair for each node:

[root@k8s-node1 ~]# ll /etc/kubernetes/cert/
total 40
-rw------- 1 root root 1675 Dec 20 19:10 ca-key.pem
-rw-r--r-- 1 root root 1367 Dec 20 19:10 ca.pem
-rw------- 1 root root 1679 Dec 20 19:10 flanneld-key.pem
-rw-r--r-- 1 root root 1399 Dec 20 19:10 flanneld.pem
-rw------- 1 root root 2170 Dec 20 20:43 kubelet-bootstrap.kubeconfig
-rw------- 1 root root 1277 Dec 20 20:43 kubelet-client-2018-12-20-20-43-59.pem
lrwxrwxrwx 1 root root   59 Dec 20 20:43 kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2018-12-20-20-43-59.pem
-rw-r--r-- 1 root root  800 Dec 20 20:18 kubelet.config.json
-rw-r--r-- 1 root root 2185 Dec 20 20:43 kubelet.crt
-rw------- 1 root root 1675 Dec 20 20:43 kubelet.key
-rw------- 1 root root 2310 Dec 20 20:43 kubelet.kubeconfig

  • the kubelet server certificate is rotated periodically;


11. APIs exposed by kubelet

After startup, kubelet listens on several ports to receive requests from kube-apiserver and other components:

[root@k8s-node1 ~]# netstat -lnpt|grep kubelet
tcp        0      0 127.0.0.1:41980         0.0.0.0:*               LISTEN      7891/kubelet        
tcp        0      0 127.0.0.1:10248         0.0.0.0:*               LISTEN      7891/kubelet        
tcp        0      0 192.168.80.10:10250     0.0.0.0:*               LISTEN      7891/kubelet
  • 4194: cAdvisor http service (not listening in the output above);
  • 10248: healthz http service;
  • 10250: https API service; note that the read-only port 10255 is not opened.

For example, when you run

kubectl exec -it nginx-ds-5rmws -- sh

kube-apiserver sends kubelet a request like:

POST /exec/default/nginx-ds-5rmws/my-nginx?command=sh&input=1&output=1&tty=1

kubelet serves the following https endpoints on port 10250:

  • /pods, /runningpods
  • /metrics, /metrics/cadvisor, /metrics/probes
  • /spec
  • /stats, /stats/container
  • /logs
  • management endpoints such as /run/, /exec/, /attach/, /portForward/, /containerLogs/

For details, see:

https://github.com/kubernetes/kubernetes/blob/master/pkg/kubelet/server/server.go#L434:3

Because anonymous authentication is disabled and webhook authorization is enabled, every request to the https API on port 10250 must be authenticated and authorized.

The predefined ClusterRole system:kubelet-api-admin grants access to all kubelet APIs:

[root@k8s-master1 ~]# kubectl describe clusterrole system:kubelet-api-admin
Name:         system:kubelet-api-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources      Non-Resource URLs  Resource Names  Verbs
  ---------      -----------------  --------------  -----
  nodes/log      []                 []              [*]
  nodes/metrics  []                 []              [*]
  nodes/proxy    []                 []              [*]
  nodes/spec     []                 []              [*]
  nodes/stats    []                 []              [*]
  nodes          []                 []              [get list watch proxy]


12. Kubelet API authentication and authorization

The kubelet config file kubelet.config.json sets the following authentication parameters:

  • authentication.anonymous.enabled: set to false, so anonymous access to port 10250 is not allowed;
  • authentication.x509.clientCAFile: the CA certificate that signs client certificates; enables https certificate authentication;
  • authentication.webhook.enabled=true: enables https bearer-token authentication;

and the following authorization parameter:

  • authorization.mode=Webhook: enables RBAC authorization;

When kubelet receives a request, it authenticates the client certificate against clientCAFile, or checks whether the bearer token is valid; if neither passes, the request is rejected with Unauthorized:

[root@k8s-node1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://192.168.80.10:10250/metrics
Unauthorized
[root@k8s-node1 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456"  https://192.168.80.10:10250/metrics
Unauthorized

After authentication succeeds, kubelet queries kube-apiserver via the SubjectAccessReview API to check whether the user/group behind the certificate or token has permission to operate on the requested resource (RBAC).

Certificate authentication and authorization:

# a certificate with insufficient permissions;
$ curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://192.168.80.10:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)

$ # the admin certificate created while deploying kubectl, which has full permissions;
$ curl -s --cacert /etc/kubernetes/cert/ca.pem --cert ./admin.pem --key ./admin-key.pem https://192.168.80.10:10250/metrics|head
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="21600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="43200"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="86400"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="172800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="345600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="604800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="2.592e+06"} 0


  • the values of --cacert, --cert, and --key must be file paths; the ./ in ./admin.pem above cannot be omitted, otherwise the request fails with 401 Unauthorized.

Bearer token authentication and authorization:

Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin so that it may call the kubelet API:

kubectl create sa kubelet-api-test
kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
echo ${TOKEN}

$ curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://192.168.80.10:10250/metrics|head
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="21600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="43200"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="86400"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="172800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="345600"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="604800"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="2.592e+06"} 0

Note:

  • kubelet.config.json sets authentication.anonymous.enabled to false, so the https service on 10250 cannot be accessed anonymously;
  • to access port 10250 from a browser, create and import the relevant certificates as described in A.浏览器访问kube-apiserver安全端口.md (accessing the kube-apiserver secure port from a browser).


III. Deploying the kube-proxy component

kube-proxy runs on all worker nodes: it watches the apiserver for changes to Services and Endpoints and creates routing rules to load-balance service traffic.

This section deploys kube-proxy in ipvs mode.


1. Create the kube-proxy certificate

[root@k8s-master1 cert]# cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

  • CN: sets the certificate's User to system:kube-proxy;
  • the predefined ClusterRoleBinding system:node-proxier binds User system:kube-proxy to ClusterRole system:node-proxier, which grants permission to call kube-apiserver's proxy-related APIs;
  • the certificate is only used by kube-proxy as a client certificate, so the hosts field is empty.

Generate the certificate and private key:

[root@k8s-master1 cert]# cfssl gencert -ca=/etc/kubernetes/cert/ca.pem \
  -ca-key=/etc/kubernetes/cert/ca-key.pem \
  -config=/etc/kubernetes/cert/ca-config.json \
  -profile=kubernetes  kube-proxy-csr.json | cfssljson -bare kube-proxy


2. Create and distribute the kubeconfig file

[root@k8s-master1 cert]# kubectl config set-cluster kubernetes \
  --certificate-authority=/etc/kubernetes/cert/ca.pem \
  --embed-certs=true \
  --server=https://114.67.81.105:8443 \
  --kubeconfig=kube-proxy.kubeconfig

[root@k8s-master1 cert]# kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig

[root@k8s-master1 cert]# kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig

[root@k8s-master1 cert]# kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig


  • --embed-certs=true: embeds the contents of ca.pem and kube-proxy.pem into the generated kube-proxy.kubeconfig file (without it, only the certificate file paths are written);

Distribute the kubeconfig file:

[root@k8s-master1 cert]# scp kube-proxy.kubeconfig k8s-node1:/etc/kubernetes/cert/
[root@k8s-master1 cert]# scp kube-proxy.kubeconfig k8s-node2:/etc/kubernetes/cert/
[root@k8s-master1 cert]# scp kube-proxy.kubeconfig k8s-node3:/etc/kubernetes/cert/


3. Create the kube-proxy configuration file

Starting with v1.10, some kube-proxy parameters can be set in a configuration file. You can generate such a file with the --write-config-to option (see the sketch below), or consult the kubeproxyconfig type definitions:

https://github.com/kubernetes/kubernetes/blob/master/pkg/proxy/apis/kubeproxyconfig/types.go
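If the binary is already on a node, having kube-proxy dump its defaults is a convenient starting point (the flag writes the default configuration to the given file and exits):

kube-proxy --write-config-to=/tmp/kube-proxy-defaults.yaml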

Create the kube-proxy config template:

[root@k8s-master1 cert]# cat >kube-proxy.config.yaml <<EOF
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 192.168.80.10
clientConnection:
  kubeconfig: /etc/kubernetes/cert/kube-proxy.kubeconfig
clusterCIDR: 172.30.0.0/16
healthzBindAddress: 192.168.80.10:10256
hostnameOverride: k8s-node1
kind: KubeProxyConfiguration
metricsBindAddress: 192.168.80.10:10249
mode: "ipvs"
EOF


  • bindAddress: the listen address;
  • clientConnection.kubeconfig: the kubeconfig file used to connect to the apiserver;
  • clusterCIDR: kube-proxy uses this to tell cluster-internal traffic from external traffic; SNAT is applied to requests for Service IPs only when --cluster-cidr or --masquerade-all is specified;
  • hostnameOverride: must match the kubelet's value, otherwise kube-proxy will not find the Node after starting and will not create any ipvs rules;
  • mode: use ipvs mode;
  • change the host-specific fields (bindAddress, healthzBindAddress, metricsBindAddress, hostnameOverride; originally highlighted in red) to each node's own values; clusterCIDR is the flannel network range.

Create and distribute the kube-proxy config file for each node:

[root@k8s-master1 cert]# scp kube-proxy.config.yaml k8s-node1:/etc/kubernetes/cert/
[root@k8s-master1 cert]# scp kube-proxy.config.yaml k8s-node2:/etc/kubernetes/cert/
[root@k8s-master1 cert]# scp kube-proxy.config.yaml k8s-node3:/etc/kubernetes/cert/


4. Create and distribute the kube-proxy systemd unit file

[root@k8s-node1 cert]# cat /etc/systemd/system/kube-proxy.service 
[Unit]
Description=Kubernetes Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target

[Service]
WorkingDirectory=/var/lib/kube-proxy
ExecStart=/usr/local/bin/kube-proxy \
  --config=/etc/kubernetes/cert/kube-proxy.config.yaml \
  --alsologtostderr=true \
  --logtostderr=false \
  --log-dir=/var/lib/kube-proxy/log \
  --v=2
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Distribute the kube-proxy systemd unit file:

[root@k8s-master1 cert]# scp /etc/systemd/system/kube-proxy.service k8s-node1:/etc/systemd/system/kube-proxy.service
[root@k8s-master1 cert]# scp /etc/systemd/system/kube-proxy.service k8s-node2:/etc/systemd/system/kube-proxy.service
[root@k8s-master1 cert]# scp /etc/systemd/system/kube-proxy.service k8s-node3:/etc/systemd/system/kube-proxy.service


5. Start the kube-proxy service

[root@k8s-node1 cert]# mkdir -p /var/lib/kube-proxy/log
[root@k8s-node1 cert]# systemctl daemon-reload
[root@k8s-node1 cert]# systemctl enable kube-proxy
[root@k8s-node1 cert]# systemctl restart kube-proxy
  • the working and log directories must be created first;


6. Check the result

[root@k8s-node1 cert]# systemctl status kube-proxy|grep Active

Make sure the status is active (running); otherwise inspect the logs for the cause:

journalctl -u kube-proxy

Check the listening ports:

[root@k8s-node1 cert]# netstat -lnpt|grep kube-proxy
tcp        0      0 192.168.80.10:10256     0.0.0.0:*               LISTEN      9617/kube-proxy     
tcp        0      0 192.168.80.10:10249     0.0.0.0:*               LISTEN      9617/kube-proxy
  • 10249:http prometheus metrics port;
  • 10256:http healthz port;


7. View the ipvs routing rules

[root@k8s-node1 cert]# yum install ipvsadm
[root@k8s-node1 cert]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 192.168.80.7:6443            Masq    1      0          0         
  -> 192.168.80.8:6443            Masq    1      0          0         
  -> 192.168.80.9:6443            Masq    1      0          0 

As shown, all requests to port 443 of the kubernetes cluster IP are forwarded to port 6443 of the kube-apiserver instances.
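A quick way to confirm the rule actually carries traffic: request the cluster IP from any node. Since anonymous authentication is disabled on the apiserver, an unauthenticated request should come back as 401, which still proves the ipvs path works:

curl -sk -o /dev/null -w '%{http_code}\n' https://10.254.0.1:443/    # expect 401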

Congratulations! The worker node deployment is complete.


IV. Verifying cluster functionality

1. Check node status

[root@k8s-master1 cert]# kubectl get nodes
NAME            STATUS   ROLES    AGE   VERSION
192.168.80.10   Ready    <none>   15h   v1.12.3
192.168.80.11   Ready    <none>   14h   v1.12.3
192.168.80.12   Ready    <none>   14h   v1.12.3

All nodes showing Ready means the cluster is healthy.

2. Create an nginx web test manifest

[root@k8s-master1 ~]# cat nginx-web.yml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-web
  labels:
    tier: frontend
spec:
  type: NodePort
  selector:
    tier: frontend
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-con
  labels:
    tier: frontend
spec:
  replicas: 3
  template:
    metadata:
      labels:
        tier: frontend
    spec:
      containers:
      - name: nginx-pod
        image: nginx
        ports:
        - containerPort: 80

Apply the nginx-web.yml manifest:

[root@k8s-master1 ~]# kubectl create -f nginx-web.yml

3. Check Pod IP connectivity from each node

[root@k8s-master1 ~]# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP            NODE            NOMINATED NODE
nginx-con-594b8d6b48-9p9sf   1/1     Running   0          37s   172.30.70.2   192.168.80.12   <none>
nginx-con-594b8d6b48-rxzwx   1/1     Running   0          37s   172.30.67.2   192.168.80.11   <none>
nginx-con-594b8d6b48-zd9g7   1/1     Running   0          37s   172.30.6.2    192.168.80.10   <none>

As shown, the nginx Pod IPs are 172.30.70.2, 172.30.67.2, and 172.30.6.2. Ping these three IPs from every node to check connectivity:

[root@k8s-node1 cert]# ping 172.30.6.2
PING 172.30.6.2 (172.30.6.2) 56(84) bytes of data.
64 bytes from 172.30.6.2: icmp_seq=1 ttl=64 time=0.058 ms
64 bytes from 172.30.6.2: icmp_seq=2 ttl=64 time=0.053 ms

[root@k8s-node1 cert]# ping 172.30.67.2
PING 172.30.67.2 (172.30.67.2) 56(84) bytes of data.
64 bytes from 172.30.67.2: icmp_seq=1 ttl=63 time=0.467 ms
64 bytes from 172.30.67.2: icmp_seq=1 ttl=63 time=0.425 ms


[root@k8s-node1 cert]# ping 172.30.70.2
PING 172.30.70.2 (172.30.70.2) 56(84) bytes of data.
64 bytes from 172.30.70.2: icmp_seq=1 ttl=63 time=0.562 ms
64 bytes from 172.30.70.2: icmp_seq=2 ttl=63 time=0.451 ms

4. Check the Service cluster IP

[root@k8s-master1 ~]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.254.0.1      <none>        443/TCP        17h
nginx-web    NodePort    10.254.88.134   <none>        80:30164/TCP   47m
  • 10.254.88.134 is the cluster IP of the nginx Service, fronting the three pod replicas above.
  • Port 80 is the cluster IP's port; 30164 is the NodePort on each node, so the service can also be reached as nodeIP:nodePort.

5. Verify service reachability

#1. Access the app from any other host on the LAN as nodeIP:nodePort (the node IPs are private here, so another LAN host is used)
[root@etcd1 ~]# curl -I 192.168.80.10:30164
HTTP/1.1 200 OK
Server: nginx/1.15.7
Date: Fri, 21 Dec 2018 04:32:58 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 27 Nov 2018 12:31:56 GMT
Connection: keep-alive
ETag: "5bfd393c-264"
Accept-Ranges: bytes

#2. Access the app via the cluster IP from a host on the flannel network

[root@k8s-node1 cert]# curl -I 10.254.88.134
HTTP/1.1 200 OK
Server: nginx/1.15.7
Date: Fri, 21 Dec 2018 04:35:26 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 27 Nov 2018 12:31:56 GMT
Connection: keep-alive
ETag: "5bfd393c-264"
Accept-Ranges: bytes

Both requests succeed with status code 200; the cluster is functioning correctly.



Copyright notice: this is an original article by Richardlygo, licensed under CC 4.0 BY-SA. Please include a link to the original source and this notice when reposting.