Current installation options
So far I have seen three main approaches: minikube, microk8s, and kubeadm.
Besides these, some people even install Kubernetes from the raw binaries. For installing minikube on different operating systems, see:
https://www.imooc.com/article/23785
Looking at what actually gets installed, a single-node k8s setup needs the following services:
- Container runtime: Docker by default
- etcd: key-value store that holds the cluster state
- kube-apiserver: the single entry point for operating on cluster resources; provides authentication, authorization, admission control, API registration and discovery
- kube-controller-manager: keeps the cluster in its desired state, e.g. failure detection, auto scaling, rolling updates
- kube-scheduler: handles scheduling, placing Pods onto nodes according to the configured scheduling policy
- kubelet: manages the container lifecycle on each node, along with volumes (CVI) and networking (CNI)
- kube-proxy: provides in-cluster service discovery and load balancing for Services
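Whichever tool you use, once the cluster is up you can check that these components are actually running; on a kubeadm-based install they show up as pods in the kube-system namespace (a quick sanity check, assuming kubectl is already configured):
# Control-plane components and add-ons run as pods in kube-system on a kubeadm install
kubectl get pods -n kube-system -o wide
# Health of the scheduler, controller-manager and etcd
kubectl get cs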
The three differ in a few ways:
- microk8s installs k8s directly onto the host rather than inside containers
- microk8s ships without a DNS service by default, but you will usually want one for service discovery; its default DNS add-on is kube-dns rather than the officially recommended CoreDNS
- kubeadm leaves the choice of network plugin to you, while microk8s and minikube install one for you
- minikube additionally installs storage-provisioner by default, used to mount disks in its VM
In terms of ease of use, microk8s is the simplest to install and has the lowest barrier to entry; minikube suits users who are already familiar with it.
Whichever way you install k8s, pay attention to security. In the k8s design the Master node is never exposed to the public internet and user workloads run on Worker nodes, but on a single node the ports k8s listens on are wide open and containers can end up with excessive privileges. The minikube documentation mentions these issues as well; you should restrict which IPs can reach those ports, for example with iptables rules. If the project is security-sensitive, it is better to abandon the single-node k8s approach altogether.
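As a minimal sketch of that kind of hardening, assuming a single trusted client at the hypothetical address 192.168.1.100, you could limit access to the API server port (6443 by default) like this:
# Allow the kube-apiserver port only from one trusted IP (192.168.1.100 is a placeholder), drop the rest
iptables -A INPUT -p tcp -s 192.168.1.100 --dport 6443 -j ACCEPT
iptables -A INPUT -p tcp --dport 6443 -j DROP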
The discussion above draws on: http://www.ituring.com.cn/article/507461
This article uses only kubeadm as the example, since it is the most broadly applicable installation method.
Single-node installation with kubeadm
Hardware requirements: CentOS 7, 2 CPUs, 2 GB of RAM.
To install Docker, see my earlier write-up (Docker part 1: introduction and installation: https://blog.csdn.net/zjcjava/article/details/78493724).
Enable it as a service and start it:
systemctl enable docker && systemctl start docker
Disable the firewall service that ships with CentOS:
systemctl disable firewalld
systemctl stop firewalld
1. Initialize the system and install the programs Kubernetes needs (on all master and node machines).
Add the Kubernetes yum repository; inside China you can use the Aliyun mirror:
vi /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
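Equivalently, the same repo file can be written in one shot with a heredoc (identical content, just less editing):
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF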
Disable SELinux:
setenforce 0
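Note that setenforce 0 only lasts until the next reboot; to keep SELinux permissive permanently, also update /etc/selinux/config (a common companion step):
# Persist permissive mode across reboots (setenforce 0 above only affects the running system)
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config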
Install the k8s components
Run the following to install kubelet, kubeadm and kubectl:
# Pin the versions; otherwise the latest packages in the repo get installed and mismatched dependencies can make the install fail
yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1 kubernetes-cni-0.6.0
# Enable kubelet at boot and start it
systemctl enable kubelet && systemctl start kubelet
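You can confirm the pinned versions were installed before moving on (kubelet will restart in a loop until kubeadm init runs, which is expected at this point):
kubeadm version -o short                       # should print v1.13.1
rpm -q kubelet kubeadm kubectl kubernetes-cni  # check the installed package versions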
Download the images
# Check which image versions this kubeadm release depends on
kubeadm config images list
k8s.gcr.io/kube-apiserver:v1.13.1
k8s.gcr.io/kube-controller-manager:v1.13.1
k8s.gcr.io/kube-scheduler:v1.13.1
k8s.gcr.io/kube-proxy:v1.13.1
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.6
Create a shell script with the following content. It pulls the k8s images from an Aliyun mirror and then retags them with the k8s.gcr.io/ prefix so that kubeadm can find them:
vi my.sh
#!/bin/bash
images=(
  kube-apiserver:v1.13.1
  kube-controller-manager:v1.13.1
  kube-scheduler:v1.13.1
  kube-proxy:v1.13.1
  pause:3.1
  etcd:3.2.24
  coredns:1.2.6
)
for imageName in ${images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
  docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName} k8s.gcr.io/${imageName}
  docker rmi registry.cn-hangzhou.aliyuncs.com/google_containers/${imageName}
done
Make the script executable and run it:
chmod 777 my.sh
./my.sh
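If you prefer not to retag images by hand, kubeadm v1.13 also gained an --image-repository flag; with it, the later kubeadm init step can pull everything from a mirror on its own. A sketch under that assumption (confirm your build has the flag with kubeadm init --help):
# Hypothetical alternative to the script above: let kubeadm pull from the Aliyun mirror directly at init time
kubeadm init --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.13.1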
List the local images; the ones whose names start with k8s.gcr.io are what the script just retagged (the listing below is only an example, so on your machine the tags should instead match the v1.13.1 images pulled above):
docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/centos latest 9f38484d220f 5 months ago 202 MB
docker.io/hello-world latest fce289e99eb9 7 months ago 1.84 kB
k8s.gcr.io/kube-controller-manager-amd64 v1.11.0 55b70b420785 13 months ago 155 MB
k8s.gcr.io/kube-scheduler-amd64 v1.11.0 0e4a34a3b0e6 13 months ago 56.8 MB
k8s.gcr.io/kube-proxy-amd64 v1.11.0 1d3d7afd77d1 13 months ago 97.8 MB
k8s.gcr.io/kube-apiserver-amd64 v1.11.0 214c48e87f58 13 months ago 187 MB
k8s.gcr.io/coredns 1.1.3 b3b94275d97c 14 months ago 45.6 MB
k8s.gcr.io/etcd-amd64 3.2.18 b8df3b177be2 16 months ago 219 MB
k8s.gcr.io/kubernetes-dashboard-amd64 v1.8.3 0c60bcf89900 18 months ago 102 MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64 1.14.8 c2ce1ffb51ed 19 months ago 41 MB
k8s.gcr.io/k8s-dns-sidecar-amd64 1.14.8 6f7f2dc7fab5 19 months ago 42.2 MB
k8s.gcr.io/k8s-dns-kube-dns-amd64 1.14.8 80cc5ea4b547 19 months ago 50.5 MB
k8s.gcr.io/pause-amd64 3.1 da86e6ba6ca1 19 months ago 742 kB
k8s.gcr.io/pause 3.1 da86e6ba6ca1 19 months ago 742 kB
System settings
Initializing Kubernetes requires swap to be disabled, otherwise it fails with: [ERROR Swap]: running with swap on is not supported. Please disable swap. The bridge netfilter sysctls below are also required so that traffic crossing the Linux bridge is visible to iptables.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
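If sysctl complains that net.bridge.bridge-nf-call-iptables does not exist (see also the troubleshooting section at the end), the br_netfilter kernel module is not loaded yet; load it and re-apply:
modprobe br_netfilter                                      # make the bridge netfilter sysctls available
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf  # load the module again after reboot
sysctl --system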
Disable swap:
swapoff -a
free -m |grep Swap
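swapoff -a only turns swap off for the current boot; to keep it off after a reboot, comment out the swap entry in /etc/fstab as well:
# Comment out every fstab line that mentions swap so it stays disabled after reboot
sed -ri 's/.*swap.*/#&/' /etc/fstab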
Initialize the master; output like the following means initialization succeeded:
kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.13.1
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 192.168.1.101:6443 --token z06izj.jmbtqsw1rvqyhtkx --discovery-token-ca-cert-hash sha256:b07c72b5874b7ce015c784c09c67c5f12b3b89932a98dd98393ccdf24fbc4a8f
To configure kubectl access to the cluster, also run the commands from the output above:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This article covers single-node Kubernetes only; multi-node clusters are not discussed.
For the specific errors that can appear at this stage, see: https://blog.csdn.net/u013355826/article/details/82787241
By default kubeadm taints the master node so that ordinary Pods are not scheduled on it; on a single-node setup you must remove that taint so workloads can run on the master:
kubectl taint nodes --all node-role.kubernetes.io/master-
At this point the single-node Kubernetes setup is complete.
Verify the master node
[root@worker3 shell]# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health": "true"}
[root@worker3 shell]# kubectl get node
NAME STATUS ROLES AGE VERSION
worker3 Ready master 1d v1.10.0
Create an Nginx service with three replicas
kubectl run nginx --image=nginx --replicas=3
kubectl expose deployment nginx --port=88 --target-port=80 --type=NodePort
Check the service:
[root@worker worker3]# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 1d
nginx NodePort 10.99.44.183 <none> 88:32584/TCP 20h
Open NodeIP:NodePort in a browser and the Nginx welcome page should appear; that completes the Kubernetes verification.
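For example, using the node address from the kubeadm join line above and the NodePort printed by kubectl (32584 here; yours will differ):
# Replace the address and port with your node IP and the NodePort from "kubectl get service"
curl http://192.168.1.101:32584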
Problems hit during installation, together with their fixes, are written up here:
https://blog.csdn.net/u013355826/article/details/82790183
https://blog.csdn.net/u013355826/article/details/82786649
Deploy the dashboard
On the master node:
wget https://raw.githubusercontent.com/kubernetes/dashboard/v1.10.1/src/deploy/recommended/kubernetes-dashboard.yaml
Only two changes are needed: point the image at the Aliyun mirror registry.cn-beijing.aliyuncs.com/minminmsn/kubernetes-dashboard:v1.10.1 (the default image cannot be downloaded without a way around the firewall), and switch the Service to NodePort mode.
vim kubernetes-dashboard.yaml
In the Deployment, change the image:
    spec:
      containers:
      - name: kubernetes-dashboard
        image: registry.cn-beijing.aliyuncs.com/minminmsn/kubernetes-dashboard:v1.10.1
In the Service, add type: NodePort and a nodePort (NodePorts must fall in the default 30000-32767 range; this walkthrough uses 31000):
spec:
  type: NodePort        # added: expose the Service as a NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31000   # added: port exposed on the host, must be 30000 or above
  selector:
    k8s-app: kubernetes-dashboard
There are several ways to access the dashboard:
- NodePort: change the Service type to NodePort
- LoadBalancer: change the Service type to LoadBalancer
- Ingress
- Through the API server
- kubectl proxy
Official reference:
https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/#accessing-the-dashboard-ui
With the file modified, create the dashboard resources:
[root@node03 bin]# kubectl create -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
Check that the pod is running:
[root@node03 bin]# kubectl get pods --all-namespaces -o wide | grep dashboard
kube-system kubernetes-dashboard-77fd78f978-bkm9r 1/1 Running 0 37m 10.244.1.4 node04 <none>
Common problems:
If the pod is stuck in Terminating or Pending, delete it and let the Deployment recreate it:
kubectl delete pod kubernetes-dashboard-57df4db6b-lcj24 -n kube-system
If you see errors like these:
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": secrets "kubernetes-dashboard-certs" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": serviceaccounts "kubernetes-dashboard" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": roles.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
Error from server (AlreadyExists): error when creating "kubernetes-dashboard.yaml": rolebindings.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" already exists
the fix is to remove what was created earlier and then re-run the create:
kubectl delete -f kubernetes-dashboard.yaml
Moving on, check the Service; its TYPE is now NodePort and the port is 31000:
kubectl get service -n kube-system | grep dashboard
kubernetes-dashboard NodePort 10.98.190.246 <none> 443:31000/TCP 99s
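The token queried below belongs to a ServiceAccount named kubernetes-dashboard-admin, which is not created by the recommended yaml; if it does not exist yet, a minimal sketch that creates it and binds cluster-admin (the account name is just this walkthrough's convention):
kubectl create serviceaccount kubernetes-dashboard-admin -n kube-system
kubectl create clusterrolebinding kubernetes-dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:kubernetes-dashboard-admin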
Get the login token
[root@node03 bin]# kubectl get secrets -n kube-system | grep kubernetes-dashboard-admin
kubernetes-dashboard-admin-token-77hlj kubernetes.io/service-account-token 3 2m56s
[root@node03 bin]# kubectl describe secrets/kubernetes-dashboard-admin-token-77hlj -n kube-system
Name: kubernetes-dashboard-admin-token-77hlj
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name: kubernetes-dashboard-admin
kubernetes.io/service-account.uid: f10a8196-f46a-11e8-8466-1e00f5000055
Type: kubernetes.io/service-account-token
Data
====
namespace: 11 bytes
token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbi10b2tlbi03N2hsaiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50Lm5hbWUiOiJrdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImYxMGE4MTk2LWY0NmEtMTFlOC04NDY2LTFlMDBmNTAwMDA1NSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlLXN5c3RlbTprdWJlcm5ldGVzLWRhc2hib2FyZC1hZG1pbiJ9.sks04AJ1D0qx9YwhZevawx9iXp66bBZoAm2bqp6ai-qG3piz4NWLYisWZ40QgrMQ6EGf4j5ax7HKKxQKjxa0eqZ5pgLgoyYOC5jrl6y1jPUSj4ThsG5SnKhH1pr3Jzh7iUY-Ak_ckt1rzVn0AWs7PTL9fRqMJLrN4eFbcZgf5640fcwa4Jlu4pnDatIC7tRm9an_87wGVrn41HrxW4E5ZgKP9evEAhJybaZMTRVebh6jjBDf1XsPe6OMK0KwruNYVcuGY7eluJ0xKN-gwNXKP5MdV6Bii6ikAL_ZJ3PnlEjPl-5KHChMxj129RjlgVKu-IwC633hOnCrBlGfRLJVNQ
Export a browser certificate (needed when reaching the dashboard through the API server; import the resulting client.pfx into your browser):
cat /etc/kubernetes/admin.conf | grep client-certificate-data | awk -F ': ' '{print $2}' | base64 -d > /etc/kubernetes/pki/client.crt
cat /etc/kubernetes/admin.conf | grep client-key-data | awk -F ': ' '{print $2}' | base64 -d > /etc/kubernetes/pki/client.key
openssl pkcs12 -export -inkey /etc/kubernetes/pki/client.key -in /etc/kubernetes/pki/client.crt -out /etc/kubernetes/pki/client.pfx
Start kubectl proxy to open up the dashboard port so we can reach the UI (this is one of the officially recommended methods; NodePort works as well):
kubectl proxy --address=192.168.232.130 --accept-hosts='^*$' &
Once the proxy is running, open http://192.168.232.130:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/#!/overview?namespace=default to access the dashboard.
References
https://blog.51cto.com/11889458/2324313
https://blog.csdn.net/shenhonglei1234/article/details/80312709
https://blog.51cto.com/icenycmh/2122309
Dashboard installation: https://blog.csdn.net/fanren224/article/details/86610466
kube series: https://blog.csdn.net/yuhaibao324/article/category/9091187
Installing kubernetes 1.10.0 with kubeadm: http://www.voidcn.com/article/p-zdtfobbv-brq.html
Installing a single-node kubernetes cluster with kubeadm (CentOS 7 in a VMware VM): https://blog.csdn.net/c5113620/article/details/82763681
Single-node k8s deployment and kubernetes-dashboard installation on CentOS 7: https://blog.csdn.net/moyu_2012/article/details/77683599
k8s cluster installation: https://blog.csdn.net/wzr54321/article/details/81907836
https://blog.51cto.com/icenycmh/2121370
Installing version 1.9 with kubeadm: https://blog.51cto.com/foxhound/2057395
kubernetes installation: https://www.jianshu.com/p/c3dac5234c53
Installation troubleshooting
Errors hit during installation:
/proc/sys/net/bridge/bridge-nf-call-iptables contents are not set to 1
Run the sysctl settings from the "System settings" section above to set bridge-nf-call-iptables=1, then reload the configuration with sysctl --system.
[preflight] Some fatal errors occurred:
[ERROR DirAvailable--var-lib-etcd]: /var/lib/etcd is not empty
Delete the /var/lib/etcd directory and run the init again.
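Deleting /var/lib/etcd by hand works; a cleaner option before retrying is usually kubeadm reset, which tears down everything a previous init left behind (it asks for confirmation and wipes any existing cluster state on the node):
kubeadm reset   # roll back the previous "kubeadm init" attempt
kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.13.1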