Life is innocent, long live health. I'm laity.
Seven times have I despised my soul:
The first time, when it could have risen but chose false humility;
The second time, when it was empty and filled itself with lust;
The third time, when offered the choice between hard and easy, it chose the easy;
The fourth time, when it did wrong and consoled itself that others do wrong too;
The fifth time, when it forbore out of weakness and called its forbearance the resilience of life;
The sixth time, when it sneered at an ugly face, not knowing that face was one of its own masks;
The seventh time, when it lay in the mire of life, unwilling, yet too timid to climb out.
Single-Node Docker Install & Uninstall
Install
- Core concepts: repositories, images, containers
Docker requires the kernel of your CentOS 7 system to be version 3.10 or higher; check this prerequisite to verify that your CentOS version supports Docker.
- Check your current kernel version with uname -r
uname -r
- Log in to CentOS as root and make sure the yum packages are up to date.
yum -y update # takes about 10 minutes
- Remove old versions (if any were previously installed)
yum remove docker docker-common docker-selinux docker-engine
- Install the required packages: yum-utils provides the yum-config-manager utility, and the other two are dependencies of the devicemapper storage driver
yum install -y yum-utils device-mapper-persistent-data lvm2
- Configure the yum repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
- Optionally, list every Docker version available in the repositories and pick a specific one to install
yum list docker-ce --showduplicates | sort -r
- Install Docker
sudo yum install -y docker-ce # only the stable repo is enabled by default, so this installs the latest stable release (e.g. 18.03.1 / 20.10)
# install a specific Docker version
yum install docker-ce-18.06.1.ce-3.el7 docker-ce-cli-18.06.1.ce-3.el7 containerd.io docker-compose-plugin
# version-pinning format from the official Docker docs
yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io docker-compose-plugin
- Start Docker and enable it at boot
systemctl start docker
systemctl enable docker
- Verify the installation (if the output shows both a Client and a Server section, Docker installed and started successfully)
docker version
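As an extra smoke test (not in the original steps), you can run the hello-world image; assuming the daemon is up and the registry is reachable, it pulls a tiny image and prints a greeting:
docker run hello-world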
- Docker registry mirror (pull acceleration)
See Aliyun's mirror-accelerator documentation: https://cr.console.aliyun.com/cn-hangzhou/instances/mirrors
You can enable the accelerator by editing the daemon config file /etc/docker/daemon.json:
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
"registry-mirrors": ["https://2qv4mkig.mirror.aliyuncs.com"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
- Check whether the mirror configuration took effect
docker info
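docker info prints the full daemon configuration; to zero in on the mirror entry (a minimal sketch, assuming the English-language output of docker info):
docker info | grep -A 1 "Registry Mirrors" # the aliyuncs mirror URL should appear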
Uninstall
systemctl stop docker
yum remove docker-ce docker-ce-cli containerd.io
rm -rf /var/lib/docker
rm -rf /var/lib/containerd
Kubernetes Cluster Install & Uninstall
Kubernetes, k8s for short, is an open-source system for automating the deployment, scaling, and management of containerized applications.
Deployment steps
- Install Docker and kubeadm on every node
- Deploy the Kubernetes Master
- Deploy a container network plugin
- Deploy the Kubernetes Nodes and join them to the cluster
- Deploy the Dashboard web UI for a visual view of Kubernetes resources
Environment preparation
Linux setup (run on all three nodes)
- Disable the firewall:
systemctl stop firewalld
systemctl disable firewalld
- Disable selinux (silences the warnings):
sed -i 's/enforcing/disabled/' /etc/selinux/config
setenforce 0
- Disable swap:
swapoff -a # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab # permanent
reboot # restart the server
free -g # verify: Swap must be 0
# to remove the swap mount permanently:
vim /etc/fstab # comment out the swap line, then reboot
- Add hostname-to-IP mappings:
vi /etc/hosts
192.168.8.101 node1
192.168.8.102 node2
192.168.8.103 node3
hostnamectl set-hostname <newhostname> # set a new hostname
- Pass bridged IPv4 traffic to iptables chains:
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system # apply the settings
# Troubleshooting: if you hit a "read-only file system" error, remount root read-write:
mount -o remount rw /
- date # check the system time (optional)
yum install -y ntpdate
ntpdate time.windows.com # sync the time
ntpdate time1.aliyun.com # or sync from Aliyun instead
# you can also schedule it as a cron job:
crontab -e
0 */1 * * * /usr/sbin/ntpdate time1.aliyun.com
- Upgrade the kernel:
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
yum -y install https://www.elrepo.org/elrepo-release-7.0-4.el7.elrepo.noarch.rpm
yum --enablerepo="elrepo-kernel" -y install kernel-ml.x86_64
# make GRUB2 boot the new kernel (menu entry 0) by default
grub2-set-default 0
grub2-mkconfig -o /boot/grub2/grub.cfg
reboot
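Once all the prep steps are done, a quick sanity check (a convenience sketch, not in the original; run it after the reboot):
free -g | grep -i swap # total should be 0
getenforce # expect Disabled (or Permissive before the reboot)
sysctl net.bridge.bridge-nf-call-iptables # expect = 1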
Install Docker, kubeadm, kubelet, and kubectl on all nodes
Docker installation
Skipped here (install Docker ahead of time when you create each node VM).
Install kubeadm, kubelet, kubectl
- Prepare the yum repo (it carries both old and new versions; without it yum may report that no package is available):
vim /etc/yum.repos.d/k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
- Install:
yum list | grep kube
yum install -y kubelet-1.17.3 kubeadm-1.17.3 kubectl-1.17.3
- Enable at boot and start:
systemctl enable kubelet
systemctl start kubelet
# make the kubelet cgroup driver match Docker's
# vim /etc/sysconfig/kubelet
KUBELET_EXTRA_ARGS="--cgroup-driver=systemd"
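To confirm the pinned versions actually landed (a quick check, not in the original steps):
kubeadm version -o short # expect v1.17.3
kubelet --version # expect Kubernetes v1.17.3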
Deploy the k8s Master
Master node initialization
kubeadm init has to download a lot of images, so running it directly can take a long time and may fail partway through; pre-pull the images with the script below.
master_images.sh
#!/bin/bash
images=(
kube-apiserver:v1.17.3
kube-proxy:v1.17.3
kube-controller-manager:v1.17.3
kube-scheduler:v1.17.3
coredns:1.6.5
etcd:3.4.3-0
pause:3.1
)
for imageName in "${images[@]}" ; do
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
# docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
done
Make the script executable
chmod +x master_images.sh
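Then run it on the master (assuming Docker is already running there):
./master_images.sh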
Cluster initialization
kubeadm init --apiserver-advertise-address=192.168.8.101 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version=v1.17.3 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock
--apiserver-advertise-address=192.168.8.101: makes node1 the master
--image-repository registry.aliyuncs.com/google_containers: image registry address
--kubernetes-version=v1.17.3: pins the Kubernetes version
--service-cidr=10.96.0.0/12: address range for Services (each fronting a group of Pods)
--pod-network-cidr=10.244.0.0/16: address range for Pods
--ignore-preflight-errors=all: ignore preflight errors
The default registry k8s.gcr.io is unreachable from mainland China, so we point at the Aliyun mirror here. Alternatively, pre-pull the images with our master_images.sh script first; the registry is registry.aliyuncs.com/google_containers either way.
FYI: Classless Inter-Domain Routing (CIDR) is a method of grouping IP addresses that is used both to allocate addresses to users and to route IP packets efficiently across the Internet.
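As a quick illustration of the two CIDR ranges above (a sketch; ipcalc ships with CentOS, though flags differ between distros):
ipcalc -bnm 10.96.0.0/12 # netmask 255.240.0.0 -> Services live in 10.96.0.0-10.111.255.255
ipcalc -bnm 10.244.0.0/16 # netmask 255.255.0.0 -> Pods live in 10.244.0.0-10.244.255.255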
When init completes, immediately copy the join token it prints for adding nodes to the cluster.
Test kubectl (run on the master node)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes # list all nodes
The master currently shows NotReady; it turns Ready once the network add-on is in place.
journalctl -u kubelet # view kubelet logs (note: the join token below is only valid for two hours)
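To tail just the recent kubelet logs while debugging (a convenience, not in the original):
journalctl -u kubelet -f --since "10 minutes ago"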
# join the cluster (printed by kubeadm init)
kubeadm join 192.168.8.101:6443 --token 6w08lf.my1tq6wu2ueq6eag --discovery-token-ca-cert-hash sha256:8c5f00183176ec8c449e23eb2ba3d05a81dfdcb8de739542dc728cbf58bad1ad
Install a Pod network add-on (CNI)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
The URL above may be blocked in mainland China; if so, grab the flannel.yml I've already downloaded, upload it, and apply that instead. If the images referenced inside flannel.yml are also unreachable, find a mirrored image address on Docker Hub and edit the yml so that every amd64 image address points at it.
kube-flannel.yml
What this yml does: installs the Pod network add-on (CNI).
---
kind: Namespace
apiVersion: v1
metadata:
name: kube-flannel
labels:
pod-security.kubernetes.io/enforce: privileged
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
rules:
- apiGroups:
- ""
resources:
- pods
verbs:
- get
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- nodes/status
verbs:
- patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: flannel
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: flannel
subjects:
- kind: ServiceAccount
name: flannel
namespace: kube-flannel
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: flannel
namespace: kube-flannel
---
kind: ConfigMap
apiVersion: v1
metadata:
name: kube-flannel-cfg
namespace: kube-flannel
labels:
tier: node
app: flannel
data:
cni-conf.json: |
{
"name": "cbr0",
"cniVersion": "0.3.1",
"plugins": [
{
"type": "flannel",
"delegate": {
"hairpinMode": true,
"isDefaultGateway": true
}
},
{
"type": "portmap",
"capabilities": {
"portMappings": true
}
}
]
}
net-conf.json: |
{
"Network": "10.244.0.0/16",
"Backend": {
"Type": "vxlan"
}
}
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: kube-flannel-ds
namespace: kube-flannel
labels:
tier: node
app: flannel
spec:
selector:
matchLabels:
app: flannel
template:
metadata:
labels:
tier: node
app: flannel
spec:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/os
operator: In
values:
- linux
hostNetwork: true
priorityClassName: system-node-critical
tolerations:
- operator: Exists
effect: NoSchedule
serviceAccountName: flannel
initContainers:
- name: install-cni-plugin
#image: flannelcni/flannel-cni-plugin:v1.1.0 for ppc64le and mips64le (dockerhub limitations may apply)
image: docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0
command:
- cp
args:
- -f
- /flannel
- /opt/cni/bin/flannel
volumeMounts:
- name: cni-plugin
mountPath: /opt/cni/bin
- name: install-cni
#image: flannelcni/flannel:v0.20.2 for ppc64le and mips64le (dockerhub limitations may apply)
image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
command:
- cp
args:
- -f
- /etc/kube-flannel/cni-conf.json
- /etc/cni/net.d/10-flannel.conflist
volumeMounts:
- name: cni
mountPath: /etc/cni/net.d
- name: flannel-cfg
mountPath: /etc/kube-flannel/
containers:
- name: kube-flannel
#image: flannelcni/flannel:v0.20.2 for ppc64le and mips64le (dockerhub limitations may apply)
image: docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2
command:
- /opt/bin/flanneld
args:
- --ip-masq
- --kube-subnet-mgr
resources:
requests:
cpu: "100m"
memory: "50Mi"
limits:
cpu: "100m"
memory: "50Mi"
securityContext:
privileged: false
capabilities:
add: ["NET_ADMIN", "NET_RAW"]
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: EVENT_QUEUE_DEPTH
value: "5000"
volumeMounts:
- name: run
mountPath: /run/flannel
- name: flannel-cfg
mountPath: /etc/kube-flannel/
- name: xtables-lock
mountPath: /run/xtables.lock
volumes:
- name: run
hostPath:
path: /run/flannel
- name: cni-plugin
hostPath:
path: /opt/cni/bin
- name: cni
hostPath:
path: /etc/cni/net.d
- name: flannel-cfg
configMap:
name: kube-flannel-cfg
- name: xtables-lock
hostPath:
path: /run/xtables.lock
type: FileOrCreate
After uploading it to the server, run:
kubectl apply -f kube-flannel.yml
Wait about 3 minutes.
kubectl get pods -n kube-system # list the pods in a given namespace
kubectl get pods --all-namespaces # list the pods in every namespace
kubectl get ns
ip link set cni0 down # if the network misbehaves, take cni0 down, reboot the VM, and test again
Run watch kubectl get pod -n kube-system -o wide to monitor pod progress.
Wait 3-10 minutes and continue once everything is Running.
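If you'd rather block until the master is Ready than eyeball the watch output (a sketch; kubectl wait is available in v1.17, and node1 is the hostname set earlier):
kubectl wait --for=condition=Ready node/node1 --timeout=600s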
Join the Kubernetes Nodes
Join the worker nodes (run on each Node)
To add a new node to the cluster, run the kubeadm join command that kubeadm init printed, and confirm it succeeds on each node.
Since the token is only valid for 2 hours, what do you do once it expires? Generate a new join command:
# short-lived token
kubeadm token create --print-join-command
# non-expiring token
kubeadm token create --ttl 0 --print-join-command
kubeadm join 192.168.8.101:6443 --token ymwezq.cs9j4tq4urj3yuc8 --discovery-token-ca-cert-hash sha256:0bb178298137223ac879adb8bfadd0732bf732270184c8d1dc36d5864f45ce2b
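You can list the existing tokens and their TTLs at any time (run on the master):
kubeadm token list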
Run watch kubectl get pod -n kube-system -o wide to monitor pod progress; wait 3-10 minutes until everything is Running, then check node status with kubectl get nodes.
kubectl get nodes
Multiple nodes now show up.
That's it: the Kubernetes cluster is up!
Uninstall
Uninstall Docker
yum remove docker \
docker-client \
docker-client-latest \
docker-common \
docker-latest \
docker-latest-logrotate \
docker-logrotate \
docker-selinux \
docker-engine-selinux \
docker-engine
or
systemctl stop docker
yum remove docker-ce docker-ce-cli containerd.io
rm -rf /var/lib/docker
rm -rf /var/lib/containerd
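To confirm nothing was left behind (a quick check; expect no output):
rpm -qa | grep -i docker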
Install a specific Docker version
# install a specific Docker version
yum install docker-ce-18.06.1.ce-3.el7 docker-ce-cli-18.06.1.ce-3.el7 containerd.io docker-compose-plugin
# version-pinning format from the official Docker docs
yum install docker-ce-<VERSION_STRING> docker-ce-cli-<VERSION_STRING> containerd.io docker-compose-plugin
Uninstall podman
yum erase podman buildah
Uninstall k8s
yum remove -y kubelet kubeadm kubectl
kubeadm reset -f
modprobe -r ipip
lsmod
rm -rf ~/.kube/
rm -rf /etc/kubernetes/
rm -rf /etc/systemd/system/kubelet.service.d
rm -rf /etc/systemd/system/kubelet.service
rm -rf /usr/bin/kube*
rm -rf /etc/cni
rm -rf /opt/cni
rm -rf /var/lib/etcd
rm -rf /var/etcd
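And a quick check that the k8s cleanup took (expect no output):
rpm -qa | grep -E 'kubelet|kubeadm|kubectl'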
Restart Docker & k8s
sudo systemctl daemon-reload
sudo systemctl restart docker
Restart the kubelet service
systemctl daemon-reload
systemctl restart kubelet
Deploy Ingress
Ingress is implemented on top of nginx and supports both layer-4 (TCP/UDP) and layer-7 (HTTP) load balancing.
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
The command above may be blocked, so I've downloaded the manifest for you; it's named ingress-controller.yaml:
apiVersion: v1
kind: Namespace
metadata:
name: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: nginx-configuration
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: tcp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
kind: ConfigMap
apiVersion: v1
metadata:
name: udp-services
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: nginx-ingress-clusterrole
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- endpoints
- nodes
- pods
- secrets
verbs:
- list
- watch
- apiGroups:
- ""
resources:
- nodes
verbs:
- get
- apiGroups:
- ""
resources:
- services
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
resources:
- ingresses
verbs:
- get
- list
- watch
- apiGroups:
- ""
resources:
- events
verbs:
- create
- patch
- apiGroups:
- "extensions"
resources:
- ingresses/status
verbs:
- update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
name: nginx-ingress-role
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
rules:
- apiGroups:
- ""
resources:
- configmaps
- pods
- secrets
- namespaces
verbs:
- get
- apiGroups:
- ""
resources:
- configmaps
resourceNames:
# Defaults to "<election-id>-<ingress-class>"
# Here: "<ingress-controller-leader>-<nginx>"
# This has to be adapted if you change either parameter
# when launching the nginx-ingress-controller.
- "ingress-controller-leader-nginx"
verbs:
- get
- update
- apiGroups:
- ""
resources:
- configmaps
verbs:
- create
- apiGroups:
- ""
resources:
- endpoints
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
name: nginx-ingress-role-nisa-binding
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: Role
name: nginx-ingress-role
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: nginx-ingress-clusterrole-nisa-binding
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
name: nginx-ingress-serviceaccount
namespace: ingress-nginx
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
name: nginx-ingress-controller
namespace: ingress-nginx
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
spec:
selector:
matchLabels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
template:
metadata:
labels:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
annotations:
prometheus.io/port: "10254"
prometheus.io/scrape: "true"
spec:
hostNetwork: true
serviceAccountName: nginx-ingress-serviceaccount
containers:
- name: nginx-ingress-controller
image: siriuszg/nginx-ingress-controller:0.20.0
args:
- /nginx-ingress-controller
- --configmap=$(POD_NAMESPACE)/nginx-configuration
- --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
- --udp-services-configmap=$(POD_NAMESPACE)/udp-services
- --publish-service=$(POD_NAMESPACE)/ingress-nginx
- --annotations-prefix=nginx.ingress.kubernetes.io
securityContext:
allowPrivilegeEscalation: true
capabilities:
drop:
- ALL
add:
- NET_BIND_SERVICE
# www-data -> 33
runAsUser: 33
env:
- name: POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
ports:
- name: http
containerPort: 80
- name: https
containerPort: 443
livenessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
initialDelaySeconds: 10
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
readinessProbe:
failureThreshold: 3
httpGet:
path: /healthz
port: 10254
scheme: HTTP
periodSeconds: 10
successThreshold: 1
timeoutSeconds: 10
---
apiVersion: v1
kind: Service
metadata:
name: ingress-nginx
namespace: ingress-nginx
spec:
#type: NodePort
ports:
- name: http
port: 80
targetPort: 80
protocol: TCP
- name: https
port: 443
targetPort: 443
protocol: TCP
selector:
app.kubernetes.io/name: ingress-nginx
app.kubernetes.io/part-of: ingress-nginx
Then just apply it:
kubectl apply -f ingress-controller.yaml
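To verify the controller came up, check that the DaemonSet has a pod on every node and that nginx answers on port 80 (a minimal sanity check using the node IPs from earlier; a 404 is expected until you define an Ingress rule):
kubectl get pods -n ingress-nginx -o wide
curl -I http://192.168.8.102 # expect HTTP/1.1 404 Not Found from nginx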
Bloom upward, take root downward. I am Laity, Laity on the way forward.