Kubernetes cluster components:
etcd: a highly available key/value store and service-discovery system
flannel: provides cross-host container networking
kube-apiserver: exposes the Kubernetes cluster API
kube-controller-manager: runs the controllers that keep cluster services in their desired state
kube-scheduler: schedules containers (pods) onto Nodes
kubelet: starts containers on a Node according to the pod specifications it receives
kube-proxy: provides the network proxy service on each Node
Machine list:
192.168.23.134 k8smaster01
192.168.23.133 k8snode01 (add more nodes as needed)
Pre-installation preparation [all machines]:
1. Configure a local yum repository; see https://blog.csdn.net/zhaikaiyun/article/details/104018468 (setting up a private Docker registry (v2)). The yum repository files can be downloaded from the shared Baidu cloud link.
2. Upgrade the CentOS kernel to 4.4 or later; here yum is used to upgrade to version 4.4.207.
[root@k8smaster01 registry]# yum list kernel*
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Installed Packages
kernel-lt.x86_64 4.4.207-1.el7.elrepo @elrepo-kernel
[root@k8smaster01 registry]# yum install kernel-lt.x86_64 -y
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Package kernel-lt-4.4.207-1.el7.elrepo.x86_64 already installed and latest version
Nothing to do
[root@k8smaster01 registry]#
3. Disable the firewall
[root@k8smaster01 registry]# systemctl stop firewalld; systemctl disable firewalld
Temporarily disable SELinux
[root@k8smaster01 registry]# setenforce 0
setenforce: SELinux is disabled
To disable SELinux permanently, edit the /etc/sysconfig/selinux settings:
[root@k8smaster01 registry]# sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
[root@k8smaster01 registry]# sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
[root@k8smaster01 registry]#
Disable the swap partition
[root@k8smaster01 registry]# swapoff -a
To disable swap permanently, comment out the swap line in /etc/fstab:
[root@k8smaster01 registry]# sed -i 's/.*swap.*/#&/' /etc/fstab
[root@k8smaster01 registry]#
Adjust kernel parameters
[root@k8smaster01 registry]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.bridge.bridge-nf-call-ip6tables = 1
> net.bridge.bridge-nf-call-iptables = 1
> EOF
[root@k8smaster01 registry]# sysctl --system
* Applying /usr/lib/sysctl.d/00-system.conf ...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
* Applying /usr/lib/sysctl.d/10-default-yama-scope.conf ...
* Applying /usr/lib/sysctl.d/50-default.conf ...
kernel.sysrq = 16
kernel.core_uses_pid = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.conf.all.accept_source_route = 0
net.ipv4.conf.default.promote_secondaries = 1
net.ipv4.conf.all.promote_secondaries = 1
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
* Applying /usr/lib/sysctl.d/99-docker.conf ...
* Applying /etc/sysctl.d/99-sysctl.conf ...
* Applying /etc/sysctl.d/k8s.conf ...
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
* Applying /etc/sysctl.conf ...
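If sysctl instead complains that the net.bridge.bridge-nf-call-* keys do not exist, the br_netfilter kernel module is most likely not loaded yet. A minimal fix (an addition to the original steps, adjust as needed):
modprobe br_netfilter                                      # load the bridge netfilter module now
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf  # load it automatically after a reboot
sysctl --system                                            # re-apply the settings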
Install the k8s master
1. Install etcd, kubernetes-master, ntp and flannel from the local yum repository
[root@k8smaster01 registry]# yum install -y etcd kubernetes-master ntp flannel
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
Resolving Dependencies
--> Running transaction check
---> Package etcd.x86_64 0:3.3.11-2.el7.centos will be installed
---> Package flannel.x86_64 0:0.7.1-4.el7 will be installed
---> Package kubernetes-master.x86_64 0:1.5.2-0.7.git269f928.el7 will be installed
---> Package ntp.x86_64 0:4.2.6p5-29.el7.centos will be installed
--> Finished Dependency Resolution
Dependencies Resolved
============================================================================================================================================================================
Package Arch Version Repository Size
============================================================================================================================================================================
Installing:
etcd x86_64 3.3.11-2.el7.centos yum 10 M
flannel x86_64 0.7.1-4.el7 yum 7.5 M
kubernetes-master x86_64 1.5.2-0.7.git269f928.el7 yum 25 M
ntp x86_64 4.2.6p5-29.el7.centos yum 548 k
Transaction Summary
============================================================================================================================================================================
Install 4 Packages
Total download size: 43 M
Installed size: 233 M
Downloading packages:
(1/4): flannel-0.7.1-4.el7.x86_64.rpm | 7.5 MB 00:00:00
(2/4): etcd-3.3.11-2.el7.centos.x86_64.rpm | 10 MB 00:00:00
(3/4): ntp-4.2.6p5-29.el7.centos.x86_64.rpm | 548 kB 00:00:00
(4/4): kubernetes-master-1.5.2-0.7.git269f928.el7.x86_64.rpm | 25 MB 00:00:00
—————————————————————————————————————————————————————————-
Total 38 MB/s | 43 MB 00:00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : flannel-0.7.1-4.el7.x86_64 1/4
Installing : etcd-3.3.11-2.el7.centos.x86_64 2/4
Installing : kubernetes-master-1.5.2-0.7.git269f928.el7.x86_64 3/4
Installing : ntp-4.2.6p5-29.el7.centos.x86_64 4/4
Verifying : ntp-4.2.6p5-29.el7.centos.x86_64 1/4
Verifying : kubernetes-master-1.5.2-0.7.git269f928.el7.x86_64 2/4
Verifying : etcd-3.3.11-2.el7.centos.x86_64 3/4
Verifying : flannel-0.7.1-4.el7.x86_64 4/4
Installed:
etcd.x86_64 0:3.3.11-2.el7.centos flannel.x86_64 0:0.7.1-4.el7 kubernetes-master.x86_64 0:1.5.2-0.7.git269f928.el7 ntp.x86_64 0:4.2.6p5-29.el7.centos
Complete!
[root@k8smaster01 registry]#
2. Configure the services
2.1 Configure etcd
[root@k8smaster01 registry]# grep -v '^#' /etc/etcd/etcd.conf
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://localhost:2379,http://192.168.23.134:2379"
ETCD_NAME="default"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.23.134:2379"
[root@k8smaster01 registry]# systemctl start etcd; systemctl enable etcd    # start and enable the service
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
[root@k8smaster01 registry]# etcdctl cluster-health    # check the etcd cluster status
member 8e9e05c52164694d is healthy: got healthy result from http://192.168.23.134:2379
cluster is healthy
[root@k8smaster01 registry]# etcdctl member list    # list the etcd cluster members; there is only one here
8e9e05c52164694d: name=default peerURLs=http://localhost:2380 clientURLs=http://192.168.23.134:2379 isLeader=true
[root@k8smaster01 registry]#
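Besides etcdctl, etcd can be probed over plain HTTP from any machine, which is a quick way to confirm the client URL configured above is reachable (commands only, output omitted; they assume the listen URL set in /etc/etcd/etcd.conf above):
curl http://192.168.23.134:2379/health     # should report the member as healthy
curl http://192.168.23.134:2379/version    # shows the etcd server version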
2.2 Configure kube-apiserver
[root@k8smaster01 registry]# grep -v '^#' /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.23.134:8080"
[root@k8smaster01 registry]# grep -v '^#' /etc/kubernetes/apiserver
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
KUBE_ETCD_SERVERS="--etcd-servers=http://192.168.23.134:2379"
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
KUBE_API_ARGS=""
[root@k8smaster01 registry]#
2.3 Configure kube-controller-manager
[root@k8smaster01 registry]# grep -v '^#' /etc/kubernetes/controller-manager
KUBE_CONTROLLER_MANAGER_ARGS=""
2.4 Configure kube-scheduler
[root@k8smaster01 registry]# grep -v '^#' /etc/kubernetes/scheduler
KUBE_SCHEDULER_ARGS="0.0.0.0"
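Note that the bare value 0.0.0.0 is passed to kube-scheduler as a positional argument (it is visible at the end of the process command line in the next step). If the intent is to set the bind address, a common alternative seen in other guides is the explicit flag (an assumption about intent, not part of the original transcript):
KUBE_SCHEDULER_ARGS="--address=0.0.0.0"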
3. Start the kube-apiserver, kube-controller-manager and kube-scheduler components
[root@k8smaster01 registry]# for i in kube-apiserver kube-controller-manager kube-scheduler;do systemctl restart $i; systemctl enable $i;systemctl status $i; done
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
● kube-apiserver.service – Kubernetes API Server
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2020-01-16 09:45:52 EST; 2min 21s ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 1472 (kube-apiserver)
CGroup: /system.slice/kube-apiserver.service
└─1472 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://192.168.23.134:2379 --insecure-bind-address=0.0.0.0 --allow-privileged=false --s…
Jan 16 09:45:53 k8smaster01 kube-apiserver[1472]: [711.291468ms] [711.261572ms] Object stored in database
Jan 16 09:45:53 k8smaster01 kube-apiserver[1472]: [711.296003ms] [4.535µs] Self-link added
Jan 16 09:45:53 k8smaster01 kube-apiserver[1472]: "Update /api/v1/namespaces/kube-system/endpoints/kube-scheduler" [711.338217ms] [42.214µs] END
Jan 16 09:45:53 k8smaster01 kube-apiserver[1472]: I0116 09:45:53.730405 1472 trace.go:61] Trace "Update /api/v1/namespaces/kube-system/endpoints/kube-contro…0500 EST):
Jan 16 09:45:53 k8smaster01 kube-apiserver[1472]: [19.921µs] [19.921µs] About to convert to expected version
Jan 16 09:45:53 k8smaster01 kube-apiserver[1472]: [182.334µs] [162.413µs] Conversion done
Jan 16 09:45:53 k8smaster01 kube-apiserver[1472]: [187.272µs] [4.938µs] About to store object in database
Jan 16 09:45:53 k8smaster01 kube-apiserver[1472]: [922.495559ms] [922.308287ms] Object stored in database
Jan 16 09:45:53 k8smaster01 kube-apiserver[1472]: [922.497663ms] [2.104µs] Self-link added
Jan 16 09:45:53 k8smaster01 kube-apiserver[1472]: "Update /api/v1/namespaces/kube-system/endpoints/kube-controller-manager" [922.518962ms] [21.299µs] END
Hint: Some lines were ellipsized, use -l to show in full.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
● kube-controller-manager.service – Kubernetes Controller Manager
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2020-01-15 16:05:48 EST; 237ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 57071 (kube-controller)
CGroup: /system.slice/kube-controller-manager.service
└─57071 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://192.168.23.134:8080
Jan 15 16:05:48 k8smaster01 systemd[1]: Started Kubernetes Controller Manager.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
● kube-scheduler.service – Kubernetes Scheduler Plugin
Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2020-01-15 16:05:49 EST; 298ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 57107 (kube-scheduler)
CGroup: /system.slice/kube-scheduler.service
└─57107 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://192.168.23.134:8080 0.0.0.0
Jan 15 16:05:49 k8smaster01 systemd[1]: Started Kubernetes Scheduler Plugin.
[root@k8smaster01 registry]# kubectl get nodes
No resources found.
[root@k8smaster01 registry]# kubectl get pods
No resources found.
[root@k8smaster01 registry]# kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 1d
[root@k8smaster01 registry]# kubectl get deployment
No resources found.
[root@k8smaster01 registry]#
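Before moving on to the node, a couple of extra sanity checks can confirm the control plane is healthy (commands only, output omitted here; they assume the insecure API port 8080 configured above):
kubectl get componentstatuses              # scheduler, controller-manager and etcd-0 should all report Healthy
curl http://192.168.23.134:8080/version    # the API server answers on the insecure port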
Install the k8s node
1. Configure yum
[root@k8smaster01 yum.repos.d]# scp yum.repo k8snode01:/etc/yum.repos.d/
root@k8snode01's password:
yum.repo 100% 61 43.8KB/s 00:00
[root@k8snode01 yum.repos.d]# ls -lrt
total 4
drwxr-xr-x 2 root root 47 Jan 15 16:09 zhaiky
-rw-r--r-- 1 root root 61 Jan 15 16:10 yum.repo
[root@k8snode01 yum.repos.d]# yum clean all
Loaded plugins: fastestmirror
Cleaning repos: yum
[root@k8snode01 yum.repos.d]#
2. Install the kubernetes-node, ntp, flannel and docker components (any previously installed copies are removed first, then reinstalled from the local repository)
[root@k8snode01 yum.repos.d]# yum remove -y kubernetes-node ntp flannel docker
Loaded plugins: fastestmirror
Resolving Dependencies
--> Running transaction check
---> Package docker.x86_64 2:1.13.1-103.git7f2769b.el7.centos will be erased
---> Package flannel.x86_64 0:0.7.1-4.el7 will be erased
---> Package kubernetes-node.x86_64 0:1.5.2-0.7.git269f928.el7 will be erased
---> Package ntp.x86_64 0:4.2.6p5-29.el7.centos will be erased
--> Finished Dependency Resolution
Dependencies Resolved
============================================================================================================================================================================
Package Arch Version Repository Size
============================================================================================================================================================================
Removing:
docker x86_64 2:1.13.1-103.git7f2769b.el7.centos @yum 65 M
flannel x86_64 0.7.1-4.el7 @yum 41 M
kubernetes-node x86_64 1.5.2-0.7.git269f928.el7 @extras 78 M
ntp x86_64 4.2.6p5-29.el7.centos @yum 1.4 M
Transaction Summary
============================================================================================================================================================================
Remove 4 Packages
Installed size: 185 M
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Erasing : kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64 1/4
Erasing : 2:docker-1.13.1-103.git7f2769b.el7.centos.x86_64 2/4
Erasing : flannel-0.7.1-4.el7.x86_64 3/4
warning: /etc/sysconfig/flanneld saved as /etc/sysconfig/flanneld.rpmsave
Erasing : ntp-4.2.6p5-29.el7.centos.x86_64 4/4
Verifying : ntp-4.2.6p5-29.el7.centos.x86_64 1/4
Verifying : 2:docker-1.13.1-103.git7f2769b.el7.centos.x86_64 2/4
Verifying : flannel-0.7.1-4.el7.x86_64 3/4
Verifying : kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64 4/4
Removed:
docker.x86_64 2:1.13.1-103.git7f2769b.el7.centos flannel.x86_64 0:0.7.1-4.el7 kubernetes-node.x86_64 0:1.5.2-0.7.git269f928.el7 ntp.x86_64 0:4.2.6p5-29.el7.centos
Complete!
[root@k8snode01 yum.repos.d]#
[root@k8snode01 yum.repos.d]# yum install -y kubernetes-node ntp flannel docker
Loaded plugins: fastestmirror
Determining fastest mirrors
yum | 2.9 kB 00:00:00
yum/primary_db | 455 kB 00:00:00
Resolving Dependencies
--> Running transaction check
---> Package docker.x86_64 2:1.13.1-103.git7f2769b.el7.centos will be installed
---> Package flannel.x86_64 0:0.7.1-4.el7 will be installed
---> Package kubernetes-node.x86_64 0:1.5.2-0.7.git269f928.el7 will be installed
---> Package ntp.x86_64 0:4.2.6p5-29.el7.centos will be installed
--> Finished Dependency Resolution
Dependencies Resolved
============================================================================================================================================================================
Package Arch Version Repository Size
============================================================================================================================================================================
Installing:
docker x86_64 2:1.13.1-103.git7f2769b.el7.centos yum 18 M
flannel x86_64 0.7.1-4.el7 yum 7.5 M
kubernetes-node x86_64 1.5.2-0.7.git269f928.el7 yum 14 M
ntp x86_64 4.2.6p5-29.el7.centos yum 548 k
Transaction Summary
============================================================================================================================================================================
Install 4 Packages
Total download size: 40 M
Installed size: 185 M
Downloading packages:
(1/4): docker-1.13.1-103.git7f2769b.el7.centos.x86_64.rpm | 18 MB 00:00:00
(2/4): flannel-0.7.1-4.el7.x86_64.rpm | 7.5 MB 00:00:00
(3/4): ntp-4.2.6p5-29.el7.centos.x86_64.rpm | 548 kB 00:00:00
(4/4): kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64.rpm | 14 MB 00:00:00
—————————————————————————————————————————————————————————-
Total 60 MB/s | 40 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : 2:docker-1.13.1-103.git7f2769b.el7.centos.x86_64 1/4
Installing : kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64 2/4
Installing : flannel-0.7.1-4.el7.x86_64 3/4
Installing : ntp-4.2.6p5-29.el7.centos.x86_64 4/4
Verifying : ntp-4.2.6p5-29.el7.centos.x86_64 1/4
Verifying : 2:docker-1.13.1-103.git7f2769b.el7.centos.x86_64 2/4
Verifying : flannel-0.7.1-4.el7.x86_64 3/4
Verifying : kubernetes-node-1.5.2-0.7.git269f928.el7.x86_64 4/4
Installed:
docker.x86_64 2:1.13.1-103.git7f2769b.el7.centos flannel.x86_64 0:0.7.1-4.el7 kubernetes-node.x86_64 0:1.5.2-0.7.git269f928.el7 ntp.x86_64 0:4.2.6p5-29.el7.centos
Complete!
[root@k8snode01 yum.repos.d]#
3. Load the pod-infrastructure.tar image and push it to the private registry
[root@k8smaster01 zhaiky]# docker load -i pod-infrastructure.tar
c1eac31e742f: Loading layer [==================================================>] 205.9 MB/205.9 MB
9161a60cc964: Loading layer [==================================================>] 10.24 kB/10.24 kB
6872307367a6: Loading layer [==================================================>] 12.74 MB/12.74 MB
Loaded image: registry.access.redhat.com/rhel7/pod-infrastructure:latest
[root@k8smaster01 zhaiky]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
192.168.23.134:5000/nginx latest c7460dfcab50 5 days ago 126 MB
docker.io/registry latest f32a97de94e1 10 months ago 25.8 MB
registry.access.redhat.com/rhel7/pod-infrastructure latest 99965fb98423 2 years ago 209 MB
[root@k8smaster01 zhaiky]# docker tag registry.access.redhat.com/rhel7/pod-infrastructure:latest 192.168.23.134:5000/pod-infrastructure:latest
[root@k8smaster01 zhaiky]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
192.168.23.134:5000/nginx latest c7460dfcab50 5 days ago 126 MB
docker.io/registry latest f32a97de94e1 10 months ago 25.8 MB
192.168.23.134:5000/pod-infrastructure latest 99965fb98423 2 years ago 209 MB
registry.access.redhat.com/rhel7/pod-infrastructure latest 99965fb98423 2 years ago 209 MB
[root@k8smaster01 zhaiky]# docker push 192.168.23.134:5000/pod-infrastructure:latest
The push refers to a repository [192.168.23.134:5000/pod-infrastructure]
6872307367a6: Pushed
9161a60cc964: Pushed
c1eac31e742f: Pushed
latest: digest: sha256:460909dd861a61cce7cf76404578dcdf7bf916e57612eea60a807b0cddb453cd size: 948
[root@k8smaster01 zhaiky]#
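The push can be verified against the registry's v2 HTTP API (output omitted; this assumes the private registry from the linked article is listening on plain HTTP at 192.168.23.134:5000):
curl http://192.168.23.134:5000/v2/_catalog                        # repository list should now include pod-infrastructure
curl http://192.168.23.134:5000/v2/pod-infrastructure/tags/list    # lists the available tags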
4. Configure the services
4.1 Configure the k8snode01 network (flannel)
[root@k8smaster01 zhaiky]# etcdctl set /atomic.io/network/config '{"Network": "172.16.0.0/16"}'
{"Network": "172.16.0.0/16"}
[root@k8smaster01 zhaiky]#
[root@k8snode01 yum.repos.d]# grep -v '^#' /etc/sysconfig/flanneld
FLANNEL_ETCD_ENDPOINTS="http://192.168.23.134:2379"
FLANNEL_ETCD_PREFIX="/atomic.io/network"
FLANNEL_OPTIONS=""
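Once flanneld is running (step 5 below), the network written to etcd and the subnet leased to this node can be checked; a quick sketch (paths may vary by packaging, /run/flannel/subnet.env is where the RPM's flanneld normally records its lease):
etcdctl get /atomic.io/network/config    # on the master: shows {"Network": "172.16.0.0/16"}
cat /run/flannel/subnet.env              # on the node: shows the FLANNEL_SUBNET leased to this host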
4.2 Configure kube-proxy on k8snode01
[root@k8snode01 yum.repos.d]# grep -v '^#' /etc/kubernetes/config
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://192.168.23.134:8080"
4.3 Configure the kubelet on k8snode01
[root@k8snode01 yum.repos.d]# grep -v '^#' /etc/kubernetes/kubelet
KUBELET_ADDRESS="--address=127.0.0.1"
KUBELET_HOSTNAME="--hostname-override=192.168.23.133"
KUBELET_API_SERVER="--api-servers=http://192.168.23.134:8080"
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=192.168.23.134:5000/pod-infrastructure:latest"
KUBELET_ARGS=""
[root@k8snode01 yum.repos.d]#
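Because the pod-infra image (and the nginx image used later) come from a registry served over plain HTTP, docker on the node has to be told to trust 192.168.23.134:5000 or image pulls will fail. One way to do that, assuming this docker 1.13 package honours /etc/docker/daemon.json (the registry article linked at the top covers the same point):
cat <<EOF > /etc/docker/daemon.json
{ "insecure-registries": ["192.168.23.134:5000"] }
EOF
systemctl restart docker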
5. Start the k8snode01 components
[root@k8snode01 yum.repos.d]# for i in flanneld kube-proxy kubelet docker;do systemctl restart $i;systemctl enable $i;systemctl status $i ;done
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
Created symlink from /etc/systemd/system/docker.service.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
● flanneld.service – Flanneld overlay address etcd agent
Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2020-01-15 16:55:13 EST; 100ms ago
Main PID: 29421 (flanneld)
CGroup: /system.slice/flanneld.service
└─29421 /usr/bin/flanneld -etcd-endpoints=http://192.168.23.134:2379 -etcd-prefix=/atomic.io/network
Jan 15 16:55:13 k8snode01 systemd[1]: Starting Flanneld overlay address etcd agent…
Jan 15 16:55:13 k8snode01 flanneld-start[29421]: I0115 16:55:13.492052 29421 main.go:132] Installing signal handlers
Jan 15 16:55:13 k8snode01 flanneld-start[29421]: I0115 16:55:13.492243 29421 manager.go:136] Determining IP address of default interface
Jan 15 16:55:13 k8snode01 flanneld-start[29421]: I0115 16:55:13.492632 29421 manager.go:149] Using interface with name ens33 and address 192.168.23.133
Jan 15 16:55:13 k8snode01 flanneld-start[29421]: I0115 16:55:13.492660 29421 manager.go:166] Defaulting external address to interface address (192.168.23.133)
Jan 15 16:55:13 k8snode01 flanneld-start[29421]: I0115 16:55:13.501266 29421 local_manager.go:134] Found lease (172.16.11.0/24) for current IP (192.168.23.133), reusing
Jan 15 16:55:13 k8snode01 flanneld-start[29421]: I0115 16:55:13.514075 29421 manager.go:250] Lease acquired: 172.16.11.0/24
Jan 15 16:55:13 k8snode01 flanneld-start[29421]: I0115 16:55:13.514431 29421 network.go:98] Watching for new subnet leases
Jan 15 16:55:13 k8snode01 systemd[1]: Started Flanneld overlay address etcd agent.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
● kube-proxy.service – Kubernetes Kube-Proxy Server
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2020-01-15 16:55:13 EST; 170ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 29487 (kube-proxy)
CGroup: /system.slice/kube-proxy.service
└─29487 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://192.168.23.134:8080
Jan 15 16:55:13 k8snode01 systemd[1]: Started Kubernetes Kube-Proxy Server.
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
● kubelet.service – Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2020-01-15 16:55:17 EST; 194ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 29696 (kubelet)
CGroup: /system.slice/kubelet.service
└─29696 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://192.168.23.134:8080 --address=127.0.0.1 --hostname-override=192.168.23.133 --allow-priv…
Jan 15 16:55:17 k8snode01 systemd[1]: Started Kubernetes Kubelet Server.
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
● docker.service – Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Drop-In: /usr/lib/systemd/system/docker.service.d
└─flannel.conf
Active: active (running) since Wed 2020-01-15 16:55:20 EST; 173ms ago
Docs: http://docs.docker.com
Main PID: 29791 (dockerd-current)
CGroup: /system.slice/docker.service
├─29791 /usr/bin/dockerd-current --add-runtime docker-runc=/usr/libexec/docker/docker-runc-current --default-runtime=docker-runc --exec-opt native.cgroupdrive…
└─29798 /usr/bin/docker-containerd-current -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --metrics-interval=0 --start-timeout 2m --state-dir …
Jan 15 16:55:19 k8snode01 dockerd-current[29791]: time="2020-01-15T16:55:19.951301500-05:00" level=warning msg="overlay2: the backing xfs filesystem is formatted without…
Jan 15 16:55:19 k8snode01 dockerd-current[29791]: time="2020-01-15T16:55:19.955863277-05:00" level=warning msg="Docker could not enable SELinux on the host system"
Jan 15 16:55:19 k8snode01 dockerd-current[29791]: time="2020-01-15T16:55:19.964161213-05:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Jan 15 16:55:19 k8snode01 dockerd-current[29791]: time="2020-01-15T16:55:19.964964318-05:00" level=info msg="Loading containers: start."
Jan 15 16:55:19 k8snode01 dockerd-current[29791]: time="2020-01-15T16:55:19.995710283-05:00" level=info msg="Firewalld running: false"
Jan 15 16:55:20 k8snode01 dockerd-current[29791]: time="2020-01-15T16:55:20.161968053-05:00" level=info msg="Loading containers: done."
Jan 15 16:55:20 k8snode01 dockerd-current[29791]: time="2020-01-15T16:55:20.178307009-05:00" level=info msg="Daemon has completed initialization"
Jan 15 16:55:20 k8snode01 dockerd-current[29791]: time="2020-01-15T16:55:20.178361669-05:00" level=info msg="Docker daemon" commit="7f2769b/1.13.1" graphdriver…ion=1.13.1
Jan 15 16:55:20 k8snode01 systemd[1]: Started Docker Application Container Engine.
Jan 15 16:55:20 k8snode01 dockerd-current[29791]: time="2020-01-15T16:55:20.202148151-05:00" level=info msg="API listen on /var/run/docker.sock"
Hint: Some lines were ellipsized, use -l to show in full.
[root@k8snode01 yum.repos.d]#
6. Check the node
[root@k8smaster01 ~]# kubectl get nodes
NAME STATUS AGE
192.168.23.133 Ready 4m
[root@k8smaster01 ~]#
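If the node shows NotReady or does not appear at all, its registration can be inspected (output omitted):
kubectl describe node 192.168.23.133    # on the master: conditions, capacity and recent events for the node
journalctl -u kubelet -f                # on the node: follow the kubelet log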
7. Operating k8s with kubectl commands
[root@k8smaster01 ~]# kubectl get nodes
NAME STATUS AGE
192.168.23.133 Ready 17m
[root@k8smaster01 ~]# kubectl run nginx --image=192.168.23.134:5000/nginx --port=80 --replicas=1
deployment “nginx” created
[root@k8smaster01 ~]# kubectl get deployment
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 1 1 1 1 1m
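As an aside (not part of the original transcript), the replica count of a deployment created this way can be changed at any time, for example:
kubectl scale deployment nginx --replicas=2    # the ReplicaSet then starts a second nginx pod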
[root@k8smaster01 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-2780347830-nbhxz 1/1 Running 0 1m
[root@k8smaster01 ~]# kubectl expose deployment nginx --port=80 --type=LoadBalancer
service “nginx” exposed
[root@k8smaster01 ~]# kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 1d
nginx 10.254.106.69 <pending> 80:30731/TCP 7s
Note: on bare metal there is no cloud load balancer behind type=LoadBalancer, so the EXTERNAL-IP stays <pending>; the service is still reachable through its NodePort (30731 here).
[root@k8smaster01 ~]# kubectl describe pod nginx
Name: nginx-2780347830-nbhxz
Namespace: default
Node: 192.168.23.133/192.168.23.133
Start Time: Fri, 17 Jan 2020 03:39:58 -0500
Labels: pod-template-hash=2780347830
run=nginx
Status: Running
IP: 172.16.11.2
Controllers: ReplicaSet/nginx-2780347830
Containers:
nginx:
Container ID: docker://98b92df4deff534e5543cac4f1c42952467c329fc158d34dc347430791841b0a
Image: 192.168.23.134:5000/nginx
Image ID: docker-pullable://192.168.23.134:5000/nginx@sha256:89a42c3ba15f09a3fbe39856bddacdf9e94cd03df7403cad4fc105088e268fc9
Port: 80/TCP
State: Running
Started: Fri, 17 Jan 2020 03:40:18 -0500
Ready: True
Restart Count: 0
Volume Mounts: <none>
Environment Variables: <none>
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
No volumes.
QoS Class: BestEffort
Tolerations: <none>
Events:
FirstSeen LastSeen Count From SubObjectPath Type Reason Message
——— ——– —– —- ————- ——– —— ——-
3m 3m 1 {default-scheduler } Normal Scheduled Successfully assigned nginx-2780347830-nbhxz to 192.168.23.133
<invalid> <invalid> 1 {kubelet 192.168.23.133} spec.containers{nginx} Normal Pulling pulling image "192.168.23.134:5000/nginx"
<invalid> <invalid> 2 {kubelet 192.168.23.133} Warning MissingClusterDNS kubelet does not have ClusterDNS IP configured and cannot create Pod using "ClusterFirst" policy. Falling back to DNSDefault policy.
<invalid> <invalid> 1 {kubelet 192.168.23.133} spec.containers{nginx} Normal Pulled Successfully pulled image "192.168.23.134:5000/nginx"
<invalid> <invalid> 1 {kubelet 192.168.23.133} spec.containers{nginx} Normal Created Created container with docker id 98b92df4deff; Security:[seccomp=unconfined]
<invalid> <invalid> 1 {kubelet 192.168.23.133} spec.containers{nginx} Normal Started Started container with docker id 98b92df4deff
[root@k8snode01 log]# curl "http://192.168.23.133:30731"
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@k8snode01 log]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
192.168.23.134:5000/nginx latest c7460dfcab50 7 days ago 126 MB
192.168.23.134:5000/pod-infrastructure latest 99965fb98423 2 years ago 209 MB
[root@k8snode01 log]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
98b92df4deff 192.168.23.134:5000/nginx "nginx -g 'daemon …" 36 seconds ago Up 35 seconds k8s_nginx.ae2a0158_nginx-2780347830-nbhxz_default_6a7f5008-3871-11ea-b26d-000c299594c3_849a0082
6c2956f72c98 192.168.23.134:5000/pod-infrastructure:latest "/usr/bin/pod" 45 seconds ago Up 44 seconds k8s_POD.463102c5_nginx-2780347830-nbhxz_default_6a7f5008-3871-11ea-b26d-000c299594c3_f5a2bc99
[root@k8snode01 log]#
[root@k8smaster01 k8s]# kubectl delete deployment nginx
deployment “nginx” deleted
[root@k8smaster01 k8s]# kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 1d
nginx 10.254.106.69 <pending> 80:30731/TCP 4m
[root@k8smaster01 k8s]# kubectl delete service nginx
service “nginx” deleted
[root@k8smaster01 k8s]#
8. Using configuration files (YAML)
[root@k8smaster01 k8s]# more nginx-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  containers:
  - name: nginx
    image: 192.168.23.134:5000/nginx
    imagePullPolicy: IfNotPresent
    ports:
    - containerPort: 80
  restartPolicy: Always
[root@k8smaster01 k8s]# more nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  sessionAffinity: ClientIP
  selector:
    app: nginx
  ports:
  - port: 80
    nodePort: 30080
[root@k8smaster01 k8s]# more pod_yaml.sh
kubectl create -f nginx-pod.yaml
[root@k8smaster01 k8s]# more svc_yaml.sh
kubectl create -f nginx-svc.yaml
[root@k8smaster01 k8s]# sh pod_yaml.sh
pod “nginx” created
[root@k8smaster01 k8s]# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 6s
[root@k8smaster01 k8s]# sh svc_yaml.sh
service “nginx-service” created
[root@k8smaster01 k8s]# kubectl get service
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes 10.254.0.1 <none> 443/TCP 1d
nginx-service 10.254.203.126 <nodes> 80:30080/TCP 6s
[root@k8smaster01 k8s]#
[root@k8snode01 log]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
192.168.23.134:5000/nginx latest c7460dfcab50 7 days ago 126 MB
192.168.23.134:5000/pod-infrastructure latest 99965fb98423 2 years ago 209 MB
[root@k8snode01 log]# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6d840ddc40ea 192.168.23.134:5000/nginx "nginx -g 'daemon …" About a minute ago Up About a minute k8s_nginx.cad903a8_nginx_default_eee252a2-3872-11ea-b26d-000c299594c3_7bcfb107
885152ae7623 192.168.23.134:5000/pod-infrastructure:latest "/usr/bin/pod" About a minute ago Up About a minute k8s_POD.463102c5_nginx_default_eee252a2-3872-11ea-b26d-000c299594c3_6cb4d540
[root@k8snode01 log]# curl "http://192.168.23.133:30080"
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
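A bare Pod like the one above is not rescheduled if its node fails; the same workload can also be described as a Deployment and picked up by the same nginx-service selector. A minimal sketch, not part of the original transcript (on this 1.5.x release the Deployment API is assumed to live in the extensions/v1beta1 group):
cat <<EOF > nginx-deploy.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx          # matches the selector in nginx-svc.yaml
    spec:
      containers:
      - name: nginx
        image: 192.168.23.134:5000/nginx
        ports:
        - containerPort: 80
EOF
kubectl create -f nginx-deploy.yaml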