Kubernetes Advanced Topics








YAML Basics



YAML Syntax Rules

  • Case sensitive
  • Indentation expresses hierarchy
  • Tabs are not allowed for indentation; only spaces may be used
  • The number of spaces does not matter, as long as elements at the same level are left-aligned
  • "#" marks a comment; everything from that character to the end of the line is ignored by the parser
  • YAML has only two structure types you need to know: lists and maps



YAML Maps

The value of a map entry can be a string or another map. For example:

apiVersion: v1
kind: Pod
metadata:
  name: test-site
  labels:
    app: web

Note: in the YAML above, the value of the metadata key is a map, and the nested labels key is itself a map. In practice you can nest as many levels as the situation requires.

A YAML processor uses line indentation to understand how pieces of content relate to each other. The example above uses two spaces for indentation, but the exact number of spaces is not important; at least one space is required and the indentation must stay consistent. For example, name and labels are at the same indentation level, so the YAML processor knows they belong to the same map; it knows app is a value under labels because app is indented further.

Note: never use the Tab key in a YAML file.



YAML Lists

A list is, simply put, an array. For example:

args:
  - sleep
  - "1000"
  - message
  - "This is test!"

As you can see, a list can hold any number of items; each item starts with a dash (-) and is indented relative to its parent.

Of course, list items can themselves be maps, and map values can in turn be lists, as shown below:

---
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88



Pod YAML Parameter Definitions

A Pod is the smallest unit in Kubernetes, and all of its information is recorded in a YAML file. So how exactly is this YAML file written? Which parameters does it contain? How do we modify it? Let's look at it with these questions in mind.

Which parameters does a Pod YAML have?

Kubernetes YAML configuration files look long to beginners, and seemingly without pattern. In fact, we can approach them from two angles: first, which fields are mandatory; second, which main parameter objects the YAML contains.



Mandatory fields

Note that the following fields must be declared in every YAML file, otherwise it will definitely fail:

Field Type Description
apiVersion String The Kubernetes API version; v1 is the usual value, and kubectl api-versions lists the versions available
kind String The resource type and role defined by the YAML file, e.g. Pod
metadata Object Metadata object; the key is always written as metadata
metadata.name String Name of the metadata object, chosen by us, e.g. the name of the Pod
metadata.namespace String Namespace of the metadata object, defined by us
spec Object The detailed specification object; the key is always written as spec
spec.containers[] list The container list of the spec object
spec.containers[].name String Name of the container
spec.containers[].image String Name of the image to use

All of the above are mandatory when writing a YAML file; the most basic YAML file consists of just these fields, as in the sketch below.
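As a minimal sketch, a manifest containing only these mandatory fields might look like this (the names and image below are placeholders, not from the article):

apiVersion: v1
kind: Pod
metadata:
  name: demo-pod            # placeholder name
  namespace: default
spec:
  containers:
  - name: demo              # placeholder container name
    image: nginx            # any image you can pull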



Main parameter objects

The previous part covered only the mandatory parameters. Are there others? Yes: the remaining parameters are not mandatory, but they make the YAML definition more detailed and feature-rich, so they are worth knowing as well. All of the parameters that follow live under the spec object and fall into two main groups: spec.containers and spec.volumes.



spec.containers

spec.containers is a list; as the name suggests, it holds the parameters that describe the containers, so it has a large number of sub-fields. See the table below and the example sketch after it:

Field Type Description
spec.containers[].name String Name of the container
spec.containers[].image String Name of the image to use
spec.containers[].imagePullPolicy String Image pull policy; one of Always, Never, IfNotPresent

(1) Always: always try to pull the image anew

(2) Never: only ever use a local image

(3) IfNotPresent: use the local image if it exists, otherwise pull it.

If none of the three is set, the default is Always for the :latest tag and IfNotPresent otherwise.
spec.containers[].command[] List Startup command of the container; being a list, it can hold several entries. If omitted, the command baked into the image is used.
spec.containers[].args[] List Arguments to the startup command; being a list, it can hold several entries.
spec.containers[].workingDir String Working directory of the container
spec.containers[].volumeMounts[] List Volume mounts inside the container
spec.containers[].volumeMounts[].name String Name of the volume the container mounts
spec.containers[].volumeMounts[].mountPath String Path at which the volume is mounted inside the container
spec.containers[].volumeMounts[].readOnly Boolean Read/write mode of the mount, true or false; read-write by default
spec.containers[].ports[] List Ports the container needs to use
spec.containers[].ports[].name String Name of the port
spec.containers[].ports[].containerPort String Port the container listens on
spec.containers[].ports[].hostPort String Port to listen on on the host running the container; defaults to the same value as containerPort. Note that once hostPort is set, two replicas of the container cannot run on the same host, because the host port would conflict.
spec.containers[].ports[].protocol String Port protocol, TCP or UDP; default TCP
spec.containers[].env[] List Environment variables to set before the container runs
spec.containers[].env[].name String Name of the environment variable
spec.containers[].env[].value String Value of the environment variable
spec.containers[].resources Object Resource limits and requests (this is where container resource caps are configured)
spec.containers[].resources.limits Object Upper limits on the resources the container may consume at runtime
spec.containers[].resources.limits.cpu String CPU limit, in cores (millicores such as 500m are allowed)
spec.containers[].resources.limits.memory String Memory limit, in units such as MiB or GiB
spec.containers[].resources.requests Object Resource requests used when the container is started and scheduled
spec.containers[].resources.requests.cpu String CPU request, in cores; the amount initially available when the container starts
spec.containers[].resources.requests.memory String Memory request, in MiB or GiB; the amount initially available when the container starts
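As a sketch of how these container fields fit together (the image, command, ports and values below are illustrative only, not taken from the article):

spec:
  containers:
  - name: demo
    image: nginx:1.21                  # placeholder image
    imagePullPolicy: IfNotPresent
    command: ["nginx"]                 # overrides the image's startup command
    args: ["-g", "daemon off;"]
    workingDir: /usr/share/nginx/html
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
    env:
    - name: RUN_ENV
      value: "test"
    resources:
      requests:
        cpu: 250m
        memory: 64Mi
      limits:
        cpu: 500m
        memory: 128Mi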



spec.volumes

spec.volumes is a list; as the name suggests, it defines the Pod's storage volumes. It too has many sub-fields; see the table below and the sketch after it:

Field Type Description
spec.volumes[].name String Name of the Pod's shared volume; the spec.containers[].volumeMounts[].name in the container definition must use the same value.
spec.volumes[].emptyDir Object Temporary directory of the Pod; its value is an empty object: emptyDir: {}
spec.volumes[].hostPath Object Mounts a directory of the host the Pod runs on
spec.volumes[].hostPath.path String Directory on the Pod's host that will be mounted into the container
spec.volumes[].secret Object Volume of type secret. As the name suggests, a secret stores sensitive files such as passwords, tokens or keys; this mounts a predefined cluster Secret object into the container.
spec.volumes[].configMap Object Volume of type configMap; mounts a predefined cluster ConfigMap object into the container.
spec.containers[].livenessProbe Object Health-check settings for a container in the Pod (note: probes are container fields, not volume fields); after the probe fails several times, the system automatically restarts the container. As mentioned in an earlier article, it can be configured as exec, httpGet or tcpSocket.
spec.containers[].livenessProbe.exec Object Health check using the exec method
spec.containers[].livenessProbe.exec.command[] String Command or script to run when the exec method is used
spec.containers[].livenessProbe.httpGet Object Health check using the httpGet method
spec.containers[].livenessProbe.tcpSocket Object Health check using the tcpSocket method
spec.containers[].livenessProbe.initialDelaySeconds Number Delay before the first probe after the container has started, in seconds
spec.containers[].livenessProbe.timeoutSeconds Number Timeout for a probe response, in seconds, default 1s; if it is exceeded, the container is considered unhealthy and is restarted.
spec.containers[].livenessProbe.periodSeconds Number Interval between periodic probes, in seconds; default is one probe every 10s.
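A hedged sketch combining spec.volumes with the matching volumeMounts entries on the container side (the paths and names are placeholders):

spec:
  containers:
  - name: demo
    image: nginx
    volumeMounts:
    - name: html                 # must match a spec.volumes[].name below
      mountPath: /usr/share/nginx/html
      readOnly: true
    - name: cache
      mountPath: /cache
  volumes:
  - name: html
    hostPath:
      path: /data/html           # directory on the node
  - name: cache
    emptyDir: {}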



Additional parameter objects

Besides containers and volumes, the two main groups above, a few parameters remain (a short sketch follows the table):

Field Type Description
spec.restartPolicy String Restart policy of the Pod; possible values are Always, OnFailure and Never, default Always. 1. Always: whenever the Pod terminates, kubelet restarts it regardless of how the containers exited. 2. OnFailure: kubelet restarts the container only if the Pod terminated with a non-zero exit code; if a container exits normally (exit code 0), kubelet does not restart it. 3. Never: after the Pod terminates, kubelet reports the exit code to the master and does not restart the Pod.
spec.nodeSelector Object Node label selector, given as key: value pairs
spec.imagePullSecrets Object Name of the secret to use when pulling images, given in name: secretkey format
spec.hostNetwork Boolean Whether to use host networking; default false. When set to true the Pod uses the host's network instead of the docker bridge, and a second replica cannot be started on the same host.
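A short sketch showing where these Pod-level fields sit (the secret name and label are placeholders):

spec:
  restartPolicy: OnFailure
  hostNetwork: false
  nodeSelector:
    disktype: ssd                  # schedule only onto nodes carrying this label
  imagePullSecrets:
  - name: my-registry-secret       # placeholder secret name
  containers:
  - name: demo
    image: nginx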



kubectl explain

When writing JSON or YAML we may not know how a particular field should be written. We can use the explain command to look up what each field does and how to write it.

For example, when creating a Pod we may not know how to define its parameters, which ones are mandatory, and how each should be written:

[root@master ~]# kubectl explain pod
KIND:     Pod
VERSION:  v1

DESCRIPTION:
     Pod is a collection of containers that can run on a host. This resource is
     created by clients and scheduled onto hosts.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec <Object>
     Specification of the desired behavior of the pod. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

   status       <Object>
     Most recently observed status of the pod. This data may not be up to date.
     Populated by the system. Read-only. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

# The Description above is a brief summary of the resource object; Fields describes each of its fields

This tells us that apiVersion, kind, metadata and spec need to be defined in the YAML. Fields whose type is Object can be explored further, for example metadata:

[root@master ~]# kubectl explain pod.metadata
KIND:     Pod
VERSION:  v1

RESOURCE: metadata <Object>

DESCRIPTION:
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

     ObjectMeta is metadata that all persisted resources must have, which
     includes all objects users must create.

FIELDS:
   annotations  <map[string]string>
     Annotations is an unstructured key value map stored with a resource that
     may be set by external tools to store and retrieve arbitrary metadata. They
     are not queryable and should be preserved when modifying objects. More
     info: http://kubernetes.io/docs/user-guide/annotations

   clusterName  <string>
     The name of the cluster which the object belongs to. This is used to
     distinguish resources with same name and namespace in different clusters.
     This field is not set anywhere right now and apiserver is going to ignore
     it if set in create or update request.

   creationTimestamp    <string>
     CreationTimestamp is a timestamp representing the server time when this
     object was created. It is not guaranteed to be set in happens-before order
     across separate operations. Clients may not set this value. It is
     represented in RFC3339 form and is in UTC.

     Populated by the system. Read-only. Null for lists. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   deletionGracePeriodSeconds   <integer>
     Number of seconds allowed for this object to gracefully terminate before it
     will be removed from the system. Only set when deletionTimestamp is also
     set. May only be shortened. Read-only.

   deletionTimestamp    <string>
     DeletionTimestamp is RFC 3339 date and time at which this resource will be
     deleted. This field is set by the server when a graceful deletion is
     requested by the user, and is not directly settable by a client. The
     resource is expected to be deleted (no longer visible from resource lists,
     and not reachable by name) after the time in this field, once the
     finalizers list is empty. As long as the finalizers list contains items,
     deletion is blocked. Once the deletionTimestamp is set, this value may not
     be unset or be set further into the future, although it may be shortened or
     the resource may be deleted prior to this time. For example, a user may
     request that a pod is deleted in 30 seconds. The Kubelet will react by
     sending a graceful termination signal to the containers in the pod. After
     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)
     to the container and after cleanup, remove the pod from the API. In the
     presence of network partitions, this object may still exist after this
     timestamp, until an administrator or automated process can determine the
     resource is fully terminated. If not set, graceful deletion of the object
     has not been requested.

     Populated by the system when a graceful deletion is requested. Read-only.
     More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   finalizers   <[]string>
     Must be empty before the object is deleted from the registry. Each entry is
     an identifier for the responsible component that will remove the entry from
     the list. If the deletionTimestamp of the object is non-nil, entries in
     this list can only be removed. Finalizers may be processed and removed in
     any order. Order is NOT enforced because it introduces significant risk of
     stuck finalizers. finalizers is a shared field, any actor with permission
     can reorder it. If the finalizer list is processed in order, then this can
     lead to a situation in which the component responsible for the first
     finalizer in the list is waiting for a signal (field value, external
     system, or other) produced by a component responsible for a finalizer later
     in the list, resulting in a deadlock. Without enforced ordering finalizers
     are free to order amongst themselves and are not vulnerable to ordering
     changes in the list.

   generateName <string>
     GenerateName is an optional prefix, used by the server, to generate a
     unique name ONLY IF the Name field has not been provided. If this field is
     used, the name returned to the client will be different than the name
     passed. This value will also be combined with a unique suffix. The provided
     value has the same validation rules as the Name field, and may be truncated
     by the length of the suffix required to make the value unique on the
     server.

     If this field is specified and the generated name exists, the server will
     NOT return a 409 - instead, it will either return 201 Created or 500 with
     Reason ServerTimeout indicating a unique name could not be found in the
     time allotted, and the client should retry (optionally after the time
     indicated in the Retry-After header).

     Applied only if Name is not specified. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency

   generation   <integer>
     A sequence number representing a specific generation of the desired state.
     Populated by the system. Read-only.

   labels       <map[string]string>
     Map of string keys and values that can be used to organize and categorize
     (scope and select) objects. May match selectors of replication controllers
     and services. More info: http://kubernetes.io/docs/user-guide/labels

   managedFields        <[]Object>
     ManagedFields maps workflow-id and version to the set of fields that are
     managed by that workflow. This is mostly for internal housekeeping, and
     users typically shouldn't need to set or understand this field. A workflow
     can be the user's name, a controller's name, or the name of a specific
     apply path like "ci-cd". The set of fields is always in the version that
     the workflow used when modifying the object.

   name <string>
     Name must be unique within a namespace. Is required when creating
     resources, although some resources may allow a client to request the
     generation of an appropriate name automatically. Name is primarily intended
     for creation idempotence and configuration definition. Cannot be updated.
     More info: http://kubernetes.io/docs/user-guide/identifiers#names

   namespace    <string>
     Namespace defines the space within which each name must be unique. An empty
     namespace is equivalent to the "default" namespace, but "default" is the
     canonical representation. Not all objects are required to be scoped to a
     namespace - the value of this field for those objects will be empty.

     Must be a DNS_LABEL. Cannot be updated. More info:
     http://kubernetes.io/docs/user-guide/namespaces

   ownerReferences      <[]Object>
     List of objects depended by this object. If ALL objects in the list have
     been deleted, this object will be garbage collected. If this object is
     managed by a controller, then an entry in this list will point to this
     controller, with the controller field set to true. There cannot be more
     than one managing controller.

   resourceVersion      <string>
     An opaque value that represents the internal version of this object that
     can be used by clients to determine when objects have changed. May be used
     for optimistic concurrency, change detection, and the watch operation on a
     resource or set of resources. Clients must treat these values as opaque and
     passed unmodified back to the server. They may only be valid for a
     particular resource or set of resources.

     Populated by the system. Read-only. Value must be treated as opaque by
     clients and . More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency

   selfLink     <string>
     SelfLink is a URL representing this object. Populated by the system.
     Read-only.

     DEPRECATED Kubernetes will stop propagating this field in 1.20 release and
     the field is planned to be removed in 1.21 release.

   uid  <string>
     UID is the unique in time and space value for this object. It is typically
     generated by the server on successful creation of a resource and is not
     allowed to change on PUT operations.

     Populated by the system. Read-only. More info:
     http://kubernetes.io/docs/user-guide/identifiers#uids

From this we learn about attributes such as name and namespace. The same works for the spec, e.g. kubectl explain pod.spec:

[root@master ~]# kubectl explain pod.spec
KIND:     Pod
VERSION:  v1

RESOURCE: spec <Object>

DESCRIPTION:
     Specification of the desired behavior of the pod. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

     PodSpec is a description of a pod.

FIELDS:
   activeDeadlineSeconds        <integer>
     Optional duration in seconds the pod may be active on the node relative to
     StartTime before the system will actively try to mark it failed and kill
     associated containers. Value must be a positive integer.

   affinity     <Object>
     If specified, the pod's scheduling constraints

   automountServiceAccountToken <boolean>
     AutomountServiceAccountToken indicates whether a service account token
     should be automatically mounted.

   containers   <[]Object> -required-
     List of containers belonging to the pod. Containers cannot currently be
     added or removed. There must be at least one container in a Pod. Cannot be
     updated.

   dnsConfig    <Object>
     Specifies the DNS parameters of a pod. Parameters specified here will be
     merged to the generated DNS configuration based on DNSPolicy.

   dnsPolicy    <string>
     Set DNS policy for the pod. Defaults to "ClusterFirst". Valid values are
     'ClusterFirstWithHostNet', 'ClusterFirst', 'Default' or 'None'. DNS
     parameters given in DNSConfig will be merged with the policy selected with
     DNSPolicy. To have DNS options set along with hostNetwork, you have to
     specify DNS policy explicitly to 'ClusterFirstWithHostNet'.



   enableServiceLinks   <boolean>
     EnableServiceLinks indicates whether information about services should be
     injected into pod's environment variables, matching the syntax of Docker
     links. Optional: Defaults to true.

   ephemeralContainers  <[]Object>
     List of ephemeral containers run in this pod. Ephemeral containers may be
     run in an existing pod to perform user-initiated actions such as debugging.
     This list cannot be specified when creating a pod, and it cannot be
     modified by updating the pod spec. In order to add an ephemeral container
     to an existing pod, use the pod's ephemeralcontainers subresource. This
     field is beta-level and available on clusters that haven't disabled the
     EphemeralContainers feature gate.

   hostAliases  <[]Object>
     HostAliases is an optional list of hosts and IPs that will be injected into
     the pod's hosts file if specified. This is only valid for non-hostNetwork
     pods.

   hostIPC      <boolean>
     Use the host's ipc namespace. Optional: Default to false.

   hostNetwork  <boolean>
     Host networking requested for this pod. Use the host's network namespace.
     If this option is set, the ports that will be used must be specified.
     Default to false.

   hostPID      <boolean>
     Use the host's pid namespace. Optional: Default to false.

   hostname     <string>
     Specifies the hostname of the Pod If not specified, the pod's hostname will
     be set to a system-defined value.

   imagePullSecrets     <[]Object>
     ImagePullSecrets is an optional list of references to secrets in the same
     namespace to use for pulling any of the images used by this PodSpec. If
     specified, these secrets will be passed to individual puller
     implementations for them to use. For example, in the case of docker, only
     DockerConfig type secrets are honored. More info:
     https://kubernetes.io/docs/concepts/containers/images#specifying-imagepullsecrets-on-a-pod

   initContainers       <[]Object>
     List of initialization containers belonging to the pod. Init containers are
     executed in order prior to containers being started. If any init container
     fails, the pod is considered to have failed and is handled according to its
     restartPolicy. The name for an init container or normal container must be
     unique among all containers. Init containers may not have Lifecycle
     actions, Readiness probes, Liveness probes, or Startup probes. The
     resourceRequirements of an init container are taken into account during
     scheduling by finding the highest request/limit for each resource type, and
     then using the max of of that value or the sum of the normal containers.
     Limits are applied to init containers in a similar fashion. Init containers
     cannot currently be added or removed. Cannot be updated. More info:
     https://kubernetes.io/docs/concepts/workloads/pods/init-containers/

   nodeName     <string>
     NodeName is a request to schedule this pod onto a specific node. If it is
     non-empty, the scheduler simply schedules this pod onto that node, assuming
     that it fits resource requirements.

   nodeSelector <map[string]string>
     NodeSelector is a selector which must be true for the pod to fit on a node.
     Selector which must match a node's labels for the pod to be scheduled on
     that node. More info:
     https://kubernetes.io/docs/concepts/configuration/assign-pod-node/

   os   <Object>
     Specifies the OS of the containers in the pod. Some pod and container
     fields are restricted if this is set.

     ......(n lines omitted)

   schedulerName        <string>
     If specified, the pod will be dispatched by specified scheduler. If not
     specified, the pod will be dispatched by default scheduler.

      ......(n lines omitted)

   volumes      <[]Object>
     List of volumes that can be mounted by containers belonging to the pod.
     More info: https://kubernetes.io/docs/concepts/storage/volumes

With this help information we can quickly look up how to use these fields and values.
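kubectl explain takes dot-separated field paths, so you can keep drilling down into nested fields, for example (output omitted):

[root@master ~]# kubectl explain pod.spec.containers
[root@master ~]# kubectl explain pod.spec.containers.resources
[root@master ~]# kubectl explain pod.spec.containers.livenessProbe.httpGet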



Writing an Apache Example in YAML

(Figure: code structure; image omitted)

[root@master ~]# mkdir /manifest   # create a directory to hold the YAML files
[root@master ~]# cd /manifest
[root@master manifest]# vi deploy.yaml 
---
apiVersion: apps/v1                   # API version
kind: Deployment                      # resource type
metadata:                             # metadata object
  name: web                           # object name
  namespace: default                  # namespace
spec: 
  replicas: 3                         # number of replicas
  selector: 
    matchLabels: 
      app: apache
  template: 
    metadata: 
      labels: 
        app: apache                   # custom label
    spec: 
      containers:                     # container definition list
      - image: xm17671855780/httpd:v1.0           # image name
        imagePullPolicy: IfNotPresent              # pull policy: IfNotPresent uses the local image if present, otherwise pulls it
        name: httpd                   # container name
---                                   # when the Pod YAML and the Service YAML are combined in one file, separate them with ---
apiVersion: v1
kind: Service
metadata: 
  name: web
  namespace: default
spec:                                  # spec of the Service
  ports:                               # ports used by the Service
  - port: 8081                         # port the Service exposes on the ClusterIP
    protocol: TCP                      # port protocol, TCP or UDP, default TCP
    targetPort: 80                     # the container port, i.e. the port the underlying service actually listens on
  selector:                            # label selector; Pods matching this label become backends of this Service
    app: apache
  type: NodePort                      

type: NodePort exposes a random port for external access (default range 30000-32767). Because the exposed port is usually high, an nginx reverse proxy is often placed in front of it to serve external traffic on HTTP 80 / HTTPS 443.


[root@master manifest]# kubectl apply -f deploy.yaml  # deploy the Pods with kubectl apply -f <file>
deployment.apps/web created
service/web created

[root@master manifest]# kubectl get pods  # check
NAME                  READY   STATUS    RESTARTS   AGE
web-5b564d848-4b5fz   1/1     Running   0          15s
web-5b564d848-8j92q   1/1     Running   0          15s
web-5b564d848-lr94q   1/1     Running   0          15s

[root@master manifest]# kubectl get pods,svc # check the Service status as well
NAME                      READY   STATUS    RESTARTS   AGE
pod/web-5b564d848-4b5fz   1/1     Running   0          3m8s
pod/web-5b564d848-8j92q   1/1     Running   0          3m8s
pod/web-5b564d848-lr94q   1/1     Running   0          3m8s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          13m
service/web          NodePort    10.97.213.169   <none>        8081:31744/TCP   3m8s

[root@master manifest]# curl 10.97.213.169:8081
This is V1!

Access master_ip:31744 in a browser




Three Ways to Manage Resources

  • kubectl create -f creates the resources; it can only be run once against the same file
  • kubectl apply -f deploys or configures the resources; it can be run repeatedly against the same file
  • kubectl delete -f deletes the resources defined in the specified file
# kubectl create -f

[root@master manifest]#  kubectl create -f deploy.yaml 
deployment.apps/web created
service/web created
[root@master manifest]# kubectl get pods,svc
NAME                       READY   STATUS    RESTARTS   AGE
pod/web-6559c7bc85-l77cf   1/1     Running   0          6s
pod/web-6559c7bc85-n4wpn   1/1     Running   0          6s
pod/web-6559c7bc85-qnsp2   1/1     Running   0          6s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP          169m
service/web          NodePort    10.100.26.47   <none>        8081:31738/TCP   6s

[root@master manifest]# curl 10.100.26.47:8081
This is V1!


[root@master manifest]#  kubectl create -f deploy.yaml      # running the command again fails: the resources already exist
Error from server (AlreadyExists): error when creating "deploy.yaml": deployments.apps "web" already exists
Error from server (AlreadyExists): error when creating "deploy.yaml": services "web" already exists


# kubectl apply -f 

[root@master manifest]# kubectl apply -f deploy.yaml 
deployment.apps/web created
service/web created

[root@master manifest]# kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
web-5b564d848-4b5fz   1/1     Running   0          15s
web-5b564d848-8j92q   1/1     Running   0          15s
web-5b564d848-lr94q   1/1     Running   0          15s

[root@master ~]# kubectl get pods,svc
NAME                      READY   STATUS    RESTARTS   AGE
pod/web-5b564d848-4b5fz   1/1     Running   0          149m
pod/web-5b564d848-8j92q   1/1     Running   0          149m
pod/web-5b564d848-lr94q   1/1     Running   0          149m

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          160m
service/web          NodePort    10.97.213.169   <none>        8081:31744/TCP   149m

[root@master ]# curl 10.97.213.169:8081
This is V1!

[root@master /]# cd /manifest/
[root@master manifest]# vi deploy.yaml
spec:
      containers:
      - image: xm17671855780/httpd:v2.0  # change the image
        imagePullPolicy: IfNotPresent
        name: httpd

[root@master manifest]# kubectl apply -f deploy.yaml   # apply again to roll out the change
deployment.apps/web configured
service/web unchanged

[root@master manifest]# kubectl get pods,svc
NAME                       READY   STATUS    RESTARTS   AGE
pod/web-6559c7bc85-5blmn   1/1     Running   0          50s
pod/web-6559c7bc85-h4whr   1/1     Running   0          47s
pod/web-6559c7bc85-rj2xp   1/1     Running   0          44s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
service/kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP          164m
service/web          NodePort    10.97.213.169   <none>        8081:31744/TCP   154m

[root@master manifest]# curl 10.97.213.169:8081  # the content has changed
This is V2!


# kubectl delete -f       # deletes only the resources that deploy.yaml created

[root@master manifest]# kubectl delete -f deploy.yaml 
deployment.apps "web" deleted
service "web" deleted

[root@master manifest]# kubectl get pods,svc
NAME                       READY   STATUS        RESTARTS   AGE
pod/web-6559c7bc85-5blmn   1/1     Terminating   0          2m32s
pod/web-6559c7bc85-h4whr   1/1     Terminating   0          2m29s
pod/web-6559c7bc85-rj2xp   1/1     Terminating   0          2m26s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   166m

[root@master manifest]# kubectl get pods
No resources found in default namespace.




Namespaces

Kubernetes supports multiple virtual clusters backed by the same physical cluster. These virtual clusters are called namespaces. A namespace provides a scope for names: resource names must be unique within a namespace but not across namespaces, and namespaces cannot be nested inside one another.

[root@master ~]# kubectl create namespace mysapce     # create a namespace
namespace/mysapce created

[root@master ~]# kubectl get namespaces               # list namespaces
NAME              STATUS   AGE
default           Active   2d8h
kube-node-lease   Active   2d8h
kube-public       Active   2d8h
kube-system       Active   2d8h
mysapce           Active   16s
[root@master ~]# kubectl delete namespaces mysapce    # delete the namespace
namespace "mysapce" deleted

[root@master ~]# kubectl get namespaces 
NAME              STATUS   AGE
default           Active   2d8h
kube-node-lease   Active   2d8h
kube-public       Active   2d8h
kube-system       Active   2d8h


# A namespace can of course also be created from a YAML file
[root@master ~]# vi my-namespace.yaml 
apiVersion: v1
kind: Namespace
metadata: 
  name: red-test
  labels:
    name: red-test-v1

[root@master ~]# kubectl create -f my-namespace.yaml   # create it from the specified file with kubectl create -f
namespace/red-test created

[root@master ~]# kubectl get namespaces
NAME              STATUS   AGE
default           Active   2d8h
kube-node-lease   Active   2d8h
kube-public       Active   2d8h
kube-system       Active   2d8h
red-test          Active   14s



Health Checks

A health check is a simple way for the system to know whether an instance of your application is working. If an application instance is no longer working, other services should not access it or send requests to it; instead, requests should go to an instance that is ready, or be retried later. The system should also be able to bring your application back to a healthy state.

Strong self-healing is an important feature of container orchestration engines such as Kubernetes. The default self-healing behaviour is to automatically restart failed containers. Beyond that, users can use the liveness and readiness probe mechanisms to set up finer-grained health checks and thereby achieve:

  • Zero-downtime deployments.
  • Avoiding the rollout of broken images.
  • Safer rolling upgrades.



Probe Types



Liveness probe

A liveness probe lets Kubernetes know whether your application is alive or dead. If it is alive, Kubernetes leaves it alone. If it is dead, Kubernetes deletes the Pod and starts a new one to replace it.



Readiness probe

A readiness probe lets Kubernetes know when your application is ready to serve traffic. Kubernetes only lets a Service send traffic to the Pod once the readiness probe passes. If the readiness probe starts failing, Kubernetes stops sending traffic to that Pod until it passes again. It decides whether the container is in the Ready state: once ready, the Pod can accept requests; if it is unhealthy, the Pod is removed from the Service's endpoint list.
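As a sketch, a Pod that defines both probes could look like this (the paths, ports and timings are illustrative only):

apiVersion: v1
kind: Pod
metadata:
  name: probe-demo
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:             # restart the container if this keeps failing
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10
    readinessProbe:            # take the Pod out of Service endpoints while this fails
      tcpSocket:
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 5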



Examples



haproxy load balancing



nginx, httpd

Write the rss.yaml file

[root@master haproxy]# vim rss.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rs1
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rs1
  template:
    metadata:
      labels:
        app: rs1
    spec:
      containers:
      - image: xm17671855780/nginx:v0.2
        imagePullPolicy: IfNotPresent
        name: rs1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rs2
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rs2
  template:
    metadata:
      labels:
        app: rs2
    spec:
      containers:
      - image: xm17671855780/file_httpd:v2.1
        imagePullPolicy: IfNotPresent
        name: rs2

---
apiVersion: v1
kind: Service
metadata:
  name: rs1
  namespace: default
spec:
  ports:
  - port: 8080
    protocol: TCP
    targetPort: 80
  selector:
    app: rs1
  type: NodePort
---
apiVersion: v1
kind: Service
metadata:
  name: rs2
  namespace: default
spec:
  ports:
  - port: 8081
    protocol: TCP
    targetPort: 80
  selector:
    app: rs2
  type: NodePort



[root@master haproxy]# kubectl apply -f rss.yaml # create the nginx Pods and Services
deployment.apps/rs1 created
deployment.apps/rs2 created
service/rs1 created
service/rs2 created

[root@master haproxy]# kubectl get pod,svc # check
NAME                          READY   STATUS    RESTARTS   AGE
pod/rs1-7hgki5btf22-ca62m   1/1     Running   0          26s
pod/rs2-43hklpg67d-su56g    1/1     Running   0          30s

NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   100m
service/rs1          ClusterIP   10.105.78.8  <none>        80/TCP    35s
service/rs2          ClusterIP   10.105.78.22 <none>        80/TCP    42s



haproxy

[root@master haproxy]# vi haproxy.yaml 
---
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: haproxy
  name: haproxy
spec:
  restartPolicy: OnFailure
  initContainers:
  - name: cfgfile
    volumeMounts:
    - name: haproxyconfigfile
      mountPath: /tmp
  containers:
  - image: xm17671855780/haproxy:v1.0
    imagePullPolicy: IfNotPresent
    name: haproxy
    livenessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 30
      periodSeconds: 10
    readinessProbe:
      tcpSocket:
        port: 80
      initialDelaySeconds: 30
      periodSeconds: 10
    env:
    - name: RSIP
      value: "rs1 rs2"
---
apiVersion: v1
kind: Service
metadata:
  name: haproxy
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: haproxy
  type: NodePort


[root@master haproxy]# kubectl apply -f haproxy.yaml 
deployment.apps/haproxy created
service/haproxy created

# Check that everything was created
[root@master haproxy]# kubectl get pod,svc
NAME                           READY   STATUS    RESTARTS   AGE
pod/haproxy-6h22bhf7gf-h8sdg   1/1     Running   0          15s
pod/rs1-7hgki5btf22-ca62m      1/1     Running   0          38s
pod/rs2-43hklpg67d-su56g       1/1     Running   0          43s

NAME                 TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
service/haproxy      NodePort    10.91.2.9        <none>      80:31143/TCP   28s
service/rs2          ClusterIP   10.105.78.22     <none>      80/TCP         54s
service/kubernetes   ClusterIP   10.96.0.1        <none>      443/TCP        56m
service/rs1          ClusterIP   10.105.78.8      <none>      80/TCP         183s

Access test


[root@master haproxy]# curl 192.168.58.110:31143
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
......

[root@master haproxy]# curl 192.168.58.110:31143
<html><body><h1>It works!</h1></body></html>



Kubernetes Scheduling

The Kubernetes Scheduler is one of the core components of the Kubernetes control plane. It runs in the control plane and assigns Pods to nodes while balancing resource utilization across them. Once a Pod has been assigned to a node, the kubelet on that node retrieves the Pod definition from the Kubernetes API and creates the resources and containers according to the Pod spec. In other words, the Scheduler runs inside the control plane and distributes workloads across the Kubernetes cluster.


Taking the creation of a single Pod as an example, the scheduling flow is roughly:

1. The user creates a Pod from the command line (creating a Pod directly rather than another workload lets us leave kube-controller-manager out of the picture).
2. kube-apiserver performs admission steps such as object validation, admission control and quota, and writes the object to etcd.
3. kube-apiserver returns the result to the user.
4. At the same time, kube-scheduler keeps watching node and Pod events.
5. kube-scheduler adds Pods whose spec.nodeName is still empty to the scheduling queue and runs a scheduling cycle for them.
6. kube-scheduler binds the Pod to the node with the highest score.
7. kube-apiserver writes the binding information to etcd.
8. The kubelet watches for Pods assigned to its own node and calls the CRI interface to create the Pod.
9. After creating the Pod, the kubelet updates the Pod status and other information and reports it to kube-apiserver.
10. kube-apiserver writes the data.


Note: these stages used to be called predicates and priorities; they are now referred to collectively as filtering and scoring, but the effect is the same.

  • Filtering: the input is all nodes, the output is the nodes that satisfy the pre-selection conditions. kube-scheduler filters out nodes that do not satisfy the filter policies. For example, a node whose resources are insufficient, or which fails a pre-selection condition such as "the node's labels must match the Pod's selector", cannot pass filtering.

  • Scoring: the input is the nodes that passed filtering. A scoring algorithm assigns each of them a score, and the node with the highest score is selected. For example, a node with more free resources and lower load is likely to rank higher.



    Put simply, scheduling answers two questions: 1. Which nodes are candidates? 2. Which one of them fits best?

If no node passes the filtering stage, the Pod stays in the Pending state until a suitable node appears, and the scheduler keeps retrying in the meantime. If several nodes end up with the same score, kube-scheduler picks one of them arbitrarily.



nodeName (direct node assignment)

Kubernetes scheduling is the process of assigning Pods to matching nodes in the cluster. The Scheduler watches newly created Pods and assigns the best node to each of them, based on Kubernetes' scheduling principles and our configuration options. The simplest configuration option is to set nodeName directly in the PodSpec:


Example

[root@master ~]# vi test.yaml 
apiVersion: v1
kind: Pod
metadata:
   name: pod-nodename
spec:
  nodeName: node01  # assign the Pod to this node
  containers:
  - name: nginx
    image: nginx 

[root@master ~]# kubectl get pods -o wide
NAME           READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
pod-nodename   1/1     Running   0          85s     10.244.1.55   node01   <none>           <none>
web            1/1     Running   0          9m15s   10.244.2.44   node02   <none>           <none>

[root@master ~]# kubectl apply -f test.yaml
pod/pod-nodename created

# node01 and node02 each run one Pod now, with similar resource usage; next we assign a Pod to node02
[root@master ~]# kubectl get pods -o wide
NAME            READY   STATUS    RESTARTS   AGE    IP            NODE     NOMINATED NODE   READINESS GATES
pod-nodename    1/1     Running   0          6m7s   10.244.1.55   node01   <none>           <none>
pod-nodename2   1/1     Running   0          22s    10.244.2.45   node02   <none>           <none>
web             1/1     Running   0          13m    10.244.2.44   node02   <none>           <none>

# this shows that the nodeName constraint took effect

The nginx Pod above will run on node01, but nodeName has many limitations that can keep a Pod from running properly, such as unknown node names in the cloud, insufficient node resources, or intermittent node network problems. For that reason, nodeName is best avoided outside of testing or development.



nodeSelector

If we want to run Pods on a specific group of nodes, we can use nodeSelector. We define nodeSelector in the PodSpec as a set of key-value pairs:


Example

[root@master ~]# vi test.yaml 
apiVersion: v1
kind: Pod
metadata:
   name: pod-nodename
spec:
  containers:
  - name: nginx
    image: nginx 
  nodeSelector: 
    disktype: ssd  # run on nodes that carry this label

[root@master ~]# kubectl get nodes --show-labels   # show each node's labels; at this point no node has the disktype=ssd label
NAME     STATUS   ROLES                  AGE    VERSION   LABELS
master   Ready    control-plane,master   4d8h   v1.23.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
node01   Ready    <none>                 4d8h   v1.23.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux
node02   Ready    <none>                 4d8h   v1.23.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux


[root@master ~]# kubectl apply -f test.yaml
pod/pod-nodename created

[root@master ~]# kubectl get pods  # the Pod stays in the Pending state
NAME                     READY   STATUS    RESTARTS   AGE
pod-nodename             0/1     Pending   0          17s

[root@master ~]# kubectl get pods -o wide  # it is not running on any node
NAME                     READY   STATUS    RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
nginx-85b98978db-ztswj   1/1     Running   0          4d8h    10.244.1.56   node01   <none>           <none>
pod-nodename             0/1     Pending   0          8m22s   <none>        <none>   <none>           <none>

# Add the disktype=ssd label to node02
[root@master ~]# kubectl label nodes node02 disktype=ssd
node/node02 labeled

[root@master ~]# kubectl get nodes --show-labels | grep node02
node02   Ready    <none>                 4d8h   v1.23.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux

[root@master ~]# kubectl apply -f test.yaml  # deploy again
pod/pod-nodename created

[root@master ~]# kubectl get pods -o wide  # pod-nodename now runs on node02
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE     NOMINATED NODE   READINESS GATES
nginx-85b98978db-ztswj   1/1     Running   0          4d8h   10.244.1.56   node01   <none>           <none>
pod-nodename             1/1     Running   0          18s    10.244.2.47   node02   <none>           <none>



Node Affinity

Node affinity is a set of constraints defined on a Pod that determines which nodes are suitable for scheduling; affinity rules express hard and soft requirements for the Pod's node assignment. For example, a Pod can be configured to run only on nodes with a GPU, preferably with an NVIDIA_TESLA_V100, for a deep-learning workload. The Scheduler evaluates the rules and finds a suitable node within the defined constraints. Like nodeSelector, node affinity rules work with node labels, but they are more powerful than nodeSelector.

  • Hard requirement (required): must be satisfied
  • Soft requirement (preferred): satisfied if possible

Operators: In, NotIn, Exists, DoesNotExist, Gt (greater than), Lt (less than)

Hard requirement (required): must be satisfied

[root@master ~]# vi test.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype  # the label key
            operator: In   # the label value must be in the list below
            values:
            - ssd            
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent

[root@master ~]# kubectl get nodes --show-labels | grep disktype=ssd  # find which node carries the label
node02   Ready    <none>                 4d9h   v1.23.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux



[root@master ~]# kubectl apply -f test.yaml 
pod/nginx created

[root@master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx                    1/1     Running   0          6s
nginx-85b98978db-ztswj   1/1     Running   0          4d8h

[root@master ~]# kubectl get pods --output=wide # the Pod runs on node02
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE     NOMINATED NODE   READINESS GATES
nginx                    1/1     Running   0          20s    10.244.2.48   node02   <none>           <none>
nginx-85b98978db-ztswj   1/1     Running   0          4d8h   10.244.1.56   node01   <none>           <none>


# What if no node satisfies the rule: will the Pod still be scheduled?
[root@master ~]# kubectl label nodes node02 disktype-  # remove the disktype label from node02
node/node02 unlabeled

[root@master ~]# kubectl get nodes --show-labels | grep disktype=ssd
[root@master ~]# 

[root@master ~]# kubectl get nodes --show-labels | grep node02
node02   Ready    <none>                 4d9h   v1.23.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux

[root@master ~]# kubectl delete -f test.yaml  # delete the old resources
pod "nginx" deleted

[root@master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-85b98978db-ztswj   1/1     Running   0          4d9h

[root@master ~]# kubectl apply -f test.yaml  # redeploy; the Pod is not scheduled onto any node and stays Pending
pod/nginx created

[root@master ~]# kubectl get pods --output=wide
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE     NOMINATED NODE   READINESS GATES
nginx                    0/1     Pending   0          10s    <none>        <none>   <none>           <none>
nginx-85b98978db-ztswj   1/1     Running   0          4d9h   10.244.1.56   node01   <none>           <none>

Soft requirement (preferred): satisfied if possible

[root@master ~]# vi test.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disktype
            operator: In
            values:
            - ssd            
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 3
        preference: 
          matchExpressions:
          - key: app
            operator: In
            values:
            - nginx  
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
    
[root@master ~]# kubectl get nodes --show-labels | grep disktype=ssd   # no node has this label
[root@master ~]# kubectl get nodes --show-labels | grep app=nginx      # no node has this label either
[root@master ~]# kubectl get nodes --show-labels 
NAME     STATUS   ROLES                  AGE    VERSION   LABELS
master   Ready    control-plane,master   4d9h   v1.23.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
node01   Ready    <none>                 4d9h   v1.23.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux
node02   Ready    <none>                 4d9h   v1.23.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux

[root@master ~]# kubectl apply -f test.yaml 
pod/nginx created

[root@master ~]# kubectl get pods -o wide  # nginx was not scheduled; it is not on any node
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE     NOMINATED NODE   READINESS GATES
nginx                    0/1     Pending   0          83s    <none>        <none>   <none>           <none>
nginx-85b98978db-ztswj   1/1     Running   0          4d9h   10.244.1.56   node01   <none>           <none>


# What if we now label node02 with app=nginx?
[root@master ~]# kubectl label nodes node02 app=nginx
node/node02 labeled
[root@master ~]# kubectl get nodes --show-labels | grep app=nginx
node02   Ready    <none>                 4d9h   v1.23.1   app=nginx,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux

[root@master ~]# kubectl delete -f test.yaml 
pod "nginx" deleted
[root@master ~]# kubectl apply -f test.yaml   # create it again; it is still not scheduled onto any node
pod/nginx created
[root@master ~]# kubectl get pods -o wide
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE     NOMINATED NODE   READINESS GATES
nginx                    0/1     Pending   0          3s     <none>        <none>   <none>           <none>
nginx-85b98978db-ztswj   1/1     Running   0          4d9h   10.244.1.56   node01   <none>           <none>

# Now what happens if we add disktype=ssd to node01?
[root@master ~]# kubectl label nodes node01 disktype=ssd
node/node01 labeled

[root@master ~]# kubectl get nodes --show-labels | grep disktype=ssd
node01   Ready    <none>                 4d9h   v1.23.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux

[root@master ~]# kubectl apply -f test.yaml   
pod/nginx created
[root@master ~]# kubectl get pods -o wide  # nginx now runs on node01, confirming that the hard rule is evaluated before the soft one
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE     NOMINATED NODE   READINESS GATES
nginx                    1/1     Running   0          3s     10.244.1.58   node01   <none>           <none>
nginx-85b98978db-ztswj   1/1     Running   0          4d9h   10.244.1.56   node01   <none>           <none>


# If node02 also gets disktype=ssd, which node will nginx run on?
[root@master ~]# kubectl label nodes node02 disktype=ssd
node/node02 labeled

[root@master ~]# kubectl get nodes --show-labels | grep disktype=ssd   # nodes that carry the disktype=ssd label
node01   Ready    <none>                 4d9h   v1.23.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node01,kubernetes.io/os=linux
node02   Ready    <none>                 4d9h   v1.23.1   app=nginx,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux

[root@master ~]# kubectl get nodes --show-labels | grep app=nginx # nodes that carry the app=nginx label
node02   Ready    <none>                 4d9h   v1.23.1   app=nginx,beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=ssd,kubernetes.io/arch=amd64,kubernetes.io/hostname=node02,kubernetes.io/os=linux

[root@master ~]# kubectl delete -f test.yaml 
pod "nginx" deleted
[root@master ~]# kubectl apply -f test.yaml 
pod/nginx created
[root@master ~]# kubectl get pods -o wide    
NAME                     READY   STATUS    RESTARTS   AGE    IP            NODE     NOMINATED NODE   READINESS GATES
nginx                    1/1     Running   0          2s     10.244.2.49   node02   <none>           <none>
nginx-85b98978db-ztswj   1/1     Running   0          4d9h   10.244.1.56   node01   <none>           <none>

# When both conditions are satisfied, nginx ends up on node02



Taints and Tolerations

Not all Kubernetes nodes in a cluster are the same. Some nodes may have special hardware such as GPUs, disks or network capabilities. Likewise, we may need to dedicate some nodes to testing, data protection or particular user groups. We can add taints to a node to repel Pods from it.

  • Node taints

    • A node can carry multiple taints
    • Effect (must not be empty)

      • NoSchedule: only prevents new Pods from being scheduled onto the node
      • PreferNoSchedule: try not to schedule onto this node
      • NoExecute: evicts Pods that have no matching toleration and does not schedule new ones
  • Pod tolerations

    • A Pod can carry multiple tolerations
    • Effect may be empty, which matches every effect

      • The possible values are the same as the taint effects
    • Operator

      • Exists / Equal

So how do we keep Pods from being scheduled onto certain nodes?

Say there is a node called demo-node that has a problem, and we want to keep some Pods off it. We can put a taint on the node; a taint consists of key, value and effect:

  • key is the configured key
  • value is its content
  • effect marks what the taint does

Kubernetes currently has three taint effects:

  • NoSchedule: forbid new Pods from being scheduled onto the node;
  • PreferNoSchedule: try not to schedule Pods onto this node;
  • NoExecute: evict Pods that have no matching toleration and do not schedule new ones onto the node. This policy is very strict, so use it with care.

Check the master node; by default it cannot be scheduled onto

[root@master ~]# kubectl describe node master
......
CreationTimestamp:  Sun, 19 Dec 2021 11:40:19 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  master
......

[root@master ~]# kubectl taint node master node-role.kubernetes.io/master-   # remove the taint
node/master untainted

[root@master ~]# kubectl describe node master
......
CreationTimestamp:  Sun, 19 Dec 2021 11:40:19 +0800
Taints:             <none>
Unschedulable:      false
Lease:
  HolderIdentity:  master
  AcquireTime:     <unset>
......

[root@master ~]# kubectl taint node master node-role.kubernetes.io/master="":NoSchedule   # add the taint back
node/master tainted

[root@master ~]# kubectl describe node master
CreationTimestamp:  Sun, 19 Dec 2021 11:40:19 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false
Lease:
  HolderIdentity:  master
  AcquireTime:     <unset>


Taint examples

NoSchedule (not schedulable)


# Check each node's taints
[root@master ~]# kubectl describe node  master | grep Taints:
Taints:             node-role.kubernetes.io/master:NoSchedule
[root@master ~]# kubectl describe node  node01 | grep Taints:
Taints:             <none>
[root@master ~]# kubectl describe node  node02 | grep Taints:
Taints:             <none>

# Note: a cluster deployed from binaries has no taints by default
# With a kubeadm deployment, the master node comes with a taint

# Add a taint
[root@master ~]# kubectl taint node node01 env_wu=yes:NoSchedule
node/node01 tainted

[root@master ~]# kubectl describe node  node01 | grep Taints:  # check
Taints:             env_wu=yes:NoSchedule

# The format is
# key=value:effect
# Both key and value can be chosen freely

After the NoSchedule taint is added, no new Pods are scheduled onto the node

(Pods already running on the node are not deleted)


# Test
[root@master ~]# kubectl create deployment web --image=nginx  # create a Pod
deployment.apps/web created

[root@master ~]# kubectl scale deployment web --replicas=2 # scale it to two replicas
deployment.apps/web scaled

[root@master ~]# kubectl get pod -o wide   # both Pods land on node02
NAME                     READY   STATUS              RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
web-76b56fd968-mn8sw     1/1     Running             0          2m29s   10.244.2.51   node02   <none>           <none>
web-76b56fd968-n5zdj     1/1     Running             0          2m39s   10.244.2.50   node02   <none>           <none>

# To run Pods on node01 again, remove the taint and scale the Pods out once more
# Removing a taint is the same command with a trailing minus sign "-"
[root@master ~]# kubectl taint node node01 env_wu=yes:NoSchedule-
node/node01 untainted
[root@master ~]# kubectl describe node  node01 | grep Taints:
Taints:             <none>
[root@master ~]# 

PreferNoSchedule

# Remove the deployment created above
[root@master ~]# kubectl delete deployment web  
deployment.apps "web" deleted

[root@master ~]# kubectl taint node node01 env_wu=yes:PreferNoSchedule    # add the taint
node/node01 tainted

[root@master ~]# kubectl create deployment web --image=nginx
deployment.apps/web created

[root@master ~]# kubectl scale deployment web --replicas=50  # create many replicas
deployment.apps/web scaled

[root@master ~]# kubectl get pod -o wide   # when the untainted node runs out of resources, Pods spill over to the tainted one
NAME                     READY   STATUS              RESTARTS   AGE     IP            NODE     NOMINATED NODE   READINESS GATES
nginx-85b98978db-jcn98   1/1     Running             0          12m     10.244.2.53   node02   <none>           <none>
web-76b56fd968-2bb82     0/1     Terminating         0          61s     <none>        node02   <none>           <none>
web-76b56fd968-2fnbp     0/1     Pending             0          4s      <none>        node01   <none>           <none>
web-76b56fd968-2l626     0/1     Terminating         0          2m1s    <none>        node02   <none>           <none>
web-76b56fd968-2z9pv     0/1     Pending             0          4s      <none>        node01   <none>           <none>
web-76b56fd968-44skb     0/1     Pending             0          4s      <none>        node01   <none>           <none>
web-76b56fd968-4pphl     0/1     Pending             0          3s      <none>        node01   <none>           <none>
web-76b56fd968-55jb8     0/1     Terminating         0          61s     <none>        node02   <none>           <none>
web-76b56fd968-58r4z     0/1     Terminating         0          2m1s    <none>        node02   <none>           <none>
web-76b56fd968-5c9hn     0/1     Pending             0          4s      <none>        node01   <none>           <none>

[root@master ~]# kubectl taint node node01 env_wu=yes:PreferNoSchedule-  # remove the taint
node/node01 untainted

[root@master ~]# kubectl describe node  node01 | grep Taints:
Taints:             <none>


Toleration example

# Add a taint to node01
[root@master ~]# kubectl taint node node01 env_wu=yes:NoExecute   # NoExecute evicts Pods without a matching toleration
node/node01 tainted

[root@master ~]# vi rong.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-taint
spec:
  selector:
    matchLabels:
      app: test-taint
  replicas: 3
  template:
    metadata:
      labels:
        app: test-taint
    spec:
      containers:
      - name: nginx
        image: nginx
        
[root@master ~]# kubectl get pod -o wide  # no Pods on node01
NAME                          READY   STATUS              RESTARTS        AGE   IP            NODE     NOMINATED NODE   READINESS GATES
test-taint-6798d496c7-2rfnf   0/1     ContainerCreating   0               9s    <none>        node02   <none>           <none>
test-taint-6798d496c7-2x6j9   0/1     ContainerCreating   0               9s    <none>        node02   <none>           <none>
test-taint-6798d496c7-r66wk   0/1     ContainerCreating   0               9s    <none>        node02   <none>           <none>

# Add a toleration
[root@master ~]# vi ss.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-taint
spec:
  selector:
    matchLabels:
      app: test-taint
  replicas: 3
  template:
    metadata:
      labels:
        app: test-taint
    spec:
      containers:
      - name: nginx
        image: nginx
      tolerations:           # toleration
      - key: "env_wu"        # the key used in the taint
        operator: "Equal"
        value: "yes"         # the value used in the taint
        effect: "NoExecute"  # the taint effect

[root@master ~]# kubectl apply -f ss.yaml
Warning: resource deployments/test-taint is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
deployment.apps/test-taint configured


[root@master ~]# kubectl get pod -o wide
NAME                          READY   STATUS              RESTARTS      AGE     IP            NODE     NOMINATED NODE   READINESS GATES
test-taint-6798d496c7-2x6j9   1/1     Running             0             5m32s   10.244.2.64   node02   <none>           <none>
test-taint-749fbcf99f-jlf42   1/1     Running             0             40s     10.244.1.62   node01   <none>           <none>
test-taint-749fbcf99f-rtrbr   1/1     Running             0             21s     10.244.2.67   node02   <none>           <none>
test-taint-749fbcf99f-v86zc   0/1     ContainerCreating   0             3s      <none>        node01   <none>           <none>



Container Resource Limits

Kubernetes uses two kinds of settings, request and limit, to allocate resources

  • request (resource request): the node that runs the Pod must be able to provide at least this much to run the Pod.
  • limit (resource limit): memory usage may grow while the Pod runs; the limit is the most it may use, i.e. the resource cap.


Resource types

CPU is measured in numbers of cores, memory in bytes.

A container requesting 0.5 CPU asks for half of one CPU. The suffix m expresses thousandths of a core: 100m CPU, 100 millicores and 0.1 CPU all mean the same thing.

Memory units:

  • K, M, G, T, P, E: based on powers of 1000.
  • Ki, Mi, Gi, Ti, Pi, Ei: based on powers of 1024.


Memory resource limit example
[root@master ~]# cat limit.yaml 
apiVersion: v1
kind: Pod
metadata: 
  name: web
spec: 
  containers: 
  - name: web
    image: nginx
    resources: 
      requests: 
        memory: "64Mi"   # request 64Mi
        cpu: "250m"
      limits: 
        memory: "128Mi"  # limit 128Mi
        cpu: "500m"



Using Workload Controllers

Workload controllers are a Kubernetes abstraction: higher-level objects used to deploy and manage Pods.

Common workload controllers:

  • Deployment: stateless application deployment
  • StatefulSet: stateful application deployment
  • DaemonSet: ensures every node runs one copy of a Pod
  • Job: one-off tasks
  • CronJob: scheduled tasks


What controllers do

  • Manage Pod objects
  • Associate with Pods through labels
  • Implement Pod operations such as rolling updates, scaling, replica management and maintaining Pod state




Deployment

Built on top of ReplicaSet, a Deployment provides declarative updates for Pod and ReplicaSet resources, with the following features:

  • Events and status: you can view the detailed progress and status of a Deployment rollout
  • Rollback: if a problem is found after an upgrade, the application can be rolled back to a specified historical revision
  • Revision history: every change to the Deployment object is recorded
  • Pause and resume: every rollout can be paused and resumed at any time
  • Multiple update strategies: Recreate (recreate update) and RollingUpdate (rolling update)

Typical use cases:

  • Websites
  • Microservices
  • APIs

(Figure: Deployment application flow; image omitted)

The Deployment update strategies are described as follows:

RollingUpdate strategy: the number of Pods under the old controller keeps decreasing while the number under the new controller keeps increasing, governed by two properties:

  • maxSurge: how many Pods the total may exceed the desired count by during the upgrade; a number or a percentage.
  • maxUnavailable: how far the number of available Pods (old plus new) may fall below the desired count during the upgrade; a number or a percentage.


Rolling update strategy

spec:
  replicas: 3
  revisionHistoryLimit: 10  # number of historical revisions to keep (based on image revisions)
  selector: 
    matchLabels: 
      app: web
  strategy: 
    rollingUpdate: 
      maxSurge: 25%        # Pods started during the update, 1/4 of the total
      maxUnavailable: 25%  # Pods shut down during the update, 1/4 of the total
    type: RollingUpdate


Note

In practice, maxSurge: 25% and maxUnavailable: 25% are usually kept at 1:1 (shut down as many as you start) to keep the business service available during the update.

Recreate strategy: new Pods are created only after the old Pods have been deleted. Use this strategy if your application cannot serve multiple versions at the same time and the old version has to be stopped completely before the new one starts; the trade-off is a short period of unavailability.
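For comparison, a minimal sketch of a Deployment spec that uses the Recreate strategy (only the relevant fields are shown):

spec:
  replicas: 3
  strategy:
    type: Recreate        # delete all old Pods first, then create the new ones
  selector:
    matchLabels:
      app: web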



Deployment实现滚动升级
[root@master ~]# vi test.yaml 
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: web
spec: 
  replicas: 4
  strategy: 
    rollingUpdate: 
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  selector: 
    matchLabels: 
      app: httpd
  template: 
    metadata: 
      labels: 
        app: httpd
    spec: 
      containers: 
      - name: httpd
        image: xm17671855780/httpd:v1.0
        imagePullPolicy: IfNotPresent

# 先清除旧资源,再重新 apply 上面的 test.yaml,运行一组 v1 版本的 Pod
[root@master ~]# kubectl delete -f test.yaml 
deployment.apps "web" deleted

[root@master ~]# kubectl get pods   //此时都是v1版本
NAME                     READY   STATUS    RESTARTS   AGE
web-66dc445c66-5sd42     1/1     Running   0          50s
web-66dc445c66-8fjld     1/1     Running   0          50s
web-66dc445c66-8wdc9     1/1     Running   0          50s
web-66dc445c66-fc9lf     1/1     Running   0          50s


# 更改版本镜像v1-v2
[root@master ~]# vi test.yaml 
......
containers: 
      - name: httpd
        image: xm17671855780/httpd:v2.0
        imagePullPolicy: IfNotPresent
......

# 应用部署
[root@master ~]# kubectl apply -f test.yaml 
deployment.apps/web configured

[root@master ~]# kubectl get pods 
# 此时共有 5 个 Pod 处于创建或运行状态,另有 1 个正在关闭。
# 策略中 replicas: 4(期望副本总数),maxSurge=25% 和 maxUnavailable=25% 各相当于 1 个 Pod:更新时最多可多启动 1 个、同时最多关闭 1 个,所以创建/运行中的 Pod 为 4+1 个。

NAME                     READY   STATUS              RESTARTS   AGE
web-66dc445c66-8gkcv     0/1     ContainerCreating   0          1s
web-66dc445c66-gv6gx     0/1     ContainerCreating   0          1s
web-689dbd69c9-l6t69     1/1     Running             0          2m25s
web-689dbd69c9-p8xtq     1/1     Running             0          2m25s
web-689dbd69c9-prdls     1/1     Running             0          2m23s

# 4 的 1/4 为 1,所以更新时一次关闭了 1 个
web-689dbd69c9-xbtwl     1/1     Terminating         0          2m22s  

# 修改占比,将占比改为50%(1/2)
[root@master ~]# vi test.yaml 
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: web
spec: 
  replicas: 4
  strategy: 
    rollingUpdate: 
      maxSurge: 50%        # 修改此处
      maxUnavailable: 50%  # 修改此处
    type: RollingUpdate
    ......此处省略n行
    spec:
      containers:
      - name: httpd
        image: xm17671855780/httpd:v1.0  # 修改版本
        imagePullPolicy: IfNotPresent

#  maxSurge: 50%       更新时,最多可额外启动的Pod数量,占期望总数的1/2
#  maxUnavailable: 50%  更新时,最多可关闭的Pod数量,占期望总数的1/2
#  replicas: 4          期望副本数量

# 重新应用
[root@master ~]# kubectl apply -f test.yaml 
deployment.apps/web configured
# 原来正在运行 4 个
# 更新时最多可新增 4*50%=2 个 Pod(创建/运行中的 Pod 最多 4+2=6 个),同时最多可关闭 4*50%=2 个 Pod

[root@master ~]# kubectl get pods
NAME                     READY   STATUS              RESTARTS   AGE
web-66dc445c66-8gkcv     1/1     Terminating         0          13m
web-66dc445c66-gv6gx     1/1     Terminating         0          13m
web-66dc445c66-mqqdk     1/1     Running             0          13m
web-66dc445c66-rgqqz     1/1     Running             0          13m
web-689dbd69c9-5mkm6     0/1     ContainerCreating   0          1s
web-689dbd69c9-g4px6     0/1     ContainerCreating   0          1s
web-689dbd69c9-hvh86     0/1     ContainerCreating   0          1s
web-689dbd69c9-jc57n     0/1     ContainerCreating   0          1s
# 总共8个

# 等待更新完毕
[root@master ~]# kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
web-689dbd69c9-5mkm6     1/1     Running   0          13m
web-689dbd69c9-g4px6     1/1     Running   0          13m
web-689dbd69c9-hvh86     1/1     Running   0          13m
web-689dbd69c9-jc57n     1/1     Running   0          13m
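补充一个小技巧(非原文步骤):滚动更新过程中可以用 kubectl rollout status 阻塞等待并观察更新进度,更新完成后命令会自动退出:

[root@master ~]# kubectl rollout status deployment/web
# 更新完成后通常会输出类似 deployment "web" successfully rolled out 的提示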



版本记录
  • kubectl rollout history deployment/web # 查看历史发布版本

  • kubectl rollout undo deployment/web # 回滚上一个版本

  • kubectl rollout undo deployment/web --to-revision=2 # 回滚到指定历史版本

# 先清除之前创建的资源,将版本历史清空
[root@master ~]# kubectl delete -f test.yaml 
deployment.apps "web" deleted

[root@master ~]# cat test.yaml 
apiVersion: apps/v1
kind: Deployment
metadata: 
  name: web
spec: 
  replicas: 4
  revisionHistoryLimit: 10  # 保留的历史版本数量
  strategy: 
    rollingUpdate: 
      maxSurge: 50%
      maxUnavailable: 50%
    type: RollingUpdate
  selector: 
    matchLabels: 
      app: httpd
  template: 
    metadata: 
      labels: 
        app: httpd
    spec: 
      containers: 
      - name: httpd
        image: xm17671855780/httpd:v2.0
        imagePullPolicy: IfNotPresent


# 然后重新部署新资源,此时为1版本
[root@master ~]# kubectl apply -f test.yaml 
deployment.apps/web created


[root@master ~]# vi test.yaml   //变更版本
......
      containers: 
      - name: httpd
        image: xm17671855780/httpd:v1.0  # 修改此处
        imagePullPolicy: IfNotPresent
......    


[root@master ~]# kubectl apply -f test.yaml    //重新应用,此时为2版本
deployment.apps/web configured

[root@master ~]# kubectl rollout history deployment/web  //查看版本,发现有1,2两个版本
deployment.apps/web 
REVISION  CHANGE-CAUSE
1         <none>  //旧版本,上一个版本
2         <none>  //所在版本
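CHANGE-CAUSE 显示为 <none> 是因为没有记录变更原因。作为补充示例(非原文步骤),可以在每次变更后给 Deployment 加上 kubernetes.io/change-cause 注解,之后 rollout history 就会在该列显示说明:

[root@master ~]# kubectl annotate deployment/web kubernetes.io/change-cause="update image to v1.0"
# 再次执行 kubectl rollout history deployment/web 时,对应版本的 CHANGE-CAUSE 列即为上面的说明文字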

[root@master ~]# vi test.yaml 
[root@master ~]# kubectl apply -f test.yaml 
deployment.apps/web configured
[root@master ~]# kubectl rollout history deployment/web
deployment.apps/web 
REVISION  CHANGE-CAUSE
2         <none>   //上一个版本
3         <none>  //所在版本


# 虽然 revisionHistoryLimit 为 10,版本 1 却没有继续保留,而是被新的版本号顶替。
# 原因是这里只在两个镜像之间来回切换:当 Pod 模板与某个旧 ReplicaSet 完全一致时,Deployment 会复用该 ReplicaSet,并把它的版本号更新为最新,而不是新增一条历史记录。

# 回滚到上一个版本
[root@master ~]# kubectl rollout undo deployment/web
deployment.apps/web rolled back

[root@master ~]# kubectl rollout history deployment/web
deployment.apps/web 
REVISION  CHANGE-CAUSE
3         <none>
4         <none>

# 回滚到指定版本
[root@master ~]# kubectl rollout undo deploy/web --to-revision 3
deployment.apps/web rolled back

[root@master ~]# kubectl rollout history deployment/web
deployment.apps/web 
REVISION  CHANGE-CAUSE
4         <none>
5         <none>



ReplicaSet

说到ReplicaSet对象,得先说说ReplicationController(简称为RC)。在旧版本的Kubernetes中,只有ReplicationController对象。它的主要作用是确保Pod以你指定的副本数运行,即如果有容器异常退出,会自动创建新的 Pod 来替代;而异常多出来的容器也会自动回收。



ReplicationController、ReplicaSet、Deployment的协调流程

(图:ReplicationController、ReplicaSet、Deployment 的协调流程)



ReplicationController、ReplicaSet、Deployment的区别

ReplicaSet是新一代的ReplicationController, 并推荐使用它替代ReplicationController来复制和管理 Pod。

同时,在使用 Deployment 时,实际的 Pod是由Deployment的Replicaset创建和管理的,而不是由Deployment直接创建和管理的。


  • Deployment每次发布都会创建一个RS作为记录,用于实现回滚


实例

查看RS记录

[root@master ~]# kubectl get rs
NAME             DESIRED   CURRENT   READY   AGE
web-66dc445c66   0         0         0       32s
web-689dbd69c9   4         4         3       3s

版本对应RS记录

[root@master ~]# kubectl rollout history deployment web
deployment.apps/web 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

[root@master ~]# vi test.yaml   //变更版本
......
      containers: 
      - name: httpd
        image: xm17671855780/httpd:v2.0  # 修改此处
        imagePullPolicy: IfNotPresent
......    

[root@master ~]# kubectl apply -f test.yaml 
deployment.apps/web configured

# 每次版本变更都对应一个 RS:旧版本的 RS 被缩容为 0(仍保留,用于回滚),新版本由对应的 RS 接管
[root@master ~]# kubectl get rs
NAME             DESIRED   CURRENT   READY   AGE
web-66dc445c66   4         4         4       2m7s
web-689dbd69c9   0         0         0       98s



DaemonSet

DaemonSet控制器确保集群中的每一个Node上只运行一个特定的Pod副本,用于实现系统级的后台任务,同样具有标签选择器;也可以只让满足条件的部分Node运行该Pod副本,比如只在具有ssd存储的Node上部署监控(示例片段见下文)。

常用来部署一些集群的日志、监控或者其他系统管理应用。典型的应用包括:

  • 日志收集,比如fluentd、logstash等。
  • 系统监控,比如Prometheus Node Exporter、collectd、New Relic agent、Ganglia gmond等。
  • 系统程序,比如kube-proxy、kube-dns、Glusterd、Ceph等。
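如果只想让 DaemonSet 运行在满足条件的部分节点上(例如前面提到的只监控带 ssd 存储的节点),可以结合节点标签和 nodeSelector。下面是一个示意片段,其中节点名 node01、标签 disktype=ssd 和镜像均为假设:

# 先给目标节点打标签(节点名仅为示例)
[root@master ~]# kubectl label nodes node01 disktype=ssd

# DaemonSet 的 Pod 模板中增加 nodeSelector
    spec:
      nodeSelector:
        disktype: ssd      # 只调度到带该标签的节点
      containers:
      - name: exporter
        image: prom/node-exporter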

[root@master ~]# kubectl delete deploy/nginx  //清除资源
deployment.apps "nginx" deleted

[root@master ~]# kubectl get pods
NAME                   READY   STATUS        RESTARTS   AGE
web-689dbd69c9-2stv8   1/1     Terminating   0          6m56s
web-689dbd69c9-7plnz   1/1     Terminating   0          6m56s
web-689dbd69c9-cmngk   0/1     Terminating   0          6m56s

[root@master ~]# kubectl get pods
No resources found in default namespace.

# 修改test.yaml文件
[root@master ~]# vi test.yaml 
---
apiVersion: apps/v1
kind: DaemonSet            # 指定类型为DaemonSet
metadata:
  name: filebeat
  namespace: kube-system   # 指定系统命名空间
spec:
  selector:
    matchLabels:
      name: filebeat
  template:
    metadata:
      labels:
        name: filebeat
    spec:
      containers:                               
      - name: log
        image: elastic/filebeat:7.16.2
        imagePullPolicy: IfNotPresent


# DaemonSet 名称为 filebeat,生成的 Pod 名以 filebeat- 为前缀(容器名为 log)

[root@master ~]# kubectl apply -f test.yaml 
daemonset.apps/filebeat created

[root@master ~]# kubectl get pods -n kube-system -o wide | grep filebeat
filebeat-crlnx                   1/1     Running   0               8m1s   10.244.2.119     node02   <none>           <none>
filebeat-hplmj                   1/1     Running   0               8m1s   10.244.1.64      node01   <none>           <none>

# 可以看到 Pod 在 node01 和 node02 上各运行了一个
# 当需要在每个节点上都运行同一个容器(公共任务)时,就可以使用 DaemonSet


Job控制器

Job控制器用于运行一次性任务的Pod对象:容器中的进程正常运行结束后不会被重启,Pod对象会被置于”Completed”(完成)状态;若容器中的进程因错误而终止,则按重启策略配置决定是否重启;未运行完成的Pod对象因所在节点故障而意外终止后,会被重新调度。Job控制器的Pod对象状态转换如下图所示:

(图:Job 控制器的 Pod 状态转换)

Job的工作流程及特点:作业需要串行运行N次,直至满足期望的完成次数。如下图所示,这种Job可以理解为并行度为1的作业执行方式,某一时刻仅存在一个Pod资源对象。

(图:串行 Job 的执行流程)
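Job 的串行/并行行为由 spec.completions(期望完成次数)和 spec.parallelism(并行度)控制。下面是一个示意片段(字段取值仅为举例,非原文内容):completions 为 5、parallelism 为 1 时,Pod 会一个接一个地串行运行 5 次:

apiVersion: batch/v1
kind: Job
metadata:
  name: serial-job        # 名称仅为示例
spec:
  completions: 5          # 期望成功完成 5 次
  parallelism: 1          # 同一时刻只运行 1 个 Pod
  template:
    spec:
      containers:
      - name: work
        image: busybox
        command: ["sh","-c","echo working; sleep 5"]
      restartPolicy: Never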

# 编写一个job.yaml文件
[root@master ~]# vi job.yaml 
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  template: 
    spec: 
      containers:
      - name: pi
        image: perl
        command: ["perl","-Mbignum=bpi","-wle","print bpi(2000)"]
      restartPolicy: Never
  backoffLimit: 4  # 失败后的重试次数


[root@master ~]# kubectl apply -f job.yaml  //部署
job.batch/pi created

[root@master ~]# kubectl get pods -o wide  //此时为正在创建状态
NAME       READY   STATUS              RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
pi-dtgj6   0/1     ContainerCreating   0          8s    <none>   node01   <none>           <none>

[root@master ~]# kubectl get pods -o wide  //进程运行完成后变为 Completed 状态,不再运行
NAME       READY   STATUS      RESTARTS   AGE    IP            NODE     NOMINATED NODE   READINESS GATES
pi-dtgj6   0/1     Completed   0          5m7s   10.244.1.65   node01   <none>           <none>
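Job 完成后 Pod 不会被删除,可以直接查看其日志来获取计算结果。下面的命令为补充示例,实际输出是 bpi(2000) 计算出的 2000 位圆周率,此处省略:

[root@master ~]# kubectl logs pi-dtgj6
3.14159265358979...   # 输出省略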



CronJob

CronJob控制器用于执行周期性任务作业,类似于Linux操作系统中的周期性任务计划(crontab),控制作业运行的时间点及重复运行的方式。

常用配置项如下(列表后附一个组合使用的示例片段):

  • jobTemplate:Job控制器模板。
  • schedule:Cron格式的作业调度运行的时间点。
  • concurrencyPolicy:并发执行策略,用于定义前一次作业尚未完成时如何执行下一次任务。默认是Allow,即允许前后Job(甚至属于同一个CronJob的多个Job)同时运行;设置为Forbid则禁止前后两个Job同时运行,如果前一个尚未结束,后一个会被跳过;设置为Replace,则后一个Job会替代前一个Job,即终止前一个、启动后一个。
  • failedJobsHistoryLimit:为失败的任务执行保留的历史记录数,默认是1。
  • successfulJobsHistoryLimit:为成功的任务执行保留的历史记录数,默认是3。
  • startingDeadlineSeconds:因各种原因错过调度时间点后,允许延迟启动作业的超时时长(秒);超过该时限仍未启动的作业会被记为一次错过(错误)。
  • suspend:是否挂起后续的任务执行,默认是false。
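下面把上述几个配置项组合成一个示意片段(取值仅为举例,非原文内容):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: demo-cron                    # 名称仅为示例
spec:
  schedule: "*/5 * * * *"            # 每 5 分钟调度一次
  concurrencyPolicy: Forbid          # 上一次作业未结束则跳过本次
  successfulJobsHistoryLimit: 3      # 保留 3 条成功记录
  failedJobsHistoryLimit: 1          # 保留 1 条失败记录
  startingDeadlineSeconds: 60        # 错过调度点后最多允许延迟 60 秒启动
  suspend: false                     # 不挂起后续任务
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: demo
            image: busybox
            command: ["sh","-c","date"]
          restartPolicy: OnFailure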


注意:书写 schedule 表达式时要注意空格,每个 * 之间都要有一个空格。



实例
# 编写cronjob.yaml 文件
[root@master ~]# vi cronjob.yaml 
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"  //每分钟进行一次
  jobTemplate: 
    spec: 
      template: 
        spec: 
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date;echo Hello aliang
          restartPolicy: OnFailure

# 部署
[root@master ~]# kubectl apply -f cronjob.yaml 
cronjob.batch/hello created

[root@master ~]# kubectl get pods //完成,等待任务执行
NAME       READY   STATUS      RESTARTS   AGE
pi-dtgj6   0/1     Completed   0          9m42s

# 记录时间
[root@master ~]# date
2021年 12月 24日 星期五 18:37:12 CST

[root@master ~]# kubectl get pods  //此时已经有一个任务了
NAME                   READY   STATUS              RESTARTS   AGE
hello-27339037-qnvvx   0/1     ContainerCreating   0          14s
pi-dtgj6               0/1     Completed           0          10m

# 我们再等待1分钟查看,任务是否增加
[root@master ~]# date
2021年 12月 24日 星期五 18:38:01 CST

[root@master ~]# kubectl get pods
NAME                   READY   STATUS              RESTARTS   AGE
hello-27339037-qnvvx   0/1     Completed           0          63s
hello-27339038-9k2mj   0/1     ContainerCreating   0          3s
pi-dtgj6               0/1     Completed           0          11m

# 注意:batch/v1beta1 CronJob 在 v1.21+ 中被弃用,在 v1.25+ 中不可用;请使用 batch/v1 CronJob。


时间表从左到右包含以下五个条目:

  • 分钟
  • 小时
  • 每月中的第几天
  • 月份
  • 星期几
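结合上面的字段顺序,补充几个 schedule 表达式的读法示例(仅为示意):

# 格式:分钟 小时 日 月 星期
schedule: "*/1 * * * *"   # 每分钟执行一次(即上文示例)
schedule: "0 3 * * *"     # 每天 03:00 执行
schedule: "30 8 * * 1"    # 每周一 08:30 执行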



版权声明:本文为xumneg111原创文章,遵循 CC 4.0 BY-SA 版权协议,转载请附上原文出处链接和本声明。