Preface
- Here I'm sharing my notes on Pod scheduling in K8s.
- This post covers:
  - A brief introduction to the kube-scheduler component
  - Pod scheduling (selectors, specifying a node, node affinity)
  - Node cordon and drain markings
  - Node taint markings and the pod tolerations definition
- Prerequisites:
  - Basic knowledge of K8s
  - Familiarity with the pod and deploy resource objects and their YAML definition files
  - Knowledge of common kubectl commands
- Corrections are welcome wherever my understanding falls short.
A brief introduction to the Scheduler component
What is the Kubernetes Scheduler?
As is well known, the Kubernetes Scheduler is the component responsible for Pod scheduling; it is an important functional module that runs on the master node of a k8s cluster.
The job of the Kubernetes Scheduler is to take each Pod awaiting scheduling (a Pod newly created through the API, a Pod created by the Controller Manager to top up replicas, and so on) and, according to the scheduling algorithms and scheduling policies, bind it (Binding) to a suitable Node in the cluster, writing the binding information into etcd.
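To make the binding concrete: what the scheduler ultimately writes is a Binding object against the pod's binding subresource. A minimal sketch of what that API object looks like (the pod and node names here are purely illustrative):
apiVersion: v1
kind: Binding
metadata:
  name: podnode2 # name of the pod being bound (illustrative)
  namespace: default
target:
  apiVersion: v1
  kind: Node
  name: vms83.liruilongs.github.io # the Node the scheduler picked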
The whole scheduling process involves three objects: the list of Pods awaiting scheduling, the list of available Nodes, and the scheduling algorithms and policies. The scheduler runs its algorithms over each Pod in the pending list and selects the most suitable Node for it from the Node list.
Afterwards, the kubelet on the target node, watching through the APIServer, picks up the Pod binding event produced by the Kubernetes Scheduler, obtains the corresponding Pod manifest, pulls the image, and starts the container.
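You can watch this handoff from the outside with plain kubectl; a quick sketch (standard commands, nothing cluster-specific):
kubectl get pods -o wide --watch # watch the NODE column get filled in as pods are bound
kubectl get events --field-selector reason=Scheduled # the scheduler's "Successfully assigned ..." events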
The kubelet process interacts with the API Server on a regular cycle: it calls the API Server's REST interface to report its own status, and the API Server stores the node status information in etcd. The kubelet also watches Pod information through the API Server's Watch interface:
- If it sees a new Pod bound to its node, it executes the logic to create the Pod and start its containers;
- If it sees that a Pod object was deleted, it deletes the corresponding Pod's containers on its node;
- If it sees a modification to Pod information, it modifies the Pod's containers on its node accordingly.
So the kube-scheduler carries the pivotal, connecting role in the whole system: upstream, it receives Pods created declaratively through the API or by controllers and picks a suitable Node for each of them; downstream, once the node is chosen, it hands the work over to the kubelet on that node, which takes care of the rest of the pod's lifecycle.
Kubernetes Scheduler scheduling flow
The default scheduling flow currently provided by the Kubernetes Scheduler splits into the following two steps. (This changes from version to version; the official docs are the authority.)
Step | Description |
---|---|
Predicates (pre-selection) | Iterate over all candidate Nodes and filter out the ones that satisfy the requirements. Kubernetes ships a number of built-in predicate policies (xxx Predicates) for users to choose from. |
Priorities (scoring) | On top of the first step, apply priority policies (xxxPriority) to compute a score for each candidate node; the highest score wins. |
The Kubernetes Scheduler loads its scheduling logic through a plugin: the "algorithm provider" (AlgorithmProvider). Concretely, an AlgorithmProvider is a structure holding a set of predicate policies and a set of priority policies.
The predicate policies available in the Scheduler include: NoDiskConflict, PodFitsResources, PodSelectorMatches, PodFitsHost, CheckNodeLabelPresence, CheckServiceAffinity, PodFitsPorts, and so on.
The predicate policies loaded by the default AlgorithmProvider (each returns a boolean) are:
- PodFitsPorts (PodFitsPorts): checks whether the pod's required ports conflict with ports already in use on the node
- PodFitsResources (PodFitsResources): checks whether the candidate node's resources satisfy the candidate Pod's requirements
- NoDiskConflict (NoDiskConflict): checks whether the candidate Pod's volumes conflict with those of the pods already on the candidate node
- MatchNodeSelector (PodSelectorMatches): checks whether the candidate node carries the labels specified by the candidate Pod's label selector
- HostName (PodFitsHost): checks whether the node name specified in the candidate Pod's spec.nodeName matches the candidate node's name
In other words, every node is run through these 5 default predicate policies; the nodes that pass this initial selection move on to the scoring flow that determines the best node. The sketch below annotates which predicate inspects which pod field.
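A minimal pod sketch to make the predicates tangible; the resource numbers and port are made up for illustration:
apiVersion: v1
kind: Pod
metadata:
  name: predicate-demo # hypothetical pod
spec:
  nodeSelector: # checked by MatchNodeSelector (PodSelectorMatches)
    disktype: ssd
  containers:
  - name: web
    image: nginx
    ports:
    - containerPort: 80
      hostPort: 8080 # checked by PodFitsPorts for hostPort conflicts
    resources:
      requests: # checked by PodFitsResources against node capacity
        cpu: 500m
        memory: 256Mi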
The priority policies in the Scheduler include (but are not limited to) the following 3:
- LeastRequestedPriority: favors the node in the candidate list with the lowest resource consumption; each node is scored by a formula (sketched below)
- CalculateNodeLabelPriority: scores candidate nodes according to whether the labels listed in the policy are present on them (preferring presence or absence as configured)
- BalancedResourceAllocation: favors the node in the candidate list with the most balanced resource utilization; each node is scored by a formula (sketched below)
Each node is scored by every priority policy in turn, the scores are added up, and the node with the highest total is taken as the result of the scoring phase (and of the scheduling algorithm as a whole).
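For reference, the classic scoring formulas behind the two resource-based policies look like this (these are the forms documented for the legacy scheduler policies; treat the exact shapes as version-dependent). Here $C$ is the node's capacity and $R$ is the sum of the requests of the pods (including the candidate) on it:

$$\text{LeastRequestedPriority} = \frac{1}{2}\left(\frac{C_{cpu}-R_{cpu}}{C_{cpu}} + \frac{C_{mem}-R_{mem}}{C_{mem}}\right)\times 10$$

$$\text{BalancedResourceAllocation} = 10 - \left|\frac{R_{cpu}}{C_{cpu}} - \frac{R_{mem}}{C_{mem}}\right|\times 10$$

Both map to a 0-10 score: an idle node scores close to 10 on the first, and a node whose CPU and memory fill up in equal proportion scores close to 10 on the second.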
Pod scheduling
Manually specifying where a pod runs:
By setting labels on nodes, you can have a pod pick its node through a selector on those labels (much like locating DOM elements in front-end development), or you can specify the node name directly.
Common node label commands
Operation | Command |
---|---|
View | kubectl get nodes --show-labels |
Set | kubectl label node node2 disktype=ssd |
Remove | kubectl label node node2 disktype- |
Set on all nodes | kubectl label node --all key=value |
Set labels on the nodes
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl label node vms82.liruilongs.github.io disktype=node1
node/vms82.liruilongs.github.io labeled
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl label node vms83.liruilongs.github.io disktype=node2
node/vms83.liruilongs.github.io labeled
View the node labels
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get node --show-labels
NAME STATUS ROLES AGE VERSION LABELS
vms81.liruilongs.github.io Ready control-plane,master 45d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms81.liruilongs.github.io,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
vms82.liruilongs.github.io Ready <none> 45d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=node1,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms82.liruilongs.github.io,kubernetes.io/os=linux
vms83.liruilongs.github.io Ready <none> 45d v1.22.2 beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,disktype=node2,kubernetes.io/arch=amd64,kubernetes.io/hostname=vms83.liruilongs.github.io,kubernetes.io/os=linux
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get node
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 45d v1.22.2
vms82.liruilongs.github.io Ready <none> 45d v1.22.2
vms83.liruilongs.github.io Ready <none> 45d v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl label nodes vms82.liruilongs.github.io node-role.kubernetes.io/worker1=
node/vms82.liruilongs.github.io labeled
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl label nodes vms83.liruilongs.github.io node-role.kubernetes.io/worker2=
node/vms83.liruilongs.github.io labeled
View the labels
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl get node
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 45d v1.22.2
vms82.liruilongs.github.io Ready worker1 45d v1.22.2
vms83.liruilongs.github.io Ready worker2 45d v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$
Selector (nodeSelector) approach
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes -l disktype=node2
NAME STATUS ROLES AGE VERSION
vms83.liruilongs.github.io Ready worker2 45d v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim pod-node2.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-node2.yaml
pod/podnode2 created
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podnode2
  name: podnode2
spec:
  nodeSelector:
    disktype: node2
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: podnode2
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
The pod was scheduled to vms83.liruilongs.github.io
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podnode2 1/1 Running 0 13m 10.244.70.60 vms83.liruilongs.github.io <none> <none>
Specifying the node name (nodeName) approach
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim pod-node1.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-node1.yaml
pod/podnode1 created
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podnode1
  name: podnode1
spec:
  nodeName: vms82.liruilongs.github.io
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: podnode1
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
Note that with nodeName the scheduler is bypassed and the pod is placed directly on the named node:
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podnode1 1/1 Running 0 36s 10.244.171.165 vms82.liruilongs.github.io <none> <none>
podnode2 1/1 Running 0 13m 10.244.70.60 vms83.liruilongs.github.io <none> <none>
Node affinity
Node affinity means running the pod on nodes that satisfy the specified conditions. It comes as a hard policy (must be satisfied) and a soft policy (satisfied if possible).
Hard policy (requiredDuringSchedulingIgnoredDuringExecution)
The scheduled node's label must match one of the listed values. kubernetes.io/hostname is a default label that is added to every node automatically.
affinity:
  nodeAffinity: # node affinity
    requiredDuringSchedulingIgnoredDuringExecution: # hard policy
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - vms85.liruilongs.github.io
          - vms84.liruilongs.github.io
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-node-a.yaml
pod/podnodea created
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podnodea
  name: podnodea
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: podnodea
    resources: {}
  affinity:
    nodeAffinity: # node affinity
      requiredDuringSchedulingIgnoredDuringExecution: # hard policy
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - vms85.liruilongs.github.io
            - vms84.liruilongs.github.io
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods
NAME READY STATUS RESTARTS AGE
podnodea 0/1 Pending 0 8s
Neither vms84 nor vms85 exists in the cluster, so the hard requirement cannot be satisfied and the pod stays Pending. Swap vms84 for the existing node vms83 and re-apply, and the pod schedules:
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$sed -i 's/vms84.liruilongs.github.io/vms83.liruilongs.github.io/' pod-node-a.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-node-a.yaml
pod/podnodea created
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podnodea 1/1 Running 0 13s 10.244.70.61 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$
Soft policy (preferredDuringSchedulingIgnoredDuringExecution)
The scheduler tries to place the pod on one of the listed nodes, but falls back to other nodes when none of them match.
affinity:
  nodeAffinity: # node affinity
    preferredDuringSchedulingIgnoredDuringExecution: # soft policy
    - weight: 2
      preference:
        matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - vms85.liruilongs.github.io
          - vms84.liruilongs.github.io
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim pod-node-a-r.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f pod-node-a-r.yaml
pod/podnodea created
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: podnodea
  name: podnodea
spec:
  containers:
  - image: nginx
    imagePullPolicy: IfNotPresent
    name: podnodea
    resources: {}
  affinity:
    nodeAffinity: # node affinity
      preferredDuringSchedulingIgnoredDuringExecution: # soft policy
      - weight: 2
        preference:
          matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - vms85.liruilongs.github.io
            - vms84.liruilongs.github.io
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
podnodea 1/1 Running 0 28s 10.244.70.62 vms83.liruilongs.github.io <none> <none>
Neither preferred node exists, but because this is only a soft policy the pod still schedules, landing on vms83.
Common label operators
Operator | Description |
---|---|
In | The label value must be one of the listed values; the hard affinity above, for example, matches nodes whose kubernetes.io/hostname is one of the two listed hostnames |
NotIn | The opposite of In: nodes carrying a matching label value are excluded |
Exists | Similar to In, but selects any node that has the label at all, regardless of value. With the Exists operator, the values field must be empty (see the sketch below) |
Gt | "Greater than": selects nodes whose label value is greater than the specified value |
Lt | "Less than": selects nodes whose label value is less than the specified value |
DoesNotExist | Selects nodes that do not have the label; as with Exists, values must be empty |
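A small sketch of the operators the examples above don't cover; the gpu-count label here is hypothetical, and Gt/Lt take a single integer-valued string:
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: disktype # node just needs to carry the label
          operator: Exists # values must be omitted for Exists/DoesNotExist
        - key: gpu-count # hypothetical numeric node label
          operator: Gt # label value must be greater than "2"
          values:
          - "2"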
Node cordon and drain
If a node is marked with cordon, newly created pods will not be scheduled onto it, but pods already scheduled there are not moved away. cordon is meant for node maintenance: when you no longer want pods assigned to a node, mark it unschedulable with cordon.
┌──[root@vms81.liruilongs.github.io]-[~/ansible]
└─$kubectl create deployment nginx --image=nginx --dry-run=client -o yaml >nginx-dep.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$vim nginx-dep.yaml
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl apply -f nginx-dep.yaml
deployment.apps/nginx created
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: nginx
  name: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
        imagePullPolicy: IfNotPresent
        resources: {}
status: {}
You can see the pods were scheduled onto the two worker nodes
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -owide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-hx96s 1/1 Running 0 2m16s 10.244.171.167 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-wshxp 1/1 Running 0 2m16s 10.244.70.1 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-x78x4 1/1 Running 0 2m16s 10.244.70.63 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$
Node cordon
Basic commands
kubectl cordon vms83.liruilongs.github.io # mark unschedulable
kubectl uncordon vms83.liruilongs.github.io # remove the mark
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl cordon vms83.liruilongs.github.io # mark 83 unschedulable via cordon
node/vms83.liruilongs.github.io cordoned
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 48d v1.22.2
vms82.liruilongs.github.io Ready worker1 48d v1.22.2
vms83.liruilongs.github.io Ready,SchedulingDisabled worker2 48d v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl scale deployment nginx --replicas=6
deployment.apps/nginx scaled
With vms83 cordoned, all of the newly created replicas land on vms82, while the pods already on vms83 stay where they are:
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-2nmsj 1/1 Running 0 64s 10.244.171.170 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-chsrn 1/1 Running 0 63s 10.244.171.168 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-hx96s 1/1 Running 0 7m30s 10.244.171.167 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-lppbp 1/1 Running 0 63s 10.244.171.169 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-wshxp 1/1 Running 0 7m30s 10.244.70.1 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-x78x4 1/1 Running 0 7m30s 10.244.70.63 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$
Delete the pods still running on vms83; their replacements are recreated on the schedulable node vms82:
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete pod nginx-7cf7d6dbc8-wshxp
pod "nginx-7cf7d6dbc8-wshxp" deleted
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-2nmsj 1/1 Running 0 2m42s 10.244.171.170 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-5hnc7 1/1 Running 0 10s 10.244.171.171 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-chsrn 1/1 Running 0 2m41s 10.244.171.168 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-hx96s 1/1 Running 0 9m8s 10.244.171.167 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-lppbp 1/1 Running 0 2m41s 10.244.171.169 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-x78x4 1/1 Running 0 9m8s 10.244.70.63 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl delete pod nginx-7cf7d6dbc8-x78x4
pod "nginx-7cf7d6dbc8-x78x4" deleted
All the pods now sit on the schedulable node
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-2nmsj 1/1 Running 0 3m31s 10.244.171.170 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-5hnc7 1/1 Running 0 59s 10.244.171.171 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-chsrn 1/1 Running 0 3m30s 10.244.171.168 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-hx96s 1/1 Running 0 9m57s 10.244.171.167 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-lppbp 1/1 Running 0 3m30s 10.244.171.169 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-m8ltr 1/1 Running 0 30s 10.244.171.172 vms82.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl uncordon vms83.liruilongs.github.io # restore the node's schedulable status
node/vms83.liruilongs.github.io uncordoned
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 48d v1.22.2
vms82.liruilongs.github.io Ready worker1 48d v1.22.2
vms83.liruilongs.github.io Ready worker2 48d v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl scale deployment nginx --replicas=0
deployment.apps/nginx scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide
No resources found in liruilong-pod-create namespace.
Node drain
drain goes one step further than cordon: it first marks the node unschedulable and then evicts the pods already running on it (DaemonSet-managed pods are skipped with --ignore-daemonsets).
kubectl drain vms83.liruilongs.github.io --ignore-daemonsets
kubectl uncordon vms83.liruilongs.github.io
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl scale deployment nginx --replicas=4
deployment.apps/nginx scaled
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide --one-output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-2clnb 1/1 Running 0 22s 10.244.171.174 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-9p6g2 1/1 Running 0 22s 10.244.70.2 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-ptqxm 1/1 Running 0 22s 10.244.171.173 vms82.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-zmdqm 1/1 Running 0 22s 10.244.70.4 vms83.liruilongs.github.io <none> <none>
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl drain vms82.liruilongs.github.io --ignore-daemonsets --delete-emptydir-data
node/vms82.liruilongs.github.io cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/calico-node-ntm7v, kube-system/kube-proxy-nzm24
evicting pod liruilong-pod-create/nginx-7cf7d6dbc8-ptqxm
evicting pod kube-system/metrics-server-bcfb98c76-wxv5l
evicting pod liruilong-pod-create/nginx-7cf7d6dbc8-2clnb
pod/nginx-7cf7d6dbc8-2clnb evicted
pod/nginx-7cf7d6dbc8-ptqxm evicted
pod/metrics-server-bcfb98c76-wxv5l evicted
node/vms82.liruilongs.github.io evicted
Check the nodes: the drained node shows as unschedulable
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get nodes
NAME STATUS ROLES AGE VERSION
vms81.liruilongs.github.io Ready control-plane,master 48d v1.22.2
vms82.liruilongs.github.io Ready,SchedulingDisabled worker1 48d v1.22.2
vms83.liruilongs.github.io Ready worker2 48d v1.22.2
┌──[root@vms81.liruilongs.github.io]-[~/ansible/k8s-pod-create]
└─$kubectl get pods -o wide --one-output
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-7cf7d6dbc8-9p6g2 1/1 Running 0 4m20s 10.244.70.2 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-hkflr 1/1 Running 0 25s 10.244.70.5 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-qt48k 1/1 Running 0 26s 10.244.70.7 vms83.liruilongs.github.io <none> <none>
nginx-7cf7d6dbc8-zmdqm 1/1 Running 0 4m20s 10.244.70.4 vms83.liruilongs.github.io <none> <none>
The evicted pods have been recreated on vms83; all replicas now run there.