
k8s 1.18 kubeadm Installation Guide

1. Initialization

Reference: https://kubernetes.io/docs/reference/setup-tools/kubeadm/

1.1 Requirements

  • Each node has at least 2 CPUs and 4 GB of RAM
  • Full network connectivity among all machines in the cluster
  • A unique hostname and MAC address on every node
  • CentOS 7, preferably 7.9; upgrading older point releases is recommended
  • Every node in the cluster can reach the internet to pull images
  • Swap is disabled
  • Note: this guide installs a single master; a multi-master setup follows the same procedure (a quick pre-flight sketch follows this list)
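A minimal sketch for checking these requirements on each node (the ping target is just an example; any reachable external host will do):

nproc                              # expect >= 2
free -h | awk '/Mem:/{print $2}'   # expect ~4G total
swapon --show                      # expect no output once swap is off
getenforce                         # expect Disabled (or Permissive)
hostname; ip link | grep ether     # hostname and MAC must be unique per node
ping -c1 mirrors.aliyun.com        # confirm outbound connectivity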

1.2 Initialization Steps

1. Run on all hosts: disable the firewall, swap, and SELinux
systemctl stop firewalld; systemctl disable firewalld
swapoff -a; sed -i '/swap/s/^/#/g' /etc/fstab
setenforce 0; sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

2. Set hostnames
hostnamectl set-hostname <hostname>  # set a distinct hostname on each host
[root@master1 $]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.74.128  master1 registry.mt.com
192.168.74.129  master2
192.168.74.130  master3

3. Kernel parameters - all hosts
[root@master1 yaml]# cat /etc/sysctl.d/kubernetes.conf
net.bridge.bridge-nf-call-iptables=1  # only exists after the br_netfilter module is loaded
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0  # forbid swap usage; only allow it when the system hits OOM
vm.overcommit_memory=1  # do not check whether physical memory is sufficient
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=655360000
fs.nr_open=655360000
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720

[root@master1 yaml]# sysctl --system

4. Time synchronization
ntpdate time.windows.com

5. Resource limits
echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536"  >> /etc/security/limits.conf
echo "* hard nproc 65536"  >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf

6. Load kernel modules at boot
[root@master1 ~]# cat /etc/modules-load.d/k8s.conf
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
br_netfilter
[root@master1 ~]# modprobe ip_vs
[root@master1 ~]# modprobe ip_vs_rr
[root@master1 ~]# modprobe ip_vs_wrr
[root@master1 ~]# modprobe ip_vs_sh
[root@master1 ~]# modprobe br_netfilter

[root@iZ2zef0llgs69lx3vc9rfgZ ~]# lsmod |grep vs
ip_vs_sh               12688  0
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs                 145497  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          137239  1 ip_vs
libcrc32c              12644  2 ip_vs,nf_conntrack
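A quick verification that these settings actually took effect (a sketch; run on any node after sysctl --system and the modprobe calls):

sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward  # both should report 1
lsmod | grep -E 'br_netfilter|ip_vs'                           # modules are loaded
cat /proc/swaps                                                # only the header line, no swap devices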

2. Installation

2.1 Install Docker

# Run on all nodes:
wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum list docker-ce.x86_64 --showduplicates | sort -r  # lists the installable docker-ce versions; pick one
yum install docker-ce-18.06.1.ce-3.el7

# Edit /etc/docker/daemon.json
{
        "exec-opts": ["native.cgroupdriver=systemd"],
        "registry-mirrors": ["https://b9pmyelo.mirror.aliyuncs.com"],
        "insecure-registries": ["www.mt.com:9500"]
}

Note: registry-mirrors is the Aliyun mirror address that accelerates pulls of images hosted abroad; without it, downloads are very slow.

# After editing, restart the daemon and verify with docker info
systemctl restart docker; systemctl enable docker
[root@master1 $]# docker info |tail -10
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 www.mt.com:9500
 127.0.0.0/8
Registry Mirrors:
 https://b9pmyelo.mirror.aliyuncs.com/
Live Restore Enabled: false
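The kubelet must use the same cgroup driver as Docker, and a mismatch is a common source of kubelet failures, so it is worth confirming the daemon really switched to systemd:

docker info 2>/dev/null | grep -i 'cgroup driver'   # expect: Cgroup Driver: systemd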

2.2 Install kubectl/kubeadm/kubelet

#Create the repo file /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
#baseurl=https://mirrors.tuna.tsinghua.edu.cn/kubernetes/yum/repos/kubernetes-el7-$basearch
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0

#Install
yum install kubeadm-1.18.0 kubectl-1.18.0 kubelet-1.18.0

#Start the service
systemctl start kubelet; systemctl enable kubelet
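A quick sanity check that the expected versions landed (a sketch):

kubeadm version -o short             # v1.18.0
kubelet --version                    # Kubernetes v1.18.0
kubectl version --client --short     # Client Version: v1.18.0

Note that kubelet will crash-loop at this point until kubeadm init generates its configuration; that is expected.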

2.3 Initialize the Control Plane

Note: a multi-master setup is initialized differently. ///This step was not used in this lab
kubeadm init --kubernetes-version=<version> --apiserver-advertise-address=<this master's IP> \
--control-plane-endpoint=<this master's IP>:6443 \  # this line is required for HA
--pod-network-cidr=10.244.0.0/16 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.96.0.0/12
Initialize the first node this way, then add the other nodes to the control plane with kubeadm join.
#Run on the master only
[root@master1 ~]# kubeadm init --apiserver-advertise-address=192.168.74.128 --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0 --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16
W0503 16:48:24.338256   38262 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks  #1. Pre-flight checks; pull images
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"  
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"  #2. Initialize kubelet configuration
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"  #3. Generate certificates
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.74.128]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master1 localhost] and IPs [192.168.74.128 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master1 localhost] and IPs [192.168.74.128 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"  #4. Write kubeconfig files
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests" #5. Write control-plane static Pod manifests
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0503 16:49:05.984728   38262 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0503 16:49:05.985466   38262 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.002641 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master1 as control-plane by adding the label "node-role.kubernetes.io/master=''"  #6. Label and taint the node
[mark-control-plane] Marking the node master1 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]  
[bootstrap-token] Using token: i91fyq.rp8jv1t086utyxif  #7. Configure bootstrap tokens and RBAC
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS  #8. Install addons
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

# Create the kubectl config for talking to the apiserver. Run on the master
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Deploy the pod network
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

#Used by nodes to join the cluster
kubeadm join 192.168.74.128:6443 --token i91fyq.rp8jv1t086utyxif \
    --discovery-token-ca-cert-hash sha256:a31d6b38fec404eda854586a4140069bc6e3154241b59f40a612e24e9b89bf37
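
The token embedded in this command expires after 24 hours by default. If a node joins later than that, mint a fresh token and the matching join command on the master:

[root@master1 ~]# kubeadm token create --print-join-command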

[root@master1 ~]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.0             43940c34f24f        13 months ago       117MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.0             74060cea7f70        13 months ago       173MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.0             d3e55153f52f        13 months ago       162MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.0             a31f78c7c8ce        13 months ago       95.3MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        14 months ago       683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7               67da37a9a360        15 months ago       43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        18 months ago       288MB
k8s.gcr.io/etcd                                                   3.2.24              3cab8e1b9802        2 years ago         220MB
k8s.gcr.io/coredns                                                1.2.2               367cdc8433a4        2 years ago         39.2MB
k8s.gcr.io/kubernetes-dashboard-amd64                             v1.10.0             0dab2435c100        2 years ago         122MB
k8s.gcr.io/flannel                                                v0.10.0-amd64       f0fad859c909        3 years ago         44.6MB
k8s.gcr.io/pause                                                  3.1                 da86e6ba6ca1        3 years ago         742kB
[root@master1 ~]#

2.4 Join the Cluster

[root@master1 ~]# kubectl  get nodes
NAME      STATUS     ROLES    AGE   VERSION
master1   NotReady   master   36m   v1.18.0

# Run on master2 and master3
kubeadm join 192.168.74.128:6443 --token i91fyq.rp8jv1t086utyxif --discovery-token-ca-cert-hash sha256:a31d6b38fec404eda854586a4140069bc6e3154241b59f40a612e24e9b89bf37

[root@master1 ~]# kubectl  get nodes
NAME      STATUS     ROLES    AGE   VERSION
master1   NotReady   master   40m   v1.18.0
master2   NotReady   <none>   78s   v1.18.0
master3   NotReady   <none>   31s   v1.18.0

2.5 Install the Network Plugin

[root@master1 yaml]# kubectl apply -f  https://raw.githubusercontent.com/flannel-io/flannel/master/Documentation/kube-flannel.yml
[root@master1 yaml]# kubectl  get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-l9h6x          1/1     Running   0          60m
coredns-7ff77c879f-lp9p2          1/1     Running   0          60m
etcd-master1                      1/1     Running   0          60m
kube-apiserver-master1            1/1     Running   0          60m
kube-controller-manager-master1   1/1     Running   0          60m
kube-flannel-ds-524n6             1/1     Running   0          45s
kube-flannel-ds-8gqwp             1/1     Running   0          45s
kube-flannel-ds-fgvpf             1/1     Running   0          45s
kube-proxy-2fz26                  1/1     Running   0          21m
kube-proxy-46psf                  1/1     Running   0          20m
kube-proxy-p7jnd                  1/1     Running   0          60m
kube-scheduler-master1            1/1     Running   0          60m

[root@master1 yaml]# kubectl  get nodes #before the network plugin is installed, every STATUS is NotReady
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   60m   v1.18.0
master2   Ready    <none>   21m   v1.18.0
master3   Ready    <none>   20m   v1.18.0

3. Verification

[root@master1 yaml]# kubectl  create deployment nginx --image=nginx
[root@master1 yaml]# kubectl  expose deployment  nginx --port=80 --type=NodePort
[root@master1 yaml]# kubectl  get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE      NOMINATED NODE   READINESS GATES
nginx-f89759699-fvlfg   1/1     Running   0          99s   10.244.2.3   master3   <none>           <none>
[root@master1 yaml]# kubectl  get svc/nginx
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.105.249.76   <none>        80:30326/TCP   85s

[root@master1 yaml]# curl -s -I master2:30326  #port 30326 is reachable on any worker node in the cluster
HTTP/1.1 200 OK 
Server: nginx/1.19.10
Date: Mon, 03 May 2021 09:58:47 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "6075b537-264"
Accept-Ranges: bytes

[root@master1 yaml]# curl -s -I master3:30326
HTTP/1.1 200 OK
Server: nginx/1.19.10
Date: Mon, 03 May 2021 09:58:50 GMT
Content-Type: text/html
Content-Length: 612
Last-Modified: Tue, 13 Apr 2021 15:13:59 GMT
Connection: keep-alive
ETag: "6075b537-264"
Accept-Ranges: bytes

[root@master1 yaml]# curl -s -I master1:30326 #master1 is a master node, not a worker
^C
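
master1 carries the node-role.kubernetes.io/master:NoSchedule taint applied during init, so the nginx replica was not scheduled there. If you want the master to run regular workloads as well (common in small lab clusters), the taint can be removed; the trailing dash deletes it:

kubectl taint nodes master1 node-role.kubernetes.io/master-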

Images:
[root@master1 yaml]# docker images
REPOSITORY                                                        TAG                 IMAGE ID            CREATED             SIZE
quay.io/coreos/flannel                                            v0.14.0-rc1         0a1a2818ce59        2 weeks ago         67.9MB
registry.aliyuncs.com/google_containers/kube-proxy                v1.18.0             43940c34f24f        13 months ago       117MB
registry.aliyuncs.com/google_containers/kube-controller-manager   v1.18.0             d3e55153f52f        13 months ago       162MB
registry.aliyuncs.com/google_containers/kube-apiserver            v1.18.0             74060cea7f70        13 months ago       173MB
registry.aliyuncs.com/google_containers/kube-scheduler            v1.18.0             a31f78c7c8ce        13 months ago       95.3MB
registry.aliyuncs.com/google_containers/pause                     3.2                 80d28bedfe5d        14 months ago       683kB
registry.aliyuncs.com/google_containers/coredns                   1.6.7               67da37a9a360        15 months ago       43.8MB
registry.aliyuncs.com/google_containers/etcd                      3.4.3-0             303ce5db0e90        18 months ago       288MB

[root@master2 ~]# docker images
REPOSITORY                                           TAG                 IMAGE ID            CREATED             SIZE
quay.io/coreos/flannel                               v0.14.0-rc1         0a1a2818ce59        2 weeks ago         67.9MB
registry.aliyuncs.com/google_containers/kube-proxy   v1.18.0             43940c34f24f        13 months ago       117MB
registry.aliyuncs.com/google_containers/pause        3.2                 80d28bedfe5d        14 months ago       683kB
registry.aliyuncs.com/google_containers/coredns      1.6.7               67da37a9a360        15 months ago       43.8MB

[root@master3 ~]# docker images
REPOSITORY                                           TAG                 IMAGE ID            CREATED             SIZE
quay.io/coreos/flannel                               v0.14.0-rc1         0a1a2818ce59        2 weeks ago         67.9MB
nginx                                                latest              62d49f9bab67        2 weeks ago         133MB
registry.aliyuncs.com/google_containers/kube-proxy   v1.18.0             43940c34f24f        13 months ago       117MB
registry.aliyuncs.com/google_containers/pause        3.2                 80d28bedfe5d        14 months ago       683kB
registry.aliyuncs.com/google_containers/coredns      1.6.7               67da37a9a360        15 months ago       43.8MB

4. kubeadm Usage

4.1 init

Purpose: initialize a master (control-plane) node

4.1.1 Overview

preflight                    Run pre-flight checks
certs                        Certificate generation
  /ca                          Generate the self-signed Kubernetes CA to provision identities for other Kubernetes components
  /apiserver                   Generate the certificate for serving the Kubernetes API
  /apiserver-kubelet-client    Generate the certificate for the API server to connect to kubelet
  /front-proxy-ca              Generate the self-signed CA to provision identities for front proxy
  /front-proxy-client          Generate the certificate for the front proxy client
  /etcd-ca                     Generate the self-signed CA to provision identities for etcd
  /etcd-server                 Generate the certificate for serving etcd
  /etcd-peer                   Generate the certificate for etcd nodes to communicate with each other
  /etcd-healthcheck-client     Generate the certificate for liveness probes to healthcheck etcd
  /apiserver-etcd-client       Generate the certificate the apiserver uses to access etcd
  /sa                          Generate a private key for signing service account tokens along with its public key
kubeconfig                   Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  /admin                       Generate a kubeconfig file for the admin to use and for kubeadm itself
  /kubelet                     Generate a kubeconfig file for the kubelet to use *only* for cluster bootstrapping purposes
  /controller-manager          Generate a kubeconfig file for the controller manager to use
  /scheduler                   Generate a kubeconfig file for the scheduler to use
kubelet-start                Write kubelet settings and (re)start the kubelet
control-plane                Generate all static Pod manifest files necessary to establish the control plane
  /apiserver                   Generates the kube-apiserver static Pod manifest
  /controller-manager          Generates the kube-controller-manager static Pod manifest
  /scheduler                   Generates the kube-scheduler static Pod manifest
etcd                         Generate static Pod manifest file for local etcd
  /local                       Generate the static Pod manifest file for a local, single-node local etcd instance
upload-config                Upload the kubeadm and kubelet configuration to a ConfigMap
  /kubeadm                     Upload the kubeadm ClusterConfiguration to a ConfigMap
  /kubelet                     Upload the kubelet component config to a ConfigMap
upload-certs                 Upload certificates to kubeadm-certs
mark-control-plane           Mark a node as a control-plane
bootstrap-token              Generates bootstrap tokens used to join a node to a cluster
kubelet-finalize             Updates settings relevant to the kubelet after TLS bootstrap
  /experimental-cert-rotation  Enable kubelet client certificate rotation
addon                        Install required addons for passing conformance tests
  /coredns                     Install the CoreDNS addon to a Kubernetes cluster
  /kube-proxy                  Install the kube-proxy addon to a Kubernetes cluster

4.1.2 The kubeadm init Workflow

  • 1) Run pre-flight checks, reporting problems as warnings or exiting on errors, until they are fixed or skipped with --ignore-preflight-errors=<list-of-errors>.

  • 2) Generate a self-signed CA to set up identities for each component in the cluster. The user can provide their own CA certificates and keys via --cert-dir (default: /etc/kubernetes/pki). The API server certificate will have additional SAN (Subject Alternative Name) entries for any --apiserver-cert-extra-sans argument values, lowercased where necessary.

  • 3) Write kubeconfig files into /etc/kubernetes/ for the kubelet, controller-manager, and scheduler to use when talking to the apiserver, each with its own identity, as well as a standalone kubeconfig file named admin.conf for administrative use.

  • 4) Generate static Pod manifests for the apiserver, controller-manager, and scheduler (by default under /etc/kubernetes/manifests); unless an external etcd is provided, an additional static Pod manifest is generated for etcd. The kubelet watches /etc/kubernetes/manifests and creates these pods at startup.

  • 5) Apply labels and taints to the control-plane node so that no workloads are scheduled onto it.

  • 6) Generate the token that additional nodes use to join.

  • 7) Make all the necessary configuration for nodes to join via Bootstrap Tokens and TLS Bootstrap: create a ConfigMap with the information needed for joining and set up the related RBAC access rules; allow bootstrap tokens to access the CSR signing API; configure automatic approval of new CSR requests.

  • 8) Install the CoreDNS and kube-proxy addons via the apiserver. The CoreDNS pods are only scheduled once the CNI plugin has been deployed.

4.1.3 Creating the Control Plane Phase by Phase

  • kubeadm lets you create the control-plane node in stages with the kubeadm init phase command
[root@master1 ~]# kubeadm init phase --help
Use this command to invoke single phase of the init workflow

Usage:
  kubeadm init phase [command]

Available Commands:
  addon              Install required addons for passing Conformance tests
  bootstrap-token    Generates bootstrap tokens used to join a node to a cluster
  certs              Certificate generation
  control-plane      Generate all static Pod manifest files necessary to establish the control plane
  etcd               Generate static Pod manifest file for local etcd
  kubeconfig         Generate all kubeconfig files necessary to establish the control plane and the admin kubeconfig file
  kubelet-finalize   Updates settings relevant to the kubelet after TLS bootstrap
  kubelet-start      Write kubelet settings and (re)start the kubelet
  mark-control-plane Mark a node as a control-plane
  preflight          Run pre-flight checks
  upload-certs       Upload certificates to kubeadm-certs
  upload-config      Upload the kubeadm and kubelet configuration to a ConfigMap
...
  • kubeadm init also exposes a --skip-phases flag that can be used to skip certain phases. The flag accepts a list of phase names, which can be taken from the ordered list above: sudo kubeadm init --skip-phases=control-plane,etcd --config=configfile.yaml
  • See the kube-proxy notes in the kubeadm reference
  • See the IPVS notes in the kubeadm reference
  • See the notes on passing custom command-line arguments to control-plane components
  • Using custom images:
    • use a different imageRepository in place of k8s.gcr.io
    • set useHyperKubeImage to true to use the HyperKube image
    • provide a specific imageRepository and imageTag for etcd or the DNS addon
  • Set the node name with --hostname-override
  • ...

4.2 join

4.2.1 Phases

When a node joins a kubeadm-initialized cluster, mutual trust must be established. The process breaks down into discovery (having the joining node trust the Kubernetes control-plane node) and TLS bootstrap (having the Kubernetes control-plane node trust the joining node).

The discovery phase has two main schemes (choose one):

  • 1. Use a shared token together with the API server's IP address. Pass the --discovery-token-ca-cert-hash flag to validate the public key of the root certificate authority (CA) presented by the Kubernetes control-plane node. The flag value has the format "<hash-type>:<hex-encoded-value>", where the supported hash type is "sha256". The hash is calculated over the bytes of the Subject Public Key Info (SPKI) object (as in RFC 7469). This value is available in the output of "kubeadm init", or it can be calculated with standard tools. The --discovery-token-ca-cert-hash flag may be repeated multiple times to allow more than one public key.
  • 2. Provide a file - a subset of a standard kubeconfig file - which may be a local file or downloaded via an HTTPS URL. The forms are:
    • kubeadm join --discovery-token abcdef.1234567890abcdef 1.2.3.4:6443
    • kubeadm join --discovery-file path/to/file.conf
    • kubeadm join --discovery-file https://url/file.conf #must be HTTPS here

The TLS bootstrap phase

  • This is also driven by the shared token, which temporarily authenticates the node against the Kubernetes control-plane node so it can submit a certificate signing request (CSR) for a locally created key pair. By default, kubeadm sets the Kubernetes control-plane node up to approve these signing requests automatically. The token is passed via --tls-bootstrap-token abcdef.1234567890abcdef. Once the cluster is installed, automatic signing can be turned off in favor of manually approving CSRs (see the sketch below).
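
A sketch of the manual approval flow once auto-signing is disabled (the CSR name is illustrative; take it from the first command's output):

kubectl get csr                              # list pending signing requests
kubectl certificate approve node-csr-xxxxx   # approve a request by name
kubectl certificate deny node-csr-xxxxx      # or reject it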

4.2.2 Command Usage

"join [api-server-endpoint]" 命令执行下列阶段:

preflight              Run join pre-flight checks
control-plane-prepare  Prepare the machine for serving a control plane
  /download-certs        [EXPERIMENTAL] Download certificates shared among control-plane nodes from the kubeadm-certs Secret
  /certs                 Generate the certificates for the new control plane components
  /kubeconfig            Generate the kubeconfig for the new control plane components
  /control-plane         Generate the manifests for the new control plane components
kubelet-start          Write kubelet settings, certificates and (re)start the kubelet
control-plane-join     Join a machine as a control plane instance
  /etcd                  Add a new local etcd member
  /update-status         Register the new control-plane node into the ClusterStatus maintained in the kubeadm-config ConfigMap
  /mark-control-plane    Mark a node as a control-plane
  
[flags]  #only the common flags are listed here
--apiserver-advertise-address string  #if the node will host a new control-plane instance, the IP address the API server advertises it is listening on; defaults to the default network interface if unset
--apiserver-bind-port int32     Default: 6443 #if the node will host a new control-plane instance, the port for the API server to bind to
--control-plane	Create a new control-plane instance on this node
--discovery-file string	For file-based discovery, a file or URL from which to load cluster information
--discovery-token string	For token-based discovery, the token used to validate cluster information fetched from the API server
--discovery-token-ca-cert-hash stringSlice	For token-based discovery, validate that the root CA public key matches this hash (format: "<type>:<value>")
--tls-bootstrap-token string	The token used to temporarily authenticate with the Kubernetes control plane while joining the node
--token string	If unset, these values are used for both the discovery token and the TLS bootstrap token

4.2.3 The Join Workflow

  • kubeadm downloads the necessary cluster information from the apiserver. By default it uses the bootstrap token and the CA key hash to verify the authenticity of that data. The root CA can also be discovered directly via a file or URL.

  • Once the cluster information is known, the kubelet can start the TLS bootstrap process.

    The TLS bootstrap uses the shared token to temporarily authenticate with the Kubernetes API server and submit a certificate signing request (CSR); by default the control plane signs this CSR automatically.

  • Finally, kubeadm configures the local kubelet to connect to the API server with the definitive identity assigned to the node.

Extra steps are performed when a new node is being added as an additional control-plane member:

  • Downloading the certificates shared among control-plane nodes from the cluster (if explicitly requested by the user).
  • Generating the control-plane component manifests, certificates, and kubeconfig.
  • Adding a new local etcd member.
  • Adding this node to the ClusterStatus of the kubeadm cluster.

4.2.4 Discovering Which Cluster CA to Trust

There are several ways to discover the cluster CA:

Option 1: token-based discovery with CA pinning

This is the default mode in Kubernetes 1.8 and above. In this mode, kubeadm downloads the cluster configuration (including the root CA) and validates it using the token, verifying both that the root CA public key matches the provided hash and that the API server certificate is valid under the root CA.

The CA key hash has the format sha256:<hex_encoded_hash>. By default, the hash value is returned in the kubeadm join command printed at the end of kubeadm init, and in the output of kubeadm token create --print-join-command. It uses a standard format (see RFC 7469) and can also be calculated by third-party tools or provisioning systems:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

For worker nodes:
kubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef 1.2.3.4:6443

For control-plane nodes:
kubeadm join --discovery-token abcdef.1234567890abcdef --discovery-token-ca-cert-hash sha256:1234..cdef --control-plane 1.2.3.4:6443

Note: the CA hash is normally not known until the master has been provisioned, which makes it harder to build automated provisioning tools around kubeadm. You can lift this limitation by pre-generating the CA (see the sketch below).
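
A sketch of pre-generating the CA so the hash is known before kubeadm init ever runs (paths are the kubeadm defaults):

# on the future master, generate only the cluster CA
kubeadm init phase certs ca

# compute the pinning hash from the fresh CA and hand it to your automation
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'

A later kubeadm init on the same machine reuses the existing files under /etc/kubernetes/pki instead of regenerating them.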

Option 2: token-based discovery without CA pinning

This was the default in Kubernetes 1.7 and earlier, and it comes with important caveats. This mode relies only on the symmetric token to sign (HMAC-SHA256) the discovery information that establishes the root of trust for the master. The --discovery-token-unsafe-skip-ca-verification flag is still available in Kubernetes 1.8 and above, but you should consider one of the other modes whenever possible.

kubeadm join --token abcdef.1234567890abcdef --discovery-token-unsafe-skip-ca-verification 1.2.3.4:6443

Option 3: HTTPS or file-based discovery

This scheme provides an out-of-band way to establish a root of trust between the master and the bootstrapping nodes. Consider this mode if you are building automated provisioning on top of kubeadm. The discovery file uses the format of a regular Kubernetes kubeconfig file.

If the discovery file contains no credentials, the TLS discovery token is used.

  • kubeadm join --discovery-file path/to/file.conf (local file)
  • kubeadm join --discovery-file https://url/file.conf (remote HTTPS URL)

4.2.5 Hardening

# Turn off automatic CSR approval
kubectl delete clusterrolebinding kubeadm:node-autoapprove-bootstrap

# Turn off public access to the cluster-info ConfigMap
kubectl -n kube-public get cm cluster-info -o yaml | grep "kubeconfig:" -A11 | grep "apiVersion" -A10 | sed "s/    //" | tee cluster-info.yaml
# then use cluster-info.yaml as the kubeadm join --discovery-file argument

kubectl -n kube-public delete rolebinding kubeadm:bootstrap-signer-clusterinfo

4.3 config

[root@master1 pki]# kubeadm  config --help

There is a ConfigMap in the kube-system namespace called "kubeadm-config" that kubeadm uses to store internal configuration about the
cluster. kubeadm CLI v1.8.0+ automatically creates this ConfigMap with the config used with 'kubeadm init', but if you
initialized your cluster using kubeadm v1.7.x or lower, you must use the 'config upload' command to create this
ConfigMap. This is required so that 'kubeadm upgrade' can configure your upgraded cluster correctly.

Usage:
  kubeadm config [flags]
  kubeadm config [command]

Available Commands:
  images      Interact with container images used by kubeadm
  migrate     Read an older version of the kubeadm configuration API types from a file, and output the similar config object for the newer version
  print       Print configuration
  view        View the kubeadm configuration stored inside the cluster
# 1) images: inspect and pull the images kubeadm uses
list #Print a list of images kubeadm will use. The configuration file is used in case any images or image repositories are customized
pull #Pull images used by kubeadm

[root@master1 yaml]# kubeadm config images  list
I0503 22:55:00.007902  111954 version.go:252] remote version is much newer: v1.21.0; falling back to: stable-1.18
W0503 22:55:01.118197  111954 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
k8s.gcr.io/kube-apiserver:v1.18.18
k8s.gcr.io/kube-controller-manager:v1.18.18
k8s.gcr.io/kube-scheduler:v1.18.18
k8s.gcr.io/kube-proxy:v1.18.18
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7
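
The list above targets the default k8s.gcr.io repository. On hosts that cannot reach it, the same subcommand can pre-pull through a mirror, e.g. the Aliyun repository used throughout this guide:

kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.18.0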

# 2) view: inspect the kubeadm configuration stored in the cluster
[root@master1 yaml]# kubeadm  config view
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}

You can edit the content shown by config view and then initialize a cluster from that file: kubeadm init --config kubeadm.yml
A sketch of writing such a config file follows.
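
A minimal sketch of producing and using a config file (the values mirror this lab's cluster; adjust as needed):

cat <<EOF > kubeadm.yml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
imageRepository: registry.aliyuncs.com/google_containers
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
EOF
kubeadm init --config kubeadm.yml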

# 3) print init-defaults
Note: the init-defaults output can be used as a starting point for the kubeadm.yml config file.
[root@master1 yaml]# kubeadm config print  'init-defaults'
W0503 23:00:53.662149  113332 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.96.0.0/12
scheduler: {}

[root@master1 yaml]# kubeadm config print  init-defaults  --component-configs KubeProxyConfiguration
[root@master1 yaml]# kubeadm config print  init-defaults  --component-configs KubeletConfiguration

# 4)print join-defaults
[root@master1 yaml]# kubeadm config print  'join-defaults'
apiVersion: kubeadm.k8s.io/v1beta2
caCertPath: /etc/kubernetes/pki/ca.crt
discovery:
  bootstrapToken:
    apiServerEndpoint: kube-apiserver:6443
    token: abcdef.0123456789abcdef
    unsafeSkipCAVerification: true
  timeout: 5m0s
  tlsBootstrapToken: abcdef.0123456789abcdef
kind: JoinConfiguration
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1
  taints: null
[root@master1 yaml]# kubeadm config print  join-defaults  --component-configs KubeProxyConfiguration
[root@master1 yaml]# kubeadm config print  join-defaults  --component-configs KubeletConfiguration

4.4 certs

[root@master1 pki]# kubeadm alpha certs --help
Commands related to handling kubernetes certificates

Usage:
  kubeadm alpha certs [command]

Aliases:
  certs, certificates

Available Commands:
  certificate-key  Generate certificate keys  #generate certificate keys
  check-expiration Check certificates expiration for a Kubernetes cluster  #check for expiry
  renew            Renew certificates for a Kubernetes cluster  #renew certificates


[root@master1 pki]# kubeadm  alpha certs  check-expiration
[check-expiration] Reading configuration from the cluster...
[check-expiration] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'

CERTIFICATE                EXPIRES                  RESIDUAL TIME   CERTIFICATE AUTHORITY   EXTERNALLY MANAGED
admin.conf                 May 03, 2022 08:49 UTC   362d                                    no
apiserver                  May 03, 2022 08:49 UTC   362d            ca                      no
apiserver-etcd-client      May 03, 2022 08:49 UTC   362d            etcd-ca                 no
apiserver-kubelet-client   May 03, 2022 08:49 UTC   362d            ca                      no
controller-manager.conf    May 03, 2022 08:49 UTC   362d                                    no
etcd-healthcheck-client    May 03, 2022 08:49 UTC   362d            etcd-ca                 no
etcd-peer                  May 03, 2022 08:49 UTC   362d            etcd-ca                 no
etcd-server                May 03, 2022 08:49 UTC   362d            etcd-ca                 no
front-proxy-client         May 03, 2022 08:49 UTC   362d            front-proxy-ca          no
scheduler.conf             May 03, 2022 08:49 UTC   362d                                    no

CERTIFICATE AUTHORITY   EXPIRES                  RESIDUAL TIME   EXTERNALLY MANAGED
ca                      May 01, 2031 08:49 UTC   9y              no
etcd-ca                 May 01, 2031 08:49 UTC   9y              no
front-proxy-ca          May 01, 2031 08:49 UTC   9y              no
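
The leaf certificates above expire after one year. In 1.18 they can be renewed ahead of time with the alpha command group; the control-plane static pods then need a restart to pick up the new files:

[root@master1 pki]# kubeadm alpha certs renew all        # renew everything at once
[root@master1 pki]# kubeadm alpha certs renew apiserver  # or a single certificate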

4.5 kubeconfig

Used to manage kubeconfig files

[root@master1 pki]# kubeadm alpha kubeconfig user --client-name=kube-apiserver-etcd-client
I0505 23:28:58.999787   49317 version.go:252] remote version is much newer: v1.21.0; falling back to: stable-1.18
W0505 23:29:01.024319   49317 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.74.133:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kube-apiserver-etcd-client
  name: kube-apiserver-etcd-client@kubernetes
current-context: kube-apiserver-etcd-client@kubernetes
kind: Config
preferences: {}
users:
- name: kube-apiserver-etcd-client
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM0ekNDQWN1Z0F3SUJBZ0lJTlZmQkNWZFNrYTR3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMU1ETXdPRFE1TUROYUZ3MHlNakExTURVeE5USTVNREZhTUNVeApJekFoQmdOVkJBTVRHbXQxWW1VdFlYQnBjMlZ5ZG1WeUxXVjBZMlF0WTJ4cFpXNTBNSUlCSWpBTkJna3Foa2lHCjl3MEJBUUVGQUFPQ0FROEFNSUlCQ2dLQ0FRRUFyTGc2cG9EK28zMnZpLytMVkhoV0FSbHZBZHFrWFRFYjAvR0QKY0xDeFlraDdhYUVVVmxKTjBHUFFkTkJhb3R5TGtvak9jL3Y4WEdxcVh2bGRpOUN0eVFMR2EydGNuNWlKZkQweAp1bjFNQXZIT1cxaUVDM2RLR1BXeXIraUZBVFExakk2ZXA2aDNidEVsSHkwSVVabEhQUDJ3WW5JQnV1NCtsLy9xCmsrc1lMQjZ0N0ZoalVQbmhnaHk4T2dMa0o3UmRjMWNBSDN3ejR2L0xoYy9yK0ppc0kvZnlRTXpiYVdqdE5GRTEKYk1MdnlYM0RXbmhlVnlod3EyUTZEbHhMaGVFWUJRWWxhbjNjdVQ3aG5YSm9NTGJKUnRhSVFWbktVOVJzYVlSUgpVVnRvdUZ2QkN5d21qOGlvOUdJakV6M2dMa0JPTk1BMERvVTlFZjhBcFpuZFY4VmN5d0lEQVFBQm95Y3dKVEFPCkJnTlZIUThCQWY4RUJBTUNCYUF3RXdZRFZSMGxCQXd3Q2dZSUt3WUJCUVVIQXdJd0RRWUpLb1pJaHZjTkFRRUwKQlFBRGdnRUJBTGw5NURPVlFmRTkxejgwQUFCOE5mR2x3d200ZnNHM0hjRXh2cXBudjdhb2I4WTQ3QWRvMkNBaApNY1pUdUxWQ3B2SHRUYk15YXhoRE41VzNmdnU2NXRuUVJKM21RNzhBSllsWjZzVmZWODNTNzgzOGRnYWtKckluCjRtNm5KME5sRjRyeFpZbEVwRU5yWUxBOUlYM25oZndVY0FpbG9uWG03cmlaOVIvV1FJZHZ1WXVQMFdDMUI1RUoKY1ZiTWN4dUJOZGwwTHpKa1dYWUc3Y3ZaYjR5NmR5U2FPZnBranNKdFFudzlzbm9nNHVBUW1DMENnZFZpcUx5ZwpscExuYVExR3BVeVF5bTlDSTVvRlMzSThrS2RaQmV5d1duQURCYXFjS3BKTnFRWkFRWnRQYzhXSjRIczREYlVMCjM3YnlPSEp6VUZkbWxPMU9ubDRhQWRsVXp3Y0IxemM9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBckxnNnBvRCtvMzJ2aS8rTFZIaFdBUmx2QWRxa1hURWIwL0dEY0xDeFlraDdhYUVVClZsSk4wR1BRZE5CYW90eUxrb2pPYy92OFhHcXFYdmxkaTlDdHlRTEdhMnRjbjVpSmZEMHh1bjFNQXZIT1cxaUUKQzNkS0dQV3lyK2lGQVRRMWpJNmVwNmgzYnRFbEh5MElVWmxIUFAyd1luSUJ1dTQrbC8vcWsrc1lMQjZ0N0ZoagpVUG5oZ2h5OE9nTGtKN1JkYzFjQUgzd3o0di9MaGMvcitKaXNJL2Z5UU16YmFXanRORkUxYk1MdnlYM0RXbmhlClZ5aHdxMlE2RGx4TGhlRVlCUVlsYW4zY3VUN2huWEpvTUxiSlJ0YUlRVm5LVTlSc2FZUlJVVnRvdUZ2QkN5d20KajhpbzlHSWpFejNnTGtCT05NQTBEb1U5RWY4QXBabmRWOFZjeXdJREFRQUJBb0lCQUJ1R1VISndaQ1FSeDRQNworV3hBc1JRRHhaajZDdTkrLy94S3BMTzB0TkFBMVFvRVRZVmtJRnB4VGFzUCtTR3pHOXNDU2tSWmgrSUNiWnd0CkNTZGEzaGNHaGpCZ0w2YVBYSG1jRnV5dFF3dkZGU21oZFltT1BSUzFNd0N0Z1dTcnVVem8vWWVpWlVZWHRsNjkKZ25IZWgyZkUxZk1hVUFSR0sxdDF3U0JKZXRTczIrdG5wcnhFZ3E5a1BxRkpFUEJ3SG8vRElkQ3gzRWJLZWY2NQpSSzRhMURDcWIzLzNrYzE2dGoweWwrNXFZKzVFQ2xBMjZhbTNuSFJHWVdUQ0QyUEVsNnZoUXVZMHN5bHh3djAwClgwLzFxakNVeU1DWkJVYlo1RkNpM25yOE5HMnBETnFGbEtnZnJXTXVkMkdoTEMxWUp5ekdIU3AyTm91L0U3L2cKRmVzV2ZVRUNnWUVBMVlTWUtJeS9wa3pKZFhCTVhBOUVoYnZaQ3hudk40bkdjNXQxdXZxb2h6WVBVbjhIbDRmNwpxN1Q0d2tpZEtDN29CMXNIUVdCcmg5d3UvRXVyQ0ZRYzRremtHQmZJNkwzSzBHalg3VXVCc3VOODQyelBZbmtvCjFQakJTTXdLQlVpb0ZlMnlJWnZmOU53V0FLczA3VU9VMTNmeVpodHVVcTVyYklrOVE2VXRGeU1DZ1lFQXp4V1oKSjQwa3lPTmliSml5ZFFtYkd4UG9sK0dzYjFka1JNUlVCMjk4K3l4L0JnOE4zOWppVko1bjZkbmVpazc5MklYdQprU0k5VENJdkh4ZlBkUTR0Q1JpdnFWUVcza0xvSk5jVkg3UW8rWnIzd3JOYmZrOWR3MDJ1WGZqWlpNTmx3ZmgwCkR3S0NnSFBIVVBucU4wcW1RUzVva3A0QUM4WmVKOHQ2S3FHR1Vqa0NnWUVBeTM4VStjaXpPNDhCanBFWjViK1QKWWhZWGxQSUJ3Ui9wYVBOb2NHMUhRNTZ0V2NYQitaVGJzdG5ISUh2T2RLYkg4NEs1Vm9ETDIyOXB4SUZsbjRsegpBZWVnbUtuS2pLK2VaYVVXN28xQkxycUxvOEZub2dXeGVkRWZmZjhoS2NvR2tPZTdGemNWYXF4N3QrVjBpeEVYCkFZakxHSy9hSktraHJ3N1p1ZWZxSXBzQ2dZQUpLcWFWNXB5TE8rMStheC96S0ZLeVZ5WkRtdHk4TFAwbVFoNksKR2JoSmtnV3BhZjh1T25hQ1VtUzlLRVMra0pLU0JCTzBYdlNocXgyMDNhUDBSWVZlMHJYcjQrb0RPcWoyQUlOUgozUEszWWRHM3o2S3NLNjAxMlBsdjlYVUNEZGd5UnVJMFMrTWs5bnNMTFpUZGo3TmVUVVNad042MXBybENQN0tQCnNvaTBtUUtCZ0hPRC9ZV2h0NENLNWpXMlA0UWQyeUJmL1M1UUlPakRiMk45dVgvcEk2OFNYcVNYUWQrYXZ3cjkKRjN2MFkxd3gvVXc3U1lqVy8wLytNaE4ycW5ZYWZibDI2UEtyRTM1ZTZNYXhudFMxNTJHTzVaTHFPWVByOHpDVAp6Nk9MMko0a0lONHJ5QUhsdkFqc1htUVAzTG4yZ0JESVFhb3dZWUtTa2phZE5jc1lMUWhhCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

4.6 reset

Used to reset a node after a failed cluster init

--ignore-preflight-errors stringSlice	A list of checks whose errors are shown as warnings, e.g. IsPrivilegedUser,Swap. The value 'all' ignores errors from all checks.

[root@master1 pki]# kubeadm  reset phase --help
Use this command to invoke single phase of the reset workflow

Usage:
  kubeadm reset phase [command]

Available Commands:
  cleanup-node          Run cleanup node.  #clean up the node
  preflight             Run reset pre-flight checks  #checks
  remove-etcd-member    Remove a local etcd member.  #remove the local etcd member
  update-cluster-status Remove this node from the ClusterStatus object.  #update the cluster status
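
A sketch of a complete teardown on one node; kubeadm reset itself warns that it does not clean up iptables rules or kubeconfig files, so those steps remain manual:

kubeadm reset -f                       # run every reset phase without prompting
iptables -F && iptables -t nat -F      # flush rules left behind by kube-proxy/CNI
rm -rf $HOME/.kube /etc/cni/net.d      # remove stale kubectl and CNI configuration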


5. Appendix

[root@master1 ~]# tree /etc/kubernetes/
/etc/kubernetes/
├── admin.conf
├── controller-manager.conf
├── kubelet.conf
├── manifests
│   ├── etcd.yaml
│   ├── kube-apiserver.yaml
│   ├── kube-controller-manager.yaml
│   └── kube-scheduler.yaml
├── pki
│   ├── apiserver.crt
│   ├── apiserver-etcd-client.crt
│   ├── apiserver-etcd-client.key
│   ├── apiserver.key
│   ├── apiserver-kubelet-client.crt
│   ├── apiserver-kubelet-client.key
│   ├── ca.crt
│   ├── ca.key
│   ├── etcd
│   │   ├── ca.crt
│   │   ├── ca.key
│   │   ├── healthcheck-client.crt
│   │   ├── healthcheck-client.key
│   │   ├── peer.crt
│   │   ├── peer.key
│   │   ├── server.crt
│   │   └── server.key
│   ├── front-proxy-ca.crt
│   ├── front-proxy-ca.key
│   ├── front-proxy-client.crt
│   ├── front-proxy-client.key
│   ├── sa.key
│   └── sa.pub
└── scheduler.conf

3 directories, 30 files
[root@master2 ~]# tree /etc/kubernetes/
/etc/kubernetes/
├── kubelet.conf
├── manifests
└── pki
    └── ca.crt

2 directories, 2 files


#master3 is the same

5.1 conf Files

To inspect a .crt file: openssl x509 -in apiserver.crt -text
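
A related one-liner that prints only the subject and validity window, handy when auditing everything under pki/ (a sketch):

openssl x509 -in apiserver.crt -noout -subject -dates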

5.1.1 admin.conf

[root@master1 kubernetes]# cat admin.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.74.128:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJVFVrUFJHZUxsYTB3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMU1ETXdPRFE1TUROYUZ3MHlNakExTURNd09EUTVNRFZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTJnbUFsZmVCZ3FsWG5OeEIKSVZ2V040WnBUTHNOMTRsN0I1c0hnTUQ4UUQzNEsyOUpmOFVaV2ZhUlJPR3IwK0hadXhhdldCM1F4TGQ3SDQ1RwpkTkFMMFNxWmlqRUlPL1JCc1BhS0tQcEYvdGIzaVlrWGk5Y0tzL3UxOVBLMVFHb29kOWwrUzR1Vzh1OG9tTHA2CldJQ3VjUWYwa2sxOTVJNnVHVXBRYmZpY1BRYVdLWC9yK1lYbDFhbUl2YTlGOEVJZlEzVjZQU0Jmb3BBajNpVjkKVU1Ic0dIWU1mLzlKcThscGNxNWxSZGZISkNJaVRPQ21SSjZhekIrRGpVLytES0RiRG5FaEJ2b2ZYQldYczZPWQpJbVdodEhFbENVZ3BnQWZBNjRyd3ZEaVlteERjbitLUWd5dk1GamwzNDMzMy9yWTZDNWZoUmVjSmtranJtNHI1CjFRVmUyd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFOREJBVy9qTVNRYlE4TGd5cktTRDZZYmtZY3ZYWHRaNUVodAovUXBsZnNTS1Frd1IvNzhnQjRjUTFQMHFXMmZrWnhhZjZTODJ3cGxGcEQ0SktId3VkNG1SZy9mTU5oYVdiY0tDCnRJejMydnpsY2dlMm9ydEdMSmU3MVVPcVIxY0l4c25qZHhocFk3SkRWeFNDcE9vbDkzc1ZUT3hNaTRwYjErNXEKL1MxWERLdk1FZmxQTEtQcERpTGVLZFBVVHo4Y21qdlhFUndvS0NLOHYvUVY3YStoN0gvKzlYZHc2bEkyOXpRawp6V01kamxua2RJVzQ2L1Q1RDVUMnNQVkZKak9nZEhLR2loMG9pU1ZBZWZiVzVwdTFlTktrb2h3SWJ1Q0RRM0oyCmk5ZUFsbmFvYWU0QkU0bTBzVGpTR0FLTTZqTUxTZXZiaTY1dnZCUGw3OU9CSXQ2WXZjVT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBMmdtQWxmZUJncWxYbk54QklWdldONFpwVExzTjE0bDdCNXNIZ01EOFFEMzRLMjlKCmY4VVpXZmFSUk9HcjArSFp1eGF2V0IzUXhMZDdINDVHZE5BTDBTcVppakVJTy9SQnNQYUtLUHBGL3RiM2lZa1gKaTljS3MvdTE5UEsxUUdvb2Q5bCtTNHVXOHU4b21McDZXSUN1Y1FmMGtrMTk1STZ1R1VwUWJmaWNQUWFXS1gvcgorWVhsMWFtSXZhOUY4RUlmUTNWNlBTQmZvcEFqM2lWOVVNSHNHSFlNZi85SnE4bHBjcTVsUmRmSEpDSWlUT0NtClJKNmF6QitEalUvK0RLRGJEbkVoQnZvZlhCV1hzNk9ZSW1XaHRIRWxDVWdwZ0FmQTY0cnd2RGlZbXhEY24rS1EKZ3l2TUZqbDM0MzMzL3JZNkM1ZmhSZWNKa2tqcm00cjUxUVZlMndJREFRQUJBb0lCQUhQNjdBQloyUFZVK1ByQwptbzZSR0dFT3lZSjhXYitXTFBCOXdiNzJhUGdQUHF4MEZTZTNBMlk4WjBlNXR6b05BRkdwbm5vRDJpSlo2MDk4CjBmT2ZHem9YSy9jN1g4THNpZWtGSzdiaWNrczl0QXpmOUx0NUZ3Tm9XSURFZmkrV2lKSkFDaE5MWEc4N1VsL3oKaWRMOEdFNmR5Ylh0TEpOZ1pqR2p1eWJVUU4rZ1h1VTFZdUo1NEtEWnFFblp4Uk1PT0pqMm1YU2dZWW9kNTFqMQpyZkZ0Z0xidGVLckhBSFFubzhheDdOQjVudzh4U3lBTDdVbGxnNWl5MDRwNHpzcFBRZnpOckhwdytjMG5FR3lICmdIa3pkdC94RUJKNkg0YWk1bnRMdVJGSU54ZE5oeEFEM2d5YU1SRFBzLzF2N3E5SFNTem91ejR1Ri94WURaYXgKenIrWjlNa0NnWUVBNXhOUXl6SUsvSFAwOUYxdHpDMFgzUDg0eEc5YWNXTUxMYkxzdGdJYWpwSWxzQ1JGMzV4cgpYeUtSeFpOb0JBS1M4RnhlZ1ZkbmJKbUNvL0FwZHVtb2ZJZGovYXEwQ1VHK3FVV3dFcUVlYXRMSWhUdUE0aFFkClg4dEJLSStndFU1alpqdmMzSjlFSGczWnpKSFUwc0UyKzAyTDVjb3BZZWRZMGczaTJsWmIwOTBDZ1lFQThZNG8KeUZtMFkxZjdCamUvdGNnYTVEMWEyaG5POWozUjNic1lMb0xrVE52MWpmMm9PTG0vMTBKQnhVVkFQd1lYbm8zUAo0U3FPallsQlNxUjRKOTZCd1hOaGRmenoxS0MwSXpXNUZ2T1lHRFpXSXBuMFMxZy9KTnRqOVdyS1RpWjNEenU0CjBQbGx3dzZXaE5hWmlpeWs0SDYvT0Z4NFR6MzE5NzBWelRHWVRoY0NnWUFOMGtueTNYdHF2a1RZbVA0SVNHbzAKL2M4WGNOR29GcFNFbHo4eFk4N1MyRXNJemlLZnpXdGV0V0tpdnI1cC92MXJBeHRrQVNaZWlKQVgzaldjdHowcwp0YXgxYjlCMC9VbTZOa0RoM0dGRlluWThBZU1qb3JCZkduazdROXdJL0RkVjFoN1AwM2J2bFVTQngvZEM0K3UxCi9GMXgwVFhJZFY0S3NtbnZSVnNZd1FLQmdRRFpyTlBQaUJib2x6WWM2a3dXVWhiNXF0aWVSamVjNnlTZC9hWFMKOUIwcnJlUGdhcjhYTHp4VGpOK2NGOFhIaFlQdlc3Z0RIc2lMZnk2WlJ4RUlUSmo5YlM1Y2x2QmJvZDN6Qk15ZwpoQytCVWlYWTFJZXpCZmtSQzZ0T1UwZXZtVFlkUWlKUUh3NjI4Z1J0L0wwc0tRTURVdlNhbzZtL0x3VGlsVUI2ClFzRVBUUUtCZ0ZpWUpLZklnZ0NqamltRWZZb1ZxaXp6MnVUMmtPWHhCNlhycUJDbGlXclNCUElFT2lmcjIraU4KTmx6MTBIcTkzRWVUd0VlbUhpQy9DVmR5amh3N2JjY2gvQWZWLzZJSzhoWHRlUmhRcDZvdnpFSUtVa0NYQWx0Mwo2RzA3RUw2Qk9JVFgwbFBIWlYzUDBDRGVvRnFXSUd6RHpnUHA2ak40SVhPdGQwc0RPbFZkCi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

5.1.2 controller-manager.conf

[root@master1 kubernetes]# cat controller-manager.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.74.128:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM1ekNDQWMrZ0F3SUJBZ0lJVWpSRlRjUG9MWUl3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRBMU1ETXdPRFE1TUROYUZ3MHlNakExTURNd09EUTVNRFZhTUNreApKekFsQmdOVkJBTVRIbk41YzNSbGJUcHJkV0psTFdOdmJuUnliMnhzWlhJdGJXRnVZV2RsY2pDQ0FTSXdEUVlKCktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQUxGd3NVRWU3am5nNGQwVHAwclAxSklMVzJ2aFNIMzgKY09hdnhIRnMzOForZHhXNVJ5NDB2TTRyeldmakhtRUhyQXpydmxUWjhHL1FKS0xWSlYrd0R5c2MraEJxSEhYegpyUzF0MHlOTWZERktHQU04dDRSQjQxSjVCaUhIcFFOYWl4cVk4WGhPbnZsLzJaV3dhVHNoMGVEcFdpKy9YbUFpCm9oc2xjL2U0Nk1Rb1hSdmlGK0Uva0o0K05YeEFQdXVCeVBNQVd1Y0VUeTlrRXU0d1ladkp6bVJEcnY4SjdIam0KQVNkL2JoYjlzc0s1QlFITlE5QncyaUMvUWp0YTZZVTRlQ2l3RTFEOGNndUszbzMvcVFsMkVkdGNoSEtHMGxmTgoyY3JvQW1yMWFqZkRwUFpvRERxa0lDU0x0STdtL3dNZ1QybUdyaDgzUFcxTFdjSjVyR1FRWU1FQ0F3RUFBYU1uCk1DVXdEZ1lEVlIwUEFRSC9CQVFEQWdXZ01CTUdBMVVkSlFRTU1Bb0dDQ3NHQVFVRkJ3TUNNQTBHQ1NxR1NJYjMKRFFFQkN3VUFBNElCQVFBelBjUUNPOElHN0lhSmdqdlN4RWYzU0NER3RXT0JZd0ZOYUtZck9LUDVKZGlJN01ZZQpiTnBEZWZKR2szd1V6bEVMYm5NOTQ1NkpjNExaYXp1WFFwUkF5d0VJelVTUHdKV2JicERsQlpIWnV3Ym5wd1lJCkdhZExyUVRXVXBBV3RaSXhOb1RjbzJlK09menVack9IS0YzYjR2akljWGdUV2VtVmJqazJ4RUd6dHpDMG5UZ04KZFR1UW1GaDNJRnl1VmdjbjNqZytTeTZQb3ZKV1lianFBeW9MWlFkUGJUQnl0YXZQcWcrNjhRNXdZamVpZXhTZwo1bHdlMmVrcHB3VUkxVU1oZlk5a2ZBY1Bma0NWbjdxKzEveGtpdlF0dTJ0UTN6dDR0dzVnaks5ZkR5NTROejlPCkJJd1ZiYTBRdVgySW1OemVCR2EvYzVNUjV2S09tcFpHSnkrZwotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFb3dJQkFBS0NBUUVBc1hDeFFSN3VPZURoM1JPblNzL1VrZ3RiYStGSWZmeHc1cS9FY1d6ZnhuNTNGYmxICkxqUzh6aXZOWitNZVlRZXNET3UrVk5ud2I5QWtvdFVsWDdBUEt4ejZFR29jZGZPdExXM1RJMHg4TVVvWUF6eTMKaEVIalVua0dJY2VsQTFxTEdwanhlRTZlK1gvWmxiQnBPeUhSNE9sYUw3OWVZQ0tpR3lWejk3am94Q2hkRytJWAo0VCtRbmo0MWZFQSs2NEhJOHdCYTV3UlBMMlFTN2pCaG04bk9aRU91L3duc2VPWUJKMzl1RnYyeXdya0ZBYzFECjBIRGFJTDlDTzFycGhUaDRLTEFUVVB4eUM0cmVqZitwQ1hZUjIxeUVjb2JTVjgzWnl1Z0NhdlZxTjhPazltZ00KT3FRZ0pJdTBqdWIvQXlCUGFZYXVIemM5YlV0WndubXNaQkJnd1FJREFRQUJBb0lCQVFDVUV5b280UW9Hek85UAowYzNpOWFzOE1UUWF4QWI5OUVPM2oyak5Dd0Yzb1NQNXdnTnZ3TnpxNU16bWJEZDIyN010bVRIZGwzNDVvU1poCnFLUW14VUx6Ukp3K1JINzV3OTk2TU5ObytyUU5ZZnJHQU01WkZhOEJyVE43enlLYXVOMnExWVYxVTQ4QlFUc3YKMnVjR1RNUGNBSUNkcGdLNUVVM2NmNVhXWGI0SnF3MjlzdWJER0ZtL2kzUlpiTzlTejFBZUFEU0tXN1lGS2thMwpQRzFsWklUenlndkRjV20zK2U5TlRYR3VKTVNHK1FXOGlSUWJkZk9HbGdtRDNUa0FzUGpxYkphZ0Z3NGpoVEErCjJwODhDNVRvVVVkakRhS0d3RTBWcmpsWUUxandBRnRoWTY4dXd0T0l1MHlWYlN6R3RnOWxlNUVVMEsvWGdnekcKOGd5TWZPQUJBb0dCQU5oQmRjNm9SdVZ6ZmpsV3NQK0QybExjbTZFVDZwNU15T1A0WlJ4TU0yejJoUTR2UGNRZQorTXd4UHA3YUJWczQvVlJadk5JSWJIcTdHZCtKdXNFOFdyeHQ5Z0xBcDU3QTdTUXdkeWUvMGtuZk5tSDFWdTNuCkxDM3AybWU3SnhETlF6bTdRMVpRWVM5TTl4RHBpak9oc2RHY3BRN3hMODk3cUV2aFBaSUtQL0pCQW9HQkFOSU4KQStWNVNsRTYwWDNtZHhPMHEzV0ZCaEFKMWtMNnhHNThjNnpYdGgvYnBXdFRlNXF2Q1FteUhyZ1I1a2pvSzlsZgpYeUdRTEtIMHhRRVBzRG94cGpjMmZzemhXN1cxNVJSRkpyMlM1Q0kybndZSVdUUG5vMXBEbG8rWXJ5TXAxWnZDCkxrVlpHOFpZeURPeURIY3VueStMSHM2Z3NseUdpMTliK2dhZEo4NkJBb0dBR0x0THhNbWI2Z3ZPU0xKd1paaG4KdEloRVNDU2w5VnFrc3VXcWNwVUlZSkxFM3IxcVcrNksxNWRlS1A2WUZEbXRSeU5JSStFUXZ1eDg1Z0t6Vi93VwpDR3l1OE51bGo5TlNpNHY3WkpGY2RGUlJ2Tnc1QjlZalNGRHhTR0d2OHd6MmZqaTdWN2l6bEp4QnVTNXNQc0ZrCk82dWxlTkwrZThVUmx6UDRQYVpzYjhFQ2dZQWVtOExybDRjYTJ5Vlg0Vk9NelpFR3FRRy9LSS9PWnRoaytVR3AKK0MwVDYxL3BpZHJES2FwNWZUazR2WEwvUU1YVEFURE5wVUs3dnYxT01Fa1AwZGhVeDE0bTROZ0tYSjBySFFDTwpNMitIQk1xYmlHL25QbVB4YlZQdFRPU0lqVG9SWG5SN3FvWi9tc1JodEJwWTY3UktxMDByOHdMS3ROaHVadXJDCk4vaHJBUUtCZ0VHalk3U0ZrbCtudGgvUkh0aWFublpSallicnFaNXdUbk1EYjMwTEJKOFVwd01zQUJiWnYweFgKdk9wVXROdWVMV243QWtEVCtRYWtIanBDbmVDQUNRcE55VVhRZmJtYVgwaEFkbDVKVlRrWUZHaDUzcmhOL1UzRAowc3FZbDRjOWJjYWVua2xZMUdqbUNUcmlwTDFsVjFjZlA1bklXTTRVbUlCSm9GV1RYS2VECi0tLS0tRU5EIFJTQSBQUklWQVRFIEtFWS0tLS0tCg==

5.1.3 kubelet.conf

[root@master1 kubernetes]# cat kubelet.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1EVXdNekE0TkRrd00xb1hEVE14TURVd01UQTRORGt3TTFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTnRpCkpkNW41YlhIMURISm9hTWpxQ3Z6bytjNHZHQmVsYkxjaFhrL3p2eUpaNk5vQU1PdG9PRWQwMC9YWmlYSktJUEEKWm5xbUJzZktvV2FIdEpUWDViSzlyZVdEaXJPM3Q2QUtMRG5tMzFiRHZuZjF6c0Z3dnlCU3RhUkFOUTRzZE5tbApCQWg1SFlpdmJzbjc2S0piVk1jdlIvSHJjcmw5U0IyWEpCMkFRUHRuOHloay94MUhONGxma01KVU5CTnk4bjV0ClMzUmRjZ1lTVEhJQ0FGLzFxdkl6c01ONjNEdHBIai9LNjJBNTAwN3RXUEF4MVpPdGFwL1pxM1UyeTk3S1gxYnEKdk1SanFyM0lXZW9ZMWdsWnJvdXRjcVpoelh2VmVKRE9DY1pVNFEyNmlFMzJyYWRFT3Y0bTVvZEUzSDdEYjBRWgpYMVNUNzY4SGRYTWtZWEJxOWJNQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFNZ0cyeVFzb0dyYkpDRE91ZXVWdVMyV2hlRHUKVnBkQlFIdFFxSkFGSlNSQ1M4aW9yQzE5K2gxeWgxOEhJNWd4eTljUGFBTXkwSmczRVlwUUZqbTkxQWpJWFBOQgp3Y2h0R1NLa1BpaTdjempoV2VpdXdJbDE5eElrM2lsY1dtNTBIU2FaVWpzMXVCUmpMajNsOFBFOEJITlNhYXBDCmE5NkFjclpnVmtVOFFpNllPaUxDOVpoMExiQkNWaDZIV2pRMGJUVnV2aHVlY0x5WlJpTlovUEVmdmhIa0Q3ZFIKK0RyVnBoNXVHMXdmL3ZqQXBhQmMvTWZ2TGswMlJ4dTNZR0VNbGd2TndPMStoOXJ3anRJbkNWdVIyZ3hIYjJtZgpvb0RMRHZTMDk0dXBMNkR6bXZqdkZSU1pjWElLaklQZ2xaN0VGdlprWWYrcjUybW5aeVE0NVVPcUlwcz0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://192.168.74.128:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:master1
  name: system:node:master1@kubernetes
current-context: system:node:master1@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:master1
  user:
    client-certificate: /var/lib/kubelet/pki/kubelet-client-current.pem
    client-key: /var/lib/kubelet/pki/kubelet-client-current.pem    

# kubelet.conf on master2
[root@master2 kubernetes]# cat kubelet.conf
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUS
