K8S Binary Installation
- 1. Environment Preparation
  - 1.1 Installation Plan
  - 1.2 System Settings
- 2. Install Docker
- 3. TLS Certificates
  - 3.1 Certificate Tools
  - 3.2 Certificate Inventory
  - 3.3 CA Certificate
  - 3.4 etcd Certificate
  - 3.5 kube-apiserver Certificate
  - 3.6 kube-controller-manager Certificate
  - 3.7 kube-scheduler Certificate
  - 3.8 admin Certificate
  - 3.9 kube-proxy Certificate
  - 3.10 Certificate Details
  - 3.11 Distributing Certificates
- 4. Install etcd
  - 4.1 Node etcd-1
  - 4.2 Other Nodes
  - 4.3 Startup
- 5. Master Node
  - 5.1 Installation Prep
  - 5.2 apiserver
    - 5.2.1 TLS Bootstrapping Token
    - 5.2.2 Configuration File
    - 5.2.3 Start on Boot
  - 5.3 controller-manager
    - 5.3.1 kubeconfig File
    - 5.3.2 Configuration File
    - 5.3.3 Start on Boot
  - 5.4 scheduler
    - 5.4.1 kubeconfig File
    - 5.4.2 Configuration File
    - 5.4.3 Start on Boot
  - 5.5 kubelet
    - 5.5.1 Parameter Configuration File
    - 5.5.2 kubeconfig File
    - 5.5.3 Configuration File
    - 5.5.4 Authorize kubelet-bootstrap to Request Certificates
    - 5.5.5 Start on Boot
    - 5.5.6 Join the Cluster
  - 5.6 kube-proxy
    - 5.6.1 Parameter Configuration File
    - 5.6.2 kubeconfig File
    - 5.6.3 Configuration File
    - 5.6.4 Start on Boot
  - 5.7 Authorize `apiserver` to Access `kubelet`
  - 5.8 Cluster Management
    - 5.8.1 kubeconfig File
    - 5.8.2 Cluster Configuration Info
    - 5.8.3 Cluster Status
  - 5.9 Command Completion
- 6. Worker Nodes
  - 6.1 Clone Prep (run on master)
  - 6.2 Clone Nodes
  - 6.3 Update Configuration
  - 6.4 Start on Boot
  - 6.5 Join the Cluster (run on master)
- 7. CNI Networking
  - 7.1 Install a CNI Network Plugin
  - 7.2 calico
  - 7.3 flannel
- 8. Addons
  - 8.1 CoreDNS
  - 8.2 Dashboard
- 9. High Availability
  - 9.1 Preparation (Master-1)
    - 9.1.1 kube-apiserver Certificate Update
    - 9.1.2 Add Hosts
  - 9.2 Scale Out Masters
    - 9.2.1 Initialization
    - 9.2.2 Clone
    - 9.2.3 Update Configuration
    - 9.2.4 Start on Boot
    - 9.2.5 Cluster Status
    - 9.2.6 Join the Cluster
    - 9.2.7 Labels and Taints
  - 9.3 HA Load Balancing
    - 9.3.1 Install Software
    - 9.3.2 Configure Nginx
    - 9.3.3 keepalived Configuration (master)
    - 9.3.4 keepalived Configuration (slave)
    - 9.3.5 keepalived Check Script
    - 9.3.6 Start Services
    - 9.3.7 Status Check
    - 9.3.8 Point Worker Nodes at the LB VIP
- 10. Remove Nodes
1. Environment Preparation
1.1 Installation Plan
Role | IP | Components |
---|---|---|
k8s-master1 | 192.168.80.45 | etcd, api-server, controller-manager, scheduler, docker |
k8s-node01 | 192.168.80.46 | etcd, kubelet, kube-proxy, docker |
k8s-node02 | 192.168.80.47 | etcd, kubelet, kube-proxy, docker |
Software versions:
Software | Version | Notes |
---|---|---|
OS | Ubuntu 16.04.6 LTS | |
Kubernetes | 1.19.11 | |
Etcd | v3.4.15 | |
Docker | 19.03.9 | |
1.2 System Settings
# 1. Set the hostname (run the matching command on each host)
hostnamectl set-hostname k8s-master1
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
# 2. Hostname resolution
cat >> /etc/hosts <<EOF
192.168.80.45 k8s-master1
192.168.80.46 k8s-node01
192.168.80.47 k8s-node02
EOF
# 3. Disable swap
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
# 4. Pass bridged IPv4 traffic to iptables chains
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
# 5. DNS resolution
echo "nameserver 8.8.8.8" >> /etc/resolv.conf
# 6. Time synchronization
apt install ntpdate -y
ntpdate ntp1.aliyun.com
crontab -e
*/30 * * * * /usr/sbin/ntpdate -u ntp1.aliyun.com >> /var/log/ntpdate.log 2>&1
# 7. Log directory
mkdir -p /var/log/kubernetes
2. Install Docker
mkdir -p $HOME/k8s-install && cd $HOME/k8s-install
# 1. Download the release tarball
wget https://download.docker.com/linux/static/stable/x86_64/docker-19.03.9.tgz
tar zxvf docker-19.03.9.tgz
mv docker/* /usr/bin
docker version
# 2. systemd unit for start on boot
cat > /lib/systemd/system/docker.service << EOF
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/dockerd
ExecReload=/bin/kill -s HUP \$MAINPID
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TimeoutStartSec=0
Delegate=yes
KillMode=process
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target
EOF
# 3. Start
systemctl daemon-reload
systemctl start docker
systemctl status docker
systemctl enable docker
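Optionally verify the daemon answers on the local socket; if this prints 19.03.9 both the client and the daemon are working:
# Query the running daemon for its version
docker info --format '{{.ServerVersion}}'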
3. TLS Certificates
3.1 Certificate Tools
mkdir -p $HOME/k8s-install && cd $HOME/k8s-install
wget https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl_1.5.0_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssljson_1.5.0_linux_amd64
wget https://github.com/cloudflare/cfssl/releases/download/v1.5.0/cfssl-certinfo_1.5.0_linux_amd64
mv cfssl_1.5.0_linux_amd64 /usr/local/bin/cfssl
mv cfssljson_1.5.0_linux_amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_1.5.0_linux_amd64 /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl*
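Optionally confirm the tools are on PATH before generating any certificates:
# Should print the cfssl version and runtime
cfssl version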
3.2 证书归类
组件 | 证书 | 密钥 | 备注 |
---|---|---|---|
etcd | ca.pem, etcd.pem | etcd-key.pem | |
apiserver | ca.pem, apiserver.pem | apiserver-key.pem | |
controller-manager | ca.pem, kube-controller-manager.pem | ca-key.pem, kube-controller-manager-key.pem | kubeconfig |
scheduler | ca.pem, kube-scheduler.pem | kube-scheduler-key.pem | kubeconfig |
kubelet | ca.pem | | kubeconfig + token |
kube-proxy | ca.pem, kube-proxy.pem | kube-proxy-key.pem | kubeconfig |
kubectl | ca.pem, admin.pem | admin-key.pem | |
3.3 CA Certificate
CA: Certificate Authority
mkdir -p /root/ssl && cd /root/ssl
# 1. CA config file
cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
# 2. CA certificate signing request (CSR) file
cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}
EOF
# 3. Generate the CA certificate and key
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
ls ca*
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem
3.4 etcd Certificate
Note: the IP addresses in hosts are the host IPs of the etcd cluster members.
# 1. Certificate signing request file
cat > etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    "localhost",
    "192.168.80.45",
    "192.168.80.46",
    "192.168.80.47"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "etcd",
      "OU": "System"
    }
  ]
}
EOF
# 2. Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes etcd-csr.json | cfssljson -bare etcd
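Before distributing etcd.pem it is worth confirming the SAN list really covers every member IP; one way, using the cfssl-certinfo tool installed above:
# Print just the SAN block of the parsed certificate
cfssl-certinfo -cert etcd.pem | sed -n '/"sans"/,/]/p'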
3.5 kube-apiserver Certificate
Note: the IP addresses in hosts are the host IPs of the kubernetes master cluster plus the kubernetes service IP (usually the first IP of the service-cluster-ip-range passed to kube-apiserver, e.g. 10.254.0.1).
# 1. Certificate signing request file
cat > apiserver-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "localhost",
    "192.168.80.1",
    "192.168.80.2",
    "192.168.80.45",
    "192.168.80.46",
    "192.168.80.47",
    "10.254.0.1",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# 2. Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes apiserver-csr.json | cfssljson -bare apiserver
3.6 kube-controller-manager Certificate
# 1. Certificate signing request file
cat > kube-controller-manager-csr.json <<EOF
{
  "CN": "system:kube-controller-manager",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
# 2. Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-controller-manager-csr.json | cfssljson -bare kube-controller-manager
3.7 kube-scheduler Certificate
# 1. Certificate signing request file
cat > kube-scheduler-csr.json << EOF
{
  "CN": "system:kube-scheduler",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
# 2. Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-scheduler-csr.json | cfssljson -bare kube-scheduler
3.8 admin Certificate
- kube-apiserver later uses RBAC to authorize client requests (e.g. from kubelet, kube-proxy, Pods); it predefines a number of RoleBindings for RBAC, such as cluster-admin, which binds the Group system:masters to the Role cluster-admin, granting permission to call all kube-apiserver APIs.
- O sets this certificate's Group to system:masters. When kubectl accesses kube-apiserver with this certificate, authentication passes because the certificate is signed by the CA, and because the certificate's group is the pre-authorized system:masters, it is granted access to all APIs.
# 1. Certificate signing request file
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}
EOF
# 2. Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
ls admin*
admin.csr admin-csr.json admin-key.pem admin.pem
After the kubernetes cluster is built, you can run kubectl get clusterrolebinding cluster-admin -o yaml and see that the subject of the clusterrolebinding cluster-admin has kind Group and name system:masters, while the roleRef object is the ClusterRole cluster-admin. In other words, every user or serviceAccount in the system:masters Group holds the cluster-admin role, which is why kubectl commands carry full administrative rights over the cluster.
kubectl get clusterrolebinding cluster-admin -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
annotations:
rbac.authorization.kubernetes.io/autoupdate: "true"
creationTimestamp: 2017-04-11T11:20:42Z
labels:
kubernetes.io/bootstrapping: rbac-defaults
name: cluster-admin
resourceVersion: "52"
selfLink: /apis/rbac.authorization.k8s.io/v1/clusterrolebindings/cluster-admin
uid: e61b97b2-1ea8-11e7-8cd7-f4e9d49f8ed0
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
kind: Group
name: system:masters
3.9 kube-proxy Certificate
- CN sets this certificate's User to system:kube-proxy; the kube-apiserver predefined RoleBinding system:node-proxier binds the User system:kube-proxy to the Role system:node-proxier, which grants permission to call kube-apiserver's proxy-related APIs.
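Once the cluster is running, the predefined binding mentioned above can be inspected directly, analogous to the cluster-admin check in the admin section:
kubectl describe clusterrolebinding system:node-proxier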
# 1. Certificate signing request file
cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}
EOF
# 2. Generate the certificate
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
3.10 Certificate Details
cfssl-certinfo -cert apiserver.pem
{
"subject": {
"common_name": "kubernetes",
"country": "CN",
"organization": "k8s",
"organizational_unit": "System",
"locality": "BeiJing",
"province": "BeiJing",
"names": [
"CN",
"BeiJing",
"BeiJing",
"k8s",
"System",
"kubernetes"
]
},
"issuer": {
"common_name": "kubernetes",
"country": "CN",
"organization": "k8s",
"organizational_unit": "System",
"locality": "BeiJing",
"province": "BeiJing",
"names": [
"CN",
"BeiJing",
"BeiJing",
"k8s",
"System",
"kubernetes"
]
},
"serial_number": "275867496157961939649344217740970264800633176866",
"sans": [
"localhost",
"kubernetes",
"kubernetes.default",
"kubernetes.default.svc",
"kubernetes.default.svc.cluster",
"kubernetes.default.svc.cluster.local",
"127.0.0.1",
"192.168.80.1",
"192.168.80.2",
"192.168.80.45",
"192.168.80.46",
"192.168.80.47",
"10.254.0.1"
],
"not_before": "2021-06-09T05:20:00Z",
"not_after": "2031-06-07T05:20:00Z",
"sigalg": "SHA256WithRSA",
"authority_key_id": "",
"subject_key_id": "E3:84:0F:9C:00:07:4A:8F:5C:B2:35:45:A0:50:4D:3E:9D:C0:B4:D0",
"pem": "-----BEGIN CERTIFICATE-----\nMIIEezCCA2OgAwIBAgIUMFJTjEXe9sDDDpPXcAiUBt5+QyIwDQYJKoZIhvcNAQEL\nBQAwZTELMAkGA1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0Jl\naUppbmcxDDAKBgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwpr\ndWJlcm5ldGVzMB4XDTIxMDYwOTA1MjAwMFoXDTMxMDYwNzA1MjAwMFowZTELMAkG\nA1UEBhMCQ04xEDAOBgNVBAgTB0JlaUppbmcxEDAOBgNVBAcTB0JlaUppbmcxDDAK\nBgNVBAoTA2s4czEPMA0GA1UECxMGU3lzdGVtMRMwEQYDVQQDEwprdWJlcm5ldGVz\nMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAw0BpjZQNEd6Oqu8ubEWG\nhbdwJecOTCfdbY+VLIKEm0Tys8ZBlu7OrtZ8Rj5OAZTXil0ZJz+hvHo8YTNJJ16g\njHV88VSpfoXD5DE59PITSFwfY1lWHVctC3Ddo9CM9cU9Ty+Kf29XcrLbc/VNGZTB\ncvKXoM3b6NkBKOdKphVjUvafhKC6ls2ac5uub3uqZTpPgBs/1PvINKNZkP5U6lUV\noTBMAT+qbQ9aggA+bA+WegL3jHU78ngo1XMnsb1HfAjwKDOf66smNJ/K+YjD+Cul\ngjpyqOQKGlz5xqXUcBgIMO9djI4f5hvaMsSje1aSJ/oh5AfQbxQsGjajlS80ED08\nxwIDAQABo4IBITCCAR0wDgYDVR0PAQH/BAQDAgWgMB0GA1UdJQQWMBQGCCsGAQUF\nBwMBBggrBgEFBQcDAjAMBgNVHRMBAf8EAjAAMB0GA1UdDgQWBBTjhA+cAAdKj1yy\nNUWgUE0+ncC00DCBvgYDVR0RBIG2MIGzgglsb2NhbGhvc3SCCmt1YmVybmV0ZXOC\nEmt1YmVybmV0ZXMuZGVmYXVsdIIWa3ViZXJuZXRlcy5kZWZhdWx0LnN2Y4Iea3Vi\nZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVygiRrdWJlcm5ldGVzLmRlZmF1bHQu\nc3ZjLmNsdXN0ZXIubG9jYWyHBH8AAAGHBMCoUAGHBMCoUAKHBMCoUC2HBMCoUC6H\nBMCoUC+HBAr+AAEwDQYJKoZIhvcNAQELBQADggEBAG+RUKp4cxz4EOqmAPiczkl2\nHciAg01RbCavoLoUWmoDDAQf7PIhQF2pLewFCwR5w6SwvCJAVdg+eHdefJ2MBtJr\nKQgbmEOBXd4Z5ZqBeSP6ViHvb1pKtRSldznZLfxjsVd0bN3na/JmS4TZ90SqLLtL\nN4CgGfTs2AfrtbtWIqewDMS9aWjBK8VePzLBmsdLddD4WYQOnl+QjdrX9bbqYRCG\nQo3CKvJ3JZqh6AJHcgKsm0702uMU/TCJwe1M8I8SpYrwA74uCBy3O9jXed1rZlrp\nRVURB6Ro7SMLjiadTJyf6AbLPMmZcPKHhZ1XG07q8Od2Kd+KVx1PxF3et6OOteE=\n-----END CERTIFICATE-----\n"
}
3.11 Distributing Certificates
Distribute the certificates to all nodes:
mkdir -p /etc/kubernetes/pki
cp *.pem /etc/kubernetes/pki
tar cvf pki.tar /etc/kubernetes/pki
scp pki.tar ubuntu@192.168.80.46:/home/ubuntu
scp pki.tar ubuntu@192.168.80.47:/home/ubuntu
sudo -i
cd / && mv /home/ubuntu/pki.tar / && tar xvf pki.tar && rm -f pki.tar
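A quick way to confirm nothing was corrupted in transit: run the following on every node and compare the checksums across hosts.
# Fingerprints must match on all three nodes
md5sum /etc/kubernetes/pki/*.pem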
4. Install etcd
4.1 Node etcd-1
mkdir -p $HOME/k8s-install && cd $HOME/k8s-install
# 1. Download and install
wget https://github.com/etcd-io/etcd/releases/download/v3.4.15/etcd-v3.4.15-linux-amd64.tar.gz
tar zxvf etcd-v3.4.15-linux-amd64.tar.gz
mv etcd-v3.4.15-linux-amd64/{etcd,etcdctl} /usr/bin/
# 2. Configuration file
mkdir -p /etc/etcd
cat > /etc/etcd/etcd.conf << EOF
#[Member]
ETCD_NAME="etcd-1"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.80.45:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.80.45:2379,https://127.0.0.1:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.80.45:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.80.45:2379"
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.80.45:2380,etcd-2=https://192.168.80.46:2380,etcd-3=https://192.168.80.47:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
# 3. Start on boot
cat > /lib/systemd/system/etcd.service << EOF
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd \\
  --cert-file=/etc/kubernetes/pki/etcd.pem \\
  --key-file=/etc/kubernetes/pki/etcd-key.pem \\
  --peer-cert-file=/etc/kubernetes/pki/etcd.pem \\
  --peer-key-file=/etc/kubernetes/pki/etcd-key.pem \\
  --trusted-ca-file=/etc/kubernetes/pki/ca.pem \\
  --peer-trusted-ca-file=/etc/kubernetes/pki/ca.pem \\
  --logger=zap
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
# 4. Prepare the clone archive
tar cvf etcd-clone.tar /usr/bin/etcd* /etc/etcd /lib/systemd/system/etcd.service
scp etcd-clone.tar ubuntu@192.168.80.46:/home/ubuntu
scp etcd-clone.tar ubuntu@192.168.80.47:/home/ubuntu
4.2 Other Nodes
# 1. Extract the clone archive
sudo -i
cd / && mv /home/ubuntu/etcd-clone.tar / && tar xvf etcd-clone.tar && rm -f etcd-clone.tar
# 2. Edit the configuration file
vi /etc/etcd/etcd.conf
#[Member]
ETCD_NAME="etcd-2" # change to local
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.80.46:2380" # change to local
ETCD_LISTEN_CLIENT_URLS="https://192.168.80.46:2379,https://127.0.0.1:2379" # change to local
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.80.46:2380" # change to local
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.80.46:2379" # change to local
ETCD_INITIAL_CLUSTER="etcd-1=https://192.168.80.45:2380,etcd-2=https://192.168.80.46:2380,etcd-3=https://192.168.80.47:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
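Instead of editing the per-node values by hand, a small sed sketch like the following can rewrite them; it assumes the archive was cloned from etcd-1 and that NODE_IP and ETCD_NAME are set to this host's values:
# Example values for the second node; adjust per host
NODE_IP=192.168.80.46
ETCD_NAME=etcd-2
# Rewrite only the five per-node variables; ETCD_INITIAL_CLUSTER stays untouched
sed -i \
  -e "s|^ETCD_NAME=.*|ETCD_NAME=\"${ETCD_NAME}\"|" \
  -e "s|^ETCD_LISTEN_PEER_URLS=.*|ETCD_LISTEN_PEER_URLS=\"https://${NODE_IP}:2380\"|" \
  -e "s|^ETCD_LISTEN_CLIENT_URLS=.*|ETCD_LISTEN_CLIENT_URLS=\"https://${NODE_IP}:2379,https://127.0.0.1:2379\"|" \
  -e "s|^ETCD_INITIAL_ADVERTISE_PEER_URLS=.*|ETCD_INITIAL_ADVERTISE_PEER_URLS=\"https://${NODE_IP}:2380\"|" \
  -e "s|^ETCD_ADVERTISE_CLIENT_URLS=.*|ETCD_ADVERTISE_CLIENT_URLS=\"https://${NODE_IP}:2379\"|" \
  /etc/etcd/etcd.conf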
4.3 Startup
# 1. Start on boot
systemctl daemon-reload
systemctl start etcd
systemctl status etcd
systemctl enable etcd
# 2. Member status
etcdctl member list --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd.pem --key=/etc/kubernetes/pki/etcd-key.pem --write-out=table
+------------------+---------+--------+----------------------------+----------------------------+------------+
| ID | STATUS | NAME | PEER ADDRS | CLIENT ADDRS | IS LEARNER |
+------------------+---------+--------+----------------------------+----------------------------+------------+
| 46bc5ad35e418584 | started | etcd-1 | https://192.168.80.45:2380 | https://192.168.80.45:2379 | false |
| 8f347c1327049bc8 | started | etcd-3 | https://192.168.80.47:2380 | https://192.168.80.47:2379 | false |
| b01e7a29099f3eb8 | started | etcd-2 | https://192.168.80.46:2380 | https://192.168.80.46:2379 | false |
+------------------+---------+--------+----------------------------+----------------------------+------------+
# 3. Health status
etcdctl endpoint health --cacert=/etc/kubernetes/pki/ca.pem --cert=/etc/kubernetes/pki/etcd.pem --key=/etc/kubernetes/pki/etcd-key.pem --cluster --write-out=table
+----------------------------+--------+-------------+-------+
| ENDPOINT | HEALTH | TOOK | ERROR |
+----------------------------+--------+-------------+-------+
| https://192.168.80.47:2379 | true | 20.973639ms | |
| https://192.168.80.46:2379 | true | 29.842299ms | |
| https://192.168.80.45:2379 | true | 30.564766ms | |
+----------------------------+--------+-------------+-------+
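An optional read/write smoke test against the cluster; etcdctl v3 honors these environment variables, and /k8s-install/smoke is just an arbitrary scratch key:
export ETCDCTL_CACERT=/etc/kubernetes/pki/ca.pem
export ETCDCTL_CERT=/etc/kubernetes/pki/etcd.pem
export ETCDCTL_KEY=/etc/kubernetes/pki/etcd-key.pem
etcdctl put /k8s-install/smoke ok   # write
etcdctl get /k8s-install/smoke      # read back, should print the key and "ok"
etcdctl del /k8s-install/smoke      # clean up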
5. Master Node
Kubernetes master node components:
- kube-apiserver
- kube-scheduler
- kube-controller-manager
- kubelet (not strictly required, but needed in practice)
- kube-proxy (not strictly required, but needed in practice)
5.1 Installation Prep
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.19.md
mkdir -p $HOME/k8s-install && cd $HOME/k8s-install
wget https://dl.k8s.io/v1.19.11/kubernetes-server-linux-amd64.tar.gz
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin
cp kube-apiserver kube-scheduler kube-controller-manager kubectl kubelet kube-proxy /usr/bin
5.2 apiserver
5.2.1 TLS Bootstrapping Token
TLS Bootstrapping: once the master's apiserver enables TLS authentication, the kubelet and kube-proxy on every node must present a valid CA-signed certificate to communicate with kube-apiserver. With many nodes, issuing those client certificates by hand takes a lot of work and complicates scaling the cluster. To simplify this, Kubernetes introduced TLS bootstrapping to issue client certificates automatically: the kubelet connects to the apiserver as a low-privilege user and requests a certificate, which the apiserver signs dynamically. This approach is strongly recommended on nodes; today it is used mainly for the kubelet, while kube-proxy still uses a certificate we issue centrally.
TLS bootstrapping workflow:
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
# Format: token,user name,UID,user groups
cat > /etc/kubernetes/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:node-bootstrapper"
EOF
5.2.2 Configuration File
`--service-cluster-ip-range=10.254.0.0/16`: the Service IP range
cat > /etc/kubernetes/kube-apiserver.conf << EOF
KUBE_APISERVER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--etcd-servers=https://192.168.80.45:2379,https://192.168.80.46:2379,https://192.168.80.47:2379 \\
--bind-address=192.168.80.45 \\
--secure-port=6443 \\
--advertise-address=192.168.80.45 \\
--allow-privileged=true \\
--service-cluster-ip-range=10.254.0.0/16 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--enable-bootstrap-token-auth=true \\
--token-auth-file=/etc/kubernetes/token.csv \\
--service-node-port-range=30000-32767 \\
--kubelet-client-certificate=/etc/kubernetes/pki/apiserver.pem \\
--kubelet-client-key=/etc/kubernetes/pki/apiserver-key.pem \\
--tls-cert-file=/etc/kubernetes/pki/apiserver.pem \\
--tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
--client-ca-file=/etc/kubernetes/pki/ca.pem \\
--service-account-key-file=/etc/kubernetes/pki/ca-key.pem \\
--service-account-issuer=api \\
--service-account-signing-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
--etcd-cafile=/etc/kubernetes/pki/ca.pem \\
--etcd-certfile=/etc/kubernetes/pki/etcd.pem \\
--etcd-keyfile=/etc/kubernetes/pki/etcd-key.pem \\
--requestheader-client-ca-file=/etc/kubernetes/pki/ca.pem \\
--proxy-client-cert-file=/etc/kubernetes/pki/apiserver.pem \\
--proxy-client-key-file=/etc/kubernetes/pki/apiserver-key.pem \\
--requestheader-allowed-names=kubernetes \\
--requestheader-extra-headers-prefix=X-Remote-Extra- \\
--requestheader-group-headers=X-Remote-Group \\
--requestheader-username-headers=X-Remote-User \\
--enable-aggregator-routing=true \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/kubernetes/k8s-audit.log"
EOF
5.2.3 Start on Boot
# 1. systemd unit
cat > /lib/systemd/system/kube-apiserver.service << EOF
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/etc/kubernetes/kube-apiserver.conf
ExecStart=/usr/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
# 2. Start
systemctl daemon-reload
systemctl start kube-apiserver
systemctl status kube-apiserver
systemctl enable kube-apiserver
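Optionally probe the API server before moving on; the default system:public-info-viewer binding allows unauthenticated access to /healthz, so this should print "ok" (-k skips server certificate verification for a quick check):
curl -k https://192.168.80.45:6443/healthz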
5.3 controller-manager
5.3.1 kubeconfig File
KUBE_CONFIG="/etc/kubernetes/kube-controller-manager.kubeconfig"
KUBE_APISERVER="https://192.168.80.45:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-controller-manager \
--client-certificate=/etc/kubernetes/pki/kube-controller-manager.pem \
--client-key=/etc/kubernetes/pki/kube-controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-controller-manager \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
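To sanity-check the generated file (the same works for the scheduler and kube-proxy kubeconfigs below), print it back; embedded certificate data is shown as REDACTED unless --raw is passed:
kubectl config view --kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig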
5.3.2 Configuration File
`--cluster-cidr=10.244.0.0/16`: the Pod IP range
`--service-cluster-ip-range=10.254.0.0/16`: the Service IP range
cat > /etc/kubernetes/kube-controller-manager.conf << EOF
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--leader-elect=true \\
--kubeconfig=/etc/kubernetes/kube-controller-manager.kubeconfig \\
--bind-address=127.0.0.1 \\
--allocate-node-cidrs=true \\
--cluster-cidr=10.244.0.0/16 \\
--service-cluster-ip-range=10.254.0.0/16 \\
--cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem \\
--cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem \\
--root-ca-file=/etc/kubernetes/pki/ca.pem \\
--service-account-private-key-file=/etc/kubernetes/pki/ca-key.pem \\
--cluster-signing-duration=87600h0m0s"
EOF
5.3.3 Start on Boot
cat > /lib/systemd/system/kube-controller-manager.service << EOF
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager.conf
ExecStart=/usr/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start kube-controller-manager
systemctl status kube-controller-manager
systemctl enable kube-controller-manager
5.4 scheduler
5.4.1 kubeconfig File
KUBE_CONFIG="/etc/kubernetes/kube-scheduler.kubeconfig"
KUBE_APISERVER="https://192.168.80.45:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-scheduler \
--client-certificate=/etc/kubernetes/pki/kube-scheduler.pem \
--client-key=/etc/kubernetes/pki/kube-scheduler-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-scheduler \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
5.4.2 Configuration File
cat > /etc/kubernetes/kube-scheduler.conf << EOF
KUBE_SCHEDULER_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--leader-elect \\
--kubeconfig=/etc/kubernetes/kube-scheduler.kubeconfig \\
--bind-address=127.0.0.1"
EOF
5.4.3 Start on Boot
cat > /lib/systemd/system/kube-scheduler.service << EOF
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler.conf
ExecStart=/usr/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start kube-scheduler
systemctl status kube-scheduler
systemctl enable kube-scheduler
5.5 kubelet
5.5.1 Parameter Configuration File
cat > /etc/kubernetes/kubelet-config.yml << EOF
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.254.0.2
clusterDomain: cluster.local
failSwapOn: false
authentication:
  anonymous:
    enabled: false
  webhook:
    cacheTTL: 2m0s
    enabled: true
  x509:
    clientCAFile: /etc/kubernetes/pki/ca.pem
authorization:
  mode: Webhook
  webhook:
    cacheAuthorizedTTL: 5m0s
    cacheUnauthorizedTTL: 30s
evictionHard:
  imagefs.available: 15%
  memory.available: 100Mi
  nodefs.available: 10%
  nodefs.inodesFree: 5%
maxOpenFiles: 1000000
maxPods: 110
EOF
5.5.2 kubeconfig File
BOOTSTRAP_TOKEN=$(cat /etc/kubernetes/token.csv | awk -F, '{print $1}')
KUBE_CONFIG="/etc/kubernetes/bootstrap.kubeconfig"
KUBE_APISERVER="https://192.168.80.45:6443"
# Generate the kubelet bootstrap kubeconfig
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials "kubelet-bootstrap" \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user="kubelet-bootstrap" \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
5.5.3 Configuration File
Note: `--kubeconfig=/etc/kubernetes/kubelet.kubeconfig` is generated automatically when the node joins the cluster.
cat > /etc/kubernetes/kubelet.conf << EOF
KUBELET_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--hostname-override=k8s-master1 \\
--network-plugin=cni \\
--kubeconfig=/etc/kubernetes/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig \\
--config=/etc/kubernetes/kubelet-config.yml \\
--cert-dir=/etc/kubernetes/pki \\
--pod-infra-container-image=mirrorgooglecontainers/pause-amd64:3.1"
EOF
5.5.4 Authorize kubelet-bootstrap to Request Certificates
This prevents the error: failed to run Kubelet: cannot create certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "kubelet-bootstrap" cannot create resource "certificatesigningrequests" in API group "certificates.k8s.io" at the cluster scope
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
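Verify the binding points the system:node-bootstrapper role at the kubelet-bootstrap user:
kubectl get clusterrolebinding kubelet-bootstrap -o wide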
5.5.5 Start on Boot
cat > /lib/systemd/system/kubelet.service << EOF
[Unit]
Description=Kubernetes Kubelet
After=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start kubelet
systemctl status kubelet
systemctl enable kubelet
5.5.6 Join the Cluster
# List kubelet certificate signing requests
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-ghWG-AWFM9sxJbr5A-BIq9puVIRxfFHrQlwDjYbHba8 25s kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Pending
# Approve the request
kubectl certificate approve node-csr-ghWG-AWFM9sxJbr5A-BIq9puVIRxfFHrQlwDjYbHba8
# Check the CSR again
kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
node-csr-ghWG-AWFM9sxJbr5A-BIq9puVIRxfFHrQlwDjYbHba8 53m kubernetes.io/kube-apiserver-client-kubelet kubelet-bootstrap Approved,Issued
# List nodes (the node stays NotReady until a network plugin is deployed)
kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master1 NotReady <none> 4m8s v1.19.11
5.6 kube-proxy
5.6.1 Parameter Configuration File
`clusterCIDR: 10.254.0.0/16`: the Service IP range; must match the `--service-cluster-ip-range` of apiserver & controller-manager
cat > /etc/kubernetes/kube-proxy-config.yml << EOF
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
bindAddress: 0.0.0.0
metricsBindAddress: 0.0.0.0:10249
clientConnection:
  kubeconfig: /etc/kubernetes/kube-proxy.kubeconfig
hostnameOverride: k8s-master1
clusterCIDR: 10.254.0.0/16
EOF
5.6.2 kubeconfig File
KUBE_CONFIG="/etc/kubernetes/kube-proxy.kubeconfig"
KUBE_APISERVER="https://192.168.80.45:6443"
kubectl config set-cluster kubernetes \
--certificate-authority=/etc/kubernetes/pki/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-credentials kube-proxy \
--client-certificate=/etc/kubernetes/pki/kube-proxy.pem \
--client-key=/etc/kubernetes/pki/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=${KUBE_CONFIG}
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=${KUBE_CONFIG}
kubectl config use-context default --kubeconfig=${KUBE_CONFIG}
5.6.3 Configuration File
cat > /etc/kubernetes/kube-proxy.conf << EOF
KUBE_PROXY_OPTS="--logtostderr=false \\
--v=2 \\
--log-dir=/var/log/kubernetes \\
--config=/etc/kubernetes/kube-proxy-config.yml"
EOF
5.6.4 Start on Boot
cat > /lib/systemd/system/kube-proxy.service << EOF
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=/etc/kubernetes/kube-proxy.conf
ExecStart=/usr/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl start kube-proxy
systemctl status kube-proxy
systemctl enable kube-proxy
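kube-proxy reports its effective mode on the metrics port configured earlier (metricsBindAddress 0.0.0.0:10249); with no mode set in the config it falls back to iptables:
curl -s http://127.0.0.1:10249/proxyMode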
5.7 Authorize `apiserver` to Access `kubelet`
mkdir -p $HOME/k8s-install && cd $HOME/k8s-install
cat > apiserver-to-kubelet-rbac.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
      - pods/log
    verbs:
      - "*"
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
kubectl apply -f apiserver-to-kubelet-rbac.yaml
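Confirm both objects were created; kubectl logs and kubectl exec rely on this authorization when the apiserver talks to kubelets:
kubectl get clusterrole system:kube-apiserver-to-kubelet
kubectl get clusterrolebinding system:kube-apiserver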