Kubernetes (K8s) Cluster Binary Installation
- Kubernetes (K8s) Cluster Binary Installation
- 1. Basic Environment Configuration
- 1.1 Kubernetes Cluster Planning
- 1.2 Basic Environment Configuration (All Nodes)
- 1.2.1 Modify the hosts File
- 1.2.2 Replace the yum Sources
- 1.2.3 Install Required Tools
- 1.2.4 Disable the Firewall
- 1.2.5 Disable the Swap Partition
- 1.2.6 NTP Time Synchronization
- 1.2.7 Configure limits on All Nodes
- 1.2.8 Passwordless SSH from Master01 to the Other Nodes
- 1.2.9 System Upgrade
- 1.2.10 Kernel Upgrade
- 1.2.11 Install ipvsadm
- 1.2.12 Tune Kernel Parameters
- 1.3 Install the K8s Components and Runtime
- 1.3.1 Install Docker
- 1.3.2 Install the Kubernetes Components
- 2. Cluster Initialization
- 3. Join the Worker Nodes
- 4. Install Calico
- 5. Deploy Metrics Server
- 6. Deploy the Dashboard
- 7. Some Required Configuration Changes
Kubernetes (K8s) Cluster Binary Installation
1. Basic Environment Configuration
The kubeadm installation procedure has changed very little since version 1.14, so this document can be used to install the latest Kubernetes cluster. The OS used here is CentOS 7.9.
Kubernetes official documentation: https://kubernetes.io/docs/setup/
Latest high-availability installation guide: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/high-availability/
1.1 Kubernetes Cluster Planning
Hostname | IP Address | Description |
---|---|---|
k8s-master01 | 172.19.204.205 | master node |
k8s-node01 | 172.19.204.206 | worker node |
Configuration | Notes |
---|---|
OS version | CentOS 7.9 |
Docker version | 20.10.x |
Pod CIDR | 172.16.0.0/12 |
Service CIDR | 192.168.0.0/16 |
1.2 Basic Environment Configuration (All Nodes)
1.2.1 Modify the hosts File
Configure hosts on all nodes by editing /etc/hosts as follows:
[root@k8s-master01 ~]# vi /etc/hosts
172.19.204.205 k8s-master01
172.19.204.206 k8s-node01
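An optional check that both names resolve as expected (a minimal sketch):
[root@k8s-master01 ~]# getent hosts k8s-master01 k8s-node01
[root@k8s-master01 ~]# ping -c 1 k8s-node01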
1.2.2 Replace the yum Sources
Replace the yum sources on all nodes.
[root@k8s-master01 ~]# curl -o /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
[root@k8s-master01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-master01 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-master01 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@k8s-master01 ~]# sed -i -e '/mirrors.cloud.aliyuncs.com/d' -e '/mirrors.aliyuncs.com/d' /etc/yum.repos.d/CentOS-Base.repo
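An optional sanity check that the replaced repositories are usable:
[root@k8s-master01 ~]# yum clean all && yum makecache
[root@k8s-master01 ~]# yum repolist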
1.2.3 Install Required Tools
[root@k8s-master01 ~]# yum install wget jq psmisc vim net-tools telnet yum-utils device-mapper-persistent-data lvm2 git -y
1.2.4 Disable the Firewall
Disable the firewall, SELinux, dnsmasq, and swap on all nodes.
[root@k8s-master01 ~]# systemctl disable --now firewalld
[root@k8s-master01 ~]# systemctl disable --now dnsmasq
[root@k8s-master01 ~]# systemctl disable --now NetworkManager
[root@k8s-master01 ~]# setenforce 0
[root@k8s-master01 ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/sysconfig/selinux
[root@k8s-master01 ~]# sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
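A quick verification sketch; getenforce should report Permissive now (Disabled after the next reboot) and firewalld should be disabled:
[root@k8s-master01 ~]# getenforce
[root@k8s-master01 ~]# systemctl is-enabled firewalld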
1.2.5 Disable the Swap Partition
[root@k8s-master01 ~]# swapoff -a && sysctl -w vm.swappiness=0
[root@k8s-master01 ~]# sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
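Verify that swap is off; the Swap line of free should show 0 and swapon should print nothing:
[root@k8s-master01 ~]# free -h
[root@k8s-master01 ~]# swapon --show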
1.2.6 NTP Time Synchronization
Install ntpdate:
[root@k8s-master01 ~]# rpm -ivh http://mirrors.wlnmp.com/centos/wlnmp-release-centos.noarch.rpm
[root@k8s-master01 ~]# yum install ntpdate -y
Synchronize the time on all nodes.
[root@k8s-master01 ~]# ln -sf /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
[root@k8s-master01 ~]# echo 'Asia/Shanghai' > /etc/timezone
[root@k8s-master01 ~]# ntpdate time2.aliyun.com
[root@k8s-master01 ~]# crontab -e
*/5 * * * * /usr/sbin/ntpdate time2.aliyun.com
[root@k8s-master01 ~]# systemctl restart crond
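An optional check that the time zone, clock, and cron entry are in place:
[root@k8s-master01 ~]# date
[root@k8s-master01 ~]# crontab -l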
1.2.7 Configure limits on All Nodes
[root@k8s-master01 ~]# ulimit -SHn 65535
[root@k8s-master01 ~]# vim /etc/security/limits.conf
# Append the following to the end of the file
* soft nofile 65536
* hard nofile 131072
* soft nproc 65535
* hard nproc 655350
* soft memlock unlimited
* hard memlock unlimited
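The new limits apply to new sessions; after logging in again, a quick check (a minimal sketch):
[root@k8s-master01 ~]# ulimit -n
[root@k8s-master01 ~]# ulimit -u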
1.2.8 Passwordless SSH from Master01 to the Other Nodes
Set up passwordless SSH from Master01 to the other nodes. All configuration files and certificates generated during the installation are created on Master01, and the cluster is also managed from Master01.
[root@k8s-master01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
/root/.ssh/id_rsa already exists.
Overwrite (y/n)? y
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:U4GtYVh/FF9JET79oziKBmY0eC8SvjvED9gYfFjoE1c root@k8s-master01
The key's randomart image is:
+---[RSA 2048]----+
| . .E o.o. o..+=|
| o o . +..o ..o.|
|o = . . oo . .o.|
| * + + .. . o|
| X + o S ..|
| o B = . . . . .|
| . B o o . |
| o . .. . . |
| .o .. . |
+----[SHA256]-----+
[root@k8s-master01 ~]# for i in k8s-master01 k8s-node01;do ssh-copy-id -i .ssh/id_rsa.pub $i;done
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-master01's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'k8s-master01'"
and check to make sure that only the key(s) you wanted were added.
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: ".ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@k8s-node01's password:
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'k8s-node01'"
and check to make sure that only the key(s) you wanted were added.
Download all of the installation source files:
[root@k8s-master01 ~]# cd /root/ ; git clone https://github.com/dotbalo/k8s-ha-install.git
If the repository cannot be reached, use the mirror instead: https://gitee.com/dukuan/k8s-ha-install.git
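The repository may provide version-specific branches; listing them (branch names are not guaranteed here) helps pick the one that matches the target Kubernetes version:
[root@k8s-master01 ~]# cd /root/k8s-ha-install && git branch -a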
1.2.9 System Upgrade
Upgrade the system on all nodes and reboot:
[root@k8s-master01 ~]# yum update -y --exclude=kernel* && reboot
1.2.10 Kernel Upgrade
CentOS 7 needs its kernel upgraded to 4.18 or newer; version 4.19 is used here.
[root@k8s-master01 ~]# cd /root
[root@k8s-master01 ~]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-devel-4.19.12-1.el7.elrepo.x86_64.rpm
[root@k8s-master01 ~]# wget http://193.49.22.109/elrepo/kernel/el7/x86_64/RPMS/kernel-ml-4.19.12-1.el7.elrepo.x86_64.rpm
Install the kernel on all nodes:
[root@k8s-master01 ~]# cd /root && yum localinstall -y kernel-ml*
Change the kernel boot order on all nodes:
[root@k8s-master01 ~]# grub2-set-default 0 && grub2-mkconfig -o /etc/grub2.cfg
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
Found initrd image: /boot/initramfs-4.19.12-1.el7.elrepo.x86_64.img
Found linux image: /boot/vmlinuz-3.10.0-1160.el7.x86_64
Found initrd image: /boot/initramfs-3.10.0-1160.el7.x86_64.img
Found linux image: /boot/vmlinuz-0-rescue-dadf611991db48e3b1eef115a841fbff
Found initrd image: /boot/initramfs-0-rescue-dadf611991db48e3b1eef115a841fbff.img
done
[root@k8s-master01 ~]# grubby --args="user_namespace.enable=1" --update-kernel="$(grubby --default-kernel)"
Check that the default kernel is 4.19:
[root@k8s-master01 ~]# grubby --default-kernel
/boot/vmlinuz-4.19.12-1.el7.elrepo.x86_64
Reboot all nodes, then verify that the running kernel is 4.19:
[root@k8s-master01 ~]# reboot
[root@k8s-master01 ~]# uname -r
4.19.12-1.el7.elrepo.x86_64
1.2.11 Install ipvsadm
[root@k8s-master01 ~]# yum install ipvsadm ipset sysstat conntrack libseccomp -y
[root@k8s-master01 ~]# modprobe -- ip_vs
[root@k8s-master01 ~]# modprobe -- ip_vs_rr
[root@k8s-master01 ~]# modprobe -- ip_vs_wrr
[root@k8s-master01 ~]# modprobe -- ip_vs_sh
[root@k8s-master01 ~]# modprobe -- nf_conntrack
[root@k8s-master01 ~]# vim /etc/modules-load.d/ipvs.conf
ip_vs
ip_vs_lc
ip_vs_wlc
ip_vs_rr
ip_vs_wrr
ip_vs_lblc
ip_vs_lblcr
ip_vs_dh
ip_vs_sh
ip_vs_fo
ip_vs_nq
ip_vs_sed
ip_vs_ftp
nf_conntrack
ip_tables
ip_set
xt_set
ipt_set
ipt_rpfilter
ipt_REJECT
ipip
[root@k8s-master01 ~]# systemctl enable --now systemd-modules-load.service
1.2.12 Tune Kernel Parameters
Enable the kernel parameters required by a Kubernetes cluster; configure them on all nodes:
[root@k8s-master01 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
> net.ipv4.ip_forward = 1
> net.bridge.bridge-nf-call-iptables = 1
> net.bridge.bridge-nf-call-ip6tables = 1
> fs.may_detach_mounts = 1
> net.ipv4.conf.all.route_localnet = 1
> vm.overcommit_memory=1
> vm.panic_on_oom=0
> fs.inotify.max_user_watches=89100
> fs.file-max=52706963
> fs.nr_open=52706963
> net.netfilter.nf_conntrack_max=2310720
>
> net.ipv4.tcp_keepalive_time = 600
> net.ipv4.tcp_keepalive_probes = 3
> net.ipv4.tcp_keepalive_intvl =15
> net.ipv4.tcp_max_tw_buckets = 36000
> net.ipv4.tcp_tw_reuse = 1
> net.ipv4.tcp_max_orphans = 327680
> net.ipv4.tcp_orphan_retries = 3
> net.ipv4.tcp_syncookies = 1
> net.ipv4.tcp_max_syn_backlog = 16384
> net.ipv4.ip_conntrack_max = 65536
> net.ipv4.tcp_max_syn_backlog = 16384
> net.ipv4.tcp_timestamps = 0
> net.core.somaxconn = 16384
> EOF
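Optionally, the new parameters can also be applied immediately, without waiting for the reboot:
[root@k8s-master01 ~]# sysctl --system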
[root@k8s-master01 ~]# reboot
After configuring the kernel parameters on all nodes, reboot the servers and make sure the modules are still loaded after the reboot:
[root@k8s-master01 ~]# lsmod | grep --color=auto -e ip_vs -e nf_conntrack
ip_vs_ftp 16384 0
nf_nat 32768 1 ip_vs_ftp
ip_vs_sed 16384 0
ip_vs_nq 16384 0
ip_vs_fo 16384 0
ip_vs_sh 16384 0
ip_vs_dh 16384 0
ip_vs_lblcr 16384 0
ip_vs_lblc 16384 0
ip_vs_wrr 16384 0
ip_vs_rr 16384 0
ip_vs_wlc 16384 0
ip_vs_lc 16384 0
ip_vs 151552 24 ip_vs_wlc,ip_vs_rr,ip_vs_dh,ip_vs_lblcr,ip_vs_sh,ip_vs_fo,ip_vs_nq,ip_vs_lblc,ip_vs_wrr,ip_vs_lc,ip_vs_sed,ip_vs_ftp
nf_conntrack 143360 2 nf_nat,ip_vs
nf_defrag_ipv6 20480 1 nf_conntrack
nf_defrag_ipv4 16384 1 nf_conntrack
libcrc32c 16384 4 nf_conntrack,nf_nat,xfs,ip_vs
1.3 Install the K8s Components and Runtime
1.3.1 Install Docker
Install docker-ce 20.10 on all nodes:
[root@k8s-master01 ~]# yum install docker-ce-20.10.* docker-ce-cli-20.10.* -y
Because newer versions of the kubelet recommend systemd, change Docker's cgroup driver to systemd as well:
[root@k8s-master01 ~]# mkdir /etc/docker
[root@k8s-master01 ~]# cat > /etc/docker/daemon.json <<EOF
{
"exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
Enable Docker to start on boot:
[root@k8s-master01 ~]# systemctl daemon-reload && systemctl enable --now docker
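A quick check that Docker picked up the systemd cgroup driver (the expected line below is based on the usual docker info output):
[root@k8s-master01 ~]# docker info | grep -i "cgroup driver"
 Cgroup Driver: systemd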
1.3.2 Install the Kubernetes Components
Install the latest 1.23 release of kubeadm, kubelet, and kubectl on all nodes:
[root@k8s-master01 ~]# yum install kubeadm-1.23* kubelet-1.23* kubectl-1.23* -y
Enable the kubelet to start on boot on all nodes (the cluster has not been initialized yet, so there is no kubelet configuration file and the kubelet will fail to start; this can be ignored for now).
[root@k8s-master01 ~]# systemctl daemon-reload
[root@k8s-master01 ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
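An optional check of the installed versions before moving on to initialization:
[root@k8s-master01 ~]# kubeadm version
[root@k8s-master01 ~]# kubelet --version
[root@k8s-master01 ~]# kubectl version --client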
2. Cluster Initialization
On Master01, create the kubeadm configuration file:
[root@k8s-master01 ~]# vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: 7t2weq.bjbawausm0jaxury
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 172.19.204.205
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  certSANs:
  - 172.19.204.205
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: 172.19.204.205:6443
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.23.0
networking:
  dnsDomain: cluster.local
  podSubnet: 172.16.0.0/12
  serviceSubnet: 192.168.0.0/16
scheduler: {}
Migrate the kubeadm configuration file to the latest format and pre-pull the images:
[root@k8s-master01 ~]# kubeadm config migrate --old-config kubeadm-config.yaml --new-config new.yaml
[root@k8s-master01 ~]# kubeadm config images pull --config /root/new.yaml
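An optional check that the control-plane images were pulled from the Aliyun mirror:
[root@k8s-master01 ~]# docker images | grep registry.cn-hangzhou.aliyuncs.com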
Enable the kubelet to start on boot:
[root@k8s-master01 ~]# systemctl enable --now kubelet
If it fails to start, ignore it; it will start once initialization succeeds.
[root@k8s-master01 ~]# kubeadm init --config /root/new.yaml --upload-certs
Initialization failed:
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp: lookup localhost on 114.114.114.114:53: no such host
Unfortunately, an error has occurred:
        timed out waiting for the condition
This error is likely caused by:
        - The kubelet is not running
        - The kubelet is unhealthy due to a misconfiguration of the node
The "lookup localhost on 114.114.114.114:53: no such host" message shows that localhost is being resolved through the external DNS server instead of locally, which usually means /etc/hosts is missing its loopback entries (127.0.0.1 localhost).
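Based on that diagnosis, a hedged recovery sketch (the exact loopback entries are assumptions for this environment): append the standard loopback names to /etc/hosts, reset the failed attempt, and re-run the initialization.
[root@k8s-master01 ~]# cat >> /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain
::1         localhost localhost.localdomain
EOF
[root@k8s-master01 ~]# kubeadm reset -f
[root@k8s-master01 ~]# kubeadm init --config /root/new.yaml --upload-certs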