
Setting up Kubernetes 1.22.9 and deploying the dashboard UI

Table of Contents

  • Preface
  • Cluster plan
  • Installing Kubernetes
  • Installing the dashboard UI


Preface

Reference: https://blog.csdn.net/qq_41632602/article/details/115366909
Reference: https://blog.csdn.net/mshxuyi/article/details/108425487
This guide draws on both articles, with a number of changes made for my own environment and needs.


Cluster plan

  1. Overall plan

     Hostname   IP address     Role
     master     192.168.56.3   master
     node1      192.168.56.4   node
     node2      192.168.56.5   node

  2. Components used: docker, kubelet, kubeadm, kubectl

Installing Kubernetes

hostnamectl set-hostname master   # run on the master
hostnamectl set-hostname node1    # run on node1
hostnamectl set-hostname node2    # run on node2

vi /etc/hosts
# add on every machine:
192.168.56.3 master
192.168.56.4 node1
192.168.56.5 node2

systemctl stop firewalld
systemctl disable firewalld

If a swap partition is enabled, kubelet will fail to start, so disable swap on every machine:

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

Disable SELinux, otherwise K8s may later report Permission denied when mounting directories:

setenforce 0
vi /etc/selinux/config   # set SELINUX=disabled

cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
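
If sysctl --system does not pick up these two keys, the br_netfilter kernel module is probably not loaded yet. A minimal sketch for loading it and making that persistent (the file name k8s.conf under /etc/modules-load.d is just a convention):

modprobe br_netfilter
lsmod | grep br_netfilter            # confirm the module is loaded
cat > /etc/modules-load.d/k8s.conf <<EOF
br_netfilter
EOF
sysctl --system                      # re-apply the sysctl settings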

Remove any existing Docker installation:

yum remove docker \
    docker-client \
    docker-client-latest \
    docker-common \
    docker-latest \
    docker-latest-logrotate \
    docker-logrotate \
    docker-engine

Either a successful removal or a message that no such packages are installed means you can proceed.

yum install -y yum-utils device-mapper-persistent-data lvm2 

yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo 

yum makecache fast 

yum -y install docker-ce

systemctl status docker 

If the status shows Active: active (running), Docker is running normally. If there is a problem, run journalctl -xe or check the system log (vim /var/log/messages) to find the cause.

mkdir -p /etc/docker
tee /etc/docker/daemon.json <<- 'EOF'
{
  "registry-mirrors": ["https://s2q9fn53.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload && sudo systemctl restart docker

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

yum install -y kubectl-1.22.9 && yum install -y kubelet-1.22.9 && yum install -y kubeadm-1.22.9
systemctl enable kubelet && systemctl start kubelet

kubeadm init --kubernetes-version=1.22.9 --apiserver-advertise-address=192.168.56.3 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16 

If initialization fails, check the kubelet status:

systemctl status kubelet 
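
If the status shows a failure, kubelet's own logs usually point at the cause; a quick sketch for inspecting them:

journalctl -u kubelet --no-pager | tail -n 30   # last 30 lines of the kubelet log
journalctl -u kubelet -f                        # or follow it live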

A common fix for such failures is the following. First check which cgroup driver Docker is using:

docker info | grep Cgroup

Modify Docker's daemon.json configuration:

vi /etc/docker/daemon.json
{
  "registry-mirrors": ["https://lhx5mss7.mirror.aliyuncs.com"], 
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}

Update systemd:

yum update systemd

Restart Docker:

systemctl daemon-reload && systemctl restart docker

Reset kubeadm:

kubeadm reset -f

Run the initialization again:

kubeadm init --kubernetes-version=1.22.9 --apiserver-advertise-address=192.168.56.3 --image-repository registry.aliyuncs.com/google_containers --service-cidr=10.96.0.0/12 --pod-network-cidr=10.244.0.0/16

Seeing "Your Kubernetes control-plane has initialized successfully!" means the installation succeeded.

Also, save the following line from the output; it will be needed later:

kubeadm join 192.168.137.3:6443 --token ys4kum.voz9oqs2048ljfdp --discovery-token-ca-cert-hash sha256:35d21aef13298edef1bcade1202da8b3871ac41e325becdba200ecc48b5a97b0
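
If you did not save this output, the full join command can be regenerated on the master at any time; a short sketch:

kubeadm token create --print-join-command   # prints a ready-to-use kubeadm join command with a fresh token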

Then run the commands from the printed instructions:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

After completing the same setup steps on the other two nodes, edit the Docker configuration on both of them:

vi /etc/docker/daemon.json
# enter the following content:

{
  "registry-mirrors": ["https://lhx5mss7.mirror.aliyuncs.com"], 
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}

Then run:

yum update systemd
systemctl daemon-reload
systemctl restart docker
kubeadm reset -f

After that, they can join the cluster:

kubeadm reset
kubeadm join 192.168.137.3:6443 --token ys4kum.voz9oqs2048ljfdp --discovery-token-ca-cert-hash sha256:35d21aef13298edef1bcade1202da8b3871ac41e325becdba200ecc48b5a97b0

Once the other two nodes have joined successfully, you will see output like:

This node has joined the cluster:

  • Certificate signing request was sent to apiserver and a response was received.
  • The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

To list tokens (all of the following commands are run on the master node):

kubeadm token list

If the token expires in the future, you can create a new non-expiring token:

kubeadm token create --ttl 0 

# get the SHA-256 hash of the CA certificate
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //'
# which outputs a value like:
35d21aef13298edef1bcade1202da8b3871ac41e325becdba200ecc48b5a97b0

To remove (evict) a node from the cluster, first cordon it (run the following commands on the master node):

kubectl cordon node1
kubectl cordon node2

Then drain the pods off the nodes:

kubectl drain node1
kubectl drain node2
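
Note that a plain kubectl drain will usually refuse to proceed because of the DaemonSet-managed pods (kube-proxy, kube-flannel) on the node. A sketch of the flags typically needed (the emptyDir flag assumes you are willing to discard any emptyDir data on those pods):

kubectl drain node1 --ignore-daemonsets --delete-emptydir-data
kubectl drain node2 --ignore-daemonsets --delete-emptydir-data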

If a node is broken and cannot run commands, you can forcibly delete an individual running pod instead, for example:

kubectl delete pods -n kube-system nginx-6qz6s

The key step is this one, which removes the node from the cluster:

kubectl delete node node1
kubectl delete node node2

When a node needs to rejoin the cluster (run the following commands on node1 and node2):

kubeadm reset
kubeadm join 192.168.137.3:6443 --token d4offl.d3mufkukeb0b6y27 --discovery-token-ca-cert-hash sha256:35d21aef13298edef1bcade1202da8b3871ac41e325becdba200ecc48b5a97b0

Check whether the nodes have joined (run on the master node):

kubectl get nodes
NAME     STATUS     ROLES                  AGE   VERSION
master   NotReady   control-plane,master   56m   v1.22.9
node1    NotReady   <none>                 62s   v1.22.9
node2    NotReady   <none>                 20s   v1.22.9

Install the network plugin (flannel). Get the contents of the flannel YAML file from:

https://github.com/flannel-io/flannel/blob/master/Documentation/kube-flannel.yml

Save the contents to a file named kube-flannel.yml in the root user's home directory (~).
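
Before applying it, it is worth checking that the pod network declared in the file matches the --pod-network-cidr passed to kubeadm init (10.244.0.0/16 above); a quick check:

grep -n '"Network"' ~/kube-flannel.yml
# expected to show something like:  "Network": "10.244.0.0/16",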

kubectl apply -f kube-flannel.yml	

Wait a moment, then check the pods:

kubectl get pods -n kube-system

You should see output like:

NAME                             READY   STATUS    RESTARTS   AGE
coredns-7f6cbbb7b8-jcgsx         1/1     Running   0          17m
coredns-7f6cbbb7b8-wgknx         1/1     Running   0          17m
etcd-master                      1/1     Running   3          18m
kube-apiserver-master            1/1     Running   0          18m
kube-controller-manager-master   1/1     Running   0          18m
kube-flannel-ds-kmkqx            1/1     Running   0          3m26s
kube-flannel-ds-qp48p            1/1     Running   0          3m26s
kube-flannel-ds-zp2zl            1/1     Running   0          3m26s
kube-proxy-2ftbn                 1/1     Running   0          17m
kube-proxy-btckv                 1/1     Running   0          11m
kube-proxy-sz9cf                 1/1     Running   0          11m
kube-scheduler-master            1/1     Running   3          18m

Add a role label to the other two nodes:

kubectl label node node1 node-role.kubernetes.io/slave=
kubectl label node node2 node-role.kubernetes.io/slave=
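
To confirm the labels took effect (or to remove one later), a short sketch:

kubectl get nodes --show-labels | grep node-role         # the slave role label should appear on node1/node2
# kubectl label node node1 node-role.kubernetes.io/slave-   # trailing dash removes the label again if needed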

As shown above, the kube-flannel pods are running. Re-run the node listing command:

kubectl get nodes 

which now shows:

NAME     STATUS   ROLES                  AGE   VERSION
master   Ready    control-plane,master   36m   v1.22.9
node1    Ready    slave                  30m   v1.22.9
node2    Ready    slave                  29m   v1.22.9

That completes the Kubernetes setup.

Installing the dashboard UI

Check which dashboard release matches your cluster version at https://github.com/kubernetes/dashboard/releases

Open (via a VPN if necessary) https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml and copy the page contents.

cd  /home
vi recommended.yaml
# paste in the copied page contents
# create the pods
kubectl apply -f recommended.yaml

Check that everything was created successfully:

[root@master1 ~]# kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
default                nginx-5578584966-ch9x4                       1/1     Running   1          8h
kube-system            coredns-9d85f5447-qghnb                      1/1     Running   38         6d13h
kube-system            coredns-9d85f5447-xqsl2                      1/1     Running   37         6d13h
kube-system            etcd-master1                                 1/1     Running   8          6d13h
kube-system            kube-apiserver-master1                       1/1     Running   9          6d13h
kube-system            kube-controller-manager-master1              1/1     Running   8          6d13h
kube-system            kube-flannel-ds-amd64-h2f4w                  1/1     Running   5          6d10h
kube-system            kube-flannel-ds-amd64-z57qk                  1/1     Running   1          10h
kube-system            kube-proxy-4j8pj                             1/1     Running   1          10h
kube-system            kube-proxy-xk7gq                             1/1     Running   7          6d13h
kube-system            kube-scheduler-master1                       1/1     Running   9          6d13h
kubernetes-dashboard   dashboard-metrics-scraper-7b8b58dc8b-5r22j   1/1     Running   0          15m
kubernetes-dashboard   kubernetes-dashboard-866f987876-gv2qw        1/1     Running   0          15m

Next, delete the existing dashboard Service. It lives in the kubernetes-dashboard namespace, but its type is ClusterIP, which is not convenient to reach from a browser, so it needs to be changed to NodePort.

# list the existing services
[root@master1 ~]# kubectl get svc --all-namespaces
NAMESPACE              NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.96.0.1        <none>        443/TCP                  6d13h
default                nginx                       NodePort    10.102.220.172   <none>        80:31863/TCP             8h
kube-system            kube-dns                    ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   6d13h
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.100.246.255   <none>        8000/TCP                 61s
kubernetes-dashboard   kubernetes-dashboard        ClusterIP   10.109.210.35    <none>        443/TCP                  61s

Delete it:

kubectl delete service kubernetes-dashboard --namespace=kubernetes-dashboard
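
As an alternative to the delete-and-recreate approach used here, the existing Service can simply be patched to NodePort in place; a sketch:

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard -p '{"spec":{"type":"NodePort"}}'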

Create a new Service manifest:

vi dashboard-svc.yaml
# contents:
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

Apply it:

kubectl apply -f dashboard-svc.yaml

Check the services again; the dashboard is now exposed via NodePort:

kubectl get svc --all-namespaces
NAMESPACE              NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.96.0.1        <none>        443/TCP                  6d13h
default                nginx                       NodePort    10.102.220.172   <none>        80:31863/TCP             8h
kube-system            kube-dns                    ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   6d13h
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.100.246.255   <none>        8000/TCP                 4m32s
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.110.91.255    <none>        443:30432/TCP            10s

To access the dashboard service you need the proper permissions, so create an admin ServiceAccount for kubernetes-dashboard:

vi dashboard-svc-account.yaml
 
# contents:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: dashboard-admin
  namespace: kube-system
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1    # v1beta1 was removed in Kubernetes 1.22
metadata:
  name: dashboard-admin
subjects:
  - kind: ServiceAccount
    name: dashboard-admin
    namespace: kube-system
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
 
# apply it:
kubectl apply -f dashboard-svc-account.yaml
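
To confirm the ServiceAccount and its binding were created, a quick check:

kubectl get serviceaccount dashboard-admin -n kube-system
kubectl get clusterrolebinding dashboard-admin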

Get the token:

kubectl get secret -n kube-system |grep admin|awk '{print $1}'
dashboard-admin-token-bwgjv
# this prints a secret name; copy it and use it in the next command
kubectl describe secret dashboard-admin-token-bwgjv -n kube-system|grep '^token'|awk '{print $2}'
eyJhbGciOiJSUzI1NiIsImtpZCI6IkJOVUhyRElPQzJzU2t6VDNVdWpTdzhNZmZPZjV0U2s1UXBFTzctNE9uOFEifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tYndnanYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiOTE5NGY5YWYtZDZlNC00ZDFmLTg4OWEtMDY4ODIyMDFlOGNmIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.kEK3XvUXJGzQlBI4LIOp-puYzBBhhXSkD20vFp9ET-rGErxmMHjUuCqWxg0iawbuOndMARrpeGJKNTlD2vL81bXMaPpKb4Y2qoB6bH5ETQPUU0HPpWYmfoHl4krEXy7S95h0mWehiHLcFkrUhyKGa39cEBq0B0HRo49tjM5QzkE6PNJ5nmEYHIJMb4U62E8wKeqY9vt60AlRa_Re7IDAO9qfb5_dGEmUaIdr3tu22sa3POBsm2bhr-R3aC8vQzNuafM35s3ed8KofOTQFk8fXu4p7lquJnji4yfC77yS3yo5Jo3VPyHi3p5np_9AuSNYfI8fo1EpSeMsXOBH45hu2w

Access the page: the IP is the master node's IP, and the port is the NodePort shown by kubectl get svc --all-namespaces.

https://192.168.56.3:30142
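
If the NodePort is not reachable from your browser, kubectl port-forward is an alternative way to expose the dashboard from the master; a sketch (the local port 8443 is arbitrary, and 192.168.56.3 assumes the command runs on the master):

kubectl -n kubernetes-dashboard port-forward --address 0.0.0.0 svc/kubernetes-dashboard 8443:443
# then browse to https://192.168.56.3:8443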

Paste the token from above into the token field; once that is done you will be taken to the dashboard page.
