
Setting up Kubernetes with kubeadm

一. Deployment requirements and planning

  • kube-proxy uses IPVS
  • kubectl / kubelet / kubeadm version 1.19.3
  • CNI: flannel
  • NFS for shared storage
  • deployed with kubeadm
  • CentOS 7

Hostname     IP                Resources   Notes
k8s-master   192.168.188.220   8C16G       cluster node
k8s-node01   192.168.188.221   8C16G       cluster node
k8s-node02   192.168.188.222   8C16G       cluster node
NFS          192.168.188.225   8C8G        outside the cluster

二. Deployment process

1. hosts

  • all cluster nodes
cat >>/etc/hosts<<EOF
192.168.188.220 k8s-master
192.168.188.221 k8s-node01
192.168.188.222 k8s-node02
192.168.188.225 nfs
EOF

2. yum repositories

  • all cluster nodes
# base repo
curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo

# docker repo
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

# kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

# rebuild the cache
yum clean all && yum makecache

3. Time sync and common tools

  • all nodes
yum -y install tree vim wget bash-completion bash-completion-extras lrzsz net-tools sysstat iotop iftop htop unzip nc nmap telnet bc psmisc httpd-tools ntpdate

# Fix the timezone: if /etc/localtime is a symlink that does not point to Shanghai,
# remove it and recreate the symlink
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

ntpdate ntp2.aliyun.com       # sync time against the Aliyun NTP server
/sbin/hwclock --systohc       # write the time to the hardware clock (BIOS)

4. Disable swap, firewalld and SELinux

  • all cluster nodes
# allow forwarding
iptables -P FORWARD ACCEPT

# disable swap for the running system
swapoff -a

# and keep it from being mounted again at boot
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

# stop the firewall
systemctl disable firewalld && systemctl stop firewalld

# switch SELinux off without a reboot, and disable it permanently
setenforce 0
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config

5. Kernel parameters

  • all cluster nodes
# kernel parameters
cat <<EOF >  /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
vm.max_map_count=262144
EOF

# apply and verify
sysctl -p /etc/sysctl.d/k8s.conf

# load the ipvs modules
modprobe br_netfilter
modprobe -- ip_vs
modprobe -- ip_vs_sh
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- nf_conntrack_ipv4

# verify the ip_vs modules
lsmod |grep ip_vs
ip_vs_wrr              12697  0
ip_vs_rr               12600  0
ip_vs_sh               12688  0
ip_vs                 145458  6 ip_vs_rr,ip_vs_sh,ip_vs_wrr
nf_conntrack          139264  2 ip_vs,nf_conntrack_ipv4
libcrc32c              12644  3 xfs,ip_vs,nf_conntrack
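
The modprobe calls above do not survive a reboot. Below is a minimal sketch to make them persistent, assuming the common CentOS 7 convention of dropping a script into /etc/sysconfig/modules/ (this path is an assumption of that convention, not something kubeadm itself requires):

cat > /etc/sysconfig/modules/ipvs.modules <<'EOF'
#!/bin/bash
# reload the bridge and ipvs modules on every boot
modprobe -- br_netfilter
modprobe -- ip_vs
modprobe -- ip_vs_sh
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack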

6. Install docker

  • all cluster nodes
# list the available versions
yum list docker-ce --showduplicates | sort -r

# install
yum install docker-ce -y

# configure the registry mirror; change insecure-registries to your own registry address
mkdir -p /etc/docker
vim /etc/docker/daemon.json
{
  "insecure-registries": [
    "192.168.188.225:80"
  ],
  "registry-mirrors" : [
    "https://8xpk5wnt.mirror.aliyuncs.com"
  ]
}

# enable and start
systemctl enable docker;systemctl start docker

7. kubeadm, kubelet and kubectl

  • all cluster nodes
yum install -y kubelet-1.19.3 kubeadm-1.19.3 kubectl-1.19.3

# check the kubeadm version
kubeadm version

# enable kubelet on boot; it will keep restarting until kubeadm init runs, which is fine
systemctl start kubelet.service;systemctl enable kubelet

8. Initialization

  • master node only
# generate a default init config; any warnings it prints can be ignored
kubeadm config print init-defaults > kubeadm.yaml

# edit the init config
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.188.220 	# apiserver address; single-master setup, so use the master node's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # switch to the Aliyun mirror
kind: ClusterConfiguration
kubernetesVersion: v1.19.0		# version of apiserver, controller-manager, scheduler and kube-proxy; change it if you want, left as-is here
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16		# add this line: the pod network CIDR
  serviceSubnet: 10.96.0.0/12
scheduler: {}
# list the images that need to be pulled; the warning below can be ignored
kubeadm config images list --config kubeadm.yaml
W0701 07:29:54.188861   14093 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.0
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.0
registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.0
registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0
registry.aliyuncs.com/google_containers/pause:3.2
registry.aliyuncs.com/google_containers/etcd:3.4.13-0
registry.aliyuncs.com/google_containers/coredns:1.7.0

# pull the images before kubeadm init so the init itself runs faster
kubeadm config images pull --config kubeadm.yaml
W0701 07:31:13.779122   14263 configset.go:348] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.19.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.19.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.19.0
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.19.0
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.2
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.4.13-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:1.7.0
# initialize
kubeadm init --config kubeadm.yaml

# the output is long; the important lines are below, and "successfully" means it worked
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.188.220:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:5d388fa6411653afb9018a799fb1798e4d5fbfa4cee0255f4a4ddffdfb9411a7
# run the commands printed by the init output (copy and paste); as root you can drop the sudo to avoid errors on hosts with sudo policies
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# check that kubelet is running
systemctl status kubelet.service

9. Join the worker nodes

  • all worker nodes
  • paste the join command printed by kubeadm init on the master
kubeadm join 192.168.188.220:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:5d388fa6411653afb9018a799fb1798e4d5fbfa4cee0255f4a4ddffdfb9411a7
    
# verify: all nodes are NotReady because the network plugin is not installed yet
kubectl get nodes
NAME          STATUS     ROLES    AGE   VERSION
k8s-master    NotReady   master   19m   v1.19.3
k8s-node-01   NotReady   <none>   23s   v1.19.3
k8s-node02    NotReady   <none>   8s    v1.19.3

# to add nodes later, generate a new join command and paste it on the new node
kubeadm token create --print-join-command

# this token is valid for 1 day; a permanent token can be generated
kubeadm token create --ttl 0

10. CNI

  • master only: every node needs flannel, but the downloaded yml creates a DaemonSet, and the nodes have already joined the cluster, so applying it once from the master is enough
  • flannel is used here, but calico is still the recommendation; the calico install will be covered later
# download the flannel yml
wget https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
# edit the yml
      containers:
      - name: kube-flannel
        image: quay.io/coreos/flannel:v0.11.0-amd64		# change to a local registry address if you mirror the flannel image
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens192			# change to the NIC name your servers actually use
# pull the image first
docker pull quay.io/coreos/flannel:v0.11.0-amd64

# create flannel
kubectl create -f kube-flannel.yml
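
Once the DaemonSet is created, a quick check that the flannel pods are running on every node and that the nodes turn Ready (a minimal sketch; pod names will differ in your cluster):

kubectl -n kube-system get pods -o wide | grep flannel
kubectl get nodes        # nodes should move from NotReady to Ready once flannel is up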

1. coredns

  • k8s uses coredns for in-cluster DNS resolution
  • when a pod starts, the cluster IP of the kube-dns service is injected into the pod's resolv.conf along with search domains for its own namespace, so reaching a service in another namespace by name requires appending that namespace: service_name.namespace
# first create a busybox pod
apiVersion: v1
kind: Pod
metadata:
  name: busybox1
  labels:
    name: busybox
spec:
  containers:
  - image: busybox:1.28
    command:
      - sleep
      - "3600"
    name: busybox


# exec into any container that has nslookup (the busybox image does) and resolve a service name
[root@k8s-master ~/minio]# kubectl get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                         AGE
kubernetes      ClusterIP   10.96.0.1      <none>        443/TCP                         27h
minio           ClusterIP   None           <none>        9000/TCP,5000/TCP               18h
minio-service   NodePort    10.109.91.83   <none>        9000:32707/TCP,5000:30670/TCP   18h

[root@k8s-master ~/minio]# kubectl exec  busybox1 -- nslookup minio
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      minio
Address 1: 10.244.1.21 minio-0.minio.default.svc.cluster.local
Address 2: 10.244.0.17 minio-2.minio.default.svc.cluster.local
Address 3: 10.244.3.14 minio-3.minio.default.svc.cluster.local
Address 4: 10.244.2.23 10-244-2-23.minio-service.default.svc.cluster.local
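
To illustrate the service_name.namespace form described above, a minimal check from the same busybox pod (the kubernetes service always exists in the default namespace):

kubectl exec busybox1 -- nslookup kubernetes.default
kubectl exec busybox1 -- nslookup kubernetes.default.svc.cluster.local   # fully qualified form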

11. Verify the cluster

[root@k8s-master ~]# kubectl get no
NAME          STATUS   ROLES    AGE   VERSION
k8s-master    Ready    master   65m   v1.19.3
k8s-node-01   Ready    <none>   46m   v1.19.3
k8s-node02    Ready    <none>   46m   v1.19.3
# the ROLES column is empty for the workers; label them
kubectl label nodes k8s-node-01  node-role.kubernetes.io/node01=
kubectl label nodes k8s-node02  node-role.kubernetes.io/node02=

# check again
kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
k8s-master    Ready    master   77m   v1.19.3
k8s-node-01   Ready    node01   57m   v1.19.3
k8s-node02    Ready    node02   57m   v1.19.3
# check the taint on the master
kubectl describe nodes k8s-master  | grep -i  taint

# by default the master does not schedule workload pods; decide whether you want it to. The command below removes the taint so the master can run workload pods.
kubectl taint node k8s-master node-role.kubernetes.io/master:NoSchedule-

12. Cluster health

  • if the kube-system pods do not come up, it is usually a network or image problem; check the images
# all the system pods are Running, yet the cluster still reports unhealthy components
kubectl get pod -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-6d56c8448f-pvbgd             1/1     Running   0          99m
coredns-6d56c8448f-x7zm5             1/1     Running   0          99m
etcd-k8s-master                      1/1     Running   0          99m
kube-apiserver-k8s-master            1/1     Running   0          99m
kube-controller-manager-k8s-master   1/1     Running   0          99m
kube-flannel-ds-amd64-75js8          1/1     Running   0          35m
kube-flannel-ds-amd64-9whvn          1/1     Running   0          35m
kube-flannel-ds-amd64-j7l7j          1/1     Running   0          35m
kube-proxy-l4hh2                     1/1     Running   0          80m
kube-proxy-m7rfh                     1/1     Running   0          80m
kube-proxy-q849r                     1/1     Running   0          99m
kube-scheduler-k8s-master            1/1     Running   0          99m
[root@k8s-master ~]# kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS      MESSAGE                                                                                       ERROR
scheduler            Unhealthy   Get "http://127.0.0.1:10251/healthz": dial tcp 127.0.0.1:10251: connect: connection refused   
controller-manager   Unhealthy   Get "http://127.0.0.1:10252/healthz": dial tcp 127.0.0.1:10252: connect: connection refused   
etcd-0               Healthy     {"health":"true"}                                    
# fix: comment out the --port=0 line in both files and restart kubelet
vim /etc/kubernetes/manifests/kube-controller-manager.yaml
vim /etc/kubernetes/manifests/kube-scheduler.yaml

systemctl restart kubelet.service

# check again: healthy now
kubectl get cs
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok                  
controller-manager   Healthy   ok                  
etcd-0               Healthy   {"health":"true"}   

13. Command completion

  • master node
yum -y install bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)

# persist it in .bashrc
echo "source <(kubectl completion bash)" >> ~/.bashrc

14. NFS integration

  • the NFS server sits outside the cluster
# on the NFS server
yum install -y nfs-utils rpcbind
mkdir -p /data/{preview,testing,develop}

vim /etc/exports
/data/develop	192.168.188.0/24(rw,sync,no_root_squash)
/data/testing	192.168.188.0/24(rw,sync,no_root_squash)
/data/preview	192.168.188.0/24(rw,sync,no_root_squash)

# enable on boot
systemctl start rpcbind;systemctl enable rpcbind
systemctl start nfs;systemctl enable nfs 

# test
showmount -e localhost
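
Before wiring NFS into the cluster, a quick mount test from any cluster node is worthwhile (a sketch; it assumes nfs-utils is already installed on that node, and unmounts again afterwards):

showmount -e 192.168.188.225
mkdir -p /mnt/nfs-test
mount -t nfs 192.168.188.225:/data/testing /mnt/nfs-test
touch /mnt/nfs-test/write-test && ls -l /mnt/nfs-test
umount /mnt/nfs-test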

1. Static provisioning

  • whenever a developer creates a PVC, ops has to hand-create a matching PV for it, which clearly does not scale
# on the master and worker nodes
yum install -y nfs-utils

# create a directory for the pv and pvc yaml files
mkdir yaml-pv -p && cd yaml-pv

# write pv.yaml: four PVs in total
vim pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-develop
  labels:
    pv: nfs-develop
spec:
  capacity: 
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain 
  storageClassName: nfs-develop
  nfs:
    path: /data/develop
    server: 192.168.188.225
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-testing
  labels:
    pv: nfs-testing
spec:
  capacity: 
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain 
  storageClassName: nfs-testing
  nfs:
    path: /data/testing
    server: 192.168.188.225
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-preview
  labels:
    pv: nfs-preview
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain 
  storageClassName: nfs-preview
  nfs:
    path: /data/preview
    server: 192.168.188.225
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-jenkins
  labels:
    pv: nfs-jenkins
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs-jenkins
  nfs:
    path: /data/jenkins
    server: 192.168.188.225
# The access modes, the storage size (the PVC request must be no larger than the PV), and the storageClassName of the PV and the PVC must match for them to bind.

# The PersistentVolume controller keeps looping over every PVC to check whether it is already Bound. If not, it walks through all available PVs and tries to bind one to the unbound PVC, so as soon as a suitable PV appears the PVC is bound quickly. "Binding" a PV to a PVC simply means writing the PV object's name into the PVC's spec.volumeName field.
# Write pvc.yaml; adapt it to your own needs since namespaces differ. This PVC is only for testing that the setup works.
vim pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-testing
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: nfs-testing
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: nfs-testing
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-develop
  namespace: default
spec:
  accessModes:     
  - ReadWriteOnce
  storageClassName: nfs-develop
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: nfs-develop
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-preview
  namespace: default
spec:
  accessModes:     
  - ReadWriteOnce
  storageClassName: nfs-preview
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: nfs-preview
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-nfs-jenkins
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: nfs-jenkins
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      pv: nfs-jenkins
# create the pv and pvc resources
kubectl create -f pvc.yaml
kubectl create -f pv.yaml

# check; note that PVs are cluster-scoped, so later only the PVC side needs adjusting
[root@k8s-master yaml-pv]# kubectl get pvc
NAME              STATUS   VOLUME           CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-nfs-develop   Bound    nfs-pv-testing   100Gi      RWO            nfs            3s
pvc-nfs-preview   Bound    nfs-pv-preview   100Gi      RWO            nfs            3s
pvc-nfs-testing   Bound    nfs-pv-develop   100Gi      RWO            nfs            3s
[root@k8s-master yaml-pv]# kubectl get pv
NAME             CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                     STORAGECLASS   REASON   AGE
nfs-pv-develop   100Gi      RWO            Recycle          Bound    default/pvc-nfs-testing   nfs                     14m
nfs-pv-preview   100Gi      RWO            Recycle          Bound    default/pvc-nfs-preview   nfs                     14m
nfs-pv-testing   100Gi      RWO            Recycle          Bound    default/pvc-nfs-develop   nfs                     14m
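
As described above, binding just fills in spec.volumeName on the PVC. A small sketch to confirm that on the objects themselves:

kubectl get pvc pvc-nfs-testing -o jsonpath='{.spec.volumeName}{"\n"}'             # the PV this claim bound to
kubectl get pv -o custom-columns=NAME:.metadata.name,CLAIM:.spec.claimRef.name     # the claim recorded on each PV
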
# create a test workload; write the pod yaml and set claimName accordingly
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
          name: web
        volumeMounts: 
        - name: www
          mountPath: /usr/local/
      volumes:
      - name: www
        persistentVolumeClaim:
          claimName: pvc-nfs-testing
# verify the NFS mount: the files below are visible inside the container, so it works
/usr/local # ls
testing-appstore  testing-bidding   testing-crm       testing-gateway   testing-order     testing-pms       testing-search    testing-user
testing-auth      testing-cms       testing-csc       testing-imc       testing-parking   testing-psi       testing-synchros  testing-workflow
testing-base      testing-coupon    testing-finance   testing-mcs       testing-payment   testing-rms       testing-system

2. Dynamic provisioning

  • use a StorageClass for dynamic provisioning
  • PVC and PV are a one-to-one pair: ops creates PVs and developers work with PVCs, but a large cluster can need a lot of PVs, and creating them all by hand is tedious, hence dynamic provisioning
  • Kubernetes has no built-in dynamic provisioner for NFS, so with an NFS backend we need the nfs-client provisioner
  • it can be set up per namespace, e.g. separate StorageClasses for the dev, ops and testing namespaces, so that creating a PVC is enough to get a PV created automatically
  • two StorageClass setups follow
    • First StorageClass
# provisioner
vim provisioner.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner		# change the name (and everywhere below)
  labels:
    app: nfs-client-provisioner
  namespace: nfs-provisioner		# change the namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: hsj/nfs			# must match the StorageClass provisioner field or they will not link up
            - name: NFS_SERVER
              value: 192.168.188.225	# NFS server address
            - name: NFS_PATH
              value: /data/testing		# exported path to mount
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.188.225		# same values as above
            path: /data/testing
# rbac: the ClusterRole and Role the provisioner needs; basically only the name and namespace change, the rules themselves stay as they are
vim rbac.yaml

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
  namespace: nfs-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
  namespace: nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
  namespace: nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-provisioner
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: nfs-provisioner
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
# storageclass: change the name and provisioner; provisioner must match the value in provisioner.yaml so the two link up
vim storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs
provisioner: hsj/nfs

# test PVC
vim pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: nfs-provisioner
  annotations:
      #volume.beta.kubernetes.io/storage-class: "nfs"
spec:
  storageClassName: nfs			# must name the StorageClass this claim binds to
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
      
# That is the first StorageClass; after creating the PVC check that it is Bound — creating the PVC automatically creates a PV (see the sketch below).
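
A minimal sketch of that check, using the file names above (the namespace has to exist before the provisioner is applied):

kubectl create namespace nfs-provisioner      # if it does not exist yet
kubectl apply -f rbac.yaml -f provisioner.yaml -f storageclass.yaml -f pvc.yaml
kubectl -n nfs-provisioner get pvc test-pvc   # should turn Bound after a few seconds
kubectl get pv                                # a pvc-<uid> PV is created automatically
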
  • Second StorageClass
# provisioner
vim provisioner.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-testing-provisioner
  labels:
    app: nfs-testing-provisioner
  namespace: testing
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-testing-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-testing-provisioner
    spec:
      serviceAccountName: nfs-testing-provisioner
      containers:
        - name: nfs-testing-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-testing-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: testing/nfs
            - name: NFS_SERVER
              value: 192.168.188.225
            - name: NFS_PATH
              value: /data/testing
      volumes:
        - name: nfs-testing-root
          nfs:
            server: 192.168.188.225
            path: /data/testing
# rbac
vim rbac.yaml

kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-testing-provisioner
  namespace: testing
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-testing-provisioner-runner
  namespace: testing
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-testing-provisioner
  namespace: testing
subjects:
  - kind: ServiceAccount
    name: nfs-testing-provisioner
    namespace: testing
roleRef:
  kind: ClusterRole
  name: nfs-testing-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-testing-provisioner
  namespace: testing
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-testing-provisioner
  namespace: testing
subjects:
  - kind: ServiceAccount
    name: nfs-testing-provisioner
    namespace: testing
roleRef:
  kind: Role
  name: leader-locking-nfs-testing-provisioner
  apiGroup: rbac.authorization.k8s.io
# storageclass
vim storageclass.yaml

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: testing-storageclass
provisioner: testing/nfs


# test pvc
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
  namespace: testing
  #annotations:
      #volume.beta.kubernetes.io/storage-class: "nfs"
spec:
  storageClassName: testing-storageclass
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
# both PVCs are bound, each in its own namespace
kubectl -n testing get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS           AGE
test-pvc   Bound    pvc-44ca9e5b-8253-48c1-a5c5-0542dd6a8c6d   1Mi        RWX            testing-storageclass   36m

kubectl -n nfs-provisioner get pvc
NAME       STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-pvc   Bound    pvc-a1fce90e-a5fa-4e9b-8e46-6e7c7cab5e40   1Mi        RWX            nfs            35m

16. IPVS

  • master
# edit the kube-proxy ConfigMap and set mode to ipvs
kubectl edit configmap kube-proxy -n kube-system 
    kind: KubeProxyConfiguration
    metricsBindAddress: ""
    mode: "ipvs"			# 修改这里
    nodePortAddresses: null

# delete all the kube-proxy pods so they restart with the new config
kubectl delete pod ${pod_name} -n kube-system

# check that the kube-proxy pods restarted and are Running
kubectl get pod -nkube-system |grep kube-proxy
kube-proxy-8cfct                     1/1     Running   0          41s
kube-proxy-t7khm                     1/1     Running   0          72s
kube-proxy-xrhvf                     1/1     Running   0          21s

# check the kube-proxy logs; the line "Using ipvs Proxier" confirms IPVS is in use
kubectl logs ${pod_name} -n kube-system
I0702 10:05:47.071961       1 server_others.go:259] Using ipvs Proxier.
E0702 10:05:47.072302       1 proxier.go:381] can't set sysctl net/ipv4/vs/conn_reuse_mode, kernel version must be at least 4.1
W0702 10:05:47.072584       1 proxier.go:434] IPVS scheduler not specified, use rr by default
I0702 10:05:47.073021       1 server.go:650] Version: v1.19.0

# kube-proxy also warns that conn_reuse_mode needs kernel >= 4.1; this can be ignored
can't set sysctl net/ipv4/vs/conn_reuse_mode, kernel version must be at least 4.1

# install ipvsadm for troubleshooting later; if resources allow, install it on every node (see the check below)
yum -y install ipvsadm
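
With ipvsadm installed, a quick check that kube-proxy is really programming IPVS rules (every ClusterIP shows up as a virtual server):

ipvsadm -Ln                               # list all virtual servers and their real servers
ipvsadm -Ln | grep -A 3 "10.96.0.1:443"   # e.g. the kubernetes apiserver service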

17. nginx-ingress

  • the commonly used nginx ingress controller
$ wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml

## or use myblog/deployment/ingress/mandatory.yaml
## change where the controller gets deployed
190 apiVersion: apps/v1
191 kind: DaemonSet				# change Deployment to DaemonSet
192 metadata:
193   name: nginx-ingress-controller
194   namespace: ingress-nginx
195   labels:
196     app.kubernetes.io/name: ingress-nginx
197     app.kubernetes.io/part-of: ingress-nginx
198 spec:
199  # replicas: 1				# comment out
200   selector:
201     matchLabels:
202       app.kubernetes.io/name: ingress-nginx
203       app.kubernetes.io/part-of: ingress-nginx
204   template:
205     metadata:
206       labels:
207         app.kubernetes.io/name: ingress-nginx
208         app.kubernetes.io/part-of: ingress-nginx
209       annotations:
210         prometheus.io/port: "10254"
211         prometheus.io/scrape: "true"
212     spec:
213       hostNetwork: true					# add this line
214       # wait up to five minutes for the drain of connections
215       terminationGracePeriodSeconds: 300
216       serviceAccountName: nginx-ingress-serviceaccount
217       nodeSelector:
218         ingress: "true"					# add this line
219         #kubernetes.io/os: linux		# comment out


# label the nodes that should run the ingress controller
kubectl label node k8s-master ingress=true
kubectl label node k8s-node01 ingress=true
kubectl label node k8s-node02 ingress=true

# create
kubectl create -f mandatory.yaml

# check
kubectl -n ingress-nginx  get pod
  • fixing the ingress controller error: simply create the Service below


apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: ClusterIP
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
# after creating it, check whether the error services "ingress-nginx" not found is gone
kubectl -n ingress-nginx logs --tail=10 ${POD_NAME}

18. HPA

wget https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.6/components.yaml

# edit the yaml
 84       containers:
 85       - name: metrics-server
 86         image: registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6		# change the image to this address
 87         imagePullPolicy: IfNotPresent
 88         args:
 89           - --cert-dir=/tmp
 90           - --secure-port=4443
 91           - --kubelet-insecure-tls   						# add these two lines
 92           - --kubelet-preferred-address-types=InternalIP
# check that the pod (named metrics-server-xxxx) is up
kubectl -n kube-system get pod

# test: node resource usage is visible now; refresh the dashboard and it will also show CPU and memory usage for pods and other resources
kubectl top nodes
NAME          CPU(cores)   CPU%   MEMORY(bytes)   MEMORY%   
k8s-master    307m         3%     8682Mi          55%       
k8s-node-01   117m         1%     3794Mi          24%       
k8s-node02    127m         1%     4662Mi          29%    
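
With metrics-server reporting, an HPA can be tested. A minimal sketch against the web-test Deployment from the static provisioning section (note: scaling on CPU only works if the target pods declare resources.requests.cpu, which that Deployment does not set yet, so add a request first; the thresholds below are just examples):

kubectl autoscale deployment web-test --cpu-percent=50 --min=1 --max=3
kubectl get hpa web-test          # TARGETS shows current/target CPU once metrics flow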

19. Internal manifest server

  • on the nfs host
  • add a hosts entry for it
server {
    listen       2780;
    server_name  k8s-yaml.com;

    location / {
	autoindex on;
	default_type text/plain;
	root /sj_yaml/list;
    }
}

# create the directory, reload nginx, then browse it (a sketch follows)
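
A minimal sketch of those steps on the nfs host (it assumes nginx is already installed there and that a hosts entry points k8s-yaml.com at that host):

mkdir -p /sj_yaml/list
nginx -t && nginx -s reload
curl http://k8s-yaml.com:2780/        # autoindex lists the yaml files placed in /sj_yaml/list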

20. Mapping external services

  • map a service that lives outside the cluster into an in-cluster Service so workloads can reach it; two objects are needed
    • every Service created in k8s normally gets a matching Endpoints object
    • so the Service and the Endpoints must have the same name
# create the Endpoints object; the cluster must be able to reach the external service's network
vim mysql_endpoint.yaml
kind: Endpoints
apiVersion: v1
metadata:
  name: mysql-production
  namespace: public-toilet		#
subsets:
  - addresses:
      - ip: 192.168.188.12		# mysql on the nfs node
    ports:
      - port: 3306
---
# expose the external service as the in-cluster mysql-production Service for easy access
apiVersion: v1
kind: Service
metadata:
  name: mysql-production
  namespace: public-toilet
spec:
  ports:
    - port: 3306
# exec into a container and check that the name resolves to the external IP
nslookup mysql-production.public-toilet.svc.cluster.local

21. Cross-namespace access

  • shared components are usually placed in one common namespace, and services in other namespaces need to reach them, so cross-namespace access is required
  • coredns handles in-cluster name resolution; note that Service ClusterIPs cannot be pinged
    • look at /etc/resolv.conf inside a pod
    • use nslookup ${SVC_NAME} to check whether the name resolves to the Service address
  • once the ExternalName Service is created, applications can connect using just the Service name
# Example: the shared components live in the pub namespace and the consumer in app1. Create a Service in app1 with no selector, type ExternalName, and externalName set to ${SVC_NAME}.${NAMESPACE}.svc.cluster.local
vim link-rabbitmq-svc.yaml
apiVersion: v1
kind: Service
metadata:
 name: rabbitmq
 namespace: app1
spec:
 ports:
 - port: 5672
   name: amqp
 sessionAffinity: None
 type: ExternalName
 externalName: rabbitmq.pub.svc.cluster.local
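
A quick check from the app1 namespace that the ExternalName alias resolves (a sketch using a throwaway busybox pod; the target rabbitmq service in pub is assumed to exist):

kubectl apply -f link-rabbitmq-svc.yaml
kubectl -n app1 run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup rabbitmq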

22. RBAC

  • role
    • scoped to a single namespace
  • clusterrole
    • a cluster-scoped resource
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: secret-reader
# no namespace here: ClusterRole is a cluster-scoped resource
rules:
- apiGroups: [""]			# "" 指定核心API组
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]

三. dashboard

  • web UI for cluster management

1. Configuration

# download the yaml
wget https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-rc5/aio/deploy/recommended.yaml

# edit it, around line 45
vi recommended.yaml

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 31755				# pin the NodePort
  selector:
    k8s-app: kubernetes-dashboard
  type: NodePort  					# add type: NodePort so the dashboard Service is exposed and reachable from a browser
 
# create a login user
vim admin.yaml

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kubernetes-dashboard

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kubernetes-dashboard
  
# apply both files directly; then you can log in from a browser
kubectl -n kubernetes-dashboard get svc
NAME                        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.106.34.228   <none>        8000/TCP        15m
kubernetes-dashboard        NodePort    10.110.236.83   <none>        443:31755/TCP   15m

2. token

  • not recommended
# log in with a token
https://192.168.188.220:31755

# get the token and paste it into the dashboard login page (lookup sketch below)
kubectl -n kubernetes-dashboard describe secrets admin-token-ddz4h
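
The secret name carries a random suffix, so look it up instead of guessing (a sketch):

kubectl -n kubernetes-dashboard get secret | grep admin-token
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | awk '/admin-token/{print $1}')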

3. kubeconfig

  • token login is tedious; configure a kubeconfig for logging in instead, as follows
# create a directory for the kubeconfig files
mkdir -p /data/apps/dashboard/certs && cd /data/apps/dashboard/

# generate an RSA key protected with a throwaway passphrase
openssl genrsa -des3 -passout pass:x -out certs/dashboard.pass.key 2048

# strip the passphrase from the key
openssl rsa -passin pass:x -in certs/dashboard.pass.key -out certs/dashboard.key

# create a certificate signing request with CN=kube-dashboard
openssl req -new -key certs/dashboard.key -out certs/dashboard.csr -subj '/CN=kube-dashboard'

# self-sign a certificate valid for 10 years
openssl x509 -req -sha256 -days 3650 -in certs/dashboard.csr -signkey certs/dashboard.key -out certs/dashboard.crt

# the passphrase-protected key is no longer needed
rm certs/dashboard.pass.key

# store the certificates in a secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=certs -n kube-system

# pick the user to build the kubeconfig for -- here the admin user; echo the variable to inspect it
DASH_TOKEN=$(kubectl -n kubernetes-dashboard get secret admin-token-ddz4h  -o jsonpath={.data.token}|base64 -d)

# define the cluster entry
kubectl config set-cluster kubernetes --server=192.168.188.220:8443 --kubeconfig=/data/apps/dashboard/certs/dashbord-admin.conf

# add the credentials (the token)
kubectl config set-credentials dashboard-admin --token=$DASH_TOKEN --kubeconfig=/data/apps/dashboard/certs/dashbord-admin.conf

# create the context
kubectl config set-context dashboard-admin@kubernetes --cluster=kubernetes --user=dashboard-admin --kubeconfig=/data/apps/dashboard/certs/dashbord-admin.conf

# dashbord-admin.conf is now the admin user's login kubeconfig; copy it to your workstation and select it on the dashboard login page
kubectl config use-context dashboard-admin@kubernetes --kubeconfig=/data/apps/dashboard/certs/dashbord-admin.conf

4. ingress

  • expose the dashboard through an Ingress
vim dash-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/ssl-passthrough: "true"
    nginx.ingress.kubernetes.io/backend-protocol: HTTPS
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: hsj.dashboard.com				# change to the URL you want to use
    http:
      paths:
      - path: /
        backend:
          servicePort: 443
          serviceName: kubernetes-dashboard
  tls:
  - hosts:
    - hsj.dashboard.com					# change to the URL you want to use
    secretName: kubernetes-dashboard-certs

5. Other users

  • create additional ClusterRoles with limited permissions so other people can also log in to the dashboard with restricted access

四. harbor

  • image registry
  • runs on the nfs node, outside the cluster

1. docker compose + harbor

  • harbor needs docker-compose
# install docker-compose first; strictly speaking only its binary is needed
curl -o /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum install -y docker-compose

# harbor
# download the offline installer for harbor 1.8.3
wget https://storage.googleapis.com/harbor-releases/release-1.8.0/harbor-offline-installer-v1.8.3.tgz

tar xf harbor-offline-installer-v1.8.3.tgz -C /opt/

# symlink for version control; makes upgrades easier
mv /opt/harbor /opt/harbor-v1.83
ln -s /opt/harbor-v1.83/ /opt/harbor

2. harbor.yml

  • this is harbor's configuration file; edit it
# my configuration is below
hostname: 192.168.188.225
http:
  port: 180
harbor_admin_password: ${LOGIN_PASSWORD}
database:
  password: root123
data_volume: /harbor/images
clair: 
  updaters_interval: 12
  http_proxy:
  https_proxy:
  no_proxy: 127.0.0.1,localhost,core,registry
jobservice:
  max_job_workers: 10
chart:
  absolute_url: disabled
log:
  level: info
  rotate_count: 50
  rotate_size: 200M
  location: /var/log/harbor
_version: 1.8.0

3. nginx

1. daemon.json

# add one line to daemon.json
vim /etc/docker/daemon.json
"insecure-registries": ["http://192.168.188.225:180"],

# restart docker
systemctl restart docker

2. nginx

# nginx config
server {
    listen       80;
    server_name  harbor.echronos.com;

    client_max_body_size 1000m;

    location / {
        proxy_pass http://127.0.0.1:180;
    }
}

# restart
systemctl enable nginx;systemctl restart nginx

# open harbor in a browser and create a project; note the project must be public

# docker login to check access, then push an image to test (see the sketch below)
docker login 192.168.188.225:180
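
A minimal push test, assuming a public project named library was created in the harbor UI (the project name is an assumption; any small local image works):

docker pull busybox:1.28
docker tag busybox:1.28 192.168.188.225:180/library/busybox:1.28
docker push 192.168.188.225:180/library/busybox:1.28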

五. jenkins + gitlab

(一). Setup process

1. jenkins

vim jenkins-all.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: jenkins
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: jenkins
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: jenkins-crb
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: jenkins
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-master
  namespace: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      devops: jenkins-master
  template:
    metadata:
      labels:
        devops: jenkins-master
    spec:
      nodeSelector:
        jenkins: "true"
      serviceAccount: jenkins # the ServiceAccount the pod runs as
      initContainers:
      - name: fix-permissions
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["sh", "-c", "chown -R 1000:1000 /var/jenkins_home"]
        securityContext:
          privileged: true
        volumeMounts:
        - name: jenkinshome
          mountPath: /var/jenkins_home
      containers:
      - name: jenkins
        image: jenkinsci/blueocean:1.23.2
        imagePullPolicy: IfNotPresent
        ports:
        - name: http # Jenkins master web UI port
          containerPort: 8080
        - name: slavelistener # port future Jenkins slaves connect to
          containerPort: 50000
        volumeMounts:
        - name: jenkinshome
          mountPath: /var/jenkins_home
        - name: date-config
          mountPath: /etc/localtime
        env:
        - name: JAVA_OPTS
          value: "-Xms4096m -Xmx5120m -Duser.timezone=Asia/Shanghai -Dhudson.model.DirectoryBrowserSupport.CSP="
      volumes:
      - name: jenkinshome
        hostPath:
          path: /var/jenkins_home/
      - name: date-config
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
---
apiVersion: v1
kind: Service
metadata:
  name: jenkins
  namespace: jenkins
spec:
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: slavelistener
    port: 50000
    targetPort: 50000
  type: ClusterIP
  selector:
    devops: jenkins-master
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins-web
  namespace: jenkins
spec:
  rules:
  - host: jenkins.test.com		# change the ingress host
    http:
      paths:
      - backend:
          serviceName: jenkins
          servicePort: 8080
        path: /
# label the node that will run jenkins-master (k8s-node02 here)
kubectl label node k8s-node02 jenkins=true

# deploy
kubectl create -f jenkins-all.yaml

# check
kubectl -n jenkins get po
NAME                              READY   STATUS    RESTARTS   AGE
jenkins-master-767df9b574-lgdr5   1/1     Running   0          20s

2. secret

# secret
vim gitlab-secret.txt
postgres.user.root=root
postgres.pwd.root=1qaz2wsx

# create the secret; a type has to be chosen (generic here) along with the name gitlab-secret
$ kubectl -n jenkins create secret generic gitlab-secret --from-env-file=gitlab-secret.txt
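
A quick way to confirm what was stored (values come back base64-encoded; keys containing dots need escaping in jsonpath):

kubectl -n jenkins get secret gitlab-secret -o yaml
kubectl -n jenkins get secret gitlab-secret -o jsonpath='{.data.postgres\.user\.root}' | base64 -d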

3. postgres

  • gitlab depends on postgres
# change the image; nothing else needs modification
vim postgres.yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres
  labels:
    app: postgres
  namespace: jenkins
spec:
  ports:
  - name: server
    port: 5432
    targetPort: 5432
    protocol: TCP
  selector:
    app: postgres
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: jenkins
  name: postgres
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      nodeSelector:
        postgres: "true"
      tolerations:
      - operator: "Exists"
      containers:
      - name: postgres
        image:  192.168.136.10:5000/postgres:11.4 	# if you do not run this local registry, use postgres:11.4 instead
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_USER           # PostgreSQL username
          valueFrom:
            secretKeyRef:
              name: gitlab-secret
              key: postgres.user.root
        - name: POSTGRES_PASSWORD       # PostgreSQL password
          valueFrom:
            secretKeyRef:
              name: gitlab-secret
              key: postgres.pwd.root
        resources:
          limits:
            cpu: 1000m
            memory: 2048Mi
          requests:
            cpu: 50m
            memory: 100Mi
        volumeMounts:
        - mountPath: /var/lib/postgresql/data
          name: postgredb
      volumes:
      - name: postgredb
        hostPath:
          path: /var/lib/postgres/
# everything gitlab-related goes to k8s-node02; label it
kubectl label node k8s-node02 postgres=true

# create postgres
$ kubectl create -f postgres.yaml

# exec into the container and create the gitlab database used by the gitlab deployment later
$ kubectl -n jenkins exec -ti postgres-7ff9b49f4c-nt8zh bash
root@postgres-7ff9b49f4c-nt8zh:/# psql
root=# create database gitlab;

4. redis

# nothing needs changing here
apiVersion: v1
kind: Service
metadata:
  name: redis
  labels:
    app: redis
  namespace: jenkins
spec:
  ports:
  - name: server
    port: 6379
    targetPort: 6379
    protocol: TCP
  selector:
    app: redis
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: jenkins
  name: redis
  labels:
    app: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      tolerations:
      - operator: "Exists"
      containers:
      - name: redis
        image:  sameersbn/redis:4.0.9-2
        imagePullPolicy: "IfNotPresent"
        ports:
        - containerPort: 6379
        resources:
          limits:
            cpu: 1000m
            memory: 2048Mi
          requests:
            cpu: 50m
            memory: 100Mi

5. gitlab

# adjust the ingress host and label the node
vim gitlab.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: gitlab
  namespace: jenkins
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: "50m"
spec:
  rules:
  - host: gitlab.com
    http:
      paths:
      - backend:
          serviceName: gitlab
          servicePort: 80
        path: /
---
apiVersion: v1
kind: Service
metadata:
  name: gitlab
  labels:
    app: gitlab
  namespace: jenkins
spec:
  ports:
  - name: server
    port: 80
    targetPort: 80
    protocol: TCP
  selector:
    app: gitlab
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: jenkins
  name: gitlab
  labels:
    app: gitlab
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gitlab
  template:
    metadata:
      labels:
        app: gitlab
    spec:
      nodeSelector:
        gitlab: "true"
      tolerations:
      - operator: "Exists"
      containers:
      - name: gitlab
        image:  sameersbn/gitlab:13.2.2
        imagePullPolicy: "IfNotPresent"
        env:
        - name: GITLAB_HOST
          value: "hsj.gitlab.com"		# 这里最好配置和gitlab的URL一致,代码远程仓库push的地址.
        - name: GITLAB_PORT
          value: "80"
        - name: GITLAB_SECRETS_DB_KEY_BASE
          value: "long-and-random-alpha-numeric-string"
        - name: GITLAB_SECRETS_DB_KEY_BASE
          value: "long-and-random-alpha-numeric-string"
        - name: GITLAB_SECRETS_SECRET_KEY_BASE
          value: "long-and-random-alpha-numeric-string"
        - name: GITLAB_SECRETS_OTP_KEY_BASE
          value: "long-and-random-alpha-numeric-string"
        - name: DB_HOST
          value: "postgres"
        - name: DB_NAME
          value: "gitlab"
        - name: DB_USER
          valueFrom:
            secretKeyRef:
              name: gitlab-secret
              key: postgres.user.root
        - name: DB_PASS
          valueFrom:
            secretKeyRef:
              name: gitlab-secret
              key: postgres.pwd.root
        - name: REDIS_HOST
          value: "redis"
        - name: REDIS_PORT
          value: "6379"
        ports:
        - containerPort: 80
        resources:
          limits:
            cpu: 2000m
            memory: 5048Mi
          requests:
            cpu: 100m
            memory: 500Mi
        volumeMounts:
        - mountPath: /home/git/data
          name: data
      volumes:
      - name: data
        hostPath:
          path: /var/lib/gitlab/
# schedule on k8s-node02
$ kubectl label node k8s-node02 gitlab=true

# create
$ kubectl create -f gitlab.yaml
# in the end all four pods, plus the ingress, secret, services and so on, must be running
kubectl -n  jenkins get po
NAME                              READY   STATUS    RESTARTS   AGE
gitlab-9db4f85df-kkmbx            1/1     Running   0          157m
jenkins-master-79f766cc69-k8vww   1/1     Running   0          66m
postgres-64746c8589-ccdgg         1/1     Running   0          3h41m
redis-548dc5569f-2rds8            1/1     Running   0          3h39m

# add the hosts entries on both the servers and your Windows workstation
192.168.188.220 hsj.jenkins.com hsj.gitlab.com

(二). Tuning

1. Jenkins mirror source

 

锐单商城 - 一站式电子元器件采购平台