Infrastructure as a Service (IaaS): Alibaba Cloud
Platform as a Service (PaaS): Sina App Engine
Software as a Service (SaaS): Office 365

Other resource managers / orchestrators: Apache Mesos, Docker Swarm

For high availability, a cluster should have at least 3 replicas, and an odd number of them.

References:
https://gitee.com/WeiboGe2012/kubernetes-kubeadm-install
https://gitee.com/llqhz/ingress-nginx (find the information matching your version)
K8s architecture
APISERVER: the unified entry point for access to all services
ControllerManager: maintains the expected number of replicas
Scheduler: assigns tasks, choosing a suitable node for each one
ETCD: key-value store holding all the important information of the K8S cluster (persistent)
Kubelet: interacts directly with the container engine to manage container lifecycles
kube-proxy: writes IPTABLES/IPVS rules to implement service access mapping
Other add-ons:
COREDNS: resolves domain-name-to-IP mappings for cluster SVCs
DASHBOARD: provides a B/S (browser/server) access interface for the K8S cluster
INGRESS CONTROLLER: Kubernetes itself only provides layer-4 proxying; INGRESS adds layer-7 proxying
FEDERATION: provides unified management of multiple K8S clusters across a cluster center
PROMETHEUS: provides monitoring for the K8S cluster
ELK: provides a unified log analysis platform for the K8S cluster
Pod concepts
Pod controller types
ReplicationController & ReplicaSet & Deployment
A ReplicationController ensures that the number of replicas of a containerized application always matches the user-defined count: if a container exits abnormally, a new replacement Pod is created automatically, and excess Pods are reclaimed automatically as well. Kubernetes recommends using ReplicaSet instead of ReplicationController.
ReplicaSet is essentially the same as ReplicationController, just under a different name, except that ReplicaSet additionally supports set-based selectors.
Although a ReplicaSet can be used on its own, it is generally recommended to let a Deployment manage the ReplicaSet automatically, so you need not worry about incompatibilities with other mechanisms (for example, ReplicaSet does not support rolling-update but Deployment does).
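As a sketch of such a Deployment-managed ReplicaSet, a minimal manifest might look like this (the name, image and replica count are illustrative, not taken from the setup below):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment   # illustrative name
spec:
  replicas: 3              # the Deployment keeps 3 Pods running via a ReplicaSet
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17  # illustrative image/tag
        ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f` creates the ReplicaSet behind the scenes, and changing `image` later triggers a rolling update.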
HPA (HorizontalPodAutoscaler)
Horizontal Pod Autoscaling applies only to Deployment and ReplicaSet. In v1 it only supports scaling based on Pod CPU utilization; from v1alpha it also supports scaling based on memory and user-defined metrics.
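A minimal HPA sketch for the CPU-utilization case, using the `autoscaling/v1` API (the target name and thresholds are illustrative):

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
spec:
  scaleTargetRef:               # which workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment      # illustrative Deployment name
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80  # scale out when average CPU exceeds 80%
```

The equivalent one-liner is `kubectl autoscale deployment nginx-deployment --min=2 --max=10 --cpu-percent=80`.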
StatefulSet
StatefulSet solves the problem of stateful services (whereas Deployments and ReplicaSets are designed for stateless services). Its application scenarios include:
* Stable persistent storage: a Pod still sees the same persistent data after being rescheduled, implemented with PVCs
* Stable network identity: the PodName and HostName stay the same after rescheduling, implemented with a Headless Service (a Service without a Cluster IP)
* Ordered deployment and ordered scaling: Pods are created strictly in order (from 0 to N-1; all preceding Pods must be Running and Ready before the next one starts), implemented with init containers
* Ordered shrinking and ordered deletion (from N-1 down to 0)
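The two "stable" properties above map directly onto two manifest fields: a headless Service gives each Pod a stable DNS name, and `volumeClaimTemplates` gives each Pod its own PVC. A minimal sketch (names, image and storage size are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-headless
spec:
  clusterIP: None            # headless Service: no Cluster IP
  selector:
    app: nginx
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: nginx-headless   # ties stable Pod DNS names to the headless Service
  replicas: 3                   # Pods are created in order: web-0, web-1, web-2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.17       # illustrative image
        volumeMounts:
        - name: www
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:         # one PVC per Pod, reattached after rescheduling
  - metadata:
      name: www
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```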
DaemonSet
A DaemonSet ensures that all (or some) Nodes run one copy of a Pod. When Nodes join the cluster, a Pod is added on them; when Nodes are removed from the cluster, those Pods are reclaimed. Deleting a DaemonSet deletes all the Pods it created. Some typical uses of a DaemonSet:
* Run a cluster storage daemon on every Node, e.g. glusterd or ceph.
* Run a log-collection daemon on every Node, e.g. fluentd or logstash.
* Run a monitoring daemon on every Node, e.g. Prometheus Node Exporter.
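A minimal DaemonSet sketch for the log-collection case (the image and names are illustrative; the toleration lets the Pod also run on master nodes, which carry the NoSchedule taint set during `kubeadm init` below):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: fluentd
  template:
    metadata:
      labels:
        name: fluentd
    spec:
      tolerations:                # also schedule onto tainted master nodes
      - key: node-role.kubernetes.io/master
        effect: NoSchedule
      containers:
      - name: fluentd
        image: fluent/fluentd:v1.9   # illustrative image/tag
```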
Job, CronJob
A Job handles batch tasks, i.e. tasks that run only once; it guarantees that one or more Pods of the batch task end successfully.
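A minimal Job sketch (the classic pi-computation example; image and name are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: pi
spec:
  completions: 1           # the task counts as done after 1 successful Pod
  backoffLimit: 4          # retry a failed Pod up to 4 times
  template:
    spec:
      restartPolicy: Never # a Job's Pod must use Never or OnFailure
      containers:
      - name: pi
        image: perl        # illustrative image
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2000)"]
```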
A CronJob manages Jobs on a time schedule, i.e.:
* Run only once at a given point in time
* Run periodically at given points in time
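A minimal CronJob sketch (using `batch/v1beta1`, the CronJob API group in the v1.15 cluster installed below; schedule and command are illustrative):

```yaml
apiVersion: batch/v1beta1   # batch/v1 from Kubernetes 1.21 onward
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"   # standard cron syntax: here, every minute
  jobTemplate:              # each firing creates a Job from this template
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: hello
            image: busybox  # illustrative image
            command: ["sh", "-c", "date; echo Hello"]
```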
Service discovery
Network communication model
Kubernetes' network model assumes that all Pods live in a flat network space where they can reach each other directly. On GCE (Google Compute Engine) this network model is available out of the box, so Kubernetes assumes the network already exists. When building a Kubernetes cluster in a private cloud, that network cannot be assumed; we have to implement this network assumption ourselves across the different nodes: first enable mutual access between Docker containers, then run Kubernetes.
Between containers inside the same Pod: the loopback interface (lo)
Between Pods: the Overlay Network
Pod to Service: iptables rules maintained on every node
Network solution: Kubernetes + Flannel
Flannel is a network-planning service designed by the CoreOS team for Kubernetes. Simply put, its job is to give Docker containers created on different node hosts virtual IP addresses that are unique across the whole cluster. It can also build an overlay network between these addresses, through which packets are delivered unchanged to the target container. What ETCD provides for Flannel:
It stores and manages the IP address ranges Flannel can allocate, and it monitors the actual address of every Pod in ETCD, maintaining the Pod-to-node routing table in memory.
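Concretely, Flannel reads its cluster-wide address range from a small JSON config. In the kube-flannel.yml manifest used later in this setup, it is stored as `net-conf.json` in a ConfigMap; a typical content (matching the `podSubnet` 10.244.0.0/16 configured below) looks roughly like:

```json
{
  "Network": "10.244.0.0/16",
  "Backend": {
    "Type": "vxlan"
  }
}
```

Each node then leases a /24 slice of this range (10.244.0.0/24, 10.244.1.0/24, ...) for its local Pods.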
The communication mode differs by case:

1. Inside the same Pod: the containers share one network namespace and one Linux protocol stack, so they talk over loopback.
2. Pod1 to Pod2:
- On different hosts: a Pod's address is in the same network segment as docker0, but the docker0 segment and the host NIC are two completely different IP networks, so Nodes can only communicate through the host's physical NIC. Flannel associates the Pod's IP with the IP of the Node it runs on, and this association lets Pods reach each other.
- On the same host: the docker0 bridge forwards the request directly to Pod2, without going through Flannel.
3. Pod to Service: currently, for performance reasons, iptables maintains the rules and forwards the traffic.
4. Pod to the external network: the Pod sends the request out, the routing table is consulted, and the packet is forwarded to the host NIC; after the host NIC completes route selection, iptables performs a masquerade, rewriting the source IP to the host NIC's IP, and then the request goes out to the external server.
5. External access to a Pod: via a Service.
Component communication diagram
K8s cluster installation
Preparation
1. Every k8s node must have more than 1 CPU core.
2. Node network information: 192.168.192.0/24 — master: 131, node1: 130, node2: 129
Four CentOS 7 machines: one master server, two nodes, and one harbor, all using host-only networking. One Windows 10 machine with the koolshare KoolCenter firmware installed (firmware download: http://fw.koolcenter.com/ ; also download the IMG disk-writing tool).
Create the virtual machines
Check which NIC is running in host-only mode. After setup, open the koolshare router in a browser at 192.168.1.1 (password: koolshare) and change it to the same network segment as the k8s cluster, then log in again at the new router IP, 192.168.192.1. Diagnostics show domestic sites are reachable; for foreign sites, open 酷软, download koolss, then either search for SSR nodes or fill in your own SSR server directly.
K8s cluster installation
Set the system hostnames and the hosts file
hostnamectl set-hostname k8s-master01
hostnamectl set-hostname k8s-node01
hostnamectl set-hostname k8s-node02
Set the IP addresses: vi /etc/sysconfig/network-scripts/ifcfg-ens33 — master host:
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="ea3b61ed-9232-4b69-b6c0-2f863969e750"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.192.131
NETMASK=255.255.255.0
GATEWAY=192.168.192.1
DNS1=192.168.192.1
DNS2=114.114.114.114
node01 host:
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="ea3b61ed-9232-4b69-b6c0-2f863969e750"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.192.130
NETMASK=255.255.255.0
GATEWAY=192.168.192.1
DNS1=192.168.192.1
DNS2=114.114.114.114
node02 host:
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="static"
DEFROUTE="yes"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
IPV6_ADDR_GEN_MODE="stable-privacy"
NAME="ens33"
UUID="ea3b61ed-9232-4b69-b6c0-2f863969e750"
DEVICE="ens33"
ONBOOT="yes"
IPADDR=192.168.192.129
NETMASK=255.255.255.0
GATEWAY=192.168.192.1
DNS1=192.168.192.1
DNS2=114.114.114.114
Restart the network on all three hosts
service network restart
On the master host, add the following hostnames to the hosts file: vi /etc/hosts
192.168.192.131 k8s-master01
192.168.192.130 k8s-node01
192.168.192.129 k8s-node02
Copy the master host's hosts file to node01 and node02
[root@localhost ~]# scp /etc/hosts root@k8s-node01:/etc/hosts
The authenticity of host 'k8s-node01 (192.168.192.130)' can't be established.
ECDSA key fingerprint is SHA256:M5BalHyNXU5W49c5/9iZgC4Hl370O0Wr/c5S/FYFIvw.
ECDSA key fingerprint is MD5:28:23:b8:eb:af:d1:bd:bb:8c:77:e0:01:3c:62:7a:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'k8s-node01,192.168.192.130' (ECDSA) to the list of known hosts.
root@k8s-node01's password:
hosts 100% 241 217.8KB/s 00:00
[root@localhost ~]# scp /etc/hosts root@k8s-node02:/etc/hosts
The authenticity of host 'k8s-node02 (192.168.192.129)' can't be established.
ECDSA key fingerprint is SHA256:M5BalHyNXU5W49c5/9iZgC4Hl370O0Wr/c5S/FYFIvw.
ECDSA key fingerprint is MD5:28:23:b8:eb:af:d1:bd:bb:8c:77:e0:01:3c:62:7a:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'k8s-node02,192.168.192.129' (ECDSA) to the list of known hosts.
root@k8s-node02's password:
hosts 100% 241 143.1KB/s 00:00
Install the dependency packages on all three hosts
yum install -y conntrack ntpdate ntp ipvsadm ipset jq iptables curl sysstat libseccomp wget vim net-tools git
All three hosts: switch the firewall to iptables and set empty rules
systemctl stop firewalld && systemctl disable firewalld
yum -y install iptables-services && systemctl start iptables && systemctl enable iptables && iptables -F && service iptables save
All three hosts: disable swap and SELinux
swapoff -a && sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
setenforce 0 && sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
All three hosts: tune the kernel parameters for K8S
cat > kubernetes.conf <<EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 # forbid using swap space; allow it only when the system is OOM
vm.overcommit_memory=1 # do not check whether physical memory is sufficient
vm.panic_on_oom=0 # enable OOM handling
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
cp kubernetes.conf /etc/sysctl.d/kubernetes.conf
sysctl -p /etc/sysctl.d/kubernetes.conf
All three hosts: set the system time zone
# Set the system time zone to Asia/Shanghai
timedatectl set-timezone Asia/Shanghai
# Write the current UTC time to the hardware clock
timedatectl set-local-rtc 0
# Restart services that depend on the system time
systemctl restart rsyslog
systemctl restart crond
All three hosts: stop services the system does not need
systemctl stop postfix && systemctl disable postfix
All three hosts: configure rsyslogd and systemd journald
mkdir /var/log/journal # directory for persistent log storage
mkdir /etc/systemd/journald.conf.d
cat > /etc/systemd/journald.conf.d/99-prophet.conf <<EOF
[Journal]
# Persist logs to disk
Storage=persistent
# Compress historical logs
Compress=yes
SyncIntervalSec=5m
RateLimitInterval=30s
RateLimitBurst=1000
# Maximum disk usage 10G
SystemMaxUse=10G
# But each log file at most 200M
SystemMaxFileSize=200M
# Keep logs for 2 weeks
MaxRetentionSec=2week
# Do not forward logs to syslog
ForwardToSyslog=no
EOF
systemctl restart systemd-journald
All three hosts: upgrade the kernel to 5.4
The stock 3.10.x kernel of CentOS 7.x has bugs that make Docker and Kubernetes unstable, so upgrade it.
# Check the current kernel version
uname -r
rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
# After installation, check that the corresponding kernel menuentry in /boot/grub2/grub.cfg contains the initrd16 setting; if not, install again!
yum --enablerepo=elrepo-kernel install -y kernel-lt
# After checking the installed kernel version above, set the new kernel as the default boot entry (use the entry matching your installed version)
grub2-set-default "CentOS Linux (5.4.195-1.el7.elrepo.x86_64) 7 (Core)"
All three hosts: prerequisites for enabling ipvs in kube-proxy
# ============================================================== kernel 4.4
modprobe br_netfilter
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
# ============================================================== kernel 5.4
# 1. Install ipset and ipvsadm
yum install ipset ipvsadm -y
# 2. Write the modules to be loaded into a script file
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack
EOF
# 3. Make the script executable
chmod +x /etc/sysconfig/modules/ipvs.modules
# 4. Run the script
cd /etc/sysconfig/modules/
./ipvs.modules
# 5. Check that the modules loaded successfully
lsmod | grep -e ip_vs -e nf_conntrack
All three hosts: install Docker
Docker installation reference: https://blog.csdn.net/DDJ_TEST/article/details/114983746
# Set up the repository
# Install the yum-utils package (which provides the yum-config-manager utility) and set up the stable repository.
$ sudo yum install -y yum-utils
$ sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
# Install the specified versions of Docker Engine and containerd
$ sudo yum install -y docker-ce-18.09.9 docker-ce-cli-18.09.9 containerd.io
Start docker
$ sudo systemctl start docker
Verify that Docker Engine is installed correctly by running the hello-world image.
$ sudo docker run hello-world
Enable docker to start on boot
$ sudo systemctl enable docker
Stop the docker service
$ sudo systemctl stop docker
Restart the docker service
$ sudo systemctl restart docker
Verify
[root@k8s-master01 ~]# docker version
Client:
Version: 18.09.9
API version: 1.39
Go version: go1.11.13
Git commit: 039a7df9ba
Built: Wed Sep 4 16:51:21 2019
OS/Arch: linux/amd64
Experimental: false
Server: Docker Engine - Community
Engine:
Version: 18.09.9
API version: 1.39 (minimum version 1.12)
Go version: go1.11.13
Git commit: 039a7df
Built: Wed Sep 4 16:22:32 2019
OS/Arch: linux/amd64
Experimental: false
# Create the /etc/docker directory
mkdir /etc/docker
# Configure the daemon
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "insecure-registries": ["https://hub.atguigu.com"]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
# Restart the docker service
systemctl daemon-reload && systemctl restart docker && systemctl enable docker
All three hosts: install kubeadm (master/worker setup)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
yum -y install kubeadm-1.15.1 kubectl-1.15.1 kubelet-1.15.1
systemctl enable kubelet.service
Download kubeadm-basic.images.tar.gz and harbor-offline-installer-v2.3.2.tgz and upload them to the master node
tar -zxvf kubeadm-basic.images.tar.gz
vim load-images.sh — load the image tarballs into docker
#!/bin/bash
ls /root/kubeadm-basic.images > /tmp/image-list.txt
cd /root/kubeadm-basic.images
for i in $( cat /tmp/image-list.txt )
do
docker load -i $i
done
rm -rf /tmp/image-list.txt
chmod a+x load-images.sh
./load-images.sh
Copy the extracted files to node01 and node02
scp -r kubeadm-basic.images load-images.sh root@k8s-node01:/root
scp -r kubeadm-basic.images load-images.sh root@k8s-node02:/root
Run the load-images.sh script on node01 and node02
./load-images.sh
Initialize the master node
# Generate a template file
kubeadm config print init-defaults > kubeadm-config.yaml
vim kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.192.131 # change to the master host's IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.1 # change to the installed version
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16" # add the default podSubnet range
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs
kubeadm init --config=kubeadm-config.yaml --experimental-upload-certs | tee kubeadm-init.log
If you hit an error, see troubleshooting issues 3 and 4.
View the log file
[root@k8s-master01 ~]# vim kubeadm-init.log
[init] Using Kubernetes version: v1.15.1
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.192.131 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [192.168.192.131 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.192.131]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.004030 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
ecaa903ab475ec8d361a7a844feb3973b437a6e36981be7d949dccda63c15d00
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.192.131:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:3236bf910c84de4e1f5ad24b1b627771602d5bad03e7819aad18805c440fd8aa
Run the commands above:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the k8s nodes
[root@k8s-master01 ~]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 NotReady master 19m v1.15.1
Join the remaining worker nodes to the master
Run the join command from the installation log on each worker.
Deploy the network
Do not use this command:
kubectl apply -f https://github.com/WeiboGe2012/kube-flannel.yml/blob/master/kube-flannel.yml
Run the following commands instead:
[root@k8s-master01 ~]# mkdir -p install-k8s/core
[root@k8s-master01 ~]# mv kubeadm-init.log kubeadm-config.yaml install-k8s/core
[root@k8s-master01 ~]# cd install-k8s/
[root@k8s-master01 install-k8s]# mkdir -p plugin/flannel
[root@k8s-master01 install-k8s]# cd plugin/flannel
[root@k8s-master01 flannel]# wget https://github.com/WeiboGe2012/kube-flannel.yml/blob/master/kube-flannel.yml
[root@k8s-master01 flannel]# kubectl create -f kube-flannel.yml
[root@k8s-master01 flannel]# kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-4kj2t 1/1 Running 0 92m
coredns-5c98db65d4-7zsr7 1/1 Running 0 92m
etcd-k8s-master01 1/1 Running 0 91m
kube-apiserver-k8s-master01 1/1 Running 0 91m
kube-controller-manager-k8s-master01 1/1 Running 0 91m
kube-flannel-ds-amd64-g4gh9 1/1 Running 0 18m
kube-proxy-t7v46 1/1 Running 0 92m
kube-scheduler-k8s-master01 1/1 Running 0 91m
[root@k8s-master01 flannel]# kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-master01 Ready master 93m v1.15.1
[root@k8s-master01 flannel]# ifconfig
cni0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.0.1 netmask 255.255.255.0 broadcast 10.244.0.255
ether c6:13:60:e7:e8:21 txqueuelen 1000 (Ethernet)
RX packets 4809 bytes 329578 (321.8 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 4854 bytes 1485513 (1.4 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 0.0.0.0
ether 02:42:71:d8:f1:e2 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
ens33: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet 192.168.192.131 netmask 255.255.255.0 broadcast 192.168.192.255
ether 00:0c:29:8c:51:ba txqueuelen 1000 (Ethernet)
RX packets 536379 bytes 581462942 (554.5 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 1359677 bytes 1764989232 (1.6 GiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
inet 10.244.0.0 netmask 255.255.255.255 broadcast 0.0.0.0
ether 16:0c:14:08:a6:51 txqueuelen 0 (Ethernet)
RX packets 0 bytes 0 (0.0 B)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 0 bytes 0 (0.0 B)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
lo: flags=73<UP,LOOPBACK,RUNNING> mtu 65536
inet 127.0.0.1 netmask 255.0.0.0
loop txqueuelen 1000 (Local Loopback)
RX packets 625548 bytes 102038881 (97.3 MiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 625548 bytes 102038881 (97.3 MiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
veth350261c9: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
ether f2:95:97:91:06:00 txqueuelen 0 (Ethernet)
RX packets 2400 bytes 198077 (193.4 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2424 bytes 741548 (724.1 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
vethd9ac2bc1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST> mtu 1450
ether 16:5c:e0:81:25:ba txqueuelen 0 (Ethernet)
RX packets 2409 bytes 198827 (194.1 KiB)
RX errors 0 dropped 0 overruns 0 frame 0
TX packets 2435 bytes 744163 (726.7 KiB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0