Packaging and Deploying the KubeSphere Container Platform with Sealos
sealos is a cloud operating system with Kubernetes as its kernel. It lets you freely compose distributed applications, and makes it easy to customize, package, and quickly deploy any highly available distributed application on Kubernetes. Prerequisites for deploying KubeSphere:
- Kubernetes version: 1.20.x, 1.21.x, 1.22.x, or 1.23.x (experimental);
- a default StorageClass configured in the cluster.
Packaging workflow for the KubeSphere application image:
- download the KubeSphere image list
- download the KubeSphere deployment YAML files
- create a Dockerfile and build the image with sealos
Building the image on an Internet-connected machine
1. Prepare an Internet-connected machine (Ubuntu 22.04 LTS in this example) and install the sealos tool:
wget -c https://sealyun-home.oss-cn-beijing.aliyuncs.com/sealos-4.0/latest/sealos-amd64 -O sealos && \
  chmod +x sealos && mv sealos /usr/bin
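The command above hard-codes the amd64 binary. As a minimal sketch, the download URL can be selected from the machine's CPU architecture instead; the arm64 URL pattern is an assumption inferred from the amd64 one, not confirmed by the source:

```shell
# Hypothetical helper: map `uname -m` output to a sealos download URL.
# Only the amd64 URL is confirmed by the text above; arm64 is assumed.
arch=$(uname -m)
case "$arch" in
  x86_64)  sealos_arch=amd64 ;;
  aarch64) sealos_arch=arm64 ;;
  *) echo "unsupported arch: $arch" >&2; exit 1 ;;
esac
url="https://sealyun-home.oss-cn-beijing.aliyuncs.com/sealos-4.0/latest/sealos-${sealos_arch}"
echo "$url"
```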
2. Create the KubeSphere application image build directory (the layout follows the official sealos examples):
mkdir -p /root/kubesphere/v3.3.0/
cd /root/kubesphere/v3.3.0/
mkdir -p images/shim
mkdir manifests
3. Download the KubeSphere image list into the images/shim/ directory:
wget https://github.com/kubesphere/ks-installer/releases/download/v3.3.0/images-list.txt -P images/shim/
Strip the comment lines from the image list. The full KubeSphere list contains 153 images, so the build step later on will take quite a while.
cat images/shim/images-list.txt | grep -v "^#" > images/shim/KubesphereImageList
rm -rf images/shim/images-list.txt
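The filter above can be illustrated on a tiny made-up sample list (the real images-list.txt has 153 entries; the image names below are placeholders, not the actual list contents):

```shell
# Recreate the filter step in a scratch directory with a fake 5-line list:
# two "##..." comment lines and three image references.
work=$(mktemp -d) && cd "$work"
mkdir -p images/shim
cat > images/shim/images-list.txt <<'EOF'
##kubesphere-images
docker.io/kubesphere/ks-installer:v3.3.0
docker.io/kubesphere/ks-apiserver:v3.3.0
##monitoring-images
docker.io/kubesphere/ks-console:v3.3.0
EOF
# Same filter as above: drop comment lines, keep only image references.
grep -v "^#" images/shim/images-list.txt > images/shim/KubesphereImageList
rm -f images/shim/images-list.txt
grep -c . images/shim/KubesphereImageList   # prints 3
```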
4. Download the KubeSphere YAML files into the manifests directory:
wget https://raw.githubusercontent.com/kubesphere/ks-installer/v3.3.0/deploy/kubesphere-installer.yaml -P manifests/
wget https://raw.githubusercontent.com/kubesphere/ks-installer/v3.3.0/deploy/cluster-configuration.yaml -P manifests/
5. Create the Dockerfile:
cat > Dockerfile <<EOF
FROM scratch
COPY manifests ./manifests
COPY registry ./registry
CMD ["kubectl apply -f manifests/kubesphere-installer.yaml","kubectl apply -f manifests/cluster-configuration.yaml"]
EOF
EOF
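As a sketch, the same heredoc can be run in a scratch directory and sanity-checked. Note that the Dockerfile COPYs a ./registry directory that was never created by hand; the assumption here (worth verifying against the sealos docs) is that `sealos build` generates it from the image list in images/shim/ during the build:

```shell
# Write the Dockerfile from the tutorial into a throwaway directory and
# check its shape. The ./registry COPY target is expected to be produced
# by `sealos build` itself -- an assumption, not something done manually.
work=$(mktemp -d) && cd "$work"
cat > Dockerfile <<'EOF'
FROM scratch
COPY manifests ./manifests
COPY registry ./registry
CMD ["kubectl apply -f manifests/kubesphere-installer.yaml","kubectl apply -f manifests/cluster-configuration.yaml"]
EOF
grep -c "^COPY" Dockerfile   # prints 2
```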
The final directory structure looks like this:
root@ubuntu:~/kubesphere/v3.3.0# tree
.
├── Dockerfile
├── images
│   └── shim
│       └── KubesphereImageList
└── manifests
    ├── cluster-configuration.yaml
    └── kubesphere-installer.yaml

3 directories, 4 files
6. Build the KubeSphere application image with sealos:
sealos build -f Dockerfile -t docker.io/willdockerhub/kubesphere:v3.3.0 .
Check the built image:
root@ubuntu:~# sealos images
REPOSITORY                           TAG      IMAGE ID       CREATED        SIZE
docker.io/willdockerhub/kubesphere   v3.3.0   831b87502b54   18 hours ago   11.5 GB
Because the image is so large, an attempt to push it to Docker Hub failed:
sealos login docker.io -u xxx -p xxx
sealos push docker.io/willdockerhub/kubesphere:v3.3.0
Instead, push it to an Alibaba Cloud image registry (this is time-consuming, so proceed with care; it is better to use the image locally than to push it to a remote registry):
sealos login registry.cn-shenzhen.aliyuncs.com -u xxx -p xxx
sealos tag docker.io/willdockerhub/kubesphere:v3.3.0 registry.cn-shenzhen.aliyuncs.com/cnmirror/kubesphere:v3.3.0
sealos push registry.cn-shenzhen.aliyuncs.com/cnmirror/kubesphere:v3.3.0
Log in to the Alibaba Cloud image registry console to confirm the push succeeded.
7. Package the KubeSphere application image into a file:
sealos save -o kubesphere_v3.3.0.tar registry.cn-shenzhen.aliyuncs.com/cnmirror/kubesphere:v3.3.0
8. Download the other dependency images. The sealos project already publishes a large number of prebuilt application images on Docker Hub; if you don't need custom configuration parameters, simply pull and use them:
sealos pull labring/kubernetes:v1.22.11
sealos pull labring/calico:v3.22.1
sealos pull labring/openebs:v1.9.0
sealos save -o kubernetes_v1.22.11.tar labring/kubernetes:v1.22.11
sealos save -o calico_v3.22.1.tar labring/calico:v3.22.1
sealos save -o openebs_v1.9.0.tar labring/openebs:v1.9.0
Application image notes:
- labring/kubernetes:v1.22.11: the Kubernetes core components;
- labring/calico:v3.22.1: the Calico network plugin required by the Kubernetes cluster;
- labring/openebs:v1.9.0: KubeSphere depends on persistent storage; OpenEBS local volumes are used here.
9. Copy the packaged image files above, together with the sealos binary, to the offline environment. For an online deployment the save step is unnecessary.
Installing KubeSphere offline
The example uses three master nodes and one worker node running Ubuntu Server 22.04 LTS, each with 4 vCPUs, 16 GB RAM, and a 100 GB disk.
- All node hostnames must be configured in advance and must be unique.
- All of the following commands must be run on the first node of the cluster; deploying from a node outside the cluster is not currently supported.
1. Install sealos:
mv sealos /usr/bin
2. Load the image packages:
sealos load -i kubernetes_v1.22.11.tar
sealos load -i kubesphere_v3.3.0.tar
sealos load -i calico_v3.22.1.tar
sealos load -i openebs_v1.9.0.tar
3. Run the following command to deploy everything in one step. Note: each application component can also be deployed separately; for example, once the Kubernetes core components are up, you can run sealos run labring/calico:v3.22.1 and deploy the remaining applications in turn.
sealos run \
--masters 192.168.72.50,192.168.72.51,192.168.72.52 \
--nodes 192.168.72.53 -p 123456 \
labring/kubernetes:v1.22.11 \
labring/calico:v3.22.1 \
labring/openebs:v1.9.0 \
registry.cn-shenzhen.aliyuncs.com/cnmirror/kubesphere:v3.3.0
Watch the deployment log; the whole run takes about 15 minutes.
root@node01:~# sealos run \
--masters 192.168.72.50,192.168.72.51,192.168.72.52 \
--nodes 192.168.72.53 -p 123456 \
labring/kubernetes:v1.22.11 \
labring/calico:v3.22.1 \
labring/openebs:v1.9.0 \
registry.cn-shenzhen.aliyuncs.com/cnmirror/kubesphere:v3.3.0
2022-07-23 10:58:02 [INFO] Start to create a new cluster: master [192.168.72.50 192.168.72.51 192.168.72.52], worker [192.168.72.53]
2022-07-23 10:58:02 [INFO] Executing pipeline Check in CreateProcessor.
2022-07-23 10:58:02 [INFO] checker:hostname [192.168.72.50:22 192.168.72.51:22 192.168.72.52:22 192.168.72.53:22]
2022-07-23 10:58:05 [INFO] checker:timeSync [192.168.72.50:22 192.168.72.51:22 192.168.72.52:22 192.168.72.53:22]
2022-07-23 10:58:06 [INFO] Executing pipeline PreProcess in CreateProcessor.
Resolving "labring/kubernetes" using unqualified-search registries (/etc/containers/registries.conf)
Trying to pull docker.io/labring/kubernetes:v1.22.11...
Getting image source signatures
Copying blob 030fcef18dcb [--------------------------------------] 0.0b / 382.3MiB (skipped: 0.0b = 0.00%)
......
......
2022-07-23 11:14:44 [INFO] guest cmd is kubectl apply -f manifests/kubesphere-installer.yaml
customresourcedefinition.apiextensions.k8s.io/clusterconfigurations.installer.kubesphere.io created
namespace/kubesphere-system created
serviceaccount/ks-installer created
clusterrole.rbac.authorization.k8s.io/ks-installer created
clusterrolebinding.rbac.authorization.k8s.io/ks-installer created
deployment.apps/ks-installer created
2022-07-23 11:14:46 [INFO] guest cmd is kubectl apply -f manifests/cluster-configuration.yaml
clusterconfiguration.installer.kubesphere.io/ks-installer created
2022-07-23 11:14:48 [INFO] succeeded in creating a new cluster, enjoy it!
2022-07-23 11:14:48 [INFO]
___ ___ ___ ___ ___ ___
/\ \ /\ \ /\ \ /\__\ /\ \ /\ \
/::\ \ /::\ \ /::\ \ /:/ / /::\ \ /::\ \
/:/\ \ \ /:/\:\ \ /:/\:\ \ /:/ / /:/\:\ \ /:/\ \ \
_\:\~\ \ \ /::\~\:\ \ /::\~\:\ \ /:/ / /:/ \:\ \ _\:\~\ \ \
/\ \:\ \ \__\ /:/\:\ \:\__\ /:/\:\ \:\__\ /:/__/ /:/__/ \:\__\ /\ \:\ \ \__\
\:\ \:\ \/__/ \:\~\:\ \/__/ \/__\:\/:/ / \:\ \ \:\ \ /:/ / \:\ \:\ \/__/
\:\ \:\__\ \:\ \:\__\ \::/ / \:\ \ \:\ /:/ / \:\ \:\__\
\:\/:/ / \:\ \/__/ /:/ / \:\ \ \:\/:/ / \:\/:/ /
\::/ / \:\__\ /:/ / \:\__\ \::/ / \::/ /
\/__/ \/__/ \/__/ \/__/ \/__/ \/__/
Website :https://www.sealos.io/
Address :github.com/labring/sealos
4. Check the Kubernetes cluster status:
root@node01:~# kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
node01 Ready control-plane,master 6m19s v1.22.11 192.168.72.50 <none> Ubuntu 22.04 LTS 5.15.0-41-generic containerd://1.6.2
node02 Ready control-plane,master 5m12s v1.22.11 192.168.72.51 <none> Ubuntu 22.04 LTS 5.15.0-41-generic containerd://1.6.2
node03 Ready control-plane,master 4m16s v1.22.11 192.168.72.52 <none> Ubuntu 22.04 LTS 5.15.0-41-generic containerd://1.6.2
node04 Ready <none> 3m59s v1.22.11 192.168.72.53 <none> Ubuntu 22.04 LTS 5.15.0-41-generic containerd://1.6.2
5. Check the KubeSphere pod status:
root@node01:~# kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
calico-system calico-kube-controllers-57fbd7bd59-7chg8 1/1 Running 0 11m
calico-system calico-node-d22gx 1/1 Running 0 11m
calico-system calico-node-pzkqn 1/1 Running 0 11m
calico-system calico-node-s5blv 1/1 Running 0 11m
calico-system calico-node-xdb5q 1/1 Running 0 11m
calico-system calico-typha-d5c88bd64-gj7p6 1/1 Running 0 11m
calico-system calico-typha-d5c88bd64-wgwq7 1/1 Running 0 11m
kube-system coredns-78fcd69978-hhkz5 1/1 Running 0 13m
kube-system coredns-78fcd69978-j5tc9 1/1 Running 0 13m
kube-system etcd-node01 1/1 Running 0 14m
kube-system etcd-node02 1/1 Running 0 12m
kube-system etcd-node03 1/1 Running 0 11m
kube-system kube-apiserver-node01 1/1 Running 0 13m
kube-system kube-apiserver-node02 1/1 Running 0 12m
kube-system kube-apiserver-node03 1/1 Running 0 11m
kube-system kube-controller-manager-node01 1/1 Running 1 (12m ago) 14m
kube-system kube-controller-manager-node02 1/1 Running 0 12m
kube-system kube-controller-manager-node03 1/1 Running 0 11m
kube-system kube-proxy-dfpph 1/1 Running 0 11m
kube-system kube-proxy-f2spn 1/1 Running 0 12m
kube-system kube-proxy-fp48n 1/1 Running 0 13m
kube-system kube-proxy-q6gt7 1/1 Running 0 11m
kube-system kube-scheduler-node01 1/1 Running 1 (12m ago) 13m
kube-system kube-scheduler-node02 1/1 Running 0 12m
kube-system kube-scheduler-node03 1/1 Running 0 11m
kube-system kube-sealyun-lvscare-node04 1/1 Running 0 11m
kube-system snapshot-controller-0 1/1 Running 0 8m45s
kubesphere-controls-system default-http-backend-5bf68ff9b8-lf6p8 1/1 Running 0 7m4s
kubesphere-controls-system kubectl-admin-6dbcb94855-j6tct 1/1 Running 0 39s
kubesphere-monitoring-system alertmanager-main-0 2/2 Running 0 3m36s
kubesphere-monitoring-system alertmanager-main-1 2/2 Running 0 3m36s
kubesphere-monitoring-system alertmanager-main-2 2/2 Running 0 3m36s
kubesphere-monitoring-system kube-state-metrics-7bdc7484cf-qjv2q 3/3 Running 0 3m47s
kubesphere-monitoring-system node-exporter-2d5qq 2/2 Running 0 3m50s
kubesphere-monitoring-system node-exporter-42fsd 2/2 Running 0 3m50s
kubesphere-monitoring-system node-exporter-8t5p7 2/2 Running 0 3m50s
kubesphere-monitoring-system node-exporter-lhtzb 2/2 Running 0 3m49s
kubesphere-monitoring-system notification-manager-deployment-78664576cb-qv8kw 2/2 Running 0 2m44s
kubesphere-monitoring-system notification-manager-deployment-78664576cb-wxvmc 2/2 Running 0 2m44s
kubesphere-monitoring-system notification-manager-operator-7d44854f54-9d2vs 2/2 Running 0 3m7s
kubesphere-monitoring-system prometheus-k8s-0 2/2 Running 0 3m43s
kubesphere-monitoring-system prometheus-k8s-1 2/2 Running 0 3m43s
kubesphere-monitoring-system prometheus-operator-8955bbd98-vx7k2 2/2 Running 0 3m51s
kubesphere-system ks-apiserver-7f4c99bb7-d8qqk 1/1 Running 1 (35s ago) 7m4s
kubesphere-system ks-apiserver-7f4c99bb7-kdcwd 1/1 Running 0 7m4s
kubesphere-system ks-apiserver-7f4c99bb7-rgs24 1/1 Running 0 7m4s
kubesphere-system ks-console-54bd5bcbc6-92w5x 1/1 Running 0 7m4s
kubesphere-system ks-console-54bd5bcbc6-m2f4x 1/1 Running 0 7m4s
kubesphere-system ks-console-54bd5bcbc6-zvj5k 1/1 Running 0 7m4s
kubesphere-system ks-controller-manager-8f6b985c5-glzfw 1/1 Running 0 7m4s
kubesphere-system ks-controller-manager-8f6b985c5-l6j5n 1/1 Running 0 7m4s
kubesphere-system ks-controller-manager-8f6b985c5-zs945 1/1 Running 0 7m4s
kubesphere-system ks-installer-6976cf49f5-4r94t 1/1 Running 0 11m
kubesphere-system redis-d744b7468-c5pqc 1/1 Running 0 8m35s
openebs openebs-localpv-provisioner-5c759b49cf-qrdwz 1/1 Running 0 11m
openebs openebs-ndm-cluster-exporter-7868564dcc-l9q4r 1/1 Running 0 11m
openebs openebs-ndm-node-exporter-mlgsb 1/1 Running 0 9m56s
openebs openebs-ndm-operator-84f74f957b-cp8tv 1/1 Running 0 11m
tigera-operator tigera-operator-d499f5c8f-575x2 1/1 Running 0 11m
Logging in to the KubeSphere container platform
The console is exposed via a NodePort service by default.
Console URL: http://192.168.72.50:30880/
Username: admin
Password: P@88w0rd
After logging in, the console looks like this:
KubeSphere has a pluggable design, and additional features can be enabled by editing its configuration. Taking the DevOps component as an example, edit the YAML of the ClusterConfiguration resource, find the devops enabled field, and change it to true.
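The relevant fragment of the ClusterConfiguration looks roughly like the sketch below; the field names follow KubeSphere v3.3.0's cluster-configuration.yaml, with the surrounding fields trimmed for brevity:

```yaml
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
spec:
  devops:
    enabled: true   # default is false; set to true to enable the DevOps component
```

One way to make this change is kubectl edit clusterconfiguration ks-installer -n kubesphere-system, after which ks-installer reconciles the cluster to match the new configuration.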
Once the component is running normally, it looks like this: