Harbor Registry (also known as the Harbor cloud native artifact registry or Harbor image registry) was open-sourced in March 2016 by VMware's Cloud Native Labs at the company's China R&D center. Building on Docker Registry, Harbor adds the capabilities enterprise users need, such as access control, image signing, security vulnerability scanning, and remote replication, along with a graphical management UI and Chinese-language support for users in China. After being open-sourced it quickly gained popularity in the developer and user community and has become a mainstream container image registry for cloud native users.
This is the last article in the TKGm component integration series. The components integrated in this series are delivered as Tanzu packages that are signed, released, deployed, and supported by VMware.
TKGm reaches production readiness through component integration. Some components can be shared by multiple clusters in production, so they do not need to be deployed into every single TKGm cluster. From the perspective of cluster topology planning, TKGm distinguishes three cluster roles: the management cluster, workload clusters, and the shared services cluster. As discussed repeatedly in the previous articles, the management cluster manages the lifecycle of the workload clusters, and the workload clusters run the actual workloads.
The shared services cluster runs the shared components of the integration, such as Elasticsearch and Kibana from the EFK logging stack and the Harbor container image registry; any other peripheral component that can be shared across clusters can also be deployed there.
The figure below shows the three cluster roles in a production cluster plan and where the integrated components are deployed.
As an important part of a container cloud platform, a shared services cluster is recommended in the cluster plan. Through Tanzu Packages, TKGm integrates Harbor, the industry's most popular registry, as its standard image registry. Harbor deployment, upgrade, and removal are deeply integrated with TKGm, giving Harbor standardized lifecycle management; TKGm automatically generates a self-signed certificate and injects it into the workload clusters.
In July 2018, VMware donated Harbor to CNCF, making it the first project originating in China to become a community-maintained CNCF project. After joining CNCF, Harbor drew many partners, users, and developers from the global cloud native community; thousands of users deploy Harbor in production systems and contribute to the project, and Harbor is downloaded more than 30,000 times per month. In June 2020, Harbor became the first China-originated project to graduate from CNCF.
According to CNCF's 2020 China cloud native survey, 47% of domestic respondents use Harbor in production systems, roughly a 75% increase over the 27% reported in the survey a year earlier.
Harbor aims to provide secure and reliable management of cloud native artifacts, supporting image signing and content scanning to keep artifact management compliant, efficient, and manageable. Harbor's capabilities fall into four groups: multi-tenant control (role-based access control and project isolation); image management policies (storage quotas, artifact retention, vulnerability scanning, artifact signing, immutable artifacts, garbage collection, etc.); security and compliance (identity authentication, scanning, CVE allowlist rules, etc.); and extensibility (webhooks, remote content replication, pluggable scanners, a REST API, robot accounts, etc.).
Cloud native registry: Harbor supports both container images and Helm charts, serving cloud native environments such as container runtime and orchestration platforms.
Role-based access control: users access repositories through projects and can be granted different permissions on the images or Helm charts under a project.
Policy-based replication: images and charts can be replicated (synchronized) between registry instances based on policies with filters (repository, tag, and label). Harbor automatically retries replication if any error occurs. Replication can be used for load balancing and high availability, and facilitates multi-datacenter deployments in hybrid and multi-cloud scenarios.
Vulnerability scanning: Harbor scans images regularly for vulnerabilities and can enforce policy checks that prevent vulnerable images from being deployed.
LDAP/AD support: Harbor integrates with existing enterprise LDAP/AD for user authentication and management, and supports importing LDAP groups into Harbor so they can be granted permissions on specific projects.
OIDC support: Harbor uses OpenID Connect (OIDC) to verify the identity of users authenticated by an external authorization server or identity provider, so single sign-on can be used to log in to the Harbor portal.
Garbage collection: system administrators can run garbage collection jobs to delete images (dangling manifests and unreferenced blobs) and periodically free up their space.
Image signing: container images can be signed using Docker Content Trust (backed by Notary) to ensure authenticity and provenance. A policy that prevents deployment of unsigned images can also be enabled.
Graphical portal: users can easily browse and search repositories and manage projects.
Auditing: all operations on repositories are tracked through logs.
RESTful API: a RESTful API is provided to ease administrative operations and integration with external systems. An embedded Swagger UI is available for exploring and testing the API.
Easy deployment: Harbor can be deployed with Docker Compose, a Helm chart, or Harbor Operator, and is also supported as a package through the VMware Tanzu package manager.
Test Scope
Test Topology
Note: for ease of testing and understanding, the management, business, and node networks all share the same network, mgmt (they should be separated in a production deployment).
Test Steps
1. Deploy the shared services cluster
Deploying the shared services cluster follows the same steps as deploying a workload cluster; see my earlier article Tanzu学习系列之TKGm 1.4 for vSphere 快速部署. After the cluster is created successfully it needs to be labeled.
1. After the cluster is deployed, run the command below to list the current clusters.
[tkgm-admin@tkgm|default] [root@bootstrap ~]# tanzu cluster list
  NAME           NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES   PLAN
  share-cluster  default    running  1/1           3/3      v1.21.2+vmware.1  <none>  prod
  workload01     default    running  1/1           3/3      v1.21.2+vmware.1  <none>  prod
2. Switch to the management cluster and label the cluster share-cluster with the shared services role tanzu-services, then verify with the tanzu command:
[tkgm-admin@tkgm|default] [root@bootstrap ~]# kubectl label cluster.cluster.x-k8s.io/share-cluster cluster-role.tkg.tanzu.vmware.com/tanzu-services="" --overwrite=true
[tkgm-admin@tkgm|default] [root@bootstrap ~]# tanzu cluster list --include-management-cluster
  NAME           NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES           PLAN
  share-cluster  default     running  1/1           3/3      v1.21.2+vmware.1  tanzu-services  prod
  workload01     default     running  1/1           3/3      v1.21.2+vmware.1  <none>          prod
  tkgm           tkg-system  running  1/1           1/1      v1.21.2+vmware.1  management      dev
2. Deploy Harbor through Tanzu Packages
Deploy the Cert Manager, Contour, and (optionally) ExternalDNS packages through Tanzu Packages.
See my earlier article Tanzu学习系列之TKGm 1.4 for vSphere 组件集成(一) for details.
1. List the currently available packages:
[share-cluster-admin@share-cluster|default] [root@bootstrap ~]# tanzu package available list -A
\ Retrieving available packages...
NAME DISPLAY-NAME SHORT-DESCRIPTION NAMESPACE
cert-manager.tanzu.vmware.com cert-manager Certificate management tanzu-package-repo-global
contour.tanzu.vmware.com Contour An ingress controller tanzu-package-repo-global
external-dns.tanzu.vmware.com external-dns This package provides DNS synchronization functionality. tanzu-package-repo-global
fluent-bit.tanzu.vmware.com fluent-bit Fluent Bit is a fast Log Processor and Forwarder tanzu-package-repo-global
grafana.tanzu.vmware.com grafana Visualization and analytics software tanzu-package-repo-global
harbor.tanzu.vmware.com Harbor OCI Registry tanzu-package-repo-global
multus-cni.tanzu.vmware.com multus-cni This package provides the ability for enabling attaching multiple network interfaces to pods in Kubernetes tanzu-package-repo-global
prometheus.tanzu.vmware.com prometheus A time series database for your metrics tanzu-package-repo-global
addons-manager.tanzu.vmware.com tanzu-addons-manager This package provides TKG addons lifecycle management capabilities. tkg-system
ako-operator.tanzu.vmware.com ako-operator NSX Advanced Load Balancer using ako-operator tkg-system
antrea.tanzu.vmware.com antrea networking and network security solution for containers tkg-system
calico.tanzu.vmware.com calico Networking and network security solution for containers. tkg-system
kapp-controller.tanzu.vmware.com kapp-controller Kubernetes package manager tkg-system
load-balancer-and-ingress-service.tanzu.vmware.com load-balancer-and-ingress-service Provides L4+L7 load balancing for TKG clusters running on vSphere tkg-system
metrics-server.tanzu.vmware.com metrics-server Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines. tkg-system
pinniped.tanzu.vmware.com pinniped Pinniped provides identity services to Kubernetes. tkg-system
vsphere-cpi.tanzu.vmware.com vsphere-cpi The Cluster API brings declarative, Kubernetes-style APIs to cluster creation, configuration and management. Cluster API Provider for vSphere is a concrete implementation of Cluster API for vSphere. tkg-system
vsphere-csi.tanzu.vmware.com vsphere-csi vSphere CSI provider tkg-system
2. List the currently installed packages and confirm that cert-manager, contour, and external-dns are already deployed:
[share-cluster-admin@share-cluster|default] [root@bootstrap clusterconfigs]# tanzu package installed list -A
\ Retrieving installed packages...
NAME PACKAGE-NAME PACKAGE-VERSION STATUS NAMESPACE
cert-manager cert-manager.tanzu.vmware.com 1.1.0+vmware.2-tkg.1 Reconcile succeeded cert-manager
contour contour.tanzu.vmware.com 1.17.2+vmware.1-tkg.2 Reconcile succeeded default
external-dns external-dns.tanzu.vmware.com 0.8.0+vmware.1-tkg.1 Reconcile succeeded default
antrea antrea.tanzu.vmware.com 0.13.3+vmware.1-tkg.1 Reconcile succeeded tkg-system
load-balancer-and-ingress-service load-balancer-and-ingress-service.tanzu.vmware.com 1.4.3+vmware.1-tkg.1 Reconcile succeeded tkg-system
metrics-server metrics-server.tanzu.vmware.com 0.4.0+vmware.1-tkg.1 Reconcile succeeded tkg-system
vsphere-cpi vsphere-cpi.tanzu.vmware.com 1.21.0+vmware.1-tkg.1 Reconcile succeeded tkg-system
vsphere-csi vsphere-csi.tanzu.vmware.com 2.3.0+vmware.1-tkg.3 Reconcile succeeded tkg-system
[share-cluster-admin@share-cluster|default] [root@bootstrap clusterconfigs]#
3. Determine the Harbor software version provided by the package:
[share-cluster-admin@share-cluster|default] [root@bootstrap ~]# tanzu package available list harbor.tanzu.vmware.com -A
| Retrieving package versions for harbor.tanzu.vmware.com...
NAME VERSION RELEASED-AT NAMESPACE
harbor.tanzu.vmware.com 2.2.3+vmware.1-tkg.2 2021-07-08 02:00:00 +0800 CST tanzu-package-repo-global
4. Create the harbor-data-values.yaml parameter file: export it from the package template, then copy it to a new file.
The method for exporting package templates is described in detail in my earlier article Tanzu学习系列之TKGm 1.4 for vSphere 组件集成(二).
[share-cluster-admin@share-cluster|default] [root@bootstrap ~]# image_url=$(kubectl -n tanzu-package-repo-global get packages harbor.tanzu.vmware.com.2.2.3+vmware.1-tkg.2 -o jsonpath='{.spec.template.spec.fetch[0].imgpkgBundle.image}')
[share-cluster-admin@share-cluster|default] [root@bootstrap ~]# imgpkg pull -b $image_url -o /tmp/harbor-package-2.2.3+vmware.1-tkg.2
Pulling bundle 'projects.registry.vmware.com/tkg/packages/standard/harbor@sha256:ab53d81c62bc85b06bbe5c0b8f5a404e98b0bc554606d387896fd3b673224fb4'
Extracting layer 'sha256:c6a9bd33f109e3d8c98a0226809e0655ffe14d9208c860a89ab2b8b2f997a7cf' (1/1)
Locating image lock file images...
One or more images not found in bundle repo; skipping lock file update
Succeeded
[share-cluster-admin@share-cluster|default] [root@bootstrap ~]# cp /tmp/harbor-package-2.2.3+vmware.1-tkg.2/config/values.yaml harbor-data-values.yaml
The parameter file is as follows:
#@data/values
#@overlay/match-child-defaults missing_ok=True
---
#! The namespace to install Harbor
namespace: tanzu-system-registry
#! The FQDN for accessing Harbor admin UI and Registry service.
hostname: harbor.yourdomain.com
#! The network port of the Envoy service in Contour or other Ingress Controller.
port:
  https: 443
#! The log level of core, exporter, jobservice, registry. Its value is debug, info, warning, error or fatal.
logLevel: info
#! [Optional] The certificate for the ingress if you want to use your own TLS certificate.
#! We will issue the certificate by cert-manager when it's empty.
tlsCertificate:
  #! [Required] the certificate
  tls.crt:
  #! [Required] the private key
  tls.key:
  #! [Optional] the certificate of CA, this enables the download
  #! link on portal to download the certificate of CA
  ca.crt:
#! Use contour http proxy instead of the ingress when it's true
enableContourHttpProxy: true
#! [Required] The initial password of Harbor admin.
harborAdminPassword:
#! [Required] The secret key used for encryption. Must be a string of 16 chars.
secretKey:
database:
  #! [Required] The initial password of the postgres database.
  password:
core:
  replicas: 1
  #! [Required] Secret is used when core server communicates with other components.
  secret:
  #! [Required] The XSRF key. Must be a string of 32 chars.
  xsrfKey:
jobservice:
  replicas: 1
  #! [Required] Secret is used when job service communicates with other components.
  secret:
registry:
  replicas: 1
  #! [Required] Secret is used to secure the upload state from client
  #! and registry storage backend.
  #! See: https://github.com/docker/distribution/blob/master/docs/configuration.md#http
  secret:
notary:
  #! Whether to install Notary
  enabled: true
trivy:
  #! enabled the flag to enable Trivy scanner
  enabled: true
  replicas: 1
  #! gitHubToken the GitHub access token to download Trivy DB
  gitHubToken: ""
  #! skipUpdate the flag to disable Trivy DB downloads from GitHub
  #!
  #! You might want to set the value of this flag to `true` in test or CI/CD environments to avoid GitHub rate limiting issues.
  #! If the value is set to `true` you have to manually download the `trivy.db` file and mount it in the
  #! `/home/scanner/.cache/trivy/db/trivy.db` path.
  skipUpdate: false
#! The persistence is always enabled and a default StorageClass
#! is needed in the k8s cluster to provision volumes dynamically.
#! Specify another StorageClass in the "storageClass" or set "existingClaim"
#! if you have already existing persistent volumes to use
#!
#! For storing images and charts, you can also use "azure", "gcs", "s3",
#! "swift" or "oss". Set it in the "imageChartStorage" section
persistence:
  persistentVolumeClaim:
    registry:
      #! Use the existing PVC which must be created manually before bound,
      #! and specify the "subPath" if the PVC is shared with other components
      existingClaim: ""
      #! Specify the "storageClass" used to provision the volume. Or the default
      #! StorageClass will be used(the default).
      #! Set it to "-" to disable dynamic provisioning
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 10Gi
    jobservice:
      existingClaim: ""
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    database:
      existingClaim: ""
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    redis:
      existingClaim: ""
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    trivy:
      existingClaim: ""
      storageClass: ""
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
  #! Define which storage backend is used for registry and chartmuseum to store
  #! images and charts. Refer to
  #! https://github.com/docker/distribution/blob/master/docs/configuration.md#storage
  #! for the detail.
  imageChartStorage:
    #! Specify whether to disable `redirect` for images and chart storage, for
    #! backends which not supported it (such as using minio for `s3` storage type), please disable
    #! it. To disable redirects, simply set `disableredirect` to `true` instead.
    #! Refer to
    #! https://github.com/docker/distribution/blob/master/docs/configuration.md#redirect
    #! for the detail.
    disableredirect: false
    #! Specify the "caBundleSecretName" if the storage service uses a self-signed certificate.
    #! The secret must contain keys named "ca.crt" which will be injected into the trust store
    #! of registry's and chartmuseum's containers.
    #! caBundleSecretName:
    #! Specify the type of storage: "filesystem", "azure", "gcs", "s3", "swift",
    #! "oss" and fill the information needed in the corresponding section. The type
    #! must be "filesystem" if you want to use persistent volumes for registry
    #! and chartmuseum
    type: filesystem
    filesystem:
      rootdirectory: /storage
      #maxthreads: 100
    azure:
      accountname: accountname #! required
      accountkey: base64encodedaccountkey #! required
      container: containername #! required
      realm: core.windows.net #! optional
    gcs:
      bucket: bucketname #! required
      #! The base64 encoded json file which contains the key
      encodedkey: base64-encoded-json-key-file #! optional
      rootdirectory: null #! optional
      chunksize: 5242880 #! optional
    s3:
      region: us-west-1 #! required
      bucket: bucketname #! required
      accesskey: null #! eg, awsaccesskey
      secretkey: null #! eg, awssecretkey
      regionendpoint: null #! optional, eg, http://myobjects.local
      encrypt: false #! optional
      keyid: null #! eg, mykeyid
      secure: true #! optional
      v4auth: true #! optional
      chunksize: null #! optional
      rootdirectory: null #! optional
      storageclass: STANDARD #! optional
    swift:
      authurl: https://storage.myprovider.com/v3/auth
      username: username
      password: password
      container: containername
      region: null #! eg, fr
      tenant: null #! eg, tenantname
      tenantid: null #! eg, tenantid
      domain: null #! eg, domainname
      domainid: null #! eg, domainid
      trustid: null #! eg, trustid
      insecureskipverify: null #! bool eg, false
      chunksize: null #! eg, 5M
      prefix: null #! eg
      secretkey: null #! eg, secretkey
      accesskey: null #! eg, accesskey
      authversion: null #! eg, 3
      endpointtype: null #! eg, public
      tempurlcontainerkey: null #! eg, false
      tempurlmethods: null #! eg
    oss:
      accesskeyid: accesskeyid
      accesskeysecret: accesskeysecret
      region: regionname
      bucket: bucketname
      endpoint: null #! eg, endpoint
      internal: null #! eg, false
      encrypt: null #! eg, false
      secure: null #! eg, true
      chunksize: null #! eg, 10M
      rootdirectory: null #! eg, rootdirectory
#! The http/https network proxy for core, exporter, jobservice, trivy
proxy:
  httpProxy:
  httpsProxy:
  noProxy: 127.0.0.1,localhost,.local,.internal
#! The PSP names used by Harbor pods. The names are separated by ','. 'null' means all PSP can be used.
pspNames: null
#! The metrics used by core, registry and exporter
metrics:
  enabled: false
  core:
    path: /metrics
    port: 8001
  registry:
    path: /metrics
    port: 8001
  exporter:
    path: /metrics
    port: 8001
5. Set passwords for the following parameters in the values file; they can be generated in bulk with generate-passwords.sh.
Note: generate-passwords.sh is located under the directory where the package template was exported.
harborAdminPassword
secretKey
database.password
core.secret
core.xsrfKey
jobservice.secret
registry.secret
[share-cluster-admin@share-cluster|default] [root@bootstrap ~]# bash /tmp/harbor-package-2.2.3+vmware.1-tkg.2/config/scripts/generate-passwords.sh harbor-data-values.yaml
Successfully generated random passwords and secrets in harbor-data-values.yaml
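As an aside, equivalent secrets can be produced by hand with openssl if generate-passwords.sh is not available. The sketch below is mine, not part of the Harbor package; the gen helper and variable names are illustrative, and the length requirements (16 characters for secretKey, 32 characters for core.xsrfKey) come from the comments in the values file:

```shell
# Sketch: generate random alphanumeric secrets of the lengths Harbor expects.
# The gen helper is illustrative, not part of the Harbor package scripts.
gen() { openssl rand -base64 64 | tr -dc 'a-zA-Z0-9' | head -c "$1"; }

HARBOR_ADMIN_PASSWORD=$(gen 16)   # harborAdminPassword
SECRET_KEY=$(gen 16)              # secretKey: must be exactly 16 chars
XSRF_KEY=$(gen 32)                # core.xsrfKey: must be exactly 32 chars
CORE_SECRET=$(gen 16)             # core.secret
echo "secretKey=$SECRET_KEY"
echo "xsrfKey=$XSRF_KEY"
```

The generated values are then pasted into harbor-data-values.yaml just as generate-passwords.sh would do.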
6. Deploy Harbor as a Tanzu package. The cert-manager package can issue a self-signed certificate for HTTPS access, exposed through a ContourHttpProxy-type ingress.
Note: to use your own certificates, update the tls.crt, tls.key, and ca.crt settings in harbor-data-values.yaml with the contents of your certificate, key, and CA certificate. The certificate can be signed by a trusted authority or self-signed. If you leave these blank, Tanzu Kubernetes Grid automatically generates a self-signed certificate.
#! The certificate for the ingress if you want to use your own TLS certificate.
#! We will issue the certificate by cert-manager when it's empty.
tlsCertificate:
  #! [Required] the certificate
  tls.crt:          #! (leave empty)
  #! [Required] the private key
  tls.key:          #! (leave empty)
  #! [Optional] the certificate of CA, this enables the download
  #! link on portal to download the certificate of CA
  ca.crt:           #! (leave empty)
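For the bring-your-own-certificate path, a throwaway CA and server certificate can be produced with openssl. The sketch below is only an illustration: the FQDNs harbor.corp.local and notary.harbor.corp.local match the lab domain used later in this article, and the resulting tls.crt, tls.key, and ca.crt files hold the values that would be pasted into the settings above:

```shell
# Sketch: create a self-signed CA, then a Harbor server certificate signed by
# it. Subject names and FQDNs are illustrative; adjust to your own domain.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -subj "/O=Lab/CN=Harbor CA" -keyout ca.key -out ca.crt

# Server key and CSR with SANs for both the registry and the notary endpoint.
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=harbor.corp.local" \
  -addext "subjectAltName=DNS:harbor.corp.local,DNS:notary.harbor.corp.local" \
  -keyout tls.key -out tls.csr

# Sign the CSR with the CA, carrying the SANs into the final certificate.
printf 'subjectAltName=DNS:harbor.corp.local,DNS:notary.harbor.corp.local\n' > san.ext
openssl x509 -req -in tls.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
  -days 825 -extfile san.ext -out tls.crt
```

The contents of tls.crt, tls.key, and ca.crt would then go into the corresponding keys of harbor-data-values.yaml.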
7. In harbor-data-values.yaml, set Harbor's hostname (the record can be pushed to the DNS server automatically by ExternalDNS; see Tanzu学习系列之TKGm 1.4 for vSphere 组件集成(一) for ExternalDNS details) and the storageClass.
Check the storageClass used by the current cluster:
[share-cluster-admin@share-cluster|default] [root@bootstrap clusterconfigs]# k get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
default (default) csi.vsphere.vmware.com Delete Immediate true 2d10h
8. The fully configured harbor-data-values.yaml reads as follows.
Note: all comments can be stripped from the harbor-data-values.yaml file with yq:
[share-cluster-admin@share-cluster|default] [root@bootstrap ~]# yq -i eval '... comments=""' harbor-data-values.yaml
The resulting parameter file:
namespace: tanzu-system-registry
hostname: harbor.corp.local
port:
  https: 443
logLevel: info
tlsCertificate:
  tls.crt:
  tls.key:
  ca.crt:
enableContourHttpProxy: true
harborAdminPassword: Q4OORfFAJHt3gpNR
secretKey: ofwexVB6oe29Ke1v
database:
  password: tzufypsqSq599WRq
core:
  replicas: 1
  secret: QztlEZ7eswPsbKja
  xsrfKey: pteo7TFva2gPfsBvoIA8GN6IH0IU5RoO
jobservice:
  replicas: 1
  secret: hJQbRj9Jy7c5JPXC
registry:
  replicas: 1
  secret: RoIrWKEsPLCCD4DE
notary:
  enabled: true
trivy:
  enabled: true
  replicas: 1
  gitHubToken: ""
  skipUpdate: false
persistence:
  persistentVolumeClaim:
    registry:
      existingClaim: ""
      storageClass: "default"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 10Gi
    jobservice:
      existingClaim: ""
      storageClass: "default"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    database:
      existingClaim: ""
      storageClass: "default"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    redis:
      existingClaim: ""
      storageClass: "default"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 1Gi
    trivy:
      existingClaim: ""
      storageClass: "default"
      subPath: ""
      accessMode: ReadWriteOnce
      size: 5Gi
  imageChartStorage:
    disableredirect: false
    type: filesystem
    filesystem:
      rootdirectory: /storage
    azure:
      accountname: accountname
      accountkey: base64encodedaccountkey
      container: containername
      realm: core.windows.net
    gcs:
      bucket: bucketname
      encodedkey: base64-encoded-json-key-file
      rootdirectory: null
      chunksize: 5242880
    s3:
      region: us-west-1
      bucket: bucketname
      accesskey: null
      secretkey: null
      regionendpoint: null
      encrypt: false
      keyid: null
      secure: true
      v4auth: true
      chunksize: null
      rootdirectory: null
      storageclass: STANDARD
    swift:
      authurl: https://storage.myprovider.com/v3/auth
      username: username
      password: password
      container: containername
      region: null
      tenant: null
      tenantid: null
      domain: null
      domainid: null
      trustid: null
      insecureskipverify: null
      chunksize: null
      prefix: null
      secretkey: null
      accesskey: null
      authversion: null
      endpointtype: null
      tempurlcontainerkey: null
      tempurlmethods: null
    oss:
      accesskeyid: accesskeyid
      accesskeysecret: accesskeysecret
      region: regionname
      bucket: bucketname
      endpoint: null
      internal: null
      encrypt: null
      secure: null
      chunksize: null
      rootdirectory: null
proxy:
  httpProxy:
  httpsProxy:
  noProxy: 127.0.0.1,localhost,.local,.internal
pspNames: null
metrics:
  enabled: false
  core:
    path: /metrics
    port: 8001
  registry:
    path: /metrics
    port: 8001
  exporter:
    path: /metrics
    port: 8001
9. Install Harbor with the package command, supplying parameters from harbor-data-values.yaml:
[share-cluster-admin@share-cluster|default] [root@bootstrap ~]# tanzu package install harbor --package-name harbor.tanzu.vmware.com --version 2.2.3+vmware.1-tkg.2 --values-file harbor-data-values.yaml --namespace tanzu-system-registry --create-namespace
- Installing package 'harbor.tanzu.vmware.com'
| Creating namespace 'tanzu-system-registry'
| Getting package metadata for 'harbor.tanzu.vmware.com'
| Creating service account 'harbor-tanzu-system-registry-sa'
| Creating cluster admin role 'harbor-tanzu-system-registry-cluster-role'
| Creating cluster role binding 'harbor-tanzu-system-registry-cluster-rolebinding'
| Creating secret 'harbor-tanzu-system-registry-values'
- Creating package resource
/ Package install status: Reconciling
Added installed package 'harbor' in namespace 'tanzu-system-registry'
10. Verify that the harbor package installed successfully:
[share-cluster-admin@share-cluster|default] [root@bootstrap ~]# tanzu package installed get harbor -n tanzu-system-registry
\ Retrieving installation details for harbor...
NAME: harbor
PACKAGE-NAME: harbor.tanzu.vmware.com
PACKAGE-VERSION: 2.2.3+vmware.1-tkg.2
STATUS: Reconcile succeeded
CONDITIONS: [{ReconcileSucceeded True }]
USEFUL-ERROR-MESSAGE:
[share-cluster-admin@share-cluster|default] [root@bootstrap ~]# kubectl get apps -A
NAMESPACE NAME DESCRIPTION SINCE-DEPLOY AGE
cert-manager cert-manager Reconcile succeeded 84s 84m
default contour Reconcile succeeded 34s 76m
default external-dns Reconcile succeeded 25s 70m
tanzu-system-registry harbor Reconcile succeeded 22s 9m7s
tkg-system antrea Reconcile succeeded 5m24s 9h
tkg-system load-balancer-and-ingress-service Reconcile succeeded 4m33s 9h
tkg-system metrics-server Reconcile succeeded 4m53s 9h
tkg-system vsphere-cpi Reconcile succeeded 118s 9h
tkg-system vsphere-csi Reconcile succeeded 41s 9h
11. Check the Contour envoy load balancer IP:
[share-cluster-admin@share-cluster|default] [root@bootstrap config]# k get svc -n tanzu-system-ingress
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
contour ClusterIP 100.68.132.240 <none> 8001/TCP 24h
envoy LoadBalancer 100.67.220.80 192.168.110.66 80:32519/TCP,443:32081/TCP 24h
12. Check the Harbor ingress; the FQDN records are likewise updated on the DNS server automatically by ExternalDNS:
[share-cluster-admin@share-cluster|default] [root@bootstrap config]# kubectl get httpproxy -n tanzu-system-registry
NAME FQDN TLS SECRET STATUS STATUS DESCRIPTION
harbor-httpproxy harbor.corp.local harbor-tls valid Valid HTTPProxy
harbor-httpproxy-notary notary.harbor.corp.local harbor-tls valid Valid HTTPProxy
13. If ExternalDNS is installed and configured, the domain names are updated in DNS automatically and Harbor can be accessed directly over HTTPS; otherwise, update DNS manually or resolve the names through the hosts file.
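If ExternalDNS is not in use, the simplest workaround is a hosts-file entry that maps both Harbor FQDNs to the envoy EXTERNAL-IP shown earlier, for example (addresses and names from this lab):

```
192.168.110.66  harbor.corp.local  notary.harbor.corp.local
```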
Log in with username admin and the harborAdminPassword from harbor-data-values.yaml (Q4OORfFAJHt3gpNR in this lab).
3. Update the Harbor package
Components deployed through Tanzu Packages can be updated while the deployment is running.
If the Harbor package's configuration needs to change after deployment, update the Harbor settings in harbor-data-values.yaml and then update the deployed package as follows.
For example: increasing storage sizes, pod replica counts, and so on.
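For instance, scaling out the registry component and growing its volume only touches a few keys in harbor-data-values.yaml. The values below are illustrative; note that growing a PVC also requires the StorageClass to allow volume expansion, which the default class in this lab does:

```yaml
registry:
  replicas: 2          # was 1
persistence:
  persistentVolumeClaim:
    registry:
      size: 20Gi       # was 10Gi
```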
Run the update command:
tanzu package installed update harbor \
    --version 2.2.3+vmware.1-tkg.2 \
    --values-file harbor-data-values.yaml \
    --namespace tanzu-system-registry
The Harbor package will be reconfigured with the new value or values you added. It can take up to five minutes for kapp-controller to apply the changes.
4. Access Harbor with docker login
1. Obtain the Harbor CA certificate and save it to a file named ca.crt:
[share-cluster-admin@share-cluster|default] [root@bootstrap ~]# kubectl -n tanzu-system-registry get secret harbor-tls -o=jsonpath="{.data.ca\.crt}" | base64 -d
-----BEGIN CERTIFICATE-----
MIIDWzCCAkOgAwIBAgIRAPWvNIAnezw31fPPF8fD5fowDQYJKoZIhvcNAQELBQAw
LTEXMBUGA1UEChMOUHJvamVjdCBIYXJib3IxEjAQBgNVBAMTCUhhcmJvciBDQTAe
Fw0yMjAxMTExNDIzNDlaFw0zMjAxMDkxNDIzNDlaMC0xFzAVBgNVBAoTDlByb2pl
Y3QgSGFyYm9yMRIwEAYDVQQDEwlIYXJib3IgQ0EwggEiMA0GCSqGSIb3DQEBAQUA
A4IBDwAwggEKAoIBAQDvDae0WDGZA0jQz01mtLp9w5Iw+Wqor+1foraMEqVEaG7v
VfLH5Lmh39Lsd2k6OsTEkJu7BDQFoCytz7qaY3rbMp8WRDboTThFsGvz/hDiek07
5tSDa/VWDONNMCCLwLCSoiYsRy3J1/dUp8KcTwBHJ/nBnLnyEw2LBYFAxO3ycIv+
+LWWM+duMizZ6q+o5Pn/8dTSz5BL3kYocqdvFZVxuCBI05otbxFI+WBTf28jaPlC
pVQ20klzDKRleljJhYZ01S68f59ZCLaBvK3KcxSak4CJ5wzHUevf5zqjphImHQ3j
w96i3ZKCRczV2mkVaRchhiU9A9l7uJaYWe5/g+kjAgMBAAGjdjB0MA4GA1UdDwEB
/wQEAwICBDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDwYDVR0TAQH/
BAUwAwEB/zAdBgNVHQ4EFgQUD1dhBGgh9yKXIRnioTVpLkmPN8MwEwYDVR0RBAww
CoIIaGFyYm9yY2EwDQYJKoZIhvcNAQELBQADggEBAGS5V3tfU5gynw+E8xTrSGj7
6dREAGnaHJXXALPE7/EmBqgD8oGD3BqMIGrO5v0EpEsHqaZk1Regzp93gO0zMrYy
+nXYBE25d0dKr9DyTwEcSXzxFpLrtf1POA6m9sYkbov7MBiFdybFSpUy2zEEItV1
UjNKu7CsB4TcNliQ2Q4hPXzM/Nqu6D/YBXbccC9tzgWtj57cH8oH3XwZdqyPhxky
zL83g4Ad5V46mbz5vEJJsQ+NFhliMKUOWpZZIzgdGDy7lHIRicRHwErL+yesIWAq
x2VRRwFVYj1l9Ew5oD+vihvu/LthkoTxqpHVIZykxnvc3bnp57ld5H1SU/9aMeU=
-----END CERTIFICATE-----
2. On the bootstrap machine, create a directory named after the domain, harbor.corp.local, under /etc/docker/certs.d/ and place ca.crt in it. Harbor can then be accessed with docker login; the first login prompts for the admin username and password.
[root@bootstrap clusterconfigs]# docker login harbor.corp.local
Authenticating with existing credentials...
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
[root@bootstrap clusterconfigs]#
5. Inject the Harbor CA certificate into workload cluster nodes
Because Harbor uses a self-signed certificate, the self-signed CA certificate must be placed on every Kubernetes node before images can be pulled from Harbor. TKGm can inject the self-signed CA certificate into the nodes automatically via ytt when a workload cluster is deployed.
1. On the bootstrap machine, change to the /root/.config/tanzu/tkg/providers/infrastructure-vsphere/ytt directory and copy the CA certificate there as tkg-custom-ca.pem. In the same directory, create a vsphere-overlay.yaml file with the contents below; when a workload cluster is deployed, ytt injects the CA certificate tkg-custom-ca.pem into the cluster's nodes.
[workload03-admin@workload03|default] [root@bootstrap ytt]# pwd
/root/.config/tanzu/tkg/providers/infrastructure-vsphere/ytt
[workload03-admin@workload03|default] [root@bootstrap ytt]# ls
tkg-custom-ca.pem vsphere-overlay.yaml
[workload03-admin@workload03|default] [root@bootstrap ytt]# cat vsphere-overlay.yaml
#! Please add any overlays specific to vSphere provider under this file.

#@ load("@ytt:overlay", "overlay")
#@ load("@ytt:data", "data")

#! This ytt overlay adds additional custom CA certificates on TKG cluster nodes, so containerd and other tools trust these CA certificates.
#! It works when using Photon or Ubuntu as the TKG node template on all TKG infrastructure providers.

#! Trust your custom CA certificates on all Control Plane nodes.
#@overlay/match by=overlay.subset({"kind":"KubeadmControlPlane"})
---
spec:
  kubeadmConfigSpec:
    #@overlay/match missing_ok=True
    files:
      #@overlay/append
      - content: #@ data.read("tkg-custom-ca.pem")
        owner: root:root
        permissions: "0644"
        path: /etc/ssl/certs/tkg-custom-ca.pem
    #@overlay/match missing_ok=True
    preKubeadmCommands:
      #! For Photon OS
      #@overlay/append
      - '! which rehash_ca_certificates.sh 2>/dev/null || rehash_ca_certificates.sh'
      #! For Ubuntu
      #@overlay/append
      - '! which update-ca-certificates 2>/dev/null || (mv /etc/ssl/certs/tkg-custom-ca.pem /usr/local/share/ca-certificates/tkg-custom-ca.crt && update-ca-certificates)'

#! Trust your custom CA certificates on all worker nodes.
#@overlay/match by=overlay.subset({"kind":"KubeadmConfigTemplate"}), expects="1+"
---
spec:
  template:
    spec:
      #@overlay/match missing_ok=True
      files:
        #@overlay/append
        - content: #@ data.read("tkg-custom-ca.pem")
          owner: root:root
          permissions: "0644"
          path: /etc/ssl/certs/tkg-custom-ca.pem
      #@overlay/match missing_ok=True
      preKubeadmCommands:
        #! For Photon OS
        #@overlay/append
        - '! which rehash_ca_certificates.sh 2>/dev/null || rehash_ca_certificates.sh'
        #! For Ubuntu
        #@overlay/append
        - '! which update-ca-certificates 2>/dev/null || (mv /etc/ssl/certs/tkg-custom-ca.pem /usr/local/share/ca-certificates/tkg-custom-ca.crt && update-ca-certificates)'
The contents of tkg-custom-ca.pem:
[workload03-admin@workload03|default] [root@bootstrap ytt]# cat tkg-custom-ca.pem
-----BEGIN CERTIFICATE-----
MIIDWzCCAkOgAwIBAgIRAPWvNIAnezw31fPPF8fD5fowDQYJKoZIhvcNAQELBQAw
LTEXMBUGA1UEChMOUHJvamVjdCBIYXJib3IxEjAQBgNVBAMTCUhhcmJvciBDQTAe
Fw0yMjAxMTExNDIzNDlaFw0zMjAxMDkxNDIzNDlaMC0xFzAVBgNVBAoTDlByb2pl
Y3QgSGFyYm9yMRIwEAYDVQQDEwlIYXJib3IgQ0EwggEiMA0GCSqGSIb3DQEBAQUA
A4IBDwAwggEKAoIBAQDvDae0WDGZA0jQz01mtLp9w5Iw+Wqor+1foraMEqVEaG7v
VfLH5Lmh39Lsd2k6OsTEkJu7BDQFoCytz7qaY3rbMp8WRDboTThFsGvz/hDiek07
5tSDa/VWDONNMCCLwLCSoiYsRy3J1/dUp8KcTwBHJ/nBnLnyEw2LBYFAxO3ycIv+
+LWWM+duMizZ6q+o5Pn/8dTSz5BL3kYocqdvFZVxuCBI05otbxFI+WBTf28jaPlC
pVQ20klzDKRleljJhYZ01S68f59ZCLaBvK3KcxSak4CJ5wzHUevf5zqjphImHQ3j
w96i3ZKCRczV2mkVaRchhiU9A9l7uJaYWe5/g+kjAgMBAAGjdjB0MA4GA1UdDwEB
/wQEAwICBDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDwYDVR0TAQH/
BAUwAwEB/zAdBgNVHQ4EFgQUD1dhBGgh9yKXIRnioTVpLkmPN8MwEwYDVR0RBAww
CoIIaGFyYm9yY2EwDQYJKoZIhvcNAQELBQADggEBAGS5V3tfU5gynw+E8xTrSGj7
6dREAGnaHJXXALPE7/EmBqgD8oGD3BqMIGrO5v0EpEsHqaZk1Regzp93gO0zMrYy
+nXYBE25d0dKr9DyTwEcSXzxFpLrtf1POA6m9sYkbov7MBiFdybFSpUy2zEEItV1
UjNKu7CsB4TcNliQ2Q4hPXzM/Nqu6D/YBXbccC9tzgWtj57cH8oH3XwZdqyPhxky
zL83g4Ad5V46mbz5vEJJsQ+NFhliMKUOWpZZIzgdGDy7lHIRicRHwErL+yesIWAq
x2VRRwFVYj1l9Ew5oD+vihvu/LthkoTxqpHVIZykxnvc3bnp57ld5H1SU/9aMeU=
-----END CERTIFICATE-----
2. Deploy the workload cluster workload03.
After the cluster is deployed successfully, log in to a node to confirm that the Harbor CA certificate has been injected:
root@workload03-control-plane-kml2q [ ~ ]# ls -l /etc/ssl/certs/tkg-custom-ca.pem
-rw-r--r-- 1 root root 1224 Jan 11 16:01 /etc/ssl/certs/tkg-custom-ca.pem
3. Prepare an application manifest that references the image
harbor.corp.local/library/hello-kubernetes:1.5
to verify that the containerd runtime trusts Harbor's self-signed CA certificate and can pull the image.
apiVersion: v1
kind: Service
metadata:
  name: hello-kubernetes
spec:
  type: LoadBalancer
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: hello-kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-kubernetes
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-kubernetes
  template:
    metadata:
      labels:
        app: hello-kubernetes
    spec:
      containers:
        - name: hello-kubernetes
          image: harbor.corp.local/library/hello-kubernetes:1.5
          ports:
            - containerPort: 8080
          env:
            - name: MESSAGE
              value: yang lu just deployed a pod on a TKC!
4. Deploy the application; it deploys successfully:
[workload03-admin@workload03|default] [root@bootstrap new]# k apply -f tkghello.yaml
service/hello-kubernetes created
deployment.apps/hello-kubernetes created
[workload03-admin@workload03|default] [root@bootstrap new]# k get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hello-kubernetes LoadBalancer 100.67.178.78 192.168.110.69 80:30954/TCP 37s
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 23h
[workload03-admin@workload03|default] [root@bootstrap new]#
User-Managed Packages CLI参考
TKG components are managed as Tanzu Packages and are deployed, updated, and removed through the Tanzu CLI. Tanzu Packages is built on the kapp-controller tool from the Carvel project, exposed through the Tanzu CLI.
Tanzu Packages are divided into core packages and user-managed packages. Core packages are installed automatically when the TKGm management and workload clusters are deployed; user-managed packages can be installed selectively after cluster deployment, according to your plan.
In the Tanzu CLI, user-managed packages are grouped into package repositories. If a package repository containing user-managed packages is available in the target cluster, you can use the Tanzu CLI to install and manage any package in that repository.
With the Tanzu CLI, user-managed packages can be installed from the built-in tanzu-standard package repository or from repositories added to the target cluster. The tanzu-standard repository provides the Cert Manager, Contour, External DNS, Fluent Bit, Grafana, Harbor, Multus CNI, and Prometheus packages.
Package repository commands
Command                          Description
tanzu package repository list    Lists all package repositories.
tanzu package repository get     Shows the details of a package repository.
tanzu package repository add     Adds a package repository.
tanzu package repository update  Updates a package repository.
tanzu package repository delete  Removes a package repository.
Package management commands
The following commands list, install, update, and delete packages.
Command                         Description
tanzu package available list    Lists all available packages and package versions.
tanzu package available get     Shows the details of an available package.
tanzu package installed list    Lists all installed packages.
tanzu package installed get     Shows the details of an installed package.
tanzu package install           Installs a package.
tanzu package installed create  Same as tanzu package install.
tanzu package installed update  Updates a package.
tanzu package installed delete  Deletes a package.
(End of article)